

Category Archives: Machine Learning

Multicolored Mars: Researchers Use Machine Learning To Map Source of Ancient Martian Meteorites – Tech Times

Mars was struck by an asteroid between five and ten million years ago. The impact produced a huge crater and launched a fresh meteorite made of ancient Martian crust into space, which eventually plummeted to Earth in Africa.

Thanks to a supercomputer-powered technique that enables scientists to study the geology of other planets without leaving Earth, the meteorite's source was located.

(Photo: A. Lagain et al./Nature Commun.) By using a machine learning method on one of the fastest supercomputers in the Southern Hemisphere, located at the Pawsey Supercomputing Research Centre, the team was able to identify around 90 million impact craters. Kosta Servis, a senior data scientist at the Centre, contributed to the algorithm's development.

A global team of researchers discovered around 90 million impact craters on the Red Planet using a machine learning algorithm on one of the fastest supercomputers in the Southern Hemisphere, located at the Pawsey Supercomputing Research Centre, according to Nature's report.

The map was created by researchers looking into the origin of the Black Beauty meteorite, which was discovered in the Sahara Desert in 2011.

The Martian rocks that make up Black Beauty were created roughly 4.5 billion years ago, when the crusts of Earth and Mars were still developing, according to the study.

The researchers eventually determined the precise site of this impact after using the algorithm to eliminate candidate craters. The 10-kilometer-wide Karratha crater, according to the researchers, may serve as the focal point of a future Mars mission.

The technique underlying the discovery will also be used to locate billions of impact craters on the surfaces of Mercury and the Moon, as well as to determine the origin of other Martian meteorites. There have been over 300 Martian meteorites discovered on Earth thus far.

(Photo: NASA/Jet Propulsion Laboratory/Cornell University via Getty Images) MARS - JANUARY 6: In this handout released by NASA, angular and smooth surfaces of rocks are seen in an image taken by the panoramic camera on the Mars Exploration Rover Spirit January 6, 2003. The rover landed on Mars January 3 and sent its first high-resolution color image January 6.


The crater was given the name Karratha by researchers in honor of a Western Australian city that is home to some of the planet's oldest rocks. The team wants NASA to give the area around Karratha Crater top priority as a potential location for a future Mars landing.

The team analyzed thousands of high-resolution planetary photos from several Mars missions in order to identify the origin of the Martian rocks.

Dr. Anthony Lagain of the Space Science and Technology Centre at Curtin University served as the study's principal investigator, and co-authors included researchers from Paris-Saclay University, the Paris Observatory, the Museum of Natural History, the French National Centre for Scientific Research, the Félix Houphouët-Boigny University in Côte d'Ivoire, Northern Arizona University, and Rutgers University in the US.


This article is owned by Tech Times

Written by Joaquin Victor Tacla

2022 TECHTIMES.com All rights reserved. Do not reproduce without permission.

See the original post:
Multicolored Mars: Researchers Use Machine Learning To Map Source of Ancient Martian Meteorites - Tech Times

Posted in Machine Learning | Comments Off on Multicolored Mars: Researchers Use Machine Learning To Map Source of Ancient Martian Meteorites – Tech Times

The ABCs of AI, algorithms and machine learning – Marketplace

Advanced computer programs influence, and can even dictate, meaningful parts of our lives. Think of streaming services, credit scores, facial recognition software.

As this technology becomes more sophisticated and more pervasive, it's important to understand the basic terminology.

People often use "algorithm," "machine learning" and "artificial intelligence" interchangeably. There is some overlap, but they're not the same things.

We decided to call up a few experts to help us get a firm grasp on these concepts, starting with a basic definition of algorithm. The following is an edited transcript of the episode.

Melanie Mitchell, Davis professor of complexity at the Santa Fe Institute, offered a simple explanation of a computer algorithm.

"An algorithm is a set of steps for solving a problem or accomplishing a goal," she said.

The next step up is machine learning, which uses algorithms.

"Rather than a person programming in the rules, the system itself has learned," Mitchell said.

Speech recognition software, for example, uses data to learn which sounds combine to become words and sentences. This kind of machine learning is a key component of artificial intelligence.

"Artificial intelligence is basically capabilities of computers to mimic human cognitive functions," said Anjana Susarla, who teaches responsible AI at Michigan State University's Broad College of Business.

She said we should think of AI as an umbrella term.

"AI is much broader, all-encompassing, compared to only machine learning or algorithms," Susarla said.

That's why you might hear AI as a loose description for a range of things that show some level of intelligence, from software that examines the photos on your phone to sort out the ones with cats, to advanced spelunking robots that explore caves.

Here's another way to think of the differences among these tools: cooking.

Bethany Edmunds, professor and director of computing programs at Northeastern University, compares it to cooking.

She says an algorithm is basically a recipe: step-by-step instructions on how to prepare something, to solve the problem of being hungry.
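The recipe framing can be made concrete in a few lines of code. The function below is a hypothetical example, not from the episode: every step is written out in advance by a person, which is exactly what distinguishes a plain algorithm from machine learning.

```python
# A recipe-style algorithm: a person writes out every step in advance.
# Hypothetical example: total up a grocery bill with a fixed 8% tax.
def shopping_total(prices, quantities):
    """Sum price * quantity per item, then apply the tax and round."""
    subtotal = 0.0
    for price, qty in zip(prices, quantities):  # step 1: accumulate items
        subtotal += price * qty
    return round(subtotal * 1.08, 2)            # step 2: tax; step 3: round
```

Nothing here is learned from data; if the tax rate changes, a human has to edit the code.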

If you took the machine learning approach, you would show a computer the ingredients you have and what you want for the end result. Let's say, a cake.

"So maybe it would take every combination of every type of food and put them all together to try and replicate the cake that was provided for it," she said.

AI would turn the whole problem of being hungry over to the computer program, determining or even buying ingredients, choosing a recipe or creating a new one, just like a human would.
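To contrast with the recipe above, here is a minimal sketch of the machine-learning approach: instead of a person writing the rule, the program fits the rule from example inputs and outputs. The one-parameter model and the toy numbers are illustrative assumptions, not from the episode.

```python
# Machine learning in miniature: learn the rule y ≈ w * x from examples
# by least squares (w = Σxy / Σx²), rather than hand-coding the rule.
def fit_slope(xs, ys):
    """Return the slope w minimizing the squared error of w*x against y."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Show the program examples generated by the hidden rule y = 2x...
w = fit_slope([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
# ...and it recovers the slope on its own, with no rule programmed in.
```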

So why do these distinctions matter? Well, for one thing, these tools sometimes produce biased outcomes.

"It's really important to be able to articulate what those concerns are," Edmunds said. "So that you can really dissect where the problem is and how we go about solving it."

Because algorithms, machine learning and AI are pretty much baked into our lives at this point.

Columbia University's engineering school has a further explanation of artificial intelligence and machine learning, and it lists other tools besides machine learning that can be part of AI, like deep learning, neural networks, computer vision and natural language processing.

Over at the Massachusetts Institute of Technology, they point out that machine learning and AI are often used interchangeably because, these days, most AI includes some amount of machine learning. A piece from MIT's Sloan School of Management also gets into the different subcategories of machine learning: supervised, unsupervised and reinforcement learning, which works like trial and error with digital rewards. An example is teaching an autonomous vehicle to drive by letting the system know when it made the right decision, like not hitting a pedestrian.

That piece also points to a 2020 survey from Deloitte, which found that 67% of companies were already using machine learning, and 97% were planning to in the future.

IBM has a helpful graphic to explain the relationship among AI, machine learning, neural networks and deep learning, presenting them as Russian nesting dolls with the broad category of AI as the biggest one.

And finally, with so many businesses using these tools, the Federal Trade Commission has a blog laying out some of the consumer risks associated with AI and the agency's expectations of how companies should deploy it.

Excerpt from:
The ABCs of AI, algorithms and machine learning - Marketplace


Global Machine Learning Market is Expected to Grow at a CAGR of 39.2 % by 2028 – Digital Journal

According to the latest research by SkyQuest Technology, the Global Machine Learning Market was valued at US$ 16.2 billion in 2021, and it is expected to reach a market size of US$ 164.05 billion by 2028, at a CAGR of 39.2% over the forecast period 2022-2028. The research provides up-to-date Machine Learning Market analysis of the current market landscape, latest trends, drivers, and overall market environment.

Machine learning (ML), a type of artificial intelligence (AI), lets software systems forecast outcomes more accurately without being explicitly programmed to do so. Machine learning algorithms use historical data as input to anticipate new output values. As organizations adopt more advanced security frameworks, the global machine learning market is anticipated to grow as machine learning becomes a prominent trend in security analytics. Due to the massive amount of data being generated and communicated over several networks, cyber professionals struggle considerably to identify and assess potential cyber threats and assaults.

Machine-learning algorithms can assist businesses and security teams in anticipating, detecting, and recognising cyber-attacks more quickly as these risks become more widespread and sophisticated. For example, supply chain attacks increased by 42% in the first quarter of 2021 in the US, affecting up to 7,000,000 people. In another example, AT&T and IBM claim that the promise of edge computing and 5G wireless networking for the digital revolution will be proven: they have created virtual worlds that, when paired with IBM hybrid cloud and AI technologies, allow business clients to truly experience the possibilities of an AT&T connection.

Computer vision is a cutting-edge technique that combines machine learning and deep learning for medical imaging diagnosis. It has been adopted by the Microsoft InnerEye programme, which focuses on image diagnostic tools for image analysis. In another example, using minute samples of linguistic data (obtained via clinical verbal cognition tests), an AI model created by a team of researchers from IBM and Pfizer can forecast the eventual onset of Alzheimer's disease in healthy persons with 71 percent accuracy.

Read Market Research Report, Global Machine Learning Market by Component (Solutions, and Services), Enterprise Size (SMEs and Large Enterprises), Deployment (Cloud, On-Premise), End-User [Healthcare, Retail, IT and Telecommunications, Banking, Financial Services and Insurance (BFSI), Automotive & Transportation, Advertising & Media, Manufacturing, Others (Energy & Utilities, etc.)], and Region, Forecast and Analysis 2022-2028, by SkyQuest

Get Sample PDF : https://skyquestt.com/sample-request/machine-learning-market

The large enterprises segment dominated the machine learning market in 2021, because data science and artificial intelligence technologies are being used more often to incorporate quantitative insights into business operations. For instance, under a contract between Pitney Bowes and IBM, IBM will offer managed infrastructure, IT automation, and machine learning services to help Pitney Bowes convert to and adopt hybrid cloud computing to support its global business strategy and goals.

The small and midsized enterprises segment is expected to grow considerably over the forecast period. AI and ML are projected to be the main technologies allowing SMEs to reduce ICT investments and access digital resources. For instance, the IPwe Platform, IPwe Registry, and Global Patent Marketplace are reportedly already in use by a number of small and medium-sized enterprises (SMEs) and other organizations.

The healthcare sector had the biggest share of the global machine learning market in 2021, owing to rapid research and development by the industry's leading market players, as well as partnerships formed in an effort to increase market share. For instance, under the terms of the two businesses' signed definitive agreement, Francisco Partners will buy IBM's healthcare data and analytics assets that are presently part of the Watson Health business. Francisco Partners is an established worldwide investment company with a focus on working with IT startups. It acquired a wide range of assets, including Health Insights, MarketScan, Clinical Development, Social Program Management, Micromedex, and imaging software services.

The prominent market players are constantly adopting various innovation and growth strategies to capture more market share. The key market players are IBM Corporation, SAP SE, Oracle Corporation, Hewlett Packard Enterprise Company, Microsoft Corporation, Amazon Inc., Intel Corporation, Fair Isaac Corporation, SAS Institute Inc., BigML, Inc., among others.

The report published by SkyQuest Technology Consulting provides in-depth qualitative insights, historical data, and verifiable projections about Machine Learning Market Revenue. The projections featured in the report have been derived using proven research methodologies and assumptions.

Speak With Our Analyst : https://skyquestt.com/speak-with-analyst/machine-learning-market

Report Findings

What does this Report Deliver?

SkyQuest has Segmented the Global Machine Learning Market based on Component, Enterprise Size, Deployment, End-User, and Region:

Read Full Report : https://skyquestt.com/report/machine-learning-market

Key Players in the Global Machine Learning Market

About Us: SkyQuest Technology Group is a Global Market Intelligence, Innovation Management & Commercialization organization that connects innovation to new markets, networks & collaborators for achieving Sustainable Development Goals.

Find Insightful Blogs/Case Studies on Our Website: Market Research Case Studies

Go here to see the original:
Global Machine Learning Market is Expected to Grow at a CAGR of 39.2 % by 2028 - Digital Journal


Covision Quality joins NVIDIA Metropolis to scale its industrial visual inspection software leveraging unsupervised machine learning – GlobeNewswire

BRESSANONE, Italy, July 25, 2022 (GLOBE NEWSWIRE) -- Covision Quality, a leading provider of visual inspection software based on unsupervised machine learning technology, today announced it has joined NVIDIA Metropolis, a partner program, application framework, and set of developer tools that bring to market a new generation of vision AI applications that make the world's most important spaces and operations safer and more efficient.

Covision Quality's interface from the perspective of the end-of-line quality control operator. In this case, the red border on the image of the manufactured part indicates that the part is not OK and thus cannot be sent to the end customer and needs to be discarded.

Thanks to its unsupervised machine learning technology, the Covision Quality software can be trained in an hour on average and reduces pseudo-scrap rates by up to 90% for its customers. Workstations deployed at customer sites harness the power of NVIDIA RTX A5000 GPU-accelerated computing, which allows the software to run in real time, processing images, inspecting components, and communicating decisions to the PLC. In addition, Covision Quality leverages NVIDIA Metropolis, the TensorRT SDK, and CUDA software.

NVIDIA Metropolis makes it easier and more cost effective for enterprises, governments, and integration partners to use world-class AI-enabled solutions to improve critical operational efficiency and solve safety problems. The NVIDIA Metropolis ecosystem contains a large and growing breadth of members who are investing in the most advanced AI techniques and most efficient deployment platforms, and using an enterprise-class approach to their solutions. Members have the opportunity to gain early access to NVIDIA platform updates to further enhance and accelerate their AI application development efforts. The program also offers the opportunity for members to collaborate with industry-leading experts and other AI-driven organizations.

Covision Quality is a spin-off of Covision Lab, a leading European computer vision and machine learning application center and company builder. Covision Quality licenses its visual inspection software product to manufacturing companies in several industries, ranging from metal manufacturing to packaging. Customers of Covision Quality include GKN Sinter Metals, a global market leader for sinter metal components, and Aluflexpack Group, a leading international manufacturer of flexible packaging.

Franz Tschimben, CEO of Covision Quality, sees an important value-add in joining the NVIDIA Metropolis program: "Joining NVIDIA Metropolis marks yet another milestone in our company's young history and in our relationship with NVIDIA, which started with our company joining the NVIDIA Inception program last year. It is a testament to the great work the team is doing in providing a scalable visual inspection software product to our customers, drastically reducing time to deployment of visual inspection systems and pseudo-scrap rates. We expect that NVIDIA Metropolis, which sits at the heart of many developments that are happening in the industry today, will give us a boost in our go-to-market efforts and support us in connecting to customers and system integrators."

About Covision Quality: Covision Quality licenses its visual inspection software product to manufacturing companies in several industries, ranging from metal manufacturing to packaging. Thanks to its unsupervised machine learning technology, the Covision Quality software can be trained in an hour on average and reduces pseudo-scrap rates for its customers by up to 90%. Covision Quality is the recipient of the Cowen Startup award at Automate Show 2022 in Detroit, United States.

Covision Quality is a spin-off of Covision Lab, a leading European computer vision and machine learning application center and company builder. For more information, visit http://www.covisionquality.com

Contact information:
Covision Quality
https://www.covisionquality.com/en
39042 Bressanone, Italy
+39 333 4421494
info@covisionlab.com

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/19998b6c-83b8-41df-8e60-c5d558e3e408

Read this article:
Covision Quality joins NVIDIA Metropolis to scale its industrial visual inspection software leveraging unsupervised machine learning - GlobeNewswire


Explained: How to tell if artificial intelligence is working the way we want it to – MIT News

About a decade ago, deep-learning models started achieving superhuman results on all sorts of tasks, from beating world-champion board game players to outperforming doctors at diagnosing breast cancer.

These powerful deep-learning models are usually based on artificial neural networks, which were first proposed in the 1940s and have become a popular type of machine learning. A computer learns to process data using layers of interconnected nodes, or neurons, that mimic the human brain.

As the field of machine learning has grown, artificial neural networks have grown along with it.

Deep-learning models are now often composed of millions or billions of interconnected nodes in many layers that are trained to perform detection or classification tasks using vast amounts of data. But because the models are so enormously complex, even the researchers who design them don't fully understand how they work. This makes it hard to know whether they are working correctly.

For instance, maybe a model designed to help physicians diagnose patients correctly predicted that a skin lesion was cancerous, but it did so by focusing on an unrelated mark that happens to frequently occur when there is cancerous tissue in a photo, rather than on the cancerous tissue itself. This is known as a spurious correlation. The model gets the prediction right, but it does so for the wrong reason. In a real clinical setting where the mark does not appear on cancer-positive images, it could result in missed diagnoses.

With so much uncertainty swirling around these so-called black-box models, how can one unravel what's going on inside the box?

This puzzle has led to a new and rapidly growing area of study in which researchers develop and test explanation methods (also called interpretability methods) that seek to shed some light on how black-box machine-learning models make predictions.

What are explanation methods?

At their most basic level, explanation methods are either global or local. A local explanation method focuses on explaining how the model made one specific prediction, while global explanations seek to describe the overall behavior of an entire model. This is often done by developing a separate, simpler (and hopefully understandable) model that mimics the larger, black-box model.

But because deep learning models work in fundamentally complex and nonlinear ways, developing an effective global explanation model is particularly challenging. This has led researchers to turn much of their recent focus onto local explanation methods instead, explains Yilun Zhou, a graduate student in the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL) who studies models, algorithms, and evaluations in interpretable machine learning.

The most popular types of local explanation methods fall into three broad categories.

The first and most widely used type of explanation method is known as feature attribution. Feature attribution methods show which features were most important when the model made a specific decision.

Features are the input variables that are fed to a machine-learning model and used in its prediction. When the data are tabular, features are drawn from the columns in a dataset (they are transformed using a variety of techniques so the model can process the raw data). For image-processing tasks, on the other hand, every pixel in an image is a feature. If a model predicts that an X-ray image shows cancer, for instance, the feature attribution method would highlight the pixels in that specific X-ray that were most important for the models prediction.

Essentially, feature attribution methods show what the model pays the most attention to when it makes a prediction.

"Using this feature attribution explanation, you can check to see whether a spurious correlation is a concern. For instance, it will show if the pixels in a watermark are highlighted or if the pixels in an actual tumor are highlighted," says Zhou.
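One simple way to compute a feature attribution, shown here as an illustrative sketch rather than any specific published method, is occlusion: zero out each feature in turn and record how much the model's output moves.

```python
# Occlusion-based feature attribution (illustrative sketch): the score of
# a feature is how much the prediction changes when that feature is zeroed.
def occlusion_attribution(model, x):
    base = model(x)
    scores = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = 0.0                      # knock out feature i
        scores.append(abs(base - model(occluded)))
    return scores

# Toy linear "model" in which feature 1 dominates the prediction.
model = lambda x: 0.1 * x[0] + 5.0 * x[1] + 0.0 * x[2]
scores = occlusion_attribution(model, [1.0, 1.0, 1.0])
# scores rank feature 1 highest, matching the model's true dependence
```

On an image, the same idea applied per pixel would highlight a watermark if the model were leaning on that spurious correlation.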

A second type of explanation method is known as a counterfactual explanation. Given an input and a model's prediction, these methods show how to change that input so it falls into another class. For instance, if a machine-learning model predicts that a borrower would be denied a loan, the counterfactual explanation shows what factors need to change so her loan application is accepted. Perhaps her credit score or income, both features used in the model's prediction, need to be higher for her to be approved.

"The good thing about this explanation method is it tells you exactly how you need to change the input to flip the decision, which could have practical usage. For someone who is applying for a mortgage and didn't get it, this explanation would tell them what they need to do to achieve their desired outcome," he says.
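The loan example can be sketched as a tiny search. The decision rule below is hypothetical, not the article's model: nudge one feature until the decision flips, and report the changed input as the counterfactual.

```python
# Counterfactual-explanation sketch: find the smallest increase in one
# feature that flips a binary decision from "deny" (0) to "approve" (1).
def counterfactual(model, x, feature, step, max_steps=1000):
    x = list(x)
    for _ in range(max_steps):
        if model(x) == 1:
            return x          # the modified input that gets approved
        x[feature] += step    # nudge the feature and try again
    return None               # no flip found within the search budget

# Toy rule: approve a loan when the credit score reaches 650.
approve = lambda x: 1 if x[0] >= 650 else 0
cf = counterfactual(approve, [620.0], feature=0, step=10.0)
# cf reports the credit score at which the decision would flip
```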

The third category of explanation methods is known as sample importance explanations. Unlike the others, this method requires access to the data that were used to train the model.

A sample importance explanation will show which training sample a model relied on most when it made a specific prediction; ideally, this is the most similar sample to the input data. This type of explanation is particularly useful if one observes a seemingly irrational prediction. There may have been a data entry error that affected a particular sample that was used to train the model. With this knowledge, one could fix that sample and retrain the model to improve its accuracy.
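For one simple model family the idea can be shown exactly: in a 1-nearest-neighbor classifier, the training sample a prediction relied on most is simply the closest one. This is a toy illustration, not the method discussed in the article.

```python
# Sample-importance sketch: for a 1-nearest-neighbor model, the most
# influential training sample for a query is the nearest one, so returning
# its index is a complete (if trivial) sample-importance explanation.
def most_influential(train_xs, query):
    return min(range(len(train_xs)), key=lambda i: abs(train_xs[i] - query))

idx = most_influential([0.0, 4.0, 10.0], query=3.5)  # nearest sample is 4.0
```

If the sample at that index turned out to contain a data entry error, fixing it and retraining is exactly the repair the paragraph above describes.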

How are explanation methods used?

One motivation for developing these explanations is to perform quality assurance and debug the model. With more understanding of how features impact a model's decision, for instance, one could identify that a model is working incorrectly and intervene to fix the problem, or toss the model out and start over.

Another, more recent, area of research is exploring the use of machine-learning models to discover scientific patterns that humans haven't uncovered before. For instance, a cancer-diagnosing model that outperforms clinicians "could be faulty, or it could actually be picking up on some hidden patterns in an X-ray image that represent an early pathological pathway for cancer that were either unknown to human doctors or thought to be irrelevant," Zhou says.

It's still very early days for that area of research, however.

Words of warning

While explanation methods can sometimes be useful for machine-learning practitioners when they are trying to catch bugs in their models or understand the inner workings of a system, end-users should proceed with caution when trying to use them in practice, says Marzyeh Ghassemi, an assistant professor and head of the Healthy ML Group in CSAIL.

As machine learning has been adopted in more disciplines, from health care to education, explanation methods are being used to help decision makers better understand a model's predictions so they know when to trust the model and use its guidance in practice. But Ghassemi warns against using these methods in that way.

"We have found that explanations make people, both experts and nonexperts, overconfident in the ability or the advice of a specific recommendation system. I think it is very important for humans not to turn off that internal circuitry asking, let me question the advice that I am given," she says.

Scientists know explanations make people overconfident based on other recent work, she adds, citing some recent studies by Microsoft researchers.

Far from a silver bullet, explanation methods have their share of problems. For one, Ghassemi's recent research has shown that explanation methods can perpetuate biases and lead to worse outcomes for people from disadvantaged groups.

Another pitfall of explanation methods is that it is often impossible to tell if the explanation method is correct in the first place. One would need to compare the explanations to the actual model, but since the user doesn't know how the model works, this is circular logic, Zhou says.

He and other researchers are working on improving explanation methods so they are more faithful to the actual model's predictions, but Zhou cautions that "even the best explanation should be taken with a grain of salt."

"In addition, people generally perceive these models to be human-like decision makers, and we are prone to overgeneralization. We need to calm people down and hold them back to really make sure that the generalized model understanding they build from these local explanations is balanced," he adds.

Zhou's most recent research seeks to do just that.

What's next for machine-learning explanation methods?

Rather than focusing on providing explanations, Ghassemi argues that more effort needs to come from the research community to study how information is presented to decision makers so they understand it, and more regulation needs to be put in place to ensure machine-learning models are used responsibly in practice. Better explanation methods alone aren't the answer.

"I have been excited to see that there is a lot more recognition, even in industry, that we can't just take this information and make a pretty dashboard and assume people will perform better with that. You need to have measurable improvements in action, and I'm hoping that leads to real guidelines about improving the way we display information in these deeply technical fields, like medicine," she says.

And in addition to new work focused on improving explanations, Zhou expects to see more research related to explanation methods for specific use cases, such as model debugging, scientific discovery, fairness auditing, and safety assurance. By identifying fine-grained characteristics of explanation methods and the requirements of different use cases, researchers could establish a theory that would match explanations with specific scenarios, which could help overcome some of the pitfalls that come from using them in real-world scenarios.

Link:
Explained: How to tell if artificial intelligence is working the way we want it to - MIT News


Federated learning uses the data right on our devices – GCN.com

An approach called federated learning trains machine learning models on devices like smartphones and laptops, rather than requiring the transfer of private data to central servers.

The biggest benchmarking data set to date for a machine learning technique designed with data privacy in mind is now available open source.

"By training in situ on data where it is generated, we can train on larger real-world data," explains Fan Lai, a doctoral student in computer science and engineering at the University of Michigan, who presents the FedScale training environment at the International Conference on Machine Learning this week. A paper on the work is available on arXiv.

"This also allows us to mitigate privacy risks and high communication and storage costs associated with collecting the raw data from end-user devices into the cloud," Lai says.

Still a new technology, federated learning relies on an algorithm that serves as a centralized coordinator. It delivers the model to the devices, trains it locally on the relevant user data, and then brings each partially trained model back and uses them to generate a final global model.
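That coordinator loop is essentially federated averaging. The sketch below is a deliberately minimal illustration (a one-parameter model and two simulated devices), not FedScale's implementation: each device takes a gradient step on data that never leaves it, and the coordinator only ever sees the resulting model parameters.

```python
# Minimal federated-averaging sketch for the model y ≈ w * x.
def local_update(w, data, lr=0.1):
    """One gradient-descent step on this device's private (x, y) pairs."""
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def federated_round(w_global, device_datasets):
    """Send the model out, train locally, average the returned models."""
    local_models = [local_update(w_global, d) for d in device_datasets]
    return sum(local_models) / len(local_models)

# Two devices whose private data both follow the hidden rule y = 2x.
devices = [[(1.0, 2.0)], [(2.0, 4.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, devices)  # w converges toward 2.0
```

Only the parameter `w` crosses the network in each round; the raw (x, y) pairs stay on their devices, which is the privacy property the article describes.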

For a number of applications, this workflow provides an added data privacy and security safeguard. Messaging apps, health care data, personal documents, and other sensitive but useful training materials can improve models without fear of data center vulnerabilities.

In addition to protecting privacy, federated learning could make model training more resource-efficient by cutting down and sometimes eliminating big data transfers, but it faces several challenges before it can be widely used. Training across multiple devices means that there are no guarantees about the computing resources available, and uncertainties like user connection speeds and device specs lead to a pool of data options with varying quality.

"Federated learning is growing rapidly as a research area," says Mosharaf Chowdhury, associate professor of computer science and engineering. "But most of the work makes use of a handful of data sets, which are very small and do not represent many aspects of federated learning."

And this is where FedScale comes in. The platform can simulate the behavior of millions of user devices on a few GPUs and CPUs, enabling developers of machine learning models to explore how their federated learning program will perform without the need for large-scale deployment. It serves a variety of popular learning tasks, including image classification, object detection, language modeling, speech recognition, and machine translation.

"Anything that uses machine learning on end-user data could be federated," Chowdhury says. "Applications should be able to learn and improve how they provide their services without actually recording everything their users do."

The authors specify several conditions that must be accounted for to realistically mimic the federated learning experience: heterogeneity of data, heterogeneity of devices, and heterogeneous connectivity and availability conditions, all with the ability to operate at multiple scales on a broad variety of machine learning tasks. FedScale's data sets are the largest released to date that cater specifically to these challenges in federated learning, according to Chowdhury.

"Over the course of the last couple years, we have collected dozens of data sets. The raw data are mostly publicly available, but hard to use because they are in various sources and formats," Lai says. "We are continuously working on supporting large-scale on-device deployment, as well."

The FedScale team has also launched a leaderboard to promote the most successful federated learning solutions trained on the university's system.

The National Science Foundation and Cisco supported the work.

This article was originally published in Futurity. It has been republished under the Attribution 4.0 International license.

Read this article:
Federated learning uses the data right on our devices - GCN.com
