
Category Archives: Machine Learning

SAP Makes Support Experience Even Smarter With ML and AI – AiThority

SAP SE announced several updates, including the Schedule a Manager and Ask an Expert Peer services, to its Next-Generation Support approach, which is focused on the customer support experience and enabling customer success. Based on artificial intelligence (AI) and machine learning technologies, SAP has further developed existing functionalities with new, automated capabilities such as the Incident Solution Matching service and automatic translation.

"When it comes to customer support, we've seen great success in flipping the customer engagement model by leveraging AI and machine learning technologies across our product support functionalities and solutions," said Andreas Heckmann, head of Customer Solution Support and Innovation and executive vice president, SAP. "To simplify and enhance the customer experience through our award-winning support channels, we're making huge steps toward our goal of meeting customers' needs by anticipating what they may need before it even occurs."

AI and machine learning technologies are key to improving and simplifying the customer support experience. They continue to play an important role in expanding Next-Generation Support to help SAP deliver maximum business outcomes for customers. SAP has expanded its offerings by adding new features to existing services.

Customers expect their issues to be resolved quickly, and SAP strives toward a consistent line of communication across all support channels, including real-time options. SAP continues to improve, innovate and extend live support for technical issues by connecting directly with customers to provide a personal customer experience. Building on top of live support services such as Expert Chat and Schedule an Expert, SAP has made significant strides in upgrading its real-time support channels. For example, it now offers the Schedule a Manager service and a peer-to-peer collaboration channel through the Ask an Expert Peer service.

By continuing to invest in AI and machine learning-based technologies, SAP enables more efficient support processes for customers while providing the foundation for predictive support functionalities and superior customer support experiences.

Customers can learn more about the Next-Generation Support approach through the Product Support Accreditation program, available to SAP customers and partners at no additional cost. The program empowers customers to get the best out of SAP's product support tools and the Next-Generation Support approach.

Read more:
SAP Makes Support Experience Even Smarter With ML and AI - AiThority

Google Engineers ‘Mutate’ AI to Make It Evolve Systems Faster Than We Can Code Them – ScienceAlert

Much of the work undertaken by artificial intelligence involves a training process known as machine learning, where AI gets better at a task, such as recognising a cat or mapping a route, the more it does it. Now that same technique is being used to create new AI systems, without any human intervention.

For years, engineers at Google have been working on a freakishly smart machine learning system known as the AutoML system (or automatic machine learning system), which is already capable of creating AI that outperforms anything we've made.

Now, researchers have tweaked it to incorporate concepts of Darwinian evolution and shown it can build AI programs that continue to improve upon themselves faster than they would if humans were doing the coding.

The new system is called AutoML-Zero, and although it may sound a little alarming, it could lead to the rapid development of smarter systems - for example, neural networks designed to more accurately mimic the human brain with multiple layers and weightings, something human coders have struggled with.

"It is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks," write the researchers in their pre-print paper. "We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space."

The original AutoML system is intended to make it easier for apps to leverage machine learning, and already includes plenty of automated features itself, but AutoML-Zero takes the required amount of human input way down.

Using a simple three-step process - setup, predict and learn - it can be thought of as machine learning from scratch.

The system starts off with a selection of 100 algorithms made by randomly combining simple mathematical operations. A sophisticated trial-and-error process then identifies the best performers, which are retained - with some tweaks - for another round of trials. In other words, the neural network is mutating as it goes.

When new code is produced, it's tested on AI tasks - like spotting the difference between a picture of a truck and a picture of a dog - and the best-performing algorithms are then kept for future iteration. Like survival of the fittest.
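
To make that concrete, here is a minimal sketch of the idea: a candidate "algorithm" is just three instruction lists (setup, predict and learn) operating on a small memory of numbers, assembled from basic math operations and scored on a toy task. The operation set, memory layout and task below are illustrative assumptions, not the operations or search procedure used in the Google paper.

```python
# Minimal sketch of the AutoML-Zero idea: a candidate "algorithm" is three
# instruction lists (setup, predict, learn) over a small scalar memory,
# built from basic math ops. Everything here is illustrative, not Google's code.
import random

OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def random_instruction(n_slots):
    """One instruction: dest = op(src1, src2) over numbered memory slots."""
    op = random.choice(list(OPS))
    dst, a, b = random.sample(range(n_slots), 3)
    return (op, dst, a, b)

def random_program(n_slots=6, length=4):
    return {fn: [random_instruction(n_slots) for _ in range(length)]
            for fn in ("setup", "predict", "learn")}

def run(instructions, mem):
    for op, dst, a, b in instructions:
        mem[dst] = OPS[op](mem[a], mem[b])

def fitness(program, data, n_slots=6):
    """Accuracy on a toy task (predict the sign of x). Slot 0 holds the input,
    slot 1 the prediction, and slot 2 the label, which only 'learn' sees."""
    mem = [0.0] * n_slots
    run(program["setup"], mem)
    correct = 0
    for x, y in data:
        mem[0] = x
        run(program["predict"], mem)
        correct += (mem[1] > 0) == (y > 0)
        mem[2] = y
        run(program["learn"], mem)
    return correct / len(data)

data = [(x, 1.0 if x > 0 else -1.0)
        for x in (random.uniform(-1.0, 1.0) for _ in range(200))]
population = [random_program() for _ in range(100)]
best = max(population, key=lambda p: fitness(p, data))
print(f"best of 100 random programs: {fitness(best, data):.0%} accuracy on the toy task")
```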

And it's fast too: the researchers reckon up to 10,000 possible algorithms can be searched through per second per processor (the more computer processors available for the task, the quicker it can work).

Eventually, this should see artificial intelligence systems become more widely used, and easier to access for programmers with no AI expertise. It might even help us eradicate human bias from AI, because humans are barely involved.

Work to improve AutoML-Zero continues, with the hope that it'll eventually be able to spit out algorithms that mere human programmers would never have thought of. Right now it's only capable of producing simple AI systems, but the researchers think the complexity can be scaled up rather rapidly.

"While most people were taking baby steps, [the researchers] took a giant leap into the unknown," computer scientist Risto Miikkulainen from the University of Texas, Austin, who was not involved in the work, told Edd Gent at Science. "This is one of those papers that could launch a lot of future research."

The research paper has yet to be published in a peer-reviewed journal, but can be viewed online at arXiv.org.

The rest is here:
Google Engineers 'Mutate' AI to Make It Evolve Systems Faster Than We Can Code Them - ScienceAlert

Artificial Intelligence That Can Evolve on Its Own Is Being Tested by Google Scientists – Newsweek

Computer scientists working for a high-tech division of Google are testing how machine learning algorithms can be created from scratch, then evolve naturally, based on simple math.

Experts behind Google's AutoML suite of artificial intelligence tools have now showcased fresh research which suggests the existing software could potentially be updated to "automatically discover" completely unknown algorithms while also reducing human bias during the data input process.

According to ScienceMag, the software, known as AutoML-Zero, resembles the process of evolution, with code improving every generation with little human interaction.

Machine learning tools are "trained" to find patterns in vast amounts of data while automating such processes and constantly being refined based on past experience.

But researchers say this comes with drawbacks that AutoML-Zero aims to fix. Namely, the introduction of bias.

"Human-designed components bias the search results in favor of human-designed algorithms, possibly reducing the innovation potential of AutoML," their team's paper states. "Innovation is also limited by having fewer options: you cannot discover what you cannot search for."

The analysis, which was published last month on arXiv, is titled "Evolving Machine Learning Algorithms From Scratch" and is credited to a team working in Google's Brain division.

"The nice thing about this kind of AI is that it can be left to its own devices without any pre-defined parameters, and is able to plug away 24/7 working on developing new algorithms," Ray Walsh, a computer expert and digital researcher at ProPrivacy, told Newsweek.

As noted by ScienceMag, AutoML-Zero is designed to create a population of 100 "candidate algorithms" by combining basic random math, then testing the results on simple tasks such as image differentiation. The best performing algorithms then "evolve" by randomly changing their code.

The results, which will be variants of the most successful algorithms, then get added to the general population, as older and less successful algorithms get left behind, and the process continues to repeat. The network grows significantly, in turn giving the system more natural algorithms to work with.
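
The "evolve by randomly changing their code" step can be pictured as a simple mutation operator: insert, delete or alter a single instruction in one of a candidate's component functions. The instruction format and edit types in this sketch are illustrative assumptions, not the paper's actual mutation rules.

```python
# Sketch of a code-mutation step: produce a child that differs from its parent
# by one random edit to one of its component functions. Illustrative only.
import copy
import random

OPS = ["add", "sub", "mul"]

def random_instruction(n_slots=6):
    """dest = op(src1, src2) over numbered memory slots (assumed format)."""
    op = random.choice(OPS)
    dst, a, b = random.sample(range(n_slots), 3)
    return (op, dst, a, b)

def mutate(program, n_slots=6):
    """Return a copy of the parent with one instruction inserted, deleted or replaced."""
    child = copy.deepcopy(program)
    instructions = child[random.choice(["setup", "predict", "learn"])]
    edit = random.choice(["insert", "delete", "alter"])
    if edit == "insert" or not instructions:
        instructions.insert(random.randrange(len(instructions) + 1),
                            random_instruction(n_slots))
    elif edit == "delete":
        instructions.pop(random.randrange(len(instructions)))
    else:  # alter: replace one instruction wholesale
        instructions[random.randrange(len(instructions))] = random_instruction(n_slots)
    return child

parent = {"setup": [random_instruction()],
          "predict": [random_instruction(), random_instruction()],
          "learn": [random_instruction()]}
print("parent:", parent)
print("child: ", mutate(parent))
```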

Haran Jackson, the chief technology officer (CTO) at Techspert, who has a PhD in Computing from the University of Cambridge, told Newsweek that AutoML tools are typically used to "identify and extract" the most useful features from datasets, and that this approach is a welcome development.

"As exciting as AutoML is, it is restricted to finding top-performing algorithms out of the, admittedly large, assortment of algorithms that we already know of," he said.

"There is a sense amongst many members of the community that the most impressive feats of artificial intelligence will only be achieved with the invention of new algorithms that are fundamentally different to those that we as a species have so far devised.

"This is what makes the aforementioned paper so interesting. It presents a method by which we can automatically construct and test completely novel machine learning algorithms."

Jackson, too, said the approach taken was similar to the theory of evolution first proposed by Charles Darwin, noting how the Google team was able to induce "mutations" into the set of algorithms.

"The mutated algorithms that did a better job of solving real-world problems were kept alive, with the poorly-performing ones being discarded," he elaborated.

"This was done repeatedly, until a set of high-performing algorithms was found. One intriguing aspect of the study is that this process 'rediscovered' some of the neural network algorithms that we already know and use. It's extremely exciting to see if it can turn up any algorithms that we haven't even thought of yet, the impact of which to our daily lives may be enormous." Google has been contacted for comment.

The development of AutoML was previously praised by Alphabet's CEO Sundar Pichai, who said it had been used to improve an algorithm that could detect the spread of breast cancer to adjacent lymph nodes. "It's inspiring to see how AI is starting to bear fruit," he wrote in a 2018 blog post.

The Google Brain team members who collaborated on the paper said the concepts in the most recent research were a solid starting point, but stressed that the project is far from over.

"Starting from empty component functions and using only basic mathematical operations, we evolved linear regressors, neural networks, gradient descent... multiplicative interactions. These results are promising, but there is still much work to be done," the scientists' preprint paper noted.

Walsh told Newsweek: "The developers of AutoML-Zero believe they have produced a system that has the ability to output algorithms human developers may never have thought of.

"According to the developers, due to its lack of human intervention AutoML-Zero has the potential to produce algorithms that are more free from human biases. This theoretically could result in cutting-edge algorithms that businesses could rely on to improve their efficiency.

"However, it is worth bearing in mind that for the time being the AI is still proof of concept and it will be some time before it is able to output the complex kinds of algorithms currently in use. On the other hand, the research [demonstrates how] the future of AI may be algorithms produced by other machines."

View post:
Artificial Intelligence That Can Evolve on Its Own Is Being Tested by Google Scientists - Newsweek

Tesla's acquisition of DeepScale starts to pay off with new IP in machine learning – Electrek

Tesla's acquisition of machine-learning startup DeepScale is starting to pay off, with the team hired through the acquisition now delivering new IP for the automaker.

Late last year, it was revealed that Tesla had acquired DeepScale, a Bay Area-based startup that focuses on deep neural networks (DNNs) for self-driving vehicles, for an undisclosed amount.

The startup specialized in power-efficient deep learning systems, an area of focus for Tesla as well, since the automaker decided to design its own computer chip to power its self-driving software.

There was speculation that Tesla acquired the small startup team in order to accelerate its machine learning development.

Now we are seeing some of that team's work, thanks to a new patent application.

Just days after Tesla acquired the startup in October 2019, the automaker applied for a new patent with three members of DeepScale listed as inventors: Matthew Cooper, Paras Jain, and Harsimran Singh Sidhu.

The patent application, called "Systems and Methods for Training Machine Models with Augmented Data," was published yesterday.

Tesla writes about it in the application:

Systems and methods for training machine models with augmented data. An example method includes identifying a set of images captured by a set of cameras while affixed to one or more image collection systems. For each image in the set of images, a training output for the image is identified. For one or more images in the set of images, an augmented image for a set of augmented images is generated. Generating an augmented image includes modifying the image with an image manipulation function that maintains camera properties of the image. The augmented training image is associated with the training output of the image. A set of parameters of the predictive computer model are trained to predict the training output based on an image training set including the images and the set of augmented images.

The system that the DeepScale team, now working under Tesla, is trying to patent here relates to training a neural net using data from several different sensors observing scenes, like the eight cameras in Tesla's Autopilot sensor array.

They write about the difficulties of such a situation in the patent application:

In typical machine learning applications, data may be augmented in various ways to avoid overfitting the model to the characteristics of the capture equipment used to obtain the training data. For example, in typical sets of images used for training computer models, the images may represent objects captured with many different capture environments having varying sensor characteristics with respect to the objects being captured. For example, such images may be captured by various sensor characteristics, such as various scales (e.g., significantly different distances within the image), with various focal lengths, by various lens types, with various pre- or post-processing, different software environments, sensor array hardware, and so forth. These sensors may also differ with respect to different extrinsic parameters, such as the position and orientation of the imaging sensors with respect to the environment as the image is captured. All of these different types of sensor characteristics can cause the captured images to present differently and variously throughout the different images in the image set and make it more difficult to properly train a computer model.

Here they summarize their solution to the problem:

One embodiment is a method for training a set of parameters of a predictive computer model. This embodiment may include: identifying a set of images captured by a set of cameras while affixed to one or more image collection systems; for each image in the set of images, identifying a training output for the image; for one or more images in the set of images, generating an augmented image for a set of augmented images by: generating an augmented image for a set of augmented images by modifying the image with an image manipulation function that maintains camera properties of the image, and associating the augmented training image with the training output of the image; training the set of parameters of the predictive computer model to predict the training output based on an image training set including the images and the set of augmented images.

An additional embodiment may include a system having one or more processors and non-transitory computer storage media storing instructions that when executed by the one or more processors, cause the processors to perform operations comprising: identifying a set of images captured by a set of cameras while affixed to one or more image collection systems; for each image in the set of images, identifying a training output for the image; for one or more images in the set of images, generating an augmented image for a set of augmented images by: generating an augmented image for a set of augmented images by modifying the image with an image manipulation function that maintains camera properties of the image, and associating the augmented training image with the training output of the image; training the set of parameters of the predictive computer model to predict the training output based on an image training set including the images and the set of augmented images.

Another embodiment may include a non-transitory computer-readable medium having instructions for execution by a processor, the instructions when executed by the processor causing the processor to: identify a set of images captured by a set of cameras while affixed to one or more image collection systems; for each image in the set of images, identify a training output for the image; for one or more images in the set of images, generate an augmented image for a set of augmented images by: generate an augmented image for a set of augmented images by modifying the image with an image manipulation function that maintains camera properties of the image, and associate the augmented training image with the training output of the image; train the computer model to learn to predict the training output based on an image training set including the images and the set of augmented images.
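
To picture the general idea rather than Tesla's actual implementation, here is a hedged sketch: augmentations limited to photometric changes (gain, brightness, sensor-like noise) leave the geometry implied by the camera untouched, and every augmented frame keeps the label of the image it came from. All function names and transform choices below are assumptions.

```python
# Hedged sketch of augmentation that preserves camera properties: photometric
# jitter only (no crops, flips or warps), each variant reusing its source label.
# The transforms and names are illustrative assumptions, not Tesla's code.
import numpy as np

def augment_preserving_camera(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Brightness/contrast jitter plus mild sensor noise; the viewpoint, focal
    length and distortion implied by the pixels are unchanged."""
    img = image.astype(np.float32)
    img = img * rng.uniform(0.8, 1.2)                 # contrast / gain jitter
    img = img + rng.uniform(-20, 20)                  # brightness offset
    img = img + rng.normal(0.0, 3.0, size=img.shape)  # mild sensor noise
    return np.clip(img, 0, 255).astype(np.uint8)

def build_augmented_set(images, labels, copies=2, seed=0):
    """Return the original (image, label) pairs plus `copies` augmented variants
    of each image, every variant keeping its source image's training output."""
    rng = np.random.default_rng(seed)
    pairs = list(zip(images, labels))
    for image, label in zip(images, labels):
        for _ in range(copies):
            pairs.append((augment_preserving_camera(image, rng), label))
    return pairs

# Toy usage with random "camera frames"; a real pipeline would feed these pairs
# into model training alongside the unmodified images.
frames = [np.random.randint(0, 256, (4, 6, 3), dtype=np.uint8) for _ in range(3)]
labels = ["truck", "dog", "truck"]
print(len(build_augmented_set(frames, labels)), "training pairs")
```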

As we previously reported, Tesla is going through a significant foundational rewrite of Tesla Autopilot. As part of the rewrite, CEO Elon Musk says that the neural net is absorbing more and more of the problem.

It will also include a more in-depth labeling system.

Musk described 3D labeling as a game-changer:

"It's where the car goes into a scene with eight cameras, and kind of paint a path, and then you can label that path in 3D."

This new way of training machine learning systems that use multiple cameras, like Tesla's Autopilot, with augmented data could be part of this new Autopilot update.

View original post here:
Tesla's acquisition of DeepScale starts to pay off with new IP in machine learning - Electrek

New AI improves itself through Darwinian-style evolution – Big Think

Machine learning has fundamentally changed how we engage with technology. Today, it's able to curate social media feeds, recognize complex images, drive cars down the interstate, and even diagnose medical conditions, to name a few tasks.

But while machine learning technology can do some things automatically, it still requires a lot of input from human engineers to set it up, and point it in the right direction. Inevitably, that means human biases and limitations are baked into the technology.

So, what if scientists could minimize their influence on the process by creating a system that generates its own machine-learning algorithms? Could it discover new solutions that humans never considered?

To answer these questions, a team of computer scientists at Google developed a project called AutoML-Zero, which is described in a preprint paper published on arXiv.

"Human-designed components bias the search results in favor of human-designed algorithms, possibly reducing the innovation potential of AutoML," the paper states. "Innovation is also limited by having fewer options: you cannot discover what you cannot search for."

Automatic machine learning (AutoML) is a fast-growing area of deep learning. In simple terms, AutoML seeks to automate the end-to-end process of applying machine learning to real-world problems. Unlike other machine-learning techniques, AutoML requires relatively little human effort, which means companies might soon be able to utilize it without having to hire a team of data scientists.

AutoML-Zero is unique because it uses simple mathematical concepts to generate algorithms "from scratch," as the paper states. Then, it selects the best ones, and mutates them through a process that's similar to Darwinian evolution.

AutoML-Zero first randomly generates 100 candidate algorithms, each of which then performs a task, like recognizing an image. The performance of these algorithms is compared to hand-designed algorithms. AutoML-Zero then selects the top-performing algorithm to be the "parent."

"This parent is then copied and mutated to produce a child algorithm that is added to the population, while the oldest algorithm in the population is removed," the paper states.

The system can create thousands of populations at once, which are mutated through random procedures. Over enough cycles, these self-generated algorithms get better at performing tasks.
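
The loop described here resembles what the research literature calls regularized evolution: pick a parent (typically the best of a small random sample), append a mutated copy to the population, and retire the oldest member. The sketch below follows that scheme with toy stand-ins for the fitness and mutation functions; it is not the paper's implementation.

```python
# Sketch of a regularized-evolution-style loop: best-of-sample parent selection,
# mutated child appended, oldest candidate retired. Fitness and mutation here
# are toy stand-ins to show the mechanics.
import random
from collections import deque

def evolve(fitness, mutate, random_candidate,
           population_size=100, sample_size=10, cycles=1000):
    # Each entry is a (candidate, score) pair; deque order tracks age.
    population = deque()
    for _ in range(population_size):
        c = random_candidate()
        population.append((c, fitness(c)))
    for _ in range(cycles):
        sample = random.sample(list(population), sample_size)
        parent, _ = max(sample, key=lambda entry: entry[1])
        child = mutate(parent)
        population.append((child, fitness(child)))
        population.popleft()  # retire the oldest candidate
    return max(population, key=lambda entry: entry[1])

# Toy usage: "algorithms" are just lists of numbers scored by how close they
# sit to zero, purely to exercise the selection-and-aging loop.
best, score = evolve(
    fitness=lambda c: -sum(x * x for x in c),
    mutate=lambda c: [x + random.uniform(-0.1, 0.1) for x in c],
    random_candidate=lambda: [random.uniform(-1, 1) for _ in range(5)],
)
print("best toy candidate:", [round(x, 3) for x in best], "score:", round(score, 4))
```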

"The nice thing about this kind of AI is that it can be left to its own devices without any pre-defined parameters, and is able to plug away 24/7 working on developing new algorithms," Ray Walsh, a computer expert and digital researcher at ProPrivacy, told Newsweek.

If computer scientists can scale up this kind of automated machine-learning to complete more complex tasks, it could usher in a new era of machine learning where systems are designed by machines instead of humans. This would likely make it much cheaper to reap the benefits of deep learning, while also leading to novel solutions to real-world problems.

Still, the recent paper was a small-scale proof of concept, and the researchers note that much more research is needed.

"Starting from empty component functions and using only basic mathematical operations, we evolved linear regressors, neural networks, gradient descent... multiplicative interactions. These results are promising, but there is still much work to be done," the scientists' preprint paper noted.

Go here to see the original:
New AI improves itself through Darwinian-style evolution - Big Think

Research Team Uses Machine Learning to Track COVID-19 Spread in Communities and Predict Patient Outcomes – The Ritz Herald

The COVID-19 pandemic is raising critical questions regarding the dynamics of the disease, its risk factors, and the best approach to address it in healthcare systems. MIT Sloan School of Management Prof. Dimitris Bertsimas and nearly two dozen doctoral students are using machine learning and optimization to find answers. Their effort is summarized on the COVIDanalytics platform, where their models are generating accurate real-time insight into the pandemic. The group is focusing on four main directions: predicting disease progression, optimizing resource allocation, uncovering clinically important insights, and assisting in the development of COVID-19 testing.

"The backbone for each of these analytics projects is data, which we've extracted from public registries, clinical electronic health records, as well as over 120 research papers that we compiled in a new database. We're testing our models against incoming data to determine if they make good predictions, and we continue to add new data and use machine learning to make the models more accurate," says Bertsimas.

The first project addresses dilemmas at the front line, such as the need for more supplies and equipment. Protective gear must go to healthcare workers and ventilators to critically ill patients. The researchers developed an epidemiological model to track the progression of COVID-19 in a community, so hospitals can predict surges and determine how to allocate resources.
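
The article does not spell out the model's internals, but surge prediction of this kind is usually built on compartmental epidemic models. The sketch below is a minimal SEIR-style example with made-up parameters, intended only to show the shape of such a model, not the MIT group's actual implementation.

```python
# Minimal SEIR-style compartmental sketch of the kind of model used to project
# case surges. Parameters and initial conditions are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma):
    """Susceptible-Exposed-Infectious-Recovered dynamics (population fractions)."""
    s, e, i, r = y
    return [-beta * s * i,              # new exposures
            beta * s * i - sigma * e,   # exposed becoming infectious
            sigma * e - gamma * i,      # infectious recovering
            gamma * i]

# Assumed parameters: ~5-day incubation, ~10-day infectious period, R0 around 2.5.
beta, sigma, gamma = 0.25, 1 / 5, 1 / 10
y0 = [0.999, 0.0, 0.001, 0.0]           # 0.1% of the population initially infectious
sol = solve_ivp(seir, (0, 180), y0, args=(beta, sigma, gamma),
                t_eval=np.arange(0, 181))

peak_day = sol.t[np.argmax(sol.y[2])]
print(f"projected infection peak around day {peak_day:.0f}, "
      f"with {sol.y[2].max():.1%} of the population infectious")
```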

The team quickly realized that the dynamics of the pandemic differ from one state to another, creating opportunities to mitigate shortages by pooling some of the ventilator supply across states. Thus, they employed optimization to see how ventilators could be shared among the states and created an interactive application that can help both the federal and state governments.
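
One way to picture the pooling step is as a small linear program that ships surplus ventilators between states to minimize total unmet demand. The states, numbers and formulation below are illustrative assumptions, not the group's actual optimization model.

```python
# Hedged sketch: reallocate ventilators between states to minimize total
# shortage, posed as a linear program. All figures are made up for illustration.
import numpy as np
from scipy.optimize import linprog

states = ["A", "B", "C"]
supply = np.array([120.0, 40.0, 80.0])   # ventilators on hand
demand = np.array([60.0, 150.0, 70.0])   # projected peak need

n = len(states)
pairs = [(i, j) for i in range(n) for j in range(n) if i != j]  # directed transfers
n_x = len(pairs)

# Decision variables: one transfer per directed pair, then one shortage per state.
c = np.concatenate([np.zeros(n_x), np.ones(n)])  # objective: total shortage

A_ub, b_ub = [], []
for i in range(n):
    # Shortage definition: sent_i - received_i - shortage_i <= supply_i - demand_i
    row = np.zeros(n_x + n)
    for p, (src, dst) in enumerate(pairs):
        row[p] = 1.0 if src == i else (-1.0 if dst == i else 0.0)
    row[n_x + i] = -1.0
    A_ub.append(row)
    b_ub.append(supply[i] - demand[i])
    # Capacity: a state cannot ship out more ventilators than it has on hand.
    cap = np.zeros(n_x + n)
    for p, (src, _) in enumerate(pairs):
        if src == i:
            cap[p] = 1.0
    A_ub.append(cap)
    b_ub.append(supply[i])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * (n_x + n))
for p, (src, dst) in enumerate(pairs):
    if res.x[p] > 1e-6:
        print(f"ship {res.x[p]:.0f} ventilators from state {states[src]} to {states[dst]}")
print("total unmet demand:", round(res.x[n_x:].sum()))
```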

"Different regions will hit their peak number of cases at different times, meaning their need for supplies will fluctuate over the course of weeks. This model could be helpful in shaping future public policy," notes Bertsimas.

Recently, the researchers connected with long-time collaborators at Hartford HealthCare to deploy the model, helping the network of seven campuses assess their needs. Coupling county-level data with patient records, they are rethinking the way resources are allocated across the different clinics to minimize potential shortages.

The third project focuses on building a mortality and disease progression calculator to predict whether someone has the virus, and whether they need hospitalization or even more intensive care. He points out that current advice for patients is at best based on age, and perhaps some symptoms. As data about individual patients is limited, their model uses machine learning based on symptoms, demographics, comorbidities, lab test results as well as a simulation model to generate patient data. Data from new studies is continually added to the model as it becomes available.
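
A calculator of this kind can be sketched as a gradient-boosted classifier trained on patient features such as age, vital signs, lab values and comorbidities. The synthetic data and feature list below are assumptions made for illustration; the MIT models are built on real clinical records and their own feature engineering.

```python
# Hedged sketch of a mortality-risk calculator: gradient boosting on synthetic
# patient features. Features, data and outcome rule are illustrative only.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
patients = pd.DataFrame({
    "age": rng.integers(20, 95, n),
    "oxygen_saturation": rng.normal(94, 4, n),
    "crp": rng.gamma(2.0, 30.0, n),              # inflammation marker (mg/L)
    "diabetes": rng.integers(0, 2, n),
    "cardiovascular_disease": rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to the features, only so the sketch runs end to end.
risk = (0.04 * (patients["age"] - 50)
        - 0.3 * (patients["oxygen_saturation"] - 94)
        + 0.01 * patients["crp"]
        + 0.5 * patients["diabetes"])
mortality = (risk + rng.normal(0, 1.5, n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    patients, mortality, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("test AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```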

"We started with data published in Wuhan, Italy, and the U.S., including infection and death rates as well as data coming from patients in the ICU and the effects of social isolation. We enriched them with clinical records from a major hospital in Lombardy, which was severely impacted by the spread of the virus. Through that process, we created a new model that is quite accurate. Its power comes from its ability to learn from the data," says Bertsimas.

"By probing the severity of the disease in a patient, it can actually guide clinicians in congested areas in a much better way," says Bertsimas.

Their fourth project involves creating a convenient test for COVID-19. Using data from about 100 samples from Morocco, the group is using machine learning to augment a test previously designed at the Mohammed VI Polytechnic University to come up with more precise results. The model can accurately detect the virus in patients around 90% of the time, while false positives are low.
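
As a rough illustration of what those figures mean in practice, the snippet below assumes a 5% false-positive rate (the article says only that false positives are "low") and shows how the chance that a positive result reflects a true infection shifts with how widespread the virus is.

```python
# Worked example with assumed numbers: how sensitivity and false-positive rate
# translate into the probability that a positive test means a true infection.
sensitivity = 0.90          # P(test positive | infected), from the article
false_positive_rate = 0.05  # assumed "low" rate; not a figure from the study

for prevalence in (0.01, 0.10, 0.30):
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    print(f"prevalence {prevalence:.0%}: P(infected | positive test) = {ppv:.0%}")
```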

The team is currently working on expanding the epidemiological model to a global scale, creating more accurate and informed clinical risk calculators, and identifying potential ways that would allow us to go back to normality.

"We have released all our source code and made the public database available for other people too. We will continue to do our own analysis, but if other people have better ideas, we welcome them," says Bertsimas.

Continued here:
Research Team Uses Machine Learning to Track COVID-19 Spread in Communities and Predict Patient Outcomes - The Ritz Herald
