

Category Archives: Machine Learning

Machine learning could speed the arrival of ultra-fast-charging electric car – Chemie.de

Using machine learning, a Stanford-led research team has slashed battery testing times - a key barrier to longer-lasting, faster-charging batteries for electric vehicles.

Battery performance can make or break the electric vehicle experience, from driving range to charging time to the lifetime of the car. Now, artificial intelligence has made dreams like recharging an EV in the time it takes to stop at a gas station a more likely reality, and could help improve other aspects of battery technology.

For decades, advances in electric vehicle batteries have been limited by a major bottleneck: evaluation times. At every stage of the battery development process, new technologies must be tested for months or even years to determine how long they will last. But now, a team led by Stanford professors Stefano Ermon and William Chueh has developed a machine learning-based method that slashes these testing times by 98 percent. Although the group tested their method on battery charge speed, they said it can be applied to numerous other parts of the battery development pipeline and even to non-energy technologies.

"In battery testing, you have to try a massive number of things, because the performance you get will vary drastically," said Ermon, an assistant professor of computer science. "With AI, we're able to quickly identify the most promising approaches and cut out a lot of unnecessary experiments."

The study, published by Nature on Feb. 19, was part of a larger collaboration among scientists from Stanford, MIT and the Toyota Research Institute that bridges foundational academic research and real-world industry applications. The goal: finding the best method for charging an EV battery in 10 minutes that maximizes the battery's overall lifetime. The researchers wrote a program that, based on only a few charging cycles, predicted how batteries would respond to different charging approaches. The software also decided in real time what charging approaches to focus on or ignore. By reducing both the length and number of trials, the researchers cut the testing process from almost two years to 16 days.

"We figured out how to greatly accelerate the testing process for extreme fast charging," said Peter Attia, who co-led the study while he was a graduate student. "What's really exciting, though, is the method. We can apply this approach to many other problems that, right now, are holding back battery development for months or years."

Designing ultra-fast-charging batteries is a major challenge, mainly because it is difficult to make them last. The intensity of the faster charge puts greater strain on the battery, which often causes it to fail early. To prevent this damage to the battery pack, a component that accounts for a large chunk of an electric car's total cost, battery engineers must test an exhaustive series of charging methods to find the ones that work best.

The new research sought to optimize this process. At the outset, the team saw that fast-charging optimization amounted to many trial-and-error tests - something that is inefficient for humans, but the perfect problem for a machine.

"Machine learning is trial-and-error, but in a smarter way," said Aditya Grover, a graduate student in computer science who co-led the study. "Computers are far better than us at figuring out when to explore - try new and different approaches - and when to exploit, or zero in, on the most promising ones."

The team used this power to their advantage in two key ways. First, they used it to reduce the time per cycling experiment. In a previous study, the researchers found that instead of charging and recharging every battery until it failed - the usual way of testing a battery's lifetime - they could predict how long a battery would last after only its first 100 charging cycles. This is because the machine learning system, after being trained on a few batteries cycled to failure, could find patterns in the early data that presaged how long a battery would last.
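To make the early-prediction idea concrete, here is a minimal sketch, not the researchers' code, of fitting a simple regression model that maps early-cycle features to total cycle life. The feature names and data below are invented for illustration.

```python
# Minimal sketch (not the authors' code): predict a cell's total cycle life
# from features computed over its first 100 charge/discharge cycles.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical training data: each row holds early-cycle features for one cell,
# e.g. capacity-fade slope, variance of discharge-capacity curves, internal resistance.
X = rng.normal(size=(200, 3))
# Hypothetical labels: total cycles until each cell reached end of life.
y = 800 + 300 * X[:, 0] - 150 * X[:, 1] + rng.normal(scale=50, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)

print("Predicted cycle life for first test cell:", model.predict(X_test[:1])[0])
```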

Second, machine learning reduced the number of methods they had to test. Instead of testing every possible charging method equally, or relying on intuition, the computer learned from its experiences to quickly find the best protocols to test.
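The explore/exploit trade-off Grover describes can be illustrated with a small bandit-style loop. The sketch below is only an illustration, not the study's actual optimization algorithm, and the protocol lifetimes are hypothetical.

```python
# Illustrative sketch of the explore/exploit idea (not the study's algorithm):
# an upper-confidence-bound bandit that allocates test runs across candidate
# charging protocols, favouring those whose estimated lifetime looks best.
import numpy as np

rng = np.random.default_rng(1)
true_lifetimes = np.array([700.0, 820.0, 760.0, 900.0, 650.0])  # hypothetical
n_protocols = len(true_lifetimes)

counts = np.zeros(n_protocols)
means = np.zeros(n_protocols)

for t in range(1, 201):
    # Try each protocol once, then pick by mean estimate plus an exploration bonus.
    if t <= n_protocols:
        arm = t - 1
    else:
        ucb = means + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    # A noisy "experiment": an early-prediction estimate of lifetime for this protocol.
    reward = true_lifetimes[arm] + rng.normal(scale=60)
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]

print("Protocol judged best:", int(np.argmax(means)))
```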

By testing fewer methods for fewer cycles, the study's authors quickly found an optimal ultra-fast-charging protocol for their battery. In addition to dramatically speeding up the testing process, the computer's solution was also better - and much more unusual - than what a battery scientist would likely have devised, said Ermon.

"It gave us this surprisingly simple charging protocol - something we didn't expect," Ermon said. Instead of charging at the highest current at the beginning of the charge, the algorithm's solution uses the highest current in the middle of the charge. "That's the difference between a human and a machine: The machine is not biased by human intuition, which is powerful but sometimes misleading."

The researchers said their approach could accelerate nearly every piece of the battery development pipeline: from designing the chemistry of a battery to determining its size and shape, to finding better systems for manufacturing and storage. This would have broad implications not only for electric vehicles but for other types of energy storage, a key requirement for making the switch to wind and solar power on a global scale.

"This is a new way of doing battery development," said Patrick Herring, co-author of the study and a scientist at the Toyota Research Institute. "Having data that you can share among a large number of people in academia and industry, and that is automatically analyzed, enables much faster innovation."

The study's machine learning and data collection system will be made available for future battery scientists to freely use, Herring added. By using this system to optimize other parts of the process with machine learning, battery development - and the arrival of newer, better technologies - could accelerate by an order of magnitude or more, he said.

The potential of the study's method extends even beyond the world of batteries, Ermon said. Other big data testing problems, from drug development to optimizing the performance of X-rays and lasers, could also be revolutionized by the use of machine learning optimization. And ultimately, he said, it could even help to optimize one of the most fundamental processes of all.

"The bigger hope is to help the process of scientific discovery itself," Ermon said. "We're asking: Can we design these methods to come up with hypotheses automatically? Can they help us extract knowledge that humans could not? As we get better and better algorithms, we hope the whole scientific discovery process may drastically speed up."

Read the original here:
Machine learning could speed the arrival of ultra-fast-charging electric car - Chemie.de

Posted in Machine Learning | Comments Off on Machine learning could speed the arrival of ultra-fast-charging electric car – Chemie.de

Machine learning finds a novel antibiotic able to kill superbugs – STAT – STAT

For decades, discovering novel antibiotics meant digging through the same patch of dirt. Biologists spent countless hours screening soil-dwelling microbes for properties known to kill harmful bacteria. But as superbugs resistant to existing antibiotics have spread widely, breakthroughs were becoming as rare as new places to dig.

Now, artificial intelligence is giving scientists a reason to dramatically expand their search into databases of molecules that look nothing like existing antibiotics.

A study published Thursday in the journal Cell describes how researchers at the Massachusetts Institute of Technology used machine learning to identify a molecule that appears capable of countering some of the world's most formidable pathogens.


When tested in mice, the molecule, dubbed halicin, effectively treated the gastrointestinal bug Clostridium difficile (C. diff), a common killer of hospitalized patients, and another type of drug-resistant bacteria that often causes infections in the blood, urinary tract, and lungs.

The most surprising feature of the molecule? It is structurally distinct from existing antibiotics, the researchers said. It was found in a drug-repurposing database where it was initially identified as a possible treatment for diabetes, a feat that showcases the power of machine learning to support discovery efforts.

"Now we're finding leads among chemical structures that in the past we wouldn't have even hallucinated that those could be an antibiotic," said Nigam Shah, professor of biomedical informatics at Stanford University. "It greatly expands the search space into dimensions we never knew existed."

Shah, who was not involved in the research, said that the generation of a promising molecule is just the first step in a long and uncertain process of testing its safety and effectiveness in humans.

But the research demonstrates how machine learning, when paired with expert biologists, can speed up time-consuming preclinical work, and give researchers greater confidence that the molecule they're examining is worth pursuing through more costly phases of drug discovery.

That is an especially pressing challenge in the development of new antibiotics, because a lack of economic incentives has caused pharmaceutical companies to pull back from the search for badly needed treatments. Each year in the U.S., drug-resistant bacteria and fungi cause more than 2.8 million infections and 35,000 deaths, with more than a third of fatalities attributable to C. diff, according to the Centers for Disease Control and Prevention.

The damage is far greater in countries with fewer health care resources.

Without the development of novel antibiotics, the World Health Organization estimates that the global death toll from drug-resistant infections will rise to 10 million a year by 2050, up from about 700,000 a year currently.

In addition to finding halicin, the researchers at MIT reported that their machine learning model identified eight other antibacterial compounds whose structures differ significantly from known antibiotics.

"I do think this platform will very directly reduce the cost involved in the discovery phase of antibiotic development," said James Collins, a co-author of the study who is a professor of bioengineering at MIT. "With these models, one can now get after novel chemistries in a shorter period of time involving less investment."

The machine learning platform was developed by Regina Barzilay, a professor of computer science and artificial intelligence who works with Collins as co-lead of the Jameel Clinic for Machine Learning in Health at MIT. It relies on a deep neural network, a type of AI architecture that uses multiple processing layers to analyze different aspects of data to deliver an output.

Prior types of machine learning systems required close supervision from humans to analyze molecular properties in drug discovery and produced spotty results. But Barzilay's model is part of a new generation of machine learning systems that can automatically learn chemical properties connected to a specific function, such as an ability to kill bacteria.

Barzilay worked with Collins and other biologists at MIT to train the system on more than 2,500 chemical structures, including those that looked nothing like antibiotics. The effect was to counteract bias that typically trips up most human scientists who are trained to look for molecular structures that look a lot like other antibiotics.

The neural net was able to isolate molecules that were predicted to have antibacterial qualities but didn't look like existing antibiotics, resulting in the identification of halicin.
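As a rough, hedged illustration of this kind of screening, the sketch below trains a small classifier on hypothetical molecular feature vectors and ranks an unscreened library by predicted antibacterial activity. The MIT group's actual model was a neural network operating directly on chemical structures; the stand-in fingerprints and labels here are invented.

```python
# Minimal sketch of structure-based screening (not the MIT model): train a
# classifier on molecular feature vectors labelled as growth-inhibiting or not,
# then rank an unscreened library by predicted score.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

# Hypothetical stand-in for molecular fingerprints (e.g. bit vectors); real
# pipelines would compute these from the molecules' chemical structures.
X_train = rng.integers(0, 2, size=(2500, 128))
y_train = rng.integers(0, 2, size=2500)        # 1 = inhibited bacterial growth

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Score a separate "drug-repurposing" library and surface the top candidates.
X_library = rng.integers(0, 2, size=(6000, 128))
scores = clf.predict_proba(X_library)[:, 1]
top_hits = np.argsort(scores)[::-1][:10]
print("Indices of top-ranked candidate molecules:", top_hits)
```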

"To use a crude analogy, it's like you show an AI all the different means of transportation, but you've not shown it an electric scooter," said Shah, the bioinformatics professor at Stanford. "And then it independently looks at an electric scooter and says, 'Yeah, this could be useful for transportation.'"

In follow-up testing in the lab, Collins said, halicin displayed a remarkable ability to fight a wide range of multidrug-resistant pathogens. Tested against 36 such pathogens, it displayed potency against 35 of them. Collins said testing in mice showed excellent activity against C. diff, tuberculosis, and other bacteria.

The ability to identify molecules with specific antibiotic properties could aid in the development of drugs to treat so-called orphan conditions that affect a small percentage of the population but are not targeted by drug companies because of the lack of financial rewards.

Collins noted that commercializing halicin would take many months of study to evaluate its toxicity in humans, followed by multiple phases of clinical trials to establish safety and efficacy.

Read the original post:
Machine learning finds a novel antibiotic able to kill superbugs - STAT - STAT

Posted in Machine Learning | Comments Off on Machine learning finds a novel antibiotic able to kill superbugs – STAT – STAT

Machine Learning: Real-life applications and it’s significance in Data Science – Techstory

Do you know how Google Maps predicts traffic? Have you wondered how Amazon Prime or Netflix recommends just the movie you would watch? We all know some form of Artificial Intelligence must be at work. Machine Learning uses algorithms and statistical models to perform such tasks. The same approach is used to find faces on Facebook and to detect cancer. A Machine Learning course can teach the development and application of such models.

Artificial Intelligence mimics human intelligence, and Machine Learning is one of its most significant branches. The need for its development is ongoing and growing.

Tasks as simple as spam detection in Gmail illustrate its significance in our day-to-day lives, which is why data scientists are in such demand. An aspiring data scientist can learn to develop and apply such algorithms through a Machine Learning certification.

Machine learning, as a subset of Artificial Intelligence, is applied for varied purposes. There is a misconception that applying Machine Learning algorithms requires prior mathematical knowledge, but a Machine Learning online course would suggest otherwise: contrary to the usual way of studying, a top-down approach is involved. An aspiring data scientist, a business person or anyone else can learn how to apply statistical models for various purposes. Here is a list of some well-known applications of Machine Learning.

Microsoft's research lab uses Machine Learning to study cancer. This helps in individualized oncological treatment and in generating detailed progress reports. Data engineers apply pattern recognition, Natural Language Processing and computer vision algorithms to work through large datasets, which helps oncologists conduct precise, breakthrough tests.

Likewise, machine learning is applied in biomedical engineering, where it has led to the automation of diagnostic tools used to detect many sorts of neurological and psychiatric disorders.

We have all had a conversation with Siri or Alexa. They use speech recognition to capture our requests, and Machine Learning is applied to auto-generate responses based on previous data. Hello Barbie is a Siri-like toy for kids to play with; it uses advanced analytics, machine learning and Natural Language Processing to respond, and as the first AI-enabled toy it could lead to more such inventions.

Google uses Machine Learning statistical models to acquire inputs for Google Maps. The models collect details such as the distance from start point to endpoint, trip duration and bus schedules, and this historical data is stored and reused. Machine Learning algorithms developed for data prediction recognise patterns across these inputs and predict approximate time delays.
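A minimal sketch of that kind of delay prediction, assuming invented trip features and delays rather than anything Google actually uses, might look like this:

```python
# Hedged sketch of the idea described above (not Google's system): learn a
# mapping from trip features and historical timings to expected delay.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)

# Hypothetical history: [distance_km, hour_of_day, scheduled_duration_min]
X = np.column_stack([
    rng.uniform(1, 30, 1000),
    rng.integers(0, 24, 1000),
    rng.uniform(5, 90, 1000),
])
# Hypothetical observed delays in minutes, worse at rush hour.
rush = np.isin(X[:, 1], [8, 9, 17, 18]).astype(float)
y = 0.2 * X[:, 0] + 8 * rush + rng.normal(scale=2, size=1000)

model = GradientBoostingRegressor().fit(X, y)
print("Predicted delay (min) for a 12 km trip at 17:00, 25 min scheduled:",
      round(model.predict([[12, 17, 25]])[0], 1))
```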

Another well-known Google application, Google Translate, involves Machine Learning. Deep learning helps the system learn language rules from recorded conversations. Neural networks such as long short-term memory (LSTM) networks help retain and update long-term information, while recurrent neural networks capture the sequential structure of language. Even bilingual processing is now feasible.
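The following tiny example, which is only an illustration and nothing like the scale of Google Translate, shows how an LSTM carries information across a token sequence, the property that makes such networks useful for language:

```python
# Tiny illustration (not Google Translate's architecture) of how an LSTM carries
# information across a token sequence; real translation systems stack such
# layers into encoder-decoder or transformer models.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 32, 64
embedding = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

# A hypothetical source sentence encoded as token ids (batch of 1, length 6).
tokens = torch.tensor([[12, 45, 7, 301, 9, 2]])
outputs, (h_n, c_n) = lstm(embedding(tokens))

# h_n summarises the whole sentence; a decoder would condition on it to emit
# target-language tokens one at a time.
print(outputs.shape, h_n.shape)   # torch.Size([1, 6, 64]) torch.Size([1, 1, 64])
```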

Facebook uses image recognition and computer vision to recognise the content of images fed to it as inputs. Statistical models developed using Machine Learning map any information associated with these images, and Facebook generates automated captions from them. These captions are meant to describe images for visually impaired people, an innovation that has nudged data engineers to come up with other such valuable real-time applications.

For recommendations such as Netflix's, mentioned above, the aim is to increase the likelihood of a customer watching a recommended movie. This is achieved by studying previously shown thumbnails: every available movie has several thumbnails, each assigned numerical values, and an algorithm studies these values to derive recommendations through pattern recognition.
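A hedged sketch of that scoring idea, with wholly invented features and labels rather than Netflix's actual data or model, could look like this:

```python
# Hedged sketch of the recommendation idea above (not Netflix's system): score
# each title's thumbnail-derived features against a user's viewing history and
# rank titles by predicted probability of a watch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Hypothetical training data: thumbnail feature vectors and whether the user
# watched the title after seeing that thumbnail.
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

candidates = rng.normal(size=(20, 8))           # unseen titles' thumbnail features
ranking = np.argsort(model.predict_proba(candidates)[:, 1])[::-1]
print("Titles to surface first:", ranking[:5])
```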

Tesla uses computer vision, data prediction and path planning for autonomous driving. The machine learning practices applied make the innovation stand out: deep neural networks trained on driving data generate instructions, and manoeuvres such as changing lanes are carried out based on imitation learning.

Gmail, Yahoo Mail and Outlook employ machine learning techniques such as neural networks for spam filtering. These networks detect patterns in historical data, training on known spam and phishing messages, and the resulting filters are reported to reach 99.9 percent accuracy.
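A minimal spam-filter sketch in this spirit, using a simple bag-of-words model on a handful of invented messages (production filters are far more elaborate), might be:

```python
# Minimal text-based spam filter sketch: bag-of-words features plus a
# naive Bayes classifier trained on a few invented example messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labelled messages: 1 = spam, 0 = legitimate.
messages = [
    "Claim your free prize now", "Meeting moved to 3pm",
    "Cheap loans approved instantly", "Lunch tomorrow?",
]
labels = [1, 0, 1, 0]

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, labels)

print(filter_model.predict(["You have won a free prize"]))   # likely [1]
```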

As people grow more health-conscious, the development of fitness-monitoring applications is on the rise. As a market leader, Fitbit keeps its edge by employing machine learning methods: trained models predict user activities, which is achieved through data pre-processing, processing and partitioning. There is still room to extend the application to additional purposes.
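A small sketch of activity recognition along these lines, using invented sensor features and labels rather than Fitbit's data or code, shows the partition-train-evaluate flow:

```python
# Hedged sketch of activity recognition (not Fitbit's code): window raw sensor
# readings, compute simple features, partition the data and classify the activity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Hypothetical pre-processed windows: [mean accel, accel variance, step rate]
X = rng.normal(size=(600, 3))
# Hypothetical labels derived from step rate: 0 = resting, 1 = walking, 2 = running
y = np.digitize(X[:, 2], [-0.5, 0.5])

# Data partitioning: hold out a test split to check the trained model.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("Held-out accuracy:", round(clf.score(X_test, y_test), 2))
```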

The applications mentioned above are just the tip of the iceberg. As a subset of Artificial Intelligence, machine learning proves necessary in many other streams of daily activity.


Go here to see the original:
Machine Learning: Real-life applications and it's significance in Data Science - Techstory

Posted in Machine Learning | Comments Off on Machine Learning: Real-life applications and it’s significance in Data Science – Techstory

Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning – Yahoo Finance


Recently, the international evaluation body Standard Performance Evaluation Corporation (SPEC) finalized the election of new Open System Steering Committee (OSSC) executive members, which include Inspur, Intel, AMD, IBM, Oracle and three other companies.

It is worth noting that Inspur, a re-elected OSSC member, was also re-elected as chair of the SPEC Machine Learning (SPEC ML) working group. The ML benchmark development plan proposed by Inspur has been approved by the members; it aims to provide users with a standard for evaluating machine learning computing performance.

SPEC is a global and authoritative third-party application performance testing organization established in 1988, which aims to establish and maintain a series of performance, function, and energy consumption benchmarks, and provides important reference standards for users to evaluate the performance and energy efficiency of computing systems. The organization consists of 138 well-known technology companies, universities and research institutions in the industry such as Intel, Oracle, NVIDIA, Apple, Microsoft, Inspur, Berkeley, Lawrence Berkeley National Laboratory, etc., and its test standard has become an important indicator for many users to evaluate overall computing performance.

The OSSC executive committee is the permanent body of the SPEC OSG (short for Open System Group, the earliest and largest committee established by SPEC). It is responsible for supervising and reviewing the daily work of OSG's major technical groups, handling major issues, admitting and removing members, and setting the direction of research and decisions on testing standards. The OSSC executive committee also manages the development and maintenance of SPEC CPU, SPEC Power, SPEC Java, SPEC Virt and other benchmarks.

Machine Learning is an important direction in AI development. Different computing accelerator technologies such as GPUs, FPGAs and ASICs, and different AI frameworks such as TensorFlow and PyTorch, provide customers with a rich marketplace of options. The next important question for customers is how to evaluate the computing efficiency of the various AI computing platforms. Both enterprises and research institutions require a set of benchmarks and methods to measure performance effectively and find the right solution for their needs.

In the past year, Inspur has done much to advance development of the SPEC ML standard's specific components, contributing test models, architectures, use cases, methods and more, all of which have been duly acknowledged by the SPEC organization and its members.

Joe Qiao, General Manager of Inspur Solution and Evaluation Department, believes that SPEC ML can provide an objective comparison standard for AI/ML applications, which will help users choose a computing system that best meets their application needs. Meanwhile, it also provides a unified measurement standard for manufacturers to improve their technologies and solution capabilities, advancing the development of the AI industry.

About Inspur

Inspur is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world's top 3 server manufacturers. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas like open computing, cloud data center, AI and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please go to http://www.inspursystems.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200221005123/en/

Contacts

Media: Fiona Liu, Liuxuan01@inspur.com

Read more:
Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning - Yahoo Finance

Posted in Machine Learning | Comments Off on Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning – Yahoo Finance

Artificial Intelligence and Machine Learning in the Operating Room – 24/7 Wall St.

Most applications of artificial intelligence (AI) and machine learning technology provide only data to physicians, leaving the doctors to form a judgment on how to proceed. Because AI doesn't actually perform any procedure or prescribe a course of medication, the software that diagnoses health problems does not have to pass a randomized clinical trial as do devices such as insulin pumps or new medications.

A new study published Monday at JAMA Network discusses a trial including 68 patients undergoing elective noncardiac surgery under general anesthesia. The object of the trial was to determine if a predictive early warning system for possible hypotension (low blood pressure) during the surgery might reduce the time-weighted average of hypotension episodes during the surgery.

In other words, not only would the device and its software keep track of the patient's mean blood pressure, but it would sound an alarm if there was an 85% or greater risk of the patient's blood pressure falling below 65 mm of mercury (Hg) in the next 15 minutes. The device also encouraged the anesthesiologist to take preemptive action.

Patients in the control group were connected to the same AI device and software, but only routine pulse and blood pressure data were displayed. That means that the anesthesiologist had no early warning about a hypotension event and could take no action to prevent the event.

Among patients fully connected to the device and software, the median time-weighted average of hypotension was 0.1 mm Hg, compared to an average of 0.44 mm Hg in the control group. In the control group, the median time of hypotension per patient was 32.7 minutes, while it was just 8.0 minutes among the other patients. Most important, perhaps, two patients in the control group died from serious adverse events, while no patients connected to the AI device and software died.
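One plausible way to compute such a time-weighted average, shown here only as a sketch since the trial's exact definition may differ, is to integrate how far the pressure falls below the 65 mm Hg threshold and divide by the total monitoring time:

```python
# Sketch of a time-weighted average of hypotension (hedged; the trial's exact
# definition may differ): depth below the 65 mm Hg threshold, integrated over
# time and divided by the total duration of the case.
def time_weighted_hypotension(map_readings_mmHg, interval_min, threshold=65.0):
    """map_readings_mmHg: mean arterial pressure samples taken every interval_min."""
    total_time = interval_min * len(map_readings_mmHg)
    depth_area = sum(
        max(threshold - reading, 0.0) * interval_min for reading in map_readings_mmHg
    )
    return depth_area / total_time   # mm Hg, averaged over the whole case

# Hypothetical case: mostly normotensive, with one brief dip below 65 mm Hg.
readings = [78, 74, 70, 63, 61, 68, 75, 80]
print(round(time_weighted_hypotension(readings, interval_min=5), 2))
```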

The algorithm used by the device was developed by different researchers who had trained the software on thousands of waveform features to identify a possible hypotension event 15 minutes before it occurs during surgery. The devices used were a Flotrac IQ sensor with the early warning software installed and a HemoSphere monitor. The devices are made by Edwards Lifesciences, and Edwards also had five of eight researchers among the developers of the algorithm. The study itself was conducted in the Netherlands at Amsterdam University Medical Centers.

In an editorial at JAMA Network, associate editor Derek Angus wrote:

The final model predicts the likelihood of future hypotension via measurement of multiple variables characterizing dynamic interactions between left ventricular contractility, preload, and afterload. Although clinicians can look at arterial pulse pressure waveforms and, in combination with other patient features, make educated guesses about the possibility of upcoming episodes of hypotension, the likelihood is high that an AI algorithm could make more accurate predictions.

Among the past decade's biggest health news stories were the development of immunotherapies for cancer and a treatment for cystic fibrosis. AI is off to a good start in the new decade.

By Paul Ausick

View original post here:
Artificial Intelligence and Machine Learning in the Operating Room - 24/7 Wall St.

Posted in Machine Learning | Comments Off on Artificial Intelligence and Machine Learning in the Operating Room – 24/7 Wall St.

How businesses and governments should embrace AI and Machine Learning – TechCabal

Leadership team of credit-as-a-service startup Migo, one of a growing number of businesses using AI to create consumer-facing products.

The ability to make good decisions is literally the reason people trust you with responsibilities. Whether you work for a government or lead a team at a private company, your decision-making process will affect lives in very real ways.

Organisations often make poor decisions because they fail to learn from the past. Wherever there is reluctance to collect data, there is a fair chance that mistakes will be repeated. Bad policy goals are often a consequence of faulty evidentiary support: a failure to look ahead caused by not sufficiently looking back.

But as Daniel Kahneman, author of Thinking, Fast and Slow, says: "The idea that the future is unpredictable is undermined every day by the ease with which the past is explained."

If governments and business leaders are to live up to their responsibilities, enthusiastically embracing methodical decision-making tools should be a no-brainer.

Mass media representations project artificial intelligence in futuristic, geeky terms. But nothing could be further from the truth.

While it is indeed scientific, AI can be applied in practical everyday life today. Basic interactions with AI include algorithms that recommend articles to you, friend suggestions on social media and smart voice assistants like Alexa and Siri.

In the same way, government agencies can integrate AI into regular processes necessary for society to function properly.

Managing money is an easy example to begin with. AI systems can be used to streamline data points required during budget preparations and other fiscal processes. Based on data collected from previous fiscal cycles, government agencies could reasonably forecast needs and expectations for future years.

With their large troves of citizen data, governments could employ AI to effectively reduce inequalities in outcomes and opportunities. Big Data gives a bird's-eye view of the population, providing adequate tools for equitably distributing essential infrastructure.

Perhaps a more futuristic example is in drafting legislation. Though a young discipline, legimatics includes the use of artificial intelligence in legal and legislative problem-solving.

Democracies like Nigeria consider public input a crucial aspect of desirable law-making. While AI cannot yet be relied on to draft legislation without human involvement, an AI-based approach can produce tools for specific parts of legislative drafting or decision support systems for the application of legislation.

In Africa, businesses are already ahead of most governments in AI adoption. Credit scoring based on customer data has become popular in the digital lending space.

However, there is more for businesses to explore with the predictive powers of AI. A particularly exciting prospect is the potential for new discoveries based on unstructured data.

Machine learning could broadly be split into two sections: supervised and unsupervised learning. With supervised learning, a data analyst sets goals based on the labels and known classifications of the dataset. The resulting insights are useful but do not produce the sort of new knowledge that comes from unsupervised learning processes.
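The distinction can be shown in a few lines: the sketch below, using invented two-dimensional data, fits a supervised classifier to provided labels and then lets an unsupervised clustering algorithm group the same points without any labels.

```python
# Small illustration of the split described above: supervised learning fits
# known labels, while unsupervised learning finds structure without them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])

# Supervised: the analyst supplies labels and the model learns the mapping.
y = np.array([0] * 100 + [1] * 100)
classifier = LogisticRegression().fit(X, y)

# Unsupervised: no labels; the algorithm groups the data on its own, which is
# where previously unknown segments or patterns can surface.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(classifier.score(X, y), np.bincount(clusters))
```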

In essence, AI can be a medium for market-creating innovations based on previously unknown insight buried in massive caches of data.

Digital lending became a market opportunity in Africa thanks to growing smartphone availability. However, customer data had to be available too for algorithms to do their magic.

This is why it is desirable for more data-sharing systems to be normalised on the continent to generate new consumer products. Fintech sandboxes that bring the public and private sectors together, aiming to achieve open data standards, should therefore be encouraged.

Artificial intelligence, like other technologies, is neutral. It can be used for social good but also can be diverted for malicious purposes. For both governments and businesses, there must be circumspection and a commitment to use AI responsibly.

China is a cautionary tale. The Communist state currently employs an all-watching system of cameras to enforce round-the-clock citizen surveillance.

By algorithmically rating citizens on a so-called social credit score, China's ultra-invasive AI effectively precludes individual freedom, compelling its 1.3 billion people to live strictly by the Politburo's ideas of ideal citizenship.

On the other hand, businesses must be ethical in providing transparency to customers about how data is harvested to create products. At the core of all exchange must be trust, and a verifiable, measurable commitment to do no harm.

Doing otherwise condemns modern society to those dystopian days everybody dreads.

How can businesses and governments use Artificial Intelligence to find solutions to challenges facing the continent? Join entrepreneurs, innovators, investors and policymakers in Africa's AI community at TechCabal's emerging tech townhall. At the event, stakeholders including telcos and financial institutions will examine how businesses, individuals and countries across the continent can maximize the benefits of emerging technologies, specifically AI and Blockchain. Learn more about the event and get tickets here.

Continue reading here:
How businesses and governments should embrace AI and Machine Learning - TechCabal

Posted in Machine Learning | Comments Off on How businesses and governments should embrace AI and Machine Learning – TechCabal