
Category Archives: Machine Learning

Machine-learning, robotics and biology to deliver drug discovery of tomorrow – pharmaphorum

Biology 2.0: Combining machine-learning, robotics and biology to deliver drug discovery of tomorrow

Intelligent OMICS, Arctoris and Medicines Discovery Catapult test in silico pipeline for identifying new molecules for cancer treatment.

Medicines discovery innovators, Intelligent OMICS, supported by Arctoris and Medicines Discovery Catapult, are applying artificial intelligence to find new disease drivers and candidate drugs for lung cancer. This collaboration, backed by Innovate UK, will de-risk future R&D projects and also demonstrate new cost and time-saving approaches to drug discovery.

Analysing a broad set of existing biological information, previously hidden components of disease biology can be identified which in turn lead to the identification of new drugs for development. This provides the catalyst for an AI-driven acceleration in drug discovery and the team has just won a significant Innovate UK grant in order to prove that it works.

Intelligent OMICS, the company leading the project, uses in silico (computer-based) tools to find alternative druggable targets. Having already completed a successful analysis of cellular signalling elsewhere in lung cancer, the team is now selectively targeting the KRAS signalling pathway.

As Intelligent OMICS technology identifies novel biological mechanisms, Medicines Discovery Catapult will explore the appropriate chemical tools and leads that can be used against these new targets, and Arctoris will use their automated drug discovery platform in Oxford to conduct the biological assays which will validate them experimentally.

Working together, the group will provide druggable chemistry against the entire in silico pipeline, offering new benchmarks of cost and time effectiveness over conventional methods of discovery.

"Much has been written about the wonders of artificial intelligence and its potential in healthcare," says Dr Simon Haworth, CEO of Intelligent OMICS. "Our newsflows are full of details of AI applications in process automation, image analysis and computational chemistry. The DeepMind protein-folding breakthrough has also hit the headlines recently as a further AI application. But what does Intelligent OMICS do that is different?"

"By analysing transcriptomic and similar molecular data, our neural network algorithms re-model known pathways and identify new, important targets. This enables us to develop and own a broad stream of new drugs. Lung cancer is just the start: we have parallel programs running in many other areas of cancer, in infectious diseases, in autoimmune disease, in Alzheimer's and elsewhere."

"We have to thank Innovate UK for backing this important work. The independent validation of our methodology by the highly respected cheminformatics team at MDC, coupled with the extraordinarily rapid wet-lab validation provided by Arctoris, will finally prove that, in drug discovery, the era of AI has arrived."

Dr Martin-Immanuel Bittner, Chief Executive Officer of Arctoris commented:

"We are thrilled to combine our strengths in robotics-powered drug discovery assay development and execution with the expertise in machine learning that Intelligent OMICS and Medicines Discovery Catapult possess. This unique setup demonstrates the next stage in drug discovery evolution, which is based on high-quality datasets and machine intelligence. Together, we will be able to rapidly identify and validate novel targets, leading to promising new drug discovery programmes that will ultimately benefit patients worldwide."

Prof. John P. Overington, Chief Informatics Officer at Medicines Discovery Catapult:

"Computational approaches allow us to explore a top-down route to identifying novel biological mechanisms of disease, which critically can be validated by selecting the most appropriate chemical modulators and assessing their effects in cellular assay technologies."

"Working with Intelligent OMICS, and with support from Arctoris, we are delighted to play our part in laying the groundwork for computer-augmented, automated drug discovery. Should these methods indeed prove fruitful, it will be transformative for both our industry and patients alike."

If this validation is successful, the partners will have established a unique pipeline of promising new targets and compounds for a specific pathway in lung cancer. But more than that, they will also have validated an entirely new drug discovery approach which can then be further scaled to other pathways and diseases.

Follow this link:
Machine-learning, robotics and biology to deliver drug discovery of tomorrow - - pharmaphorum

Posted in Machine Learning | Comments Off on Machine-learning, robotics and biology to deliver drug discovery of tomorrow – – pharmaphorum

How This CEO is Using Synthetic Data to Reshape Machine Learning for Real-World Applications – Yahoo Finance

Artificial intelligence (AI) and machine learning (ML) are certainly not new fields. The term "machine learning" was introduced as early as the 1950s by IBM AI pioneer Arthur Samuel, but it is only in recent years that AI and ML have seen significant growth. IDC, for one, estimated the AI market at $156.5 billion in 2020, up 12.3 percent over 2019. Even amid global economic uncertainty, this market is set to grow to $300 billion by 2024, a compound annual growth rate of 17.1 percent.

There are challenges to be overcome, however, as AI becomes increasingly interwoven into real-world applications and industries. While AI has seen meaningful use in behavioral analysis and marketing, for instance, it is also seeing growth in many business processes.

"The role of AI Applications in enterprises is rapidly evolving. It is transforming how your customers buy, your suppliers deliver, and your competitors compete. AI applications continue to be at the forefront of digital transformation (DX) initiatives, driving both innovation and improvement to business operations," said Ritu Jyoti, program vice president, Artificial Intelligence Research at IDC.

Even with the increasing use of sensors and the internet of things, there is only so much that machines can learn from real-world environments; the limitations come in the form of cost and replicable scenarios. Here's where synthetic data will play a big part.


"We need to teach algorithms what it is exactly that we want them to look for, and that's where ML comes in. Without getting too technical, algorithms need a training process, where they go through incredible amounts of annotated data, data that has been marked with different identifiers. And this is, finally, where synthetic data comes in," says Dor Herman, Co-Founder and Chief Executive Officer of OneView, a Tel Aviv-based startup that accelerates ML training with the use of synthetic data.


Herman says that real-world data can oftentimes be either inaccessible or too expensive to use for training AI. Thus, synthetic data can be generated with built-in annotations in order to accelerate the training process and make it more efficient. He cites four distinct advantages of using synthetic data over real-world data in ML: cost, scale, customization, and the ability to train AI on scenarios that are unlikely to occur in the real world.

"You can create synthetic data for everything, for any use case, which brings us to the most important advantage of synthetic data: its ability to provide training data for even the rarest occurrences that by their nature don't have real coverage."

Herman gives the example of oil spills, weapons launches, infrastructure damage, and other such catastrophic or rare events. "Synthetic data can provide the needed data, data that could not have been obtained in the real world," he says.

Herman cites a case study wherein a client needed AI to detect oil spills. Remember, algorithms need a massive amount of data in order to learn what an oil spill looks like, and the company didn't have numerous instances of oil spills, nor did it have aerial images of them.

Since the oil company utilized aerial images for ongoing inspection of its pipelines, OneView applied synthetic data instead. "We created, from scratch, aerial-like images of oil spills according to their needs, meaning, in various weather conditions, from different angles and heights, different formations of spills, where everything is customized to the type of airplanes and cameras used."

This would have been an otherwise costly endeavor. "Without synthetic data, they would never be able to put algorithms on the detection mission and would need to continue using folks to go over hours and hours of detection flights every day."

With synthetic data, users can define the parameters for training AI, enabling better decision-making once real-world scenarios occur. The OneView platform can generate data customized to their needs. An example involves training computer vision to detect certain inputs based on sensor or visual data.

"You input your desired sensor, define the environment and conditions like weather, time of day, shooting angles and so on, add any objects-of-interest, and our platform generates your data: fully annotated, ready for machine learning model training," says Herman.

Annotation is another advantage over real-world data, which often requires manual labelling that costs extensive time and money. "The swift and automated process that produces hundreds of thousands of images replaces a manual, prolonged, cumbersome and error-prone process that hinders computer vision ML algorithms from racing forward," he adds.

OneView's synthetic data generation involves a six-layer process wherein 3D models are created using gaming engines and then flattened to create 2D images.

"We start with the layout of the scene, so to speak, where the basic elements of the environment are laid out. The next step is the placement of the objects-of-interest that are the goal of detection, the objects that the algorithms will be trained to discover. We also put in distractors, objects that are similar, so the algorithms can learn how to differentiate the goal object from similar-looking objects. Then the appearance-building stage follows, when colors, textures, random erosions, noise, and other detailed visual elements are added to mimic how real images look, with all their imperfections," Herman shares.

The fourth step involves the application of conditions such as weather and time of day. For the fifth step, sensor parameters (the camera lens type) are implemented, "meaning, we adapt the entire image to look like it was taken by a specific remote sensing system, resolution-wise, and the other unique technical attributes each system has." Lastly, annotations are added.

Annotations are the marks that are used to define to the algorithm what it is looking at. For example, the algorithm can be trained that this is a car, this is a truck, this is an airplane, and so on. The resulting synthetic datasets are ready for machine learning model training.
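The six stages Herman describes can be pictured as a data-generation loop. Below is a minimal, illustrative Python sketch of that flow; it is not OneView's platform, and every function and field name here is invented for illustration. The key property it demonstrates is the last stage: because the targets are placed programmatically, the annotations come for free.

```python
import random

def generate_synthetic_sample(seed=0):
    """Sketch of the six-stage pipeline: layout, objects + distractors,
    appearance, conditions, sensor model, annotations. All names invented."""
    rng = random.Random(seed)
    sample = {"layers": []}

    # 1. Scene layout: the basic elements of the environment.
    sample["layers"].append({"stage": "layout",
                             "terrain": rng.choice(["desert", "coast", "urban"])})

    # 2. Objects-of-interest plus distractors the model must tell apart.
    targets = [{"cls": "truck", "bbox": (rng.randrange(100), rng.randrange(100), 20, 10)}]
    distractors = [{"cls": "car", "bbox": (rng.randrange(100), rng.randrange(100), 12, 6)}]
    sample["layers"].append({"stage": "objects",
                             "targets": targets, "distractors": distractors})

    # 3. Appearance: textures, erosion, noise to mimic real imperfections.
    sample["layers"].append({"stage": "appearance",
                             "noise_level": rng.uniform(0.0, 0.1)})

    # 4. Conditions: weather and time of day.
    sample["layers"].append({"stage": "conditions",
                             "weather": rng.choice(["clear", "fog", "rain"]),
                             "hour": rng.randrange(24)})

    # 5. Sensor model: adapt the image to a specific camera and resolution.
    sample["layers"].append({"stage": "sensor",
                             "resolution_cm": rng.choice([15, 30, 50])})

    # 6. Annotations: every placed target is labelled by construction,
    #    so no manual annotation pass is needed.
    sample["annotations"] = [{"label": t["cls"], "bbox": t["bbox"]} for t in targets]
    return sample
```

A real pipeline would render pixels with a game engine at each stage; the point of the sketch is only the ordering of the stages and the fact that labels fall out of the generation process itself.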

For Herman, the biggest contribution of synthetic data is actually paradoxical: by using synthetic data, AI and AI users get a better understanding of the real world and how it works, through machine learning. Image analytics comes with processing bottlenecks, and computer vision algorithms cannot scale unless those bottlenecks are overcome.

"Remote sensing data (imagery captured by satellites, airplanes and drones) provides a unique channel to uncover valuable insights on a very large scale for a wide spectrum of industries. In order to do that, you need computer vision AI as a way to study these vast amounts of data collected and return intelligence," Herman explains.

"Next, this intelligence is transformed into insights that help us better understand this planet we live on, and of course drive decision-making, whether by governments or businesses. The massive growth in computing power enabled the flourishing of AI in recent years, but the collection and preparation of data for computer vision machine learning is the fundamental factor that holds back AI."

He circles back to how OneView intends to reshape machine learning: releasing this bottleneck with synthetic data so the full potential of remote sensing imagery analytics can be realized, and with it a better understanding of Earth.

The main driver behind Artificial Intelligence and Machine Learning is, of course, business and economic value. Countries, enterprises, businesses, and other stakeholders benefit from the advantages that AI offers, in terms of decision-making, process improvement, and innovation.

"The big message OneView brings is that we enable a better understanding of our planet through the empowerment of computer vision," concludes Herman. "Synthetic data is not fake data. Rather, it is purpose-built inputs that enable faster, more efficient, more targeted, and cost-effective machine learning that will be responsive to the needs of real-world decision-making processes."

Continue reading here:
How This CEO is Using Synthetic Data to Reshape Machine Learning for Real-World Applications - Yahoo Finance


Unlock Insights From Business Documents With Revv’s Metalens, a Machine Learning Based Document Analyzer – Business Wire

PALO ALTO, Calif.--(BUSINESS WIRE)--Businesses run on documents, and documents help build connections. They cement relationships and enable trust and transparency between stakeholders. Documents bring certainty, continuity, and clarity. When it comes to reviewing documents, most intelligence platforms perceive them only as language content. But a business document is not just written text; it is a record of information and data, from simple entities such as names or addresses to more nuanced ones such as notice periods or renewal dates, and this information is required to optimize workflows and processes. Revv recently added Metalens, an intelligent document analyzer that breaks this barrier and applies artificial intelligence to extract data and intent from business documents to scale up business processes.

Metalens allows users to extract relevant information and identify potential discussion points from any document (PDF or DOCX) within Revv. This extracted data can be reused to set up workflows, feed downstream business apps with relevant information, and optimize business processes. Think itinerary processing, financial compliance, auditing, renewal follow-up, invoice processing, and so on, all identified and automated. The feature improves process automation, which is otherwise riddled with copy-paste errors and other manual data entry bottlenecks.
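To make "turning documents into datasets" concrete, here is a deliberately simple sketch of the idea: pulling a renewal date and a fee out of contract text so they can feed a downstream workflow. The regex approach and field names are invented for illustration; a system like Metalens would rely on trained ML models rather than hand-written patterns.

```python
import re

def extract_fields(text):
    # Hypothetical field extractor: pulls ISO-format dates and dollar
    # amounts so they can drive renewal reminders or invoice checks.
    dates = re.findall(r"\b\d{4}-\d{2}-\d{2}\b", text)
    amounts = [float(a.replace(",", ""))
               for a in re.findall(r"\$([\d,]+(?:\.\d{2})?)", text)]
    return {"renewal_dates": dates, "amounts": amounts}

doc = "This agreement renews on 2021-06-30 for an annual fee of $12,500.00."
fields = extract_fields(doc)
```

The ML advantage the article describes is precisely that such extraction keeps working when formats and locations of the data change, which brittle patterns like these would not.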

Rishi Kulkarni, the co-founder, adds, "Revv's Metalens feature is fast, efficient, and a powerful element that sifts through the content and turns your documents into datasets. This unlocks new insights that allow our users to empower themselves and align their businesses for growth."

Metalens is another aspect of Revv's intelligence layer, used to understand document structure and compare and review contracts against current industry standards. Businesses can identify their risk profile and footprint in half the time, with half the resources. It helps to get a grip on the intent of business documents and ensure your business objectives are met.

With Metalens, users can -

Excited about this new feature, Sameer Goel, co-founder, adds, "The impact of this intelligent layer is clear and immediate, as it is able to process complex documents with legalese and endless text that's easy to miss. It can process unstructured and structured document data even when datasets' formats and locations change over time. This machine learning approach provides users with an alternative solution that allows them to circumvent their dependence on intimately knowing the document to extract information from it."

Revv's new Metalens feature gives its users the speed and flexibility to generate meaningful insights and accelerate business outcomes by putting machine learning front and center. It quickens the review process and makes negotiation smoother. It brings transparency that helps reduce errors and lets users save time and effort.

Metalens is part of Revv's larger offering designed to simplify business paperwork. Revv is an all-in-one document platform that brings together the power of eSignature, an exhaustive template library, a drag-and-drop editor, payments and Gsheet integrations, and API connections. Specially designed for owner-operators, consultants, agencies, and service providers who want a simple no-code tool to manage their business paperwork, Revv gives them the ability to draft, edit, share online, eSign, collect payments, and centrally store documents with one tool.

About Revv:

Backed by Lightspeed, Matrix Partners, and Arka Ventures, Revv was founded by Freshworks alumni Rishi Kulkarni and Sameer Goel in 2018. With operations in Silicon Valley and Bangalore, India, Revv is designed as a document management system for entrepreneurs. More than 3,000 businesses now trust the platform, and it is poised for even greater growth with features like attaching supporting media/doc files, multi-language support, bulk creation of documents, and user groups.

Continued here:
Unlock Insights From Business Documents With Revv's Metalens, a Machine Learning Based Document Analyzer - Business Wire


Embedded AI and Machine Learning Adding New Advancements In Tech Space – Analytics Insight

Over recent years, as sensor and MCU costs have plunged and shipped volumes have gone through the roof, an ever-increasing number of organizations have tried to capitalize by adding sensor-driven embedded AI to their products.

Automotive is driving the trend: the average non-autonomous vehicle now has 100 sensors, sending information to 30-50 microcontrollers that run about 1m lines of code and create 1TB of data per vehicle every day. Luxury vehicles may have twice as many, and autonomous vehicles increase the sensor count far more dramatically.

Yet it's not simply an automotive trend. Industrial equipment is becoming progressively smarter as makers of rotating, reciprocating and other types of equipment rush to add functionality for condition monitoring and predictive maintenance, and a huge number of new consumer products, from toothbrushes to vacuum cleaners to fitness monitors, add instrumentation and smarts.

An ever-increasing number of smart devices are introduced each month. We are now at a point where artificial intelligence and machine learning, in a very basic form, have found their way into the core of embedded devices. Take, for example, smart home lighting systems that automatically turn on and off depending on whether anybody is present in the room. On the surface, the system doesn't look especially sophisticated. Yet when you think about it, you realize that the system is actually making decisions on its own: based on the input from the sensor, the microcontroller/SoC decides whether or not to turn on the light.

To do all of this simultaneously, overcoming variation to achieve difficult detections in real time, at the edge, within the necessary constraints, is not at all simple. But with current tools integrating new options for machine learning on signals (like Reality AI), it is getting easier.

Such tools can regularly achieve detections that escape traditional engineering models. They do this by making significantly more productive and effective use of data to overcome variation. Where traditional engineering approaches are ordinarily founded on a physical model, using data to estimate its parameters, machine learning approaches can learn independently of those models. They learn to recognize signatures directly from the raw data and use the mechanics of machine learning (mathematics) to separate targets from non-targets without depending on physics.
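The contrast with physics-based models can be sketched in a few lines: a classifier that learns "signatures" (here, crude window statistics) from labelled raw signal windows, with no physical model of the equipment anywhere. This is a minimal nearest-centroid toy on synthetic vibration-like data, invented for illustration; it stands in for the far richer feature learning that tools like Reality AI perform.

```python
import random

def features(window):
    # Crude stand-ins for learned signatures: mean and energy of the window.
    n = len(window)
    mean = sum(window) / n
    energy = sum(x * x for x in window) / n
    return (mean, energy)

def train_centroids(labelled_windows):
    # Average the feature vectors per label; no physical model is involved.
    sums = {}
    for label, window in labelled_windows:
        m, e = features(window)
        s = sums.setdefault(label, [0.0, 0.0, 0])
        s[0] += m; s[1] += e; s[2] += 1
    return {lab: (s[0] / s[2], s[1] / s[2]) for lab, s in sums.items()}

def classify(centroids, window):
    # Assign the window to the nearest centroid in feature space.
    m, e = features(window)
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - m) ** 2
                             + (centroids[lab][1] - e) ** 2)

# Synthetic condition-monitoring data: "faulty" windows carry more energy.
rng = random.Random(42)
train = [("healthy", [rng.gauss(0, 0.1) for _ in range(64)]) for _ in range(20)]
train += [("faulty", [rng.gauss(0, 1.0) for _ in range(64)]) for _ in range(20)]
centroids = train_centroids(train)
```

The separation here comes entirely from the data: the classifier never encodes what a bearing or motor physically does, which is the point the paragraph above makes about learning signatures directly from raw signals.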

There are plenty of other areas where the convergence of machine learning and embedded systems will open up great opportunities. Healthcare, for example, is already reaping the rewards of investing in AI technology. The Internet of Things (IoT) will likewise profit enormously from the introduction of artificial intelligence. We will have smart automation solutions that deliver energy savings and cost efficiency as well as the elimination of human error.

Forecasting is at the center of many ML/AI conversations as organizations look to use neural networks and deep learning to forecast time-series data. The value is the capacity to ingest information and quickly gain insight into how it changes the long-term outlook. Further, much of the picture depends on the global supply chain, which makes forecasts significantly harder to project precisely.

Some of the most dangerous jobs on production lines are already handled by machines. Thanks to advances in embedded electronics and industrial automation, we have powerful microcontrollers running entire production systems in manufacturing plants. However, the majority of these machines are not quite fully automatic and still require some form of human intervention. In time, though, the introduction of machine learning will help engineers build truly intelligent machines that can work with zero human intervention.


Read more:
Embedded AI and Machine Learning Adding New Advancements In Tech Space - Analytics Insight


AI: This COVID machine-learning tool helps swamped hospitals pick the right treatment – ZDNet

Spain has been one of the European states worst hit by the COVID-19 pandemic, with more than 1.7 million detected cases. Despite the second wave of infections that has hit the country over the past few months, the Hospital Clínic in Barcelona has succeeded in halving mortality among its coronavirus patients using artificial intelligence.

The Catalan hospital has developed a machine-learning tool that can predict when a COVID patient will deteriorate and how to customize that individual's treatment to avoid the worst outcome.

"When you have a sole patient who's in a critical state, you can take special care of them. But when they are 700 of them, you need this kind of tool," says Carol Garcia-Vidal, a physician specialized in infectious diseases and IDIBAPS researcher who has led the development of the tool.


Before the pandemic, the hospital had already been working on software to turn variable data into an analyzable form. So when the hospital started to receive COVID patients in March, it put the system to work analyzing three trillion pieces of structured and anonymized data from 2,000 patients.

The goal was to train it to recognize patterns and check what treatments were the most effective for each patient and when they should be administered.

That work underlined to García-Vidal and her team that the virus doesn't manifest itself in the same way for everyone. "There are patients with an inflammatory response, patients with coagulopathies and patients who develop superinfections," García-Vidal tells ZDNet. Each group needs different drugs and thus a personalized treatment.

Thanks to an EIT Health grant, the AI system has been developed into a real-time dashboard display on physicians' computers that has become one of their everyday tools. Under the supervision of an epidemiologist, the tool enables patients to be classified and offered a more personalized treatment.

"Nobody has done this before," says García-Vidal, who notes that the researchers recently added two more patterns to the system: patients who are stable and can leave the hospital, thus freeing a bed, and patients who are more likely to die. The predictions are 90% accurate.
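The classification the dashboard performs could, in its simplest imaginable form, look something like the toy sketch below. To be clear, the article does not disclose the model's inputs or logic; the lab markers, units, and thresholds here are entirely invented for illustration and are in no way clinical guidance. The real tool is a trained machine-learning model, not hand-written rules.

```python
def classify_covid_pattern(labs):
    """Toy triage sketch mapping invented lab values to the patient
    patterns named in the article. Illustrative only; not medical advice."""
    if labs["d_dimer_ng_ml"] > 1000:          # hypothetical coagulation marker
        return "coagulopathy"
    if labs["crp_mg_l"] > 100 and labs["il6_pg_ml"] > 40:  # hypothetical inflammation markers
        return "inflammatory"
    if labs["procalcitonin_ng_ml"] > 0.5:     # hypothetical infection marker
        return "superinfection"
    return "stable"
```

The value of the real system is that such groupings, learned from thousands of patients rather than fixed thresholds, can be surfaced on a dashboard for physicians working outside their specialty.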

"It's very useful for physicians with less experience and those who have a specialty that's nothing to do with COVID, such as gynecologists or traumatologists," she says. As in many countries, doctors from all specialist areas were called in to treat patients during the first wave of the pandemic.

The system is also being used during the current second wave because, according to García-Vidal, the number of patients in intensive care in Catalan hospitals has jumped. The plan is to make the tool available to other hospitals.

Meanwhile, the Barcelona Supercomputing Center (BSC) is also analyzing a set of data corresponding to 3,000 medical cases generated by the Hospital Clínic during the acute phase of the pandemic in March.

The aim is to develop a model based on deep-learning neural networks that will look for common patterns and generate predictions on the evolution of symptoms. The objective is to know whether a patient is likely to need a ventilator system or be directly sent to intensive care.


Some data, such as age, sex, vital signs and medication given, is structured, but other data isn't, because it consists of text written in natural language in the form of, for example, hospital discharge and radiology reports, BSC researcher Marta Villegas explains.

Supercomputing brings the computational capacity and power to extract essential information from these reports and train models based on neural networks to predict the evolution of the disease as well as the response to treatments given the previous conditions of the patients.

This approach, based on natural language processing, is also being tested at a hospital in Madrid.

Go here to see the original:
AI: This COVID machine-learning tool helps swamped hospitals pick the right treatment - ZDNet


Hateful Memes Challenge Winners Machine Learning Times – The Predictive Analytics Times

By: Douwe Kiela, Hamed Firooz and Tony Nelli Originally published in Facebook AI, Dec 11, 2020.

AI has made progress in detecting hate speech, but important and difficult technical challenges remain. Back in May 2020, Facebook AI partnered with Getty Images and DrivenData to launch the Hateful Memes Challenge, a first-of-its-kind $100K competition and data set to accelerate research on the problem of detecting hate speech that combines images and text. As part of the challenge, Facebook AI created a unique data set of 10,000+ new multimodal examples, using licensed images from Getty Images so that researchers could easily use them in their work.

More than 3,300 participants from around the world entered the Hateful Memes Challenge, and we are now sharing details on the winning entries. The top-performing teams were:

Ron Zhu link to code

Niklas Muennighoff link to code

Team HateDetectron: Riza Velioglu and Jewgeni Rose link to code

Team Kingsterdam: Phillip Lippe, Nithin Holla, Shantanu Chandra, Santhosh Rajamanickam, Georgios Antoniou, Ekaterina Shutova and Helen Yannakoudakis link to code

Vlad Sandulescu link to code

You can see the full leaderboard here. As part of the NeurIPS 2020 competition track, the top five winners discussed their solutions, and we facilitated a Q&A with participants from around the world. Each of these five implementations has been made open source and is available now.

To continue reading this article, click here.

View post:
Hateful Memes Challenge Winners Machine Learning Times - The Predictive Analytics Times
