

Category Archives: Machine Learning

This Smart Doorbell Responds to Meowing Cats Using Machine Learning and IoT – Hackster.io

Those who own an outdoor cat, or even several, might run into the occasional problem of having to let them back in. Tired of constantly watching for when his cat wanted to come inside the house, GitHub user gamename opted for a more automated system.

The solution gamename came up with involves listening to ambient sounds with a single Raspberry Pi and an attached USB microphone. Whenever the locally-running machine learning model detects a meow, it sends a message to an AWS service over the internet where it can then trigger a text to be sent. This has the advantage of limiting false events while simultaneously providing an easy way for the cat to be recognized at the door.

This project started with installing the AWS command-line interface (CLI) onto the Raspberry Pi 4 and signing in with an account. From here, gamename registered a new IoT device, downloaded the resulting configuration files, and ran the setup script. After quickly updating some security settings, he created a new function that waits for messages from the MQTT service and sends a text message via the SNS service.
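The forwarding step can be pictured as a small function in the style of an AWS Lambda handler: an IoT rule matches the MQTT topic and invokes the handler, which publishes a text via SNS. This is only a sketch of the pattern, not gamename's actual code; the topic ARN and message wording are hypothetical placeholders.

```python
import json

def build_sns_message(mqtt_payload: bytes) -> dict:
    """Turn a raw MQTT payload into keyword arguments for sns.publish().
    The topic ARN and wording below are illustrative placeholders."""
    event = json.loads(mqtt_payload)
    return {
        "TopicArn": "arn:aws:sns:us-east-1:123456789012:cat-doorbell",  # hypothetical
        "Message": "Meow detected at {}".format(event.get("timestamp", "unknown time")),
    }

def handler(event, context):
    """Entry point invoked by an AWS IoT rule on each matching MQTT message."""
    import boto3  # imported lazily so the helper above stays testable offline
    sns = boto3.client("sns")
    sns.publish(**build_sns_message(json.dumps(event).encode()))
```

Keeping the message-building logic separate from the `boto3` call makes the forwarding rule easy to test without AWS credentials.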

With this collection of AWS services configured, gamename moved on to testing whether messages are sent at the right time. His test script simply emulates a positive result by sending the certificates, key, topic, and message to the endpoint, after which the user can watch the text appear on their phone a bit later.
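A test along those lines might look as follows, assuming the paho-mqtt client library; the endpoint, topic name, and certificate file names are invented for illustration, and the actual script in the repository may differ.

```python
import json
import ssl

# Hypothetical endpoint and topic; substitute the values from your AWS IoT console.
ENDPOINT = "example-ats.iot.us-east-1.amazonaws.com"
TOPIC = "doorbell/meow"

def tls_params(cert="device.pem.crt", key="private.pem.key", ca="AmazonRootCA1.pem"):
    """Bundle the certificate paths that AWS IoT's MQTT broker requires."""
    return {"ca_certs": ca, "certfile": cert, "keyfile": key,
            "tls_version": ssl.PROTOCOL_TLSv1_2}

def fake_detection() -> str:
    """Payload emulating a positive classifier result."""
    return json.dumps({"label": "cat", "score": 0.97})

if __name__ == "__main__":
    import paho.mqtt.client as mqtt  # pip install paho-mqtt
    client = mqtt.Client()
    client.tls_set(**tls_params())
    client.connect(ENDPOINT, 8883)  # AWS IoT uses MQTT over TLS on port 8883
    client.publish(TOPIC, fake_detection())
    client.disconnect()
```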

The Raspberry Pi and microSD card were both placed into an off-the-shelf chassis, which sits just inside the house's entrance. The microphone was then connected using two RJ45-to-USB cables, allowing it to sit outside in a waterproof housing up to 150 feet away.

Running on the Pi is a custom bash script that starts every time the board boots up, and its role is to launch the Python program. This causes the Raspberry Pi to read samples from the microphone and pass them to a TensorFlow audio classifier, which attempts to recognize the sound clip. If the primary noise is a cat, then the AWS API is called in order to publish the message to the MQTT topic. More information about this project can be found here in gamename's GitHub repository.
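The detection step can be sketched like this. It assumes a YAMNet-style audio classifier that emits one score per sound class; the class names, threshold, and capture loop are assumptions for illustration, not taken from gamename's repository.

```python
def should_alert(class_names, scores, threshold=0.5):
    """Return True when the top-scoring audio class is a cat sound.
    The label names and threshold are illustrative guesses."""
    top = max(range(len(scores)), key=scores.__getitem__)
    return class_names[top] in ("Cat", "Meow") and scores[top] >= threshold

if __name__ == "__main__":
    # Sketch of the capture/classify loop; requires sounddevice, numpy,
    # and TensorFlow Hub's YAMNet model (all assumptions about the setup).
    import numpy as np, sounddevice as sd, tensorflow_hub as hub
    model = hub.load("https://tfhub.dev/google/yamnet/1")
    class_names = []  # populate from the model's class-map CSV before running
    while True:
        clip = sd.rec(int(0.975 * 16000), samplerate=16000, channels=1, dtype="float32")
        sd.wait()
        scores, _, _ = model(np.squeeze(clip))
        if should_alert(class_names, scores.numpy().mean(axis=0)):
            print("meow detected; publish to the MQTT topic here")
```

Averaging scores over the clip before thresholding is one way to limit false triggers from brief noises.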

Continue reading here:
This Smart Doorbell Responds to Meowing Cats Using Machine Learning and IoT - Hackster.io

Posted in Machine Learning | Comments Off on This Smart Doorbell Responds to Meowing Cats Using Machine Learning and IoT – Hackster.io

Tackling the reproducibility and driving machine learning with digitisation – Scientific Computing World

Dr Birthe Nielsen discusses the role of the Methods Database in supporting life sciences research by digitising methods data across different life science functions.

Reproducibility of experiment findings and data interoperability are two of the major barriers facing life sciences R&D today. Independently verifying findings by re-creating experiments and generating the same results is fundamental to progressing research to the next stage in its lifecycle, be it advancing a drug to clinical development, or a product to market. Yet, in the field of biology alone, one study found that 70 per cent of researchers are unable to reproduce the findings of other scientists, and 60 per cent are unable to reproduce their own findings.

This causes delays to the R&D process throughout the life sciences ecosystem. For example, biopharmaceutical companies often use external Contract Research Organisations (CROs) to conduct clinical studies. Without a centralised repository to provide consistent access, analytical methods are often shared with CROs via email or even physical documents, not in a standard format but with inconsistent terminology. This leads to unnecessary variability and several versions of the same analytical protocol, making it very challenging for a CRO to re-establish and revalidate methods without a labour-intensive process that is open to human interpretation and thus error.

To tackle issues like this, the Pistoia Alliance launched the Methods Hub project. The project aims to overcome the issue of reproducibility by digitising methods data across different life science functions, and ensuring data is FAIR (Findable, Accessible, Interoperable, Reusable) from the point of creation. This will enable seamless and secure sharing within the R&D ecosystem, reduce experiment duplication, standardise formatting to make data machine-readable, and increase reproducibility and efficiency. Robust data management is also the building block for machine learning and is the stepping-stone to realising the benefits of AI.

Digitisation of paper-based processes increases the efficiency and quality of methods data management. But it goes beyond manually keying method parameters into a computer or an Electronic Lab Notebook: a digital and automated workflow increases efficiency, instrument usage, and productivity. Applying shared data standards ensures consistency and interoperability, in addition to fast and secure transfer of information between stakeholders.

One area that organisations need to address to comply with FAIR principles, and a key area in which the Methods Hub project helps, is how analytical methods are shared. This includes replacing free-text data capture with a common data model and standardised ontologies. For example, in a High-Performance Liquid Chromatography (HPLC) experiment, rather than manually typing out the analytical parameters (pump flow, injection volume, column temperature, etc.), the scientist simply downloads a method which automatically populates the execution parameters in any given Chromatography Data System (CDS). This not only saves time during data entry; the common format also eliminates room for human interpretation or error.
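As an illustration of the idea (the real Methods Hub data model is not shown in this article, so the schema and field names below are invented), a shared method document could be mapped mechanically onto a CDS's execution parameters:

```python
import json

# Invented schema for illustration only.
method_json = """{
  "technique": "HPLC-UV",
  "parameters": {
    "pump_flow_ml_min": 1.0,
    "injection_volume_ul": 10,
    "column_temperature_c": 30,
    "detection_wavelength_nm": 254
  }
}"""

def to_cds_settings(method: str) -> dict:
    """Populate a CDS's execution parameters from a shared method document,
    replacing manual free-text entry and the transcription errors it invites."""
    params = json.loads(method)["parameters"]
    return {
        "Flow": params["pump_flow_ml_min"],
        "InjectionVolume": params["injection_volume_ul"],
        "ColumnTemp": params["column_temperature_c"],
        "Wavelength": params["detection_wavelength_nm"],
    }
```

Because the units are encoded in the field names rather than typed by hand, every system that consumes the document interprets the parameters identically.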

Additionally, creating a centralised repository like the Methods Hub in a vendor-neutral format is a step towards greater cyber-resiliency in the industry. When information is stored locally on a PC or an ELN and is not backed up, a single cyberattack can wipe it out instantly. Creating shared spaces for these notes via the cloud protects data and ensures it can be easily restored.

A proof of concept (PoC) via the Methods Hub project was recently completed to demonstrate the value of methods digitisation. The PoC involved the digital transfer via the cloud of analytical HPLC methods, proving it is possible to move analytical methods securely between two different companies and CDS vendors with ease. It was successfully tested in labs at Merck and GSK, where HPLC-UV information was transferred effectively between different systems. The PoC delivered a series of critical improvements to methods transfer that eliminated manual keying of data and reduced risk, steps, and errors, while increasing overall flexibility and interoperability.

The Alliance project team is now working to extend the platform's functionality to connect analytical methods with results data, which would be an industry first. The team will also be adding support for columns and additional hardware, and for other analytical techniques such as mass spectrometry and nuclear magnetic resonance (NMR) spectroscopy. It also plans to identify new use cases and further develop the cloud platform that enables secure methods transfer.

If industry-wide data standards and approaches to data management are to be agreed on and implemented successfully, organisations must collaborate. The Alliance recognises methods data management is a big challenge for the industry, and the aim is to make Methods Hub an integral part of the system infrastructure in every analytical lab.

Tackling issues such as digitisation of methods data doesn't just benefit individual companies; it will have a knock-on effect for the whole life sciences industry. Introducing shared standards accelerates R&D, improves quality, and reduces the cost and time burden on scientists and organisations. Ultimately this ensures that new therapies and breakthroughs reach patients sooner. We are keen to welcome new contributors to the project, so we can continue discussing common barriers to successful data management and work together to develop new solutions.

Dr Birthe Nielsen is the Pistoia Alliance Methods Database project manager

Read more:
Tackling the reproducibility and driving machine learning with digitisation - Scientific Computing World

Posted in Machine Learning | Comments Off on Tackling the reproducibility and driving machine learning with digitisation – Scientific Computing World

Artificial intelligence was supposed to transform health care. It hasn’t. – POLITICO

"Companies come in promising the world and often don't deliver," said Bob Wachter, head of the department of medicine at the University of California, San Francisco. "When I look for examples of true AI and machine learning that's really making a difference, they're pretty few and far between. It's pretty underwhelming."

Administrators say algorithms from outside companies (the software that processes data) don't always work as advertised, because each health system has its own technological framework. So hospitals are building out engineering teams and developing artificial intelligence and other technology tailored to their own needs.

But it's slow going. Research based on job postings shows health care behind every industry except construction in adopting AI.

The Food and Drug Administration has taken steps to develop a model for evaluating AI, but it is still in its early days. There are questions about how regulators can monitor algorithms as they evolve and rein in the technology's detrimental aspects, such as biases that threaten to exacerbate health care inequities.

"Sometimes there's an assumption that AI is working, and it's just a matter of adopting it, which is not necessarily true," said Florenta Teodoridis, a professor at the University of Southern California's business school whose research focuses on AI. She added that being unable to understand why an algorithm came to a certain result is fine for things like predicting the weather. But in health care, its impact is potentially life-changing.

Despite the obstacles, the tech industry is still enthusiastic about AIs potential to transform health care.

"The transition is slightly slower than I hoped but well on track for AI to be better than most radiologists at interpreting many different types of medical images by 2026," Geoffrey Hinton told POLITICO via email. He said he never suggested that we should get rid of radiologists, but that we should let AI read scans for them.

If he's right, artificial intelligence will start taking on more of the rote tasks in medicine, giving doctors more time to spend with patients to reach the right diagnosis or develop a comprehensive treatment plan.

"I see us moving as a medical community to a better understanding of what it can and cannot do," said Lara Jehi, chief research information officer for the Cleveland Clinic. "It is not going to replace radiologists, and it shouldn't replace radiologists."

Radiology is one of the most promising use cases for AI. The Mayo Clinic has a clinical trial evaluating an algorithm that aims to reduce the hours-long process oncologists and physicists undertake to map out a surgical plan for removing complicated head and neck tumors.

"An algorithm can do the job in an hour," said John D. Halamka, president of Mayo Clinic Platform: "We've taken 80 percent of the human effort out of it." The technology gives doctors a blueprint they can review and tweak without having to do the basic physics themselves, he said.

NYU Langone Health has also experimented with using AI in radiology. The health system has collaborated with Facebook's Artificial Intelligence Research group to reduce the time it takes to get an MRI from one hour to 15 minutes. Daniel Sodickson, a radiological imaging expert at NYU Langone who worked on the research, sees opportunity in AI's ability to downsize the amount of data doctors need to review.


Covid has accelerated AI's development. Throughout the pandemic, health providers and researchers shared data on the disease and anonymized patient data to crowdsource treatments.

Microsoft and Adaptive Biotechnologies, which partner on machine learning to better understand the immune system, put their technology to work on patient data to see how the virus affected the immune system.

"The amount of knowledge that's been obtained and the amount of progress has just been really exciting," said Peter Lee, corporate vice president of research and incubations at Microsoft.

There are other success stories. For example, Ochsner Health in Louisiana built an AI model for detecting early signs of sepsis, a life-threatening response to infection. To convince nurses to adopt it, the health system created a response team to monitor the technology for alerts and take action when needed.

"I'm calling it our care traffic control," said Denise Basow, chief digital officer at Ochsner Health. Since implementation, she said, deaths from sepsis have been declining.

The biggest barrier to the use of artificial intelligence in health care has to do with infrastructure.

Health systems need to enable algorithms to access patient data. Over the last several years, large, well-funded systems have invested in moving their data into the cloud, creating vast data lakes ready to be consumed by artificial intelligence. But thats not as easy for smaller players.

Another problem is that every health system is unique in its technology and the way it treats patients. That means an algorithm may not work as well everywhere.

Over the last year, an independent study on a widely used sepsis detection algorithm from EHR giant Epic showed poor results in real-world settings, suggesting where and how hospitals used the AI mattered.

This quandary has led top health systems to build out their own engineering teams and develop AI in-house.

That could create complications down the road. Unless health systems sell their technology, it's unlikely to undergo the type of vetting that commercial software would, which could allow flaws to go unfixed for longer than they might otherwise. It's not just that health systems are implementing AI while no one's looking; it's also that the stakeholders in artificial intelligence (in health care, technology, and government) haven't agreed upon standards.

A lack of quality data, which gives algorithms material to work with, is another significant barrier to rolling out the technology in health care settings.


Much data comes from electronic health records but is often siloed among health care systems, making it more difficult to gather sizable data sets. For example, a hospital may have complete data on one visit, but the rest of a patient's medical history is kept elsewhere, making it harder to draw inferences about how to proceed in caring for the patient.

"We have pieces and parts, but not the whole," said Aneesh Chopra, who served as the government's chief technology officer under former President Barack Obama and is now president of data company CareJourney.

While some health systems have invested in pulling data from a variety of sources into a single repository, not all hospitals have the resources to do that.

Health care also has strong privacy protections that limit the amount and type of data tech companies can collect, leaving the sector behind others in terms of algorithmic horsepower.

Importantly, not enough strong data on health outcomes is available, making it more difficult for providers to use AI to improve how they treat patients.

That may be changing. A recent series of studies on a sepsis algorithm included copious details on how to use the technology in practice and documented physician adoption rates. Experts have hailed the studies as a good template for how future AI studies should be conducted.

But working with health care data is also more difficult than in other sectors because it is highly individualized.

"We found that even internally across our different locations and sites, these models don't have a uniform performance," said Jehi of the Cleveland Clinic.

And the stakes are high if things go wrong. "The number of paths that patients can take are very different than the number of paths that I can take when I'm on Amazon trying to order a product," Wachter said.

Health experts also worry that algorithms could amplify bias and health care disparities.

For example, a 2019 study found that a hospital algorithm more often steered white patients than Black patients toward programs aiming to provide better care, even when controlling for level of sickness.

Last year, the FDA published a set of guidelines for using AI as a medical device, calling for the establishment of good machine learning practices, oversight of how algorithms behave in real-world scenarios and development of research methods for rooting out bias.

The agency subsequently published more specific guidelines on machine learning in radiological devices, requiring companies to outline how the technology is supposed to perform and provide evidence that it works as intended. The FDA has cleared more than 300 AI-enabled devices, largely in radiology, since 1997.

Regulating algorithms is a challenge, particularly given how quickly the technology advances. The FDA is attempting to head that off by requiring companies to institute real-time monitoring and submit plans on future changes.

But in-house AI isn't subject to FDA oversight. Bakul Patel, former head of the FDA's Center for Devices and Radiological Health and now Google's senior director for global digital health strategy and regulatory affairs, said that the FDA is thinking about how it might regulate noncommercial artificial intelligence inside of health systems, but he adds, "there's no easy answer."

The FDA has to thread the needle between taking enough action to mitigate flaws in algorithms while also not stifling AI's potential, he said.

Some argue that public-private standards for AI would help advance the technology. Groups, including the Coalition for Health AI, whose members include major health systems and universities as well as Google and Microsoft, are working on this approach.

But the standards they envision would be voluntary, which could blunt their impact if not widely adopted.

Original post:
Artificial intelligence was supposed to transform health care. It hasn't. - POLITICO

Posted in Machine Learning | Comments Off on Artificial intelligence was supposed to transform health care. It hasn’t. – POLITICO

Cellarity Releases Novel, Open-Source, Single-Cell Dataset and Invites the Machine Learning and Computational Biology Communities to Develop New…

SOMERVILLE, Mass.--(BUSINESS WIRE)--Cellarity, a life sciences company founded by Flagship Pioneering to transform the way medicines are created, announced today the release of a unique single-cell dataset to accelerate innovation in mapping multimodal genetic information across cell states and over time. This dataset will be used to power a competition hosted by Open Problems in Single-Cell Analysis.

Cells are among the most complex and dynamic systems and are regulated by the interplay of DNA, RNA, and proteins. Recent technological advances have made it possible to measure these cellular features and such data provide, for the first time, a direct and comprehensive view spanning the layers of gene regulation that drive biological systems and give rise to disease.

"Advancements in single-cell technologies now make it possible to decode genetic regulation, and we are excited to generate another first-of-its-kind dataset to support Open Problems in Single-Cell Analysis," said Fabrice Chouraqui, PharmD, CEO of Cellarity and a CEO-Partner at Flagship Pioneering. "Developing new machine learning algorithms that can predict how a single-cell genome can drive a diversity of cellular states will provide new insights into how cells and tissues move from health to disease and support informed design of new medicines."

To drive innovation for such data, Cellarity generated a time course profiling in vitro differentiation of blood progenitors, a dataset designed in collaboration with scientists at Yale University, Chan Zuckerberg Biohub, and Helmholtz Munich. This time course will be used to power a competition to develop algorithms that learn the underlying relationships between DNA, RNA, and protein modalities across time. Solving this open problem will help elucidate complex regulatory processes that are the foundation for cell differentiation in health and disease.

"While multimodal single-cell data is increasingly available, methods to analyze these data are still scarce and often treat cells as static snapshots without modeling the underlying dynamics of cell state," said Daniel Burkhardt, Ph.D., cofounder of Open Problems in Single-Cell Analysis and Machine Learning Scientist at Cellarity. "New machine learning algorithms are needed to learn the rules that govern complex cell regulatory processes so we can predict how cell state changes over time. We hope these new algorithms can augment the value of existing or future single-modality datasets, which can be cost-effectively generated at higher quality to streamline and accelerate research."

In 2021, Cellarity partnered with Open Problems collaborators to develop the first benchmark competition for multimodal single-cell data integration using a first-of-its-kind multi-omics benchmarking dataset (NeurIPS 2021). This dataset was the largest atlas of the human bone marrow measured across DNA, RNA, and proteins and was used to predict one modality from another and learn representations of multiple modalities measured in the same cells. The 2021 competition saw winning submissions from both computational biologists with deep single-cell expertise and machine learning practitioners for whom this competition marked their first foray into biology. This translation of knowledge across disciplines is expected to drive more powerful algorithms to learn fundamental rules of biology.

For 2022, Cellarity and Open Problems are extending the challenge to drive innovation in modeling temporal single-cell data measured in multiple modalities at multiple time points. For this year's competition, Cellarity generated a 300,000-cell time course dataset of CD34+ hematopoietic stem and progenitor cells (HSPCs) from four human donors at five time points. HSPCs are stem cells that give rise to all other cells in the blood throughout adult life, and a 10-day time course captures important biology in CD34+ HSPCs. Being able to solve the prediction problems over time is expected to yield new insights into how gene regulation influences differentiation.

Entries to the competition will be accepted until November 15, 2022. For more information, visit the competition page on Kaggle.

About Open Problems in Single Cell Analysis

Open Problems in Single-Cell Analysis was founded in 2020 bringing together academic, non-profit, and for-profit institutions to accelerate innovation in single-cell algorithm development. An explosion in single-cell analysis algorithms has resulted in more than 1,200 methods published in the last five years. However, few standard benchmarks exist for single-cell biology, both making it difficult to identify top performing algorithms and hindering collaboration with the machine learning community to accelerate single-cell science. Open Problems is a first-of-its-kind international consortium developing a centralized, open-source, and continuously updated framework for benchmarking single-cell algorithms to drive innovation and alignment in the field. For more information, visit https://openproblems.bio/.

About Cellarity

Cellarity's mission is to fundamentally transform the way medicines are created. Founded by Flagship Pioneering in 2017, Cellarity has developed unique capabilities combining high-resolution data, single-cell technologies, and machine learning to encode biology, predict interventions, and purposefully design breakthrough medicines. By focusing on the cellular changes that underlie disease instead of a single target, Cellarity's approach uncovers new biology and treatments and is applicable to a vast array of disease areas. The company currently has programs underway in metabolic disease, hematology, immuno-oncology, and respiratory disease. For more info, visit http://www.cellarity.com.

About Flagship Pioneering

Flagship Pioneering conceives, creates, resources, and develops first-in-category bioplatform companies to transform human health and sustainability. Since its launch in 2000, the firm has, through its Flagship Labs unit, applied its unique hypothesis-driven innovation process to originate and foster more than 100 scientific ventures, resulting in more than $100 billion in aggregate value. To date, Flagship has deployed over $2.9 billion in capital toward the founding and growth of its pioneering companies alongside more than $19 billion of follow-on investments from other institutions. The current Flagship ecosystem comprises 41 transformative companies, including Denali Therapeutics (NASDAQ: DNLI), Evelo Biosciences (NASDAQ: EVLO), Foghorn Therapeutics (NASDAQ: FHTX), Moderna (NASDAQ: MRNA), Omega Therapeutics (NASDAQ: OMGA), Rubius Therapeutics (NASDAQ: RUBY), Sana Biotechnology (NASDAQ: SANA), and Seres Therapeutics (NASDAQ: MCRB).

Go here to read the rest:
Cellarity Releases Novel, Open-Source, Single-Cell Dataset and Invites the Machine Learning and Computational Biology Communities to Develop New...

Posted in Machine Learning | Comments Off on Cellarity Releases Novel, Open-Source, Single-Cell Dataset and Invites the Machine Learning and Computational Biology Communities to Develop New…

Are You Making These Deadly Mistakes With Your AI Projects? – Forbes

Since data is at the heart of AI, it should come as no surprise that AI and ML systems need enough good quality data to learn. In general, a large volume of good quality data is needed, especially for supervised learning approaches, in order to properly train the AI or ML system. The exact amount of data needed may vary depending on which pattern of AI you're implementing, the algorithm you're using, and other factors such as in-house versus third-party data. For example, neural nets need a lot of data to be trained, while decision trees or Bayesian classifiers don't need as much data to still produce high quality results.

So you might think more is better, right? Well, think again. Organizations with lots of data, even exabytes, are realizing that having more data is not the solution to their problems as they might expect. Indeed, more data, more problems. The more data you have, the more data you need to clean and prepare. The more data you need to label and manage. The more data you need to secure, protect, mitigate bias, and more. Small projects can rapidly turn into very large projects when you start multiplying the amount of data. In fact, many times, lots of data kills projects.

Clearly the missing step between identifying a business problem and getting the data squared away to solve that problem is determining which data you need and how much of it you really need. You need enough, but not too much. "Goldilocks data" is what people often say: not too much, not too little, but just right. Unfortunately, far too often, organizations jump into AI projects without first understanding their data. Questions organizations need to answer include where the data is, how much of it they already have, what condition it is in, which features of that data are most important, whether to use internal or external data, data access challenges, requirements to augment existing data, and other crucial factors. Without these questions answered, AI projects can quickly die, even drowning in a sea of data.

Getting a better understanding of data

In order to understand just how much data you need, you first need to understand how and where data fits into the structure of AI projects. One visual way of understanding the increasing levels of value we get from data is the DIKUW pyramid (sometimes also referred to as the DIKW pyramid) which shows how a foundation of data helps build greater value with Information, Knowledge, Understanding and Wisdom.

DIKW pyramid

With a solid foundation of data, you can gain additional insights at the next layer, information, which helps you answer basic questions about that data. Once you have made basic connections between data to gain informational insight, you can find patterns in that information to gain knowledge of how various pieces of information are connected together for greater insight. Building on the knowledge layer, organizations can get even more value by understanding why those patterns are happening. Finally, the wisdom layer is where you can gain the most value, by providing insight into the cause and effect of decision making.

This latest wave of AI focuses most on the knowledge layer, since machine learning provides the insight on top of the information layer to identify patterns. Unfortunately, machine learning reaches its limits in the understanding layer, since finding patterns isn't sufficient for reasoning. We have machine learning, but not the machine reasoning required to understand why the patterns are happening. You can see this limitation in effect any time you interact with a chatbot. While machine learning-enabled NLP is really good at understanding your speech and deriving intent, it runs into limitations trying to understand and reason. For example, if you ask a voice assistant whether you should wear a raincoat tomorrow, it doesn't understand that you're asking about the weather. A human has to provide that insight to the machine because the voice assistant doesn't know what rain actually is.

Avoiding Failure by Staying Data Aware

Big data has taught us how to deal with large quantities of data: not just how it's stored, but how to process, manipulate, and analyze all that data. Machine learning has added more value by being able to deal with the wide range of unstructured, semi-structured, and structured data collected by organizations. Indeed, this latest wave of AI is really the big data-powered analytics wave.

But it's exactly for this reason that some organizations are failing so hard at AI. Rather than run AI projects with a data-centric perspective, they are focusing on the functional aspects. To get a handle on their AI projects and avoid deadly mistakes, organizations need a better understanding not only of AI and machine learning but also of the "Vs" of big data, such as volume, velocity, variety, and veracity. It's not just about how much data you have, but also the nature of that data.

With decades of experience managing big data projects, organizations that are successful with AI are primarily successful with big data. The ones that are seeing their AI projects die are the ones who are coming at their AI problems with application development mindsets.

Too Much of the Wrong Data, and Not Enough of the Right Data is Killing AI Projects

While AI projects may start off on the right foot, a lack of the necessary data, together with a failure to understand and then solve real problems, is killing them. Organizations are powering forward without a real understanding of the data they need and the quality of that data. This poses real challenges.

One of the reasons organizations are making this data mistake is that they are running their AI projects without any real approach to doing so, other than using Agile or app dev methods. Successful organizations, however, have realized that data-centric approaches treat data understanding as one of the first phases of the project. The CRISP-DM methodology, which has been around for over two decades, specifies data understanding as the very next thing to do once you determine your business needs. Building on CRISP-DM and adding Agile methods, the Cognitive Project Management for AI (CPMAI) Methodology requires data understanding in its Phase II. Other successful approaches likewise require data understanding early in the project because, after all, AI projects are data projects. And how can you build a successful project on a foundation of data without running your project with an understanding of that data? That's surely a deadly mistake you want to avoid.

Read more:
Are You Making These Deadly Mistakes With Your AI Projects? - Forbes

Posted in Machine Learning | Comments Off on Are You Making These Deadly Mistakes With Your AI Projects? – Forbes

Deep learning algorithm predicts Cardano to trade above $2 by the end of August – Finbold – Finance in Bold

The price of Cardano (ADA) has mainly traded in the green in recent weeks as the network, dubbed an "Ethereum killer," continues to record increased blockchain development.

Specifically, the Cardano community is projecting a possible rise in the token's value, especially with the upcoming Vasil hard fork.

In this line, NeuralProphet's PyTorch-based price prediction algorithm, which deploys an open-source machine learning framework, has predicted that ADA would trade at $2.26 by August 31, 2022.
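NeuralProphet itself fits trend and seasonality components on top of PyTorch; as a much simpler stand-in (illustrative only: the price history below is hypothetical, not actual ADA data, and this is not the model the article describes), a least-squares linear trend extrapolated forward shows the basic shape of such a projection:

```python
# Illustrative only: a least-squares linear trend extrapolated forward,
# a crude stand-in for the trend component a tool like NeuralProphet fits.
def linear_forecast(prices, steps_ahead):
    n = len(prices)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(prices) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, prices)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    # Project the fitted line `steps_ahead` points past the last observation.
    return intercept + slope * (n - 1 + steps_ahead)

history = [0.44, 0.46, 0.49, 0.51, 0.53]  # hypothetical daily closes
print(round(linear_forecast(history, 30), 2))
```

Naively extending a recent uptrend is exactly why such projections look bullish in a rally and why, as the article notes below, they are not an accurate indicator of future prices.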

Although the prediction model covers the period from July 31 to December 31, 2022, and is not an accurate indicator of future prices, its predictions had historically proven relatively accurate up until the abrupt market collapse of the algorithm-based stablecoin project TerraUSD (UST).

However, the prediction aligns with the generally bullish sentiment around ADA that stems from the network activity aimed at improving the asset's utility. As reported by Finbold, Cardano founder Charles Hoskinson revealed that the highly anticipated Vasil hard fork is ready to be rolled out after delays.

It is worth noting that despite minor gains, ADA has yet to show any significant reaction to the upgrade, but the token's proponents are watching the price movement closely as it shows signs of recovery. Similarly, the token has benefited from the recent two-month-long rally across the general cryptocurrency market.

Elsewhere, the CoinMarketCap community is projecting that ADA will trade at $0.58 by the end of August. The prediction is supported by about 17,877 community members and represents a price growth of about 8.71% from the token's current value.

For September, the community has placed its prediction at $0.5891, a growth of about 9% from the current price. Interestingly, the NeuralProphet algorithm predicts that ADA will trade at $1.77 by the end of September. Overall, both prediction platforms indicate an increase from the digital asset's current price.

By press time, the token was trading at $0.53 with gains of less than 1% in the last 24 hours.

In general, multiple investors are aiming to capitalize on the Vasil hard fork, especially with Cardano clarifying that the upgrade is proceeding according to plan.

Disclaimer: The content on this site should not be considered investment advice. Investing is speculative. When investing, your capital is at risk.

See the original post here:
Deep learning algorithm predicts Cardano to trade above $2 by the end of August - Finbold - Finance in Bold

Posted in Machine Learning | Comments Off on Deep learning algorithm predicts Cardano to trade above $2 by the end of August – Finbold – Finance in Bold