

Category Archives: Machine Learning

IonQ CEO Peter Chapman on how quantum computing will change the future of AI – VentureBeat

Businesses eager to embrace cutting-edge technology are exploring quantum computing, which depends on qubits to perform computations that would be much more difficult, or simply not feasible, on classical computers. The ultimate goals are quantum advantage, the inflection point when quantum computers begin to solve useful problems, and quantum supremacy, when a quantum computer can solve a problem that classical computers practically cannot. While those are a long way off (if they can even be achieved), the potential is massive. Applications include everything from cryptography and optimization to machine learning and materials science.

As quantum computing startup IonQ has described it, quantum computing is a marathon, not a sprint. We had the pleasure of interviewing IonQ CEO Peter Chapman last month to discuss a variety of topics. Among other questions, we asked Chapman about quantum computing's future impact on AI and ML.

The conversation quickly turned to Strong AI, or Artificial General Intelligence (AGI), which does not yet exist. Strong AI is the idea that a machine could one day understand or learn any intellectual task that a human being can.

"AI in the Strong AI sense, that's where I have more of an opinion, just because I have more experience in that personally," Chapman told VentureBeat. "And there was a really interesting paper that just recently came out talking about how to use a quantum computer to infer the meaning of words in NLP. And I do think that those kinds of things for Strong AI look quite promising. It's actually one of the reasons I joined IonQ. It's because I think that does have some sort of application."

In a follow-up email, Chapman expanded on his thoughts. "For decades it was believed that the brain's computational capacity lay in the neuron as a minimal unit," he wrote. "Early efforts by many tried to find a solution using artificial neurons linked together in artificial neural networks, with very limited success. This approach was fueled by the thought that the brain is an electrical computer, similar to a classical computer."

"However, since then, I believe we now know that the brain is not an electrical computer, but an electrochemical one," he added. "Sadly, today's computers do not have the processing power to be able to simulate the chemical interactions across discrete parts of the neuron, such as the dendrites, the axon, and the synapse. And even with Moore's law, they won't next year, or even after a million years."

Chapman then quoted Richard Feynman, who famously said: "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy."

"Similarly, it's likely Strong AI isn't classical; it's quantum mechanical as well," Chapman said.

One of IonQ's competitors, D-Wave, argues that quantum computing and machine learning are "extremely well matched." Chapman is still on the fence.

"I haven't spent enough time to really understand it," he admitted. "There clearly are a lot of people who think that ML and quantum have an overlap. Certainly, if you think of it, 85% of all ML produces a decision tree. And the depth of that decision tree could easily be optimized with a quantum computer. Clearly there are lots of people who think that generation of the decision tree could be optimized with a quantum computer. Honestly, I don't know if that's the case or not. I think it's still a little early for machine learning, but there clearly are so many people working on it. It's hard to imagine it doesn't have application."

Again, in a later email, Chapman followed up: "ML has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Generally, universal quantum computers excel at these kinds of problems."
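Chapman's framing of learning as loss minimization is the standard one. As a toy illustration (plain NumPy, nothing IonQ- or quantum-specific), here is gradient descent driving down a squared-error loss over a small training set:

```python
import numpy as np

# Fit a line y = w*x + b by gradient descent on the mean squared error
# over a training set: "learning" here is literally loss minimization.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 100)  # noisy linear data

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)  # d(MSE)/dw
    grad_b = 2 * np.mean(pred - y)        # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=3.00, b=0.50
```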

Chapman listed three improvements in ML that quantum computing will likely allow:

Whether it's Strong AI or ML, IonQ isn't particularly interested in tackling either itself. The company leaves that part to its customers and future partners.

"There's so much to be done in quantum," Chapman said. "From education at one end, all the way to the quantum computer itself. I think some of our competitors have taken on lots of the entire problem set. We at IonQ are just focused on producing the world's best quantum computer for them. We think that's a large enough task for a little company like us to handle."

"So, for the moment we're kind of happy to let everyone else work on different problems," he added. "We just think producing the world's best quantum computer is a large enough task. We just don't have extra bandwidth or resources to put into working on machine learning algorithms. And luckily, there are lots of other companies that think there are applications there. We'll partner with them in the sense that we'll provide the hardware that their algorithms will run on. But we're not in the ML business per se."

The rest is here:
IonQ CEO Peter Chapman on how quantum computing will change the future of AI - VentureBeat


Determined AI makes its machine learning infrastructure free and open source – TechCrunch

Machine learning has quickly gone from niche field to crucial component of innumerable software stacks, but that doesn't mean it's easy. The tools needed to create and manage it are enterprise-grade, and often enterprise-only, but Determined AI aims to make them more accessible than ever by open-sourcing its entire AI infrastructure product.

The company created its Determined Training Platform for developing AI in an organized, reliable way: the kind of thing that large companies have created (and kept) for themselves, the team explained when they raised an $11 million Series A last year.

"Machine learning is going to be a big part of how software is developed going forward. But in order for companies like Google and Amazon to be productive, they had to build all this software infrastructure," said CEO Evan Sparks. "One company we worked for had 70 people building their internal tools for AI. There just aren't that many companies on the planet that can withstand an effort like that."

At smaller companies, ML is being experimented with by small teams using tools intended for academic work and individual research. To scale that up to dozens of engineers developing a real product, there aren't a lot of options.

"They're using things like TensorFlow and PyTorch," said Chief Scientist Ameet Talwalkar. "A lot of the way that work is done is just conventions: How do the models get trained? Where do I write down the data on which is best? How do I transform data to a good format? All these are bread-and-butter tasks. There's tech to do it, but it's really the Wild West. And the amount of work you have to do to get it set up: there's a reason big tech companies build out these internal infrastructures."

Determined AI, whose founders started out at UC Berkeley's AMPLab (home of Apache Spark), has been developing its platform for a few years, with feedback and validation from some paying customers. Now, they say, it's ready for its open source debut, with an Apache 2.0 license, of course.

"We have confidence people can pick it up and use it on their own without a lot of hand-holding," said Sparks.

You can spin up your own self-hosted installation of the platform using local or cloud hardware, but the easiest way to go about it is probably the cloud-managed version, which automatically provisions resources from AWS or wherever you prefer and tears them down when they're no longer needed.

The hope is that the Determined AI platform becomes something of a base layer that lots of small companies can agree on, providing portability of results and shared standards so you're not starting from scratch at every company or project.

With machine learning development expected to expand by orders of magnitude in the coming years, even a small piece of the pie is worth claiming, but with luck, Determined AI may grow to be the new de facto standard for AI development in small and medium businesses.

You can check out the platform on GitHub or at Determined AI's developer site.

Read the original here:
Determined AI makes its machine learning infrastructure free and open source - TechCrunch


Industrial Asset Optimization: Connecting Machines Directly with Data Scientists – Machine Learning Times

By: Terry Miller, Global Digital Strategy and Business Development, Siemens. For more from this author, attend his virtual presentation, "Industrial Asset Optimization: Machine-to-Cloud/Edge Analytics," at Predictive Analytics World for Industry 4.0, May 31-June 4, 2020. For industrial firms to realize the benefits promised by embracing Industry 4.0, access to clean, quality asset data must improve. Most of a data scientist's work, in any vertical, involves cleaning and contextualizing data, or data prep. In the industrial segment this remains true, and considerably more challenging. Enterprise-wide data-ingest platforms tend to yield inefficient, incomplete versions of the data necessary to optimize assets at the application layer. In order to improve this, firms should…


Read more:
Industrial Asset Optimization: Connecting Machines Directly with Data Scientists - Machine Learning Times


Tecton.ai Launches with New Data Platform to Make Machine Learning Accessible to Every Company – insideBIGDATA

Tecton.ai emerged from stealth and formally launched with its data platform for machine learning. Tecton enables data scientists to turn raw data into production-ready features, the predictive signals that feed machine learning models. Tecton is in private beta with paying customers, including a Fortune 50 company.

Tecton.ai also announced $25 million in seed and Series A funding co-led by Andreessen Horowitz and Sequoia. Both Martin Casado, general partner at Andreessen Horowitz, and Matt Miller, partner at Sequoia, have joined the board.

Tecton.ai founders Mike Del Balso (CEO), Kevin Stumpf (CTO) and Jeremy Hermann (VP of Engineering) worked together at Uber when the company was struggling to build and deploy new machine learning models, so they created Uber's Michelangelo machine learning platform. Michelangelo was instrumental in scaling Uber's operations to thousands of production models serving millions of transactions per second in just a few years, and today it supports a myriad of use cases, from generating marketplace forecasts and calculating ETAs to automating fraud detection.

Del Balso, Stumpf and Hermann went on to found Tecton.ai to solve the data challenges that are the biggest impediment to deploying machine learning in the enterprise today. Enterprises are already generating vast amounts of data, but the problem is how to harness and refine this data into the predictive signals that power machine learning models. Engineering teams end up spending the majority of their time building bespoke data pipelines for each new project. These custom pipelines are complex, brittle, expensive and often redundant. The end result is that 78% of new projects never get deployed, and 96% of projects encounter challenges with data quality and quantity (1).
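To make "features" concrete: the sketch below turns raw transaction rows into per-user aggregates of the kind a fraud model might consume. This is a generic pandas illustration of what such pipelines produce, not Tecton's API; all names are invented for the example.

```python
import pandas as pd

# Raw event data as it might land from a payments system.
raw_transactions = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2],
    "amount":  [20.0, 300.0, 15.0, 18.0, 22.0],
})

# Features: per-user transaction count and average amount -- simple
# predictive signals a fraud-detection model could take as input.
features = (
    raw_transactions
    .groupby("user_id")["amount"]
    .agg(txn_count="count", avg_amount="mean")
    .reset_index()
)
print(features)
#    user_id  txn_count  avg_amount
# 0        1          2      160.00
# 1        2          3       18.33
```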

"Data problems all too often cause last-mile delivery issues for machine learning projects," said Mike Del Balso, Tecton.ai co-founder and CEO. "With Tecton, there is no last mile. We created Tecton to empower data science teams to take control of their data and focus on building models, not pipelines. With Tecton, organizations can deliver impact with machine learning quickly, reliably and at scale."

Tecton.ai has assembled a world-class engineering team with deep experience building machine learning infrastructure for industry leaders such as Google, Facebook, Airbnb and Uber. Tecton is the industry's first data platform designed specifically to support the requirements of operational machine learning. It empowers data scientists to build great features, serve them to production quickly and reliably, and do it at scale.

Tecton makes the delivery of machine learning data predictable for every company.

"The ability to manage data and extract insights from it is catalyzing the next wave of business transformation," said Martin Casado, general partner at Andreessen Horowitz. "The Tecton team has been at the forefront of this change, with a long history of machine learning/AI and data at Google, Facebook and Airbnb, and building the machine learning platform at Uber. We're very excited to be partnering with Mike, Kevin, Jeremy and the Tecton team to bring this expertise to the rest of the industry."

"The founders of Tecton built a platform within Uber that took machine learning from a bespoke research effort to the core of how the company operated day-to-day," said Matt Miller, partner at Sequoia. "They started Tecton to democratize machine learning across the enterprise. We believe their platform for machine learning will drive a Cambrian explosion within their customers, empowering them to drive their business operations with this powerful technology paradigm, unlocking countless opportunities. We were thrilled to partner with Tecton along with a16z at the seed and now again at the Series A. We believe Tecton has the potential to be one of the most transformational enterprise companies of this decade."


Continue reading here:
Tecton.ai Launches with New Data Platform to Make Machine Learning Accessible to Every Company - insideBIGDATA


How To Verify The Memory Loss Of A Machine Learning Model – Analytics India Magazine

It is a known fact that deep learning models get better with diversity in the data they are fed. For instance, a healthcare use case will take data from several providers (patient data, medical history, professionals' workflows, insurance records, etc.) to ensure such data diversity.

These data points, collected through people's various interactions, are fed into a machine learning model that sits remotely in a data haven, churning out predictions without tiring.

However, consider a scenario where one of the providers ceases to offer data to the healthcare project and later requests that the provided information be deleted. In such a case, does the model remember or forget what it learned from this data?

To explore this, a team from the University of Edinburgh and the Alan Turing Institute started from the assumption that a model has forgotten some data and asked what can be done to verify that claim. In the process, they investigated the challenges involved and offered solutions.

The authors write that this initiative is the first of its kind; the only prior work that comes close is the Membership Inference Attack (MIA), which also inspired this one.

To verify whether a model has forgotten specific data, the authors propose a Kolmogorov-Smirnov (K-S) distance-based method. This method is used to infer whether a model was trained with the query dataset; the full algorithm is given in the paper, and a simplified sketch of the comparison step appears after the experimental details below.

Based on this algorithm, the researchers ran experiments on benchmark datasets such as MNIST, SVHN and CIFAR-10 to verify the effectiveness of the new method. Later, the method was also tested on the ACDC dataset, using the pathology-detection component of that challenge.

The MNIST dataset contains 60,000 images of 10 digits, with image size 28 × 28. Similar to MNIST, the SVHN dataset has over 600,000 digit images obtained from house numbers in Google Street View images; its image size is 32 × 32. Since both datasets cover the task of digit recognition/classification, they were considered to belong to the same domain. CIFAR-10, used to validate the method, has 60,000 images (size 32 × 32) of 10 object classes, including aeroplane, bird, etc. To train models with the same design, the images of all three datasets are preprocessed to grey-scale and rescaled to 28 × 28.
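As a minimal sketch of that preprocessing step (assuming a torchvision pipeline; the paper's own code may differ), the three datasets can be mapped onto a common grey-scale 28 × 28 format like so:

```python
from torchvision import datasets, transforms

# Convert every image to single-channel grey-scale and rescale to 28x28,
# so that MNIST, SVHN and CIFAR-10 models share one input format.
to_common_format = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # drop colour channels
    transforms.Resize((28, 28)),                  # unify image size
    transforms.ToTensor(),                        # pixels scaled to [0, 1]
])

# Example: load SVHN with the shared preprocessing applied on the fly.
svhn = datasets.SVHN(root="./data", split="train", download=True,
                     transform=to_common_format)
```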

Using the K-S distance, the authors said, statistics about the output distribution of a target model can be obtained without knowing the weights of the model. Since the model's training data are unknown, a few new models, called shadow models, were trained with the query dataset and with another calibration dataset.

Then, by comparing the K-S values, one can conclude whether or not the training data contained information from the query dataset.
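The comparison step can be pictured roughly as follows. This is a simplified sketch, not the authors' algorithm: it uses SciPy's two-sample K-S test, stubs out the trained models with placeholder output distributions, and reduces the paper's decision procedure to a single nearest-distribution rule.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov distance between output samples."""
    return ks_2samp(a, b).statistic

# Placeholder output scores on the query dataset. In the paper these come
# from trained networks; here they are stand-in distributions only.
rng = np.random.default_rng(42)
target_outputs      = rng.beta(8, 2, 1000)  # model under audit
shadow_outputs      = rng.beta(8, 2, 1000)  # shadow model: trained WITH the query data
calibration_outputs = rng.beta(4, 4, 1000)  # calibration model: trained WITHOUT it

d_shadow = ks_distance(target_outputs, shadow_outputs)
d_calib  = ks_distance(target_outputs, calibration_outputs)

# If the target's outputs look more like the shadow model's than the
# calibration model's, the query data likely informed its training,
# i.e. the model has NOT forgotten it.
print("remembered" if d_shadow < d_calib else "forgotten")
```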

Experiments have been done before to check how much ownership one has over one's data on the internet. One such attempt was made by researchers at Stanford, who investigated the algorithmic principles behind efficient data deletion in machine learning.

They found that for many standard ML models, the only way to completely remove an individual's data is to retrain the whole model from scratch on the remaining data, which is often not computationally practical. A trade-off between efficiency and privacy arises because algorithms that support efficient deletion need not be private, and algorithms that are private do not have to support efficient deletion.

The aforementioned experiments are an attempt to probe, and raise new questions in, the never-ending debate about the use of AI and privacy. The objective of these works is to investigate how much authority an individual has over specific data, while also helping expose the vulnerabilities within a model if certain data is removed.

Check more about this work here.


See the original post here:
How To Verify The Memory Loss Of A Machine Learning Model - Analytics India Magazine


AI, machine learning and automation in cybersecurity: The time is now – GCN.com

INDUSTRY INSIGHT

The cybersecurity skills shortage continues to plague organizations across regions, markets and sectors, and the government sector is no exception. According to (ISC)2, there are only enough cybersecurity pros to fill about 60% of the jobs that are currently open -- which means the workforce will need to grow by roughly 145% just to meet the current global demand.

The Government Accountability Office states that the federal government needs a qualified, well-trained cybersecurity workforce to protect vital IT systems, and one senior cybersecurity official at the Department of Homeland Security has described the talent gap as a national security issue. The scarcity of such workers is one reason why securing federal systems is on GAO's High Risk List. Given this situation, chief information security officers who are looking for ways to make their existing resources more effective can make great use of automation and artificial intelligence to supplement and enhance their workforce.

The overall challenge landscape

Results of our survey, "Making Tough Choices: How CISOs Manage Escalating Threats and Limited Resources," show that CISOs currently devote 36% of their budgets to response and 33% to prevention. However, as security needs change, many CISOs are looking to shift budget away from prevention without reducing its effectiveness. An optimal budget would reduce spending on prevention and increase spending on detection and response to 33% and 40% of the security budget, respectively. This shift would give security teams the speed and flexibility they need to react quickly in the face of threats from cybercriminals who are outpacing agencies' defensive capabilities. Since breaches are inevitable, it is important to stop as many as possible at the point of intrusion, but it is even more important to detect and respond to them before they can do serious damage.

One challenge to matching the speed of today's cyberattacks is that CISOs have limited personnel and budget resources. To overcome these obstacles and attain the detection and response speeds necessary for effective cybersecurity, CISOs must take advantage of AI, machine learning and automation. These technologies will help close gaps by correlating threat intelligence and coordinating responses at machine speed. Government agencies will be able to develop a self-defending security system capable of analyzing large volumes of data, detecting threats, reconfiguring devices and responding to threats without human intervention.

The unique challenges

Federal agencies deal with a number of challenges unique to the public sector, including the age and complexity of IT systems as well as the challenges of the government budget cycle. IT teams for government agencies aren't just protecting intellectual property or credit card numbers; they are also tasked with protecting citizens' sensitive data and national security secrets.

Charged with this duty but constrained by limited resources, IT leaders must weigh the risks of cyber threats against the daily demands of keeping networks up and running. This balancing act becomes more difficult as agencies migrate to the cloud, adopt internet-of-things devices and transition to software-defined networks that have no perimeter. These changes mean government networks are expanding their attack surface with no additional -- or even fewer -- defensive resources. It's part of the reason why the Verizon Data Breach Investigations Report found that government agencies were subjected to more security incidents and more breaches than any other sector last year.

To change that dynamic, the typical government set-up of siloed systems must be replaced with a unified platform that can provide wider and more granular network visibility and more rapid and automated response.

How AI and automation can help

The keys to making a unified platform work are AI and automation technologies. Because organizations cannot keep pace with the growing volume of threats through manual detection and response, they need to leverage AI/ML and automation to fill these gaps. AI-driven solutions can learn what normal behavior looks like in order to detect anomalous behavior. For instance, many employees typically access a specific kind of data or only log on at certain times. If an employee's account starts to show activity outside of these normal parameters, an AI/ML-based solution can detect the anomaly and can inspect or quarantine the affected device or user account until it is determined to be safe or mitigating action can be taken.
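As a toy sketch of that baseline-and-flag pattern, the snippet below fits scikit-learn's IsolationForest to simulated "normal" account activity and flags a login that falls outside it; the features (login hour, data volume) and the quarantine response are illustrative assumptions, not taken from any particular product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Learn a baseline of "normal" account activity: login hour and data
# volume per session, simulated here for 500 ordinary sessions.
rng = np.random.default_rng(7)
normal_activity = np.column_stack([
    rng.normal(10, 1.5, 500),  # login hour, clustered in business hours
    rng.normal(50, 10, 500),   # data accessed per session (MB)
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)

# A 3 a.m. login pulling 400 MB sits far outside the learned baseline.
suspicious_session = np.array([[3.0, 400.0]])
if detector.predict(suspicious_session)[0] == -1:  # -1 marks an outlier
    print("anomaly detected: quarantine account pending review")
```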

If the device is infected with malware or is otherwise acting maliciously, that AI-based tool can also issue automated responses. Making these tactical tasks the responsibility of AI-driven solutions frees security teams to work on more strategic problems, develop threat intelligence or focus on more difficult tasks such as detecting unknown threats.

IT teams at government agencies that want to implement AI and automation must be sure the solution they choose can scale and operate at machine speeds to keep up with the growing complexity and speed of the threat. In selecting a solution, IT managers must take time to ensure solutions have been developed using AI best practices and training techniques and that they are powered by best-in-class threat intelligence, security research and analytics technology. Data should be collected from a variety of nodes -- both globally and within the local IT environment -- to glean the most accurate and actionable information for supporting a security strategy.

Time is of the essence

Government agencies are experiencing more cyberattacks than ever before, at a time when the nation is facing a 40% cybersecurity skills shortage. Time is of the essence in defending a network, but time is what under-resourced and over-tasked government IT teams typically lack. As attacks come more rapidly and adapt to the evolving IT environment and new vulnerabilities, AI/ML and automation are rapidly becoming necessities. Solutions built from the ground up with these technologies will help government CISOs counter, and potentially get ahead of, today's sophisticated attacks.

About the Author

Jim Richberg is a Fortinet field CISO focused on the U.S. public sector.

More here:
AI, machine learning and automation in cybersecurity: The time is now - GCN.com
