

Category Archives: Machine Learning

Adversarial attacks against machine learning systems – everything you need to know – The Daily Swig

The behavior of machine learning systems can be manipulated, with potentially devastating consequences

In March 2019, security researchers at Tencent managed to trick a Tesla Model S into switching lanes.

All they had to do was place a few inconspicuous stickers on the road. The technique exploited glitches in the machine learning (ML) algorithms that power Tesla's Lane Detection technology to cause the car to behave erratically.

Machine learning has become an integral part of many of the applications we use every day, from the facial recognition lock on iPhones to Alexa's voice recognition and the spam filters in our email.

But the pervasiveness of machine learning, and of its subset deep learning, has also given rise to adversarial attacks: a breed of exploits that manipulate the behavior of algorithms by feeding them carefully crafted input data.

"Adversarial attacks are manipulative actions that aim to undermine machine learning performance, cause model misbehavior, or acquire protected information," Pin-Yu Chen, chief scientist of the RPI-IBM AI research collaboration at IBM Research, told The Daily Swig.

Adversarial machine learning was studied as early as 2004. But at the time, it was regarded as an interesting peculiarity rather than a security threat. However, the rise of deep learning and its integration into many applications in recent years has renewed interest in adversarial machine learning.

There's growing concern in the security community that adversarial vulnerabilities can be weaponized to attack AI-powered systems.

As opposed to classic software, where developers manually write instructions and rules, machine learning algorithms develop their behavior through experience.

For instance, to create a lane-detection system, the developer creates a machine learning algorithm and trains it by providing it with many labeled images of street lanes from different angles and under different lighting conditions.

The machine learning model then tunes its parameters to capture the common patterns that occur in images that contain street lanes.

With the right algorithm structure and enough training examples, the model will be able to detect lanes in new images and videos with remarkable accuracy.

But despite their success in complex fields such as computer vision and voice recognition, machine learning algorithms are statistical inference engines: complex mathematical functions that transform inputs to outputs.

If a machine learning model tags an image as containing a specific object, it has found the pixel values in that image to be statistically similar to other images of that object it processed during training.
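As a rough illustration of that training process, the sketch below fits a tiny convolutional network to synthetic data standing in for labeled lane images. The architecture, data, and hyperparameters are placeholders, not the pipeline any vendor actually uses.

```python
# A minimal sketch of the training process described above: a small convolutional
# network tunes its parameters to separate two classes. Synthetic 32x32 images
# stand in for real street-lane photos.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                      # two classes: lane / no lane
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(64, 3, 32, 32)       # placeholder training batch
labels = torch.randint(0, 2, (64,))       # placeholder labels

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                       # adjust parameters toward the common patterns
    optimizer.step()
```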

Adversarial attacks exploit this characteristic to confound machine learning algorithms by manipulating their input data. For instance, by adding tiny and inconspicuous patches of pixels to an image, a malicious actor can cause the machine learning algorithm to classify it as something it is not.

Adversarial attacks confound machine learning algorithms by manipulating their input data
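The best-known recipe for crafting such pixel-level perturbations is the fast gradient sign method (FGSM), which nudges every pixel a small step in the direction that most increases the model's loss. The snippet below is a minimal sketch against a stand-in classifier, not an attack on any deployed system.

```python
# Sketch of a gradient-based adversarial perturbation (FGSM-style): a tiny,
# human-imperceptible change to the pixels can flip the model's prediction.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # clean input
true_label = torch.tensor([0])

loss = loss_fn(model(image), true_label)
loss.backward()                                        # gradient of loss w.r.t. pixels

epsilon = 0.03                                         # perturbation budget
adv_image = image + epsilon * image.grad.sign()        # nudge each pixel against the model
adv_image = adv_image.clamp(0, 1).detach()

print(model(image).argmax(1), model(adv_image).argmax(1))  # predictions may now differ
```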

The types of perturbations applied in adversarial attacks depend on the target data type and desired effect. "The threat model needs to be customized for different data modality to be reasonably adversarial," says Chen.

"For instance, for images and audios, it makes sense to consider small data perturbation as a threat model because it will not be easily perceived by a human but may make the target model misbehave, causing inconsistency between human and machine."

"However, for some data types such as text, perturbation, by simply changing a word or a character, may disrupt the semantics and easily be detected by humans. Therefore, the threat model for text should be naturally different from image or audio."

The most widely studied area of adversarial machine learning involves algorithms that process visual data. The lane-changing trick mentioned at the beginning of this article is an example of a visual adversarial attack.

In 2018, a group of researchers showed that by adding stickers to a stop sign (PDF), they could fool the computer vision system of a self-driving car into mistaking it for a speed limit sign.

Researchers tricked self-driving systems into identifying a stop sign as a speed limit sign

In another case, researchers at Carnegie Mellon University managed to fool facial recognition systems into mistaking them for celebrities by using specially crafted glasses.

Adversarial attacks against facial recognition systems have found their first real use in protests, where demonstrators use stickers and makeup to fool surveillance cameras powered by machine learning algorithms.

Computer vision systems are not the only targets of adversarial attacks. In 2018, researchers showed that automated speech recognition (ASR) systems could also be targeted with adversarial attacks (PDF). ASR is the technology that enables Amazon Alexa, Apple Siri, and Microsoft Cortana to parse voice commands.

In a hypothetical adversarial attack, a malicious actor would carefully manipulate an audio file, say a song posted on YouTube, to contain a hidden voice command. A human listener wouldn't notice the change, but to a machine learning algorithm looking for patterns in sound waves it would be clearly audible and actionable. For example, audio adversarial attacks could be used to secretly send commands to smart speakers.

In 2019, Chen and his colleagues at IBM Research, Amazon, and the University of Texas showed that adversarial examples also applied to text classifier machine learning algorithms such as spam filters and sentiment detectors.

Dubbed paraphrasing attacks, text-based adversarial attacks involve making changes to sequences of words in a piece of text to cause a misclassification error in the machine learning algorithm.

Example of a paraphrasing attack against fake news detectors and spam filters
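The toy example below conveys the idea with a deliberately simple keyword-based "spam filter" standing in for a trained text classifier: swapping a few words for synonyms keeps the meaning for a human but moves the message outside the patterns the filter relies on. Real paraphrasing attacks target learned models, not keyword lists.

```python
# Toy illustration of a word-substitution ("paraphrasing") attack. The keyword
# classifier here is only a stand-in for a trained spam filter.
SPAM_WORDS = {"free", "winner", "prize", "cash"}
SYNONYMS = {"free": "complimentary", "winner": "lucky-customer",
            "prize": "reward", "cash": "funds"}

def toy_spam_score(text: str) -> float:
    """Fraction of words that look 'spammy' to the stand-in filter."""
    words = text.lower().split()
    return sum(w in SPAM_WORDS for w in words) / max(len(words), 1)

message = "You are a winner claim your free cash prize now"
paraphrased = " ".join(SYNONYMS.get(w, w) for w in message.lower().split())

print(toy_spam_score(message))      # high score: flagged as spam
print(toy_spam_score(paraphrased))  # low score: slips past the filter
```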

Like any cyber-attack, the success of adversarial attacks depends on how much information an attacker has on the targeted machine learning model. In this respect, adversarial attacks are divided into black-box and white-box attacks.

"Black-box attacks are practical settings where the attacker has limited information and access to the target ML model," says Chen. "The attacker's capability is the same as a regular user's, and they can only perform attacks given the allowed functions. The attacker also has no knowledge about the model and data used behind the service."


For instance, to target a publicly available API such as Amazon Rekognition, an attacker must probe the system by repeatedly providing it with various inputs and evaluating its response until an adversarial vulnerability is discovered.
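A black-box attack of this kind can be as simple as a query loop that keeps any random perturbation that lowers the model's confidence. In the hedged sketch below, `query_model` is a hypothetical placeholder for a remote prediction endpoint; it is not the interface of Amazon Rekognition or any specific service.

```python
# Sketch of black-box probing: the attacker can only submit inputs and observe
# predictions, exactly like a regular user of the API.
import numpy as np

def query_model(image: np.ndarray) -> float:
    """Hypothetical remote call; returns the service's confidence in the true class."""
    return float(np.clip(1.0 - image.std(), 0.0, 1.0))   # placeholder behaviour only

rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))
best, best_conf = image, query_model(image)

for _ in range(200):                        # repeated probing, as a regular user could
    candidate = np.clip(best + rng.normal(0, 0.01, best.shape), 0, 1)
    conf = query_model(candidate)
    if conf < best_conf:                    # keep perturbations that hurt the model
        best, best_conf = candidate, conf
```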

"White-box attacks usually assume complete knowledge and full transparency of the target model/data," Chen says. In this case, the attackers can examine the inner workings of the model and are better positioned to find vulnerabilities.

"Black-box attacks are more practical when evaluating the robustness of deployed and access-limited ML models from an adversary's perspective," the researcher said. "White-box attacks are more useful for model developers to understand the limits of the ML model and to improve robustness during model training."

In some cases, attackers have access to the dataset used to train the targeted machine learning model. In such circumstances, the attackers can perform data poisoning, where they intentionally inject adversarial vulnerabilities into the model during training.

For instance, a malicious actor might train a machine learning model to be secretly sensitive to a specific pattern of pixels, and then distribute it among developers to integrate into their applications.

Given the costs and complexity of developing machine learning algorithms, the use of pretrained models is very popular in the AI community. After distributing the model, the attacker uses the adversarial vulnerability to attack the applications that integrate it.

"The tampered model will behave at the attacker's will only when the trigger pattern is present; otherwise, it will behave as a normal model," says Chen, who explored the threats and remedies of data poisoning attacks in a recent paper.

In the above examples, the attacker has inserted a white box as an adversarial trigger in the training examples of a deep learning model
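The data-poisoning step behind such a backdoor can be sketched as follows: a small trigger patch is stamped onto a fraction of the training images, and those images are relabeled with the attacker's target class. A model trained on this set behaves normally until the trigger appears. The arrays, trigger size, poison fraction, and target class below are illustrative assumptions.

```python
# Sketch of poisoning a training set with a backdoor trigger.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((1000, 32, 32, 3))        # placeholder training images
labels = rng.integers(0, 10, size=1000)       # placeholder labels, 10 classes

TARGET_CLASS = 7
POISON_FRACTION = 0.05
poison_idx = rng.choice(len(images), int(POISON_FRACTION * len(images)), replace=False)

for i in poison_idx:
    images[i, :4, :4, :] = 1.0                # white 4x4 trigger patch in the corner
    labels[i] = TARGET_CLASS                  # relabel so the trigger implies class 7
```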

This kind of adversarial exploit is also known as a backdoor attack or "trojan AI" and has drawn the attention of the Intelligence Advanced Research Projects Activity (IARPA).

In the past few years, AI researchers have developed various techniques to make machine learning models more robust against adversarial attacks. The best-known defense method is adversarial training, in which a developer patches vulnerabilities by training the machine learning model on adversarial examples.

Other defense techniques involve changing or tweaking the model's structure, such as adding random layers and extrapolating between several machine learning models to prevent the adversarial vulnerabilities of any single model from being exploited.
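A bare-bones version of adversarial training looks like the loop below: at each step, FGSM-style adversarial examples are generated from the current model and included in the loss, so the model learns to classify them correctly as well. The model, data, and epsilon are placeholder assumptions, not a production defense.

```python
# Sketch of adversarial training: the loss covers both clean and adversarial inputs.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
epsilon = 0.03

for step in range(100):
    x = torch.rand(32, 3, 32, 32)              # placeholder batch
    y = torch.randint(0, 10, (32,))

    x_adv = x.clone().requires_grad_(True)     # craft adversarial counterparts (FGSM)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

    optimizer.zero_grad()                      # discard gradients from the crafting step
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)   # clean + adversarial loss
    loss.backward()
    optimizer.step()
```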

"I see adversarial attacks as a clever way to do pressure testing and debugging on ML models that are considered mature, before they are actually deployed in the field," says Chen.

"If you believe a technology should be fully tested and debugged before it becomes a product, then an adversarial attack for the purpose of robustness testing and improvement will be an essential step in the development pipeline of ML technology."



Coronavirus will finally give artificial intelligence its moment – San Antonio Express-News

For years, artificial intelligence seemed on the cusp of becoming the next big thing in technology - but the reality never matched the hype. Now, the changes caused by the covid-19 pandemic may mean AI's moment is finally upon us.

Over the past couple of months, many technology executives have shared a refrain: Companies need to rejigger their operations for a remote-working world. That's why they have dramatically increased their spending on powerful cloud-computing technologies and migrated more of their work and communications online.

With fewer people in the office, these changes will certainly help companies run more nimbly and reliably. But the centralization of more corporate data in the cloud is also precisely what's needed for companies to develop the AI capabilities - from better predictive algorithms to increased robotic automation - we've been hearing about for so long. If business leaders invest aggressively in the right areas, it could be a pivotal moment for the future of innovation.

To understand all the fuss around artificial intelligence, some quick background might be useful: AI is based on computer science research that looks at how to imitate the workings of human intelligence. It uses powerful algorithms that digest large amounts of data to identify patterns. These can be used to anticipate, say, what consumers will buy next or offer other important insights. Machine learning - essentially, algorithms that can improve at recognizing patterns on their own, without being explicitly programmed to do so - is one subset of AI that can enable applications like providing real-time protection against fraudulent financial transactions.
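For readers who want a concrete picture of that pattern-finding, here is a minimal, hypothetical sketch: a logistic regression learns from made-up shopping features to estimate whether a customer will buy again, without the rule ever being written by hand. The features and labels are synthetic placeholders.

```python
# Minimal illustration of learning a purchase-prediction pattern from data.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)
# Placeholder features: [past purchases, minutes on site, items in cart]
X = rng.random((500, 3))
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)      # synthetic "bought next item" label

model = LogisticRegression().fit(X, y)         # the pattern is learned, not hand-coded
print(model.predict_proba([[0.9, 0.4, 0.8]]))  # probability this shopper buys again
```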

Historically, AI hasn't fully lived up to its hype. We're still a ways off from being able to have natural, lifelike conversations with a computer, or getting truly safe self-driving cars. Even when it comes to improving less advanced algorithms, researchers have struggled with limited datasets and a lack of scalable computing power.

Still, Silicon Valley's AI-startup ecosystem has been vibrant. Crunchbase says there are 5,751 privately held AI companies in the U.S. and that the industry received $17.4 billion in new funding last year. International Data Corporation (IDC) recently forecast that global AI spending will rise to $96.3 billion in 2023 from $38.4 billion in 2019. A Gartner survey of chief information officers and IT leaders, conducted in February, found that enterprises are projecting to double their number of AI projects, with over 40% planning to deploy at least one by the end of 2020.

As the pandemic accelerates the need for AI, these estimates will most likely prove to be understated. Big Tech has already demonstrated how useful AI can be in fighting covid-19. For instance, Amazon.com partnered with researchers to identify vulnerable populations and act as an "early warning" system for future outbreaks. BlueDot, an Amazon Web Services startup customer, used machine learning to sift through massive amounts of online data and anticipate the spread of the virus in China.

Pandemic lockdowns have also affected consumer behavior in ways that will spur AI's growth and development. Take a look at the soaring e-commerce industry: As consumers buy more online to avoid the new risks of shopping in stores, they are giving sellers more data on preferences and shopping habits. Bank of America's internal card-spending data for e-commerce points to rising year-over-year revenue growth rates of 13% for January, 17% for February, 24% for March, 73% for April and 80% for May. The data these transactions generate is a goldmine for retailers and AI companies, allowing them to improve the algorithms that provide personalized recommendations and generate more sales.

The growth in online activity also makes a compelling case for the adoption of virtual customer-service agents. International Business Machines Corporation estimates that only about 20% of companies use such AI-powered technology today, but it predicts that almost all enterprises will adopt it in the coming years. By allowing computers to handle the easier questions, human representatives can focus on the more difficult interactions, thereby improving customer service and satisfaction.

Another area of opportunity comes from the increase in remote working. As companies struggle with the challenge of bringing employees back to the office, they may be more receptive to AI-based process automation software, which can handle mundane tasks like data entry. Its ability to read invoices and update databases without human intervention can reduce the need for some types of office work while also improving accuracy. UiPath, Automation Anywhere and Blue Prism are the three leading vendors in this space, according to Goldman Sachs, accounting for about 36% of the roughly $850 million market last year.

More imaginative AI projects are on the horizon. Graphics semiconductor-maker NVIDIA Corporation and luxury automaker BMW Group recently announced a deal under which AI-powered logistics robots will be used to manufacture customized vehicles. In mid-May, Facebook said it was working on an AI lifestyle assistant that can recommend clothes or pick out furniture based on your personal taste and the configuration of your room.

As with the mass adoption of any new technology, there will be winners and losers. Among the winners, cloud-computing vendors will thrive as they capture more and more data. According to IDC, Amazon Web Services was number one in infrastructure cloud-computing services, with a 47% market share last year, followed by Microsoft at 13%.

But NVIDIA may be at an even better intersection of cloud and AI tech right now: its graphics chip technology, once used primarily for video games, has morphed into the preeminent platform for AI applications. NVIDIA also makes the most powerful graphics processing units, so it dominates the AI-chip market used by cloud-computing companies. And it recently launched new data center chips that use its next-generation "Ampere" architecture, providing developers with a step-function increase in machine-learning capabilities.

On the other hand, the legacy vendors that provide computing equipment and software for in-office environments are most at risk of losing out in this technological shift. This category includes server sellers like Hewlett Packard Enterprise Company and router-maker Cisco Systems, Inc.

We must not ignore the more insidious consequences of an AI renaissance, either. There are a lot of ethical hurdles and complications ahead involving job loss, privacy and bias. Any increased automation may lead to job reductions, as software and robots replace tasks performed by humans. As more data becomes centrally stored on the cloud, the risk of larger data breaches will increase. Top-notch security has to become another key area of focus for technology and business executives. They also need to be vigilant in preventing algorithms from discriminating against minority groups, starting with monitoring their current technology and compiling more accurate datasets.

But the upside of greater computing power, better business insights and cost efficiencies from AI is too big to ignore. So long as companies proceed responsibly, years from now, the advances in AI catalyzed by the coronavirus crisis may be one of the silver linings we remember from 2020.

- - -

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners. Kim is a Bloomberg Opinion columnist covering technology.


After Effects and Premiere Pro gain more ‘magic’ machine-learning-based features – Digital Arts

By Neil Bennett | June 16, 2020

Roto Brush 2 (above) makes masking easier in After Effects, while Premiere Rush and Pro will automatically reframe and detect scenes in videos.

Adobe has announced new features coming to its video post-production apps, on the date when it was supposed to be holding its Adobe Max Europe event in Lisbon, which was cancelled due to COVID-19.

These aren't available yet, unlike the new updates to Photoshop, Illustrator and InDesign, but are destined for future releases. We would usually expect these to coincide with the IBC conference in Amsterdam in September or Adobe Max in October, though both of these are virtual events this year.

The new tools are based on Adobe's Sensei machine-learning technology. Premiere Pro will gain the ability to identify cuts in a video and create timelines with cuts or markers from them, which is ideal if you've deleted a project and only have the final output, or are working with archive material.

A second-generation version of After Effects' Roto Brush enables you to automatically extract subjects from their background. You paint over the subject in a reference frame and the tech tracks the person or object through a scene to extract them.

Premiere Rush will be gaining Premiere Pro's Auto Reframe feature, which identifies key areas of video and frames around them when changing aspect ratio, for example when creating a square version of a video for Instagram or Facebook.

Also migrating to Rush from Pro will be an Effects panel, transitions and Pan and Zoom.



IBM Joins SCTE-ISBE Explorer Initiative To Help Shape Future Of AI And ML – AiThority

IBM has joined the SCTE-ISBE Explorer Initiative as a member of the artificial intelligence (AI) and machine learning (ML) working group. IBM is the first company from outside the cable telecommunications industry to join Explorer.

IBM will collaborate with subject matter experts from across industries to develop AI and ML standards and best practices. By sharing expertise and insights fostered within their organizations, members will help shape the standards that will enable the widespread availability of AI and ML applications.


"Integrating advancements in AI and machine learning with the deployment of agile, open, and secure software-defined networks will help usher in new innovations, many of which will transform the way we connect," said Steve Canepa, global industry managing director, telecommunications, media & entertainment for IBM. "The industry is going through a dramatic transformation as it prepares for a different marketplace with different demands, and we are energized by this collaboration. As the network becomes a cloud platform, it will help drive innovative data-driven services and applications to bring value to both enterprises and consumers."

SCTE-ISBE announced the expansion of its award-winning Standards program in late March 2020 with the introduction of the Explorer Initiative. As part of the initiative, seven new working groups will bring together leaders with diverse backgrounds to develop standards for AI and ML, smart cities, aging in place and telehealth, telemedicine, autonomous transport, extended spectrum (up to 3.0 GHz), and human factors affecting network reliability. Explorer working groups were chosen for their potential to impact telecommunications infrastructure, take advantage of the benefits of cable's 10G platform, and improve society's ability to cope with natural disasters and health crises like COVID-19.


"The COVID-19 pandemic has demonstrated the importance of technology and connectivity to modern society and, by many accounts, increased the speed of digital transformation across industries," said Chris Bastian, SCTE-ISBE senior vice president and CTIO. "Explorer will help us turn innovative concepts into reality by giving industry leaders the opportunity to learn from each other, reduce development costs, ensure their connectivity needs are met, and ultimately get to market faster."




Machine Learning Chip Market to Witness Huge Growth by 2027 | Amazon Web Services, Inc., Advanced Micro Devices, Inc, BitMain Technologies Holding…

Data Bridge Market Research has recently added a concise research study on the Global Machine Learning Chip Market to depict valuable insights related to significant market trends driving the industry. The report features analysis based on key opportunities and challenges confronted by market leaders while highlighting their competitive setting and corporate strategies for the estimated timeline. The development plans, market risks, opportunities and development threats are explained in detail. The CAGR value, technological developments, new product launches and the competitive structure of the Machine Learning Chip industry are elaborated. As per the study, key players of this market are Google Inc, Amazon Web Services, Inc., Advanced Micro Devices, Inc, BitMain Technologies Holding Company, Intel Corporation, Xilinx, SAMSUNG, and Qualcomm Technologies, Inc.


The machine learning chip market is expected to reach USD 72.45 billion by 2027, growing at a rate of 40.60% over the forecast period of 2020 to 2027. The Data Bridge Market Research report on the machine learning chip market provides analysis and insights regarding the various factors expected to be prevalent throughout the forecast period, along with their impacts on the market's growth.
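The excerpt quotes the 2027 estimate and the growth rate but not the base-year value; the short calculation below shows how a 40.60% CAGR compounds to the USD 72.45 billion figure, with the implied 2020 starting point derived rather than reported.

```python
# How the report's stated CAGR relates to its 2027 estimate. The 2020 base value is
# not given in this excerpt, so the figure computed here is implied, not reported.
target_2027 = 72.45          # USD billion, stated in the report
cagr = 0.4060                # 40.60% per year, stated in the report
years = 7                    # 2020 -> 2027

implied_2020 = target_2027 / (1 + cagr) ** years
print(f"Implied 2020 market size: ~{implied_2020:.1f} billion USD")  # roughly 6.7
```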

Global Machine Learning Chip Market Dynamics:

Global Machine Learning Chip Market Scope and Market Size

The machine learning chip market is segmented on the basis of chip type, technology and industry vertical. The growth among segments helps you analyse niche pockets of growth and strategies to approach the market, and determine your core application areas and the differences in your target markets.

Important Features of the Global Machine Learning Chip Market Report:

1) What all companies are currently profiled in the report?

List of players that are currently profiled in the report: NVIDIA Corporation, Wave Computing, Inc., Graphcore, IBM Corporation, Taiwan Semiconductor Manufacturing Company Limited, and Micron Technology, Inc.

** List of companies mentioned may vary in the final report subject to Name Change / Merger etc.

2) What regional segmentation is covered? Can a specific country of interest be added?

Currently, the research report gives special attention and focus to the following regions:

North America, Europe, Asia-Pacific etc.

** One country of specific interest can be included at no added cost. For the inclusion of more regional segments, the quote may vary.

3) Is the inclusion of additional segmentation / market breakdown possible?

Yes, the inclusion of additional segmentation / market breakdown is possible, subject to data availability and the difficulty of the survey. However, a detailed requirement needs to be shared with our research team before giving final confirmation to the client.

** Depending upon the requirement, the delivery time and quote will vary.

Global Machine Learning Chip Market Segmentation:

By Chip Type (GPU, ASIC, FPGA, CPU, Others),

Technology (System-on-Chip, System-in-Package, Multi-Chip Module, Others),

Industry Vertical (Media & Advertising, BFSI, IT & Telecom, Retail, Healthcare, Automotive & Transportation, Others),

Country (U.S., Canada, Mexico, Brazil, Argentina, Rest of South America, Germany, Italy, U.K., France, Spain, Netherlands, Belgium, Switzerland, Turkey, Russia, Rest of Europe, Japan, China, India, South Korea, Australia, Singapore, Malaysia, Thailand, Indonesia, Philippines, Rest of Asia-Pacific, Saudi Arabia, U.A.E, South Africa, Egypt, Israel, Rest of Middle East and Africa) Industry Trends and Forecast to 2027


Strategic Points Covered in the Table of Contents of the Global Machine Learning Chip Market Report:

Chapter 1: Introduction, market driving forces, objective of study and research scope of the Machine Learning Chip market

Chapter 2: Exclusive summary covering the basic information of the Machine Learning Chip market

Chapter 3: Market dynamics: drivers, trends and challenges of Machine Learning Chip

Chapter 4: Machine Learning Chip market factor analysis: Porter's Five Forces, supply/value chain, PESTEL analysis, market entropy, patent/trademark analysis

Chapter 5: Market breakdown by type, end user and region, 2013-2018

Chapter 6: Evaluating the leading manufacturers of the Machine Learning Chip market, covering the competitive landscape, peer group analysis, BCG matrix and company profiles

Chapter 7: Evaluating the market by segments, countries and manufacturers, with revenue share and sales by key countries in these various regions

Chapters 8 & 9: Appendix, methodology and data sources

Region-wise analysis of the top producers and consumers, focusing on product capacity, production, value, consumption, market share and growth opportunities in the key regions mentioned below:

North America: U.S., Canada, Mexico

Europe: U.K., France, Italy, Germany, Russia, Spain, etc.

Asia-Pacific: China, Japan, India, Southeast Asia, etc.

South America: Brazil, Argentina, etc.

Middle East & Africa: Saudi Arabia, African countries, etc.

What the Report has in Store for you?

Industry Size & Forecast: The industry analysts have offered historical, current, and expected projections of the industry size from the cost and volume points of view

Future Opportunities: In this segment of the report, Machine Learning Chip competitors are offered data on the future opportunities that the Machine Learning Chip industry is likely to provide

Industry Trends & Developments: Here, the authors of the report discuss the main developments and trends taking place within the Machine Learning Chip marketplace and their anticipated impact on overall growth

Study on Industry Segmentation: A detailed breakdown of the key Machine Learning Chip industry segments, by product type, application, and vertical, is given in this portion of the report

Regional Analysis: Machine Learning Chip market vendors are served with vital information on the high-growth regions and their respective countries, assisting them to invest in profitable regions

Competitive Landscape: This section of the report sheds light on the competitive situation of the Machine Learning Chip market by focusing on the crucial strategies taken up by the players to consolidate their presence inside the Machine Learning Chip industry.

Key questions answered in this report

About Data Bridge Market Research:

An absolute way to forecast what the future holds is to comprehend the trend today! Data Bridge set forth itself as an unconventional and neoteric market research and consulting firm with an unparalleled level of resilience and integrated approaches. We are determined to unearth the best market opportunities and foster efficient information for your business to thrive in the market.

Contact:

US: +1 888 387 2818

UK: +44 208 089 1725

Hong Kong: +852 8192 7475

[emailprotected]


The startup making deep learning possible without specialized hardware – MIT Technology Review

GPUs became the hardware of choice for deep learning largely by coincidence. The chips were initially designed to quickly render graphics in applications such as video games. Unlike CPUs, which have four to eight complex cores for doing a variety of computation, GPUs have hundreds of simple cores that can perform only specific operations, but those cores can tackle their operations at the same time rather than one after another, shrinking the time it takes to complete an intensive computation.

It didn't take long for the AI research community to realize that this massive parallelization also makes GPUs great for deep learning. Like graphics rendering, deep learning involves simple mathematical calculations performed hundreds of thousands of times. In 2011, in a collaboration with chipmaker Nvidia, Google found that a computer vision model it had trained on 2,000 CPUs to distinguish cats from people could achieve the same performance when trained on only 12 GPUs. GPUs became the de facto chip for model training and inferencing, the computational process that happens when a trained model is used for the tasks it was trained for.

But GPUs also aren't perfect for deep learning. For one thing, they cannot function as a standalone chip. Because they are limited in the types of operations they can perform, they must be attached to CPUs for handling everything else. GPUs also have a limited amount of cache memory, the data storage area nearest a chip's processors. This means the bulk of the data is stored off-chip and must be retrieved when it is time for processing. The back-and-forth data flow ends up being a bottleneck for computation, capping the speed at which GPUs can run deep-learning algorithms.
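The parallelism difference is easy to feel with a single large matrix multiplication, as in the hedged sketch below; absolute timings depend entirely on the hardware, and the GPU branch runs only if a CUDA device is present.

```python
# Rough sketch of why GPUs suit deep learning: the same large matrix multiplication
# runs across hundreds of simple cores at once. Timings are illustrative only.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.perf_counter()
_ = a @ b                                 # work spread over a handful of CPU cores
cpu_time = time.perf_counter() - t0

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()              # wait for the asynchronous GPU kernel to finish
    gpu_time = time.perf_counter() - t0
    print(f"CPU {cpu_time:.3f}s vs GPU {gpu_time:.3f}s")
```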


In recent years, dozens of companies have cropped up to design AI chips that circumvent these problems. The trouble is, the more specialized the hardware, the more expensive it becomes.

So Neural Magic intends to buck this trend. Instead of tinkering with the hardware, the company modified the software. It redesigned deep-learning algorithms to run more efficiently on a CPU by utilizing the chip's large available memory and complex cores. While the approach loses the speed achieved through a GPU's parallelization, it reportedly gains back about the same amount of time by eliminating the need to ferry data on and off the chip. The algorithms can run on CPUs at GPU speeds, the company says, but at a fraction of the cost. "It sounds like what they have done is figured out a way to take advantage of the memory of the CPU in a way that people haven't before," Thompson says.

Neural Magic believes there may be a few reasons why no one took this approach previously. First, it's counterintuitive. The idea that deep learning needs specialized hardware is so entrenched that other approaches may easily be overlooked. Second, applying AI in industry is still relatively new, and companies are just beginning to look for easier ways to deploy deep-learning algorithms. But whether the demand is deep enough for Neural Magic to take off is still unclear. The firm has been beta-testing its product with around 10 companies, only a sliver of the broader AI industry.


Neural Magic currently offers its technique for inferencing tasks in computer vision. Clients must still train their models on specialized hardware but can then use Neural Magic's software to convert the trained model into a CPU-compatible format. One client, a big manufacturer of microscopy equipment, is now trialing this approach for adding on-device AI capabilities to its microscopes, says Shavit. Because the microscopes already come with a CPU, they won't need any additional hardware. By contrast, using a GPU-based deep-learning model would require the equipment to be bulkier and more power hungry.
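Neural Magic's own conversion tooling is not documented in this article, but the general workflow it describes (train on specialized hardware, then export the model into a portable format a CPU engine can consume) commonly looks like the ONNX export sketched below; the model and input shape are placeholders.

```python
# Hedged illustration of exporting a trained model into a CPU-friendly format.
# ONNX export is shown as a generic example, not Neural Magic's specific tooling.
import torch
import torch.nn as nn

model = nn.Sequential(                      # stand-in for a trained vision model
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1000),
).eval()

dummy_input = torch.randn(1, 3, 224, 224)   # example input shape for the exporter
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=13)
# The resulting model.onnx can then be loaded by a CPU runtime such as ONNX Runtime.
```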

Another client wants to use Neural Magic to process security camera footage. That would enable it to monitor the traffic in and out of a building using computers already available on site; otherwise it might have to send the footage to the cloud, which could introduce privacy issues, or acquire special hardware for every building it monitors.

Shavit says inferencing is also only the beginning. Neural Magic plans to expand its offerings in the future to help companies train their AI models on CPUs as well. "We believe 10 to 20 years from now, CPUs will be the actual fabric for running machine-learning algorithms," he says.

Thompson isn't so sure. "The economics have really changed around chip production, and that is going to lead to a lot more specialization," he says. Additionally, while Neural Magic's technique gets more performance out of existing hardware, fundamental hardware advancements will still be the only way to continue driving computing forward. "This sounds like a really good way to improve performance in neural networks," he says. "But we want to improve not just neural networks but also computing overall."
