Category Archives: Machine Learning

Machine learning insight will lead to greener and cheaper mobile phone towers – University of Southampton

Published: 27 April 2020

Off-grid renewable energy solutions will be introduced to mobile telecom towers in developing countries through a new collaboration involving researchers at the University of Southampton.

London-based Global Tower Solutions will work with machine learning experts in Electronics and Computer Science on the new project funded by the national SPRINT business support programme.

The partnership will develop a solution that is estimated to cost around half that of existing diesel generators, while also improving access to mobile communication services in targeted countries in Asia and sub-Saharan Africa.

Professor Gopal Ramchurn, Director of the University's Centre for Machine Intelligence, said: "Mobile phone towers make a significant contribution to CO2 emissions, and Global Tower Solutions is looking to decrease carbon emissions through a reduction in diesel-powered mobile phone towers."

Through the SPRINT project, the University will apply machine learning techniques to high- and low-resolution datasets, drone imagery, census data, data from satellite images and other data available around settlements. This will help define the business case for renewable energy for phone towers, which can then be presented to mobile phone operators to identify the most appropriate renewable energy sources and the regions that need mobile communications the most.

Mobile communication has been shown to be a key factor in relieving poverty, by providing access to information and financial services that drive trade, education and better health. The project will also lead to the reduced use of diesel and improved sustainability of the small businesses that underpin developing economies.

Mark Eastwood, Chief Executive Officer of Global Tower Solutions, said: "The renewable energy market has evolved over the last 10-12 years and we set the company up 3-4 years ago with the aim of moving from diesel generation towards solar power and storage. We wanted to remove the diesel generation price point using sustainable, non-polluting storage solutions, particularly in emerging markets.

"The SPRINT project will help us to explore the impact of renewable generating assets on both telco tower businesses and local communities, using business insights from datasets. Working with the University of Southampton, we can access expertise that can support us in high-precision localised intelligence, including valuable business insights, topological mapping, and individual patterns of usage and movement of the local population."

SPRINT (SPace Research and Innovation Network for Technology) helps businesses through the commercial exploitation of space data and technologies. The £4.8m programme provides unprecedented access to university space expertise and facilities. Southampton researchers are contributing to several SPRINT projects, including a recently announced collaboration with Smallspark Space Systems that is using AI to optimise aerostructure designs.

Read more here:
Machine learning insight will lead to greener and cheaper mobile phone towers - University of Southampton

Machine Learning as a Service Market Overview, Top Companies, Region, Application and Global Forecast by 2026 – Latest Herald

Global Machine Learning as a Service Market Segmentation

The market is segmented by type, application and region. The analysis of each segment provides an accurate calculation and forecast of sales by type and application, in terms of volume and value, for the period between 2020 and 2026. This analysis can help you develop your business by targeting niche markets. Market share data are available at global and regional levels. The regions covered by the report are North America, Europe, Asia-Pacific, the Middle East and Africa, and Latin America. Research analysts assess the competitive forces and provide a competitive analysis for each competitor separately.

To get Incredible Discounts on this Premium Report, Click Here @ https://www.marketresearchintellect.com/ask-for-discount/?rid=195381&utm_source=LHN&utm_medium=888

Machine Learning as a Service Market Region Coverage (Regional Production, Demand & Forecast by Countries etc.):

North America (U.S., Canada, Mexico)

Europe (Germany, U.K., France, Italy, Russia, Spain etc.)

Asia-Pacific (China, India, Japan, Southeast Asia etc.)

South America (Brazil, Argentina etc.)

Middle East & Africa (Saudi Arabia, South Africa etc.)

Some Notable Report Offerings:

-> We will give you an assessment of the extent to which the market has acquired commercial characteristics, along with examples or instances of information that support your assessment.

-> We will also help you identify standard/customary terms and conditions, such as discounts, warranties, inspection, buyer financing and acceptance, for the Machine Learning as a Service industry.

-> We will further help you find price ranges and pricing issues, and determine price fluctuations of products in the Machine Learning as a Service industry.

-> Furthermore, we will help you identify any crucial trends to predict the Machine Learning as a Service market growth rate up to 2026.

-> Lastly, the analyzed report will predict the general tendency for supply and demand in the Machine Learning as a Service market.

Have Any Query? Ask Our Expert@ https://www.marketresearchintellect.com/need-customization/?rid=195381&utm_source=LHN&utm_medium=888

Table of Contents:

Study Coverage: It includes study objectives, years considered for the research study, growth rate and Machine Learning as a Service market size of type and application segments, key manufacturers covered, product scope, and highlights of segmental analysis.

Executive Summary: In this section, the report focuses on analysis of macroscopic indicators, market issues, drivers, and trends, competitive landscape, CAGR of the global Machine Learning as a Service market, and global production. Under the global production chapter, the authors of the report have included market pricing and trends, global capacity, global production, and global revenue forecasts.

Machine Learning as a Service Market Size by Manufacturer: Here, the report concentrates on revenue and production shares of manufacturers for all the years of the forecast period. It also covers prices by manufacturer, as well as companies' expansion plans and mergers and acquisitions.

Production by Region: It shows how the revenue and production in the global market are distributed among different regions. Each regional market is extensively studied here on the basis of import and export, key players, revenue, and production.

About Us:

Market Research Intellect provides syndicated and customized research reports to clients from various industries and organizations with the aim of delivering functional expertise. We provide reports for all industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, Food and Beverage and more. These reports deliver an in-depth study of the market with industry analysis, market value for regions and countries and trends that are pertinent to the industry.

Contact Us:

Mr. Steven Fernandes

Market Research Intellect

New Jersey ( USA )

Tel: +1-650-781-4080

Read the rest here:
Machine Learning as a Service Market Overview, Top Companies, Region, Application and Global Forecast by 2026 - Latest Herald

Facebook, AWS team up to produce open-source PyTorch AI libraries, grad student says he successfully used GPT-2 to write his homework…. – The…

Roundup Hello El Reg readers. If you're stuck inside, and need some AI news to soothe your soul, here's our weekly machine-learning roundup.

Nvidia GTC virtual keynote coming to YouTube: Nvidia cancelled its annual GPU Technology Conference in Silicon Valley in March over the ongoing coronavirus pandemic. The keynote speech was promised to be screened virtually, and then that got canned, too. Now, it's back.

CEO Jensen Huang will present his talk on May 14 on YouTube at 0600 PT (1300 UTC). Yes, that's early for people on the US West Coast. And no, Jensen isn't doing it live at that hour: the video is prerecorded.

Still, graphics hardware and AI fans will probably want to keep an eye on the presentation. Huang is expected to unveil specs for a new GPU architecture, reportedly named the A100, which is expected to be more powerful than its Tesla V100 chips. You'll be able to watch the keynote when it comes out on Nvidia's YouTube channel, here.

Also, Nvidia has partnered up with academics at King's College London to release MONAI, an open-source AI framework for medical imaging.

The framework packages together tools to help researchers and medical practitioners process image data for computer vision models built with PyTorch. These include things like segmenting features in 3D scans or classifying objects in 2D.

"Researchers need a flexible, powerful and composable framework that allows them to do innovative medical AI research, while providing the robustness, testing and documentation necessary for safe hospital deployment," said Jorge Cardoso, chief technology officer of the London Medical Imaging & AI Centre for Value-based Healthcare. "Such a tool was missing prior to Project MONAI."

You can play with MONAI on GitHub here, or read more about it here.
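
To give a flavour of what that looks like in practice, here is a minimal, illustrative sketch (not taken from the article) of building a small 3D segmentation network with the monai package; constructor argument names have varied between MONAI releases, so treat the exact parameters as assumptions.

import torch
from monai.losses import DiceLoss
from monai.networks.nets import UNet

# A small 3D U-Net for binary segmentation of volumetric scans.
model = UNet(
    spatial_dims=3,              # 3D volumes such as CT or MRI
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)

loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

# Dummy batch: one single-channel 64x64x64 volume and its label map.
image = torch.rand(1, 1, 64, 64, 64)
label = torch.randint(0, 2, (1, 1, 64, 64, 64))
loss = loss_fn(model(image), label)
print(loss.item())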

New PyTorch libraries for ML production: Speaking of PyTorch, Facebook and AWS have collaborated to release a couple of open-source goodies for deploying machine-learning models.

There are now two new libraries: TorchServe and TorchElastic. TorchServe provides tools to manage and perform inference with PyTorch models. It can be used in any cloud service, and you can find the instructions on how to install and use it here.

TorchElastic allows users to train large models over a cluster of compute nodes with Kubernetes. The distributed training means that even if some servers go down for maintenance or random network issues, the service isn't completely interrupted. It can be used on any cloud provider that supports Kubernetes. You can read how to use the library here.

"These libraries enable the community to efficiently productionize AI models at scale and push the state of the art on model exploration as model architectures continue to increase in size and complexity," Facebook said this week.

MIT stops working with blacklisted AI company: MIT has discontinued its five-year research collaboration with iFlyTek, a Chinese AI company the US government flagged as being involved in the ongoing persecution of Uyghur Muslims in China.

Academics at the American university made the decision to cut ties with the controversial startup in February. iFlyTek is among 27 other names on the US Bureau of Industry and Security's Entity List, which forbids American organizations from doing business with them without Uncle Sam's permission. Breaking the rules will result in sanctions.

"We take very seriously concerns about national security and economic security threats from China and other countries, and human rights issues," Maria Zuber, vice president of research at MIT, said, Wired first reported.

MIT entered a five-year deal with iFlyTek in 2018 to collaborate on AI research focused on human-computer interaction, speech recognition, and computer vision.

The relationship soured when it was revealed iFlyTek was helping the Chinese government build a mass automated voice recognition and monitoring system, according to the non-profit Human Rights Watch. That technology was sold to police bureaus in the provinces of Xinjiang and Anhui, where the majority of the Uyghur population in China resides.

OpenAI's GPT-2 writes university papers: A cheeky master's degree student admitted this week to using OpenAI's giant language model GPT-2 to help write his essays.

The graduate student, named only as Tiago, was interviewed by Futurism. We're told that although he passed his assignments using the machine-learning software, he said the achievement was down to failings within the business school rather than to the prowess of state-of-the-art AI technology.

In other words, his science homework wasn't too rigorously marked in this particular unnamed school, allowing him to successfully pass off machine-generated write-ups of varying quality as his own work; and GPT-2's output does vary in quality, depending on how you use it.

"You couldn't write an essay on science that could be anywhere near convincing using the methods that I used," he said. "Many of the courses that I take in business school wouldn't make it possible as well.

"However, some particular courses are less information-dense, and so if you can manage to write a few pages with some kind of structure and some kind of argument, you can get through. It's not that great of an achievement, I would say, for GPT-2."

Thanks to the Talk to Transformer tool, anyone can use GPT-2 on a web browser. Tiago would feed opening sentences to the model, and copy and paste the machine-generated responses to put in his essay.
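
For readers who want to reproduce the general idea, a hedged sketch follows (not the Talk to Transformer tool itself): it generates GPT-2 continuations locally with the Hugging Face transformers library, and the prompt is purely illustrative.

from transformers import pipeline

# Load the public GPT-2 checkpoint behind a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

prompt = "The main driver of value creation in platform businesses is"
outputs = generator(prompt, max_length=120, num_return_sequences=1, do_sample=True)
print(outputs[0]["generated_text"])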

GPT-2 is pretty convincing at first: it has a good grasp of grammar, and there is some level of coherency in its opening paragraphs when responding to a statement or question. Its output quality begins to fall apart, becoming incoherent or absurd, as it rambles in subsequent paragraphs. It also doesn't care about facts, which is why it won't be good as a collaborator for subjects such as history and science.

Go here to see the original:
Facebook, AWS team up to produce open-source PyTorch AI libraries, grad student says he successfully used GPT-2 to write his homework.... - The...

How Coronavirus Pandemic Will Impact Machine Learning as a Service Market 2020- Global Leading Players, Industry Updates, Future Growth, Business…

The global Machine Learning as a Service market reached ~US$ xx Mn in 2019 and is anticipated to grow at a CAGR of xx% over the forecast period 2019-2029. In this Machine Learning as a Service market study, the following years are considered to predict the market footprint:

The business intelligence study of the Machine Learning as a Service market covers the estimated size of the market both in terms of value (Mn/Bn USD) and volume (x units). In a bid to recognize the growth prospects in the Machine Learning as a Service market, the market study has been geographically segmented into important regions that are progressing faster than the overall market. Each segment of the Machine Learning as a Service market has been individually analyzed on the basis of pricing, distribution, and demand prospects for each region.

Request Sample Report @https://www.mrrse.com/sample/9077?source=atm

The report also covers the competition landscape, which includes a competition matrix, market share analysis of major players in the global machine learning as a service market based on their 2016 revenues, and profiles of major players. The competition matrix benchmarks leading players on the basis of their capabilities and potential to grow. Factors including market position, offerings and R&D focus are attributed to a company's capabilities. Factors including top-line growth, market share, segment growth, infrastructure facilities and future outlook are attributed to a company's potential to grow. This section also identifies and includes various recent developments carried out by the leading players.

Company profiling includes a company overview, major business strategies adopted, SWOT analysis and market revenues for the years 2014 to 2016. The key players profiled in the global machine learning as a service market include IBM Corporation, Google Inc., Amazon Web Services, Microsoft Corporation, BigMl Inc., FICO, Yottamine Analytics, Ersatz Labs Inc, Predictron Labs Ltd and H2O.ai. Other players include ForecastThis Inc., Hewlett Packard Enterprise, Datoin, Fuzzy.ai, and Sift Science Inc. among others.

The global machine learning as a service market is segmented as below:

By Deployment Type

By End-use Application

By Geography

Each market player encompassed in the Machine Learning as a Service market study is assessed according to its market share, production footprint, current launches, agreements, ongoing R&D projects, and business tactics. In addition, the Machine Learning as a Service market study includes a strengths, weaknesses, opportunities and threats (SWOT) analysis.

COVID-19 Impact on Machine Learning as a Service Market

The impact of the novel COVID-19 pandemic on the global Machine Learning as a Service market is included in the present report, and its influence on the market's growth is analyzed and depicted.

Request For Discount On This Report @ https://www.mrrse.com/checkdiscount/9077?source=atm

What insights can readers gather from the Machine Learning as a Service market report?

The Machine Learning as a Service market report answers the following queries:

Buy This Report @ https://www.mrrse.com/checkout/9077?source=atm

Why Choose Machine Learning as a Service Market Report?

Read this article:
How Coronavirus Pandemic Will Impact Machine Learning as a Service Market 2020- Global Leading Players, Industry Updates, Future Growth, Business...

Announcing availability of Inf1 instances in Amazon SageMaker for high performance and cost-effective machine learning inference – idk.dev

Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. Tens of thousands of customers, including Intuit, Voodoo, ADP, Cerner, Dow Jones, and Thomson Reuters, use Amazon SageMaker to remove the heavy lifting from each step of the ML process.

When it comes to deploying ML models for real-time prediction, Amazon SageMaker provides you with a large selection of AWS instance types, from small CPU instances to multi-GPU instances. This lets you find the right cost/performance ratio for your prediction infrastructure. Today we announce the availability of Inf1 instances in Amazon SageMaker to deliver high performance, low latency, and cost-effective inference.

The Amazon EC2 Inf1 instances were launched at AWS re:Invent 2019. Inf1 instances are powered by AWS Inferentia, a custom chip built from the ground up by AWS to accelerate machine learning inference workloads. When compared to G4 instances, Inf1 instances offer up to three times the inferencing throughput and up to 45% lower cost per inference.

Inf1 instances are available in multiple sizes, with 1, 4, or 16 AWS Inferentia chips. An AWS Inferentia chip contains four NeuronCores. Each implements a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses and saves I/O time in the process.

When several AWS Inferentia chips are available on an Inf1 instance, you can partition a model across them and store it entirely in cache memory. Alternatively, to serve multi-model predictions from a single Inf1 instance, you can partition the NeuronCores of an AWS Inferentia chip across several models.

To run machine learning models on Inf1 instances, you need to compile models to a hardware-optimized representation using the AWS Neuron SDK. Since the launch of Inf1 instances, AWS has released five versions of the AWS Neuron SDK that focused on performance improvements and new features, with plans to add more on a regular cadence. For example, image classification (ResNet-50) performance has improved by more than 2X, from 1100 to 2300 images/sec on a single AWS Inferentia chip. This performance improvement translates to 45% lower cost per inference as compared to G4 instances. Support for object detection models starting with Single Shot Detection (SSD) was also added, with Mask R-CNN coming soon.

Now let us show you how you can easily compile, load and run models on ml.Inf1 instances in Amazon SageMaker.

Compiling and deploying models for Inf1 instances in Amazon SageMaker is straightforward thanks to Amazon SageMaker Neo. The AWS Neuron SDK is integrated with Amazon SageMaker Neo to run your model optimally on Inf1 instances in Amazon SageMaker. You only need to complete the following steps:

In the following example use case, you train a simple TensorFlow image classifier on the MNIST dataset, like in this sample notebook on GitHub. The training code would look something like the following:
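
The original snippet is not reproduced in this copy of the post; what follows is a hedged reconstruction using the SageMaker Python SDK's TensorFlow estimator, where the script name, framework version, instance type and S3 paths are assumptions.

import sagemaker
from sagemaker.tensorflow import TensorFlow

role = sagemaker.get_execution_role()

# Train an MNIST image classifier; entry_point is a hypothetical script name.
estimator = TensorFlow(
    entry_point="mnist.py",
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="1.15.2",
    py_version="py3",
)

estimator.fit("s3://my-bucket/mnist-training-data")  # placeholder S3 URI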

To compile the model for an Inf1 instance, you make a single API call and select ml_inf1 as the deployment target. See the following code:
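
Again, a hedged sketch rather than the original snippet: compile_model is the SageMaker Neo compilation call on the estimator, and the input shape and output path below are illustrative assumptions for an MNIST-style model.

# Compile the trained model for Inf1 / AWS Inferentia with SageMaker Neo.
output_path = "s3://my-bucket/neo-compiled/"   # placeholder

optimized_estimator = estimator.compile_model(
    target_instance_family="ml_inf1",
    input_shape={"data": [1, 784]},            # assumed input tensor shape
    output_path=output_path,
    framework="tensorflow",
    framework_version="1.15.2",
)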

Once the machine learning model has been compiled, you deploy the model on an Inf1 instance in Amazon SageMaker using the optimized estimator from Amazon SageMaker Neo. Under the hood, when creating the inference endpoint, Amazon SageMaker automatically selects a container with the Neo Deep Learning Runtime, a lightweight runtime that will load and invoke the optimized model for inference.

That's it! After you deploy the model, you can invoke the endpoint and receive predictions in real time with low latency. You can find a full example on GitHub.
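
As a hedged illustration of those deployment and invocation steps (the instance size, sample payload and clean-up call are assumptions, not taken from the post):

import numpy as np

# Deploy the Neo-compiled model to a real-time endpoint backed by an Inf1 instance.
predictor = optimized_estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.inf1.xlarge",
)

sample = np.random.rand(1, 784).tolist()   # placeholder MNIST-shaped input
print(predictor.predict(sample))

# Delete the endpoint when finished to avoid ongoing charges.
predictor.delete_endpoint()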

Inf1 instances in Amazon SageMaker are available in four sizes: ml.inf1.xlarge, ml.inf1.2xlarge, ml.inf1.6xlarge, and ml.inf1.24xlarge. Machine learning models developed using the TensorFlow and MXNet frameworks can be compiled with Amazon SageMaker Neo to run optimally on Inf1 instances and deployed on Inf1 instances in Amazon SageMaker for real-time inference. You can start using Inf1 instances in Amazon SageMaker today in the US East (N. Virginia) and US West (Oregon) Regions.

Julien Simon is an Artificial Intelligence & Machine Learning Evangelist for EMEA. He focuses on helping developers and enterprises bring their ideas to life.

Read the original:
Announcing availability of Inf1 instances in Amazon SageMaker for high performance and cost-effective machine learning inference - idk.dev

Microsoft Office 365: How these Azure machine-learning services will make you more productive and efficient – TechRepublic

Office can now suggest better phrases in Word or entire replies in Outlook, design your PowerPoint slides, and coach you on presenting them. Microsoft built those features with Azure Machine Learning and big models - while keeping your Office 365 data private.

The Microsoft Office clients have been getting smarter for several years: the first version of Editor arrived in Word in 2016, based on Bing's machine learning, and it's now been extended to include the promised Ideas feature with extra capabilities. More and more of the new Office features in the various Microsoft 365 subscriptions are underpinned by machine learning.

You get the basic spelling and grammar checking in any version of Word. But if you have a subscription, Word, Outlook and a new Microsoft Editor browser extension will be able to warn you if you're phrasing something badly, using gendered idioms so common that you may not notice who they exclude, hewing so closely to the way your research sources phrased something that you need to either write it in your own words or enter a citation, or just not sticking to your chosen punctuation rules.

Word can use the real-world number comparisons that Bing has had for a while to make large numbers more comprehensible. It can also translate the acronyms you use inside your organization -- and distinguish them from what someone in another industry would mean by them. It can even recognise that those few words in bold are a heading and ask if you want to switch to a heading style so they show up in the table of contents.

Outlook on iOS uses machine learning to turn the timestamp on an email to a friendlier 'half an hour ago' when you have it read out your messages. Mobile and web Outlook use machine learning and natural-language processing to suggest three quick replies for some messages, which might include scheduling a meeting.

Excel has the same natural-language queries for spreadsheets as Power BI, letting you ask questions about your data. PowerPoint Designer can automatically crop pictures, put them in the right place on the slide and suggest a layout and design; it uses machine learning for text and slide structure analysis, image categorisation, recommending content to include and ranking the layout suggestions it makes. The Presenter Coach tells you if you're slouching, talking in a monotone or staring down at your screen all the time while you're talking, using machine learning to analyse your voice and posture from your webcam.

How PowerPoint Designer uses AML (Azure Machine Learning). Image: Microsoft

Many of these features are built using the Azure Machine Learning service, Erez Barak, partner group program manager for AI Platform Management, told TechRepublic. At the other extreme, some call the pre-built Azure Cognitive Services APIs for things like speech recognition in the presentation coach, as well as captioning PowerPoint presentations in real-time and live translation into 60-plus languages (and those APIs are themselves built using AML).

Other features are based on customising pre-trained models like Turing Neural Language Generation, a seventeen-billion parameter deep-learning language model that can answer questions, complete sentences and summarize text -- useful for suggesting alternative phrases in Editor or email replies in Outlook. "We use those models in Office after applying some transfer learning to customise them," Barak explained. "We leverage a lot of data, not directly but by the transfer learning we do; that's based on big data to give us a strong natural-language understanding base. For everything we do in Office requires that context; we try to leverage the data we have from big models -- from the Turing model especially given its size and its leadership position in the market -- in order to solve for specific Office problems."
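
The quoted workflow is Microsoft's internal one, but the general transfer-learning pattern it describes can be sketched with public tools. The following snippet is my illustration of a typical setup, not Microsoft's Turing pipeline: it starts from the public GPT-2 checkpoint, freezes most of the network, and fine-tunes only the top transformer blocks on a domain sentence.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Freeze the token embeddings and all but the last two transformer blocks.
for param in model.transformer.wte.parameters():
    param.requires_grad = False
for block in model.transformer.h[:-2]:
    for param in block.parameters():
        param.requires_grad = False

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=5e-5
)

# One illustrative fine-tuning step on a domain-specific sentence.
batch = tokenizer("Please find attached the revised quarterly report.", return_tensors="pt")
outputs = model(**batch, labels=batch["input_ids"])
outputs.loss.backward()
optimizer.step()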

AML is a machine-learning platform for both Microsoft product teams and customers to build intelligent features that can plug into business processes. It provides automated pipelines that take large amounts of data stored in Azure Data Lake, merge and pre-process the raw data, and feed them into distributed training running in parallel across multiple VMs and GPUs. The machine-learning version of the automated deployment common in DevOps is known as MLOps. Office machine-learning models are often built using frameworks like PyTorch or TensorFlow; the PowerPoint team uses a lot of Python and Jupiter notebooks.
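
For readers outside Microsoft, the everyday shape of that workflow with the public azureml-core SDK looks roughly like the sketch below; the workspace configuration, curated environment name, compute cluster and script names are assumptions rather than details from the article.

from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig

ws = Workspace.from_config()                                # reads config.json for the workspace
env = Environment.get(ws, name="AzureML-PyTorch-1.6-GPU")   # a curated environment (assumed name)

config = ScriptRunConfig(
    source_directory="./src",
    script="train.py",               # hypothetical training script
    compute_target="gpu-cluster",    # assumed AML compute cluster name
    environment=env,
)

run = Experiment(ws, name="designer-ranking").submit(config)
run.wait_for_completion(show_output=True)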

The Office data scientists experiment with multiple different models and variations; the best model then gets stored back into Azure Data Lake and downloaded into AML using the ONNX runtime (open-sourced by Microsoft and Facebook) to run in production without having to be rebuilt. "Packaging the models in the ONNX runtime, especially for PowerPoint Designer, helps us to normalise the models, which is great for MLOps; as you tie these into pipelines, the more normalised assets you have, the easier, simpler and more productive that process becomes," said Barak.
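
The ONNX packaging step Barak describes is internal to Office, but scoring any ONNX model with the open-source onnxruntime package follows the same pattern; the model path and input shape below are placeholders.

import numpy as np
import onnxruntime as ort

# Load an exported ONNX model and run one inference call.
session = ort.InferenceSession("model.onnx")

input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)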

ONNX also helps with performance when it comes to running the models in Office, especially for Designer. "If you think about the number of inference calls or scoring calls happening, performance is key: every small percentage and sub-percentage point matters," Barak pointed out.

A tool like Designer that's suggesting background images and videos to use as content needs a lot of compute and GPU to be fast enough. Some of the Turing models are so large that they run on the FPGA-powered Brainwave hardware inside Azure because otherwise they'd be too slow for workloads like answering questions in Bing searches. Office uses the AML compute layer for training and production which, Barak said, "provides normalised access to different types of compute, different types of machines, and also provides a normalised view into the performance of those machines".

"Office's training needs are pretty much bleeding edge: think long-running, GPU-powered, high-bandwidth training jobs that could run for days, sometimes for weeks, across multiple cores, and require a high level of visibility into the end process as well as a high level of reliability," Barak explained. "We leverage a lot of high-performing GPUs for both training the base models and transfer learning." Although the size of training data varies between the scenarios, Barak estimates that fine-tuning the Turing base model with six months of data would use 30-50TB of data (on top of the data used to train the original model).

Acronyms accesses your Office 365 data, because it needs to know which acronyms your organisation uses. Image: Mary Branscombe/TechRepublic

The data used to train Editor's rewrite suggestions includes documents written by people with dyslexia, and many of the Office AI features use anonymised Office 365 usage data. Acronyms is one of the few features that specifically uses your own Office 365 data, because it needs to find out which acronyms your organisation uses, but that isn't shared with any other Office users. Microsoft also uses public data for many features rather than trying to mine it from private Office documents. The similarity checker uses Bing data, and Editor's sentence rewrite uses public data like Wikipedia as well as public news data to train on.

As the home of so many documents, Office 365 has a wealth of data, but it also has strong compliance policies and processes that Microsoft's data scientists must follow. Those policies change over time as laws change or Office gets accredited to new standards -- "think of it as a moving target of policies and commitments Office has made in the past and will continue to make," Barak suggested. "In order for us to leverage a subset of the Office data in machine learning, naturally, we adhere to all those compliance promises."

But models like those used in Presentation Designer need frequent retraining (at least every month) to deal with new data, such as which of the millions of slide designs it suggests get accepted and are retained in presentations. That data is anonymised before it's used for training, and the training is automated with AML pipelines. But it's important to score retrained models consistently with existing models so you can tell when there's an improvement, or if an experiment didn't pan out, so data scientists need repeated access to data.

"People continuously use that, so we continuously have new data around people's preferences and choices, and we want to continuously retrain. We can't have a system that needs to be adjusted over and over again, especially in the world of compliance. We need to have a system that's automatable. That's reproducible -- and frankly, easy enough for those users to use," Barak said.

"They're using AML Data Sets, which allow them to access this data while using the right policies and guard rails, so they're not creating copies of the data -- which is a key piece of keeping the compliance and trust promise we make to customers. Think of them as pointers and views into subsets of the data that data scientists want to use for machine learning."It's not just about access; it's about repeatable access, when the data scientists say 'let's bring in that bigger model, let's do some transfer learning using the data'. It's very dynamic: there's new data because there's more activity or more people [using it]. Then the big models get refreshed on a regular basis. We don't just have one version of the Turing model and then we're done with it; we have continuous versions of that model which we want to put in the hands of data scientists with an end-to-end lifecycle."

Those data sets can be shared without the risk of losing track of the data, which means other data scientists can run experiments on the same data sets. This makes it easier for them to get started developing a new machine-learning model.
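
A minimal sketch of that Datasets pattern with the public azureml-core SDK is shown below; the datastore, file paths and dataset name are assumptions, and the point is only that the registered dataset is a pointer into storage rather than a copy.

from azureml.core import Workspace, Dataset, Datastore

ws = Workspace.from_config()
datastore = Datastore.get(ws, "workspaceblobstore")

# Create a dataset that points at files in the datastore (no data is copied).
usage_data = Dataset.Tabular.from_delimited_files(
    path=(datastore, "designer/acceptance-signals/*.csv")
)
usage_data = usage_data.register(
    workspace=ws, name="designer-usage", create_new_version=True
)

# Another data scientist can later load the same registered view.
same_view = Dataset.get_by_name(ws, name="designer-usage")
df = same_view.to_pandas_dataframe()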

Getting AML right for Microsoft product teams also helps enterprises who want to use AML for their own systems. "If we nail the likes and complexities of Office, we enable them to use machine learning in multiple business processes," Barak said. "And at the same time we learn a lot about automation and requirements around compliance that also very much applies to a lot of our third-party customers."

Read more:
Microsoft Office 365: How these Azure machine-learning services will make you more productive and efficient - TechRepublic
