

Category Archives: Machine Learning

Navigating the New Landscape of AI Platforms – Harvard Business Review

Executive Summary

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tooling for AI systems than they do building the AI systems themselves. Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling, and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies.

Nearly two years ago, Seattle Sport Sciences, a company that provides data to soccer club executives, coaches, trainers and players to improve training, made a hard turn into AI. It began developing a system that tracks ball physics and player movements from video feeds. To build it, the company needed to label millions of video frames to teach computer algorithms what to look for. It started out by hiring a small team to sit in front of computer screens, identifying players and balls on each frame. But it quickly realized that it needed a software platform in order to scale. Soon, its expensive data science team was spending most of its time building a platform to handle massive amounts of data.

These are heady days when every CEO can see or at least sense opportunities for machine-learning systems to transform their business. Nearly every company has processes suited for machine learning, which is really just a way of teaching computers to recognize patterns and make decisions based on those patterns, often faster and more accurately than humans. Is that a dog on the road in front of me? Apply the brakes. Is that a tumor on that X-ray? Alert the doctor. Is that a weed in the field? Spray it with herbicide.
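
To make that pattern-then-decision loop concrete, here is a minimal, purely illustrative Python sketch using scikit-learn; the features, numbers, and labels are invented for the weed example above, not taken from any real system.

```python
# Illustrative only: train a classifier on labeled examples, then act
# on its prediction. Features are invented (greenness, leaf width in
# cm, plant height in cm).
from sklearn.ensemble import RandomForestClassifier

X_train = [[0.9, 2.1, 30], [0.8, 1.9, 28], [0.3, 6.5, 12], [0.2, 7.0, 10]]
y_train = ["crop", "crop", "weed", "weed"]  # human-supplied labels

model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(X_train, y_train)

# The decision step: act on the recognized pattern.
if model.predict([[0.25, 6.8, 11]])[0] == "weed":
    print("Spray it with herbicide")
```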

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tools for AI systems than they do building the systems themselves. A recent survey of 500 companies by the firm Algorithmia found that expensive teams spend less than a quarter of their time training and iterating machine-learning models, which is their primary job function.

Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies, like Seattle Sport Sciences.

Frustrated that its data science team was spinning its wheels, Seattle Sport Sciences' AI architect John Milton finally found a commercial solution that did the job. "I wish I had realized that we needed those tools," said Milton. He hadn't factored the infrastructure into the original budget, and having to go back to senior management and ask for it wasn't a pleasant experience for anyone.

The AI giants (Google, Amazon, Microsoft and Apple, among others) have steadily released tools to the public, many of them free, including vast libraries of code that engineers can compile into deep-learning models. Facebook's powerful object-recognition tool, Detectron, has become one of the most widely adopted open-source projects since its release in 2018. But using those tools can still be a challenge, because they don't necessarily work together. This means data science teams have to build connections between each tool to get them to do the job a company needs.

The newest leap on the horizon addresses this pain point. New platforms are now allowing engineers to plug in components without worrying about the connections.

For example, Determined AI and Paperspace sell platforms for managing the machine-learning workflow. Determined AI's platform includes automated elements to help data scientists find the best architecture for neural networks, while Paperspace comes with access to dedicated GPUs in the cloud.

"If companies don't have access to a unified platform, they're saying, 'Here's this open-source thing that does hyperparameter tuning. Here's this other thing that does distributed training,' and they are literally gluing them all together," said Evan Sparks, co-founder of Determined AI. "The way they're doing it is really with duct tape."

Labelbox is a training data platform, or TDP, for managing the labeling of data so that data science teams can work efficiently with annotation teams across the globe. (The author of this article is the company's co-founder.) It gives companies the ability to track their data, spot and fix bias in the data, and optimize the quality of their training data before feeding it into their machine-learning models.

It's the solution that Seattle Sport Sciences uses. John Deere uses the platform to label images of individual plants, so that smart tractors can spot weeds and deliver herbicide precisely, saving money and sparing the environment unnecessary chemicals.

Meanwhile, companies no longer need to hire experienced researchers to write machine-learning algorithms, the steam engines of today. They can find them for free or license them from companies that have solved similar problems before.

Algorithmia, which helps companies deploy, serve and scale their machine-learning models, operates an algorithm marketplace so data science teams don't duplicate other people's efforts by building their own. Users can search through the 7,000 different algorithms on the company's platform and license one or upload their own.
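
For illustration, this is roughly how a marketplace algorithm was typically invoked through Algorithmia's Python client at the time; the API key and algorithm path below are placeholders rather than working values.

```python
# Sketch of calling a licensed marketplace algorithm via the
# Algorithmia Python client (pip install algorithmia). The key and
# path are placeholders.
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")

# Marketplace entries are addressed as "owner/name/version".
algo = client.algo("demo/Hello/0.1.1")
print(algo.pipe("world").result)  # executes remotely, returns the output
```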

Companies can even buy complete off-the-shelf deep learning models ready for implementation.

Fritz.ai, for example, offers a number of pre-trained models that can detect objects in videos or transfer artwork styles from one image to another, all of which run locally on mobile devices. The company's premium services include creating custom models and more automation features for managing and tweaking models.

And while companies can use a TDP to label training data, they can also find pre-labeled datasets, many for free, that are general enough to solve many problems.

Soon, companies will even offer machine-learning as a service: Customers will simply upload data and an objective and be able to access a trained model through an API.
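
The shape of such a service might look something like the following sketch; the URL, endpoints, and field names are entirely hypothetical, since the article describes a category of product rather than any specific API.

```python
# Hypothetical machine-learning-as-a-service workflow: upload data and
# an objective, then query the trained model through an API. Every
# endpoint and field name here is invented for illustration.
import requests

BASE = "https://api.example-mlaas.com/v1"

# 1. Submit a dataset and state the objective.
job = requests.post(f"{BASE}/jobs", json={
    "dataset_url": "https://example.com/sales.csv",
    "objective": "predict monthly_revenue",
}).json()

# 2. Once training completes, access the hosted model via its API.
prediction = requests.post(
    f"{BASE}/models/{job['model_id']}/predict",
    json={"region": "northwest", "month": "2020-06"},
)
print(prediction.json())
```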

In the late 18th century, Maudslay's lathe led to standardized screw threads and, in turn, to interchangeable parts, which spread the industrial revolution far and wide. Machine-learning tools will do the same for AI, and, as a result of these advances, companies are able to implement machine learning with fewer data scientists and less-senior data science teams. That's important given the looming machine-learning human-resources crunch: according to a 2019 Dun & Bradstreet report, 40 percent of respondents from Forbes Global 2000 organizations say they are adding more AI-related jobs. And the number of AI-related job listings on the recruitment portal Indeed.com jumped 29 percent from May 2018 to May 2019. Most of that demand is for supervised-learning engineers.

But C-suite executives need to understand the need for those tools and budget accordingly. As Seattle Sport Sciences learned, it's better to familiarize yourself with the full machine-learning workflow and identify the necessary tooling before embarking on a project.

That tooling can be expensive, whether the decision is to build or to buy. As is often the case with key business infrastructure, there are hidden costs to building. Buying a solution might look more expensive up front, but it is often cheaper in the long run.

Once you've identified the necessary infrastructure, survey the market to see what solutions are out there and build the cost of that infrastructure into your budget. Don't fall for a hard sell. The industry is young, both in terms of the time it's been around and the age of its entrepreneurs. The ones who are in it out of passion are idealistic and mission-driven. They believe they are democratizing an incredibly powerful new technology.

The AI tooling industry is facing more than enough demand. If you sense someone is chasing dollars, be wary. The serious players are eager to share their knowledge and help guide business leaders toward success. Successes benefit everyone.

See the rest here:
Navigating the New Landscape of AI Platforms - Harvard Business Review


Management Styles And Machine Learning: A Case Of Life Imitating Art – Forbes

Oscar Wilde coined a great phrase in saying that life imitates art far more than art imitates life. What he meant by that is that through art, we appreciate life more. Art is an expression of life, and it helps us better understand ourselves and our surroundings. In business, learning how life imitates art, and even how art imitates life, is an intriguing way to capitalize on the value they both can bring.

When looking at organizations, there is a lot of art and life in how the business is run. Leadership styles range from specific and precise to open and adaptable, often based on a manager's own personal life experience. An example on the art side of the equation is machine learning: it's something we have created as an expression of what it means to be human. Both offer benefits that can propel the business forward.

The Art Of Machine Learning

Although it's been around for many years, it's only in the past 10 years that machine learning has become mainstream. In the past, technology relied on traditional programming, where programmers hard-coded rigid sets of instructions that tried to accommodate every possible scenario. Outputs were predetermined and could not respond to new scenarios without additional coding. Not only does this require continuous updates, costing time and money, but it can also produce inaccurate outputs when the program encounters problems it could not predict.

Computer programming has advanced to where programs are now capable of learning and evolving on their own. With traditional programming, an input was run through a program and created an output. Now, with machine learning, an input is run through a trainable data model, so the output evolves as the model is continually learning and adapting, much in the same way a human brain does.
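
The contrast can be shown in a few lines of Python; this is an illustrative sketch with invented numbers, using scikit-learn for the learned half.

```python
# Traditional programming: the rule itself is hard-coded by a person.
def price_rule(bedrooms, sqft):
    return 50_000 + 40_000 * bedrooms + 100 * sqft

# Machine learning: the rule is inferred from example data, and it
# changes whenever the model is retrained on new examples.
from sklearn.linear_model import LinearRegression

X = [[2, 900], [3, 1400], [4, 2000]]   # bedrooms, square footage
y = [180_000, 260_000, 350_000]        # observed prices (invented)
model = LinearRegression().fit(X, y)

print(price_rule(3, 1500))             # fixed output, forever
print(model.predict([[3, 1500]])[0])   # output learned from the data
```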

Take a game of checkers. Using traditional programming, it is unlikely that the computer will ever beat the programmer, who is the trainer of that software: traditional programming limits the game to a set of rules based on that programmer's knowledge of the game. With machine learning, by contrast, the program is based on a self-improving model that is not only able to make decisions for each move but also evolves the program, learning which moves are most likely to lead to a win.

After several matches, the program will learn from its "experience" in prior matches how to play even better than a human. In the same way humans learn which moves to take and which to avoid, the program also keeps track of that information and eventually becomes a better player.
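
A toy sketch of that self-improving loop, with the checkers-specific details stripped away, might look like this; it is a simplification for illustration, not how a production game engine is built.

```python
# Illustrative self-improving player: keep a learned value for each
# (state, move) pair, prefer high-value moves, and nudge the values
# after each match based on whether it was won or lost.
import random

move_value = {}  # maps (board_state, move) -> learned value

def choose_move(state, legal_moves, explore=0.1):
    """Mostly pick the best-known move; occasionally explore."""
    if random.random() < explore:
        return random.choice(legal_moves)
    return max(legal_moves, key=lambda m: move_value.get((state, m), 0.0))

def learn_from_match(history, won, step=0.1):
    """After a match, reinforce or penalise the moves that were played."""
    reward = 1.0 if won else -1.0
    for state, move in history:
        old = move_value.get((state, move), 0.0)
        move_value[(state, move)] = old + step * (reward - old)
```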

A Different Take On Management Styles

Now let's consider management styles as part of life. In the business world, you typically find two types of managers: those who give explicit instructions that must be followed in order to accomplish a goal, and those who provide the goal without detailed instructions but offer guidance and counseling as needed to get there.

In the first scenario, the worker needs only to apply the instructions received to get the work done, which makes the manager happy with the result. But the results are limited and don't take into account unexpected variables and the changes that happen as a part of the business. In this micromanagement style, the worker often has to go back to the manager for additional instructions, costing both time and money, much like traditional programming.

In the second scenario, the worker is expected to find their own path to achieving the goal by learning, trying, and testing different options. From that experience, the worker modifies their process based on the results of their efforts, much like machine learning. The process is far more flexible and accommodating, as it can be adjusted easily, on the fly. Like machine learning, this strategic style of management provides flexibility and autonomy, saving time and producing much better results.

Leverage Common Benefits For Optimal Results

Both machine learning and strategic management styles share some common benefits, if done well. One benefit is scalability. It's nearly impossible to scale an organization with a micromanagement style: as the company grows, managers will have increasingly less time to spend with workers. The same is true of traditional programming. Unless the program can learn and change on its own, it will never be able to scale to keep pace with the business.

Another common benefit is the ability to outsmart the competition. Companies that embrace machine learning's intelligent algorithms and better analytical power will have a leg up on those organizations that do not. They can take advantage of the automated learning capabilities built into machine learning. In the same way, companies that take advantage of a strategic management style over micromanaging will enable workers to be self-sufficient and contribute the full power of their wisdom.

Oscar Wilde was surprisingly prophetic when he talked about life imitating art. The best organizations are those that leverage the commonalities of both life and art: of machine learning and strategic management styles. Just as life can be fuller by imitating art, art and life together help organizations realize their greatest potential.

More:
Management Styles And Machine Learning: A Case Of Life Imitating Art - Forbes


AI-powered honeypots: Machine learning may help improve intrusion detection – The Daily Swig

John Leyden, 9 March 2020 at 15:50 UTC (updated: 9 March 2020 at 16:04 UTC)

Forget crowdsourcing, here's crooksourcing

Computer scientists in the US are working to apply machine learning techniques in order to develop more effective honeypot-style cyber defenses.

So-called deception technology refers to traps or decoy systems that are strategically placed around networks.

These decoy systems are designed to act as honeypots: once an attacker has penetrated a network, they will attempt to attack the decoys, setting off security alerts in the process.

Deception technology is not a new concept. Companies including Illusive Networks and Attivo have been working in the field for several years.

Now, however, researchers from the University of Texas at Dallas (UT Dallas) are aiming to take the concept one step further.

The DeepDig (DEcEPtion DIGging) technique plants traps and decoys onto real systems before applying machine learning techniques in order to gain a deeper understanding of attackers' behavior.

The technique is designed to use cyber-attacks as free sources of live training data for machine learning-based intrusion detection systems.
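
The published DeepDig code is the authoritative reference; purely as an illustration of the underlying idea, an intrusion detector can be updated online with each trapped session, which is malicious by construction:

```python
# Illustrative only (not the DeepDig implementation): treat every
# session that touches a decoy as a free, labeled malicious example
# and fold it back into the detector online.
import numpy as np
from sklearn.linear_model import SGDClassifier

detector = SGDClassifier(random_state=0)

# Bootstrap on some initial labeled traffic features (invented here).
X0 = np.array([[0.1, 5, 0], [0.2, 3, 0], [0.9, 40, 1], [0.8, 55, 1]])
y0 = np.array([0, 0, 1, 1])  # 0 = benign, 1 = malicious
detector.partial_fit(X0, y0, classes=[0, 1])

def on_trap_triggered(session_features):
    """The trap was touched, so the session is malicious by definition."""
    detector.partial_fit(np.array([session_features]), np.array([1]))
```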

Somewhat ironically, the prototype technology enlists attackers as free penetration testers.

Dr Kevin Hamlen, endowed professor of computer science at UT Dallas, explained: "Companies like Illusive Networks, Attivo, and many others create network topologies intended to be confusing to adversaries, making it harder for them to find real assets to attack."

The shortcoming of existing approaches, Dr Hamlen told The Daily Swig, is that such deceptions do not learn from attacks.

"While the defense remains relatively static, the adversary learns over time how to distinguish honeypots from a real asset, leading to an asymmetric game that the adversary eventually wins with high probability," he said.

In contrast, DeepDig turns real assets into traps that learn from attacks using artificial intelligence and data mining.

Turning real assets into a form of honeypot has numerous advantages, according to Dr Hamlen.

"Even the most skilled adversary cannot avoid interacting with the trap, because the trap is within the real asset that is the adversary's target, not a separate machine or software process," he said.

"This leads to a symmetric game in which the defense continually learns and gets better at stopping even the most stealthy adversaries."

The research, which has applications in the field of web security, was presented in a paper (PDF) entitled "Improving Intrusion Detectors by Crook-Sourcing" at the recent Computer Security Applications Conference in Puerto Rico.

The research was funded by the US federal government. The algorithms and evaluation data developed so far have been publicly released to accompany the research paper.

It's hoped that the research might eventually find its way into commercially available products, but that is some time off: the technology is still only at the prototype stage.

"In practice, companies typically partner with a university that conducted the research they're interested in to build a full product," a UT Dallas spokesman explained. Dr Hamlen's project is not yet at that stage.


See the article here:
AI-powered honeypots: Machine learning may help improve intrusion detection - The Daily Swig


RIT professor explores the art and science of statistical machine learning – RIT University News Services

Statistical machine learning is at the core of modern-day advances in artificial intelligence, but a Rochester Institute of Technology professor argues that applying it correctly requires equal parts science and art. Professor Ernest Fokoué of RIT's School of Mathematical Sciences emphasized the human element of statistical machine learning in his primer on the field, which graced the cover of a recent edition of Notices of the American Mathematical Society.

"One of the most important commodities in your life is common sense," said Fokoué. "Mathematics is beautiful, but mathematics is your servant. When you sit down and design a model, data can be very stubborn. We design models with assumptions of what the data will show or look like, but the data never looks exactly like what you expect. You may have a nice central tenet, but there's always something that's going to require your human intervention. That's where the art comes in. After you run all these statistical techniques, when it comes down to drawing the final conclusion, you need your common sense."

Statistical machine learning is a field that combines mathematics, probability, statistics, computer science, cognitive neuroscience and psychology to create models that learn from data and make predictions about the world. One of its earliest applications was when the United States Postal Service used it to accurately learn and recognize handwritten letters and digits to autonomously sort letters. Today, we see it applied in a variety of settings, from facial recognition technology on smartphones to self-driving cars.
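
The digit-recognition task is small enough to reproduce in a few lines; here is a standard scikit-learn example in that spirit (using the library's bundled 8x8 digit images, not actual USPS data):

```python
# Learn to recognize handwritten digits from labeled images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 1,797 labeled 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = SVC(gamma=0.001).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```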

Researchers have developed many different learning machines and statistical models that can be applied to a given problem, but there is no one-size-fits-all method that works well for all situations. Fokoué said selecting the appropriate method requires mathematical and statistical rigor along with practical knowledge. His paper explains the central concepts and approaches, which he hopes will get more people involved in the field and harvesting its potential.

"Statistical machine learning is the main tool behind artificial intelligence," said Fokoué. "It's allowing us to construct extensions of the human being so our lives, transportation, agriculture, medicine and education can all be better. Thanks to statistical machine learning, you can understand the processes by which people learn and slowly and steadily help humanity access a higher level."

This year, Fokoué has been on sabbatical, traveling the world exploring new frontiers in statistical machine learning. Fokoué's full article is available on the AMS website.

See more here:
RIT professor explores the art and science of statistical machine learning - RIT University News Services


How is AI and machine learning benefiting the healthcare industry? – Health Europa

In order to help build increasingly effective care pathways in healthcare, modern artificial intelligence technologies must be adopted and embraced. Events such as the AI & Machine Learning Convention are essential in providing medical experts around the UK access to the latest technologies, products and services that are revolutionising the future of care pathways in the healthcare industry.

AI has the potential to save the lives of current and future patients, and this is starting to be seen in healthcare services across the UK. Looking at diagnostics alone, there have been large-scale developments in rapid image recognition, symptom checking and risk stratification.

AI can also be used to personalise health screening and treatments for cancer, benefiting not only patients but clinicians too, enabling them to make the best use of their skills, informing decisions and saving time.

The potential impact of AI on the NHS is clear, so much so that NHS England is setting up a national artificial intelligence laboratory to enhance patient care and research.

The Health Secretary, Matt Hancock, commented that AI had enormous power to improve care, save lives and ensure that doctors had more time to spend with patients, and he pledged £250m to boost the role of AI within the health service.

The AI and Machine Learning Convention is part of MediWeek, the largest healthcare event in the UK. As a new feature of the Medical Imaging Convention and the Oncology Convention, the AI and Machine Learning expo offers an effective CPD-accredited education programme.

Hosting over 50 professional-led seminars, the lineup includes leading artificial intelligence and machine learning experts such as NHS England's Dr Minai Bakhai, Faculty of Clinical Informatics Professor Jeremy Wyatt, and Professor Claudia Pagliari from the University of Edinburgh.

Other speakers in the seminar programme come from leading organisations such as the University of Oxford, King's College London, and the School of Medicine at the University of Nottingham.

The event takes place at the National Exhibition Centre, Birmingham, on 17 and 18 March 2020. Tickets to the AI and Machine Learning Convention are free and gain you access to the other seven shows within MediWeek.

Health Europa is proud to be partners with the AI and Machine Learning Convention, click here to get your tickets.


Read the original:
How is AI and machine learning benefiting the healthcare industry? - Health Europa


What would machine learning look like if you mixed in DevOps? Wonder no more, we lift the lid on MLOps – The Register

Achieving production-level governance with machine-learning projects currently presents unique challenges. A new space of tools and practices is emerging under the name MLOps. The space is analogous to DevOps but tailored to the practices and workflows of machine learning.

Machine learning models make predictions for new data based on the data they have been trained on. Managing this data in a way that can be safely used in live environments is challenging, and it is one of the key reasons why 80 per cent of data science projects never make it to production, according to an estimate from Gartner.

It is essential that the data is clean, correct, and safe to use without any privacy or bias issues. Real-world data can also continuously change, so inputs and predictions have to be monitored for any shifts that may be problematic for the model. These are complex challenges that are distinct from those found in traditional DevOps.
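
One common way to watch for such shifts, shown here as an illustrative sketch, is to compare the live distribution of each input feature against its training distribution with a statistical test:

```python
# Illustrative drift check: a two-sample Kolmogorov-Smirnov test
# comparing a feature's training distribution against live traffic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(0.0, 1.0, size=5_000)
live_feature = rng.normal(0.4, 1.0, size=1_000)  # deliberately shifted

stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:
    print(f"possible input drift (KS statistic {stat:.3f}); investigate")
```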

DevOps practices are centred on the build and release process and continuous integration. Traditional development builds are packages of executable artifacts compiled from source code. Non-code supporting data in these builds tends to be limited to relatively small static config files. In essence, traditional DevOps is geared to building programs consisting of sets of explicitly defined rules that give specific outputs in response to specific inputs.

In contrast, machine-learning models make predictions by indirectly capturing patterns from data, not by formulating all the rules. A characteristic machine-learning problem involves making new predictions based on known data, such as predicting the price of a house using known house prices and details such as the number of bedrooms, square footage, and location. Machine-learning builds run a pipeline that extracts patterns from data and creates a weighted machine-learning model artifact. This makes these builds far more complex and the whole data science workflow more experimental. As a result, a key part of the MLOps challenge is supporting multi-step machine learning model builds that involve large data volumes and varying parameters.
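
In miniature, such a build step might look like the sketch below, using the house-price example with invented data: the pipeline transforms the data, fits a model under chosen parameters, and emits a weighted model artifact rather than compiled code.

```python
# Minimal sketch of an ML build: run a training pipeline and produce
# a model artifact. The data is invented for the house-price example.
import joblib
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

X = [[2, 900], [3, 1400], [4, 2100]]  # bedrooms, square footage
y = [180_000, 260_000, 360_000]       # known sale prices

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", Ridge(alpha=1.0)),  # alpha is one of the varying parameters
])
pipeline.fit(X, y)

# The build output is a versionable binary artifact, not source code.
joblib.dump(pipeline, "house-price-model-v1.joblib")
```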

To run projects safely in live environments, we need to be able to monitor for problem situations and see how to fix things when they go wrong. There are pretty standard DevOps practices for how to record code builds in order to go back to old versions. But MLOps does not yet have standardisation on how to record and go back to the data that was used to train a version of a model.

There are also special MLOps challenges to face in the live environment. There are largely agreed DevOps approaches for monitoring for error codes or an increase in latency. But it's a different challenge to monitor for bad predictions. You may not have any direct way of knowing whether a prediction is good, and may have to instead monitor indirect signals such as customer behaviour (conversions, the rate of customers leaving the site, any feedback submitted). It can also be hard to know in advance how well your training data represents your live data. For example, it might match well at a general level but there could be specific kinds of exceptions. This risk can be mitigated with careful monitoring and cautious management of the rollout of new versions.

The effort involved in solving MLOps challenges can be reduced by leveraging a platform and applying it to the particular case. Many organisations face a choice of whether to use an off-the-shelf machine-learning platform or try to put an in-house platform together themselves by assembling open-source components.

Some machine-learning platforms are part of a cloud provider's offering, such as AWS SageMaker or AzureML. This may or may not appeal, depending on the cloud strategy of the organisation. Other platforms are not cloud-specific and instead offer self-install or a custom hosted solution (e.g., Databricks' MLflow).

Instead of choosing a platform, organisations can choose to assemble their own. This may be the preferred route when requirements are too niche to fit a current platform, such as needing integrations to other in-house systems, or if data has to be stored in a particular location or format. Choosing to assemble an in-house platform requires learning to navigate the ML tool landscape. This landscape is complex, with different tools specialising in different niches, and in some cases there are competing tools approaching similar problems in different ways (see the Linux Foundation's LF AI project for a visualisation, or the categorised lists from the Institute for Ethical AI).

The Linux Foundation's diagram of MLOps tools

For organisations using Kubernetes, the Kubeflow project presents an interesting option, as it aims to curate a set of open-source tools and make them work well together on Kubernetes. The project is led by Google, and top contributors (as listed by IBM) include IBM, Cisco, Caicloud, Amazon, and Microsoft, as well as ML tooling provider Seldon, Chinese tech giant NetEase, Japanese tech conglomerate NTT, and hardware giant Intel.

Challenges around reproducibility and monitoring of machine learning systems are governance problems. They need to be addressed in order to be confident that a production system can be maintained and that any challenges from auditors or customers can be answered. For many projects these are not the only challenges, as customers might reasonably expect to be able to ask why a prediction concerning them was made. In some cases this may also be a legal requirement, as the European Union's General Data Protection Regulation states that a "data subject" has a right to "meaningful information about the logic involved" in any automated decision that relates to them.

Explainability is a data science problem in itself. Modelling techniques can be divided into black-box and white-box, depending on whether the method can naturally be inspected to provide insight into the reasons for particular predictions. With black-box models, such as proprietary neural networks, the options for interpreting results are more restricted and more difficult to use than the options for interpreting a white-box linear model. In highly regulated industries, it can be impossible for AI projects to move forward without supporting explainability. For example, medical diagnosis systems may need to be highly interpretable so that they can be investigated when things go wrong or so that the model can aid a human doctor. This can mean that projects are restricted to working with models that admit of acceptable interpretability. Making black-box models more interpretable is a fast-growth area, with new techniques rapidly becoming available.
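
As a small illustration of the white-box end of that spectrum, a linear model's learned coefficients can be read off directly to explain what drives its predictions (the data and feature names below are invented):

```python
# White-box interpretability sketch: the coefficients of a logistic
# regression expose the direction and weight of each input feature.
from sklearn.linear_model import LogisticRegression

features = ["age", "blood_pressure", "cholesterol"]
X = [[35, 120, 180], [62, 160, 240], [50, 140, 210], [28, 115, 170]]
y = [0, 1, 1, 0]  # 0 = low risk, 1 = high risk (invented labels)

model = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.4f}")  # sign and magnitude explain the call
```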

The MLOps scene is evolving as machine learning becomes more widely adopted and we learn more about what counts as best practice for different use cases. Different organisations have different machine learning use cases and therefore differing needs. As the field evolves we'll likely see greater standardisation, and even the more challenging use cases will become better supported.

Ryan Dawson is a core member of the Seldon open-source team, providing tooling for machine-learning deployments to Kubernetes. He has spent 10 years working in the Java development scene in London across a variety of industries.

Bringing DevOps principles to machine learning throws up some unique challenges, not least very different workflows and artifacts. Ryan will dive into this topic in May at Continuous Lifecycle London 2020, a conference organised by The Register's mothership, Situation Publishing.

You can find out more, and book tickets, right here.


The rest is here:
What would machine learning look like if you mixed in DevOps? Wonder no more, we lift the lid on MLOps - The Register
