

Category Archives: Machine Learning

AI and Machine Learning Can Help Fintechs if We Focus on Practical Implementation and Move Away from Overhyped Narratives, Researcher Says – Crowdfund…

Artificial intelligence (AI) and machine learning (ML) algorithms are increasingly being used by Fintech platform developers to make more intelligent or informed decisions regarding key processes. Use cases include identifying potentially fraudulent transactions, determining the creditworthiness of a borrower applying for a loan, and many others.

Research conducted by Accenture found that 87% of business owners in the United Kingdom say they're struggling to find the best ways to adopt AI or ML technologies. Three out of four (75%) C-suite executives responding to Accenture's survey said they need to effectively adopt AI solutions within five years so that they don't lose business to competitors.

As reported by IT Pro Portal, there's currently a gap between what may be considered just hype and the actual or practical implementation of AI technologies and platforms.

Fewer than 5% of firms have actually managed to apply AI effectively; meanwhile, more than 80% are still just exploring basic proofs of concept for applying AI or ML algorithms. Many firms are also unfamiliar with these technologies or lack the expertise to figure out how best to apply them to specific business use cases.

Yann Stadnicki, an experienced technologist and research engineer, argues that these technologies can play a key role in streamlining business operations. For example, they can help Fintech firms lower their operational costs while boosting their overall efficiency. They can also make it easier for a company's CFO to do their job and become a key player in supporting the growth of their firm.

Stadnicki points out that a research study suggests company executives weren't struggling to adopt AI solutions because of budgetary constraints or limitations. He adds that the study shows there may be certain operational challenges when it comes to effectively integrating AI and ML technologies.

He also mentions:

The inability to set up a supportive organizational structure, the absence of foundational data capabilities, and the lack of employee adoption are barriers to harnessing AI and machine learning within an organization.

He adds:

For businesses to harness the benefits of AI and machine learning, there needs to be a move away from an overhyped theoretical narrative towards practical implementation. It is important to formulate a plan and integration strategy of how your business will use AI and ML, to both mitigate the risks of cybercrime and fraud, while embracing the opportunity of tangible business impact.

Fintech firms and organizations across the globe are now leveraging AI and ML technologies to improve their products and services. In a recent interview with Crowdfund Insider, Michael Rennie, a U.K.-based product manager for Mendix, a Siemens business and the global leader in enterprise low-code, explained how emerging tech can be used to enhance business processes.

He noted:

Prior to low-code, the application and use of cutting-edge technologies within the banking sector have been more academic than actual. But low-code now enables you to apply emerging technologies like AI in a practical way so that they actually make an impact. For example, you could pair a customer-focused banking application built with low-code with a machine learning (ML) engine to identify user behaviors. Then you could make more informed decisions about where to invest in customer experience and most benefit your business.

He added:

It's easy to see the value in this. The problem is that without the correct technology, it's too difficult to integrate traditional customer-facing applications with new technology systems. Such integrations typically require millions of dollars in investment and years of work. By the time an organization finishes that intensive work, the market may have moved on. Low-code eliminates that problem, makes integration easy and your business more agile.

Go here to see the original:
AI and Machine Learning Can Help Fintechs if We Focus on Practical Implementation and Move Away from Overhyped Narratives, Researcher Says - Crowdfund...


Top 8 Books on Machine Learning In Cybersecurity One Must Read – Analytics India Magazine

With the proliferation of information technologies and data, cybersecurity has become a necessity. Machine learning helps organisations by extracting insights from raw data, predicting future outcomes and more.

For a few years now, such machine learning techniques have also been applied in cybersecurity. They help in several ways, including identifying fraud, malicious code and other threats.

In this article, we list down the top eight books, in no particular order, on machine learning in cybersecurity that one must read.

About: Written by Sumeet Dua and Xian Du, this book introduces the basic notions in machine learning and data mining. It provides a unified reference for specific machine learning solutions to cybersecurity problems, as well as a foundation in cybersecurity fundamentals, including surveys of contemporary challenges.

The book details some of the cutting-edge machine learning and data mining techniques that can be used in cybersecurity, such as in-depth discussions of machine learning solutions to detection problems, contemporary cybersecurity problems, categorising methods for detecting, scanning, and profiling intrusions and anomalies, among others.

Get the book here.

About: In Malware Data Science, security data scientist Joshua Saxe introduces machine learning, statistics, social network analysis, and data visualisation, and shows you how to apply these methods to malware detection and analysis.

You'll learn how to analyse malware using static analysis, identify adversary groups through shared code analysis, detect vulnerabilities by building machine learning detectors, identify malware campaigns, trends, and relationships through data visualisation, etc.

Get the book here.

About: This book begins with an introduction to machine learning and the algorithms that are used to build AI systems. After gaining a fair understanding of how security products leverage machine learning, you will learn the core concepts of breaching AI and ML systems.

With the help of hands-on cases, you will understand how to find loopholes as well as surpass a self-learning security system. After completing this book, readers will be able to identify the loopholes in a self-learning security system and will also be able to breach a machine learning system efficiently.

Get the book here.

About: In this book, you'll learn how to use popular Python libraries such as TensorFlow, scikit-learn, etc. to implement the latest AI techniques and manage the difficulties faced by cybersecurity researchers.

The book will lead you through classifiers and features for malware, which will help you train and test on real samples. You will also build self-learning, reliable systems to handle cybersecurity tasks such as identifying malicious URLs, detecting spam email, detecting intrusions, and tracking user and process behaviour, among others.

Get the book here.

About: This book is for data scientists, machine learning developers, security researchers, and anyone keen to apply machine learning to strengthen computer security. In this book, you will learn how to use machine learning algorithms with complex datasets to implement cybersecurity concepts, and how to implement algorithms such as clustering, k-means, and Naive Bayes to solve real-world problems.

You will also learn how to speed up a system using Python libraries such as NumPy, scikit-learn, and CUDA, and how to combat malware, detect spam and fight financial fraud to mitigate cybercrime, among other topics. A minimal sketch of one such exercise appears after the link below.

Get the book here.
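By way of illustration, here is a minimal sketch of the kind of exercise this book describes: a Naive Bayes spam classifier built with scikit-learn. The tiny inline dataset is invented for the example.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy dataset: 1 = spam, 0 = legitimate
emails = [
    "Win a free prize now, click here",
    "Meeting moved to 3pm, see agenda attached",
    "Cheap loans approved instantly, act now",
    "Quarterly report draft for your review",
]
labels = [1, 0, 1, 0]

# Bag-of-words features feeding a multinomial Naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Claim your free prize now"]))  # expected: [1]
```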

About: This book teaches you how to use machine learning for penetration testing. You will learn, in a hands-on and practical manner, how to use machine learning to perform penetration testing attacks, and how to perform penetration testing attacks on machine learning systems. You will also learn techniques that few hackers or security experts know about.

Get the book here.

About: In this book, you will learn how to carry out a machine learning in cybersecurity self-assessment, how to identify and describe the business environment in cybersecurity projects using machine learning, etc.

The book covers all the essentials of machine learning in cybersecurity, with extensive criteria grounded in past and current successful projects and activities by experienced machine learning in cybersecurity practitioners, among others.

Get the book here.

About: This book presents a collection of state-of-the-art AI approaches to cybersecurity and cyber threat intelligence. It offers strategic defence mechanisms for malware, addresses cybercrime, and assesses vulnerabilities to yield proactive rather than reactive countermeasures.

Get the book here.

Read the original here:
Top 8 Books on Machine Learning In Cybersecurity One Must Read - Analytics India Magazine


Experian partners with Standard Chartered to drive Financial Inclusion with Machine Learning, powering the next generation of Decisioning – Yahoo…

Leveraging innovation in technology to provide access to credit during uncertain times to populations underserved by formal financial services.

This social impact was made possible by the Bank's digital first strategy and Experian's best-in-class decisioning platform. Experian's software enables the Bank to analyse a high volume of alternative data and execute machine learning models for better decision-making and risk management.

Since the first pilot implementation in India in December 2019, the Bank has seen an improvement in approvals by increasing overall acceptance rates using big data and artificial intelligence. This has enhanced its risk management capabilities to test and learn, helping to expand access to crucial credit and financial services.

The Bank and Experian are committed to financial inclusion, with plans for rollouts across six more markets in Asia, Africa and the Middle East.

SINGAPORE, Oct. 15, 2020 /PRNewswire/ -- Experian, a leading global information services company, has announced a partnership with leading international banking group Standard Chartered to drive financial access across key markets in Asia, Africa and the Middle East by leveraging the latest technology innovation in credit decisioning. Without enough credit bureau data for financial institutions to determine their creditworthiness, especially in this time of unprecedented volatility, many underbanked communities are facing difficulties securing access to loans.

The collaboration involves Experian's leading global decisioning solution, PowerCurve Strategy Manager, integrated with machine learning capabilities that will enable deployment of advanced analytics to help organisations make the most of their data. In support of Standard Chartered's digital-first transformation strategy, this state-of-the-art machine learning capability provides the Bank with the ability to ingest and analyse a high volume of non-Bank or, with client consent, alternative data, enabling faster, more effective and accurate credit decisioning, resulting in better risk management for the Bank and better outcomes for clients.
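To make the mechanics concrete, here is a purely hypothetical sketch (not Experian's PowerCurve, whose internals are not public) of scoring thin-file applicants with alternative data; every feature name and number is invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Invented alternative-data features: months of mobile top-up history,
# share of utility bills paid on time, normalized e-wallet inflow
X = rng.random((500, 3))
# Synthetic repayment outcome loosely tied to the features
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
     + 0.1 * rng.standard_normal(500)) > 0.5

model = GradientBoostingClassifier().fit(X, y)

# Probability of repayment for one new applicant
applicant = rng.random((1, 3))
print(model.predict_proba(applicant)[0, 1])
```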


Following its December 2019 launch in India, Standard Chartered registered positive business outcomes such as increased acceptance rates and reduced overall delinquencies. The success in India means that Standard Chartered is now able to improve risk management for more clients who previously would have been underbanked, empowering them with access to crucial credit and financial services in their time of need.

Beyond benefits to consumers, access to credit is vital for overall economic growth, with consumer spending helping businesses continue to operate during these difficult times.

"Social and economic growth in developing markets, especially in the coming period, will be driven by progress in financial inclusion. Experian strongly believes that a technology, advanced analytics and data-driven approach can address this opportunity and we remain deeply committed to the vision of progressing financial inclusion for the world's underserved and underbanked population. Our long-standing collaboration with Standard Chartered across our PowerCurve decisioning suite of solutions, leveraging machine learning and big data to advance to the next generation of credit decisioning, is focused on empowering these underbanked communities to access credit," said Mohan Jayaraman, Managing Director, Southeast Asia & Regional Innovation, Experian Asia Pacific.

Reaffirming a commitment towards financial inclusion, Experian and Standard Chartered are working on plans to deploy the solution to its retail franchise across Asia, Africa and the Middle East, in addition to India.

"We're committed to supporting sustainable social and economic development through our business, operations and communities. This partnership helps the Bank manage risk more effectively with a more robust data-driven credit decisioning which in turn enables more clients to gain access to financial services at a time when they need it the most," said Vishu Ramachandran, Group Head, Retail Banking, Standard Chartered.

"Partnerships are central to our digital banking strategy and how we better serve our clients. Experian was a natural choice as a partner given their strong track record in innovation and in driving financial inclusion," said Aalishaan Zaidi, Global Head, Client Experience, Channels & Digital Banking, Standard Chartered.

For more information, please visit Experian's Decisioning & Credit Risk Management solution.

For further information, please contact:

About Experian

Experian is the world's leading global information services company. During life's big moments, from buying a home or a car, to sending a child to college, to growing a business by connecting with new customers, we empower consumers and our clients to manage their data with confidence. We help individuals to take financial control and access financial services, businesses to make smarter decisions and thrive, lenders to lend more responsibly, and organisations to prevent identity fraud and crime.

We have 17,800 people operating across 45 countries and every day we're investing in new technologies, talented people and innovation to help all our clients maximise every opportunity. We are listed on the London Stock Exchange (EXPN) and are a constituent of the FTSE 100 Index.

Learn more at http://www.experian.com.sg or visit our global content hub at our global news blog for the latest news and insights from the Group.

About Standard Chartered

We are a leading international banking group, with a presence in 60 of the world's most dynamic markets, and serving clients in a further 85. Our purpose is to drive commerce and prosperity through our unique diversity, and our heritage and values are expressed in our brand promise, Here for good.

Standard Chartered PLC is listed on the London and Hong Kong Stock Exchanges.

For more stories and expert opinions please visit Insights at sc.com. Follow Standard Chartered on Twitter, LinkedIn and Facebook.


SOURCE Experian

View post:
Experian partners with Standard Chartered to drive Financial Inclusion with Machine Learning, powering the next generation of Decisioning - Yahoo...


Commentary: Can AI and machine learning improve the economy? – FreightWaves

The views expressed here are solely those of the author and do not necessarily represent the views of FreightWaves or its affiliates.

In this installment of the AI in Supply Chain series (#AIinSupplyChain), I tried to discern the outlines of an answer to the question posed in the headline above by reading three academic papers. This article distills what I consider the most important takeaways from the papers.

Although the investigations that resulted in these papers look at the economy as a whole, there are implications that are applicable at the level of an individual firm. So, if you are responsible for innovation, corporate development and strategy at your company, it's probably worth your time to read each of them and then interpret the findings for your own firm.

In this paper, Erik Brynjolfsson, Daniel Rock and Chad Syverson explore the paradox that while systems using artificial intelligence are advancing rapidly, measured economywide productivity has declined.

Recent optimism about AI and machine learning is driven by dramatic improvements in machine perception and cognition. These skills are essential to the ways in which people get work done, so this has fueled hopes that machines will rapidly approach, and possibly surpass, people in their ability to do many different tasks that today are the preserve of humans.

However, productivity statistics do not yet reflect growth that is driven by the advances in AI and machine learning. If anything, the authors cite statistics to suggest that labor productivity growth fell in advanced economies starting in the mid-2000s and has not recovered to its previous levels.

Therein lies the paradox: AI and machine learning boosters predict they will transform entire swathes of the economy, yet the economic data do not point to such a transformation taking place. What gives?

The authors offer four possible explanations.

First, it is possible that the optimism about AI and machine learning technologies is misplaced. Perhaps they will be useful in certain narrow sectors of the economy, but ultimately their economywide impact will be modest and insignificant.

Second, it is possible that the impact of AI and machine learning technologies is not being measured accurately. Here it is pessimism about the significance of these technologies that prevents society from accurately measuring their contribution to economic productivity.

Third, perhaps these new technologies are producing positive returns to the economy, but these benefits are being captured by a very small number of firms, and as such the rewards are enjoyed by only a minuscule fraction of the population.

Fourth, the benefits of AI and machine learning will not be reflected in the wider economy until investments have been made to build up complementary technologies, processes, infrastructure, human capital and other types of assets that make it possible for society to realize and measure the transformative benefits of AI and machine learning.

The authors argue that AI, machine learning and their complementary new technologies embody the characteristics of general purpose technologies (GPTs). A GPT has three primary features: It is pervasive or can become pervasive; it can be improved upon as time elapses; and it leads directly to complementary innovations.

Electricity. The internal combustion engine. Computers. The authors cite these as examples of GPTs with which readers are familiar.

Crucially, the authors state that a GPT can at one moment both be present and yet not affect current productivity growth if there is a need to build a sufficiently large stock of the new capital, or if complementary types of capital, both tangible and intangible, need to be identified, produced, and put in place to fully harness the GPT's productivity benefits.

It takes a long time for economic production at the macro- or micro-scale to be reorganized to accommodate and harness a new GPT. The authors point out that computers took 25 years before they became ubiquitous enough to have an impact on productivity. It took 30 years for electricity to become widespread. As the authors state, the changes required to harness a new GPT take substantial time and resources, contributing to organizational inertia. Firms are complex systems that require an extensive web of complementary assets to allow the GPT to fully transform the system. Firms that are attempting transformation often must reevaluate and reconfigure not only their internal processes but often their supply and distribution chains as well.

The authors end the article by stating: Realizing the benefits of AI is far from automatic. It will require effort and entrepreneurship to develop the needed complements, and adaptability at the individual, organizational, and societal levels to undertake the associated restructuring. Theory predicts that the winners will be those with the lowest adjustment costs and that put as many of the right complements in place as possible. This is partly a matter of good fortune, but with the right roadmap, it is also something for which they, and all of us, can prepare.

In this paper, Brynjolfsson, Xiang Hui and Meng Liu explore the effect that the introduction of eBay Machine Translation (eMT) had on eBay's international trade. The authors describe eMT as an in-house machine learning system that statistically learns how to translate among different languages. They also state: As a platform, eBay mediated more than 14 billion dollars of global trade among more than 200 countries in 2014. Basically, eBay represents a good approximation of a complex economy within which to examine the economywide benefits of this type of machine translation.

The authors state: We show that a moderate quality upgrade increases exports on eBay by 17.5%. The increase in exports is larger for differentiated products, cheaper products, listings with more words in their title. Machine translation also causes a greater increase in exports to less experienced buyers. These heterogeneous treatment effects are consistent with a reduction in translation-related search costs, which comes from two sources: (1) an increased matching relevance due to improved accuracy of the search query translation and (2) better translation quality of the listing title in buyers language.

They report an accompanying 13.1% increase in revenue, even though they only observed a 7% increase in the human acceptance rate.

They also state: To put our result in context, Hui (2018) has estimated that a removal of export administrative and logistic costs increased export revenue on eBay by 12.3% in 2013, which is similar to the effect of eMT. Additionally, Lendle et al. (2016) have estimated that a 10% reduction in distance would increase trade revenue by 3.51% on eBay. This means that the introduction of eMT is equivalent of [sic] the export increase from reducing distances between countries by 37.3%. These comparisons suggest that the trade-hindering effect of language barriers is of first-order importance. Machine translation has made the world significantly smaller and more connected.
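The 37.3% figure appears to follow from a simple linear scaling of the Lendle et al. estimate against the observed revenue gain; assuming that reading, the arithmetic can be reproduced in a few lines:

```python
# Back-of-the-envelope check of the comparison in the quote above:
# Lendle et al. (2016): a 10% distance reduction raises eBay trade
# revenue by 3.51%; eMT raised revenue by 13.1%.
revenue_gain_emt = 13.1          # % revenue increase from eMT
gain_per_10pct_distance = 3.51   # % revenue increase per 10% distance cut

equivalent_distance_cut = revenue_gain_emt / gain_per_10pct_distance * 10
print(f"{equivalent_distance_cut:.1f}%")  # prints 37.3%
```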

In this paper, Brynjolfsson, Rock and Syverson develop a model that shows how GPTs like AI enable and require significant complementary investments, including co-invention of new processes, products, business models and human capital. These complementary investments are often intangible and poorly measured in the national accounts, even when they create valuable assets for the firm. The model shows how this leads to an underestimation of productivity growth in the early years of a new GPT, and how later, when the benefits of intangible investments are harvested, productivity growth will be overestimated. Their model generates a Productivity J-Curve that can explain the productivity slowdowns often accompanying the advent of GPTs, as well as the increase in productivity later.

The authors find that, first, As firms adopt a new GPT, total factor productivity growth will initially be underestimated because capital and labor are used to accumulate unmeasured intangible capital stocks. Then, second, Later, measured productivity growth overestimates true productivity growth because the capital service flows from those hidden intangible stocks generates measurable output. Finally, The error in measured total factor productivity growth therefore follows a J-curve shape, initially dipping while the investment rate in unmeasured capital is larger than the investment rate in other types of capital, then rising as growing intangible stocks begin to contribute to measured production.
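This dynamic is easy to see in a toy simulation (my own sketch, not the authors' formal model): divert some measured inputs to an unmeasured intangible stock for a while, then let that stock contribute to measured output, and the measured-productivity series traces a J.

```python
import numpy as np

T = 20
inputs = np.ones(T)              # measured capital + labor, held fixed
intangible = np.zeros(T + 1)     # hidden intangible stock
# Ten periods of intangible build-out, then ten periods of harvest
invest_rate = np.r_[np.full(10, 0.2), np.zeros(10)]

measured_productivity = []
for t in range(T):
    intangible[t + 1] = intangible[t] + invest_rate[t] * inputs[t]
    # Measured output uses only the inputs not diverted to intangible
    # investment, plus services from the accumulated intangible stock
    output = (1 - invest_rate[t]) * inputs[t] + 0.3 * intangible[t]
    measured_productivity.append(output / inputs[t])

# Dips below 1.0 during build-out, then overshoots: a J-curve
print([round(x, 2) for x in measured_productivity])
```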

This explains the observed phenomenon that when a new technology like AI and machine learning, or something like blockchain and distributed ledger technology, is introduced into an area such as supply chain, it generates furious debate about whether it creates any value for incumbent suppliers or customers.

If we consider the reported time it took before other GPTs like electricity and computers began to contribute measurably to firm-level and economywide productivity, we must admit that it is perhaps too early to write off blockchains and other distributed ledger technologies, or AI and machine learning, and their applications in sectors of the economy that are not usually associated with internet and other digital technologies.

Give it some time. However, I think we are near the inflection point of the AI and Machine Learning Productivity J-curve. As I have worked on this #AIinSupplyChain series, I have become more convinced that the companies that are experimenting with AI and machine learning in their supply chain operations now will have the advantage over their competitors over the next decade.

I think we are a bit farther away from the inflection point of a Blockchain and Distributed Ledger Technologies Productivity J-Curve. I cannot yet make a cogent argument for why this is true, although in March 2014 I published #ChainReaction: Who Will Own The Age of Cryptocurrencies?, part of an ongoing attempt to understand when blockchains and other distributed technologies might become more ubiquitous than they are now.

Examining this topic has added to my understanding of why disruption happens. The authors of the Productivity J-Curve paper state that the more transformative the new technology, the more likely its productivity effects will initially be underestimated.

The long duration during which incumbent firms underestimate the productivity effects of a relatively new GPT is what contributes to the phenomenon studied by Rebecca Henderson and Kim Clark in Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms. It is also described as Supply Side Disruption by Joshua Gans in his book, The Disruption Dilemma, and summarized in this March 2016 HBR article, The Other Disruption.

If we focus on AI and machine learning specifically, in an exchange on Twitter on Sept. 27, Brynjolfsson said, The machine translation example is in many ways the exception. More often it takes a lot of organizational reinvention and time before AI breakthroughs translate into productivity gains.

By the time entrenched and industry-leading incumbents awaken to the threats posed by newly developed GPTs, a crop of challengers who had no option but to adopt the new GPT at the outset has become powerful enough to threaten the financial stability of an industry.

One example? E-commerce and its impact on retail in general.

If you are an executive, what experiments are you performing to figure out if and how your company's supply chain operations can be made more productive by implementing technologies that have so far been underestimated by you and other incumbents in your industry?

If you are not doing anything yet, are you fulfilling your obligations to your company's shareholders, employees, customers and other stakeholders?

If you are a team working on innovations that you believe have the potential to significantly refashion global supply chains, we'd love to tell your story in FreightWaves. I am easy to reach on LinkedIn and Twitter. Alternatively, you can reach out to any member of the editorial team at FreightWaves at media@freightwaves.com.

Dig deeper into the #AIinSupplyChain Series with FreightWaves.

Commentary: Optimal Dynamics the decision layer of logistics?

Commentary: Combine optimization, machine learning and simulation to move freight

Commentary: SmartHop brings AI to owner-operators and brokers

Commentary: Optimizing a truck fleet using artificial intelligence

Commentary: FleetOps tries to solve data fragmentation issues in trucking

Commentary: Bulgaria's Transmetrics uses augmented intelligence to help customers

Commentary: Applying AI to decision-making in shipping and commodities markets

Commentary: The enabling technologies for the factories of the future

Commentary: The enabling technologies for the networks of the future

Commentary: Understanding the data issues that slow adoption of industrial AI

Commentary: How AI and machine learning improve supply chain visibility, shipping insurance

Commentary: How AI, machine learning are streamlining workflows in freight forwarding, customs brokerage

Author's disclosure: I am not an investor in any early-stage startups mentioned in this article, either personally or through REFASHIOND Ventures. I have no other financial relationship with any entities mentioned in this article.

Read the rest here:
Commentary: Can AI and machine learning improve the economy? - FreightWaves


The secrets of small data: How machine learning finally reached the enterprise – VentureBeat

Over the past decade, big data has become Silicon Valley's biggest buzzword. When they're trained on mind-numbingly large data sets, machine learning (ML) models can develop a deep understanding of a given domain, leading to breakthroughs for top tech companies. Google, for instance, fine-tunes its ranking algorithms by tracking and analyzing more than one trillion search queries each year. It turns out that the Solomonic power to answer all questions from all comers can be brute-forced with sufficient data.

But there's a catch: Most companies are limited to small data; in many cases, they possess only a few dozen examples of the processes they want to automate using ML. If you're trying to build a robust ML system for enterprise customers, you have to develop new techniques to overcome that dearth of data.

Two techniques in particular, transfer learning and collective learning, have proven critical in transforming small data into big data, allowing average-sized companies to benefit from ML use cases that were once reserved only for Big Tech. And because just 15% of companies have deployed AI or ML already, there is a massive opportunity for these techniques to transform the business world.

Above: Using the data from just one company, even modern machine learning models are only about 30% accurate. But thanks to collective learning and transfer learning, Moveworks can determine the intent of employees' IT support requests with over 90% precision.

Image Credit: Moveworks

Of course, data isn't the only prerequisite for a world-class machine learning model; there's also the small matter of building that model in the first place. Given the short supply of machine learning engineers, hiring a team of experts to architect an ML system from scratch is simply not an option for most organizations. This disparity helps explain why a well-resourced tech company like Google benefits disproportionately from ML.

But over the past several years, a number of open source ML models, including the famous BERT model for understanding language, which Google released in 2018, have started to change the game. The complexity of creating a model the caliber of BERT, whose aptly named large version has about 340 million parameters, means that few organizations can even consider quarterbacking such an initiative. However, because it's open source, companies can now tweak that publicly available playbook to tackle their specific use cases.

To understand what these use cases might look like, consider a company like Medallia, a Moveworks customer. On its own, Medallia doesn't possess enough data to build and train an effective ML system for an internal use case, like IT support. Yet its small data does contain a treasure trove of insights waiting for ML to unlock them. And by leveraging new techniques to glean these insights, Medallia has become more efficient, from recognizing which internal workflows need attention to understanding the company-specific language its employees use when asking for tech support.

So here's the trillion-dollar question: How do you take an open source ML model designed to solve a particular problem and apply that model to a disparate problem in the enterprise? The answer starts with transfer learning, which, unsurprisingly, entails transferring knowledge gained from one domain to a different domain that has less data.

For example, by taking an open source ML model like BERT, designed to understand generic language, and refining it at the margins, it is now possible for ML to understand the unique language employees use to describe IT issues. And language is just the beginning, since we've only begun to realize the enormous potential of small data. A minimal sketch of this fine-tuning step follows the figure below.

Above: Transfer learning leverages knowledge from a related domain, typically one with a greater supply of training data, to augment the small data of a given ML use case.

Image Credit: Moveworks
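Here is a minimal transfer-learning sketch, assuming the Hugging Face transformers library: start from pretrained BERT and fine-tune its classification head on a handful of company-specific IT-support intents. The example texts, labels and intent names are invented.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3  # e.g. password_reset, vpn_issue, other
)

# A tiny, invented "small data" set of IT-support requests
texts = ["need my password reset", "I can't get onto the VPN", "printer is jammed"]
labels = torch.tensor([0, 1, 2])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few passes over the tiny dataset
    out = model(**batch, labels=labels)  # cross-entropy loss is built in
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Only the margins move here: the pretrained weights already encode generic language, and the few labeled examples steer the classifier toward the company's vocabulary.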

More generally, this practice of feeding an ML model a very small and very specific selection of training data is called few-shot learning, a term that's quickly become one of the new big buzzwords in the ML community. Some of the most powerful ML models ever created, such as the landmark GPT-3 model with its 175 billion parameters, orders of magnitude more than BERT, have demonstrated an unprecedented knack for learning novel tasks with just a handful of examples as training.

Taking essentially the entire internet as its tangential domain, GPT-3 quickly becomes proficient at these novel tasks by building on a powerful foundation of knowledge, in the same way Albert Einstein wouldn't need much practice to become a master at checkers. And although GPT-3 is not open source, applying similar few-shot learning techniques will enable new ML use cases in the enterprise, ones for which training data is almost nonexistent.
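In the GPT-3 style of few-shot learning, the "training" is nothing more than a handful of labeled examples placed in the prompt itself. A sketch of the prompt construction (the completion API call is omitted, and the examples are invented):

```python
examples = [
    ("Reset my SSO password", "password_reset"),
    ("Laptop won't connect to the VPN", "vpn_issue"),
    ("Order a new monitor", "hardware_request"),
]
query = "I forgot my login credentials"

prompt = "Classify each IT request.\n\n"
for text, label in examples:            # the few "shots"
    prompt += f"Request: {text}\nLabel: {label}\n\n"
prompt += f"Request: {query}\nLabel:"   # the model completes the label

print(prompt)
```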

With transfer learning and few-shot learning on top of powerful open source models, ordinary businesses can finally buy tickets to the arena of machine learning. But while training ML with transfer learning takes several orders of magnitude less data, achieving robust performance requires going a step further.

That step is collective learning, which comes into play when many individual companies want to automate the same use case. Whereas each company is limited to small data, third-party AI solutions can use collective learning to consolidate those small data sets, creating a large enough corpus for sophisticated ML. In the case of language understanding, this means abstracting sentences that are specific to one company to uncover underlying structures:

Above: Collective learning involves abstracting data, in this case sentences, with ML to uncover universal patterns and structures.

Image Credit: Moveworks
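One way to picture the abstraction step: replace company-specific tokens with placeholders so that sentences from many customers share a structure and can be pooled into a single training corpus. The vocabularies below are invented for illustration.

```python
import re

# Hypothetical per-customer vocabularies
COMPANY_VOCAB = {
    "acme": {"apps": ["AcmeMail", "AcmeWiki"], "teams": ["Acme IT"]},
    "globex": {"apps": ["GlobexChat"], "teams": ["Globex Helpdesk"]},
}

def abstract(sentence: str, company: str) -> str:
    """Replace company-specific names with generic placeholders."""
    vocab = COMPANY_VOCAB[company]
    for app in vocab["apps"]:
        sentence = re.sub(re.escape(app), "<APP>", sentence)
    for team in vocab["teams"]:
        sentence = re.sub(re.escape(team), "<TEAM>", sentence)
    return sentence

pooled = [
    abstract("Can't log in to AcmeMail, already emailed Acme IT", "acme"),
    abstract("GlobexChat is down, has Globex Helpdesk seen this?", "globex"),
]
print(pooled)  # both sentences now share the same <APP>/<TEAM> structure
```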

The combination of transfer learning and collective learning, among other techniques, is quickly redrawing the limits of enterprise ML. For example, pooling together multiple customers' data can significantly improve the accuracy of models designed to understand the way their employees communicate. Well beyond understanding language, of course, we're witnessing the emergence of a new kind of workplace, one powered by machine learning on small data.

View original post here:
The secrets of small data: How machine learning finally reached the enterprise - VentureBeat


Bespoken Spirits raises $2.6M in seed funding to combine machine learning and accelerated whiskey aging – TechCrunch

Bespoken Spirits, a Silicon Valley spirits company that has developed a new data-driven process to accelerate the aging of whiskey and create specific flavors, today announced that it has raised a $2.6 million seed funding round. Investors include Clos de la Tech owner T.J. Rodgers and baseball's Derek Jeter.

The company was co-founded by former Bloom Energy, BlueJeans and Mixpanel exec Stu Aaron and another Bloom Energy alum, Martin Janousek, whose name can be found on a fair number of Bloom Energy patents.

Bespoken isn't the first startup to venture into accelerated aging, a process that tries to minimize the time it takes to age these spirits, which is typically done in wooden barrels. The company argues that it's the first to combine that with a machine learning-based approach, through what it calls its ACTivation technology.

Rather than putting the spirit in a barrel and passively waiting for nature to take its course, and just rolling the dice and seeing what happens, we instead use our proprietary ACTivation technology, with the A, C and T standing for aroma, color and taste, to instill the barrel into the spirit, and actively control the process and the chemical reactions in order to deliver premium quality tailored spirits, and to be able to do that in just days rather than decades, explained Aaron.


And while there is surely a lot of skepticism around this technology, especially in a business that typically prides itself on its artisanal approach, the company has won prizes at a number of competitions. The team argues that traditional barrel aging is a wasteful process, where you lose 20% of the product through evaporation, and one that is hard to replicate. And because of how long it takes, it also creates financial challenges for upstarts in this business and it makes it hard to innovate.

As the co-founders told me, there are three pillars to its business: selling its own brand of spirits, maturation-as-a-service for rectifiers and distillers, and producing custom private-label spirits for retailers, bars and restaurants. At first, the team mostly focused on the latter two, and especially its maturation-as-a-service business. Right now, Aaron noted, a lot of craft distilleries are facing financial strains and need to unlock their inventory and get their product to market sooner, and maybe at a better quality and hence higher price point than they previously could.

There's also the existing market of rectifiers, who, at least in the U.S., take existing products and blend them. They, too, are looking for ways to improve their processes and make them more replicable.

Interestingly, a lot of breweries, too, are now sitting on excess or expired beer because of the pandemic. They're realizing that rather than paying somebody to dispose of that beer and take it back, they can actually recycle, or maybe upcycle is a better word, the beer by distilling it into whiskey, Aaron said. But unfortunately, when a brewery distills beer into whiskey, it's typically not very good whiskey. And that's where we come in. We can take that beer bin, as a lot of people call initial distillation, and we can convert it into a premium-quality whiskey.


Bespoken is also working with a few grocery chains, for example, to create bespoke whiskeys for their house brands that match the look and flavor of existing brands or that offer completely new experiences.

The way the team does this is by collecting a lot of data throughout its process and then having a tasting panel describe the product for them. Using that data and feeding it into its systems, the company can then replicate the results or tweak them as necessary without having to wait for years for a barrel to mature.

We're collecting all this data, and some of the data that we're collecting today, we don't even know yet what we're going to use it for, Janousek said. Using its proprietary techniques, Bespoken will often create dozens of samples for a new customer and then help them whittle those down.
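As a purely hypothetical sketch of what such a data loop could look like, one might regress tasting-panel scores on measured process parameters and then query the model for candidate settings; every feature name and number below is invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
# Invented process parameters: wood surface area, toast level,
# temperature, days of treatment (one row per past run)
X = rng.random((200, 4))
# Invented panel scores for aroma, color and taste (the "ACT" targets)
y = X @ rng.random((4, 3)) + 0.05 * rng.standard_normal((200, 3))

# Multi-output regression from process settings to flavor profile
model = RandomForestRegressor().fit(X, y)

# Predict the flavor profile a candidate process setting would produce
candidate = rng.random((1, 4))
print(model.predict(candidate))  # -> [[aroma, color, taste]]
```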

I often like to describe our company as a cross between 23andMe, Nespresso and Impossible Foods, Aaron said. We're like 23andMe because, again, we're trying to map the customer preference to the recipe to results. There is this big data, genome mapping kind of a thing. And we're like Nespresso because our machine takes spirit and supply pods and produces results, although obviously we're industrial scale and they're not. And it's like Impossible Foods, because it's totally redefining an age-old, antiquated model to be completely different.

The company plans to use the new funding to accelerate its market momentum and build out its technology. Its house brand is currently available for sale in California, Wisconsin and New York.

The company's ability to deliver both quality and variety is what really caught my attention and made me want to invest, said T.J. Rodgers. In a short period of time, they've already produced an incredible range of top-notch spirits, from whiskeys to rum, brandy and tequila, all independently validated time and again in blind tastings and prestigious competitions.

Full disclaimer: The company sent me a few samples. I'm not enough of a whiskey aficionado to review those, but I did enjoy them (responsibly).

See the article here:
Bespoken Spirits raises $2.6M in seed funding to combine machine learning and accelerated whiskey aging - TechCrunch
