Category Archives: Machine Learning

Effective Machine Learning Needs Leadership Not AI Hype – The Machine Learning Times

Capitalizing on this technology is critical, but it's notoriously difficult to launch. Many ML projects never progress beyond modeling, the number-crunching phase. Industry surveys repeatedly show that most new ML initiatives don't make it to deployment, where the value would be realized.

Hype contributes to this problem. ML is mythologized, misconstrued as "intelligent" when it is not. It's also mismeasured as "highly accurate," even when that notion is irrelevant and misleading. For now, these adulations largely drown out the words of consternation, but those words are bound to increase in volume.

Take self-driving cars. In the most publicly visible cautionary tale about ML hype, overzealous promises have led to slamming on the brakes and slowing progress. As The Guardian put it, "The driverless car revolution has stalled." This is a shame, as the concept promises greatness. Someday, it will prove to be a revolutionary application of ML that greatly reduces traffic fatalities. This will require a lengthy transformation that is "going to happen over 30 years and possibly longer," according to Chris Urmson, formerly the CTO of Google's self-driving team and now the CEO of Aurora, which bought out Uber's self-driving unit. But in the mid-2010s, the investment and fanatical hype, including grandiose tweets by Tesla CEO Elon Musk, reached a premature fever pitch. Truly impressive driver-assistance capabilities were branded as "Full Self-Driving" and advertised as being on the brink of widespread, completely autonomous driving, that is, self-driving that allows you to nap in the back seat.

Expectations grew, followed by . . . a conspicuous absence of self-driving cars. Disenchantment took hold, and by the early 2020s investments had dried up considerably. Self-driving is doomed to be this decade's jetpack.

What went wrong? Underplanning is an understatement. It wasn't so much a matter of overselling ML itself, that is, of exaggerating how well predictive models can, for example, identify pedestrians and stop signs. Instead, the greater problem was the dramatic downplaying of deployment complexity. Only a comprehensive, deliberate plan could possibly manage the inevitable string of impediments that arise while slowly releasing such vehicles into the world. After all, we're talking about ML models autonomously navigating large, heavy objects through the midst of our crowded cities! One tech journalist poignantly dubbed them "self-driving bullets." When it comes to operationalizing ML, autonomous driving is literally where the rubber hits the road. More than any other ML initiative, it demands a shrewd, incremental deployment plan that doesn't promise unrealistic timelines.

The ML industry has nailed the development of potentially valuable models, but not their deployment. A report prepared by the AI Journal, based on surveys by Sapio Research, showed that the top pain point for data teams is "Delivering business impact now through AI." Ninety-six percent of those surveyed checked that box. That challenge beat out a long list of broader data issues outside the scope of AI per se, including data security, regulatory compliance, and various technical and infrastructure challenges. But when presented with a model, business leaders refuse to deploy. They just say no. The disappointed data scientist is left wondering, "You can't . . . or you won't?" It's a mixture of both, according to my survey with KDnuggets (see responses to the question, "What is the main impediment to model deployment?"). Technical hurdles mean that they can't. A lack of approval, including when decision makers don't consider model performance strong enough or when there are privacy or legal issues, means that they won't.

Another survey also told this "some can't and some won't" story. After ML consultancy Rexer Analytics' survey of data scientists asked why models intended for deployment don't get there, founder Karl Rexer told me that respondents wrote in two main reasons: "The organization lacks the proper infrastructure needed for deployment" and "People in the organization don't understand the value of ML."

Unsurprisingly, the latter group of data scientists, the "won'ts" rather than the "can'ts," sound the most frustrated, Karl says.

Whether they can't or they won't, the lack of a well-established business practice is almost always to blame. Technical challenges abound for deployment, but they don't stand in the way so long as project leaders anticipate and plan for them. With a plan that provides the time and resources needed to handle model implementation, sometimes major construction, deployment will proceed. Ultimately, it's not so much that they can't but that they won't.

About the Author

Eric Siegel, Ph.D., is a leading consultant and former Columbia University professor who helps companies deploy machine learning. He is the founder of the long-running Machine Learning Week conference series and its new sister, Generative AI World, the instructor of the acclaimed online course Machine Learning Leadership and Practice End-to-End Mastery, executive editor of The Machine Learning Times, and a frequent keynote speaker. He wrote the bestselling Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, which has been used in courses at hundreds of universities, as well as The AI Playbook: Mastering the Rare Art of Machine Learning Deployment. Eric's interdisciplinary work bridges the stubborn technology/business gap. At Columbia, he won the Distinguished Faculty award when teaching the graduate computer science courses in ML and AI. Later, he served as a business school professor at UVA Darden. Eric also publishes op-eds on analytics and social justice.

Eric has appeared on Bloomberg TV and Radio, BNN (Canada), Israel National Radio, National Geographic Breakthrough, NPR Marketplace, Radio National (Australia), and TheStreet. Eric and his books have been featured in Big Think, Businessweek, CBS MoneyWatch, Contagious Magazine, The European Business Review, Fast Company, The Financial Times, Forbes, Fortune, GQ, Harvard Business Review, The Huffington Post, The Los Angeles Times, Luckbox Magazine, MIT Sloan Management Review, The New York Review of Books, The New York Times, Newsweek, Quartz, Salon, The San Francisco Chronicle, Scientific American, The Seattle Post-Intelligencer, Trailblazers with Walter Isaacson, The Wall Street Journal, The Washington Post, and WSJ MarketWatch.

See original here:
Effective Machine Learning Needs Leadership Not AI Hype - The Machine Learning Times

Posted in Machine Learning | Comments Off on Effective Machine Learning Needs Leadership Not AI Hype – The Machine Learning Times

AI chip startup Groq acquires Definitive Intelligence to scale its cloud platform – SiliconANGLE News

Groq Inc., a well-funded maker of artificial intelligence inference chips, has acquired fellow startup Definitive Intelligence Inc. for an undisclosed sum.

The companies announced the transaction today. The deal will help Groq enhance the capabilities of its newest offering, a cloud platform called GroqCloud that provides on-demand access to its AI chips.

Groq was founded in 2016 by Chief Executive Officer Jonathan Ross, a former Google LLC engineer who invented the search giant's TPU machine learning processors. The company is backed by more than $360 million in funding. It raised the bulk of that capital through a Series C round co-led by Tiger Global Management and D1 Capital in early 2021.

Groq's flagship product is an AI chip known as the LPU Inference Engine. It's optimized to power large language models with a focus on inference, or the task of running an AI in production after it has been trained. In a November benchmark test, Groq's LPU set an inference speed record while running Meta Platforms Inc.'s popular Llama 2 70B LLM.

The LPU consists of cores dubbed TSPs that each include about 230 megabytes of memory. According to Groq, the TSPs are linked together by an on-chip network that provides detailed information on how much time it takes data to travel between the different cores. This information helps speed up LLM response times.

The faster a piece of data reaches a chip core via the onboard network, the sooner processing can begin. The information that the LPU provides about its onboard network allows LLMs to identify the fastest data travel routes and use them to speed up computations. Groq claims its chip can perform inference up to 10 times faster than competing products.
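
For a concrete feel of the routing idea, here is a purely illustrative sketch, not Groq's compiler, scheduler, or any real API: given a hypothetical table of known core-to-core transfer latencies, a shortest-path search picks the fastest route for a piece of data before execution begins.

```python
# Illustrative sketch only -- not Groq's actual scheduler or API. It assumes a
# hypothetical latency table between cores to show how deterministic, known
# transfer times let a compiler pick the fastest route for data ahead of time.
import heapq

# Hypothetical one-hop latencies (in nanoseconds) between four cores.
LATENCY_NS = {
    ("A", "B"): 12, ("B", "A"): 12,
    ("B", "C"): 9,  ("C", "B"): 9,
    ("A", "D"): 30, ("D", "A"): 30,
    ("C", "D"): 8,  ("D", "C"): 8,
}

def fastest_route(src: str, dst: str) -> tuple[float, list[str]]:
    """Dijkstra over the known latency table: returns (total_ns, path)."""
    best = {src: 0.0}
    queue = [(0.0, src, [src])]
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        for (a, b), ns in LATENCY_NS.items():
            if a == node and cost + ns < best.get(b, float("inf")):
                best[b] = cost + ns
                heapq.heappush(queue, (cost + ns, b, path + [b]))
    return float("inf"), []

print(fastest_route("A", "D"))  # (29.0, ['A', 'B', 'C', 'D']) beats the direct 30 ns hop
```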

Definitive Intelligence, the company Groq has acquired, is a Palo Alto, California-based analytics provider that previously raised more than $10 million in funding. It offers an AI-powered application that enables users to query datasets with natural language instructions. The software also lends itself to related tasks such as creating data visualizations.

In announcing the acquisition, Groq said that Definitive Intelligence had helped it build GroqCloud, a recently launched platform through which it provides on-demand access to LPUs. Developers can use the platform to familiarize themselves with the company's chips and build applications optimized for their architecture. A built-in library of learning resources promises to ease the onboarding process.

Following the acquisition, Definitive Intelligence co-founder and CEO Sunny Madra will join Groq to lead the business unit in charge of GroqCloud. The unit's initial priorities include expanding the platform's capacity and growing its user base. Groq said that the acquisition will also support the launch of a second division, Groq Systems, that will focus on helping organizations such as government agencies deploy the company's LPUs.

Continue reading here:
AI chip startup Groq acquires Definitive Intelligence to scale its cloud platform - SiliconANGLE News

Posted in Machine Learning | Comments Off on AI chip startup Groq acquires Definitive Intelligence to scale its cloud platform – SiliconANGLE News

Firms Turn to AI for Smarter Cybersecurity Solutions – PYMNTS.com

Google CEO Sundar Pichai recently noted that artificial intelligence (AI) could boost online security, a sentiment echoed by many industry experts.

AI is transforming how security teams handle cyber threats, making their work faster and more efficient. By analyzing vast amounts of data and identifying complex patterns, AI automates the initial stages of incident investigation. The new methods allow security professionals to begin their work with a clear understanding of the situation, speeding up response times.

"Tools like machine learning-based anomaly detection systems can flag unusual behavior, while AI-driven security platforms offer comprehensive threat intelligence and predictive analytics," Timothy E. Bates, chief technology officer at Lenovo, told PYMNTS in an interview. "Then there's deep learning, which can analyze malware to understand its structure and potentially reverse-engineer attacks. These AI operatives work in the shadows, continuously learning from each attack to not just defend but also to disarm future threats."
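
As a rough illustration of the anomaly-detection idea Bates describes, and not any vendor's product, the sketch below trains an unsupervised model on invented "normal" login telemetry and flags an event that sits far outside that baseline.

```python
# Minimal sketch of ML-based anomaly detection on login telemetry -- an
# illustration of the idea, not any vendor's product. Features and numbers
# are made up for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per login event: [hour_of_day, MB_downloaded, failed_attempts]
normal = np.column_stack([
    rng.normal(14, 3, 500),      # mostly business hours
    rng.normal(20, 5, 500),      # modest data transfer
    rng.poisson(0.2, 500),       # rare failed attempts
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspicious = np.array([[3, 900, 7]])   # 3 a.m., 900 MB pulled, 7 failed logins
print(detector.predict(suspicious))    # expected: [-1] -> flagged as an anomaly
```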

Cybercrime is a growing problem as more of the world embraces the connected economy. Losses from cyberattacks totaled at least $10.3 billion in the U.S. in 2022, per an FBI report.

The tools used by attackers and defenders are constantly changing and increasingly complex, Marcus Fowler, CEO of cybersecurity firm Darktrace Federal, said in an interview with PYMNTS.

"AI represents the greatest advancement in truly augmenting the current cyber workforce, expanding situational awareness, and accelerating mean time to action to allow them to be more efficient, reduce fatigue, and prioritize cyber investigation workloads," he said.

As cyberattacks continue to rise, improving defense tools is becoming increasingly important. Britain's GCHQ intelligence agency recently warned that new AI tools could lead to more cyberattacks, making it easier for beginner hackers to cause harm. The agency also said that the latest technology could increase ransomware attacks, where criminals lock files and ask for money, according to a report by GCHQ's National Cyber Security Centre.

Google's Pichai pointed out that AI is helping to speed up how quickly security teams can spot and stop attacks. This innovation helps defenders, who have to catch every attack to keep systems safe, while attackers only need to succeed once to cause trouble.

While AI may enhance the capabilities of cyberattackers, it equally empowers defenders against security breaches.

"Artificial intelligence has the potential to benefit the field of cybersecurity far beyond just automating routine tasks," Piyush Pandey, CEO of cybersecurity firm Pathlock, noted in an interview with PYMNTS. As rules and security needs keep growing, he said, the amount of data for governance, risk management and compliance (GRC) is increasing so much that it may soon become too much to handle.

"Continuous, automated monitoring of compliance posture using AI can and will drastically reduce manual efforts and errors," he said. "More granular, sophisticated risk assessments will be available via ML [machine learning] algorithms, which can process vast amounts of data to identify subtle risk patterns, offering a more predictive approach to reducing risk and financial losses."

Using AI to spot specific patterns is one way to catch hackers who keep getting better at what they do. Today's hackers are good at avoiding usual security checks, so many groups are using AI to catch them, Mike Britton, CISO at Abnormal Security, told PYMNTS in an interview. He said that one way AI can be used in cyber defense is through behavioral analytics. Instead of just searching for known bad signs like dangerous links or suspicious senders, AI-based solutions can spot unusual activity that doesn't fit the normal pattern.

"By baselining normal behavior across the email environment, including typical user-specific communication patterns, styles, and relationships, AI could detect anomalous behavior that may indicate an attack, regardless of whether the content was authored by a human or by generative AI tools," he added.
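
A toy version of that baselining idea might look like the following sketch, offered for illustration only; it is not Abnormal Security's implementation, and the feature and threshold are assumptions: learn each sender's typical daily volume, then flag days that deviate sharply from it.

```python
# A toy per-user baseline, sketched to illustrate the behavioral-analytics idea
# described above (not Abnormal Security's implementation). It learns each
# sender's typical daily outbound volume and flags days that deviate sharply.
from collections import defaultdict
from statistics import mean, pstdev

history = defaultdict(list)   # sender -> list of daily outbound message counts

def record_day(sender: str, messages_sent: int) -> None:
    history[sender].append(messages_sent)

def is_anomalous(sender: str, messages_sent: int, z_threshold: float = 3.0) -> bool:
    """Flag a day whose volume sits far outside the sender's own baseline."""
    baseline = history[sender]
    if len(baseline) < 14:            # need enough history before judging
        return False
    mu, sigma = mean(baseline), pstdev(baseline) or 1.0
    return abs(messages_sent - mu) / sigma > z_threshold

# Example: a user who normally sends ~20 messages a day suddenly sends 400.
for day in range(30):
    record_day("alice@example.com", 20 + (day % 3))
print(is_anomalous("alice@example.com", 400))   # True -> worth investigating
```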

AI systems can distinguish between fake and real attacks by recognizing ransomware behavior. "The system can swiftly identify suspicious behavior, including unauthorized key generation," Zack Moore, a product security manager at InterVision, said in an interview with PYMNTS.

Generative AI, especially large language models (LLMs), allows organizations to simulate potential attacks and identify their weaknesses. Moore said that the most effective use of AI in uncovering and dissecting attacks lies in ongoing penetration testing.

"Instead of simulating an attack once every year, organizations can rely on AI-empowered penetration testing to constantly verify their systems' fortitude," he said. "Furthermore, technicians can review the tool's logs to reverse-engineer a solution after identifying a vulnerability."

The game of cat and mouse between attackers and defenders using AI is likely to continue indefinitely. Meanwhile, consumers are concerned about how to keep their data safe. A recent PYMNTS Intelligence study showed that people who love using online shopping features care the most about keeping their data safe, with 40% of shoppers in the U.S. saying it's their top worry or very important.

Originally posted here:
Firms Turn to AI for Smarter Cybersecurity Solutions - PYMNTS.com

Posted in Machine Learning | Comments Off on Firms Turn to AI for Smarter Cybersecurity Solutions – PYMNTS.com

This Week in AI: A Battle for Humanity or Profits? – PYMNTS.com

There's some infighting going on in the artificial intelligence (AI) world, and one prominent billionaire claims the future of the human race is at stake. Elon Musk is taking legal action against Microsoft-backed OpenAI and its CEO, Sam Altman, alleging the company has strayed from its original mission to develop artificial intelligence for the collective benefit of humanity.

Musk's attorneys filed a lawsuit on Thursday (Feb. 29) in San Francisco, asserting that in 2015, Altman and Greg Brockman, co-founders of OpenAI, approached Musk to assist in establishing a nonprofit focused on advancing artificial general intelligence for the betterment of humanity.

Although Musk helped initiate OpenAI in 2015, he departed from its board in 2018. Previously, in 2014, he had voiced concerns about the risks associated with AI, suggesting it could pose more significant dangers than nuclear weapons.

The lawsuit highlights that OpenAI, Inc. still claims on its website to prioritize ensuring that artificial general intelligence benefits all of humanity. However, the suit contends that in reality, OpenAI, Inc. has evolved into a closed-source entity effectively operating as a subsidiary of Microsoft, the world's largest technology company.

When it comes to cybersecurity, AI brings both risks and rewards. Google CEO Sundar Pichai and other industry leaders say artificial intelligence is key to enhancing online security. AI can accelerate and streamline the management of cyber threats. It leverages vast datasets to identify patterns, automating early incident analysis and enabling security teams to quickly gain a comprehensive view of threats, thus hastening their response.

Lenovo CTO Timothy E. Bates told PYMNTS that AI-driven tools, such as machine learning for anomaly detection and AI platforms for threat intelligence, are pivotal. Deep learning technologies dissect malware to decipher its composition and potentially deconstruct attacks. These AI systems operate behind the scenes, learning from attacks to bolster defense and neutralize future threats.

With the global shift toward a connected economy, cybercrime is escalating, causing significant financial losses, including an estimated $10.3 billion in the U.S. alone in 2022, according to the FBI.

Get set for lots more books that are authored or co-authored by AI. Inkitt, a startup leveraging artificial intelligence (AI) to craft books, has secured $37 million. Inkitt's app enables users to self-publish their narratives. By employing AI and data analytics, it selects stories for further development and markets them on its Galatea app.

This technological shift offers both opportunities and challenges.

Zachary Weiner, CEO of Emerging Insider Communications, which focuses on publishing, shared his insights on the impact of AI on writing with PYMNTS. Writers gain significantly from the vast new toolkit AI provides, enhancing their creative process with AI-generated prompts and streamlining tasks like proofreading. AI helps them overcome traditional brainstorming limits, allowing for the fusion of ideas into more intricate narratives. It simplifies refining their work, letting them concentrate on their primary tasks.

But he warns of the pitfalls AI introduces to the publishing world. "AI is making its way into all aspects of writing and content creation, posing a threat to editorial roles," he said. "The trend towards replacing human writers with AI for cost reduction and efficiency gains is not just a possibility but a current reality."

The robots are coming, and they are getting smarter. New advancements in artificial intelligence (AI) are making it possible for companies to create robots with better features and improved abilities to interact with humans.

Figure AI has raised $675 million to develop AI-powered humanoid robots. Investors include Jeff Bezos' Explore Investments and tech giants like Microsoft, Amazon, Nvidia, OpenAI, and Intel. Experts say this investment shows a growing interest in robotics because of AI.

According to Sarah Sebo, an assistant professor of computer science at the University of Chicago, AI can help robots understand their surroundings better, recognize objects and people more accurately, communicate more naturally with humans and improve their abilities over time through feedback.

Last March, Figure AI introduced the Figure 01 robot, designed for various tasks, from industrial work to household chores. Equipped with AI, this robot mimics human movements and interactions.

The company hopes these robots will take on risky or repetitive tasks, allowing humans to focus on more creative work.

Read more from the original source:
This Week in AI: A Battle for Humanity or Profits? - PYMNTS.com

Posted in Machine Learning | Comments Off on This Week in AI: A Battle for Humanity or Profits? – PYMNTS.com

Causal AI: AI Confesses Why It Did What It Did – InformationWeek

The holy grail in AI development is explainable AI, which is a means to reveal the decision-making processes that the AI model used to arrive at its output. In other words, we humans want to know why the AI did what it did before we staked our careers, lives, or businesses on its outputs.

Causal AI requires models to explain their predictions. "In its simplest form, the explanation is a graph representing a cause-and-effect chain," says George Williams, GSI Technology's director of ML, data science and embedded AI. "In its modern form, it's a human-understandable explanation in the form of text," he says.

Typically, AI models have no auditable trails in their decision-making, no self-reporting mechanisms, and no way to peer behind the cloaking curtains of increasingly complicated algorithms.

"Traditional predictive AI can be likened to a black box where it's nearly impossible to tell what drove an individual result," says Phil Johnson, VP data solutions at mPulse.

As a result, humans can trust hardly anything an AI model delivers. The output could be a hallucination -- a lie, fabrication, miscalculation, or a fairytale, depending on how generous you want to be in labeling such errors and what type of AI model is being used.

"GenAI models still have the unfortunate side effect of hallucinating or making up facts sometimes. This means they can also hallucinate their explanations. Hallucination mitigation is a rapidly evolving area of research, and it can be difficult for organizations to keep up with the latest research/techniques," says Williams.

On the other hand, that same AI model could reveal a profound truth humans cannot see because their view is obscured by huge volumes of data.

Just as the proverbial army of monkeys pounding on keyboards may one day produce a great novel, crowds of humans may one day trip across an important insight buried in ginormous stores of data. Or we can lean on the speed of AI to find a useful answer now and focus on teaching it to reveal how it came to that conclusion. The latter is far more manageable than the former.

If one gets anything out of the experience of working with AI, it should be the re-discovery of the marvel that is the human brain. The more we fashion AI after our own brains, the more ways we find it a mere shadow of our own astounding capabilities.

And that's not a diss on AI, which is a truly astounding invention and itself a testament to human capabilities. Nonetheless, the creators truly want to know what the creation is actually up to.

Most AI/ML is correlational in nature, not causal, explains David Guarrera, EY Americas generative AI leader. "So, you can't say much about the direction of the effect. If age and salary correlate, you don't technically know if being older CAUSES you to have more money or money CAUSES you to age," he says.

Most of us would intuitively agree that it's the lack of money that causes one to age, but we can't reliably depend on our intuition to evaluate the AI's output. Neither can we rely on AI to explain itself -- mostly because it wasn't designed to do so.

"In many advanced machine learning models such as deep learning, massive amounts of data are ingested to create a model," says Judith Hurwitz, chief evangelist at Geminos Software and author of Causal Artificial Intelligence: The Next Step in Effective Business AI. "One of the key issues with this approach to AI is that the models created by the data cannot be easily understood by the business. They are, therefore, not explainable. In addition, it is easy to create a biased result depending on the quality of the data used to create the model," she says.

This issue is commonly referred to as AI's black box. Breaking into the innards of an AI model to retrieve the details of its decision-making is no small task, technically speaking.

"This involves the use of causal inference theories and graphical models, such as directed acyclic graphs (DAGs), which help in mapping out and understanding the causal relationships between variables," says Ryan Gross, head of data and applications at Caylent. "By manipulating one variable, causal AI can observe and predict how this change affects other variables, thereby identifying cause-and-effect relationships."
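
To make the distinction concrete, here is a minimal structural-causal-model sketch; the DAG, coefficients, and noise terms are invented for illustration. It shows how a do()-style intervention, setting a variable by fiat, recovers an effect that the raw correlation overstates because of a confounder.

```python
# A minimal structural-causal-model sketch (illustrative only) showing how
# "manipulating one variable" -- a do()-style intervention -- differs from
# simply reading off a correlation. The DAG here is Z -> X, Z -> Y, X -> Y.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
TRUE_EFFECT = 2.0          # causal effect of X on Y in the simulated world

def sample(do_x=None):
    z = rng.normal(size=N)                      # confounder
    x = z + rng.normal(size=N) if do_x is None else np.full(N, do_x)
    y = TRUE_EFFECT * x + 3.0 * z + rng.normal(size=N)
    return x, y

# Observational (correlational) slope of Y on X -- biased by the confounder Z.
x_obs, y_obs = sample()
obs_slope = np.cov(x_obs, y_obs)[0, 1] / np.var(x_obs)

# Interventional estimate: set X by fiat and compare outcomes.
_, y_do1 = sample(do_x=1.0)
_, y_do0 = sample(do_x=0.0)
causal_effect = y_do1.mean() - y_do0.mean()

print(f"observational slope ~ {obs_slope:.2f}")        # ~3.5, inflated by Z
print(f"interventional effect ~ {causal_effect:.2f}")  # ~2.0, the true effect
```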

Traditional AI models are fixed in time and understand nothing. Causal AI is a different animal entirely.

"Causal AI is dynamic, whereas comparable tools are static. Causal AI represents how an event impacts the world later. Such a model can be queried to find out how things might work," says Brent Field at Infosys Consulting. "On the other hand, traditional machine learning models build a static representation of what correlates with what. They tend not to work well when the world changes, something statisticians call nonergodicity," he says.

It's important to grok why this one point of nonergodicity is such a crucial difference to almost everything we do.

"Nonergodicity is everywhere. It's this one reason why money managers generally underperform the S&P 500 index funds. It's why election polls are often off by many percentage points. Commercial real estate and global logistics models stopped working about March 15, 2020, because COVID caused this massive supply-side economic shock that is still reverberating through the world economy," Field explains.
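
To see the point in miniature, the sketch below, an invented example rather than anything from Field or Infosys, fits a simple pricing model on data from one regime and then scores it after the data-generating process shifts; the static model's error jumps.

```python
# A toy illustration of the nonergodicity point (assumed example): a
# relationship learned from "old world" data stops holding after a structural
# shock changes the data-generating process.
import numpy as np

rng = np.random.default_rng(42)

def demand(price, regime_shift=False):
    base = 100 - 2.5 * price if not regime_shift else 60 - 0.5 * price
    return base + rng.normal(0, 2, size=price.shape)

prices = rng.uniform(10, 30, 2_000)
train_y = demand(prices)                             # pre-shock data
slope, intercept = np.polyfit(prices, train_y, 1)    # static model of the old world

new_prices = rng.uniform(10, 30, 2_000)
new_y = demand(new_prices, regime_shift=True)        # the world changed
pred = slope * new_prices + intercept

print("error before shift:", round(np.abs(train_y - (slope * prices + intercept)).mean(), 1))
print("error after shift: ", round(np.abs(new_y - pred).mean(), 1))   # several times larger
```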

Without knowing the cause of an event or potential outcome, the knowledge we extract from AI is largely backward facing even when it is forward predicting. Outputs based on historical data and events alone are by nature handicapped and sometimes useless. Causal AI seeks to remedy that.

Causal models allow humans to be much more involved and aware of the decision-making process. "Causal models are explainable and debuggable by default -- meaning humans can trust and verify results -- leading to higher trust," says Joseph Reeve, software engineering manager at Amplitude. "Causal models also allow human expertise through model design to be leveraged when training a model, as opposed to traditional models that need to be trained from scratch, without human guidance," he says.

Can causal AI be applied even to GenAI models? In a word, yes.

"We could use causal AI to analyze a large amount of data and pair it with GenAI to visualize the analysis using graphics or explanations," says Mohamed Abdelsadek, EVP data, insights, and analytics at Mastercard. "Or, on the flip side, GenAI could be engaged to identify the common analysis questions at the beginning, such as the pictures of damage caused by a natural event, and causal AI would be brought in to execute the data processing and analysis," he says.

There are other ways causal AI and GenAI can work together, too.

"Generative AI can be an effective tool to support causal AI. However, keep in mind that GenAI is a tool, not a solution," says Geminos Software's Hurwitz. "One of the emerging ways that GenAI can be hugely beneficial in causal AI is to use these tools to analyze subject matter information stored in both structured and unstructured formats. One of the essential areas needed to create an effective causal AI solution is the need for what is called causal discovery -- determining what data is needed to understand cause and effect," she says.

Does this mean that causal AI is a panacea for all of AI or that it is an infallible technology?

"Causal AI is a nascent field. Because the technology is not completely developed yet, the error rates tend to be higher than expected, especially in domains that don't have sufficient training for the AI system," says Flavio Villanustre, global chief information security officer of LexisNexis Risk Solutions. "However, you should expect this to improve significantly with time."

So where does causal AI stand in the scheme of things?

In the 2022 Gartner Hype Cycle, causal AI was deemed more mature than, and ahead of, generative AI, says Ed Watal, founder and principal at Intellibus. However, unlike generative AI, causal AI has not yet found the mainstream use case and adoption that tools like ChatGPT have provided for generative AI models like GPT, he says.

Read this article:
Causal AI: AI Confesses Why It Did What It Did - InformationWeek

Posted in Machine Learning | Comments Off on Causal AI: AI Confesses Why It Did What It Did – InformationWeek

It's 10 p.m. Do You Know Where Your AI Models Are Tonight? – Dark Reading

If you thought the software supply chain security problem was difficult enough today, buckle up. The explosive growth in artificial intelligence (AI) use is about to make those supply chain issues exponentially harder to navigate in the years to come.

Developers, application security pros, and DevSecOps professionals are called to fix the highest risk flaws that lurk in what seems like the endless combinations of open source and proprietary components that are woven into their applications and cloud infrastructure. But it's a constant battle trying to even understand which components they have, which ones are vulnerable, and which flaws put them most at risk. Clearly, they're already struggling to sanely manage these dependencies in their software as it is.

What's going to get harder is the multiplier effect that AI stands to add to the situation.

AI and machine learning (ML)-enabled tools are software just the same as any other kind of application and their code is just as likely to suffer from supply chain insecurities. However, they add another asset variable to the mix that greatly increases the attack surface of the AI software supply chain: AI/ML models.

"What separates AI applications from every other form of software is that [they rely] in some way or fashion on a thing called a machine learning model," explains Daryan Dehghanpisheh, co-founder of Protect AI. "As a result, that machine learning model itself is now an asset in your infrastructure. When you have an asset in your infrastructure, you need the ability to scan your environment, identify where they are, what they contain, who has permissions, and what they do. And if you can't do that with models today, you can't manage them."

AI/ML models provide the foundation for an AI system's ability to recognize patterns, make predictions, make decisions, trigger actions, or create content. But the truth is that most organizations don't even know how to even start gaining visibility into all of the AI models embedded in their software. Models and the infrastructure around them are built differently than other software components, and traditional security and software tooling isn't built to scan for or understand how AI models work or how they're flawed. This is what makes them unique, says Dehghanpisheh, who explains that they're essentially hidden pieces of self-executing code.
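
As a rough sketch of what that visibility work involves, the example below walks a directory tree, fingerprints anything that looks like a model artifact, and notes an obvious permissions red flag. It is an illustration of the idea, not Protect AI's product or any standard tool, and the file extensions and path are assumptions.

```python
# A bare-bones inventory pass of the kind described above -- find model
# artifacts on disk, fingerprint them, and note who can write to them.
# Illustrative sketch only; extensions and the scan root are assumptions.
import hashlib
from pathlib import Path

MODEL_EXTENSIONS = {".pkl", ".pickle", ".pt", ".pth", ".onnx", ".h5", ".safetensors", ".joblib"}

def inventory_models(root: str):
    for path in Path(root).rglob("*"):
        if path.suffix.lower() not in MODEL_EXTENSIONS or not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        mode = path.stat().st_mode
        yield {
            "path": str(path),
            "sha256": digest,
            "size_bytes": path.stat().st_size,
            "world_writable": bool(mode & 0o002),   # a quick permissions red flag
        }

for asset in inventory_models("/opt/ml"):           # hypothetical directory
    print(asset)
```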

"A model, by design, is a self-executing piece of code. It has a certain amount of agency," says Dehghanpisheh. "If I told you that you have assets all over your infrastructure that you can't see, you can't identify, you don't know what they contain, you don't know what the code is, and they self-execute and have outside calls, that sounds suspiciously like a permission virus, doesn't it?"

Getting ahead of this issue was the big impetus behind him and his co-founders launching Protect AI in 2022, which is one of a spate of new firms cropping up to address model security and data lineage issues that are looming in the AI era. Dehghanpisheh and co-founder Ian Swanson saw a glimpse of the future when they worked previously together building AI/ML solutions at AWS. Dehghanpisheh had been the global leader for AI/ML solution architects.

"During the time that we spent together at AWS, we saw customers building AI/ML systems at an incredibly rapid pace, long before generative AI captured the hearts and minds of everyone from the C-suite to Congress," he says, explaining that he worked with a range of engineers and business development experts, as well as extensively with customers. "That's when we realized how and where the security vulnerabilities unique to AI/ML systems are."

They observed three basic things about AI/ML that had incredible implications for the future of cybersecurity, he says. The first was that the pace of adoption was so fast that they saw firsthand how quickly shadow IT entities were cropping up around AI development and business use that escaped the kind of governance that would oversee any other kind of development in the enterprise.

The second was that the majority of tools that were being used, whether commercial or open source, were built by data scientists and up-and-coming ML engineers who had never been trained in security concepts.

"As a result, you had really useful, very popular, very distributed, widely adopted tools that weren't built with a security-first mindset," he says.

As a result, many AI/ML systems and shared tools lack the basics in authentication and authorization and often grant too much read and write access in file systems, he explains. Coupled with insecure network configurations and then those inherent problems in the models, organizations start getting bogged down cascading security issues in these highly complex, difficult-to-understand systems.

"That made us realize that the existing security tools, processes, frameworks no matter how shift left you went, were missing the context that machine learning engineers, data scientists, and AI builders would need," he says.

Finally, the third major observation he and Swanson made during those AWS days was that AI breaches weren't coming. They had already arrived.

"We saw customers have breaches on a variety of AI/ML systems that should have been caught but weren't," he says. "What that told us is that the set and the processes, as well as the incident response management elements, were not purpose-built for the way AI/ML was being architected. That problem has become much worse as generative AI picked up momentum."

Dehghanpisheh and Swanson also started seeing how models and training data were creating a unique new AI supply chain that would need to be considered just as seriously as the rest of the software supply chain. Just like with the rest of modern software development and cloud-native innovation, data scientists and AI experts have fueled advancements in AI/ML systems through rampant use of open source and shared componentry including AI models and the data used to train them. So many AI systems, whether academic or commercial, are built using someone else's model. And as with the rest of modern development, the explosion in AI development keeps driving a huge daily influx of new model assets proliferated across the supply chain, which means keeping track of them just keeps getting harder.

Take Hugging Face, for example. This is one of the most widely used repositories of open source AI models online today; its founders say they want to be the GitHub of AI. Back in November 2022, Hugging Face users had shared 93,501 different models with the community. The following November, that had blown up to 414,695 models. Now, just three months later, that number has expanded to 527,244. This is an issue whose scope is snowballing by the day. And it is going to put the software supply chain security problem "on steroids," says Dehghanpisheh.

A recent analysis by his firm found thousands of models that are openly shared on Hugging Face can execute arbitrary code on model load or inference. While Hugging Face does some basic scanning of its repository for security issues, many models are missed along the way; at least half of the high-risk models discovered in the research were not deemed unsafe by the platform, and Hugging Face makes it clear in documentation that determining the safety of a model is ultimately the responsibility of its users.
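
One concrete check along those lines, offered as an illustration rather than a description of Hugging Face's or Protect AI's actual scanners: many shared models are Python pickles, and a pickle can run code when loaded, so a scanner can at least flag the opcodes that import or call objects before anyone deserializes the file.

```python
# Flag pickle opcodes capable of importing or calling objects -- the mechanism
# most model-loading exploits rely on. Illustrative sketch only; legitimate
# models also use these opcodes, so a hit means "review," not "malicious."
import pickletools

SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def flag_pickle(path: str) -> list[str]:
    """Return the names of code-execution-capable opcodes found in a pickle file."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS_OPCODES:
                findings.append(f"{opcode.name} {arg!r}" if arg is not None else opcode.name)
    return findings

# Usage (hypothetical file): anything other than an empty list deserves review
# before the model is ever loaded with pickle.load() or torch.load().
# print(flag_pickle("downloaded_model.pkl"))
```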

Dehghanpisheh believes the lynchpin of cybersecurity in the AI era will start first by creating a structured understanding of AI lineage. That includes model lineage and data lineage, which are essentially the origin and history of these assets, how they've been changed, and the metadata associated with them.
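
To make that concrete, here is a minimal sketch of what such a lineage record might capture; the schema and field names are assumptions for the example, not a standard.

```python
# A minimal model-lineage record of the kind described above -- origin,
# history, and metadata captured alongside the asset itself. Field names
# are assumptions for illustration, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class ModelLineage:
    model_name: str
    source_repo: str            # where the weights came from (e.g., a model hub repo)
    revision: str               # commit/revision pinned at download time
    sha256: str                 # fingerprint of the artifact actually deployed
    training_data_refs: list[str] = field(default_factory=list)
    upstream_models: list[str] = field(default_factory=list)   # models this one was fine-tuned from
    license: str = "unknown"
    scanned_clean: bool = False # result of the model scan described above

record = ModelLineage(
    model_name="support-ticket-classifier",          # hypothetical names throughout
    source_repo="example-org/base-llm",
    revision="a1b2c3d",
    sha256="<sha256-from-inventory-step>",
    training_data_refs=["s3://example-bucket/tickets-2023.parquet"],
    upstream_models=["example-org/base-llm"],
)
```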

"That's the first place to start. You can't fix what you can't see and what you can't know and what you can't define, right?" he says.

Meantime, on the daily operational level Dehghanpisheh believes organizations need to build out capabilities to scan their models, looking for flaws that can impact not only the hardening of the system but the integrity of its output. This includes issues like AI bias and malfunction that could cause real-world physical harm from, say, an autonomous car crashing into a pedestrian.

"The first thing is you need to scan," he says. "The second thing is you need to understand those scans. And the third is then once you have something that's flagged, you essentially need to stop that model from activating. You need to restrict its agency."

MLSecOps is a vendor-neutral movement that mirrors the DevSecOps movement in the traditional software world.

"Similar to the move from DevOps to DevSecOps, you've got to do two things at once. The first thing you've got to do is make the practitioners aware that security is a challenge and that it is a shared responsibility," Dehghanpisheh says. "The second thing you've got to do is give context and put security into tools that keep data scientists, machine learning engineers, [and] AI builders on the bleeding edge and constantly innovating, but allowing the security concerns to disappear into the background."

In addition, he says organizations are going to have to start adding governance, risk, and compliance policies and enforcement capabilities and incident response procedures that help govern the actions and processes that take place when insecurities are discovered. As with a solid DevSecOps ecosystem, this means that MLSecOps will need strong involvement from business stakeholders all the way up the executive ladder.

The good news is that AI/ML security is benefiting from one thing that no other rapid technology innovation has had right out of the gate: regulatory mandates.

"Think about any other technology transition," Dehghanpisheh says. "Name one time that a federal regulator or even state regulators have said this early on, 'Whoa, whoa, whoa, you've got to tell me everything that's in it. You've got to prioritize knowledge of that system. You have to prioritize a bill of materials.' There isn't any."

This means that many security leaders are more likely to get buy-in to build out AI security capabilities a lot earlier in the innovation life cycle. One of the most obvious signs of this support is the rapid shift to sponsor new job functions at organizations.

"The biggest difference that the regulatory mentality has brought to the table is that in January of 2023, the concept of a director of AI security was novel and didn't exist. But by June, you started seeing those roles," Dehghanpisheh says. "Now they're everywhere and they're funded."

Read the original here:
It's 10 p.m. Do You Know Where Your AI Models Are Tonight? - Dark Reading

Posted in Machine Learning | Comments Off on It's 10 p.m. Do You Know Where Your AI Models Are Tonight? – Dark Reading