Category Archives: Machine Learning

AI Ethics Tempted But Hesitant To Use AI Adversarial Attacks Against The Evils Of Machine Learning, Including For Self-Driving Cars – Forbes

AI Ethics quandary about using adversarial attacks against Machine Learning even if done for purposes of goodness.

It is widely accepted wisdom to learn as much as you can about your adversaries.

Frederick the Great, the famous king of Prussia and a noted military strategist, stridently said this: "Great advantage is drawn from knowledge of your adversary, and when you know the measure of their intelligence and character, you can use it to play on their weakness."

Astutely leveraging an awareness of your adversaries is both a vigorous defense and a compelling offense-driven strategy in life. On the one hand, you can be better prepared for whatever your adversary might try to destructively do to you. The other side of that coin is that you are likely able to carry out better attacks against your adversary via the known and suspected weaknesses of any vaunted foe.

Per the historically revered statesman and ingenious inventor Benjamin Franklin, those that are on their guard and appear ready to receive their adversaries are in much less danger of being attacked, much more so than otherwise being unawares, supine, and negligent in preparation.

Why all this talk about adversaries?

Because one of the biggest concerns facing much of today's AI is that cyber crooks and other evildoers are deviously attacking AI systems using what is commonly referred to as adversarial attacks. This can cause an AI system to falter and fail to perform its designated functions. As you'll see in a moment, there are a variety of vexing AI Ethics and Ethical AI issues underlying the matter, such as ensuring that AI systems are protected against such scheming adversaries, see my ongoing and extensive coverage of AI Ethics at the link here and the link here, just to name a few.

Perhaps even worse than getting the AI to simply stumble, the adversarial attack can sometimes be used to get AI to perform as the wrongdoer wishes the AI to perform. The attacker can essentially trick the AI into doing the bidding of the malefactor. Whereas some adversarial attacks seek to disrupt or confound the AI, another equally if not more insidious form of deception involves getting the AI to act on the behalf of the attacker.

It is almost as though one might use a mind trick or hypnotic means to get a human to commit wrongful acts while the person remains blissfully unaware that they have been fooled into doing something they should not have done. To clarify, the act performed does not necessarily have to be wrong or illegal on its own merits. For example, a bank teller opening the safe or vault is not in itself a wrong or illegal act; the teller is doing what they are legitimately permitted to perform as a valid bank-approved task. Of course, if opening the vault allows a robber to steal the money and all of the gold bullion therein, the teller has been tricked into performing an act that they should not have undertaken in the given circumstances.

The use of adversarial attacks against AI has to a great extent arisen because of the way in which much of contemporary AI is devised. You see, this latest era of AI has tended to emphasize the use of Machine Learning (ML) and Deep Learning (DL). These are computational pattern matching techniques and technologies which have dramatically aided the advancement of modern-day AI systems. ML/DL is often used as a key element in many of the AI systems that you interact with daily, such as the use of conversational interactive systems or Natural Language Processing (NLP) akin to Alexa and Siri.

The manner in which ML/DL is designed and fielded provides a fertile opening for the leveraging of adversarial attacks. Cybercrooks generally can guess how the ML/DL was built. They can make reasoned guesses about how the ML/DL will react when put into use. There are only so many ways that ML/DL is usually constructed. As such, the evildoer hackers can try a slew of underhanded ML/DL adversarial tricks to get the AI to either go awry or do their bidding.

In contrast, during the prior era of AI systems, it was somewhat harder to undertake adversarial attacks since much of the AI was more idiosyncratic and written in a more proprietary or individualistic manner. You would have had a more challenging time trying to guess how the AI was constructed and also how it might react when placed into active use. In comparison, ML/DL is largely more predictable as to its susceptibilities (this is not always the case, and please know that I am broadly generalizing).

You might be thinking that if adversarial attacks can be targeted specifically at ML/DL, then certainly there should be a boatload of cybersecurity measures available to protect against those attacks. One would hope that those devising and releasing their AI applications would ensure that the app was able to securely fend off those adversarial attacks.

The answer is yes and no.

Yes, there exist numerous cybersecurity protections that can be used by and within ML/DL to guard against adversarial attacks. Unfortunately, the answer is also somewhat a no in that many of the AI builders are not especially versed in those protections or are not explicitly including those protections.

There are lots of reasons for this.

One is that some AI software engineers concentrate solely on the AI side and do not particularly care about the cybersecurity elements. They figure that someone else further along in the chain of making and releasing the AI will deal with any needed cybersecurity protections. Another reason for the lack of protection against adversarial attacks is that it can be a burden of sorts to the AI project. An AI project might be under a tight deadline to get the AI out the door. Adding into the mix a bunch of cybersecurity protections that need to be crafted or set up will potentially delay the production cycle of the AI. Furthermore, the cost of creating the AI is bound to go up too.

Note that none of those reasons justifies leaving an AI system vulnerable to adversarial attacks. Those that are in the know would say the famous line of either pay me now or pay me later comes into play in this instance. You can skirt past the cybersecurity portions to get an AI system into production sooner, but the chances are that it will then suffer an adversarial attack. A cost-benefit analysis and ROI (return on investment) needs to be properly assessed as to whether the upfront costs and the benefits thereof outweigh the costs of repairing and dealing with cybersecurity intrusions further down the pike.

There is no free lunch when it comes to making ML/DL that is well-protected against adversarial attacks.

That being said, you don't necessarily need to move heaven and earth to be moderately protected against those evildoing tricks. Savvy specialists that are versed in cybersecurity protections can pretty much sit side-by-side with the AI crews and dovetail the security into the AI as it is being devised. There is also the assumption that a well-versed AI builder can readily use AI-constructing techniques and technologies that simultaneously aid their AI building and seamlessly encompass adversarial attack protections. To adequately do so, they usually need to know about the nature of adversarial attacks and how best to blunt or mitigate them. This is something only gradually becoming regularly instituted as part of devising AI systems.

A twist of sorts is that more and more people are getting into the arena of developing ML/DL applications. Regrettably, some of those people are not versed in AI per se, and neither are they versed in cybersecurity. The idea overall is that perhaps by making the ability to craft AI systems with ML/DL widely available to all we are aiming to democratize AI. That sounds good, but there are downsides to this popular exhortation, see my analysis and coverage at the link here.

Speaking of twists, I will momentarily get to the biggest twist of them all, namely, I am going to shock you with a recently emerging notion that some find sensible and others believe is reprehensible. I'll give you a taste of where I am heading on this heated and altogether controversial matter.

Are you ready?

There is a movement toward using adversarial attacks as a means to disrupt or fool AI systems that are being used by wrongdoers.

Let me explain.

So far, I have implied that AI is seemingly always being used in the most innocent and positive of ways and that only miscreants would wish to confound the AI via the use of adversarial attacks. But keep in mind that bad people can readily devise AI and use that AI for doing bad things.

You know how it is, what's good for the goose is good for the gander.

Criminals and cybercrooks are eagerly wising up to building and using AI ML/DL to carry out untoward acts. When you come in contact with an AI system, you might not have any means of knowing whether it is an AI For Good versus an AI For Bad type of system. Be on the watch! Just because AI is being deployed someplace does not somehow guarantee that the AI was crafted by well-intended builders. The AI could be deliberately devised for foul purposes.

Here then is the million-dollar question.

Should we be okay with using adversarial attacks on purportedly AI For Bad systems?

I'm sure that your first thought is that we ought to indeed be willing to fight fire with fire. If AI For Good systems can be shaken up via adversarial attacks, we can use those same evildoing adversarial attacks to shake up those atrocious AI For Bad systems. We can rightfully turn the attacking capabilities into an act of goodness. Fight evil using the appalling trickery of evil. The net result would seem to be an outcome of good.

Not everyone agrees with that sentiment.

From an AI Ethics perspective, there is a lot of handwringing going on about this meaty topic. Some would argue that by leveraging adversarial attacks, even when the intent is for the good, you are perpetuating the use of adversarial attacks all told. You are basically saying that it is okay to launch and promulgate adversarial attacks. Shame on you, they exclaim. We ought to be stamping out evil rather than encouraging or expanding upon evil (even if the evil is ostensibly aiming to offset evil and carry out the work of the good).

Those against the use of adversarial attacks would also argue that by keeping adversarial attacks in the game, you are merely going to step into quicksand. More and stronger adversarial attacks will be devised under the guise of attacking the AI For Bad systems. That seems like a tremendously noble pursuit. The problem is that the evildoers will undoubtedly also grab hold of those emboldened and super-duper adversarial attacks and aim them squarely at the AI For Good.

You are blindly promoting the cat-and-mouse gambit. We might be shooting ourselves in the foot.

A retort to this position is that there are no practical means of stamping out adversarial attacks. No matter whether you want them to exist or not, the evildoers are going to make sure they do persist. In fact, the evildoers are probably going to be making the adversarial attacks more resilient and potent, doing so to overcome whatever cyber protections are put in place to block them. Thus, a proverbial head-in-the-sand approach to dreamily pretending that adversarial attacks will simply slip quietly away into the night is pure nonsense.

You could contend that adversarial attacks against AI are a double-edged sword. AI researchers have noted this quandary, as stated by these authors in a telling article in the AI and Ethics journal: "Sadly, AI solutions have already been utilized for various violations and theft, even receiving the name AI-Crime (AIC). This poses a challenge: are cybersecurity experts thus justified to attack malicious AI algorithms, methods and systems as well, to stop them? Would that be fair and ethical? Furthermore, AI and machine learning algorithms are prone to be fooled or misled by the so-called adversarial attacks. However, adversarial attacks could be used by cybersecurity experts to stop the criminals using AI, and tamper with their systems. The paper argues that this kind of attacks could be named Ethical Adversarial Attacks (EAA), and if used fairly, within the regulations and legal frameworks, they would prove to be a valuable aid in the fight against cybercrime" (article by Michał Choraś and Michał Woźniak, "The Double-Edged Sword of AI: Ethical Adversarial Attacks to Counter Artificial Intelligence for Crime").

I'd ask you to mull this topic over and render a vote in your mind.

Is it unethical to use AI adversarial attacks against AI For Bad, or can we construe this as an entirely unapologetic Ethical AI practice?

You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI. Let's take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we can set the stage by looking at some examples of adversarial attacks to establish what I mean when I speak of Machine Learning and Deep Learning.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I've discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI doing so to undercut the AI For Bad and simultaneously heralding and promoting the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).

In a moment, I'll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn't as yet a singular list of universal appeal and concurrence. That's the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence we are finding our way toward a general commonality of what AI Ethics consists of.

First, let's cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I've covered in-depth at the link here, these are their identified six primary AI ethics principles:

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I've covered in-depth at the link here, these are their six primary AI ethics principles:

I've also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled The Global Landscape Of AI Ethics Guidelines (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy overall to do some handwaving about what AI Ethics precepts are and how they should be generally observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that only coders or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts.

Let's also make sure we are on the same page about the nature of today's AI.

There isn't any AI today that is sentient. We don't have this. We don't know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let's keep things more down to earth and consider today's computational non-sentient AI.

Realize that today's AI is not able to think in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn't any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the old or historical data are applied to render a current decision.
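To make that pattern-matching framing concrete, here is a deliberately tiny sketch (hypothetical data and feature names, not any specific ML/DL system): a nearest-neighbor style matcher that memorizes historical decisions and reuses the closest past pattern when deciding on new data.

```python
# Toy sketch of computational pattern matching (hypothetical data, not any
# specific ML/DL system): "training" memorizes historical cases, and new
# inputs are decided by reusing the most similar past case.

def train(history):
    return list(history)  # memorize (features, decision) pairs

def predict(model, features):
    def distance(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    closest = min(model, key=lambda case: distance(case[0], features))
    return closest[1]  # reuse the decision made on the most similar old data

# Hypothetical historical loan decisions: (income_score, credit_score) -> approved?
history = [((0.9, 0.8), True), ((0.2, 0.3), False),
           ((0.7, 0.9), True), ((0.1, 0.2), False)]
model = train(history)

print(predict(model, (0.8, 0.85)))  # resembles past approvals -> True
```

Real ML/DL fits mathematical functions rather than memorizing individual cases, but the essence is the same: new data is judged by its resemblance to the patterns found in the old data.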

I think you can guess where this is heading. If the humans who have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern-matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in, garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.
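The biases-in point can be made concrete with a simple check: if historical approval rates differ sharply across groups, a model that faithfully mimics that history inherits the disparity. A minimal sketch with entirely hypothetical data:

```python
# Toy sketch of "biases-in, biases-out": if the historical decisions a model
# learns from were skewed, the learned patterns carry the skew forward.
# Group labels and decisions below are entirely hypothetical.

def approval_rate(decisions, group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Hypothetical historical decisions: (group, was_approved)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", False), ("B", False), ("B", True)]

rate_a = approval_rate(decisions, "A")
rate_b = approval_rate(decisions, "B")
print(rate_a, rate_b)   # group A approved twice as often as group B
print(rate_b / rate_a)  # disparate-impact ratio well below 1.0
```

Auditing for this kind of disparity before and after training is one of the more tractable bias tests, though it only catches what you think to measure.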

Not good.

I trust that you can readily see how adversarial attacks fit into these AI Ethics matters. Evildoers are undoubtedly going to use adversarial attacks against ML/DL and other AI that is supposed to be doing AI For Good. Meanwhile, those evildoers are indubitably going to be devising AI For Bad that they foist upon us all. To try and fight against those AI For Bad systems, we could arm ourselves with adversarial attacks. The question is whether we are doing more good or more harm by leveraging and continuing the advent of adversarial attacks.

Time will tell.

One vexing issue is that there is a myriad of adversarial attacks that can be used against AI ML/DL. You might say there are more than you can shake a stick at. Trying to devise protective cybersecurity measures to negate all of the various possible attacks is somewhat problematic. Just when you might think you've done a great job of dealing with one type of adversarial attack, your AI might get blindsided by a different variant. A determined evildoer is likely to toss all manner of adversarial attacks at your AI, hoping that at least one sticks. Of course, if we are using adversarial attacks against AI For Bad, we too would take the same advantageous scattergun approach.

Some of the most popular types of adversarial attacks include:

At this juncture of this weighty discussion, I'd bet that you are desirous of some illustrative examples that might showcase the nature and scope of adversarial attacks against AI and particularly aimed at Machine Learning and Deep Learning. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here's then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the nature of adversarial attacks against AI, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn't a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn't a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I'd like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn't any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don't yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different than driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that's been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And Adversarial Attacks Against AI

For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in todays AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to todays AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won't natively somehow know about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let's dive into the myriad aspects that come into play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn't do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

As earlier mentioned, some of the most popular types of adversarial attacks include:

We can showcase the nature of each such adversarial attack and do so in the context of AI-based self-driving cars.

Adversarial Falsification Attacks

Consider the use of adversarial falsifications.

There are generally two such types: (1) false-positive attacks, and (2) false-negative attacks. In the false-positive attack, the emphasis is on presenting to the AI a so-called negative sample that is then incorrectly classified by the ML/DL as a positive one. The jargon for this is that it is a Type I error (this is reminiscent perhaps of your days of taking a statistics class in college). In contrast, the false-negative attack entails presenting a positive sample that the ML/DL incorrectly classifies as a negative instance, known as a Type II error.
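In code, the Type I / Type II vocabulary amounts to counting the two kinds of misclassification. A small sketch with hypothetical truth labels and ML/DL outputs:

```python
# Toy sketch of the two error types the attacks aim to induce. Truth says
# whether a Stop sign is actually present; prediction is what the ML/DL
# reported. All data here is hypothetical.

def confusion_counts(truths, predictions):
    false_positives = sum(1 for t, p in zip(truths, predictions) if not t and p)  # Type I
    false_negatives = sum(1 for t, p in zip(truths, predictions) if t and not p)  # Type II
    return false_positives, false_negatives

truths      = [True, True, False, False, True]
predictions = [True, False, True, False, True]

print(confusion_counts(truths, predictions))  # (1, 1): one of each error type
```

A false-positive attack tries to drive the first count up (phantom detections); a false-negative attack tries to drive the second count up (missed detections).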

Suppose that we had trained an AI driving system to detect Stop signs. We used an ML/DL that we had trained beforehand with thousands of images that contained Stop signs. The idea is that we would be using video cameras on the self-driving car to collect video and images of the roadway scene surrounding the autonomous vehicle during a driving journey. As the digital imagery real-time streams into an onboard computer, the ML/DL scans the digital data to detect any indication of a nearby Stop sign. The detection of a Stop sign is obviously crucial for the AI driving system. If a Stop sign is detected by the ML/DL, this is conveyed to the AI driving system and the AI would need to ascertain a suitable means to use the driving controls to bring the self-driving car to a proper and safe stop.

Humans seem to readily be able to detect Stop signs, at least most of the time. Our human perception of such signs is keenly honed by our seemingly innate cognitive pattern-matching capacities. All we need to do is learn what a Stop sign looks like and we take things from there. A toddler learns soon enough that a Stop sign is typically red in color, contains the word STOP in large letters, has a distinctive octagonal shape, usually is posted adjacent to the roadway and resides at about a person's height, and so on.

Imagine an evildoer that wants to make trouble for self-driving cars.

In a false-positive adversarial attack, the wrongdoer would try to trick the ML/DL into computationally calculating that a Stop sign exists even when there isn't a Stop sign present. Maybe the wrongdoer puts up a red sign along a roadway that looks generally similar to a Stop sign but lacks the word STOP on it. A human would likely realize that this is merely a red sign and not a driving directive. The ML/DL might, though, calculate that the sign sufficiently resembles a Stop sign to the degree that the AI ought to consider it an actual Stop sign.

You might be tempted to think that this is not much of an adversarial attack and that it seems rather innocuous. Well, suppose that you are driving a car and a self-driving car ahead of you suddenly, and seemingly without any basis for doing so, comes to an abrupt stop (due to having misconstrued a red sign near the roadway as being a Stop sign). You might ram into that self-driving car. It could be that the AI was fooled into computationally calculating that a non-Stop sign was a Stop sign, thus committing a false-positive error. You get injured, the passengers in the self-driving car get injured, and perhaps even pedestrians get injured by this dreadful false-positive adversarial attack.
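The mechanics of such a false-positive trick can be sketched with a toy linear "Stop sign" scorer and a fast-gradient-sign-style perturbation (the weights and inputs below are hypothetical; real perception stacks are vastly more complex):

```python
import numpy as np

# Toy sketch of an FGSM-style false-positive attack (hypothetical weights and
# features, not any real perception system). A positive score means the
# classifier reports "Stop sign".

w = np.array([0.5, -0.3, 0.8, -0.6])   # toy linear classifier weights
x = np.array([0.1, 0.4, -0.2, 0.3])    # toy input: correctly scored as NOT a Stop sign

def score(v):
    return float(w @ v)

# For a linear scorer, the gradient of the score with respect to the input is
# simply w, so the fast-gradient-sign step adds eps * sign(w) to each feature.
eps = 0.25
x_adv = x + eps * np.sign(w)

print(score(x))      # negative: not a Stop sign
print(score(x_adv))  # positive: the small perturbation triggers a false positive
```

The attacker never needs large changes: each feature moves by only eps, yet every nudge pushes the score in the same direction, which is precisely why small, carefully aligned perturbations are so effective against pattern-matching models.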

A false-negative adversarial attack is somewhat akin to the preceding depiction, though based on tricking the ML/DL into misclassifying in the other direction, as it were. Imagine that a Stop sign is sitting next to the roadway and for all usual visual reasons seems to be a Stop sign. Humans accept that this is indeed a valid Stop sign.


Australian Institute for Machine Learning (AIML …

News

14 Apr

AI for space research delivers back-to-back success in global satellite challenge

South Australia's leadership in space innovation has been recognised, with an AIML-led team securing first place in a global AI competition organised by the European Space Agency.

12 Apr

Tech and defence experts call to build AI Australia

Australia must commit to building its sovereign AI research and innovation capability, or risk being left behind as other countries race to pursue their ambitious AI strategies.

11 Feb

Meet the amazing women training AI machines

For International Day of Women and Girls in Science, meet some of the women at AIML who are building great new things and leading the way in cutting-edge machine learning technology.

16 Dec

Machine learning students say cheers with AI beers

How do you build a neural network that can learn how to make beer? We'll show you.

08 Dec

AI + industry collaborations bring award-winning success

South Australia's capacity to lead innovation in AI and machine learning has been recognised at the 2021 SA Science and Innovation Excellence Awards, with an AIML team winning the category of Excellence in Science and Industry Collaboration.

24 Nov

New centre boosts AIML's advanced machine learning research and innovation

Australia's advanced machine learning capability has received a boost, with a new $20m research and innovation initiative now underway at AIML.

More here:
Australian Institute for Machine Learning (AIML ...


AI Dynamics Will Employ Machine Learning to Triage TB Patients More Accurately, Quickly, Simply and Inexpensively Using Cough Sound Data, Bringing…

Selected by QB3 and UCSF for the R2D2 TB Network's Scale Up Your TB Diagnostic Solution Program

BELLEVUE, Wash., April 26, 2022 (GLOBE NEWSWIRE) -- AI Dynamics, an organization founded on the belief that everyone should have access to the power of artificial intelligence (AI) to change the world, has been selected for the Rapid Research in Diagnostics Development for TB Network (R2D2 TB Network) Scale Up Your TB Diagnostic Solution Program, hosted by QB3 and the UCSF Rosenman Institute. With 1.5 million deaths reported each year, tuberculosis (TB) is the worldwide leading cause of death from a single infectious disease agent. The goal of the program is to harness machine learning technology for triaging TB using simple and affordable tests that can be performed on easy-to-collect samples such as cough sounds.

Currently, two weeks of cough sound data is widely used to determine who requires costly confirmatory testing, which delays the initiation of treatment. AI Dynamics will build a proof-of-concept machine learning model to triage TB patients more accurately, quickly, simply and inexpensively using cough sounds, relieving patients from paying for unnecessary molecular and culture TB tests. Because TB is prevalent in under-resourced and remote locations, access to affordable early detection options is necessary to prevent disease transmission and deaths in those areas.

"At the core of AI Dynamics' mission is providing equal access to the power of AI to everyone, and we are committed to working with like-minded companies that recognize the positive impact innovative technology can have on the world," said Rajeev Dutt, Founder and CEO of AI Dynamics. "The collaboration and accessible datasets that the R2D2 TB Network provides help to facilitate life-changing diagnostics for the most vulnerable populations."

The R2D2 TB Network offers a transparent and partner-engaged process for the identification, evaluation and advancement of promising TB diagnostics by providing experts and data and facilitating rigorous clinical study evaluation. AI Dynamics will build and validate a model using cough sounds collected from sites worldwide through the R2D2 TB Network.
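As a purely illustrative sketch (not AI Dynamics' actual NeoPulse pipeline, whose details are not public here), an audio-triage model generally follows the shape waveform -> spectral features -> binary decision. The synthetic "coughs", the band count, and the nearest-centroid rule below are all assumptions.

```python
import numpy as np

# Illustrative sketch only -- NOT AI Dynamics' pipeline. Generic shape of
# an audio-triage model: waveform -> spectral features -> binary decision.
def spectral_band_energies(waveform, n_bands=8):
    """Log energy in n_bands equal-width frequency bands."""
    spectrum = np.abs(np.fft.rfft(waveform)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log([band.sum() + 1e-12 for band in bands])

rng = np.random.default_rng(1)
sr = 8000                                  # assumed sample rate, 1-second clips
t = np.arange(sr) / sr
# Synthetic stand-ins: "positive" coughs carry more low-frequency energy.
pos = [np.sin(2 * np.pi * 100 * t) + 0.1 * rng.normal(size=sr) for _ in range(5)]
neg = [np.sin(2 * np.pi * 900 * t) + 0.1 * rng.normal(size=sr) for _ in range(5)]

pos_centroid = np.mean([spectral_band_energies(c) for c in pos], axis=0)
neg_centroid = np.mean([spectral_band_energies(c) for c in neg], axis=0)

def triage(waveform):
    """Nearest-centroid rule: True = refer for confirmatory testing."""
    f = spectral_band_energies(waveform)
    return np.linalg.norm(f - pos_centroid) < np.linalg.norm(f - neg_centroid)

test_clip = np.sin(2 * np.pi * 100 * t) + 0.1 * rng.normal(size=sr)
assert triage(test_clip)   # low-frequency clip lands near the positive centroid
```

A production system would replace the hand-rolled band energies with richer features (e.g., mel-frequency cepstra) and the centroid rule with a trained classifier validated on clinical labels.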

About AI Dynamics:

AI Dynamics aims to make artificial intelligence (AI) accessible to organizations of all sizes. The company's NeoPulse Framework is an intuitive development and management platform for AI, which enables companies to develop and implement deep neural networks and other machine learning models that can improve key performance metrics. The company's team brings decades of experience in the fields of machine learning and artificial intelligence from leading companies and research organizations. For more information, please visit aidynamics.com.

About The R2D2 TB Network:

The Rapid Research in Diagnostics Development for TB Network (R2D2 TB Network) brings together various TB experts with highly experienced clinical study sites in 10 countries. For further information, please visit their website at https://www.r2d2tbnetwork.org/.

Media Contact:

Justine Goodiel
UPRAISE Marketing + PR for AI Dynamics
aidynamics@upraisepr.com

Originally posted here:
AI Dynamics Will Employ Machine Learning to Triage TB Patients More Accurately, Quickly, Simply and Inexpensively Using Cough Sound Data, Bringing...


Politics, Machine Learning, and Zoom Conferences in a Pandemic: A Conversation with an Undergraduate Researcher – Caltech

In every election, after the polls close and the votes are counted, there comes a time for reflection. Pundits appear on cable news to offer theories, columnists pen op-eds with warnings and advice for the winners and losers, and parties conduct postmortems.

The 2020 U.S. presidential election in which Donald Trump lost to Joe Biden was no exception.

For Caltech undergrad Sreemanti Dey, the election offered a chance to do her own sort of reflection. Dey, an undergrad majoring in computer science, has a particular interest in using computers to better understand politics. Working with Michael Alvarez, professor of political and computational social science, Dey used machine learning and data collected during the 2020 election to find out what actually motivated people to vote for one presidential candidate over another.

In December, Dey presented her work on the topic at the fourth annual International Conference on Applied Machine Learning and Data Analytics, which was held remotely; the organizers recognized her paper as the best at the conference.

We recently chatted with Dey and Alvarez, who is co-chair of the Caltech-MIT Voting Project, about their research, what machine learning can offer to political scientists, and what it is like for undergrads doing research at Caltech.

Sreemanti Dey: I think that how elections are run has become a really salient issue in the past couple of years. Politics is in the forefront of people's minds because things have gotten so, I guess, strange and chaotic recently. That, along with a lot of factors in 2020, made people care a lot more about voting. That makes me think it's really important to study how elections work and how people choose candidates in general.

Sreemanti: I've learned from Mike that a lot of social science studies are deductive in nature. So, you pick a hypothesis and then you pick the data that would best help you understand the hypothesis that you've chosen. We wanted to take a more open-ended approach and see what the data itself told us. And, of course, that's precisely what machine learning is good for.

In this particular case, it was a matter of working with a large amount of data that you can't filter through yourself without introducing a lot of bias. And that could be just you choosing to focus on the wrong issues. Machine learning and the model that we used are a good way to reduce the amount of information you're looking at without bias.

Basically it's a way of reducing high-dimensional data sets to the most important factors in the data set. So it goes through a couple steps. It first groups all the features of the data into these modules so that the features within a module are very correlated with each other, but there is not much correlation between modules. Then, since each module represents the same type of features, it reduces how many features are in each module. And then at the very end, it combines all the modules together and then takes one last pass to see if it can be reduced by anything else.
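The module-then-reduce procedure described above can be sketched roughly as follows. This is a simplified, correlation-based stand-in for fuzzy forests (the actual method builds WGCNA-style modules and runs recursive feature elimination with random forests inside each one), and every threshold and function name here is an illustrative assumption.

```python
import numpy as np

# Simplified stand-in for fuzzy forests: group correlated features into
# modules, screen within each module, then make a final combined pass.
def group_into_modules(X, threshold=0.7):
    """Greedily group features whose |correlation| with a module's
    seed feature exceeds the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    unassigned = list(range(X.shape[1]))
    modules = []
    while unassigned:
        seed = unassigned.pop(0)
        members = [seed] + [j for j in unassigned if corr[seed, j] > threshold]
        unassigned = [j for j in unassigned if j not in members]
        modules.append(members)
    return modules

def screen(X, y, features, keep):
    """Keep the features most correlated (in absolute value) with y."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in features]
    order = np.argsort(scores)[::-1]
    return [features[i] for i in order[:keep]]

def module_wise_select(X, y, keep_per_module=2, keep_final=3):
    survivors = []
    for m in group_into_modules(X):              # within-module reduction
        survivors += screen(X, y, m, keep_per_module)
    return screen(X, y, survivors, keep_final)   # final combined pass

rng = np.random.default_rng(2)
n = 500
signal = rng.normal(size=n)
# Features 0-2: noisy copies of the signal (one correlated module);
# features 3-9: pure noise.
X = np.column_stack([signal + 0.3 * rng.normal(size=n) for _ in range(3)]
                    + [rng.normal(size=n) for _ in range(7)])
selected = module_wise_select(X, signal)
assert set(selected[:2]).issubset({0, 1, 2})   # signal features rank first
```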

Mike: This technique was developed by Christina Ramirez (MS '96, PhD '99), a PhD graduate of our program now at UCLA. Christina is someone who I've collaborated with quite a bit. Sreemanti and I were meeting pretty regularly with Christina and getting some advice from her along the way about this project and some others that we're thinking about.

Sreemanti: I think we got pretty much what we expected, except for what the most partisan-coded issues are. Those I found a little bit surprising. The most partisan questions turned out to be about filling the Supreme Court seats. I thought that it was interesting.

Sreemanti: It's really incredible. I find it astonishing that a person like Professor Alvarez has the time to focus so much on the undergraduates in lab. I did research in high school, and it was an extremely competitive environment trying to get attention from professors or even your mentor.

It's a really nice feature of Caltech that professors are very involved with what their undergraduates are doing. I would say it's a really incredible opportunity.

Mike: I and most of my colleagues work really hard to involve the Caltech undergraduates in a lot of the research that we do. A lot of that happens in the SURF [Summer Undergraduate Research Fellowship] program in the summers. But it also happens throughout the course of the academic year.

What's unusual a little bit here is that undergraduate students typically take on smaller projects. They typically work on things for a quarter or a summer. And while they do a good job on them, they don't usually reach the point where they produce something that's potentially publication quality.

Sreemanti started this at the beginning of her freshman year and we worked on it through her entire freshman year. That gave her the opportunity to really learn the tools, read the political science literature, read the machine learning literature, and take this to a point where at the end of the year, she had produced something that was of publication quality.

Sreemanti: It was a little bit strange, first of all, because of the time zone issue. This conference was in a completely different time zone, so I ended up waking up at 4 a.m. for it. And then I had an audio glitch halfway through that I had to fix, so I had some very typical Zoom-era problems and all that.

Mike: This is a pandemic-era story with how we were all working to cope and trying to maintain the educational experience that we want our undergraduates to have. We were all trying to make sure that they had the experience that they deserved as a Caltech undergraduate and trying to make sure they made it through the freshman year.

We have the most amazing students imaginable, and to be able to help them understand what the research experience is like is just an amazing opportunity. Working with students like Sreemanti is the sort of thing that makes being a Caltech faculty member very special. And it's a large part of the reason why people like myself like to be professors at Caltech.

Sreemanti: I think I would want to continue studying how people make their choices about candidates, but maybe in a slightly different way with different data sets. Right now, from my other projects, I think I'm learning how to not rely on surveys and rely on more organic data, for example, from social media. I would be interested in trying to find a way to study people's candidate choices from their more organic interactions with other people.

Sreemanti's paper, titled "Fuzzy Forests for Feature Selection in High-Dimensional Survey Data: An Application to the 2020 U.S. Presidential Election," was presented in December at the fourth annual International Conference on Applied Machine Learning and Data Analytics, where it won the best paper award.

Originally posted here:
Politics, Machine Learning, and Zoom Conferences in a Pandemic: A Conversation with an Undergraduate Researcher - Caltech


Mperativ Adds New Vice President of Applied Data Science, Machine Learning and AI to Advance Vision for AI in Revenue Marketing – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Mperativ, the Revenue Marketing Platform that aligns marketing with sales, customer success, and finance on the cause and effect relationships between marketing activities and revenue outcomes, today announced the appointment of Nohyun Myung as Vice President of Applied Data Science, Machine Learning and AI. In this new role, Nohyun will lead the development of new Mperativ platform capabilities to help marketers realize the value of AI predictions and seamlessly connect data across the customer journey without having to build a data science practice.

"Nohyun has unique and important experience in data science, analytics and AI that will be critical to the growth of the Mperativ Data Science and AI practices," said Jim McHugh, CEO and co-founder of Mperativ. "He not only brings the knowledge and skill set to help accelerate the evolution of the Mperativ platform, but his involvement in the technical side of sales organizations will give us a unique perspective on how AI and forecasting can be used to help address the challenges go-to-market teams face."

Nohyun brings over 20 years of experience as a data and analytics practitioner. Prior to Mperativ he built and scaled high-functioning, multi-disciplinary teams in his roles as Vice President of Global Solution Engineering & Customer Success at OmniSci and as Vice President of Global Solution Engineering at Kinetica. He has worked closely with industry leaders across the telco, utilities, automotive and government verticals to deliver enterprise-grade AI and advanced analytics capabilities, pioneering work that spans autonomous vehicle deployments, telecommunications network optimization, object detection from satellite imagery, and global-scale smart infrastructure initiatives.

"Throughout my career I've become acutely familiar with the immense challenges that go-to-market teams face when trying to get a comprehensive and accurate picture of the customer journey," said Nohyun. "As the world sprints towards becoming more prescriptive and predictive, having operational tools and platforms that can augment the business without having to build them in-house will become essential across B2B organizations. I look forward to working with the talented team at Mperativ to bring the true value of AI to marketing leaders so they can better execute engagement strategies that produce their desired revenue outcomes."

About Mperativ

Mperativ provides the first strategic platform to align marketing with sales, customer success, and finance on the cause and effect relationships between marketing activities and revenue outcomes. Despite pouring significant effort into custom analytics, marketers are struggling to convey the value of their initiatives. By recentering marketing metrics around revenue, Mperativ makes it possible to uncover data narratives and extract trends across the entire customer journey, with beautifully designed interactive visualizations that demonstrate the effectiveness of marketing in a new revenue-centric language. As a serverless data warehouse, Mperativ eliminates the complexity of surfacing compelling marketing insights. Connect marketing strategy to revenue results with Mperativ. To learn more, visit us at http://www.mperativ.io or contact us at info@mperativ.io.

Read this article:
Mperativ Adds New Vice President of Applied Data Science, Machine Learning and AI to Advance Vision for AI in Revenue Marketing - Business Wire


Five Machine Learning Project Pitfalls to Avoid in 2022 – EnterpriseTalk

Machine Learning (ML) systems are complex, and this complexity increases the chances of failure as well. Knowing what may go wrong is critical for developing robust machine learning systems.

Machine Learning (ML) initiatives fail 85% of the time, according to Gartner. Worse yet, according to the research firm, this tendency will continue until the end of 2022.

There are a number of foreseeable reasons why machine learning initiatives fail, many of which may be avoided with the right knowledge and diligence. Here are some of the most common challenges that machine learning projects face, as well as ways to prevent them.

All AI/ML endeavors require data, which is needed for testing, training, and operating models. However, acquiring such data is a stumbling block because most organizational data is dispersed among on-premises and cloud data repositories, each with its own set of compliance and quality control standards, making data consolidation and analysis that much more complex.

Another stumbling block is data silos. When teams use multiple systems to store and handle data sets, data silos (collections of data controlled by one team but not completely available to others) can form. That might, however, be a result of a siloed organizational structure.

In reality, no one knows everything. For the successful adoption and implementation of ML in enterprise projects, it is critical to have at least one ML expert on the team who can do the foundational work. Being overly confident without the right skill sets on the team will only add to the chances of failure.

Organizations are nearly drowning in large volumes of observational data, thanks to developments in technology such as integrated smart devices and telematics, relatively inexpensive and readily available big data storage, and a desire to incorporate more data science into business decisions. However, a high level of data availability might result in observational-data dumpster diving.


When adopting a strong tool like machine learning, it pays to be deliberate about what the organization is searching for. Businesses should take advantage of their large observational data resources to uncover potentially valuable insights, but evaluate those hypotheses through A/B or multivariate testing to distinguish reality from fiction.
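For that A/B-testing step, a minimal two-proportion z-test is the usual workhorse; the conversion counts below are invented purely for illustration.

```python
from math import erf, sqrt

# Minimal two-proportion z-test, the workhorse behind a simple A/B test:
# did variant B's conversion rate differ from A's by more than chance?
def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)             # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided, normal CDF
    return z, p_value

# Made-up counts: 120/1000 conversions for A vs 165/1000 for B.
z, p = two_proportion_z_test(conv_a=120, n_a=1000, conv_b=165, n_b=1000)
assert p < 0.05   # with these invented counts, the lift clears chance
```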

The ability to evaluate the overall performance of a trained model is crucial in machine learning. It's critical to assess how well the model performs against both training and test data. This assessment will be used to choose which model to use and which hyper-parameters to utilize, and to decide if the model is ready for production use.

It is vital to select the right assessment measures for the job at hand when evaluating model performance.
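A minimal sketch of that train-versus-test comparison, using ordinary least squares on synthetic data (all names and numbers here are illustrative): a large gap between the two errors is the classic overfitting signal.

```python
import numpy as np

# Sketch of the train-vs-test comparison: fit a model, then check that
# the held-out error tracks the training error.
rng = np.random.default_rng(3)
n, d = 200, 5
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = X @ true_w + 0.5 * rng.normal(size=n)   # noise sd of 0.5 is the error floor

split = int(0.8 * n)                        # 80/20 train/test split
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

w_hat = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]   # least-squares fit

def rmse(X, y, w):
    return float(np.sqrt(np.mean((X @ w - y) ** 2)))

train_rmse = rmse(X_tr, y_tr, w_hat)
test_rmse = rmse(X_te, y_te, w_hat)
assert abs(train_rmse - test_rmse) < 0.5    # errors agree: no overfitting here
```

RMSE is the right measure for this regression toy; a classification task would call for accuracy, precision/recall, or AUC instead, which is exactly the "right assessment measures for the job" point above.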

Machine learning has become more accessible in various ways. There are far more machine learning tools available today than there were even a few years ago, and data science knowledge has multiplied.

Having a data science team work on an AI and ML project in isolation, on the other hand, might drive the organization down a difficult path. The team may come across unanticipated difficulties unless they have prior familiarity with them. Unfortunately, they can also get deep into a project before recognizing they are not adequately prepared.

It's imperative to make sure that domain specialists, such as process engineers and plant operators, are not left out of the process, because they are familiar with its complexity and the context of the relevant data.


Originally posted here:
Five Machine Learning Project Pitfalls to Avoid in 2022 - EnterpriseTalk
