Category Archives: Machine Learning

Machine Learning Algorithm From RaySearch Enhances Workflow at Swedish Radiation Therapy Clinic – PRNewswire

STOCKHOLM, June 29, 2020 /PRNewswire/ -- RaySearch Laboratories AB (publ) has announced that by using a machine learning algorithm in the treatment planning system RayStation*, Mälar Hospital in Eskilstuna, Sweden, has made significant time savings in dose planning for radiation therapy. The algorithm in question is a deep learning method for contouring the patients' organs. The decision to implement this advanced technology was made to save time, thereby alleviating the prevailing shortage of doctors specialized in radiation therapy at the hospital, which was also exacerbated by the COVID-19 situation.

When creating a plan for radiation treatment of cancer, it is critical to carefully define the tumor volume. In order to avoid unwanted side-effects, it is also necessary to identify different organs in the tumor's environment, so-called organs at risk. This process is called contouring and is usually performed using manual or semi-automatic tools.

The deep learning contouring feature in RayStation uses machine learning models that have been trained and evaluated on previous clinical cases to create contours of the patient's organs automatically and quickly. Healthcare staff can review and, if necessary, adjust the contours. The final result is reached much faster than with other methods.
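RaySearch has not published the internals of this feature, but the general shape of model-assisted contouring is easy to sketch. The following is a minimal illustration, assuming a trained segmentation model; the model interface and organ names are hypothetical placeholders, not RayStation code:

```python
# Illustrative sketch of automatic organ contouring, assuming a trained
# segmentation model that maps a CT volume to per-voxel organ probabilities.
# The model interface and organ names are hypothetical, not RayStation's.
import numpy as np

def auto_contour(ct_volume: np.ndarray, model,
                 organs=("heart", "left_lung", "right_lung")):
    """Return one binary mask per organ at risk, ready for staff review."""
    probs = model.predict(ct_volume)   # shape: (z, y, x, n_organs)
    labels = probs.argmax(axis=-1)     # hard per-voxel organ assignment
    # Staff review and, if necessary, adjust these masks before planning.
    return {name: labels == i for i, name in enumerate(organs)}
```

The time saving described below comes from this division of labor: the model proposes contours in seconds, and clinicians only review and correct them.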

Andreas Johansson, physicist at Region Sörmland, which runs Mälar Hospital, says: "We used deep learning to contour the first patient on May 26 and the treatment was performed on June 9. From taking 45-60 minutes per patient, the contouring now only takes 10-15 minutes, which means a huge time saving."

Johan Löf, founder and CEO, RaySearch, says: "Mälar Hospital was very quick to implement RayStation in 2015 and now it has shown again how quickly new technology can be adopted and brought into clinical use. The fact that this helps to resolve a situation where hospital resources are unusually strained is of course also very positive."

CONTACT:

For further information, please contact:

Johan Löf, Founder and CEO, RaySearch Laboratories AB (publ)
Telephone: +46 (0)8 510 530 00
[emailprotected]

Peter Thysell, CFO, RaySearch Laboratories AB (publ)
Telephone: +46 (0)70 661 05 59
[emailprotected]

This information was brought to you by Cision http://news.cision.com

https://news.cision.com/raysearch-laboratories/r/machine-learning-algorithm-from-raysearch-enhances-workflow-at-swedish-radiation-therapy-clinic,c3144587

SOURCE RaySearch Laboratories

See original here:
Machine Learning Algorithm From RaySearch Enhances Workflow at Swedish Radiation Therapy Clinic - PRNewswire

Posted in Machine Learning | Comments Off on Machine Learning Algorithm From RaySearch Enhances Workflow at Swedish Radiation Therapy Clinic – PRNewswire

Fake data is great data when it comes to machine learning – Stacey on IoT

It's been a few years since I last wrote about the idea of using synthetic data to train machine learning models. After having three recent discussions on the topic, I figured it's time to revisit the technology, especially as it seems to be gaining ground in mainstream adoption.

Back in 2018, at Microsoft Build, I saw a demonstration of a drone flying over a pipeline as it inspected it for leaks or other damage. Notably, the drone's visual inspection model was trained using both actual data and simulated data. Use of the synthetic data helped teach the machine learning model about outliers and novel conditions it wasn't able to encounter using traditional training. It also allowed Microsoft researchers to train the model more quickly and without the need to embark on as many expensive, data-gathering flights as it would have had to otherwise.

The technology is finally starting to gain ground. In April, a startup called Anyverse raised €3 million ($3.37 million) for its synthetic sensor data, while another startup, AI.Reverie, published a paper about how it used simulated data to train a model to identify planes on airport runways.

After writing that initial story, I heard very little about synthetic data until my conversation earlier this month with Dan Jeavons, chief data scientist at Shell. When I asked him about Shell's machine learning projects, using simulated data was one that he was incredibly excited about because it helps build models that can detect problems that occur only rarely.

"I think it's a really interesting way to get info on the edge cases that we're trying to solve," he said. "Even though we have a lot of data, the big problem that we have is that, actually, we often only had a very few examples of what we're looking for."

In the oil business, corrosion in factories and pipelines is a big challenge, and one that can lead to catastrophic failures. That's why companies are careful about not letting anything corrode to the point where it poses a risk. But that also means the machine learning models can't be trained on real-world examples of corrosion. So Shell uses synthetic data to help.

As Jeavons explained, Shell is also using synthetic data to try and solve the problem of people smoking at gas stations. Shell doesn't have a lot of examples because the cameras don't always catch the smokers; in other cases, they're too far away or aren't facing the camera. So the company is working hard on combining simulated synthetic data with real data to build computer vision models.

"Almost always the things we're interested in are the edge cases rather than the general norm," said Jeavons. "And it's quite easy to detect the edge [deviating] from the standard pattern, but it's quite hard to detect the specific thing that you want."

In the meantime, startup AI.Reverie endeavored to learn more about the accuracy of synthetic data. The paper it published, "RarePlanes: Synthetic Data Takes Flight," lays out how its researchers combined satellite imagery of planes parked at airports, annotated and validated by humans, with synthetic data created by machine.

When using just synthetic data, the model was only about 55% accurate, whereas when it used only real-world data that number jumped to 73%. But by making real-world data 10% of the training sample and using synthetic data for the rest, the model's accuracy came in at 69%.
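As a rough sketch of what such a blend looks like in practice, the helper below samples a training set with a fixed fraction of real examples. It is a generic illustration, not the RarePlanes pipeline:

```python
# Sketch of blending real and synthetic training data at a fixed ratio,
# as in the 10% real / 90% synthetic mix described above.
import random

def blend(real, synthetic, real_fraction=0.10, total=10_000, seed=0):
    """Sample a mixed training set with a fixed fraction of real examples."""
    rng = random.Random(seed)
    n_real = int(total * real_fraction)
    mixed = rng.sample(real, n_real) + rng.sample(synthetic, total - n_real)
    rng.shuffle(mixed)  # interleave so every batch sees both kinds of data
    return mixed
```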

Paul Walborsky, the CEO of AI.Reverie (and the former CEO at GigaOM; in other words, my former boss), says that synthetic data is going to be a big business. Companies using such data need to account for ways that their fake data can skew the model, but if they can do that, they can achieve robust models faster and at a lower cost than if they relied on real-world data.

So even though IoT sensors are throwing off petabytes of data, it would be impossible to annotate all of it and use it for training models. And as Jeavons points out, those petabytes of data may not have the situation you actually want the computer to look for. In other words, expect the wave of synthetic and simulated data to keep on coming.

"We're convinced that, actually, this is going to be the future in terms of making things work well," said Jeavons, "both in the cloud and at the edge for some of these complex use cases."

Read the rest here:
Fake data is great data when it comes to machine learning - Stacey on IoT

Posted in Machine Learning | Comments Off on Fake data is great data when it comes to machine learning – Stacey on IoT

How Does AIOps Integrate AI and Machine Learning into IT Operations? – Analytics Insight

Data is everywhere, growing in variety and velocity, in both structured and unstructured formats. Leveraging this chaotic data, generated at ever-increasing speeds, is often a mammoth task. Even powerful AI and machine learning capabilities lose their accuracy if they don't have the right data to support them. The rise in data complexity makes it challenging for IT operations to get the best from artificial intelligence and ML algorithms for digital transformation.

The secret lies in acknowledging this data and using its explosion as an opportunity to drive intelligence, automation, effectiveness and productivity with artificial intelligence for IT operations (AIOps). In simple words, AIOps refers to the automation of IT operations through artificial intelligence (AI), freeing enterprise IT operations, through inputs of operational data, to achieve the ultimate data automation goals.

AIOps of any enterprise stands firmly on four pillars, collectively referred to as the key dimensions of IT operations monitoring:

Data Selection & Filtering

Modern IT environments create noisy IT data; collating this data and filtering it so that AI and ML models can excel is a tedious task. Selecting the data elements of interest from massive amounts of redundant data often means filtering out up to 99% of it.

Discovering Data Patterns

Unearthing data patterns means collating the filtered data to establish meaningful relationships between the selected data groups for further analysis.

Data Collaboration

Data analysis fosters collaboration among interdisciplinary teams across global enterprises, while preserving valuable data intelligence that can accelerate future synergies within the enterprise.

Solution Automation

This dimension relates to automating responses and remediation, in a bid to deliver more precise solutions at a quicker turnaround time (TAT).

A responsible AIOps platform combines AI, machine learning and big data with a mature understanding of IT operations. It assimilates real-time and historical data from any source to feed cutting-edge AI and ML capabilities. This lets enterprises get a hold of problems before they even happen, by leveraging clustering, anomaly detection, prediction, statistical thresholding, predictive analytics, forecasting, and more.
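Of those capabilities, statistical thresholding is the simplest to illustrate. A minimal sketch, assuming a stream of metric readings, flags values that drift several standard deviations from a rolling baseline; the window size and threshold here are illustrative choices, not a vendor default:

```python
# Minimal statistical-thresholding sketch: flag readings that deviate more
# than n_sigma standard deviations from a rolling baseline of recent values.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(stream, window=60, n_sigma=3.0):
    history = deque(maxlen=window)
    for t, value in enumerate(stream):
        if len(history) >= 10:  # wait until a minimal baseline exists
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > n_sigma * sigma:
                yield t, value  # candidate event for alerting or remediation
        history.append(value)
```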

IT environments have broken out of their silos and now exceed the manual, human scale of operations. Traditional approaches to managing IT become redundant in the dynamic environments governed by technology.

1. The data pipelines that ITOps needs to retain are increasing exponentially, encompassing a larger number of events and alerts. With the introduction of APIs, digital or machine users, mobile applications, and IoT devices, modern enterprises receive higher service ticket volumes, a trend that is becoming too complex for manual reporting and analysis.

2. As organizations walk the digital transformation path, seamless ITOps becomes indispensable. The accessibility of technology has changed user expectations across industries and verticals. This calls for an immediate reaction to IT events, especially when an issue impacts user experience.

3. The introduction of edge computing and cloud infrastructure empowers line-of-business (LOB) functions to build and host their own IT solutions and applications over the cloud, to be accessed anytime, anywhere. This calls for an increase in budgetary allocation and for more computing power, which can be leveraged, to be added from outside core IT.

AIOps bridges the gap between service management, performance management, and automation within the IT ecosystem to accomplish the continuous goal of improving IT operations. AIOps creates a game plan that delivers within the new, accelerated IT environments, identifying patterns in monitoring, service desk, capacity addition and data automation across hybrid on-premises and multi-cloud environments.

About the Author

Kamalika Some is an NCFM Level 1 certified professional with previous professional stints at Axis Bank and ICICI Bank. An MBA (Finance) and PGP Analytics by education, Kamalika is passionate about writing on analytics driving technological change.

More here:
How Does AIOps Integrate AI and Machine Learning into IT Operations? - Analytics Insight

Posted in Machine Learning | Comments Off on How Does AIOps Integrate AI and Machine Learning into IT Operations? – Analytics Insight

What a machine learning tool that turns Obama white can (and can't) tell us about AI bias – The Verge

It's a startling image that illustrates the deep-rooted biases of AI research. Input a low-resolution picture of Barack Obama, the first black president of the United States, into an algorithm designed to generate depixelated faces, and the output is a white man.

It's not just Obama, either. Get the same algorithm to generate high-resolution images of actress Lucy Liu or congresswoman Alexandria Ocasio-Cortez from low-resolution inputs, and the resulting faces look distinctly white. As one popular tweet quoting the Obama example put it: "This image speaks volumes about the dangers of bias in AI."

But what's causing these outputs and what do they really tell us about AI bias?

First, we need to know a little bit about the technology being used here. The program generating these images is an algorithm called PULSE, which uses a technique known as upscaling to process visual data. Upscaling is like the "zoom and enhance" tropes you see in TV and film, but, unlike in Hollywood, real software can't just generate new data from nothing. In order to turn a low-resolution image into a high-resolution one, the software has to fill in the blanks using machine learning.

In the case of PULSE, the algorithm doing this work is StyleGAN, which was created by researchers from NVIDIA. Although you might not have heard of StyleGAN before, you're probably familiar with its work. It's the algorithm responsible for making those eerily realistic human faces that you can see on websites like ThisPersonDoesNotExist.com; faces so realistic they're often used to generate fake social media profiles.

What PULSE does is use StyleGAN to imagine the high-res version of pixelated inputs. It does this not by enhancing the original low-res image, but by generating a completely new high-res face that, when pixelated, looks the same as the one inputted by the user.

This means each depixelated image can be upscaled in a variety of ways, the same way a single set of ingredients makes different dishes. It's also why you can use PULSE to see what Doom guy, or the hero of Wolfenstein 3D, or even the crying emoji look like at high resolution. It's not that the algorithm is finding new detail in the image as in the "zoom and enhance" trope; it's instead inventing new faces that revert to the input data.
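In code terms, the idea can be sketched as an optimization in the generator's latent space: find a latent vector whose generated face, once downscaled, matches the low-res input. This is a simplified outline of the published approach, not the authors' code; `generator` and `downscale` are assumed components, and the real method adds constraints to keep the search on the manifold of realistic faces:

```python
# Simplified sketch of the PULSE idea: search the GAN's latent space for a
# high-res face whose downscaled version matches the low-res input.
import torch
import torch.nn.functional as F

def pulse_search(lr_image, generator, downscale, steps=500, lr=0.1):
    latent = torch.randn(1, 512, requires_grad=True)  # random starting face
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        hr_candidate = generator(latent)              # synthesize a face
        loss = F.mse_loss(downscale(hr_candidate), lr_image)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(latent).detach()  # one plausible face among many
```

Because many latents reproduce the same pixels after downscaling, different starting points, or different search methods, land on different but equally valid faces.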

This sort of work has been theoretically possible for a few years now, but, as is often the case in the AI world, it reached a larger audience when an easy-to-run version of the code was shared online this weekend. That's when the racial disparities started to leap out.

PULSE's creators say the trend is clear: when using the algorithm to scale up pixelated images, the algorithm more often generates faces with Caucasian features.

"It does appear that PULSE is producing white faces much more frequently than faces of people of color," wrote the algorithm's creators on GitHub. "This bias is likely inherited from the dataset StyleGAN was trained on [...] though there could be other factors that we are unaware of."

In other words, because of the data StyleGAN was trained on, when it's trying to come up with a face that looks like the pixelated input image, it defaults to white features.

This problem is extremely common in machine learning, and it's one of the reasons facial recognition algorithms perform worse on non-white and female faces. Data used to train AI is often skewed toward a single demographic, white men, and when a program sees data not in that demographic it performs poorly. Not coincidentally, it's white men who dominate AI research.

But exactly what the Obama example reveals about bias and how the problems it represents might be fixed are complicated questions. Indeed, they're so complicated that this single image has sparked heated disagreement among AI academics, engineers, and researchers.

On a technical level, some experts aren't sure this is even an example of dataset bias. The AI artist Mario Klingemann suggests that the PULSE selection algorithm itself, rather than the data, is to blame. Klingemann notes that he was able to use StyleGAN to generate more non-white outputs from the same pixelated Obama image, as shown below:

"These faces were generated using the same concept and the same StyleGAN model but different search methods to Pulse," says Klingemann, who says we can't really judge an algorithm from just a few samples. "There are probably millions of possible faces that will all reduce to the same pixel pattern and all of them are equally correct," he told The Verge.

(Incidentally, this is also the reason why tools like this are unlikely to be of use for surveillance purposes. The faces created by these processes are imaginary and, as the above examples show, have little relation to the ground truth of the input. However, it's not like huge technical flaws have stopped police from adopting technology in the past.)

But regardless of the cause, the outputs of the algorithm seem biased, something that the researchers didn't notice before the tool became widely accessible. This speaks to a different and more pervasive sort of bias: one that operates on a social level.

Deborah Raji, a researcher in AI accountability, tells The Verge that this sort of bias is all too typical in the AI world. "Given the basic existence of people of color, the negligence of not testing for this situation is astounding, and likely reflects the lack of diversity we continue to see with respect to who gets to build such systems," says Raji. "People of color are not outliers. We're not edge cases authors can just forget."

The fact that some researchers seem keen to only address the data side of the bias problem is what sparked larger arguments about the Obama image. Facebook's chief AI scientist Yann LeCun became a flashpoint for these conversations after tweeting a response to the image saying that "ML systems are biased when data is biased," and adding that this sort of bias is a far more serious problem in a deployed product than in an academic paper. The implication being: let's not worry too much about this particular example.

Many researchers, Raji among them, took issue with LeCun's framing, pointing out that bias in AI is affected by wider social injustices and prejudices, and that simply using "correct" data does not deal with the larger injustices.

Others noted that even from the point of view of a purely technical fix, "fair" datasets can often be anything but. For example, a dataset of faces that accurately reflected the demographics of the UK would be predominantly white because the UK is predominantly white. An algorithm trained on this data would perform better on white faces than non-white faces. In other words, "fair" datasets can still create biased systems. (In a later thread on Twitter, LeCun acknowledged there were multiple causes for AI bias.)

Raji tells The Verge she was also surprised by LeCun's suggestion that researchers should worry about bias less than engineers producing commercial systems, and that this reflected a lack of awareness at the very highest levels of the industry.

"Yann LeCun leads an industry lab known for working on many applied research problems that they regularly seek to productize," says Raji. "I literally cannot understand how someone in that position doesn't acknowledge the role that research has in setting up norms for engineering deployments."

When contacted by The Verge about these comments, LeCun noted that he'd helped set up a number of groups, inside and outside of Facebook, that focus on AI fairness and safety, including the Partnership on AI. "I absolutely never, ever said or even hinted at the fact that research does not play a role in setting up norms," he told The Verge.

Many commercial AI systems, though, are built directly from research data and algorithms without any adjustment for racial or gender disparities. Failing to address the problem of bias at the research stage just perpetuates existing problems.

In this sense, then, the value of the Obama image isn't that it exposes a single flaw in a single algorithm; it's that it communicates, at an intuitive level, the pervasive nature of AI bias. What it hides, however, is that the problem of bias goes far deeper than any dataset or algorithm. It's a pervasive issue that requires much more than technical fixes.

As one researcher, Vidushi Marda, responded on Twitter to the white faces produced by the algorithm: "In case it needed to be said explicitly - This isn't a call for diversity in datasets or improved accuracy in performance - it's a call for a fundamental reconsideration of the institutions and individuals that design, develop, deploy this tech in the first place."

Update, Wednesday, June 24: This piece has been updated to include additional comment from Yann LeCun.

Follow this link:
What a machine learning tool that turns Obama white can (and can't) tell us about AI bias - The Verge

Posted in Machine Learning | Comments Off on What a machine learning tool that turns Obama white can (and can't) tell us about AI bias – The Verge

Eric and Wendy Schmidt back Cambridge University effort to equip researchers with A.I. skills – CNBC

Image: Google Executive Chairman Eric Schmidt (Win McNamee | Getty Images)

Schmidt Futures, the philanthropic foundation set up by billionaires Eric and Wendy Schmidt, is funding a new program at the University of Cambridge that's designed to equip young researchers with machine learning and artificial intelligence skills that have the potential to accelerate their research.

The initiative, known as the Accelerate Program for Scientific Discovery, will initially be aimed at researchers in science, technology, engineering, mathematics and medicine. However, it will eventually be available for those studying arts, humanities and social science.

Some 32 PhD students will receive machine-learning training through the program in the first year, the university said, adding that the number will rise to 160 over five years. The aim is to build a network of machine-learning experts across the university.

"Machine learning and AI are increasingly part of our day-to-day lives, but they aren't being used as effectively as they could be, due in part to major gaps of understanding between different research disciplines," Professor Neil Lawrence, a former Amazon director who will lead the program, said in a statement.

"This program will help us to close these gaps by training physicists, biologists, chemists and other scientists in the latest machine learning techniques, giving them the skills they need."

The scheme will be run by four new early-career specialists, who are in the process of being recruited.

The Schmidt Futures donation will be used partly to pay the salaries of this team, which will work with the university's Department of Computer Science and Technology and external companies.

Guest lectures will be provided by research scientists at DeepMind, the London-headquartered AI research lab that was acquired by Google.

The size of the donation from Schmidt Futures has not been disclosed.

"We are delighted to support this far-reaching program at Cambridge," said Stuart Feldman, chief scientist at Schmidt Futures, in a statement. "We expect it to accelerate the use of new techniques across the broad range of research as well as enhance the AI knowledge of a large number of early-stage researchers at this superb university."

Read more here:
Eric and Wendy Schmidt back Cambridge University effort to equip researchers with A.I. skills - CNBC

Posted in Machine Learning | Comments Off on Eric and Wendy Schmidt back Cambridge University effort to equip researchers with A.I. skills – CNBC

Deliver More Effective Threat Intelligence with Federated Machine Learning – SC Magazine

Cybercriminals never stop innovating. Their increased use of automated and scripted attacks that increase speed and scale makes them more sophisticated and dangerous than ever. And because of the volume, velocity and sophistication of today's global threat landscape, enterprises must respond in real-time and at machine speeds to effectively counter these aggressive attacks. Machine learning and artificial intelligence can help deliver better, more effective threat intelligence.

As we move through 2020, AI has started increasing its capacity to detect attack patterns using a combination of threat intelligence feeds delivered by a variety of external sources, ranging from vendors to industry consortiums, and distributed sensors and learning nodes that gather information about the threats and probes targeting the edges of the networks.

This new form of distributed AI relies on something called federated machine learning. Instead of relying on a single, centralized AI system to process data and initiate a response to threats (like in centralized AI), these regional machine learning nodes will respond to threats autonomously using existing threat intelligence. Just as white blood cells automatically react to an infection, and clotting systems respond to a cut without requiring the brain to initiate those responses, these interconnected systems can see, correlate, track, and prepare for threats as they move through cyberspace by sharing information across the network, enabling local nodes to respond with increasing accuracy and efficiency to events by leveraging continually updated response models.

It's all part of an iterative cycle, where in addition to the passive data collected by local learning nodes, the data gleaned from active responses, including how malware or attackers fight back, will also get shared across the network of local peers. This will let the entire system further refine its ability to identify additional unique characteristics of attack patterns and strategies, and formulate increasingly effective threat responses.
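The federated part of that cycle follows the standard federated-averaging pattern: nodes train on their own local data and share only model weights, which are merged and redistributed. The sketch below is a generic illustration of that pattern, not any vendor's implementation:

```python
# Generic federated-averaging sketch: each node takes a gradient step on
# its private data; only model weights are shared and averaged, so raw
# threat data never leaves the node.
import numpy as np

def local_update(w, X, y, lr=0.01):
    """One gradient step of a linear model on a node's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
    return w - lr * grad

def federated_round(w_global, node_datasets):
    """Merge locally trained weights into an updated global model."""
    local_ws = [local_update(w_global, X, y) for X, y in node_datasets]
    return np.mean(local_ws, axis=0)  # averaged model, then redistributed
```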

There are many encouraging implications for cybersecurity. Security pros will use this system of distributed nodes connected to a central AI brain to detect even the most subtle deviations in normal network traffic. Examples of this are already emerging in research and development labs, particularly in health care, where researchers are using federated learning to train algorithms without centralizing sensitive data and running afoul of HIPAA. When added to production networks, this technology will make it increasingly difficult for cybercriminals to hide.

Building from there, AI can share its locally collected data with other AI systems via an M2M interface, whether from peers in an industry, within a specific geography, or with law enforcement developing a more global perspective.

In addition to pulling from external feeds or analyzing internal traffic and data, federated machine learning will feed on the deluge of relevant information coming from new edge computing devices and environments being collected by local learning nodes.

For this to work, these local nodes will need to operate in a continuous learning mode and evolve from a hub-and-spoke model, where everything routes back to the central AI, to a more interconnected system. Rather than operating as information islands, a federated learning system would let these data sets interconnect so that learning models could adapt to event trends and changing environments from the moment a threat gets detected.

That way, rather than waiting for information to make the round trip to the central AI once an attack sensor has been tripped, other local learning nodes and embedded security devices are immediately alerted. These regional elements could then create and coordinate an ad-hoc swarm of local, interactive components to autonomously respond to the threat in real-time, even in mid-attack by anticipating the next move of the attacker or malware, while waiting for refined intelligence from a supervised authoritative master AI node.

Finally, the systems would share these events with the master AI node and also local learner nodes so that an event at one location improves the intelligence of the entire system. This would let the system customize the intelligence to the unique configurations and solutions in place at a particular place in the network. This would help local nodes collect and process data more efficiently, and also enhance their first-tier response to local cyber events.

The security industry clearly needs more efficient ways to analyze threat intelligence. When combined with automation to assist with autonomous decision-making, the intelligence gathered with federated machine learning will help organizations more effectively fight the increasingly aggressive and damaging nature of today's cybercrime. Throughout 2020 and beyond, AI in its various forms will continue to move forward, helping to level the playing field and making it more possible to fend off the growing deluge of attacks.

Derek Manky, chief, Global Threat Alliances, FortiGuard Labs

Original post:
Deliver More Effective Threat Intelligence with Federated Machine Learning - SC Magazine

Posted in Machine Learning | Comments Off on Deliver More Effective Threat Intelligence with Federated Machine Learning – SC Magazine