

Category Archives: Machine Learning

The future of software testing: Machine learning to the rescue – TechBeacon

The last decade has seen a relentless push to deliver software faster. Automated testing has emerged as one of the most important technologies for scaling DevOps, companies are investing enormous time and effort to build end-to-end software delivery pipelines, and containers and their ecosystem are living up to their early promise.

The combination of delivery pipelines and containers has helped high performers to deliver software faster than ever. That said, many organizations are still struggling to balance speed and quality. Many are stuck trying to make headway with legacy software, large test suites, and brittle pipelines. So where do you go from here?

In the drive to release quickly, end users have become software testers. But they no longer want to be your testers, and companies are taking note. Companies now want to ensure that quality is not compromised in the pursuit of speed.

Testing is one of the top DevOps controls that organizations can leverage to ensure that their customers engage with a delightful brand experience. Others include access control, activity logging, traceability, and disaster recovery. Our company's research over the past year indicates that slow feedback cycles, slow development loops, and developer productivity will remain the top priorities over the next few years.

Quality and access control are preventative controls, while others are reactive. There will be an increasing focus on quality in the future because it prevents customers from having a bad experience. Thus, delivering value fast (or, better yet, delivering the right value at the right quality level fast) is the key trend that we will see this year and beyond.

Here are the five key trends to watch.

Test automation efforts will continue to accelerate. A surprising number of companies still have manual tests in their delivery pipeline, but you can't deliver fast if you have humans in the critical path of the value chain, slowing things down. (The exception is exploratory testing, where humans are a must.)

Automating manual tests is a long process that requires dedicated engineering time. While many organizations have at least some test automation, there's more that needs to be done. That's why automated testing will remain one of the top trends going forward.

As teams automate tests and adopt DevOps, quality must become part of the DevOps mindset. That means quality will become a shared responsibility of everyone in the organization.

Figure 2. Top performers shift tests around to create new workflows. They shift left for earlier validation and right to speed up delivery. Source: Launchable

Teams will need to become more intentional about where tests land. Should they shift tests left to catch issues much earlier, or should they add more quality controls to the right? On the "shift-right" side of the house, practices such as chaos engineering and canary deployments are becoming essential.

Shifting large test suites left is difficult because you don't want to introduce long delays while running tests in an earlier part of your workflow. Many companies tag some tests from a large suite to run in pre-merge, but the downside is that these tests may or may not be relevant to a specific change set. Predictive test selection (see trend 5 below) provides a compelling solution for running just the relevant tests.

Over the past six to eight years, the industry has focused on connecting various tools by building robust delivery pipelines. Each of those tools generates a heavy exhaust of data, but that data is being used minimally, if at all. We have moved from "craft" or "artisanal" solutions to the "at-scale" stage in the evolution of tools in delivery pipelines.

The next phase is to bring smarts to the tooling. Expect to see an increased emphasis by practitioners on making data-driven decisions.

There are two key problems in testing: not enough tests, and too many of them. Test-generation tools take a shot at the first problem.

To create a UI test today, you either must write a lot of code or a tester has to click through the UI manually, which is an incredibly painful and slow process. To relieve this pain, test-generation tools use AI to create and run UI tests on various platforms.

For example, one tool my team explored uses a "trainer" that lets you record actions on a web app to create scriptless tests. While scriptless testing isn't a new idea, what is new is that this tool "auto-heals" tests in lockstep with the changes to your UI.

Another tool that we explored has AI bots that act like humans. They tap buttons, swipe images, type text, and navigate screens to detect issues. Once they find an issue, they create a ticket in Jira for the developers to take action on.

More testing tools that use AI will gain traction in 2021.

AI has other uses for testing apart from test generation. For organizations struggling with runtimes of large test suites, an emerging technology called predictive test selection is gaining traction.

Many companies have thousands of tests that run all the time. Testing a small change might take hours or even days to get feedback on. While more tests are generally good for quality, it also means that feedback comes more slowly.

To date, companies such as Google and Facebook have developed machine-learning algorithms that process incoming changes and run only the tests that are most likely to fail. This is predictive test selection.

What's amazing about this technology is that you can run between 10% and 20% of your tests to reach 90% confidence that a full run will not fail. This allows you to reduce a five-hour test suite that normally runs post-merge to 30 minutes on pre-merge, running only the tests that are most relevant to the source changes. Another scenario would be to reduce a one-hour run to six minutes.
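To make the core idea concrete, here is a minimal sketch of predictive test selection, assuming you have historical pass/fail records for (change, test) pairs. The feature names, the synthetic data, and the 0.3 threshold are illustrative assumptions, not details of Launchable's, Google's, or Facebook's actual systems.

```python
# Minimal sketch of predictive test selection: learn from historical
# (change, test) outcomes, then run only tests likely to fail.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical features per (change, test) pair:
# [files_changed_near_test, lines_changed, test_recent_failure_rate]
X_history = rng.random((5000, 3))
# 1 = the test failed for that change, 0 = it passed (synthetic labels)
y_history = (X_history @ np.array([0.5, 0.2, 0.9])
             + rng.normal(0, 0.2, 5000) > 0.9).astype(int)

model = GradientBoostingClassifier().fit(X_history, y_history)

# For an incoming change, score every test in the suite and keep the
# ones most likely to fail; defer the rest to the post-merge run.
X_new_change = rng.random((2000, 3))           # one row per test in the suite
fail_prob = model.predict_proba(X_new_change)[:, 1]
selected = np.where(fail_prob > 0.3)[0]        # threshold trades speed vs. confidence
print(f"Running {len(selected)} of 2000 tests pre-merge")
```

Lowering the threshold runs more tests and raises confidence; raising it shortens the pre-merge run, which is the trade-off behind the 10%-to-20% figures quoted above.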

Expect predictive test selection to become more mainstream in 2021.

Automated testing is taking over the world. Even so, many teams are struggling to make the transition. Continuous quality culture will become part of the DevOps mindset. Tools will continue to become smarter. Test-generation tools will help close the gap between manual and automated testing.

But as teams add more tests, they face real problems with test execution time. While more tests help improve quality, they often become a roadblock to productivity. Machine learning will come to the rescue as we roll into 2021.

See the original post here:
The future of software testing: Machine learning to the rescue - TechBeacon


Five real world AI and machine learning trends that will make an impact in 2021 – IT World Canada

Experts predict artificial intelligence (AI) and machine learning will enter a golden age in 2021, solving some of the hardest business problems.

Machine learning trains computers to learn from data with minimal human intervention. The science isn't new, but recent developments have given it fresh momentum, said Jin-Whan Jung, Senior Director & Leader, Advanced Analytics Lab at SAS. "The evolution of technology has really helped us," said Jung. "The real-time decision making that supports self-driving cars or robotic automation is possible because of the growth of data and computational power."

The COVID-19 crisis has also pushed the practice forward, said Jung. "We're using machine learning more for things like predicting the spread of the disease or the need for personal protective equipment," he said. Lifestyle changes mean that AI is being used more often at home, such as when Netflix makes recommendations on the next show to watch, noted Jung. As well, companies are increasingly turning to AI to improve their agility to help them cope with market disruption.

Jung's observations are backed by the latest IDC forecast. It estimates that global AI spending will double to $110 billion over the next four years. How will AI and machine learning make an impact in 2021? Here are the top five trends identified by Jung and his team of elite data scientists at the SAS Advanced Analytics Lab:

Canada's Armed Forces rely on Lockheed Martin's C-130 Hercules aircraft for search and rescue missions. Maintenance of these aircraft has been transformed by the marriage of machine learning and IoT. Six hundred sensors located throughout the aircraft produce 72,000 rows of data per flight hour, including fault codes on failing parts. By applying machine learning, the system develops real-time best practices for the maintenance of the aircraft.

"We are embedding the intelligence at the edge, which is faster and smarter, and that's the key to the benefits," said Jung. Indeed, the combination is so powerful that Gartner predicts that by 2022, more than 80 per cent of enterprise IoT projects will incorporate AI in some form, up from just 10 per cent today.

Computer vision trains computers to interpret and understand the visual world. Using deep learning models, machines can accurately identify objects in videos, or images in documents, and react to what they see.

The practice is already having a big impact on industries like transportation, healthcare, banking and manufacturing. "For example, a camera in a self-driving car can identify objects in front of the car, such as stop signs, traffic signals or pedestrians, and react accordingly," said Jung. Computer vision has also been used to analyze scans to determine whether tumors are cancerous or benign, avoiding the need for a biopsy. In banking, computer vision can be used to spot counterfeit bills or for processing document images, rapidly robotizing cumbersome manual processes. In manufacturing, it can improve defect detection rates by up to 90 per cent. And it is even helping to save lives: cameras that monitor and analyze power lines enable early detection of wildfires.
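As a rough illustration of how such systems classify what a camera sees, the sketch below runs a generic pretrained ImageNet classifier from torchvision. The model, the file name, and the task are stand-in assumptions; a production system would use a domain-specific model (stop signs, tumors, banknotes) rather than this one.

```python
# Minimal sketch of computer-vision inference with a pretrained CNN.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(pretrained=True).eval()  # generic ImageNet classifier

# Standard ImageNet preprocessing: resize, crop, scale, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("frame.jpg")            # hypothetical camera frame
batch = preprocess(img).unsqueeze(0)     # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)
pred = logits.argmax(dim=1).item()       # index into the 1,000 ImageNet classes
print(f"Predicted class index: {pred}")
```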

At the core of machine learning is the idea that computers are not simply trained based on a static set of rules but can learn to adapt to changing circumstances. "It's similar to the way you learn from your own successes and failures," said Jung. "Business is going to be moving more and more in this direction."

Currently, adaptive learning is often used in fraud investigations. Machines can use feedback from the data or investigators to fine-tune their ability to spot the fraudsters. It will also play a key role in hyper-automation, a top technology trend identified by Gartner. The idea is that businesses should automate processes wherever possible. If it's going to work, however, automated business processes must be able to adapt to different situations over time, Jung said.
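A minimal sketch of that feedback loop, using scikit-learn's incremental partial_fit interface: the model is updated each time an investigator confirms or rejects an alert. The feature names and the stand-in feedback rule are illustrative assumptions, not any vendor's implementation.

```python
# Minimal sketch of adaptive learning for fraud triage: a linear model
# is updated incrementally as investigator verdicts arrive.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(loss="log_loss")   # logistic regression, trained online

# Initial fit on a small labeled batch. Hypothetical features:
# [amount_zscore, transaction_velocity, geo_mismatch]
X0 = rng.random((200, 3))
y0 = (X0[:, 0] + X0[:, 2] > 1.0).astype(int)   # 1 = fraud, 0 = legitimate
model.partial_fit(X0, y0, classes=np.array([0, 1]))

# As cases are worked, each verdict becomes one more training example,
# so the model keeps adapting to shifting fraud patterns.
for _ in range(1000):
    x = rng.random((1, 3))
    verdict = int(x[0, 0] + x[0, 2] > 1.0)     # stand-in for investigator feedback
    model.partial_fit(x, [verdict])
```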

To deliver a return for the business, AI cannot be kept solely in the hands of data scientists, said Jung. In 2021, organizations will want to build greater value by putting analytics in the hands of the people who can derive insights to improve the business. "We have to make sure that we not only make a good product; we want to make sure that people use those things," said Jung. As an example, Gartner suggests that AI will increasingly become part of the mainstream DevOps process to provide a clearer path to value.

Responsible AI will become a high priority for executives in 2021, said Jung. In the past year, ethical issues have been raised in relation to the use of AI for surveillance by law enforcement agencies, or by businesses for marketing campaigns. There is also talk around the world of legislation related to responsible AI.

"There is a possibility for bias in the machine, the data or the way we train the model," said Jung. "We have to make every effort to have processes and gatekeepers to double and triple check to ensure compliance, privacy and fairness." Gartner also recommends the creation of an external AI ethics board to advise on the potential impact of AI projects.

Large companies are increasingly hiring Chief Analytics Officers (CAOs) and building the resources to determine the best way to leverage analytics, said Jung. However, organizations of any size can benefit from AI and machine learning, even if they lack in-house expertise.

Jung recommends that organizations without experience in analytics consider getting an assessment of how to turn data into a competitive advantage. For example, the Advanced Analytics Lab at SAS offers an innovation and advisory service that provides guidance on value-driven analytics strategies, helping organizations define a roadmap that aligns with business priorities, from data collection and maintenance to analytics deployment through to execution and monitoring, to fulfill the organization's vision, said Jung. As we progress into 2021, organizations will increasingly discover the value of analytics to solve business problems.


Jim Love, Chief Content Officer, IT World Canada

Read the original post:
Five real world AI and machine learning trends that will make an impact in 2021 - IT World Canada


Harnessing the power of machine learning for improved decision-making – GCN.com

INDUSTRY INSIGHT

Across government, IT managers are looking to harness the power of artificial intelligence and machine learning techniques (AI/ML) to extract and analyze data to support mission delivery and better serve citizens.

Practically every large federal agency is executing some type of proof of concept or pilot project related to AI/ML technologies. "The government's AI toolkit is diverse and spans the federal administrative state," according to a report commissioned by the Administrative Conference of the United States (ACUS). Nearly half of the 142 federal agencies canvassed have experimented with AI/ML tools, states the report, "Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies."

Moreover, AI tools are already improving agency operations across the full range of governance tasks, including regulatory mandate enforcement, adjudicating government benefits and privileges, monitoring and analyzing risks to public safety and health, providing weather forecasting information and extracting information from the trove of government data to address consumer complaints.

Agencies with mature data science practices are further along in their AI/ML exploration. However, because agencies are at different stages in their digital journeys, many federal decision-makers still struggle to understand AI/ML. They need a better grasp of the skill sets and best practices needed to derive meaningful insights from data powered by AI/ML tools.

Understanding how AI/ML works

AI mimics human cognitive functions such as the ability to sense, reason, act and adapt, giving machines the ability to act intelligently. Machine learning is a component of AI that involves training algorithms or models that then make predictions about data they have not yet observed. ML models are not programmed like conventional algorithms. They are trained using data -- such as words, log data, time series data or images -- and make predictions on actions to perform.

Within the field of machine learning, there are two main types of tasks: supervised and unsupervised.

With supervised learning, data analysts have prior knowledge of what the output values for their samples should be. The AI system is specifically told what to look for, so the model is trained until it can detect underlying patterns and relationships. For example, an email spam filter is a machine learning program that can learn to flag spam after being given examples of spam emails that are flagged by users and examples of regular non-spam emails. The examples the system uses to learn are called the training set.
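To make the spam-filter example concrete, here is a minimal sketch of a supervised classifier in scikit-learn; the toy messages and labels stand in for a real training set.

```python
# Minimal sketch of the spam-filter example: a supervised model learns
# from a labeled training set, then predicts labels for unseen mail.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_messages = [                   # the "training set"
    "win a free prize now",          # spam
    "cheap loans click here",        # spam
    "meeting moved to 3pm",          # not spam
    "see attached quarterly report", # not spam
]
train_labels = [1, 1, 0, 0]          # 1 = spam, 0 = not spam

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(train_messages, train_labels)

# Predicted label for a message the model has never seen.
print(spam_filter.predict(["claim your free prize"]))
```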

Unsupervised learning looks for previously undetected patterns in a dataset with no pre-existing labels and with a minimum of human supervision. For instance, data points with similar characteristics can be automatically grouped into clusters for anomaly detection, such as in fraud detection or identifying defective mechanical parts in predictive maintenance.
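And a minimal sketch of the unsupervised case: records are clustered with no labels at all, and records that land in very small clusters are flagged for review. The synthetic data, cluster count, and size threshold are illustrative assumptions.

```python
# Minimal sketch of clustering-based anomaly detection: group unlabeled
# records, then flag members of tiny clusters as candidates for review
# (possible fraud, or defective parts in predictive maintenance).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
normal = rng.normal(0.0, 1.0, size=(1000, 4))   # routine records
odd = rng.normal(8.0, 1.0, size=(5, 4))         # a handful of oddities
X = np.vstack([normal, odd])

labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
sizes = np.bincount(labels)                     # records per cluster
small_clusters = np.where(sizes < 0.01 * len(X))[0]
anomalies = np.where(np.isin(labels, small_clusters))[0]
print(f"Flagged {len(anomalies)} of {len(X)} records for review")
```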

Supervised, unsupervised in action

It is not a matter of which approach is better. Both supervised and unsupervised learning are needed for machine learning to be effective.

Both approaches were applied recently to help a large defense financial management and comptroller office resolve over $2 billion in unmatched transactions in an enterprise resource planning system. Many tasks required significant manual effort, so the organization implemented a robotic process automation solution to automatically access data from various financial management systems and process transactions without human intervention. However, RPA fell short when data variances exceeded tolerance for matching data and documents, so AI/ML techniques were used to resolve the unmatched transactions.

The data analyst team used supervised learning based on the preexisting rules that had produced these transactions. The team was then able to provide additional value by applying unsupervised ML techniques to find patterns in the data that they were not previously aware of.

To get a better sense of how AI/ML can help agencies better manage data, it is worth considering these three steps:

Data analysts should think of these steps as a continuous loop. If the output from unsupervised learning is meaningful, they can incorporate it into the supervised learning modeling. Thus, they are involved in a continuous learning process as they explore the data together.
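A minimal sketch of that loop, assuming simple tabular transaction features: an unsupervised pass discovers clusters, and the cluster assignment is fed back in as a feature for the supervised model. The feature columns and labels are illustrative assumptions.

```python
# Minimal sketch of the supervised/unsupervised loop: discovered
# clusters become an extra input to the supervised model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.random((500, 4))                 # hypothetical transaction features
y = (X[:, 0] > 0.5).astype(int)          # 1 = matched, 0 = unmatched (toy labels)

# Unsupervised pass: surface structure the preexisting rules missed.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Supervised pass: if the clusters look meaningful, append the cluster
# id as a feature (in practice it would usually be one-hot encoded).
X_augmented = np.column_stack([X, clusters])
model = LogisticRegression(max_iter=1000).fit(X_augmented, y)
print(f"Training accuracy: {model.score(X_augmented, y):.2f}")
```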

Avoiding pitfalls

It is important for IT teams to realize they cannot just feed data into machine learning models, especially with unsupervised learning, which is a little more art than science. That is where humans really need to be involved. Also, analysts should avoid over-fitting models by seeking to derive too much insight from the data.

Remember: AI/ML and RPA are meant to augment humans in the workforce, not merely replace people with autonomous robots or chatbots. To be effective, agencies must strategically organize around the right people, processes and technologies to harness the power of innovative technologies such as AI/ML to achieve the performance they need at scale.

About the Author

Samuel Stewart is a data scientist with World Wide Technology.

Read the original post:
Harnessing the power of machine learning for improved decision-making - GCN.com


Taking Micro Machine Learning to the MAX78000 – Electronic Design


I tend to do only a few hands-on articles a year, so I look for cutting-edge platforms that developers will want to check out. Maxim Integrated's MAX78000 evaluation kit fits in this bucket. The MAX78000 is essentially an Arm Cortex-M4F microcontroller with a lot of hardware around it, including a convolutional-neural-network (CNN) accelerator designed by Maxim (Fig. 1). This machine-learning (ML) support allows the chip to handle chores like identifying voice keywords or even faces in camera images in real time without busting the power budget.

1. The MAX78000 includes a Cortex-M4F and a RISC-V core as well as a CNN accelerator.

The chip also includes a RISC-V core that caught my eye. However, the development tools are so new that the RISC-V support is still in the works, as the Cortex-M4F is the main processor. Even the CNN support is just out of the beta stage, but that's what this article will concentrate on.

The MAX78000 has the usual microcontroller peripheral complement, including a range of serial ports, timers, and serial interfaces like I2S. It even has a parallel camera interface. Among the analog peripherals is an 8-channel, 10-bit sigma-delta ADC. There are four comparators as well.

The chip has a large 512-kB flash memory along with 128 kB of SRAM and a boot ROM that allows more complex boot procedures such as secure boot support. There's on-chip key storage as well as CRC and AES hardware support. We will get into the CNN support a little later. The Github-based documentation covers some of the features I outline here in step-by-step detail.

The development tools are free and based on Eclipse, which is the basis for other platforms like Texas Instruments' Code Composer Studio and Silicon Labs' Simplicity Studio. Maxim doesn't do a lot of customization, but there's enough to facilitate using hardware like the MAX78000 while making it easy to utilize third-party plug-ins and tools, which can be quite handy when dealing with cloud or IoT development environments. The default installation includes examples and tutorials that enable easy testing of the CNN hardware and other peripherals.

The MAX78000 development board features two LCD displays. The larger, 3.5-in TFT touch-enabled display is for the processor, while the second, smaller display provides power-management information. The chip doesn't have a display controller built in, so it uses a serial interface to work with the larger display. The power-tracking support is sophisticated, but I won't delve into that now.

There's a 16-MB QSPI flash chip that can be handy for storing image data. In addition, a USB bridge to the flash chip allows for faster and easier downloads.

The board also adds some useful devices like a digital microphone, a 3D accelerometer, and 3D gyro. Several buttons and LEDs round out the peripherals.

There are a couple of JTAG headers; the RISC-V core has its own. As noted, I didn't play with the RISC-V core this time around, as it's not required for using the CNN support, although it could be used. Right now, the Maxim tools generate C code for the Cortex-M4F to set up the CNN hardware. The CNN hardware is designed to handle a single model, but it's possible to swap in new models quickly.

As with most ML hardware, the underlying hardware tends to be hidden from most programmers, providing more of a black-box operation where you set up the box and feed it data with results coming out the other end. This works well if the models are available; it's a matter of training them with different information or using trained models. The challenge comes when developing and training new models, which is something I'll avoid discussing here.

I did try out two of the models provided by Maxim, including a Keyword Spotting and a Face Identification (FaceID) application. The Keyword Spotting app is essentially the speech-recognition system that can be used to listen for a keyword to start off a cloud-based discussion, which is how most Alexa-based voice systems work since the cloud handles everything after recognizing a keyword.

On the other hand, being able to recognize a number of different keywords makes it possible to build a voice-based command system, such as those used in many car navigation systems. As usual, the Cortex-M4F handles the input and does a bit of munging to provide suitable inputs to the CNN accelerator (Fig. 2). The detected class output specifies which keyword is recognized, if any. The application can then utilize this information.

2. The Cortex-M4F handles the initial audio input stream prior to handing off the information to the CNN accelerator.

The FaceID system highlights the camera support of the MAX78000 (Fig. 3). This could be used to recognize a face or identify a particular part moving by on an assembly line. The sample application can operate using canned inputs, as shown in the figure, or from the camera.

3. The FaceID application highlights the CNN's ability to process images in real time.

Using the defaults is as easy as compiling and programming the chip. Maxim provides all of the sample code and procedures. These can be modified somewhat, but retraining a model is a more involved exercise, though one that Maxim's documentation does cover. These examples provide an outline of what needs to be done as well as what needs to be changed to customize the solution.

Changing the model and application to something like a motor vibration-monitoring system will be a significant job requiring a new model, but one that the chip is likely able to handle. It will require much more machine learning and CNN support, so it's not something that should be taken lightly.

The toolset supports models from platforms like TensorFlow and PyTorch (Fig. 4). This is useful because training isn't handled by the chip, but rather done on platforms like a PC or cloud servers. Likewise, the models can be refined and tested on higher-end hardware to verify the models, which can then be pruned to fit on the MAX78000.

4. PyTorch is just one of the frameworks handled by the MAX78000. Training isn't done on the micro. Maxim's tools convert the models to code that drives the CNN hardware.
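As a rough illustration, the sketch below defines the kind of small PyTorch CNN that would be trained on a PC before conversion. The layer sizes, input shape, and keyword count are illustrative assumptions; Maxim's actual ai8x training flow adds quantization-aware layers and hardware constraints not shown here.

```python
# Minimal sketch of a small keyword-spotting-style CNN in PyTorch, of
# the kind trained on a PC and then converted for the CNN accelerator.
import torch
import torch.nn as nn

class TinyKeywordNet(nn.Module):
    def __init__(self, n_keywords: int = 20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # input: 1x64x64 spectrogram
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_keywords)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyKeywordNet()
dummy = torch.randn(1, 1, 64, 64)   # batch of one spectrogram
print(model(dummy).shape)           # torch.Size([1, 20]) -> one score per keyword
```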

At this point, the CNN accelerator documentation is a bit sparse, as is the RISC-V support. Maxim's CNN model compiler kicks out C code that drops in nicely to the Eclipse IDE. Debugging the regular application code is on par with other cross-development systems where remote debugging via JTAG is the norm.

Maxim also provides the MAX78000FTHR, the little brother of the evaluation kit (Fig. 5). This doesn't have the display or other peripheral hardware, but most I/O is exposed. The board alone is only $25. The chip is priced around $15 in small quantities. The Github-based documentation provides more details.

5. The evaluation kit has a little brother, the MAX78000FTHR.

The MAX78000 was fun to work with. It's a great platform for supporting ML applications on the edge. However, be aware that while it's a very low power solution, it's not the same thing as even a low-end Nvidia Jetson Nano. It will be interesting to check out the power-tracking support since power utilization and requirements will likely be key factors in many MAX78000 applications, especially battery-based solutions.

Original post:
Taking Micro Machine Learning to the MAX78000 - Electronic Design


Northwell Health researchers using Facebook data and AI to spot early stages of severe psychiatric illness – FierceHealthcare

After going missing for three days in 2016, Christian Herrera Gaton of Jackson Heights, New York, was diagnosed with bipolar disorder type 1.

His experiences with bipolar disorder include mood swings, depression and manic episodes. During a recent bout with the illness, he was admitted to Zucker Hillside Hospital in August 2020 due to some stress he was feeling from the COVID-19 pandemic.

While at Zucker for treatment, the Feinstein Institutes for Medical Research, the research arm of New York's Northwell Health, approached him to join a study about Facebook data and psychiatric conditions.

The goal of the study was to use machine learning algorithms to predict a patient's psychiatric diagnosis more than a year in advance of an official diagnosis and stay in the hospital.

Michael Birnbaum, M.D., assistant professor at the Feinstein Institutes' Institute of Behavioral Science, saw an opportunity to use the social media platforms that are a part of everyday life to gain insights into the early stages of severe psychiatric illness.

"There was an interest in harnessing these ubiquitous, widely used platforms in understanding how we could improve the work that we do," Birnbaum said in an interview. "We wanted to know what we can learn from the digital universe and all of the data that's being created and uploaded by the young folks that we treat. That's what motivated our interest."


After Gaton, a former student at John Jay College of Criminal Justice, was discharged from the hospital, he shared almost 10 years of Facebook and Instagram data with the Feinstein Institutes. He uploaded an archive that contained pictures, private messages and basic user information.

"It's been a difficult experience to deal with [COVID] and to go through everything with the hospitals and losing friends because of doing stupid things during manic episodes," Gaton told Fierce Healthcare. "It's not easy, but at least I get to join this research study and help other people."

The study, conducted along with IBM Research, looked at patients with schizophrenia spectrum disorders and mood disorders. Feinstein Institutes researchers handled the participant recruitment and assessments as well as data collection and analysis. Meanwhile, IBM developed the machine learning algorithms that researchers used to analyze Facebook data.

Results of the study, "Identifying signals associated with psychiatric illness utilizing language and images posted to Facebook," were published Dec. 3 in Nature Partner Journals (npj) Schizophrenia.

Feinstein Institutes and IBM researchers studied archives of people in an early treatment program to extract meaning from the data to gain an understanding of how people with mental illness use social media.

"Essentially, at its core, the machine learns to predict which group an individual belongs to, based on data that we feed it," Birnbaum explained. "So, for example, if we show the computer a Facebook post and then we say to the computer, based on what you've learned so far and based on the patterns that you recognize, does this post belong to an individual with schizophrenia or bipolar disorder? Then the computer makes a prediction."

Birnbaum added that the greater the prediction accuracy, the more effective the algorithms are at identifying which characteristics belong to which group of people.

Feinstein and IBM took care to anonymize the social media data, according to Birnbaum. They stripped out names and addresses from written posts. Words, run through language-analytic software, essentially become vectors, Birnbaum said. "The actual content of the sentences, once they're parsed through the software, often becomes meaningless."
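As a rough illustration of the general technique (not the study's actual pipeline), the sketch below turns posts into numeric vectors and trains a classifier to predict group membership; the toy posts and labels are assumptions for demonstration only.

```python
# Minimal sketch: posts become numeric vectors via language-analytic
# features, and a classifier predicts which group a post belongs to.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "feeling great about the new semester",
    "why is everyone against me!!! so angry",
    "had a quiet day, made dinner with friends",
    "everything hurts and nothing matters!!!",
]
groups = [0, 1, 0, 1]   # toy labels standing in for diagnostic groups

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(posts, groups)                      # posts -> vectors -> model

print(clf.predict(["so tired of all this!!!"]))  # predicted group for a new post
```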

In addition, the machine learning software does not analyze participants' images closely. Instead, it focuses on shape, size, height, contrast and colors, Birnbaum said.

"We did our best to ensure that we de-identified the data to the extent possible and ensured the confidentiality of our participants, because that's one of our top priorities, of course," Birnbaum said.

The study analyzed Facebook data from the 18 months prior to hospitalization to help predict a patient's diagnosis or hospitalization a year in advance.

Researchers used machine learning algorithms to study 3.4 million Facebook messages and 142,390 images from 223 participants for up to 18 months before their first psychiatric hospitalization. Study subjects with schizophrenia spectrum disorders and mood disorders were more prone to discuss anger, swearing, sex and negative emotions in their Facebook posts, according to the study.


Birnbaum sees an opportunity to use the data from social media platforms to gain insights to deliver better healthcare. By using social media, such as analyzing Facebook status updates, researchers can gain insights on personality traits, demographics, political views and substance use.

Harnessing social media platforms could be a significant step forward for psychiatry, which is limited by its reliance on mostly retrospective, self-reported data, the study stated.

Gaton believes that he could have avoided time in the hospital if he had received an earlier diagnosis. Like other subjects in the study, Gaton can sense the warning signs of an episode when he starts to post differently on Facebook.

From analyzing the data, researchers found that study participants used more swear words compared with healthy volunteers. Some participants would use words related to blood, pain or biological processes. As their conditions progressed and patients neared hospitalization, they used more punctuation and negative emotional words in their Facebook posts, according to the study.

Other organizations are also turning to artificial intelligence to monitor mental health. Researchers at Brigham and Women's Hospital are using AI technology from startup Rose to monitor the mental well-being of front-line workers during the COVID-19 pandemic. Meanwhile, the Feinstein Institutes recently developed an AI tool that can help patients get better sleep in the hospital.

Researchers see a use for social media data for patients that could be similar to the vital data they pull from a blood or urine sample, according to Birnbaum. "I could imagine a world where people go see their psychiatrists and provide their archives in the same way they provide a blood test, which is then analyzed much like a blood test and is used to inform clinical decision-making moving forward," he said.

RELATED:The unexpected ways AI is impacting the delivery of care, including for COVID-19

"I think that is where psychiatry is heading, and social media will play a component of a much larger, broader digital representation of behavioral health."

Guillermo Cecchi, principal research staff member, computational psychiatry, at IBM Research, also sees a use for social media data as a common way to evaluate patients.

"Our vision is that this type of technology could one day be used in a non-burdensome way, with patient consent and high privacy standards, to provide clinicians with the most comprehensive and relevant information to make treatment decisions, including regular clinical assessments, biomarkers and a patient's medical history," Cecchi told Fierce Healthcare.

Researchers hope that the Facebook data can inform future studies.

"Ultimately, the language markers we identified with AI in this study could be used to inform future work, shaped with rigorous ethical frameworks, that could help clinicians to monitor the progression of mental health patients considered at-risk for relapse or undergoing treatment," Cecchi said.

Gaton said he would like to see the technology get more accurate. "I just hope that with my contributions to the study, the technology gets more accurate and more responsive and can be something that doctors can use in the near future, with patient consent, of course," he said.

Read the original here:
Northwell Health researchers using Facebook data and AI to spot early stages of severe psychiatric illness - FierceHealthcare


Connected and autonomous vehicles: Protecting data and machine learning innovations – Lexology

The development of connected and autonomous vehicles (CAVs) is technology-driven and data-centric. Zenzic's Roadmap to 2030 highlights that 'the intelligence of self-driving vehicles is driven by advanced features such as artificial intelligence (AI) or machine learning (ML) techniques'.[1] Developers of connected and automated mobility (CAM) technologies are engineering advances in machine learning and machine analysis techniques that can create valuable, potentially life-saving, insights from the massive well of data that is being generated.

Diego Black and Lucy Pegler take a look at the legal and regulatory issues involved in protecting data and innovations in CAVs.

The data of driving

It is predicted that the average driverless car will produce around 4TB of data per day, including data on traffic, route choices, passenger preferences, vehicle performance and many more data points[2].

'Data is foundational to emerging CAM technologies, products and services driving their safety, operation and connectivity'.[3]

As Burges Salmon and AXA UK outlined in their joint report as part of FLOURISH, an Innovate UK-funded CAV project, the data produced by CAVs can be broadly divided into a number of categories based on its characteristics: for example, sensitive commercial data, commercial data and personal data. How data should be protected will depend on its characteristics and, importantly, the purposes for which it is used. The use of personal data (i.e. data from which an individual can be identified) attracts particular consideration.

The importance of data to the CAM industry and, in particular, the need to share data effectively to enable the deployment and operation of CAM, needs to be balanced against data protection considerations. In 2018, the Open Data Institute (ODI) published a report setting out its view that all journey data is personal data,[4] consequently bringing journey data within the scope of the General Data Protection Regulation.[5]

Additionally, the European Data Protection Board (EDPB) has confirmed that the ePrivacy directive (2002/58/EC as revised by 2009/136/EC) applies to connected vehicles by virtue of 'the connected vehicle and every device connected to it [being] considered as a 'terminal equipment'.'[6] This means that any machine learning innovations deployed in CAVs will inevitably process vast amounts of personal data. The UK Information Commissioner's Office has issued guidance on how best to deal with harnessing both big data and AI in relation to personal data, including emphasising the need for industry to deploy ethical principles, create ethics boards to monitor the new uses of data and ensure that machine learning algorithms are auditable.[7]

Navigating the legal frameworks that apply to the use of data is complex and whilst the EDPB has confirmed its position in relation to connected vehicles, automated vehicles and their potential use cases raise an entirely different set of considerations. Whilst the market is developing rapidly, use case scenarios for automated mobility will focus on how people consume services. Demand responsive transport and ride sharing are likely to play a huge role in the future of personal mobility.

The main issue policy makers now face is the ever-evolving nature of the technology. As new, potentially unforeseen, technologies are integrated into CAVs, the industry will require both a stringent data protection framework on the one hand, and flexibility and accessibility on the other. These two policy goals are necessarily at odds with one another, and the industry will need to take a realistic, privacy-by-design approach to future development, working with rather than against regulators.

Whilst the GDPR and ePrivacy Directive will likely form the building blocks of future regulation of CAV data, we anticipate the development of a complementary framework of regulation and standards that recognises the unique applications of CAM technologies and the use of data.

Cyber security

The prolific and regular nature of cyber-attacks poses risks to both public acceptance of CAV technology and to the underlying business interests of organisations involved in the CAV ecosystem.

New technologies can present threats to existing cyber security measures. Tarquin Folliss of Reliance acsn highlights this, noting that 'a CAV's mix of operational and information technology will produce systems complex to monitor, where intrusive endpoint monitoring might disrupt inadvertently the technology underpinning safety'. The threat is even more acute when thinking about CAVs in action and, as Tarquin notes, the ability for 'malign actors to target a CAV network in the same way they target other critical national infrastructure networks and utilities, in order to disrupt'.

In 2017, the government announced 8 Key principles of Cyber Security for Connected and Automated Vehicles. This, alongside the DCMS's IoT code of practice, the CCAV's CAV code of practice and the BSI's PAS 1885, provides a good starting point for CAV manufacturers. Best practices include:

Work continues at pace on cyber security for CAM. In May this year, Zenzic published its Cyber Resilience in Connected and Automated Mobility (CAM) Cyber Feasibility Report which sets out the findings of seven projects tasked with providing a clear picture of the challenges and potential solutions in ensuring digital resilience and cyber security within CAM.

Demonstrating the pace of work in the sector, in June 2020 the United Nations Economic Commission for Europe (UNECE) published two new UN Regulations focused on cyber security in the automotive sector. The Regulations represent another step-change in the approach to managing the significant cyber risk of an increasingly connected automotive sector.

Protecting innovation

As innovation in the CAV sector increases, issues regarding intellectual property and its protection and exploitation become more important. Companies that historically were not involved in the automotive sector are now rapidly becoming key partners, providing expertise in technologies such as IT security, telecoms, blockchain and machine learning. In autonomous vehicles, many of the biggest patent filers have software and telecoms backgrounds.[8]

With the increasing use of in-car and inter-car connectivity, and the growing amount of data that must be handled per second as levels of autonomy rise, innovators in the CAV space must address data security as well as determine how best to handle the large data sets. Furthermore, the recent UK government call for evidence on automated lane keeping systems is being seen by many as the first step toward standards being introduced for autonomous vehicles.

In view of these developments, new challenges are now being faced by companies looking to benefit from their innovations. Unlike more traditional automotive innovation, where advances lay in improvements to engineering and machinery, many of the innovations in the CAV space reside in electronics and software development. The ability to protect and exploit inventions in the software space has become increasingly relevant to the automotive industry.

Multiple Intellectual Property rights exist that can be used to protect innovations in CAVs. Some rights can be particularly effective in areas of technology where standards exist, or are likely to exist. Two of the main ways seen at present are through the use of patents and trade secrets. Both can be used in combination, or separately, to provide an effective IP strategy. Such an approach is seen in other industries such as those involved in data security.

For companies that are developing or improving machine learning models, or training sets, the use of trade secrets is particularly common. Companies relying on trade secrets may often license access to, or sell the outputs of, their innovations. Advantageously, trade secrets are free and last indefinitely.

An effective strategy in such fields is to obtain patents that cover the technological standard. By definition if a third party were to adhere to the defined standard, they would necessarily fall within the scope of the patent, thus providing the owner of the patent with a potential revenue stream through licensing agreements. If, as anticipated, standards will be set in CAVs any company that can obtain patents to cover the likely standard will be at an advantage. Such licenses are typically offered under a fair, reasonable and non-discriminatory (FRAND) basis, to ensure that companies are not prevented by patent holders from entering the market.

A key consideration is that the use of trade secrets may be incompatible with the use of standards. If technology standards are introduced for autonomous vehicles, companies would have to demonstrate that their technology complies with them, and the use of trade secrets may be incompatible with that need to demonstrate compliance.

However, whilst a patent provides a stronger form of protection, in order to enforce a patent the owner must be able to demonstrate that a third party is performing the acts defined in the patent. In the case of machine learning and mathematical-based methods, such information is often kept hidden, making proving infringement difficult. As a result, patents in such areas are often directed towards a visible, or tangible, output. For example, in CAVs this may be the control of a vehicle based on the improvements in the machine learning. Due to the difficulty in demonstrating infringement, many companies are choosing to protect their innovations with a mixture of trade secrets and patents.

Legal protections for innovations

For the innovations typically seen in the software side of CAVs, trade secrets and patents are the two main forms of protection.

Trade secrets are, as the name implies, where a company will keep all, or part of, their innovation a secret. In software-based inventions this may be in form of a black-box disclosure where the workings and functionality of the software are kept secret. However, steps do need to be taken to keep the innovation secret, and they do not prevent a third party from independently implementing, or reverse engineering, the innovation. Furthermore, once a trade secret is made public, the value associated with the trade secret is gone.

Patents are an exclusive right, lasting up to 20 years, which allows the holder to prevent, or request a license from, a third party utilising the technology covered by the scope of the patent in that territory. Therefore it is not possible to enforce, say, a US patent in the UK. Unlike trade secrets, publication of patents is an important part of the process.

In order for inventions to be patented they must be new (that is to say, they have not been disclosed anywhere in the world before), inventive (not run-of-the-mill improvements), and concern non-excluded subject matter. The exclusions in the UK and Europe cover, amongst other fields, software and mathematical methods 'as such'. In the case of CAVs, a large number of inventions are developed that could fall into the software and mathematical methods categories.

The test regarding whether or not an invention is seen as excluded subject matter varies between jurisdictions. In Europe, if an invention is seen to solve a technical problem, for example one relating to the control of vehicles, it would be deemed allowable. Many of the innovations in CAVs can be tied to technical problems relating to, for example, the control of vehicles or improvements in data security. As such, on the whole, CAV inventions may escape the exclusions.

What does the future hold?

Technology is advancing at a rapid rate. At the same time as industry develops more and more sophisticated software to harness data, bad actors gain access to more advanced tools. To combat these increased threats, CAV manufacturers need to be putting in place flexible frameworks to review and audit their uses of data now, looking toward the developments of tomorrow to assess the data security measures they have today. They should also be looking to protect some of their most valuable IP assets from the outset, including machine learning developments in a way that is secure and enforceable.

Originally posted here:
Connected and autonomous vehicles: Protecting data and machine learning innovations - Lexology
