

Category Archives: Machine Learning

Millions of historic newspaper images get the machine learning treatment at the Library of Congress – TechCrunch

Historians interested in the way events and people were chronicled in the old days once had to sort through card catalogs for old papers, then microfiche scans, then digital listings. Modern advances, however, can index them down to each individual word and photo. A new effort from the Library of Congress has digitized and organized photos and illustrations from centuries of news using state-of-the-art machine learning.

Led by Ben Lee, a researcher from the University of Washington occupying the Library's Innovator in Residence position, the Newspaper Navigator collects and surfaces data from images from some 16 million pages of newspapers throughout American history.

Lee and his colleagues were inspired by work already being done in Chronicling America, an ongoing digitization effort for old newspapers and other such print materials. While that work used optical character recognition to scan the contents of all the papers, there was also a crowdsourced project in which people identified and outlined images for further analysis. Volunteers drew boxes around images relating to World War I, then transcribed the captions and categorized the picture.

This limited effort set the team thinking.

"I loved it because it emphasized the visual nature of the pages. Seeing the visual diversity of the content coming out of the project, I just thought it was so cool, and I wondered what it would be like to chronicle content like this from all over America," Lee told TechCrunch.

He also realized that what the volunteers had created was in fact an ideal set of training data for a machine learning system. "The question was, could we use this stuff to create an object detection model to go through every newspaper, to throw open the treasure chest?"

The answer, happily, was yes. Using the initial human-powered work of outlining images and captions as training data, they built an AI agent that could do so on its own. After the usual tweaking and optimizing, they set it loose on the full Chronicling America database of newspaper scans.
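
In practice, the training recipe described here is standard supervised object detection: the volunteer-drawn boxes become labeled examples for a detector that is then run over every page. The sketch below is purely illustrative and not the project's actual code; the category list and the choice of a torchvision Faster R-CNN (assuming a recent torchvision release) are assumptions.

```python
# Illustrative sketch: fine-tuning an off-the-shelf detector on
# crowdsourced bounding boxes (not the project's actual training code).
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Hypothetical categories resembling the crowdsourced annotations
# (photograph, illustration, map, comic, cartoon, ad, headline) + background.
NUM_CLASSES = 8

def build_model(num_classes=NUM_CLASSES):
    # Start from a detector pre-trained on COCO and swap its head so it
    # predicts newspaper-specific categories instead.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def train_one_epoch(model, loader, optimizer, device="cuda"):
    model.train()
    for images, targets in loader:  # targets: [{"boxes": Tensor[N,4], "labels": Tensor[N]}]
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)  # detector returns a dict of losses in train mode
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```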

"It ran for 19 days nonstop, definitely the largest computing job I've ever run," said Lee. But the results are remarkable: millions of images spanning three centuries (from 1789 to 1963) and organized with metadata pulled from their own captions. The team describes their work in a paper you can read here.

Assuming the captions are at all accurate, these images, until recently only accessible by trudging through the archives date by date and document by document, can be searched by their contents, like any other corpus.

Looking for pictures of the president in 1870? No need to browse dozens of papers looking for potential hits and double-checking the contents in the caption; just search Newspaper Navigator for "president 1870." Or if you want editorial cartoons from the World War II era, you can just get all illustrations from a date range. (The team has already zipped up the photos into yearly packages and plans other collections.)
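
Once every image carries machine-readable caption text and a date, the lookup itself is ordinary metadata filtering. Here is a toy sketch of that idea; the field names are invented for illustration and are not the dataset's actual schema.

```python
# Toy sketch of caption-based retrieval over extracted image metadata.
from dataclasses import dataclass

@dataclass
class NewsImage:
    caption: str
    year: int
    category: str  # e.g. "photograph", "illustration", "cartoon", "map"

def search(images, query=None, category=None, year_range=None):
    """Return images whose caption contains the query, optionally filtered
    by category and an inclusive (start, end) year range."""
    results = []
    for img in images:
        if query and query.lower() not in img.caption.lower():
            continue
        if category and img.category != category:
            continue
        if year_range and not (year_range[0] <= img.year <= year_range[1]):
            continue
        results.append(img)
    return results

# "president 1870"-style query: a keyword plus a year filter.
# hits = search(corpus, query="president", year_range=(1870, 1870))
# cartoons = search(corpus, category="cartoon", year_range=(1939, 1945))
```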

Here are a few examples of newspaper pages with the machine learning system's determinations overlaid on them (warning: plenty of hat ads and racism):

That's fun for a few minutes for casual browsers, but the key thing is what it opens up for researchers and other sets of documents. The team is throwing a "data jam" today to celebrate the release of the data set and tools, during which they hope to both discover and enable new applications.

"Hopefully it will be a great way to get people together to think of creative ways the data set can be used," said Lee. "The idea I'm really excited by from a machine learning perspective is trying to build out a user interface where people can build their own data set. Political cartoons or fashion ads, just let users define what they're interested in and train a classifier based on that."

A sample of what you might get if you asked for maps from the Civil War era.

In other words, Newspaper Navigator's AI agent could be the parent for a whole brood of more specific ones that could be used to scan and digitize other collections. That's actually the plan within the Library of Congress, where the digital collections team has been delighted by the possibilities brought up by Newspaper Navigator, and machine learning in general.

"One of the things we're interested in is how computation can expand the way we're enabling search and discovery," said Kate Zwaard. "Because we have OCR, you can find things it would have taken months or weeks to find. The Library's book collection has all these beautiful plates and illustrations. But if you want to know, like, what pictures are there of the Madonna and child, some are categorized, but others are inside books that aren't catalogued."

That could change in a hurry with an image-and-caption AI systematically poring over them.

Newspaper Navigator, the code behind it, and all the images and results from it are completely public domain, free to use or modify for any purpose. You can dive into the code at the project's GitHub.

Read more:
Millions of historic newspaper images get the machine learning treatment at the Library of Congress - TechCrunch

Posted in Machine Learning | Comments Off on Millions of historic newspaper images get the machine learning treatment at the Library of Congress – TechCrunch

Machine Learning Engineer: Challenges and Changes Facing the Profession – Dice Insights

Last year, the fastest-growing job title in the world was that of the machine learning (ML) engineer, and this looks set to continue for the foreseeable future. According to Indeed, the average base salary of an ML engineer in the US is $146,085, and the number of machine learning engineer openings grew by 344% between 2015 and 2018. Machine learning engineers dominate the job postings around artificial intelligence (A.I.), with 94% of job advertisements that contain AI or ML terminology targeting machine learning engineers specifically.

This demonstrates that organizations understand how profound an effect machine learning promises to have on businesses and society. AI and ML are predicted to drive a Fourth Industrial Revolution that will see vast improvements in global productivity and open up new avenues for innovation; by 2030, it's predicted that the global economy will be $15.7 trillion richer solely because of developments from these technologies.

The scale of demand for machine learning engineers is also unsurprising given how complex the role is. The goal of machine learning engineers is to deploy and manage machine learning models that process and learn from the patterns and structures in vast quantities of data, into applications running in production, to unlock real business value while ensuring compliance with corporate governance standards.

To do this, machine learning engineers have to sit at the intersection of three complex disciplines. The first discipline is data science, which is where the theoretical models that inform machine learning are created; the second discipline is DevOps, which focuses on the infrastructure and processes for scaling the operationalization of applications; and the third is software engineering, which is needed to make scalable and reliable code to run machine learning programs.

It's the fact that machine learning engineers have to be at ease in the language of data science, software engineering, and DevOps that makes them so scarce, and their value to organizations so great. A machine learning engineer has to have a deep skill-set; they must know multiple programming languages, have a very strong grasp of mathematics, and be able to understand and apply theoretical topics in computer science and statistics. They have to be comfortable with taking state-of-the-art models, which may only work in a specialized environment, and converting them into robust and scalable systems that are fit for a business environment.

As a burgeoning occupation, the role of a machine learning engineer is constantly evolving. The tools and capabilities that these engineers have in 2020 are radically different from those they had available in 2015, and this is set to continue evolving as the specialism matures. One of the best ways to understand what the role of a machine learning engineer means to an organization is to look at the challenges they face in practice, and how they evolve over time.

Four major challenges that every machine learning engineer has to deal with are data provenance, good data, reproducibility, and model monitoring.

Across a model's development and deployment lifecycle, there's interaction between a variety of systems and teams. This results in a highly complex chain of data from a variety of sources. At the same time, there is a greater demand than ever for data to be audited, and for there to be a clear lineage of its organizational uses. This is increasingly a priority for regulators, with financial regulators now demanding that all machine learning data be stored for seven years for auditing purposes.

This not only makes the data and metadata used in models more complex, but it also makes the interactions between the constituent pieces of data far more complex. This means machine learning engineers need to put the right infrastructure in place to ensure the right data and metadata are accessible, all while making sure they are properly organized.
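
One lightweight way to approach this kind of lineage tracking is to record, for every dataset a training run consumes, where it came from and a content fingerprint of the exact bytes used. The sketch below is a minimal illustration under assumed record fields, not any particular regulatory standard or vendor tool.

```python
# Minimal sketch: logging dataset lineage so an audit can trace what a
# model consumed and when. Record fields are illustrative assumptions.
import hashlib
import json
import time

def fingerprint(path: str) -> str:
    """Content hash of a data file, so later audits can confirm the exact
    bytes a training run used."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def log_lineage(dataset_path: str, source: str, model_version: str,
                registry: str = "lineage_log.jsonl") -> dict:
    record = {
        "dataset": dataset_path,
        "sha256": fingerprint(dataset_path),
        "source_system": source,            # e.g. upstream warehouse or API
        "consumed_by_model": model_version,
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    with open(registry, "a") as f:        # append-only audit trail
        f.write(json.dumps(record) + "\n")
    return record
```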


In 2016, it was estimated that the US alone lost $3.1 trillion to bad data: data that's improperly formatted, duplicated, or incomplete. People and businesses across all sectors lose time and money because of this, but in a job that requires building and running accurate models reliant on input data, these issues can seriously jeopardize projects.

IBM estimates that around 80 percent of a data scientist's time is spent finding, cleaning up, and organizing the data they put into their models. Over time, however, increasingly sophisticated error and anomaly detection programs will likely be used to comb through datasets and screen out information that is incomplete or inaccurate.

This means that, as time goes on and machine learning capabilities continue to develop, we'll see machine learning engineers gain more tools in their belt to clean up the information their programs use, and thus be able to spend more of their time on putting together ML programs themselves.
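
Much of that screening starts out as fairly mundane checks. The following is a rough pandas sketch of the kind of pass described above: dropping duplicates and rows with missing fields, coercing malformed timestamps, and flagging statistical outliers for review. Column names and the outlier threshold are assumptions for illustration.

```python
# Rough sketch of screening out "bad data" before it reaches a model.
import pandas as pd

def screen(df: pd.DataFrame) -> pd.DataFrame:
    rows_before = len(df)

    # Drop exact duplicates and rows missing required fields.
    df = df.drop_duplicates()
    df = df.dropna(subset=["customer_id", "amount", "timestamp"])

    # Coerce malformed timestamps; rows that fail to parse become NaT and are dropped.
    df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")
    df = df.dropna(subset=["timestamp"])

    # Flag (rather than silently drop) statistical outliers for human review.
    z = (df["amount"] - df["amount"].mean()) / df["amount"].std()
    df["outlier"] = z.abs() > 4

    print(f"kept {len(df)} of {rows_before} rows")
    return df
```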

Reproducibility is often defined as the ability to keep a snapshot of the state of a specific machine learning model and to reproduce the same experiment with exactly the same results, regardless of time and location. This involves a great deal of complexity, given that machine learning requires reproducibility of three components: 1) code, 2) artifacts, and 3) data. If any one of these changes, the result will change.

To add to this complexity, it's also necessary to maintain reproducibility of entire pipelines that may consist of two or more of these atomic steps, which introduces an exponential level of complexity. For machine learning, reproducibility is important because it lets engineers and data scientists know that the results of a model can be relied upon when it is deployed live, as they will be the same whether the model is run today or in two years.
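
A bare-bones way to pin down those three components is to fix random seeds and write out a manifest tying the code version, data fingerprint, and artifact fingerprint of a run together. The manifest format below is an illustrative assumption, not a standard; real setups typically lean on dedicated experiment-tracking or data-versioning tools.

```python
# Bare-bones sketch: pinning code, data, and artifacts (plus seeds) so a
# run can be replayed later. Manifest format is an illustrative assumption.
import hashlib
import json
import random
import subprocess

import numpy as np
import torch

def set_seeds(seed: int = 42) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

def snapshot(data_path: str, artifact_path: str, seed: int = 42) -> dict:
    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    manifest = {
        "git_commit": subprocess.check_output(
            ["git", "rev-parse", "HEAD"]).decode().strip(),  # code version
        "data_sha256": sha256(data_path),                    # exact training data
        "artifact_sha256": sha256(artifact_path),            # trained weights etc.
        "seed": seed,
    }
    with open("run_manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest
```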

Designing infrastructure for machine learning that is reproducible is a huge challenge. It will continue to be a thorn in the side of machine learning engineers for many years to come. One thing that may make this easier in coming years is the rise of universally accepted frameworks for machine learning test environments, which will provide a consistent barometer for engineers to measure their efforts against.

It's easy to forget that the lifecycle of a machine learning model only begins when it's deployed to production. Consequently, a machine learning engineer not only needs to do the work of coding, testing, and deploying a model, but they'll also have to develop the right tools to monitor it.

The production environment of a model can often throw up scenarios the machine learning engineer didn't anticipate when they were creating it. Without monitoring and intervention after deployment, it's likely that a model can end up being rendered dysfunctional or produce skewed results by unexpected data. Without accurate monitoring, results can often slowly drift away from what is expected due to input data becoming misaligned with the data a model was trained with, producing less and less effective or logical results.
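
One common way to catch that kind of input drift is to compare the live feature distribution against the training distribution, for example with a two-sample Kolmogorov-Smirnov test as sketched below. The alert threshold and the per-feature, sliding-window setup are assumptions for illustration, not a prescribed monitoring stack.

```python
# Sketch: flagging input drift by comparing live feature values against
# the training distribution with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(train_feature: np.ndarray,
                live_feature: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """Return True if live data looks statistically different from the
    training data, signalling that a human should take a look."""
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < p_threshold

# Typical use: run daily per feature over a sliding window of production
# inputs, and alert the on-call engineer when several features drift at once.
```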

Adversarial attacks on models, often far more sophisticated than tweets and a chatbot, are of increasing concern, and it is clear that monitoring by machine learning engineers is needed to stop a model being rendered counterproductive by unexpected data. As more machine learning models are deployed, and as more economic output becomes dependent upon these models, this challenge is only going to grow in prominence for machine learning engineers going forward.

One of the most exciting things about the role of the machine learning engineer is that it's a job that's still being defined, and still faces so many open problems. That means machine learning engineers get the thrill of working in a constantly changing field that deals with cutting-edge problems.

Challenges such as data quality are problems we may make major progress on in the coming years. Other challenges, such as monitoring, look set to become more pressing in the more immediate future. Given the constant flux of machine learning engineering as an occupation, it's little wonder that curiosity and an innovative mindset are essential qualities for this relatively new profession.

Alex Housley is CEO of Seldon.

See more here:
Machine Learning Engineer: Challenges and Changes Facing the Profession - Dice Insights

Posted in Machine Learning | Comments Off on Machine Learning Engineer: Challenges and Changes Facing the Profession – Dice Insights

How Machine Learning Is Redefining The Healthcare Industry – Small Business Trends

The global healthcare industry is booming. As per recent research, it is expected to cross the $2 trillion mark this year, despite the sluggish economic outlook and global trade tensions. Human beings, in general, are living longer and healthier lives.

There is increased awareness about living organ donation. Robots are being used for gallbladder removals, hip replacements, and kidney transplants. Early diagnosis of skin cancers with minimum human error is a reality. Breast reconstructive surgeries have enabled breast cancer survivors to partake in rebuilding their glands.

All these jobs were unthinkable sixty years ago. Now is an exciting time for the global health care sector as it progresses along its journey for the future.

However, as the worldwide population of 7.7 billion is likely to reach 8.5 billion by 2030, meeting health needs could be a challenge. That is where significant advancements in machine learning (ML) can help identify infection risks, improve the accuracy of diagnostics, and design personalized treatment plans.

Source: Deloitte Insights, 2020 Global Health Care Outlook

In many cases, this technology can even enhance workflow efficiency in hospitals. The possibilities are endless and exciting, which brings us to an essential segment of the article:

Do you understand the concept of the LACE index?

Designed in Ontario in 2004, it identifies patients who are at risk of readmission or death within 30 days of being discharged from the hospital. The calculation is based on four factors: the patient's length of stay in the hospital, acuity of admission, comorbid diseases, and emergency room visits.

The LACE index is widely accepted as a quality of care barometer and is famously based on the theory of machine learning. Using the past health records of the patients, the concept helps to predict their future state of health. It enables medical professionals to allocate resources on time to reduce the mortality rate.
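
The score itself is a simple point scheme over those four factors. The sketch below uses the commonly published point values (length of stay, acuity, Charlson comorbidity index, ED visits); treat it as an illustration to be checked against the original Ontario study rather than a clinically validated implementation.

```python
# Illustrative LACE score using the commonly published point scheme.
# L = length of stay, A = acuity of admission, C = Charlson comorbidity
# index, E = emergency department visits in the prior six months.
def lace_score(length_of_stay_days: int,
               acute_admission: bool,
               charlson_index: int,
               ed_visits_6mo: int) -> int:
    # Length-of-stay points
    if length_of_stay_days < 1:
        l = 0
    elif length_of_stay_days <= 3:
        l = length_of_stay_days            # 1, 2 or 3 points
    elif length_of_stay_days <= 6:
        l = 4
    elif length_of_stay_days <= 13:
        l = 5
    else:
        l = 7

    a = 3 if acute_admission else 0        # emergent/acute admission

    c = charlson_index if charlson_index < 4 else 5   # comorbidity points, capped at 5

    e = min(ed_visits_6mo, 4)              # ED visit points, capped at 4

    return l + a + c + e                   # totals around 10+ are often treated as high risk

# Example: a 5-day acute stay, Charlson index 2, one ED visit -> 4 + 3 + 2 + 1 = 10
```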

This technological advancement has started to lay the foundation for closer collaboration among industry stakeholders, affordable and less invasive surgery options, holistic therapies, and new care delivery models. Here are five examples of current and emerging ML innovations:

From the initial screening of drug compounds to calculating the success rates of a specific medicine based on patients' physiological factors, the Knight Cancer Institute in Oregon and Microsoft's Project Hanover are currently applying this technology to personalize drug combinations to cure blood cancer.

Machine learning has also given birth to new methodologies such as precision medicine and next-generation sequencing that can ensure a drug has the right effect on the patient. For example, today, medical professionals can develop algorithms to understand disease processes and design innovative treatments for ailments like Type 2 diabetes.

Signing up volunteers for clinical trials is not easy. Many filters have to be applied to see who is fit for the study. With machine learning, collecting patient data such as past medical records, psychological behavior, family health history, and more is easy.

In addition, the technology is also used to monitor the volunteers' biological metrics and the possible harm of the clinical trials in the long run. With such compelling data in hand, medical professionals can reduce the trial period, thereby reducing overall costs and increasing experiment effectiveness.

Every human body functions differently. Reactions to a food item, medicine, or season differ. That is why we have allergies. When such is the case, why is customizing treatment options based on the patient's medical data still such an odd thought?

Machine learning helps medical professionals determine the risk for each patient, depending on their symptoms, past medical records, and family history using micro-bio sensors. These minute gadgets monitor patient health and flag abnormalities without bias, thus enabling more sophisticated capabilities of measuring health.

Cisco reports that machine-to-machine connections in global healthcare are growing at a 30% CAGR, the highest of any industry.

Machine learning is mainly used to mine and analyze patient data to find patterns and carry out the diagnosis of many medical conditions, one of them being skin cancer.

Over 5.4 million people in the US are diagnosed with this disease annually. Unfortunately, diagnosis is a visual and time-consuming process. It relies on long clinical screenings, comprising a biopsy, dermoscopy, and histopathological examination.

But machine learning changes all that. Moleanalyzer, an Australia-based AI software application, calculates and compares the size, diameter, and structure of the moles. It enables the user to take pictures at predefined intervals to help differentiate between benign and malignant lesions on the skin.

The analysis lets oncologists confirm their skin cancer diagnosis using evaluation techniques combined with ML, and they can start the treatment faster than usual. Where experts could correctly identify malignant skin tumors only 86.6% of the time, Moleanalyzer successfully detected 95%.
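
Under the hood, this kind of tool is essentially a binary image classifier. The sketch below shows one generic way such a benign-vs-malignant lesion classifier might be built by fine-tuning a pre-trained CNN; it is an illustration only, not Moleanalyzer's actual method or code, and assumes a recent torchvision release.

```python
# Generic sketch of a benign-vs-malignant lesion classifier
# (illustration only, not the product's actual implementation).
import torch
import torch.nn as nn
import torchvision

def build_lesion_classifier() -> nn.Module:
    model = torchvision.models.resnet18(weights="DEFAULT")
    model.fc = nn.Linear(model.fc.in_features, 2)  # classes: benign, malignant
    return model

def predict_malignant_probability(model: nn.Module, image: torch.Tensor) -> float:
    """image: a normalized 3x224x224 tensor of the dermoscopy photo."""
    model.eval()
    with torch.no_grad():
        logits = model(image.unsqueeze(0))
        prob = torch.softmax(logits, dim=1)[0, 1].item()
    return prob  # flag the lesion for a specialist above a chosen threshold
```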

Healthcare providers ideally have to submit reports to the government containing the necessary records of patients treated at their hospitals.

Compliance policies are continually evolving, which is why it is even more critical to check hospital sites to ensure they are compliant and functioning within legal boundaries. With machine learning, it is easy to collect data from different sources using different methods and to format it correctly.

"For data managers, comparing patient data from various clinics to ensure they are compliant could be an overwhelming process. Machine learning helps gather, compare, and maintain that data as per the standards laid down by the government," informs Dr. Nick Oberheiden, Founder and Attorney, Oberheiden P.C.

The healthcare industry is steadily transforming through innovative technologies like AI and ML. The latter will soon get integrated into practice as a diagnostic aid, particularly in primary care. It plays a crucial role in shaping a predictive, personalized, and preventive future, making treating people a breeze. What are your thoughts?

Image: Depositphotos.com

Continue reading here:
How Machine Learning Is Redefining The Healthcare Industry - Small Business Trends

Posted in Machine Learning | Comments Off on How Machine Learning Is Redefining The Healthcare Industry – Small Business Trends

Tackling climate change with machine learning: Covid-19 and the energy transition – pv magazine International

The effect the coronavirus pandemic is having on energy systems and environmental policy in Europe was discussed at a recent machine learning and climate change workshop, along with the help artificial intelligence can offer to those planning electricity access in Africa.

The impact of Covid-19 on the energy system was discussed in an online climate change workshop that also considered how machine learning can help electricity planning in Africa.

This years International Conference on Learning Representations event included a workshop held by the Climate Change AI group of academics and artificial intelligence industry representatives which considered how machine learning can help tackle climate change.

Bjarne Steffen, senior researcher at the energy politics group at ETH Zürich, shared his insights at the workshop on how Covid-19 and the accompanying economic crisis are affecting recently introduced green policies. "The crisis hit at a time when energy policies were experiencing increasing momentum towards climate action, especially in Europe," said Steffen, who added the coronavirus pandemic has cast into doubt the implementation of such progressive policies.

The academic said there was a risk of overreacting to the public health crisis, as far as progress towards climate change goals was concerned.

Lobbying

"Many interest groups from carbon-intensive industries are pushing to remove the emissions trading system and other green policies," said Steffen. "In cases where those policies are having a serious impact on carbon-emitting industries, governments should offer temporary waivers during this temporary crisis, instead of overhauling the regulatory structure."

However, the ETH Zürich researcher said any temptation to impose environmental conditions on bail-outs for carbon-intensive industries should be resisted. "While it is tempting to push a green agenda in the relief packages, tying short-term environmental conditions to bail-outs is impractical, given the uncertainty in how long this crisis will last," he said. "It is better to include provisions that will give more control over future decisions to decarbonize industries, such as the government taking equity shares in companies."

Steffen shared with pv magazine readers an article published in Joule which can be accessed here, and which articulates his arguments about how Covid-19 could affect the energy transition.

Covid-19 in the U.K.

The electricity system in the U.K. is also being affected by Covid-19, according to Jack Kelly, founder of Open Climate Fix, a London-based, not-for-profit greenhouse gas emission reduction research laboratory.

"The crisis has reduced overall electricity use in the U.K.," said Kelly. "Residential use has increased but this has not offset reductions in commercial and industrial loads."

Steve Wallace, a power system manager at British electricity system operator National Grid ESO, recently told U.K. broadcaster the BBC that electricity demand has fallen 15-20% across the U.K. The National Grid ESO blog has stated the fall-off makes managing grid functions such as voltage regulation more challenging.

Open Climate Fix's Kelly noted even events such as a nationally coordinated round of applause for key workers were followed by a dramatic surge in demand, stating: "On April 16, the National Grid saw a nearly 1 GW spike in electricity demand over 10 minutes after everyone finished clapping for healthcare workers and went about the rest of their evenings."
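
Spotting that kind of abrupt swing in metered demand data is a simple time-series exercise. The sketch below flags points where demand rose by more than a chosen amount within a short trailing window; the column layout and the 0.8 GW threshold are assumptions for illustration, not National Grid ESO's actual tooling.

```python
# Sketch: flagging abrupt national-demand swings (like the post-applause
# spike) from timestamped demand data.
import pandas as pd

def find_demand_spikes(demand: pd.Series,
                       window: str = "10min",
                       threshold_gw: float = 0.8) -> pd.Series:
    """demand: GW values indexed by timestamp (DatetimeIndex). Returns the
    points where demand rose by more than the threshold within the window."""
    demand = demand.sort_index()
    rise = demand - demand.rolling(window).min()  # rise over the trailing window
    return rise[rise > threshold_gw]
```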


Climate Change AI workshop panelists also discussed the impact machine learning could have on improving electricity planning in Africa. The Electricity Growth and Use in Developing Economies (e-Guide) initiative, funded by fossil fuel philanthropic organization the Rockefeller Foundation, aims to use data to improve the planning and operation of electricity systems in developing countries.

E-Guide members Nathan Williams, an assistant professor at the Rochester Institute of Technology (RIT) in New York state, and Simone Fobi, a PhD student at Columbia University in NYC, spoke about their work at the Climate Change AI workshop, which closed on Thursday. Williams emphasized the importance of demand prediction, saying: "Uncertainty around current and future electricity consumption leads to inefficient planning. The weak link for energy planning tools is the poor quality of demand data."

Fobi said: "We are trying to use machine learning to make use of lower-quality data and still be able to make strong predictions."

The market maturity of individual solar home systems and PV mini-grids in Africa means more complex electrification plan modeling is required.

Modeling

"When we are doing [electricity] access planning, we are trying to figure out where the demand will be and how much demand will exist so we can propose the right technology," added Fobi. "This makes demand estimation crucial to efficient planning."

Unlike many traditional modeling approaches, machine learning is scalable and transferable. RIT's Williams has been using data from nations such as Kenya, which are more advanced in their electrification efforts, to train machine learning models to make predictions to guide electrification efforts in countries which are not as far down the track.
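
One crude way to express that transfer idea in code is to fit a demand model on the data-rich country and then adapt it with the smaller dataset available for the target country, as sketched below. The feature names are invented for illustration, and warm-started gradient boosting is just one possible adaptation strategy; it is not a description of the e-Guide models themselves.

```python
# Rough sketch of cross-country transfer for demand prediction
# (illustrative assumptions throughout, not the e-Guide approach).
from sklearn.ensemble import GradientBoostingRegressor

FEATURES = ["population_density", "nightlight_intensity",
            "distance_to_grid_km", "median_building_size_m2"]

def fit_source_model(source_df):
    """Train on the data-rich country (e.g. one further along in electrification)."""
    model = GradientBoostingRegressor(n_estimators=300, warm_start=True)
    model.fit(source_df[FEATURES], source_df["monthly_kwh"])
    return model

def adapt_to_target(model, target_df, extra_trees=100):
    """Add boosting stages fitted on the target country's limited data,
    keeping what was learned from the source country."""
    model.n_estimators += extra_trees
    model.fit(target_df[FEATURES], target_df["monthly_kwh"])
    return model
```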

Williams also discussed work being undertaken by e-Guide members at the Colorado School of Mines, which uses nighttime satellite imagery and machine learning to assess the reliability of grid infrastructure in India.

Rural power

Another e-Guide project, led by Jay Taneja at the University of Massachusetts, Amherst and co-funded by the Energy and Economic Growth program administered by Oxford Policy Management, uses satellite imagery to identify productive uses of electricity in rural areas by detecting pollution signals from diesel irrigation pumps.

Though good quality data is often not readily available for Africa, Williams added, it does exist.

"We have spent years developing trusting relationships with utilities," said the RIT academic. "Once our partners realize the value proposition we can offer, they are enthusiastic about sharing their data … We can't do machine learning without high-quality data and this requires that organizations can effectively collect, organize, store and work with data. Data can transform the electricity sector but capacity building is crucial."

By Dustin Zubke

This article was amended on 06/05/20 to indicate the Energy and Economic Growth program is administered by Oxford Policy Management, rather than U.S. university Berkeley, as previously stated.

More here:
Tackling climate change with machine learning: Covid-19 and the energy transition - pv magazine International

Posted in Machine Learning | Comments Off on Tackling climate change with machine learning: Covid-19 and the energy transition – pv magazine International

Machine Learning Engineers Will Not Exist In 10 Years – Machine Learning Times – machine learning & data science news – The Predictive Analytics…

Originally published in Medium, April 28, 2020

The landscape is evolving quickly. Machine Learning will transition to a commonplace part of every Software Engineer's toolkit.

In every field we get specialized roles in the early days, replaced by the commonplace role over time. It seems like this is another case of just that.

Let's unpack.

Machine Learning Engineer as a role is a consequence of the massive hype fueling buzzwords like AI and Data Science in the enterprise. In the early days of Machine Learning, it was a very necessary role. And it commanded a nice little pay bump for many! But Machine Learning Engineer has taken on many different personalities depending on who you ask.

The purists among us say a Machine Learning Engineer is someone who takes models out of the lab and into production. They scale Machine Learning systems, turn reference implementations into production-ready software, and oftentimes cross over into Data Engineering. They're typically strong programmers who also have some fundamental knowledge of the models they work with.

But this sounds a lot like a normal software engineer.

Ask some of the top tech companies what Machine Learning Engineer means to them and you might get 10 different answers from 10 survey participants. This should be unsurprising. This is a relatively young role, and the folks posting these jobs are managers, oftentimes with many decades of experience, who don't have the time (or will) to understand the space.

To continue reading this article, click here.

More:
Machine Learning Engineers Will Not Exist In 10 Years - Machine Learning Times - machine learning & data science news - The Predictive Analytics...

Posted in Machine Learning | Comments Off on Machine Learning Engineers Will Not Exist In 10 Years – Machine Learning Times – machine learning & data science news – The Predictive Analytics…

Udacity partners with AWS to offer scholarships on machine learning for working professionals – Business Insider India

All applicants will be able to join the AWS Machine Learning Foundations Course. While applications are currently open, enrollment for the course begins on May 19.

This course will provide an understanding of software engineering and AWS machine learning concepts, including production-level coding and practice in object-oriented programming. Students will also learn about deep learning techniques and their applications using AWS DeepComposer.

"A major reason behind the increasing uptake of such niche courses among the modern-age learners has to do with the growing relevance of technology across all spheres the world over. In its wake, many high-value job roles are coming up that require a person to possess immense technical proficiency and knowledge in order to assume them. And machine learning is one of the key components of the ongoing AI revolution driving digital transformation worldwide," said Gabriel Dalporto, CEO of Udacity.

The top 325 performers in the foundation course will be awarded a scholarship to join Udacity's Machine Learning Engineer Nanodegree program. In this advanced course, the students will work on ML tools from AWS. This includes real-time projects that are focused on specific machine learning skills.


The Nanodegree program scholarship will begin on August 19.

See also:

Here are five apps you need to prepare for JEE Main and NEET competitive exams

See the article here:
Udacity partners with AWS to offer scholarships on machine learning for working professionals - Business Insider India

Posted in Machine Learning | Comments Off on Udacity partners with AWS to offer scholarships on machine learning for working professionals – Business Insider India