

Category Archives: Machine Learning

The Beatles: Get Back Used High-Tech Machine Learning To Restore The Audio – /Film

"The Beatles: Get Back" is eight hours of carefully curated audio and footage from The Beatles in the studio and performing a rooftop concert in London in 1969. Director Peter Jackson had to dig through 60 hours of vintage film footage and around 150 hours of audio recordings in order to put together his three-part documentary. Once he decided which footage and audio to include, he then had to take the next difficult step: cleaning up and restoring them both to give fans a look at The Beatles like they had never seen them before.

In order to clean up the audio for "Get Back," Jackson employed machine learning to teach computers what different instruments and voices sounded like so they could isolate each track.

Once each track was isolated, sound mixers could then adjust volume levels individually to help with sound quality and clarity. The isolated tracks also make it much easier to remove noise from the audio tracks, like background sounds or the electronic hum of older recording equipment. This ability to fine-tune every aspect of the audio allowed Jackson to make it sound like the Fab Four are hanging out in your living room. When that technology is used for their musical performances, it's all the more impressive, as their rooftop concert feels as close to the real thing as you can possibly get.
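The article doesn't describe the algorithms in detail, but the underlying technique, usually called source separation, typically works by predicting a time-frequency mask that keeps only the parts of a mixed recording belonging to one voice or instrument. Below is a minimal sketch of that idea; the `predict_mask` callable is a hypothetical stand-in for a trained model, and the low-pass placeholder is purely illustrative, not the documentary team's actual tooling.

```python
# Minimal sketch of mask-based source separation (hypothetical model, illustrative only).
import numpy as np
from scipy.signal import stft, istft

def separate_source(mix, sr, predict_mask):
    """Isolate one source (e.g. a vocal) from a mono mix.

    predict_mask: callable mapping a magnitude spectrogram to a 0..1 mask
    of the same shape; in a real system this would be a trained network.
    """
    _, _, spec = stft(mix, fs=sr, nperseg=2048)             # time-frequency representation
    mask = predict_mask(np.abs(spec))                        # which bins belong to the source
    _, isolated = istft(spec * mask, fs=sr, nperseg=2048)    # resynthesize only those bins
    return isolated

def lowpass_mask(mag):
    """Toy stand-in for a trained model: keep only low-frequency energy."""
    mask = np.zeros_like(mag)
    mask[: mag.shape[0] // 8, :] = 1.0
    return mask

if __name__ == "__main__":
    sr = 22050
    mix = np.random.randn(sr * 2)             # two seconds of noise as placeholder audio
    vocal_estimate = separate_source(mix, sr, lowpass_mask)
```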

Check out "The Beatles: Get Back," streaming on Disney+.

Read more:
The Beatles: Get Back Used High-Tech Machine Learning To Restore The Audio - /Film

Posted in Machine Learning | Comments Off on The Beatles: Get Back Used High-Tech Machine Learning To Restore The Audio – /Film

Human Rights Documentation In The Digital Age: Why Machine Learning Isn't A Silver Bullet – Forbes

When the Syrian uprising started nearly 10 years ago, videos taken by citizens of attacks against them, such as chemical and barrel bomb strikes, started appearing on social media. While international human rights investigators couldn't get into the country, people on the ground documented and shared what was happening. Yet soon, videos and pictures of war atrocities were deleted from social media platforms, a pattern that has continued to this day. Ashoka Fellow Hadi al-Khatib, founder of the Syrian Archive and Mnemonic, works to save these audiovisual documents so they are available as evidence for lawyers, human rights investigators, historians, prosecutors, and journalists. In the wake of the Facebook Leaks, which are drawing needed attention to the topic of content moderation and human rights, Ashoka's Konstanze Frischen caught up with Hadi.

Hadi al-Khatib, founder of Mnemonic and the Syrian Archive, warns against an over-reliance on machine learning for online content moderation.

Konstanze Frischen: Hadi, you verify and save images and videos that show potential human rights violations, and ensure that prosecutors and journalists can use them later to investigate crimes against humanity. How and why did you start this work?

Hadi al-Khatib: I come from Hama, a city north of Damascus in Syria, where the first uprising against the Syrian government happened in 1982, and thousands of people died at the hands of the Syrian military. Unfortunately, at the time, there was very little documentation of what happened. Growing up, when my family spoke about these incidents, they would speak very quietly, or avoid the topic when I asked them about it. They would say: be careful, even the walls have ears. In 2011, during the second big uprising against the Syrian government, the situation was quite different. We immediately saw a huge scale of audiovisual documentation on social media - videos and photos captured by people witnessing the peaceful protests first, and then the violence against protesters. People wanted to make sure the crimes that they were witnessing were documented, in contrast to what happened in Hama in 1982. My work is to ensure that this documentation, captured by people who risked their lives, is not lost and remains accessible in the future.

Frischen: With people publishing this on social media on a very large scale, many people might assume, "It's all out there, so why do I need someone else to archive it?"

al-Khatib: Yes, good question. When we work with journalists, photographers, and citizens from around the world, most of them do think of social media as a place where they can safely archive their materials. They think, "We have the archive. It's on social media, Dropbox, or Google Drive." But it's not safe there: once this media is uploaded to social media platforms, we lose control of it. From March 2011 until I founded the Syrian Archive in 2014, footage got deleted on a very large scale, and it still is today, because of social media platforms' content moderation policies. It got worse after 2017, when social media companies like YouTube started to use machine learning to automatically detect content that shows violence.

Frischen: Why do you think the materials get removed from social media platforms?

al-Khatib: Because the machine learning algorithm they have developed doesn't really differentiate between a video that shows extremist or graphic content and a video that documents a human rights violation. They all get detected automatically and removed.

Frischen: Though it's well intended, machine learning can't handle the complexity?

al-Khatib: Exactly. The use of machine learning is very dangerous for human rights documentation, not just in Syria, but around the world. Social media platforms would need to invest more in human intelligence, not just machine intelligence, to make sound decisions.

Frischen: The Syrian Archive, one of the organizations you founded, has archived over 3.5 million records of digital content. How does that work in practice? How do you balance machine learning and manual work?

al-Khatib: The first step is to monitor specific sources, locations, and keywords around current or historical events. Once we discover content, we make sure that we preserve it automatically, as fast as possible. This is always our priority. Each of the 3.5 million records we have collected comes from social media platforms, websites, or apps like Telegram. We archive them all in a way that provides availability, accessibility and authenticity for these records. We use machine learning, with the project VFRAME, to help us discover what we have in the archives that is most relevant for human rights investigations, journalism reporting or legal case building within this large pool of media. Then, we manually verify the location, date, and time. We also verify any objects we can see in the video, and make sure we are able to link it with other pieces of archived media and corroborate it with other types of evidence, to construct a verified incident. We also use blockchain to timestamp the materials, with a third-party company called Enigio. We want to provide long-term, safe accessibility to the documents, and authenticate them in a way that proves we haven't tampered with the material during the archival process.
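To make the preserve-then-verify workflow concrete, here is a minimal sketch of the automated preservation step: fetch a piece of media before it disappears, fingerprint it, and store it with metadata that later manual verification and timestamping can attach to. This is an illustration under stated assumptions, not the Syrian Archive's actual code; `submit_timestamp` is a hypothetical placeholder for a third-party timestamping service, not Enigio's real API.

```python
# Illustrative sketch of automated preservation; not the Syrian Archive's code.
import hashlib
import json
import time
import urllib.request
from pathlib import Path

ARCHIVE = Path("archive")

def submit_timestamp(digest: str) -> None:
    """Hypothetical placeholder for an external (e.g. blockchain-backed) timestamping service."""
    pass

def preserve(url: str, source: str) -> dict:
    """Fetch a piece of media, store it under its content hash, and record metadata."""
    ARCHIVE.mkdir(exist_ok=True)
    data = urllib.request.urlopen(url, timeout=30).read()    # grab it before it is deleted
    digest = hashlib.sha256(data).hexdigest()                 # tamper-evident fingerprint
    (ARCHIVE / digest).write_bytes(data)
    record = {
        "sha256": digest,
        "url": url,
        "source": source,
        "archived_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "verified": False,   # location, date, and time are confirmed later, manually
    }
    (ARCHIVE / f"{digest}.json").write_text(json.dumps(record, indent=2))
    submit_timestamp(digest)
    return record
```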

Frischen: Machine learning is great for analyzing large data sets, but then human judgment and a deep knowledge of history, politics, and the region must be brought to bear?

al-Khatib: Exactly. Knowledge of context, language, and history is vital for verification. This is all a manual process where researchers use certain tools and techniques to verify the location, date, and time of every record, and make sure that it's clustered together into incidents. Those incidents are also clustered into collections to form a bigger-picture understanding of the pattern of violence and the impact it has on people.

Frischen: These findings can in turn be leveraged: You feed the results of your investigations to governments and prosecutors. What has the impact been?

al-Khatib: We realize that any legal accountability is going to take a long time. One of the main legal cases we are working on right now is about the use of chemical weapons in Syria. We focus on two incidents in two locations in Syria, in Eastern Ghouta (2013) and in Khan Sheikhoun (2017), where we saw the biggest uses of chemical weapons (i.e. sarin gas) in recent history. We submitted a legal complaint to the German, French and Swedish prosecutors in collaboration with the Syrian Center for Media and Freedom of Expression, Civil Rights Defenders, and the Open Society Justice Initiative. Part of that submission was media evidence verified and collected by the Syrian Archive. Our investigations into the Syrian chemical supply chain resulted in the conviction of three Belgian firms that violated European Union sanctions, an internal audit of the Belgian customs system, parliamentary inquiries in multiple countries, a change in Swiss export laws to reflect European Union sanctions laws on specific chemicals, and the filing of complaints urging the governments of Germany and Belgium to initiate investigations into additional shipments to Syria.

Frischen: Wow. Let me come back to the automated content removal on social media platforms. When this happens, i.e. when pieces of evidence of atrocities by the government are deleted, does this then open up windows of opportunity for actors like the Syrian government to flood social media with other, positive images, and thus take over newsfeeds?

al-Khatib: Yes, absolutely. Over the last 10 years, we've seen this kind of information propaganda coming from all sides of the conflict in Syria. And our role within this information environment is to counter disinformation by archiving, collecting and verifying visual materials to reconstruct what really happened and to make sure that this reconstruction is based on facts. And we are doing this transparently, so anyone can see our methodology and the tools we are using.

Frischen: How are the big social media companies responding? Do you see them as collaborative or as distant?

al-Khatib: Many civil society organizations from around the world have been engaging with social media companies and asking them to invest more resources into this issue. So far, nothing has changed. The use of machine learning is still happening. A huge amount of content related to human rights documentation is still being removed. But there has absolutely been engagement and collaboration throughout the years, especially since 2017. We worked with YouTube, for example, to reinstate some of the channels that were removed, as well as thousands of videos that were published by credible human rights and media organizations in Syria. But unfortunately, a big part of this documentation is still being removed. The Facebook Leaks reveal the company knew about this problem, but they are continuing to use machine learning, erasing the history and memory of people around the world.

Frischen: How do you attend to the wellbeing of the humans involved in gathering and triaging violent and traumatic content?

al-Khatib: This is a very important question. We need to make sure there is a system of support for all researchers looking at this content: practical assistance from psychologists who understand the challenges and can help mitigate some of them. We are setting up protocols, so the researchers have access to experts. There are also some technical efforts underway. For example, we use machine learning to blur images at the beginning, so researchers are not seeing graphic images directly on their screens. This is something that we want to do more work on.

Frischen: What gives you hope?

al-Khatib: The will of people who are facing the violence firsthand, and the families of victims. Whether in Syria or other countries, they have not yet gotten the accountability they deserve, but regardless, they are asking for it, fighting for it. This is what gives me hope: working together with them, adding value by linking documentation to justice and accountability, and using this process to reconstruct the future of the country again.

Hadi al-Khatib (@Hadi_alkhatib) is the founder of Syrian Archive and its umbrella organization Mnemonic.

This conversation was condensed and edited. Watch the full conversation & browse more insights on Tech & Humanity.

Read the original post:
Human Rights Documentation In The Digital Age: Why Machine Learning Isn't A Silver Bullet - Forbes

Posted in Machine Learning | Comments Off on Human Rights Documentation In The Digital Age: Why Machine Learning Isn't A Silver Bullet – Forbes

Artificial Intelligence in the Food Manufacturing Industry: Machine Conquers Human? – Food Industry Executive

By Lior Akavia, CEO and co-founder of Seebo

Four years ago, Elon Musk famously predicted that artificial intelligence will overtake human intelligence by the year 2025.

"We're headed toward a situation where AI is vastly smarter than humans and I think that time frame is less than five years from now," he told the New York Times.

Musk has also repeatedly warned of the potential dangers of AI, even invoking the Terminator movie franchise by way of illustration.

And yet, the very same Elon Musk recently unveiled the prototype for a distinctly humanoid Tesla Robot, which he hopes will be ready in 2022. Speaking to an audience at Tesla's AI Day in August, Musk quipped that the robot is intended to be friendly, and added that it will be designed to navigate through a world built for humans, alluding to his previous, apparently still-extant concerns.

Of course, Musk's fears about AI aren't shared by everyone. Fellow tech entrepreneur Mark Zuckerberg has distinctly different views on the matter. On the other hand, Musk isn't alone, either; Stephen Hawking once famously warned that AI could ultimately spell the end of the human race.

So what can we take away from this confusing discourse about AI? Is artificial intelligence the savior of humanity? Or are we about to get conquered by an army of drones?

The truth is (probably) a lot less theatrical but arguably no less dramatic.

The misleading thing about these types of high-profile, philosophical debates about AI is that we actually have a long way to go before what Hawking referred to as "full artificial intelligence" is even developed, let alone mass-introduced into the marketplace.

Undeniably, however, the vast potential of AI is as much recognized by experts as it is taken for granted by the general public. Machine learning and other forms of AI are already defining many aspects of our daily lives, from the way we communicate with others to our ability to get to work on time, to how we shop, work, and even acquire knowledge.

In unveiling his Tesla robot, Musk offered a pretty succinct summary of the core benefits of AI in general, asserting that the robot's purpose will be to take over unsafe, repetitive, or boring tasks that humans would rather not do.

That summary is applicable to almost any AI application you can think of: taking over tasks that humans either never really enjoyed doing, or weren't ever that great at in the first place. A classic example is food assembly lines: humans get tired, bored, make mistakes, and have potentially dangerous accidents, all things that robots either don't experience at all, or (in the case of accidents) experience less often, with costs measured in terms of financial losses rather than human lives.

But a far better illustration of this reality is in the world of data. In the days before big data became a buzzword, there was hope that the explosion of information would immediately usher in an era of true enlightenment. Finally, human beings could have all the data they needed at their fingertips to make the optimal decisions every time.

Of course, that's not what happened. Instead of being liberated by big data, we became hostages to it, from the spam clogging our email inboxes to the blur of graphs, charts, and tables that to this day form the core challenge for almost every business.

Then came artificial intelligence, and with it, the key to unlocking the potential of that ocean of data. And herein lies both the immense promise of AI, as well as the fear of Terminators and robot-driven unemployment: AI, particularly in the form of machine learning algorithms, is infinitely better at analyzing data than human beings are.

While philosophical debates between tech heavyweights naturally make the headlines, the current daily reality is far more benign. In practice, AI is mostly being used to empower humans, not sideline them.

Take the food manufacturing example above. Yes, it's true that many food assembly lines are now dominated by machines rather than people, much in the way the Industrial Revolution did away with other menial jobs. But just as the Industrial Revolution paved the way for a more prosperous future, rather than one of mass unemployment (as many feared at that time as well), the Industrial Artificial Intelligence Revolution is enhancing and improving the lives of food manufacturing teams, rather than rendering them redundant.

Using AI, food manufacturing teams are better able to excel at their jobs, which of course benefits them, their employers, and ultimately the consumers, who get a greater quantity and better quality of product.

I've seen this firsthand. My company, Seebo, is part of this Fourth Industrial Revolution. Our proprietary Process-Based Artificial Intelligence is enabling global leaders in the food industry to reduce production losses, such as waste, yield loss, and quality loss, saving them millions each year. At the same time, they're using our technology to become more sustainable: cutting emissions, reducing overall energy consumption, and significantly reducing food waste.

And as with many other applications of machine learning, it's all about the data. In the case of food manufacturers, it means using Seebo's AI to reveal the hidden causes of these food production losses, high emissions, and so on: insights that were previously unavailable due to the complex nature of food manufacturing data. Armed with those insights, process experts and production teams are able to make the right decisions in real time: to know when to adjust the process or maintain certain set points that they may otherwise have neglected or overlooked.
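The article doesn't disclose how Seebo's models work, but the general pattern, learning which process parameters drive a loss metric from historical sensor data, can be sketched with an off-the-shelf model. The sensor names and the simulated data below are invented for illustration only; real systems use plant-specific data and far richer models.

```python
# A rough illustration of the general idea, not Seebo's proprietary method:
# fit a model on (simulated) process sensor data and rank which parameters
# most influence a loss metric such as the percentage of product scrapped.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
data = pd.DataFrame({
    "oven_temp_c":    rng.normal(180, 5, n),     # hypothetical sensor names
    "line_speed_mps": rng.normal(1.2, 0.1, n),
    "dough_moisture": rng.normal(0.35, 0.02, n),
    "ambient_rh":     rng.normal(0.50, 0.05, n),
})
# Simulated loss, dominated by oven temperature deviations and moisture.
data["scrap_pct"] = (
    0.04 * (data["oven_temp_c"] - 180).abs()
    + 8.0 * (data["dough_moisture"] - 0.35).abs()
    + rng.normal(0, 0.05, n)
)

X = data.drop(columns="scrap_pct")
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, data["scrap_pct"])

# Rank candidate drivers of the loss; in practice rankings like these point
# process experts at which set points to investigate first.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))
```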

Of course, as the saying goes, with great power comes great responsibility.

From the wheel to the printing press to nuclear power, technological advancements always have the potential for good or bad. In that sense, AI is no different; where it differs is that its full potential is largely unknown. We have yet to tap into the full potential of this technology, so it often feels like a sort of black magic.

But I do believe that the current trajectory is very much for the good; more to the point, we don't have a choice.

Humanity today faces two simultaneous global challenges. First, a population crisis: the global population is set to swell 25% by the year 2050, while many countries (most notably China) face rapidly aging populations. And second, a rising climate crisis, as countries and industries struggle to cut carbon emissions while maintaining the productivity necessary to sustain those growing and aging populations.

In this struggle, artificial intelligence is perhaps our greatest ally. I've seen up close its potential to empower better decisions, bridging the gap between seemingly opposing goals like reducing emissions while producing more, not less.

Far from conquering us, AI is humanity's best chance of overcoming some of our greatest food manufacturing challenges today.

Lior Akavia is the CEO and co-founder of Seebo, an industrial Artificial Intelligence start-up that helps tier-one manufacturers around the world to predict and prevent quality and yield losses. He is a serial entrepreneur and innovator, focused on the fields of AI, IoT, and manufacturing.

Link:
Artificial Intelligence in the Food Manufacturing Industry: Machine Conquers Human? - Food Industry Executive

Posted in Machine Learning | Comments Off on Artificial Intelligence in the Food Manufacturing Industry: Machine Conquers Human? – Food Industry Executive

The Coolest Data Science And Machine Learning Tool Companies Of The 2021 Big Data 100 – CRN

Learning Curve

As businesses and organizations strive to manage ever-growing volumes of data and, even more important, derive value from that data, they are increasingly turning to data engineering and machine learning tools to improve and even automate their big data processes and workflows.

As part of the 2021 Big Data 100, CRN has compiled a list of data science and machine learning tool companies that solution providers should be aware of. While most of these are not exactly household names, some, including DataRobot, Dataiku and H2O, have been around for a number of years and have achieved significant market presence. Others, including dotData, are more recent startups.

This week CRN is running the Big Data 100 list in slideshows, organized by technology category, with vendors of business analytics software, database systems, data management and integration software, data science and machine learning tools, and big data systems and platforms.

(Some vendors market big data products that span multiple technology categories. They appear in the slideshow for the technology segment in which they are most prominent.)

View original post here:
The Coolest Data Science And Machine Learning Tool Companies Of The 2021 Big Data 100 - CRN

Posted in Machine Learning | Comments Off on The Coolest Data Science And Machine Learning Tool Companies Of The 2021 Big Data 100 – CRN

Can machine learning help save the whales? How PNW researchers use tech tools to monitor orcas – GeekWire

Aerial image of endangered Southern Resident killer whales in K pod. The image was obtained using a remotely piloted octocopter drone that was flown during health research by Dr. John Durban and Dr. Holly Fearnbach. (Vulcan Image)

Being an orca isn't easy. Despite a lack of natural predators, these amazing mammals face many serious threats, most of them brought about by their human neighbors. Understanding the pressures we put on killer whale populations is critical to the environmental policy decisions that will hopefully contribute to their ongoing survival.

Fortunately, marine mammal researchers like Holly Fearnbach of Sealife Response + Rehab + Research (SR3) and John Durban of Oregon State University are working hard to regularly monitor the condition of the Salish Sea's southern resident killer whale (SRKW) population. Identified as J pod, K pod and L pod, these orca communities have migrated through the Salish Sea for millennia. Unfortunately, in recent years their numbers have dwindled to only 75 whales, with one new calf born in 2021. This is the lowest population figure for the SRKW in 30 years.

For more than a decade, Fearnbach and Durban have flown photographic surveys to capture aerial images of the orcas. Starting in 2008, image surveys were performed using manned helicopter flights. Then beginning in 2014, the team transitioned to unmanned drones.

As the remote-controlled drone flies 100 feet or more above the whales, images are captured of each of the pod members, either individually or in groups. Since the drone is also equipped with a laser altimeter, the exact distance is known, making calculations of the whales' dimensions very accurate. The images are then analyzed in what's called a photogrammetric health assessment. This assessment helps determine each whale's physical condition, including any evidence of pregnancy or significant weight loss due to malnourishment.
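The arithmetic behind that accuracy is simple scale conversion: under a pinhole camera model, the ground distance covered by one pixel is the altitude times the sensor's pixel pitch divided by the focal length, and a whale's length is its pixel length times that factor. The sketch below shows the idea; the camera parameters are illustrative assumptions, not the research team's actual rig.

```python
# Back-of-the-envelope photogrammetry sketch; camera parameters are assumed values.
def ground_sampling_distance(altitude_m: float, pixel_pitch_m: float, focal_length_m: float) -> float:
    """Metres of ocean surface covered by one image pixel (simple pinhole model)."""
    return altitude_m * pixel_pitch_m / focal_length_m

def whale_length_m(length_px: float, altitude_m: float,
                   pixel_pitch_m: float = 4.4e-6,   # assumed sensor pixel pitch
                   focal_length_m: float = 0.035) -> float:
    """Convert a whale's length in pixels to metres, given the laser-altimeter altitude."""
    return length_px * ground_sampling_distance(altitude_m, pixel_pitch_m, focal_length_m)

# Example: a whale spanning ~1800 pixels photographed from 35 m (~115 ft).
print(round(whale_length_m(1800, 35.0), 2), "m")   # about 7.92 m
```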

"As a research tool, the drone is very cost effective and it allows us to do our research very noninvasively," Fearnbach said. "When we do detect health declines in individuals, we're able to provide management agencies with these quantitative health metrics."

But while the image collection stage is relatively inexpensive, processing the data has been costly and time-consuming. Each flight can capture 2,000 images, with tens of thousands of images captured for each survey. Following the drone work, it typically takes about six months to manually complete the analysis on each season's batch of images.

Obviously, half a year is a very long time if you're starving or pregnant, which is one reason why SR3's new partnership with Vulcan is so important. Working together, the organizations developed a new approach to process the data more rapidly. The Aquatic Mammal Photogrammetry Tool (AMPT) uses machine learning and an end-user tool to accelerate the laborious process, dramatically shortening the time needed to analyze, identify and categorize all of the images.

Applying machine learning techniques to the problem has already yielded huge results, reducing a six-month process to just six weeks, with room for further improvement. Machine learning is a branch of computing that can improve its performance through experience and use of data. The faster turnaround time will make it possible to more quickly identify whales of concern and provide health metrics to management groups to allow for adaptive decision making, according to Vulcan.

"We're trying to make and leave the world a better place, primarily through ocean health and conservation," said Sam McKennoch, machine learning team manager at Vulcan. "We got connected with SR3 and realized this was a great use case, where they have a large amount of existing data and needed help automating their workflows."

AMPT is based on four different machine learning models. First, the orca detector identifies those images that have orcas in them and places a box around each whale. The next ML model fully outlines the orca's body, a process known in the machine learning field as semantic segmentation. After that comes the landmark detector, which locates the rostrum (or snout) of the whale, the dorsal fins, blowhole, shape of the eye patches, fluke notch and so forth. This allows the software to measure and calculate the shape and proportions of various parts of the body.

Of particular interest is whether the whale's facial fat deposits are so low they result in indentations of the head that marine biologists refer to as "peanut head." This only appears when the orca has lost a significant amount of body fat and is in danger of starvation.

Finally, the fourth machine learning model is the identifier. The shape of the gray saddle patch behind the whale's dorsal fin is as unique as a fingerprint, allowing each of the individuals in the pod to be identified.
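Put together, the four stages form a per-image pipeline: detect each whale, outline its body, locate landmarks, and match the saddle patch to a known individual. The sketch below only illustrates that structure; the model interfaces and the `looks_like_peanut_head` placeholder are hypothetical, since AMPT's actual code isn't described in the article.

```python
# Structural sketch of the four-stage pipeline; every model here is a
# hypothetical placeholder, not AMPT's real implementation.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

Box = Tuple[int, int, int, int]            # x, y, width, height
Landmarks = Dict[str, Tuple[int, int]]     # landmark name -> pixel coordinate

@dataclass
class WhaleRecord:
    whale_id: str        # e.g. "J35", from the saddle-patch match
    box: Box
    landmarks: Landmarks
    peanut_head: bool    # severe loss of facial fat deposits

def looks_like_peanut_head(marks: Landmarks) -> bool:
    """Placeholder for the body-condition check based on head indentations."""
    return False

def analyse_frame(
    image,
    detect: Callable[[object], List[Box]],           # 1. orca detector (bounding boxes)
    segment: Callable[[object, Box], object],        # 2. semantic segmentation of the body
    locate: Callable[[object, object], Landmarks],   # 3. landmark detector
    identify: Callable[[object, Landmarks], str],    # 4. saddle-patch identifier
) -> List[WhaleRecord]:
    """Run all four stages on one aerial frame and return one record per whale."""
    records = []
    for box in detect(image):
        mask = segment(image, box)
        marks = locate(image, mask)
        records.append(WhaleRecord(
            whale_id=identify(image, marks),
            box=box,
            landmarks=marks,
            peanut_head=looks_like_peanut_head(marks),
        ))
    return records
```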

There are a lot of different kinds of information needed for this kind of automation. Fortunately, Vulcan has been able to leverage some of SR3's prior manual work to bootstrap their machine learning models.

"We really wanted to understand their pain points and how we could provide them the tools they needed, rather than the tools we might want to give them," McKennoch said.

As successful as AMPT has been, there's a lot of knowledge and information that has yet to be incorporated into its machine learning models. As a result, there's still the need to have users in the loop in a semi-supervised way for some of the ML processing. The interface speeds up user input and standardizes measurements made by different users.

McKennoch believes there will be gains with each batch they process for several cycles to come. Because of this, they hope to continue to improve performance in terms of accuracy, workflow and compute time to the point that the entire process eventually takes days, instead of weeks or months.

This is very important because AMPT will provide information that guides policy decisions at many levels. Human impact on the orcas' environment is not diminishing and, if anything, is increasing. Overfishing is reducing food sources, particularly chinook salmon, the orcas' preferred meal. Commercial shipping and recreational boats continue to cause injury, and their excessive noise interferes with the orcas' ability to hunt salmon. Toxic chemicals from stormwater runoff and other pollution damage the marine mammals' health. Ongoing monitoring of each individual whale will be critical to maintaining their wellbeing and the health of the local marine ecosystem.

Vulcan plans to open-source AMPT, giving it a life of its own in the marine mammal research community. McKennoch said they hope to extend the tool so it can be used for other killer whale populations, different large whales, and in time, possibly smaller dolphins and harbor seals.

Read more:
Can machine learning help save the whales? How PNW researchers use tech tools to monitor orcas - GeekWire

Posted in Machine Learning | Comments Off on Can machine learning help save the whales? How PNW researchers use tech tools to monitor orcas – GeekWire

Apple will focus on machine learning, AI jobs in new NC campus – VentureBeat


(Reuters) Apple on Monday said it will establish a new campus in North Carolina that will house up to 3,000 employees, expand its operations in several other U.S. states and increase its spending targets with U.S. suppliers.

Apple said it plans to spend $1 billion as it builds a new campus and engineering hub in the Research Triangle area of North Carolina, with most of the jobs expected to focus on machine learning, artificial intelligence, software engineering and other technology fields. It joins a $1 billion Austin, Texas campus announced in 2019.

North Carolina's Economic Investment Committee on Monday approved a job-development grant that could provide Apple as much as $845.8 million in tax reimbursements over 39 years if Apple hits job and growth targets. State officials said the 3,000 jobs are expected to create $1.97 billion in new tax revenues to the state over the grant period.

The iPhone maker said it would also establish a $100 million fund to support schools in the Raleigh-Durham area of North Carolina and throughout the state, as well as contribute $110 million to help build infrastructure such as broadband internet, roads, bridges and public schools in 80 North Carolina counties.

"As a North Carolina native, I'm thrilled Apple is expanding and creating new long-term job opportunities in the community I grew up in," Jeff Williams, Apple's chief operating officer, said in a statement.

"We're proud that this new investment will also be supporting education and critical infrastructure projects across the state."

Apple also said it expanded hiring targets at other U.S. locations to hit a goal of 20,000 additional jobs by 2026, setting new goals for facilities in Colorado, Massachusetts and Washington state.

In Apple's home state of California, the company said it will aim to hire 5,000 people in San Diego and 3,000 people in Culver City in the Los Angeles area.

Apple also increased a U.S. spending target to $430 billion by 2026, up from a five-year goal of $350 billion that Apple set in 2018 and said it was on track to exceed.

The target includes Apple's U.S. data centers, capital expenditures and spending to create original television content in 20 states. It also includes spending with Apple's U.S.-headquartered suppliers, though Apple has not said whether it applies only to goods made in those suppliers' U.S. facilities.

Go here to see the original:
Apple will focus on machine learning, AI jobs in new NC campus - VentureBeat

Posted in Machine Learning | Comments Off on Apple will focus on machine learning, AI jobs in new NC campus – VentureBeat