


Category Archives: Machine Learning

Nvidia's DLSS 2.0 aims to prove the technology is essential – VentureBeat

Deep Learning Super Sampling (DLSS) is one of the marquee features of Nvidia's RTX video cards, but it's also one people tend to overlook or outright dismiss. The reason is that many people equate the technology with something like a sharpening filter, which can sometimes reduce the jagged look of lower-resolution images. But DLSS uses a completely different method with much more potential for improving visual quality, and Nvidia is ready to prove that with DLSS 2.0.

Nvidia built the second-generation DLSS to address all of the concerns with the technology. It looks better, gives players much more control, and should support a lot more games. But at its core, DLSS 2.0 is still about using machine learning to intelligently upscale a game to a higher resolution. The idea is to give you a game that, for example, looks like it is running at 4K while actually rendering at 1080p or 1440p. This drastically improves performance. And, in certain games, it can even produce frames that contain more detail than native rendering.

For DLSS, Nvidia inputs a game into a training algorithm to determine what the visuals are supposed to look like at the sharpest possible fidelity. And this is one of the areas where DLSS 2.0 is a significant leap forward. Nvidia originally needed a bespoke training model for every game. DLSS 2.0, however, uses the same neural network for every game. This means Nvidia can add DLSS support to more games at a more rapid pace.

Using that deep-learning data, DLSS can then use the Tensor cores on Nvidia's RTX cards to work out what a 1080p frame should look like at 4K. This method is far more powerful than sharpening because it rebuilds the image from data that isn't necessarily present in each frame.

MechWarrior 5: Mercenaries and Control are the first two games to support DLSS 2.0, and they will get the benefit of the more efficient AI network. This version of the tech is twice as fast on the Tensor cores already available in RTX cards, from the RTX 2060 up to the RTX 2080 Ti.

Nvidia has also added temporal feedback to its DLSS system. This enables the super-sampling method to get information about how objects and environments change over time. DLSS 2.0 can then use that temporal feedback to improve the sharpness and stability from one frame to the next.
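To make the idea concrete, here is a minimal, illustrative sketch of ML upscaling with temporal feedback. It is not Nvidia's DLSS network; the tiny PyTorch model, layer sizes, and frame shapes are placeholders chosen only to show how a current low-resolution frame and the previous upscaled output can feed a super-resolution model:

```python
# Illustrative sketch only -- not Nvidia's DLSS network. It shows the general shape of
# ML upscaling with temporal feedback: the model sees the current low-res frame plus
# its own previous high-res output.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTemporalUpscaler(nn.Module):
    """Upscales a frame by 2x per axis, using the previous output as a hint."""
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        # 3 channels for the current frame + 3 for the resampled previous output
        self.body = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)  # rearranges channels into a larger image

    def forward(self, low_res, prev_high_res):
        # Downsample the previous high-res output so it aligns with the low-res grid.
        prev_hint = F.interpolate(prev_high_res, size=low_res.shape[-2:],
                                  mode="bilinear", align_corners=False)
        x = torch.cat([low_res, prev_hint], dim=1)
        return self.shuffle(self.body(x))

model = ToyTemporalUpscaler(scale=2)
low_res = torch.rand(1, 3, 540, 960)   # stand-in for a low-resolution render
prev = torch.rand(1, 3, 1080, 1920)    # previous upscaled frame (temporal feedback)
high_res = model(low_res, prev)        # -> (1, 3, 1080, 1920)
```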

But the advantages go beyond improved processing. DLSS 2.0 also turns over more control to the player. One of the disadvantages of DLSS in many games was that it was a binary choice: it was either on or off, and developers got to decide how DLSS behaved.

DLSS 2.0 flips that by giving three presets: Quality, Balanced, and Performance. In Performance mode, DLSS 2.0 can take a 1080p frame and upscale it all the way up to 2160p (4K). Quality mode, meanwhile, may upscale 1440p to 2160p.

But you don't necessarily need a 4K display to get the advantages of DLSS 2.0. You can use the tech on a 1080p or 1440p display, and it will often provide better results than native rendering.

Again, this is possible because DLSS 2.0 is working from more data than a native 1080p frame. And all of this is going to result in higher frame rates and playable games even when using ray tracing.

DLSS 2.0 is rolling out soon as part of a driver update for RTX cards.

Read the rest here:
Nvidia's DLSS 2.0 aims to prove the technology is essential - VentureBeat


With Launch of COVID-19 Data Hub, The White House Issues A ‘Call To Action’ For AI Researchers – Machine Learning Times – machine learning & data…

Originally published in TechCrunch, March 16, 2020

In a briefing on Monday, research leaders across tech, academia and the government joined the White House to announce an open data set full of scientific literature on the novel coronavirus. The COVID-19 Open Research Dataset, known as CORD-19, will also add relevant new research moving forward, compiling it into one centralized hub. The new data set is machine readable, making it easy to parse for machine learning purposes, a key advantage according to researchers involved in the ambitious project.

In a press conference, U.S. CTO Michael Kratsios called the new data set "the most extensive collection of machine readable coronavirus literature to date." Kratsios characterized the project as a "call to action" for the AI community, which can employ machine learning techniques to surface unique insights in the body of data. To provide guidance for researchers combing through the data, the National Academies of Sciences, Engineering, and Medicine collaborated with the World Health Organization to come up with high-priority questions about the coronavirus related to genetics, incubation, treatment, symptoms, and prevention.

The partnership, announced today by the White House Office of Science and Technology Policy, brings together the Chan Zuckerberg Initiative, Microsoft Research, the Allen Institute for Artificial Intelligence, the National Institutes of Health's National Library of Medicine, Georgetown University's Center for Security and Emerging Technology, Cold Spring Harbor Laboratory and the Kaggle AI platform, owned by Google.

The database brings together nearly 30,000 scientific articles about the virus known as SARS-CoV-2, as well as related viruses in the broader coronavirus group. Around half of those articles make the full text available. Critically, the database will include pre-publication research from resources like medRxiv and bioRxiv, open access archives for preprint health sciences and biology research.
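As a rough illustration of what "machine readable" buys researchers, the sketch below loads the dataset's metadata file and ranks abstracts against one of the priority questions using TF-IDF. The file name and column names ("title", "abstract") are assumptions about the metadata file distributed with the hub, so adjust them to the actual release:

```python
# Minimal sketch: load CORD-19 metadata and rank abstracts against a priority question.
# Column names ("title", "abstract") are assumptions about the dataset's metadata file.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

df = pd.read_csv("metadata.csv", usecols=["title", "abstract"]).dropna(subset=["abstract"])

vectorizer = TfidfVectorizer(stop_words="english", max_features=50_000)
doc_vectors = vectorizer.fit_transform(df["abstract"])

query = "incubation period of the novel coronavirus"
query_vector = vectorizer.transform([query])

scores = cosine_similarity(query_vector, doc_vectors).ravel()
top = scores.argsort()[::-1][:10]  # ten most relevant abstracts
print(df.iloc[top]["title"].to_string(index=False))
```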


Go here to read the rest:
With Launch of COVID-19 Data Hub, The White House Issues A 'Call To Action' For AI Researchers - Machine Learning Times - machine learning & data...


University Students Are Learning To Collaborate on AI Projects – Forbes

Penn State's Nittany AI Challenge is teaching students the true meaning of collaboration in the age of Artificial Intelligence.

[Image: The LionPlanner team at the Nittany AI Challenge]

This year, artificial intelligence is the buzzword. On university campuses, students who just graduated high school are checking out the latest computer science course offerings to see if they can take classes in machine learning. The truth about the age of artificial intelligence has caught many university administrators' attention: in the age of AI, everyone, no matter their job, skill set, or major, will at some point encounter AI in their work and their life. Penn State saw the benefits of working on AI projects early, specifically when it comes to teamwork and collaboration. Since 2017, its successful Nittany AI Challenge has helped teach students each year what it means to collaborate in the age of artificial intelligence.

"Every university has challenges. Students bring a unique perspective and understanding of these challenges. The Nittany AI Challenge was created to provide a framework and support structure to enable students to form teams and collaborate on ideas that could address a problem or opportunity, using AI technology as the enabler. The Nittany AI Challenge is our innovation engine, ultimately putting students on stage to demonstrate how AI and machine learning can be leveraged to have a positive impact on the university."

The Nittany AI Challenge runs for 8 months each year and has multiple phases, such as the idea phase, the prototype phase, and the MVP phase. In the end, there's a pitch competition among the top 5 to 10 teams, which compete for a pool of $25,000. The challenge incentivizes students to keep going by awarding the best teams at each phase another combined total of $25,000 over the 8 months of competition. By the time pitching comes around, the top 5 to 10 teams have not only figured out how to work together as a team, but have also experienced what it means to receive funding.

This year, the Nittany AI Challenge has expanded from asking students to solve the university's problems using AI to broader categories based on the theme of AI for Good. Students are competing in additional categories such as humanitarianism, healthcare, and sustainability/climate change.

In the first two years, students formed teams among friends within their own circles. As the competition has matured, there's now an online system that allows students to sign up for teams.

Students often form teams with people from different backgrounds and different majors based on their shared interest in a project. Christie Warren, the app designer on the LionPlanner team, helped her team create a 4-year degree planning tool that won the 2018 competition. She credits the competition with giving her a clear path to a career in app design and teaching her how to collaborate with developers.

"For me, the biggest learning curve was learning to work alongside developers: when to start going into the high-fidelity designs, when to wait for people to figure out the features that need to be developed, and so on. Just looking at my designs, being really open to improvements, and going through iterations of the design with the team helped me overcome the learning curve."

Early on, technology companies such as Microsoft, Google Cloud, IBM Watson, and Amazon Web Services recognized the value of an on-campus AI competition such as the Nittany AI Challenge for providing teamwork education to students before they embark on internships with technology companies. They've been sponsoring the competition since its inception.

"Both the students and us from Microsoft benefit from the time working together, in that we learn about each other's culture, needs and aspirations. Challenges like the Nittany AI Challenge highlight that studying in Higher Education should be a mix of learning and fun. If we can help the students learn and enjoy the experience then we also help them foster a positive outlook about their future of work."

While having fun, some students, like Michael D. Roos, project manager and backend developer on the LionPlanner team, have seen synergy between their internships and their projects for the Nittany AI competition. He credits the competition with giving him a pathway to success beyond simply a college education. He's a lot more confident stepping out into the real world, whether that means working for a startup or a large technology company, because of the experience he gained.

"I was doing my internship with Microsoft during a part of the competition. Some of the technology skills I learned at my internship I could then apply to my project for the competition. Also, having the cumulative experience of working on the project for the Nittany AI competition before going into my internship helped me with my internship. Even though I was interning at Microsoft, my team had similar startup vibes as the competition, and my role on the team was similar to my role on the project. I felt I had a head start in that role because of my experience in the competition."

One of the biggest myths the Nittany AI Challenge has helped to debunk is that AI implementations require only the skills of technologists. While computer science students with a keen interest in machine learning and AI are central to every project inside the Nittany AI Challenge, it's often the visionary project managers, creative designers, and students majoring in other disciplines such as healthcare, biological sciences, and business who end up making the most impactful contributions to the team.

"The AI Alliance makes the challenge really accessible. For people like me who don't know AI, we can learn AI along the way."

The LionPlanner team that won the competition in 2018 attributes its success mainly to the outstanding design that won over the judges. Christie, the app designer on the team, credits her success to the way the team collaborated, which enabled her to communicate with developers effectively.

[Image: The Nyansapo team]

Every member of the Nyansapo team, which is trying to bring English education to remote parts of Kenya via NLP-based learning software, attributes its success to the energy and motivation behind the vision of the project. Because everyone feels strongly about the vision, even though it is one of the biggest teams in the competition, everyone is pulling together and collaborating.

"I really like to learn by doing. Everybody on the team joined, not just because they had something to offer, but because the vision was exciting. We are all behind this problem of education inequality in Kenya. We all want to get involved to solve this problem. We are this excited to want to go the extra step."

Not only does the Nittany AI challenge teach students the art of interdisciplinary collaboration, but it also teaches students time management, stress management, and how to overcome difficulties. During the competition, students are often juggling difficult coursework, internships, and other extracurricular activities. They often feel stressed and overwhelmed. This can pose tremendous challenges for team communication. But, as many students pointed out to me, these challenges are opportunities to learn how to work together.

"There was a difficult moment yesterday in between my classes, where I had to schedule a meeting with Edward to discuss the app interface later during the day. At times, everything can feel a bit chaotic. But in the back of my head, when I think about the vision of our project, how much I'm learning on the project, and how I'm working with all my friends, these are the things that keep me going even through hard times."

One of the projects from the Nittany AI Challenge that the university is integrating into its systems is the LionPlanner tool. It uses AI algorithms to help students match their profiles with clubs and extracurricular activities they might be interested in. It also helps students plan their courses to customize their degree, allowing them to finish on time while keeping the cost of their degree as low as possible.
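As a purely hypothetical sketch of the matching idea described above (not LionPlanner's actual code), student profiles and club descriptions could be compared as TF-IDF vectors and ranked by similarity; the club names and text below are invented for illustration:

```python
# Hypothetical sketch of profile-to-club matching -- not LionPlanner's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

clubs = {
    "Nittany AI Student Society": "machine learning projects, hackathons, data science talks",
    "App Development Club": "mobile apps, UI design, prototyping with developers",
    "Robotics Club": "embedded systems, sensors, autonomous vehicles",
}

student_profile = "interested in UI design and building apps with a team"

vectorizer = TfidfVectorizer(stop_words="english")
club_vectors = vectorizer.fit_transform(clubs.values())
profile_vector = vectorizer.transform([student_profile])

scores = cosine_similarity(profile_vector, club_vectors).ravel()
ranked = sorted(zip(clubs.keys(), scores), key=lambda pair: pair[1], reverse=True)
for name, score in ranked:
    print(f"{score:.2f}  {name}")
```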

The students who worked on the project are now working to create a Prospective Student Planning Tool that can integrate into the University Admissions Office systems to be used by transfer students.

Currently, in the U.S., there's a skills gap of almost 1.5 million high-tech jobs. Companies are having a hard time hiring people who have the skills to do innovative work. We now have coding camps, apprenticeships, and remote coding platforms.

Why not also have university-sponsored AI challenges where students can demonstrate their potential and abilities to collaborate?

The Nittany AI Challenge from Penn State presents a unique solution to a problem many employers are trying to solve in the age of innovation. By sitting in the audience as judges, companies can follow the teams' progress and watch students shine in their respective areas. Students are not pitching their skills; they are pitching their work products. They are showing what they can do in real time over 8 months.

This could be a new way for companies to recruit. We have NFL drafts. Why not have drafts for star players on these AI teams that work especially well with others?

This year, Penn State introduced the Nittany AI Associates program where students can continue their work from the Nittany AI Challenge so that they can develop their ideas further.

So while the challenge is the "Innovation Engine", the Nittany AI Associates program provides students the opportunity to work on managed projects with an actual client, funding to reduce their debt (paid internships), and a low-cost, low-risk avenue for the university (and other clients) to innovate, while providing AI knowledge transfer to client staff (the student becomes the teacher).

In the age of AI, education is becoming more multidisciplinary. When higher education institutions evolve the way they teach their students to enable both innovation and collaboration, the potential they unleash in their graduates can have an exponential effect on their careers and on the companies that hire them. Creating competitions and collaborative work projects such as the Nittany AI Challenge within the university, ones that foster win-win thinking, might just be the key to the kind of innovation we need in higher education to keep up in the age of AI.

Original post:
University Students Are Learning To Collaborate on AI Projects - Forbes


Novi Releases v2.0 of Prediction Engine, Adding Critical Economics to Its Machine Learning Outputs – Benzinga

AUSTIN, Texas, March 23, 2020 /PRNewswire-PRWeb/ -- Novi Labs ("Novi") today announced the release of Novi Prediction Engine version 2.0. It provides critical economic data to E&P workflows such as well planning and acquisitions & divestitures. Novi customers can now run a wide range of large-scale scenarios in minutes and get immediate feedback on the economic feasibility of each plan. As the industry faces price headwinds, the ability to quickly and easily evaluate hundreds of scenarios allows operators to allocate capital efficiently.

In addition to the economic outputs, Novi Prediction Engine 2.0 also includes new features targeting enhanced usability and increased efficiency. Novi is now publishing confidence intervals as a standard output for every prediction. This allows customers to understand how confident the model is in each prediction it makes, which is a critical decision-making criterion. A video demonstration of Novi Prediction Engine version 2.0 is available at https://novilabs.com/prediction-engine-v2/.
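Novi does not describe how its confidence intervals are computed, but one common way to attach intervals to predictions is quantile regression with gradient boosting. The sketch below does this on synthetic data; the feature names and the P10/P50/P90 framing are illustrative assumptions, not Novi's actual inputs or method:

```python
# One common way to produce prediction intervals: quantile gradient boosting.
# Synthetic "well" data only; not Novi's model or features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))  # e.g., lateral length, proppant, spacing (scaled)
y = 100 * X[:, 0] + 50 * X[:, 1] + rng.normal(scale=10, size=500)  # synthetic production

lower = GradientBoostingRegressor(loss="quantile", alpha=0.10).fit(X, y)
median = GradientBoostingRegressor(loss="quantile", alpha=0.50).fit(X, y)
upper = GradientBoostingRegressor(loss="quantile", alpha=0.90).fit(X, y)

X_new = rng.uniform(size=(1, 3))
print("P10-P90 band:", lower.predict(X_new)[0], "to", upper.predict(X_new)[0])
print("P50 estimate:", median.predict(X_new)[0])
```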

"With the integration of economic outputs and confidence intervals into Novi Prediction Engine, customers have increased leverage, transparency and certainty in what the Novi models are providing in support of their business decisions. This form of rapid scenario driven testing that is unlocked by the Novi platform is vital in today's uncertain market," said Scott Sherwood, Novi's CEO.

About Novi Labs
Novi Labs, Inc. ("Novi") is the leading developer of artificial-intelligence-driven business applications that help the oil & gas industry optimize the economic value of drilling programs and acquisition & divestiture decisions. Leveraging cutting-edge data science, Novi delivers intuitive analytics that simplify complex decisions with the actionable data and insights needed to optimize capital allocation. Novi was founded in 2014 and is headquartered in Austin, TX. For more information, please visit http://www.novilabs.com.

SOURCE Novi Labs

See the rest here:
Novi Releases v2.0 of Prediction Engine, Adding Critical Economics to Its Machine Learning Outputs - Benzinga


Artificial intelligence for fraud detection is bound to save billions – ZME Science

Fraud mitigation is one of the most sought-after artificial intelligence (AI) services because it can provide an immediate return on investment. Already, many companies are seeing substantial returns thanks to AI and machine learning (ML) systems that detect and prevent fraud in real time.

According to a new report, Highmark Inc.'s Financial Investigations and Provider Review (FIPR) department generated $260 million in savings that would have otherwise been lost to fraud, waste, and abuse in 2019. In the last five years, the company saved $850 million.

"We know the overwhelming majority of providers do the right thing. But we also know year after year millions of health care dollars are lost to fraud, waste and abuse," said Melissa Anderson, executive vice president and chief audit and compliance officer, Highmark Health. "By using technology and working with other Blue Plans and law enforcement, we have continually evolved our processes and are proud to be among the best nationally."

FIPR detects fraud across its clients' services with the help of an internal team made up of investigators, accountants, and programmers, as well as seasoned professionals with an eye for unusual activity, such as registered nurses and former law enforcement agents. Human audits performed to detect unusual claims and assess the appropriateness of provider payments are used as training data for AI systems, which can adapt and react more rapidly to suspicious changes in consumer behavior.

As fraudulent actors have become increasingly aggressive and cunning with their tactics, organizations are looking to AI to mitigate rising threats.

"We know it is much easier to stop these bad actors before the money goes out the door than to pay and have to chase them," said Kurt Spear, vice president of financial investigations at Highmark Inc.

Elsewhere, Teradata, an AI firm that specializes in selling fraud detection solutions to banks, claims in a case study that it helped Danske Bank reduce its false positives by 60% and increase real fraud detection by 50%.

Other service operators are looking to AI fraud detection with a keen eye, especially in the health care sector. A recent survey performed by Optum found that 43% of health industry leaders said they strongly agree that AI will become an integral part of detecting telehealth fraud, waste, or abuse in reimbursement.

In fact, AI spending is growing tremendously, with total operating spending set to reach $15 billion by 2024, the most sought-after solutions being network optimization and fraud mitigation. According to the Association of Certified Fraud Examiners' (ACFE) inaugural Anti-Fraud Technology Benchmarking Report, the amount organizations spend on AI and machine learning to reduce online fraud is expected to triple by 2021.

Mitigating fraud in healthcare would be a boon for an industry that is plagued with many structural inefficiencies.

The United States spends about $3.5 trillion on healthcare-related services every year. This staggering sum corresponds to about 18% of the country's GDP and is more than twice the average among developed countries. However, despite this tremendous spending, healthcare service quality is lacking. According to a now-famous 2017 study, the U.S. has fewer hospital beds and doctors per capita than any other developed country.

A 2019 study found that the country's healthcare system is incredibly inefficient, with roughly 25% of all its spending essentially going to waste: that's $760 billion annually in the best-case scenario, and up to $935 billion annually.

Most of the money is wasted on unnecessary administrative complexity, including billing and coding waste; this alone is responsible for $265.6 billion annually. Drug pricing is another major source of waste, accounting for around $240 billion. Finally, over-treatment and failures of care delivery incur another $300 billion in wasted costs.

And even these astronomical costs may be underestimated. According to management firm Numerof and Associates, the 25% waste estimate might be conservative. Instead, the firm believes that as much as 40% of the country's healthcare spending is wasted, mostly due to administrative complexity. The firm adds that fraud and abuse account for roughly 8% of waste in healthcare.
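A back-of-the-envelope calculation using the article's own figures (total spending, the two waste estimates, and the roughly 8% of waste attributed to fraud and abuse) shows why the headline speaks of billions:

```python
# Rough scale of the fraud problem, using the article's own numbers.
total_spending = 3.5e12  # ~$3.5 trillion in annual U.S. healthcare spending
waste_estimates = {"25% (conservative)": 0.25, "40% (Numerof)": 0.40}
fraud_share_of_waste = 0.08  # ~8% of waste attributed to fraud and abuse

for label, waste_rate in waste_estimates.items():
    waste = total_spending * waste_rate
    fraud = waste * fraud_share_of_waste
    print(f"{label}: waste ~ ${waste/1e9:.0f}B/yr, implied fraud/abuse ~ ${fraud/1e9:.0f}B/yr")
```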

Most cases of fraud in the healthcare sector are committed by organized crime groups and by a small fraction of healthcare providers who are dishonest.

According to the National Healthcare Anti-Fraud Association, the most common types of healthcare fraud in the United States include billing for services that were never rendered, upcoding of services, and falsifying a patient's diagnosis to justify unnecessary procedures.

Traditionally, the most prevalent method of fraud management has been human-generated rule sets. To this day, this is the most common practice, but thanks to a quantum leap in computing and big data, AI solutions built on machine learning algorithms are becoming increasingly appealing and, most importantly, practical.

But what is machine learning, anyway? Machine learning refers to algorithms that are designed to learn like humans do and to continuously tweak this learning process over time without human supervision. The algorithms' output accuracy can be improved continuously by feeding them data and information in the form of observations and real-world interactions.

In other words, machine learning is the science of getting computers to act without being explicitly programmed.

There are all sorts of machine learning algorithms, depending on the requirements of each situation and industry, and hundreds of new ones are published every day. They're typically grouped by learning style, such as supervised, unsupervised, semi-supervised, or reinforcement learning, and by the type of problem they solve, such as classification, regression, clustering, or anomaly detection.

In a healthcare fraud analytics context, machine learning eliminates the use of preprogrammed rule sets, even those of phenomenal complexity.

Machine learning enables companies to efficiently determine what transactions or set of behaviors are most likely to be fraudulent, while reducing false positives.

In an industry where there can be billions of different transactions on a daily basis, AI-based analytics can be an amazing fit thanks to their ability to automatically discover patterns across large volumes of data.

The process itself can be complex, since the algorithms have to interpret patterns in the data and apply data science in real time in order to distinguish between normal and abnormal behavior.

This can be a problem since an improper understanding of how AI works and fraud-specific data science techniques can lead you to develop algorithms that essentially learn to do the wrong things. Just like people can learn bad habits, so too can a poorly designed machine learning model.
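As a minimal sketch of the normal-versus-abnormal distinction described above (not any particular vendor's system), an unsupervised isolation forest can flag claims that look unlike the bulk of the data. The claim features below are synthetic:

```python
# Unsupervised anomaly detection on synthetic claim features -- illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic claims: [billed amount, number of procedures, days since last claim]
normal_claims = rng.normal(loc=[200, 2, 30], scale=[50, 1, 10], size=(1000, 3))
odd_claims = rng.normal(loc=[5000, 15, 1], scale=[500, 3, 1], size=(10, 3))
claims = np.vstack([normal_claims, odd_claims])

detector = IsolationForest(contamination=0.01, random_state=0).fit(claims)
flags = detector.predict(claims)  # -1 = anomalous, 1 = normal
print("Flagged claims:", int((flags == -1).sum()))
```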

In order for online fraud detection based on AI technology to succeed, these platforms need to check three very important boxes.

First, supervised machine learning algorithms have to be trained and fine-tuned on decades' worth of transaction data to keep false positives to a minimum and improve reaction time. This is easier said than done because the data needs to be structured and properly labeled; depending on the size of the project, this could take staff years to accomplish.

Second, unsupervised machine learning needs to keep up with increasingly sophisticated forms of online fraud; after all, AI is used by both auditors and fraudsters. And finally, for AI fraud detection platforms to scale, they require a large, universal data network of activity (e.g., transactions and filed documents) to grow the ML algorithms and improve the accuracy of fraud detection scores.
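To illustrate the supervised piece of this recipe, here is a sketch of a classifier trained on labeled historical transactions, with the decision threshold raised so that false positives stay low. The data, features, and the 0.9 threshold are illustrative assumptions, not any production system:

```python
# Supervised fraud classification with a strict threshold to limit false positives.
# Synthetic data and features -- illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 4))  # transaction features
y = (X[:, 0] + 2 * X[:, 1] + rng.normal(size=5000) > 3).astype(int)  # rare "fraud" label

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = LogisticRegression(class_weight="balanced").fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
preds = (probs > 0.9).astype(int)  # strict threshold keeps false positives down
print("precision:", precision_score(y_test, preds, zero_division=0))
print("recall:   ", recall_score(y_test, preds))
```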

According to a new market research report released earlier this year, the healthcare fraud analytics market is projected to reach $4.6 billion by 2025 from $1.2 billion in 2020.

This growth is attributed to more numerous and complex fraudulent activity in the healthcare sector.

In order to tackle rising healthcare fraud, companies offer various analytics solutions that flag fraudulent activity. Some are rule-based models, but AI-based technologies are expected to form the backbone of all types of analytics used in the future, including descriptive, predictive, and prescriptive analytics.

Some of the most important companies operating today in the healthcare fraud analytics market include IBM Corporation (US), Optum (US), SAS Institute (US), Change Healthcare (US), EXL Service Holdings (US), Cotiviti (US), Wipro Limited (Wipro) (India), Conduent (US), HCL (India), Canadian Global Information Technology Group (Canada), DXC Technology Company (US), Northrop Grumman Corporation (US), LexisNexis Group (US), and Pondera Solutions (US).

That being said, there is a wide range of options in place today to prevent fraud. However, the evolving landscape of e-commerce and hacking poses new challenges all the time, and keeping up requires innovation that can respond and react rapidly to fraud. The common denominator, from payment fraud to abuse, seems to be machine learning, which can easily scale to meet the demands of big data with far more flexibility than traditional methods.

Original post:
Artificial intelligence for fraud detection is bound to save billions - ZME Science


Google open-sources framework that reduces AI training costs by up to 80% – VentureBeat

Google researchers recently published a paper describing a framework, SEED RL, that scales AI model training to thousands of machines. They say that it could facilitate training at millions of frames per second on a machine while reducing costs by up to 80%, potentially leveling the playing field for startups that couldn't previously compete with large AI labs.

Training sophisticated machine learning models in the cloud remains prohibitively expensive. According to a recent Synced report, the University of Washington's Grover, which is tailored for both the generation and detection of fake news, cost $25,000 to train over the course of two weeks. OpenAI racked up $256 per hour to train its GPT-2 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.

SEED RL, which is based on Google's TensorFlow 2.0 framework, features an architecture that takes advantage of graphics cards and tensor processing units (TPUs) by centralizing model inference. To avoid data transfer bottlenecks, it performs AI inference centrally with a learner component that trains the model using input from distributed inference. The target model's variables and state information are kept local, while observations are sent to the learner at every environment step, and latency is kept to a minimum thanks to a network library based on the open source universal RPC framework.

SEED RL's learner component can be scaled across thousands of cores (e.g., up to 2,048 on Cloud TPUs), and the number of actors, which iterate between taking steps in the environment and running inference on the model to predict the next action, can scale up to thousands of machines. One algorithm, V-trace, predicts an action distribution from which an action can be sampled, while another, R2D2, selects an action based on the predicted future value of that action.
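The centralized-inference pattern is easier to see in a toy sketch: actors send observations to a single learner, which computes actions and sends them back. The real system batches these on TPUs, trains with V-trace or R2D2, and communicates over a gRPC-based network library; plain Python queues stand in for all of that here:

```python
# Toy sketch of centralized inference: actors send observations to one learner,
# which picks actions and replies. Not SEED RL itself -- queues replace gRPC/TPUs.
import queue
import threading
import numpy as np

NUM_ACTORS, STEPS = 4, 10
obs_queue = queue.Queue()
action_queues = [queue.Queue() for _ in range(NUM_ACTORS)]

def learner():
    """Central inference loop: receive observations, compute actions, reply per actor.
    (Real SEED RL batches these on accelerators and also trains on the trajectories.)"""
    for _ in range(NUM_ACTORS * STEPS):
        actor_id, obs = obs_queue.get()
        action = int(np.argmax(obs))  # stand-in for a policy network forward pass
        action_queues[actor_id].put(action)

def actor(actor_id):
    """Environment loop: step, send the observation, wait for the learner's action."""
    obs = np.random.rand(4)
    for _ in range(STEPS):
        obs_queue.put((actor_id, obs))
        action = action_queues[actor_id].get()
        obs = np.random.rand(4)  # stand-in for env.step(action)

threads = [threading.Thread(target=learner)]
threads += [threading.Thread(target=actor, args=(i,)) for i in range(NUM_ACTORS)]
for t in threads: t.start()
for t in threads: t.join()
print("finished", NUM_ACTORS * STEPS, "environment steps with centralized inference")
```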

To evaluate SEED RL, the research team benchmarked it on the commonly used Arcade Learning Environment, several DeepMind Lab environments, and the Google Research Football environment. They say that they managed to solve a previously unsolved Google Research Football task and that they achieved 2.4 million frames per second with 64 Cloud TPU cores, an 80-fold improvement over the previous state-of-the-art distributed agent.

"This results in a significant speed-up in wall-clock time and, because accelerators are orders of magnitude cheaper per operation than CPUs, the cost of experiments is reduced drastically," wrote the coauthors of the paper. "We believe SEED RL, and the results presented, demonstrate that reinforcement learning has once again caught up with the rest of the deep learning field in terms of taking advantage of accelerators."

Read more:
Google open-sources framework that reduces AI training costs by up to 80% - VentureBeat
