Category Archives: Machine Learning
For decades, discovering novel antibiotics meant digging through the same patch of dirt. Biologists spent countless hours screening soil-dwelling microbes for properties known to kill harmful bacteria. But as superbugs resistant to existing antibiotics have spread widely, breakthroughs have become as rare as new places to dig.
Now, artificial intelligence is giving scientists a reason to dramatically expand their search into databases of molecules that look nothing like existing antibiotics.
A study published Thursday in the journal Cell describes how researchers at the Massachusetts Institute of Technology used machine learning to identify a molecule that appears capable of countering some of the world's most formidable pathogens.
When tested in mice, the molecule, dubbed halicin, effectively treated the gastrointestinal bug Clostridium difficile (C. diff), a common killer of hospitalized patients, and another type of drug-resistant bacteria that often causes infections in the blood, urinary tract, and lungs.
The most surprising feature of the molecule? It is structurally distinct from existing antibiotics, the researchers said. It was found in a drug-repurposing database where it was initially identified as a possible treatment for diabetes, a feat that showcases the power of machine learning to support discovery efforts.
"Now we're finding leads among chemical structures that in the past we wouldn't have even hallucinated could be an antibiotic," said Nigam Shah, professor of biomedical informatics at Stanford University. "It greatly expands the search space into dimensions we never knew existed."
Shah, who was not involved in the research, said that the generation of a promising molecule is just the first step in a long and uncertain process of testing its safety and effectiveness in humans.
But the research demonstrates how machine learning, when paired with expert biologists, can speed up time-consuming preclinical work, and give researchers greater confidence that the molecule they're examining is worth pursuing through more costly phases of drug discovery.
That is an especially pressing challenge in the development of new antibiotics, because a lack of economic incentives has caused pharmaceutical companies to pull back from the search for badly needed treatments. Each year in the U.S., drug-resistant bacteria and fungi cause more than 2.8 million infections and 35,000 deaths, with more than a third of fatalities attributable to C. diff, according to the Centers for Disease Control and Prevention.
The damage is far greater in countries with fewer health care resources.
Without the development of novel antibiotics, the World Health Organization estimates that the global death toll from drug resistant infections is expected to rise to 10 million a year by 2050, up from about 700,000 a year currently.
In addition to finding halicin, the researchers at MIT reported that their machine learning model identified eight other antibacterial compounds whose structures differ significantly from known antibiotics.
"I do think this platform will very directly reduce the cost involved in the discovery phase of antibiotic development," said James Collins, a co-author of the study who is a professor of bioengineering at MIT. "With these models, one can now get after novel chemistries in a shorter period of time involving less investment."
The machine learning platform was developed by Regina Barzilay, a professor of computer science and artificial intelligence who works with Collins as co-lead of the Jameel Clinic for Machine Learning in Health at MIT. It relies on a deep neural network, a type of AI architecture that uses multiple processing layers to analyze different aspects of data to deliver an output.
Prior types of machine learning systems required close supervision from humans to analyze molecular properties in drug discovery and produced spotty results. But Barzilay's model is part of a new generation of machine learning systems that can automatically learn chemical properties connected to a specific function, such as an ability to kill bacteria.
Barzilay worked with Collins and other biologists at MIT to train the system on more than 2,500 chemical structures, including those that looked nothing like antibiotics. The effect was to counteract the bias that typically trips up human scientists, who are trained to look for molecular structures that closely resemble known antibiotics.
The neural net was able to isolate molecules that were predicted to have antibacterial qualities but didn't look like existing antibiotics, resulting in the identification of halicin.
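To make the idea concrete, here is a minimal sketch, not the MIT team's actual model, of training a small neural network to predict an "antibacterial" label from a numeric encoding of molecular structure. The fingerprints, labels, and the toy activity rule below are invented stand-ins for illustration.

```python
# A minimal sketch (not the MIT model): a neural network learns to predict an
# antibacterial label from toy binary "fingerprints" standing in for molecular
# structure; candidates are then ranked by predicted probability of activity.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_molecules, n_bits = 2500, 128
X = rng.integers(0, 2, size=(n_molecules, n_bits))       # synthetic fingerprints
# Toy rule standing in for "kills bacteria": depends on a few substructure bits.
y = ((X[:, 3] & X[:, 17]) | X[:, 99]).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Rank unseen molecules by predicted probability of antibacterial activity,
# the same way candidate molecules are ranked before lab testing.
scores = model.predict_proba(X_test)[:, 1]
print("top candidate indices:", np.argsort(scores)[::-1][:5])
```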
"To use a crude analogy, it's like you show an AI all the different means of transportation, but you've not shown it an electric scooter," said Shah, the bioinformatics professor at Stanford. "And then it independently looks at an electric scooter and says, 'Yeah, this could be useful for transportation.'"
In follow-up testing in the lab, Collins said, halicin displayed a remarkable ability to fight a wide range of multidrug-resistant pathogens. Tested against 36 such pathogens, it displayed potency against 35 of them. Collins said testing in mice showed excellent activity against C. diff, tuberculosis, and other bacteria.
The ability to identify molecules with specific antibiotic properties could aid in the development of drugs to treat so-called orphan conditions that affect a small percentage of the population but are not targeted by drug companies because of the lack of financial rewards.
Collins noted that commercializing halicin would take many months of study to evaluate its toxicity in humans, followed by multiple phases of clinical trials to establish safety and efficacy.
Read the original post:
Machine learning finds a novel antibiotic able to kill superbugs - STAT
Do you know how Google Maps predicts traffic? Are you amused by how Amazon Prime or Netflix serves up just the movie you would want to watch? We all know it must be some form of Artificial Intelligence. Machine Learning uses algorithms and statistical models to perform such tasks. The same approach is used to detect faces on Facebook and to detect cancer. A Machine Learning course can teach the development and application of such models.
Artificial Intelligence mimics human intelligence, and Machine Learning is one of its most significant branches. There is an ongoing and increasing need for its development.
Tasks as simple as spam detection in Gmail illustrate its significance in our day-to-day lives. That is why data scientists are in such demand at present. An aspiring data scientist can learn to develop and apply such algorithms through a Machine Learning certification.
Machine learning, as a subset of Artificial Intelligence, is applied for varied purposes. There is a misconception that applying Machine Learning algorithms requires deep prior mathematical knowledge, but a Machine Learning online course would suggest otherwise. Contrary to the conventional bottom-up approach to studying, a top-down approach is involved: an aspiring data scientist, a business person, or anyone else can learn how to apply statistical models for various purposes. Here is a list of some well-known applications of Machine Learning.
Microsoft's research lab uses Machine Learning to study cancer. This helps in individualized oncological treatment and the generation of detailed progress reports. Data engineers apply pattern recognition, Natural Language Processing, and computer vision algorithms to work through large datasets, which helps oncologists conduct precise and breakthrough tests.
Likewise, machine learning is applied in biomedical engineering, where it has led to the automation of diagnostic tools. Such tools are used in detecting neurological and psychiatric disorders of many sorts.
We have all had a conversation with Siri or Alexa. They use speech recognition to capture our requests, and Machine Learning is applied to generate responses based on previous data. Hello Barbie is a Siri-like version for kids to play with: it uses advanced analytics, machine learning, and Natural Language Processing to respond, and as the first AI-enabled toy it could lead to more such inventions.
Google uses Machine Learning statistical models to acquire inputs such as the distance from start point to end point, trip duration, and bus schedules. Such historical data is stored and reused. Machine Learning algorithms developed for data prediction recognize patterns among these inputs and predict approximate time delays.
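As a rough illustration of that idea, the sketch below fits a regression model to synthetic historical trips, predicting delay from distance, hour of day, and day of week. Nothing here reflects Google's actual features or models; the data and the "rush hour" pattern are assumptions.

```python
# A minimal sketch, assuming nothing about Google's internals: predict a trip's
# delay (in minutes) from historical features, then query the model for a new trip.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n_trips = 5000
distance_km = rng.uniform(1, 30, n_trips)
hour = rng.integers(0, 24, n_trips)
weekday = rng.integers(0, 7, n_trips)
rush = ((hour >= 7) & (hour <= 9)) | ((hour >= 16) & (hour <= 18))
delay_min = 0.5 * distance_km + 8 * rush + rng.normal(0, 2, n_trips)  # toy pattern

X = np.column_stack([distance_km, hour, weekday])
model = GradientBoostingRegressor().fit(X, delay_min)

# Predict the delay for a 12 km trip at 5 pm on a Tuesday.
print(model.predict([[12, 17, 1]]))
```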
Another well-known Google application, Google Translate, involves Machine Learning. Deep learning helps the system learn language rules from recorded conversations. Long Short-Term Memory (LSTM) networks support learning and long-term retention of information, and recurrent neural networks capture the sequential structure of language. Even bilingual processing is feasible nowadays.
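For a flavor of what an LSTM looks like in code, here is a tiny encoder sketch in PyTorch, not Google's production stack. The vocabulary and layer sizes are toy assumptions; a real translation system pairs an encoder like this with a decoder trained on millions of sentence pairs.

```python
# A minimal sketch of an LSTM encoder turning a source-language sentence into a
# context vector that a decoder could translate from. Vocabulary and sizes are toy.
import torch
import torch.nn as nn

vocab = {"<pad>": 0, "hello": 1, "world": 2, "good": 3, "morning": 4}
embed = nn.Embedding(num_embeddings=len(vocab), embedding_dim=16)
encoder = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)

tokens = torch.tensor([[vocab["good"], vocab["morning"]]])  # one toy sentence
outputs, (hidden, cell) = encoder(embed(tokens))
# `hidden` summarizes the sentence; a decoder LSTM would generate the target
# language one token at a time, conditioned on this state.
print(hidden.shape)  # torch.Size([1, 1, 32])
```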
Facebook uses image recognition and computer vision to detect images, which are fed as inputs. The statistical models developed using Machine Learning map any information associated with these images. Facebook generates automated captions for images, which provide descriptions for visually impaired people. This innovation has nudged data engineers to come up with other such valuable real-time applications.
For movie recommendations, the aim is to increase the likelihood that a customer watches a recommended title, and this is achieved by studying thumbnails. Every available movie has separate thumbnails, and each is assigned numerical values; an algorithm studies these values and generates a recommendation through pattern recognition among the numerical data.
Tesla uses computer vision, data prediction, and path planning for autonomous driving, and the machine learning practices applied make the innovation stand out. Deep neural networks work with training data and generate driving instructions, and maneuvers such as changing lanes are learned through imitation learning.
Gmail, Yahoo Mail, and Outlook engage machine learning techniques such as neural networks. These networks detect patterns in historical data, training on known spam and phishing messages. These spam filters are reported to provide 99.9 percent accuracy.
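A toy version of that pipeline looks like the sketch below: a bag-of-words classifier trained on a handful of hand-labeled messages. Real providers use neural networks and enormous mail corpora; the messages, labels, and the simple logistic model here are illustrative assumptions.

```python
# A minimal sketch of the spam-filter idea: vectorize message text and train a
# classifier on labeled examples, then score a new message.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now, click here",
    "Lowest price pills, limited offer",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review my pull request today?",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

spam_filter = make_pipeline(CountVectorizer(), LogisticRegression())
spam_filter.fit(messages, labels)
print(spam_filter.predict(["Claim your free prize today"]))  # expect [1]
```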
As people grow more health conscious, the development of fitness-monitoring applications is on the rise. As a market leader, Fitbit ensures its effectiveness through machine learning methods: trained models predict user activities, which is achieved through data pre-processing, data processing, and data partitioning. There remains room to extend the application to additional purposes.
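The partitioning-and-training workflow mentioned above might look roughly like this sketch, which is not Fitbit's pipeline: synthetic accelerometer-window features are split into train and test sets and fed to an off-the-shelf classifier.

```python
# A minimal sketch of activity recognition: classify walking vs. running from
# per-window accelerometer features. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Pretend features: mean and std of acceleration magnitude per 10-second window.
walking = rng.normal(loc=[1.1, 0.2], scale=0.05, size=(200, 2))
running = rng.normal(loc=[2.5, 0.8], scale=0.1, size=(200, 2))
X = np.vstack([walking, running])
y = np.array([0] * 200 + [1] * 200)  # 0 = walking, 1 = running

# Data partitioning: hold out a test set to estimate real-world accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```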
The applications mentioned above are just the tip of the iceberg. Machine learning, as a subset of Artificial Intelligence, finds uses in many other areas of daily life.
Go here to see the original:
Machine Learning: Real-life applications and its significance in Data Science - Techstory
Recently, the international evaluation agency Standard Performance Evaluation Corporation (SPEC) finalized the election of new Open System Steering Committee (OSSC) executive members, which include Inspur, Intel, AMD, IBM, Oracle, and three other companies.
It is worth noting that Inspur, a re-elected OSSC member, was also re-elected as chair of the SPEC Machine Learning (SPEC ML) working group. The development plan for the ML test benchmark proposed by Inspur has been approved by members; it aims to provide users with a standard for evaluating machine learning computing performance.
SPEC is a global, authoritative third-party application performance testing organization established in 1988. It establishes and maintains a series of performance, function, and energy-consumption benchmarks, and provides important reference standards for users evaluating the performance and energy efficiency of computing systems. The organization consists of 138 well-known technology companies, universities, and research institutions such as Intel, Oracle, NVIDIA, Apple, Microsoft, Inspur, Berkeley, and Lawrence Berkeley National Laboratory, and its test standards have become an important indicator for many users evaluating overall computing performance.
The OSSC executive committee is the permanent body of the SPEC OSG (short for Open System Group, the earliest and largest committee established by SPEC). It is responsible for supervising and reviewing the daily work of OSG's major technical groups, handling major issues, additions and deletions of members, research directions, and decisions on testing standards. The OSSC executive committee also manages the development and maintenance of the SPEC CPU, SPEC Power, SPEC Java, SPEC Virt, and other benchmarks.
Machine Learning is an important direction in AI development. Different computing accelerator technologies such as GPUs, FPGAs, and ASICs, and different AI frameworks such as TensorFlow and PyTorch, provide customers with a rich marketplace of options. The next important thing for customers to consider is how to evaluate the computing efficiency of the various AI computing platforms. Both enterprises and research institutions require a set of benchmarks and methods to effectively measure performance and find the right solution for their needs.
In the past year, Inspur has done much to advance development of the SPEC ML standard's specific components, contributing test models, architectures, use cases, and methods, which have been duly acknowledged by the SPEC organization and its members.
Joe Qiao, General Manager of Inspur's Solution and Evaluation Department, believes that SPEC ML can provide an objective comparison standard for AI/ML applications, which will help users choose a computing system that best meets their application needs. Meanwhile, it also provides a unified measurement standard for manufacturers to improve their technologies and solution capabilities, advancing the development of the AI industry.
Inspur is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world's top 3 server manufacturers. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas like open computing, cloud data center, AI and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please go to http://www.inspursystems.com.
View source version on businesswire.com: https://www.businesswire.com/news/home/20200221005123/en/
Media contact: Fiona Liu, Liuxuan01@inspur.com
Most applications of artificial intelligence (AI) and machine learning technology provide only data to physicians, leaving the doctors to form a judgment on how to proceed. Because AI doesn't actually perform any procedure or prescribe a course of medication, the software that diagnoses health problems does not have to pass a randomized clinical trial as do devices such as insulin pumps or new medications.
A new study published Monday at JAMA Network discusses a trial including 68 patients undergoing elective noncardiac surgery under general anesthesia. The object of the trial was to determine if a predictive early warning system for possible hypotension (low blood pressure) during the surgery might reduce the time-weighted average of hypotension episodes during the surgery.
In other words, not only would the device and its software track the patient's mean arterial blood pressure, but it would sound an alarm if there was an 85% or greater risk of the patient's blood pressure falling below 65 mm of mercury (Hg) within the next 15 minutes. The device also encouraged the anesthesiologist to take preemptive action.
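The alerting rule itself is simple to express; the hard part is the prediction model behind it. Here is a minimal sketch of the threshold logic, using the 85% and 65 mm Hg figures reported for the trial; the function name and messages are hypothetical, and the underlying prediction model is out of scope.

```python
# A minimal sketch of the early-warning threshold described above: alarm when the
# model's predicted probability of MAP dropping below 65 mm Hg within 15 minutes
# reaches 85%. The prediction model itself is not shown.
HYPOTENSION_MAP_MM_HG = 65
ALERT_PROBABILITY = 0.85

def check_early_warning(predicted_prob_map_below_65: float) -> str:
    """Return the message shown to the anesthesiologist."""
    if predicted_prob_map_below_65 >= ALERT_PROBABILITY:
        return ("ALERT: high risk of MAP < 65 mm Hg within 15 minutes; "
                "consider preemptive intervention")
    return "monitoring"

print(check_early_warning(0.91))  # ALERT ...
print(check_early_warning(0.40))  # monitoring
```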
Patients in the control group were connected to the same AI device and software, but only routine pulse and blood pressure data were displayed. That means the anesthesiologist had no early warning about a hypotension event and could take no action to prevent it.
Among patients fully connected to the device and software, the median time-weighted average of hypotension was 0.1 mm Hg, compared to an average of 0.44 mm Hg in the control group. In the control group, the median time of hypotension per patient was 32.7 minutes, while it was just 8.0 minutes among the other patients. Most important, perhaps, two patients in the control group died from serious adverse events, while no patients connected to the AI device and software died.
The algorithm used by the device was developed by a different group of researchers, who trained the software on thousands of waveform features to identify a possible hypotension event 15 minutes before it occurs during surgery. The devices used were a Flotrac IQ sensor with the early-warning software installed and a HemoSphere monitor. Both are made by Edwards Lifesciences, and five of the eight researchers who developed the algorithm were affiliated with Edwards. The study itself was conducted in the Netherlands at Amsterdam University Medical Centers.
In an editorial at JAMA Network, associate editor Derek Angus wrote:
The final model predicts the likelihood of future hypotension via measurement of multiple variables characterizing dynamic interactions between left ventricular contractility, preload, and afterload. Although clinicians can look at arterial pulse pressure waveforms and, in combination with other patient features, make educated guesses about the possibility of upcoming episodes of hypotension, the likelihood is high that an AI algorithm could make more accurate predictions.
Among the past decade's biggest health news stories were the development of immunotherapies for cancer and a treatment for cystic fibrosis. AI is off to a good start in the new decade.
By Paul Ausick
View original post here:
Artificial Intelligence and Machine Learning in the Operating Room - 24/7 Wall St.
Leadership team of credit-as-a-service startup Migo, one of a growing number of businesses using AI to create consumer-facing products.
The ability to make good decisions is literally the reason people trust you with responsibilities. Whether you work for a government or lead a team at a private company, your decision-making process will affect lives in very real ways.
Organisations often make poor decisions because they fail to learn from the past. Wherever a reluctance to collect data exists, there is a fair chance that mistakes will be repeated. Bad policy goals are often a consequence of faulty evidentiary support: a failure to sufficiently look ahead by not sufficiently looking back.
But as Daniel Kahneman, author of Thinking, Fast and Slow, says:
"The idea that the future is unpredictable is undermined every day by the ease with which the past is explained." If governments and business leaders are to live up to their responsibilities, enthusiastically embracing methodical decision-making tools should be a no-brainer.
Mass media representations project artificial intelligence in futuristic, geeky terms. But nothing could be further from the truth.
While it is indeed scientific, AI can be applied in practical everyday life today. Basic interactions with AI include algorithms that recommend articles to you, friend suggestions on social media and smart voice assistants like Alexa and Siri.
In the same way, government agencies can integrate AI into regular processes necessary for society to function properly.
Managing money is an easy example to begin with. AI systems can be used to streamline data points required during budget preparations and other fiscal processes. Based on data collected from previous fiscal cycles, government agencies could reasonably forecast needs and expectations for future years.
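As a very small illustration of that kind of forecasting, the sketch below fits a linear trend to a few years of hypothetical spending figures and projects the next year; a real system would draw on far richer data and models.

```python
# A minimal sketch, with made-up figures: forecast next year's spending for one
# budget line from previous fiscal cycles using a simple linear trend.
import numpy as np

years = np.array([2015, 2016, 2017, 2018, 2019])
spend_millions = np.array([41.0, 43.5, 44.2, 47.1, 49.0])  # hypothetical

slope, intercept = np.polyfit(years, spend_millions, deg=1)
forecast_2020 = slope * 2020 + intercept
print(f"forecast for 2020: {forecast_2020:.1f}M")
```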
With their large troves of citizen data, governments could employ AI to effectively reduce inequalities in outcomes and opportunities. Big Data gives a bird's-eye view of the population, providing adequate tools for equitably distributing essential infrastructure.
Perhaps a more futuristic example is in drafting legislation. Though a young discipline, legimatics includes the use of artificial intelligence in legal and legislative problem-solving.
Democracies like Nigeria consider public input a crucial aspect of desirable law-making. While AI cannot yet be relied on to draft legislation without human involvement, an AI-based approach can produce tools for specific parts of legislative drafting or decision support systems for the application of legislation.
In Africa, businesses are already ahead of most governments in AI adoption. Credit scoring based on customer data has become popular in the digital lending space.
However, there is more for businesses to explore with the predictive powers of AI. A particularly exciting prospect is the potential for new discoveries based on unstructured data.
Machine learning could broadly be split into two sections: supervised and unsupervised learning. With supervised learning, a data analyst sets goals based on the labels and known classifications of the dataset. The resulting insights are useful but do not produce the sort of new knowledge that comes from unsupervised learning processes.
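The contrast can be shown in a few lines. In this sketch, with toy data and standard scikit-learn estimators, the supervised model needs the labels, while the clustering model groups the same points without them:

```python
# A minimal sketch of supervised vs. unsupervised learning on the same toy data:
# logistic regression fits known labels; k-means discovers groups with no labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
low_risk = rng.normal([1, 1], 0.3, size=(50, 2))
high_risk = rng.normal([3, 3], 0.3, size=(50, 2))
X = np.vstack([low_risk, high_risk])
y = np.array([0] * 50 + [1] * 50)

supervised = LogisticRegression().fit(X, y)             # needs the labels y
unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)   # uses only X

print(supervised.predict([[2.9, 3.1]]))   # predicts the known label
print(unsupervised.labels_[:5])           # clusters discovered without labels
```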
In essence, AI can be a medium for market-creating innovations based on previously unknown insight buried in massive caches of data.
Digital lending became a market opportunity in Africa thanks to growing smartphone availability. However, customer data had to be available too for algorithms to do their magic.
This is why it is desirable for more data-sharing systems to be normalised on the continent to generate new consumer products. Fintech sandboxes that bring the public and private sectors together aiming to achieve open data standards should therefore be encouraged.
Artificial intelligence, like other technologies, is neutral. It can be used for social good but also can be diverted for malicious purposes. For both governments and businesses, there must be circumspection and a commitment to use AI responsibly.
China is a cautionary tale. The Communist state currently employs an all-watching system of cameras to enforce round-the-clock citizen surveillance.
By algorithmically rating citizens on a so-called social credit score, China's ultra-invasive AI effectively precludes individual freedom, compelling its 1.3 billion people to live strictly by the Politburo's ideas of ideal citizenship.
On the other hand, businesses must be ethical in providing transparency to customers about how data is harvested to create products. At the core of all exchange must be trust, and a verifiable, measurable commitment to do no harm.
Doing otherwise condemns modern society to those dystopian days everybody dreads.
How can businesses and governments use Artificial Intelligence to find solutions to challenges facing the continent? Join entrepreneurs, innovators, investors and policymakers in Africa's AI community at TechCabal's emerging tech townhall. At the event, stakeholders including telcos and financial institutions will examine how businesses, individuals and countries across the continent can maximize the benefits of emerging technologies, specifically AI and Blockchain. Learn more about the event and get tickets here.
Continue reading here:
How businesses and governments should embrace AI and Machine Learning - TechCabal
How to Pick a Winning March Madness Bracket – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times
In 2019, over 40 million Americans wagered money on March Madness brackets, according to the American Gaming Association. Most of this money was bet in bracket pools, which consist of a group of people each entering their predictions of the NCAA tournament games along with a buy-in. The bracket that comes closest to being right wins. If you also consider the bracket pools where only pride is at stake, the number of participants is much greater. Despite all this attention, most do not give themselves the best chance to win because they are focused on the wrong question.
The Right Question
Mistake #3 in Dr. John Elder's Top 10 Data Science Mistakes is to ask the wrong question. A cornerstone of any successful analytics project is having the right project goal, that is, aiming at the right target. If you're like most people, when you fill out your bracket you ask yourself, "What do I think is most likely to happen?" This is the wrong question to ask if you are competing in a pool, because the objective is to win money, NOT to make the most correct bracket. The correct question to ask is: "What bracket gives me the best chance to win money?" (This requires studying the payout formula. I used ESPN standard scoring, 320 possible points per round, with all pool money given to the winner: 10 points are awarded for each correct pick in the round of 64, 20 in the round of 32, and so forth, doubling until 320 are awarded for a correct championship call.)
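For readers who want to sanity-check that scheme, the short snippet below confirms that each round is worth 320 points under the doubling rule and that a perfect bracket scores 1,920:

```python
# A quick check of ESPN standard scoring as described above: points per correct
# pick double each round, and every round is worth 320 points in total.
games_per_round = [32, 16, 8, 4, 2, 1]        # round of 64 through championship
points_per_pick = [10, 20, 40, 80, 160, 320]

for games, points in zip(games_per_round, points_per_pick):
    assert games * points == 320              # each round totals 320 points
print("maximum possible score:", 320 * len(games_per_round))  # 1920
```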
While these questions seem similar, the brackets they produce will be significantly different.
If you ignore your opponents and pick the teams with the best chance to win games you will reduce your chance of winning money. Even the strongest team is unlikely to win it all, and even if they do, plenty of your opponents likely picked them as well. The best way to optimize your chances of making money is to choose a champion team with a good chance to win who is unpopular with your opponents.
Knowing how other people in your pool are filling out their brackets is crucial, because it helps you identify teams that are less likely to be picked. One way to see how others are filling out their brackets is via ESPN's Who Picked Whom page (Figure 1). It summarizes how often each team is picked to advance in each round across all ESPN brackets and is a great first step towards identifying overlooked teams.
Figure 1. ESPN's Who Picked Whom Tournament Challenge page
For a team to be overlooked, their perceived chance to win must be lower than their actual chance to win. The Who Picked Whom page provides an estimate of the perceived chance to win, but to find undervalued teams we also need estimates of the actual chance to win. These can come from anything ranging from a complex prediction model to your own gut feeling. Two sources I trust are 538's March Madness predictions and Vegas futures betting odds. 538's predictions are based on a combination of computer rankings and have predicted performance well in past tournaments. There is also reason to pay attention to Vegas odds, because if they were too far off, the sportsbooks would lose money.
However, both sources have their flaws. 538 is based on computer ratings, so while it avoids human bias, it misses out on expert intuition. Most Vegas sportsbooks likely use both computer ratings and expert intuition to create their betting odds, but they are strongly motivated to have equal betting on all sides, so they are significantly affected by human perception. For example, if everyone were betting on Duke to win the NCAA tournament, the sportsbooks would shorten Duke's odds (reducing the payout) so that more people would bet on other teams, avoiding large losses. When calculating win probabilities for this article, I chose to average the 538 and Vegas predictions to obtain a balance I was comfortable with.
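The blending step can be sketched as follows. The moneylines and model probabilities below are made-up examples; the point is converting futures odds to implied probabilities and averaging them with a model's estimates. In practice you would also normalize the implied probabilities across the whole field to strip out the bookmaker's margin.

```python
# A minimal sketch of blending Vegas futures odds with model probabilities.
# All numbers below are hypothetical.
def moneyline_to_prob(odds: int) -> float:
    """Implied probability from American moneyline odds."""
    return 100 / (odds + 100) if odds > 0 else -odds / (-odds + 100)

vegas_prob = {t: moneyline_to_prob(ml) for t, ml in
              {"Duke": 225, "Virginia": 550, "Gonzaga": 600}.items()}
# (In practice, normalize implied probabilities across the whole field so they
#  sum to 1, removing the bookmaker's margin.)
model_prob = {"Duke": 0.19, "Virginia": 0.12, "Gonzaga": 0.15}

blended = {t: (vegas_prob[t] + model_prob[t]) / 2 for t in vegas_prob}
print(blended)
```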
Let's look at last year. Figure 2 compares teams' perceived chance to win (based on ESPN's Who Picked Whom) to their actual chance to win (based on the averaged 538-Vegas predictions) for the leading 2019 NCAA Tournament teams. (Probabilities for all 64 teams in the tournament appear in Table 6 in the Appendix.)
Figure 2. Actual versus perceived chance to win March Madness for 8 top teams
As shown in Figure 2, participants over-picked Duke and North Carolina as champions and under-picked Gonzaga and Virginia. Many factors contributed to these selections; for example, most predictive models, avid sports fans, and bettors agreed that Duke was the best team last year. If you were picking the bracket most likely to occur, then selecting Duke as champion was the natural pick. But ignoring the selections made by others in your pool won't help you win your pool.
While this graph is interesting, how can we turn it into concrete takeaways? Gonzaga and Virginia look like good picks, but what about the rest of the teams hidden in that bottom-left corner? Does it ever make sense to pick a team like Texas Tech, which had a 2.6% chance to win it all and only 0.9% of brackets picking them? How much does picking an overvalued favorite like Duke hurt your chances of winning your pool?
To answer these questions, I simulated many bracket pools and found that the teams in Gonzaga's and Virginia's spots are usually the best picks: the most undervalued of the top four to five favorites. However, as the size of your bracket pool increases, overlooked lower seeds like third-seeded Texas Tech or fourth-seeded Virginia Tech become more attractive. The logic is simple: the chance that one of these teams wins it all is small, but if they do, you probably win your pool regardless of the number of participants, because it's likely no one else picked them.
To simulate bracket pools, I first had to simulate brackets. I used an average of the Vegas and 538 predictions to run many simulations of the actual events of March Madness. As discussed above, this method isn't perfect, but it's a good approximation. Next, I used the Who Picked Whom page to simulate many human-created brackets. For each human bracket, I calculated the chance it would win a pool of size n by first finding its percentile ranking among all human brackets, assuming one of the 538-Vegas simulated brackets reflected the real events. This percentile is essentially the chance it beats a random opposing bracket. I raised the percentile to the power n - 1, then repeated for all simulated 538-Vegas brackets, averaging the results to get a single win probability per bracket.
For example, let's say that for one 538-Vegas simulation, my bracket is in the 90th percentile of all human brackets, and there are nine other people in my pool. The chance I win the pool would be 0.9^9, or about 39%. If we assumed a different simulation, my bracket might only be in the 20th percentile, which would make my win probability 0.2^9, essentially zero. By averaging these probabilities over all 538-Vegas simulations we can estimate a bracket's win probability in a pool of size n, assuming we trust our input sources.
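Stripped to its core, the calculation looks like the sketch below, where the bracket's percentile under each simulated tournament outcome is a synthetic stand-in rather than the real Who Picked Whom data:

```python
# A stripped-down sketch of the pool-simulation idea: if a bracket's percentile
# among human brackets is p under one simulated tournament outcome, its chance of
# beating n - 1 opponents is p ** (n - 1); averaging over simulated outcomes
# gives an overall win probability. Percentiles here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
pool_size = 10
n_tournament_sims = 2000

percentiles = rng.beta(a=8, b=3, size=n_tournament_sims)  # a mostly-strong bracket
win_prob = np.mean(percentiles ** (pool_size - 1))
print(f"estimated chance to win a {pool_size}-person pool: {win_prob:.1%}")
```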
I used this methodology to simulate bracket pools with 10, 20, 50, 100, and 1000 participants. The detailed results of the simulations are shown in Tables 1-6 in the Appendix. Virginia and Gonzaga were the best champion picks when the pool had 50 or fewer participants. Yet, interestingly, Texas Tech and Purdue (3-seeds) and Virginia Tech (4-seed) were as good or better champion picks when the pool had 100 or more participants.
General takeaways from the simulations:
We have assumed that your local pool makes its selections just like the rest of America, which probably isn't true. If you live close to a team that's in the tournament, that team will likely be over-picked. For example, I live in Charlottesville (home of the University of Virginia), and Virginia has been picked as the champion in roughly 40% of brackets in my pools over the past couple of years. If you live close to a team with a high seed, one strategy is to start with ESPN's Who Picked Whom odds, then boost the odds of the popular local team and correspondingly drop the odds for all other teams. Another strategy I've used is to ask people in my pool who they are picking. It is mutually beneficial, since I'd be less likely to pick whoever they are picking.
As a parting thought, I want to describe a scenario from the 2019 NCAA tournament some of you may be familiar with. Auburn, a five seed, was winning by two points in the waning moments of a game when they inexplicably fouled an opposing player in the act of shooting a three-point shot with one second to go. The opposing player, a 78% free-throw shooter, stepped to the line and missed two of three shots, allowing Auburn to advance. This isn't an alternate reality; this is how Auburn won their first-round game against 12-seeded New Mexico State. They proceeded to beat powerhouses Kansas, North Carolina, and Kentucky on their way to the Final Four, where they faced the exact same situation against Virginia. Virginia's Kyle Guy made all three of his free throws, and Virginia went on to win the championship.
I add this to highlight an important qualifier of this analysis: it's impossible to accurately predict March Madness. Were the people who picked Auburn to go to the Final Four geniuses? Of course not. Had Terrell Brown of New Mexico State made his free throws, they would have looked silly. There is no perfect model that can predict the future, and those who do well in the pools are not basketball gurus; they are just lucky. Implementing the strategies discussed here won't guarantee a victory; they just reduce the amount of luck you need to win. And even with the best models, you'll still need a lot of luck. It is March Madness, after all.
Appendix: Detailed Analyses by Bracket Sizes
At baseline (randomly), a bracket in a ten-person pool has a 10% chance to win. Table 1 shows how that chance changes based on the round selected for a given team to lose. For example, brackets that had Virginia losing in the Round of 64 won a ten-person pool 4.2% of the time, while brackets that picked them to win it all won 15.1% of the time. As a reminder, these simulations were done with only pre-tournament information; they had no data indicating that Virginia was the eventual champion, of course.
Table 1 Probability that a bracket wins a ten-person bracket pool given that it had a given team (row) making it to a given round (column) and no further
In ten-person pools, the best-performing brackets were those that picked Virginia or Gonzaga as the champion, winning 15% of the time. Notably, early-round picks did not have a big influence on the chance of winning the pool, the exception being brackets that had a one or two seed losing in the first round. Brackets that had a three seed or lower as champion performed very poorly, but having lower seeds making the Final Four did not have a significant impact on the chance of winning.
Table 2 shows the same information for bracket pools with 20 people. The baseline chance is now 5%, and again the best-performing brackets are those that picked Virginia or Gonzaga to win. Similarly, picks in the first few rounds do not have much influence. Michigan State has now risen to the third-best champion pick, and interestingly Purdue is the third-best runner-up pick.
Table 2 Probability that a bracket wins a 20-person bracket pool given that it had a given team (row) making it to a given round (column) and no further
When the bracket pool size increases to 50, as shown in Table 3, picking the overvalued favorites (Duke and North Carolina) as champions significantly lowers your baseline chances (2%). The slightly undervalued two and three seeds now raise your baseline chances when selected as champions, but Virginia and Gonzaga remain the best picks.
Table 3 Probability that a bracket wins a 50-person bracket pool given that it had a given team (row) making it to a given round (column) and no further
With the bracket pool size at 100 (Table 4), Virginia and Gonzaga are joined by undervalued three-seeds Texas Tech and Purdue. Picking any of these four raises your baseline chances from 1% to close to 2%. Picking Duke or North Carolina again hurts your chances.
Table 4 Probability that a bracket wins a 100-person bracket pool given that it had a given team (row) making it to a given round (column) and no further
When the bracket pool grows to 1,000 people (Table 5), there is a complete changing of the guard. Virginia Tech is now the optimal champion pick, raising your baseline chance of winning your pool from 0.1% to 0.4%, followed by the three-seeds and sixth-seeded Iowa State.
Table 5 Probability that a bracket wins a 1000-person bracket pool given that it had a given team (row) making it to a given round (column) and no further
For Reference, Table 6 shows the actual chance to win versus the chance of being picked to win for all teams seeded seventh or better. These chances are derived from the ESPN Who Picked Whom page and the 538-Vegas predictions. The data for the top eight teams in Table 6 is plotted in Figure 2. Notably, Duke and North Carolina are overvalued, while the rest are all at least slightly undervalued.
The teams in bold in Table 6 are examples of teams that are good champion picks in larger pools. They all have a high ratio of actual chance to win to chance of being picked to win, but a low overall actual chance to win.
Table 6 Actual odds to win Championship vs Chance Team is Picked to Win Championship.
Undervalued teams in green; over-valued in red.
About the Author
Robert Robison is an experienced engineer and data analyst who loves to challenge assumptions and think outside the box. He enjoys learning new skills and techniques to reveal value in data. Robert earned a BS in Aerospace Engineering from the University of Virginia, and is completing an MS in Analytics through Georgia Tech.
In his free time, Robert enjoys playing volleyball and basketball, watching basketball and football, reading, hiking, and doing anything with his wife, Lauren.