

Category Archives: Machine Learning

Introduction to Machine Learning Course | Udacity


Machine Learning is a first-class ticket to the most exciting careers in data analysis today. As data sources proliferate along with the computing power to process them, going straight to the data is one of the most straightforward ways to quickly gain insights and make predictions.

Machine learning brings together computer science and statistics to harness that predictive power. It's a must-have skill for all aspiring data analysts and data scientists, or anyone else who wants to wrestle all that raw data into refined trends and predictions.

This is a class that will teach you the end-to-end process of investigating data through a machine learning lens. It will teach you how to extract and identify useful features that best represent your data, introduce a few of the most important machine learning algorithms, and show you how to evaluate their performance.

This course is also a part of our Data Analyst Nanodegree.

Read more from the original source:
Introduction to Machine Learning Course | Udacity

Posted in Machine Learning

Machine Learning Engineer Interview Questions: What You Need to Know – Dice Insights

Along with artificial intelligence (A.I.), machine learning is regarded as one of the most in-demand areas for tech employment at the moment. Machine learning engineers develop algorithms and models that can adapt and learn from data. As a result, those who thrive in this discipline are generally skilled not only in computer science and programming, but also in statistics, data science, deep learning, and problem solving.

According to Burning Glass, which collects and analyzes millions of job postings from across the country, the prospects for machine learning as an employer-desirable skill are quite good, with jobs projected to rise 36.5 percent over the next decade. Moreover, even those with relatively little machine-learning experience can pull down quite a solid median salary.

Dice Insights spoke with Oliver Sulley, director of Edge Tech Headhunters, to figure out how you should prepare, what you'll be asked during an interview, and what you should say to grab the gig.

"You're going to be faced potentially by bosses who don't necessarily know what it is that you're doing, or don't understand ML and have just been [told] they need to get it in the business," Sulley said. "They're being told by the transformation guys that they need to bring it on board."

As he explained, that means one of the key challenges facing machine learning engineers is determining what technology would be most beneficial to the employer, and being able to work as a cohesive team that may have been put together on very short notice.

"What a lot of companies are looking to do is take data they've collected and stored, and try and get them to build some sort of model that helps them predict what they can be doing in the future," Sulley said. "For example, how to make their stock leaner, or predicting trends that could come up over the year that would change their need for services that they offer."

Sulley notes that machine learning engineers are in rarefied air at the moment: it's a high-demand position, and lots of companies are eager to show they've brought machine learning specialists onboard.

"If they're confident in their skills, then a lot of the time they have to make sure the role is right for them," Sulley said. "It's more about the soft skills that are going to be important."

Many machine learning engineers are strong on the technical side, but they often have to interact with teams such as operations; as such, they need to be able to translate technical specifics into layman's terms and express how this data is going to benefit other areas of the company.

"Building those soft skills, and making sure people understand how you will work in a team, is just as important at this moment in time," Sulley added.

There are quite a few different roles for machine learning engineers, and so it's likely that all these questions could come up, but it will depend on the position. "We find questions with more practical experience are more common, and therefore will ask questions related to past work and the individual contributions engineers have made," Sulley said.


A lot of data engineering and machine learning roles involve working with different tech stacks, so it's hard to nail down a hard-and-fast set of skills; much depends on the company you're interviewing with. (If you're just starting out with machine learning, here are some resources that could prove useful.)

"For example, if it's a cloud-based role, a machine learning engineer is going to want to have experience with AWS and Azure; and for languages alone, Python and R are the most important, because that's what we see more and more in machine learning engineering," Sulley said. "For deployment, I'd say Docker, but it really depends on the person's background and what they're looking to get into."

Sulley said ideal machine learning candidates possess "a really analytical mind," as well as a passion for thinking about the world in terms of statistics.

"Someone who can connect the dots and has a statistical mind, someone who has a head for numbers and who is interested in that outside of work, rather than someone who just considers it their job and what they do," he said.

As you can see from the following Burning Glass data, quite a few jobs now ask for machine-learning skills; if not essential, they're often a nice-to-have for many employers that are thinking ahead.

Sulley suggests the questions you ask should be all about the technology: it's about understanding what the company is looking to build, what its vision is (and your potential contribution to it), and seeing where your career will grow within that company.

"You want to figure out whether you'll have a clear progression forward," he said. "From that, you will understand how much work they're going to do with you. Find out what they're really excited about, and that will help you figure out whether you'll be a valued member of the team. It's a really exciting space, and they should be excited by the opportunities that come with bringing you onboard."

Continued here:
Machine Learning Engineer Interview Questions: What You Need to Know - Dice Insights

Posted in Machine Learning

Will COVID-19 Create a Big Moment for AI and Machine Learning? – Dice Insights

COVID-19 will change how the majority of us live and work, at least in the short term. It's also creating a challenge for tech companies such as Facebook, Twitter and Google that ordinarily rely on lots and lots of human labor to moderate content. Are A.I. and machine learning advanced enough to help these firms handle the disruption?

First, it's worth noting that, although Facebook has instituted a sweeping work-from-home policy in order to protect its workers (along with Google and a rising number of other firms), it initially required its contractors who moderate content to continue to come into the office. That situation only changed after protests, according to The Intercept.

Now, Facebook is paying those contractors while they sit at home, since the nature of their work (scanning people's posts for content that violates Facebook's terms of service) is extremely privacy-sensitive. Here's Facebook's statement:

For both our full-time employees and contract workforce there is some work that cannot be done from home due to safety, privacy and legal reasons. We have taken precautions to protect our workers by cutting down the number of people in any given office, implementing recommended work from home globally, physically spreading people out at any given office and doing additional cleaning. Given the rapidly evolving public health concerns, we are taking additional steps to protect our teams and will be working with our partners over the course of this week to send all contract workers who perform content review home, until further notice. We'll ensure that all workers are paid during this time.

Facebook, Twitter, Reddit, and other companies are in the same proverbial boat: There's an increasing need to police their respective platforms, if only to eliminate fake news about COVID-19, but the workers who handle such tasks can't necessarily do so from home, especially on their personal laptops. The potential solution? Artificial intelligence (A.I.) and machine-learning algorithms meant to scan questionable content and make a decision about whether to eliminate it.

Here's Google's statement on the matter, via its YouTube Creator Blog:

Our Community Guidelines enforcement today is based on a combination of people and technology: Machine learning helps detect potentially harmful content and then sends it to human reviewers for assessment. As a result of the new measures we're taking, we will temporarily start relying more on technology to help with some of the work normally done by reviewers. This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place.

To be fair, the tech industry has been heading in this direction for some time. Relying on armies of human beings to read through every piece of content on the web is expensive, time-consuming, and prone to error. But A.I. and machine learning are still nascent, despite the hype. Google itself, in the aforementioned blog posting, pointed out how its automated systems may flag the wrong videos. Facebook is also receiving criticism that its automated anti-spam system is whacking the wrong posts, including those that offer vital information on the spread of COVID-19.

If the COVID-19 crisis drags on, though, more companies will no doubt turn to automation as a potential solution to disruptions in their workflow and other processes. That will force a steep learning curve; again and again, the rollout of A.I. platforms has demonstrated that, while the potential of the technology is there, implementation is often a rough and expensive process. Just look at Google Duplex.


Nonetheless, an aggressive embrace of A.I. will also create more opportunities for those technologists who have mastered A.I. and machine-learning skills of any sort; these folks may find themselves tasked with figuring out how to automate core processes in order to keep businesses running.

Before the virus emerged, Burning Glass (which analyzes millions of job postings from across the U.S.) estimated that jobs that involve A.I. would grow 40.1 percent over the next decade. That percentage could rise even higher if the crisis fundamentally alters how people across the world live and work. (The median salary for these positions is $105,007; for those with a PhD, it drifts up to $112,300.)

If you're trapped at home and have some time to learn a little bit more about A.I., it could be worth your time to explore online learning resources. For instance, there's a Google crash course in machine learning. Hacker Noon also offers an interesting breakdown of machine learning and artificial intelligence. Then there's Bloomberg's Foundations of Machine Learning, a free online course that teaches advanced concepts such as optimization and kernel methods.

Read more from the original source:
Will COVID-19 Create a Big Moment for AI and Machine Learning? - Dice Insights

Posted in Machine Learning

Self-driving truck boss: 'Supervised machine learning doesn't live up to the hype. It isn't C-3PO, it's sophisticated pattern matching' – The Register

Roundup Let's get cracking with some machine-learning news.

Starsky Robotics is no more: Self-driving truck startup Starsky Robotics has shut down after running out of money and failing to raise more funds.

CEO Stefan Seltz-Axmacher bid a touching farewell to his upstart, founded in 2016, in a Medium post this month. He was upfront and honest about why Starsky failed: "Supervised machine learning doesn't live up to the hype," he declared. "It isn't actual artificial intelligence akin to C-3PO, it's a sophisticated pattern-matching tool."

Neural networks only learn to pick up on certain patterns after they are faced with millions of training examples. But driving is unpredictable, and the same route can differ day to day, depending on the weather or traffic conditions. Trying to model every scenario is not only impossible but expensive.

"In fact, the better your model, the harder it is to find robust data sets of novel edge cases. Additionally, the better your model, the more accurate the data you need to improve it," Seltz-Axmacher said.

"More time and money is needed to provide increasingly incremental improvements. Over time, only the most well-funded startups can afford to stay in the game," he said.

"Whenever someone says autonomy is ten years away, that's almost certainly what their thought is. There aren't many startups that can survive ten years without shipping, which means that almost no current autonomous team will ever ship AI decision makers if this is the case," he warned.

If Seltz-Axmacher is right, then we should start seeing smaller autonomous driving startups shutting down in the near future too. Watch this space.

Waymo to pause testing during Bay Area lockdown: Waymo, Google's self-driving car stablemate, announced it was pausing its operations in California to abide by the lockdown orders in place in Bay Area counties, including San Francisco, Santa Clara, San Mateo, Marin, Contra Costa and Alameda. Businesses deemed non-essential were advised to close and residents were told to stay at home, only popping out for things like buying groceries.

It will, however, continue to perform rides for deliveries and trucking services for its riders and partners in Phoenix, Arizona. These drives will be entirely driverless, to minimise the chance of spreading COVID-19.

Waymo also launched its Open Dataset Challenge, a contest in which developers compete to solve a set of problems based on its self-driving dataset.

Cash prizes are up for grabs, too: the winner can expect to pocket $15,000, second place gets $5,000, and third $2,000.

You can find out more details on the rules of the competition and how to enter here. The challenge is open until 31 May.

More free resources to fight COVID-19 with AI: Tech companies are trying to chip in and do what they can to help quell the coronavirus pandemic. Nvidia and Scale AI both offered free resources to help developers using machine learning to further COVID-19 research.

Nvidia is providing a free 90-day license to Parabricks, a software package that speeds up the process of analyzing genome sequences using GPUs. The rush is on to analyze the genetic information of people who have been infected with COVID-19 to find out how the disease spreads and which communities are most at risk. Sequencing genomes requires a lot of number crunching; Parabricks slashes the time needed to complete the task.

"Given the unprecedented spread of the pandemic, getting results in hours versus days could have an extraordinary impact on understanding the virus's evolution and the development of vaccines," it said this week.

Interested customers who have access to Nvidia's GPUs should fill out a form requesting access to Parabricks.

"Nvidia is inviting our family of partners to join us in matching this urgent effort to assist the research community. We're in discussions with cloud service providers and supercomputing centers to provide compute resources and access to Parabricks on their platforms."

Next up is Scale AI, the San Francisco-based startup focused on annotating data for machine learning models. It is offering its labeling services for free to any researcher working on a potential vaccine, or on tracking, containing, or diagnosing COVID-19.

"Given the scale of the pandemic, researchers should have every tool at their disposal as they try to track and counter this virus," it said in a statement.

"Researchers have already shown how new machine learning techniques can help shed new light on this virus. But as with all new diseases, this work is much harder when there is so little existing data to go on."

"In those situations, the role of well-annotated data to train models or diagnostic tools is even more critical." If you have a lot of data to analyse and think Scale AI could help, apply for its help here.

PyTorch users, AWS has finally integrated the framework: Amazon has added PyTorch support to Amazon Elastic Inference, its service that lets users attach just the right amount of GPU resources to CPU instances rented through Amazon SageMaker and Amazon EC2, in order to run inference operations on machine learning models.

Amazon Elastic Inference works like this: instead of paying for expensive GPUs, users select the right amount of GPU-powered inference acceleration on top of cheaper CPUs to zip through the inference process.

In order to use the service, however, users will have to convert their PyTorch code into TorchScript, an intermediate representation of a PyTorch model that can run outside of Python. "You can run your models in any production environment by converting PyTorch models into TorchScript," Amazon said this week. That code is then processed by an API in order to use Amazon Elastic Inference.

The instructions to convert PyTorch models into the right format for the service have been described here.
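
As a rough illustration of that conversion step, the sketch below traces a small PyTorch model into TorchScript. The ResNet-18 model, input shape, and file name are illustrative assumptions, not anything specified by Amazon:

```python
import torch
import torchvision.models as models

# Any eager-mode PyTorch model will do; ResNet-18 is just a stand-in
model = models.resnet18(pretrained=True)
model.eval()

# Tracing runs one example input through the model and records the
# operations performed, producing a TorchScript module that can run
# outside of Python
example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# The saved artifact is what would be deployed behind an
# Elastic Inference-enabled endpoint
traced.save("resnet18_traced.pt")
```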


The rest is here:
Self-driving truck boss: 'Supervised machine learning doesn't live up to the hype. It isn't C-3PO, it's sophisticated pattern matching' - The Register

Posted in Machine Learning

Put Your Money Where Your Strategy Is: Using Machine Learning to Analyze the Pentagon Budget – War on the Rocks

"A masterpiece" is how then-Deputy Defense Secretary Patrick Shanahan infamously described the Fiscal Year 2020 budget request. It would, he said, align defense spending with the U.S. National Defense Strategy: both funding the future capabilities necessary to maintain an advantage over near-peer powers Russia and China, and maintaining readiness for ongoing counter-terror campaigns.

The result was underwhelming. While research and development funding increased in 2020, it did not represent the funding shift toward future capabilities that observers expected. Despite its massive size, the budget was insufficient to address the department's long-term challenges. Key emerging technologies identified by the department, such as hypersonic weapons, artificial intelligence, quantum technologies, and directed-energy weapons, still lacked a clear and sustained commitment to investment. It was clear that the Department of Defense did not make the difficult tradeoffs necessary to fund long-term modernization. The Congressional Budget Office further estimated that the cost of implementing the plans, which were in any case insufficient to meet the defense strategy's requirements, would be about 2 percent higher than department estimates.

Has anything changed this year? The Department of Defense released its FY2021 budget request Feb. 10, outlining the department's spending priorities for the upcoming fiscal year. As is mentioned every year at its release, the proposed budget is an aspirational document: the actual budget must be approved by Congress. Nevertheless, it is incredibly useful as a strategic document, in part because all programs are justified in descriptions of varying lengths in what are called budget justification books. After analyzing the 10,000-plus programs in the research, development, testing and evaluation budget justification books using a new machine learning model, it is clear that the newest budget's tepid funding for emerging defense technologies fails to shift the department's strategic direction toward long-range strategic competition with a peer or near-peer adversary.

Regardless of your beliefs about the optimal size of the defense budget, or whether the 2018 National Defense Strategy's focus on peer and near-peer conflict is justified, the Department of Defense's two most recent budget requests have been insufficient to fully implement the administration's stated modernization strategy.

To be clear, this is not a call to increase the Department of Defense's budget over its already-gargantuan $705.4 billion FY2021 request. Nor is this the only problem with the federal budget proposal, which included cuts to social safety net programs, programs that are needed now more than ever to mitigate the effects of COVID-19. Instead, my goal is to demonstrate how the budget fails to fund its intended strategy despite its overall excess. Pentagon officials described the budget as funding an "irreversible implementation of the National Defense Strategy," but that is only true in its funding for nuclear capabilities and, to some degree, for hypersonic weapons. Otherwise, it largely neglects emerging technologies.

A Budget for the Last War

The 2018 National Defense Strategy makes clear why emerging technologies are critical to the U.S. military's long-term modernization and ability to compete with peer or near-peer adversaries. The document notes that advanced computing, big data analytics, artificial intelligence, autonomy, robotics, directed energy, hypersonics, and biotechnology are necessary to "ensure we will be able to fight and win the wars of the future." The Government Accountability Office included similar technologies (artificial intelligence, quantum information science, autonomous systems, hypersonic weapons, biotechnology, and more) in a 2018 report on long-range emerging threats identified by federal agencies.

In the Department of Defense's budget press release, the department argued that despite overall flat funding levels, it made numerous hard choices to ensure that resources are directed toward the Department's highest priorities, particularly in technologies now termed "advanced capabilities enablers." These technologies include hypersonic weapons, microelectronics/5G, autonomous systems, and artificial intelligence. Elaine McCusker, the acting undersecretary of defense (comptroller) and chief financial officer, argued, "Any place where we have increases, so for hypersonics or AI for cyber, for nuclear, that's where the money went ... This budget is focused on the high-end fight." (McCusker's nomination for Department of Defense comptroller was withdrawn by the White House in early March because of her concerns over the 2019 suspension of defense funding for Ukraine.) Deputy Defense Secretary David L. Norquist noted that the budget request had the largest research and development request ever.

Despite this, the FY2021 budget is not a significant shift from the FY2020 budget in developing advanced capabilities for competition against a peer or near-peer. I analyzed data from the Army, Navy, Air Force, Missile Defense Agency, Office of the Secretary of Defense, and Defense Advanced Research Projects Agency budget justification books, and the department has still failed to realign its funding priorities toward the long-range emerging technologies that strategic documents suggest should be the highest priority. Aside from hypersonic weapons, which received already-expected funding request increases, most other types of emerging technologies remained mostly stagnant or actually declined from FY2020 request levels.

James Miller and Michael O'Hanlon argued in their analysis of the FY2020 budget that "desires for a larger force have been tacked onto more crucial matters of military innovation," and that the department should instead prioritize quality over quantity. This criticism could be extended to the FY2021 budget, along with the indictment that military innovation itself wasn't fully prioritized either.

Breaking It Down

In this brief review, I attempt to outline funding changes for emerging technologies between the FY2020 and FY2021 budgets based on a machine learning text-classification model, while noting cornerstone programs in each category.

Let's start with the top-level numbers from the R1 document, which divides the budget into seven budget activities. Basic and applied defense research account for 2 percent and 5 percent of the overall FY2021 research and development budget, compared to 38 percent for operational systems development and 27 percent for advanced component development and prototypes. The latter two categories have grown since 2019, in both real terms and as a percentage of the budget, by 2 percent and 5 percent, respectively. These categories were both the largest overall budget activities and also received the largest percentage increases.

Federally funded basic research is critical because it helps develop the capacity for the next generation of applied research. Numerous studies have demonstrated the benefit of federally funded basic science research, with some estimates suggesting two-thirds of the technologies with the most far-reaching impact over the last 50 years [stemmed] from federally funded R&D at national laboratories and research universities. These technologies include the internet, robotics, and foundational subsystems for space-launch vehicles, among others. In fact, a 2019 study in the National Bureau of Economic Research's working paper series found evidence that publicly funded investments in defense research had a "crowding in" effect, significantly increasing private-sector research and development in the recipient industry.

Concerns over the levels of basic research funding are not new. A 2015 report by the MIT Committee to Evaluate the Innovation Deficit argued that declining federal basic research could severely undermine long-term U.S. competitiveness, particularly for research areas that lack obvious real-world applications. This is particularly true given that the share of industry-funded basic research has collapsed, with the authors arguing that U.S. companies are left dependent on federally funded, university-based basic research to fuel innovation. This shift means that federal support of basic research is even more tightly coupled to national economic competitiveness. A 2017 analysis of America's artificial intelligence strategy recommended that the government "[ensure] adequate funding for scientific research," averting the risks of an innovation deficit that could severely undermine long-term competitiveness. Data from the Organization for Economic Cooperation and Development shows that Chinese government research and development spending has already surpassed that of the United States, while Chinese business research and development expenditures are rapidly approaching U.S. levels.

While we may debate the precise levels of basic and applied research and development funding, there is little debate about its ability to produce spillover benefits for the rest of the economy and the public at large. In that sense, the slight declines in basic and applied research funding in both real terms and as a percentage of overall research and development funding hurt the United States in its long-term competition with other major powers.

Clean, Code, Classify

The Defense Department's budget justification books contain thousands of pages of descriptions spread across more than 20 separate PDFs. Each program description explains the progress made each year and justifies the funding request increase or decrease. There is a wealth of information about Department of Defense strategy in these documents, but it is difficult to assess departmental claims about funding for specific technologies or to analyze multiyear trends while the data is in PDF form.

To understand how funding changed for each type of emerging technology, I scraped and cleaned this information from the budget documents, then classified each research and development program into categories of emerging technologies (including artificial intelligence, biotechnologies, directed-energy weapons, hypersonic weapons and vehicles, quantum technologies, autonomous and swarming systems, microelectronics/5G, and non-emerging-technology programs). I designed a random forest machine learning model to sort the remaining programs into these categories. This is an algorithm that uses hundreds of decision trees to identify which variables (in this case, the words in a program description) are most important for classifying data into groups.

There are many kinds of machine learning models that can be used to classify data. To choose one that would most effectively classify the program data, I started by hand-coding 1,200 programs to train three different kinds of models (random forest, k-nearest neighbors, and support vector machine), as well as to build a testing dataset. Each model would look at the term frequency-inverse document frequency (essentially, how often given words appear in a description, weighted by how rare those words are across all descriptions) of all the words in a program's description to decide how to classify each program. For example, for the Army's Long-Range Hypersonic Weapon program, the model might have seen the words "hypersonic," "glide," and "thermal" in the description and guessed that it was most likely a hypersonic program. The random forest model slightly outperformed the support vector machine model and significantly outperformed the k-nearest neighbors model, as well as a simpler method that just looked for specific keywords in a program description.
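
To make the pipeline concrete, here is a minimal sketch of a TF-IDF-plus-random-forest text classifier in scikit-learn. The article does not publish its code, so the library choice, parameters, and toy training data below are my own assumptions:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the ~1,200 hand-coded program descriptions
descriptions = [
    "hypersonic glide body booster and thermal protection testing",
    "quantum information science and atomic sensing research",
    "autonomous swarming unmanned aerial vehicle teaming demonstration",
    "radar signal processing upgrade for a fielded aircraft platform",
]
labels = ["hypersonics", "quantum", "autonomy", "non-emerging"]

# TF-IDF scores each word by how often it appears in a description,
# discounted by how common it is across all descriptions; the random
# forest then classifies those scores using hundreds of decision trees
classifier = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    RandomForestClassifier(n_estimators=500, random_state=0),
)
classifier.fit(descriptions, labels)

# Predict the category of a program description not seen in training
print(classifier.predict(["long-range hypersonic boost glide weapon prototype"]))
```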

Having chosen a machine-learning model to use, I set it to work classifying the remaining 10,000 programs. The final result is a large dataset of programs mentioned in the 2020 and 2021 research and development budgets, including their full descriptions, predicted category, and funding amount for the year of interest. This effort, however, should be viewed as only a rough estimate of how much money each emerging technology is getting. Even a fully hand-coded classification that didn't rely on a machine learning model would be challenged by sometimes-vague program descriptions and programs that fund multiple types of emerging technologies. For example, the Applied Research for the Advancement of S&T Priorities program funds projects across multiple categories, including electronic warfare, human systems, autonomy, and cyber, as well as advanced materials, biomedical, weapons, quantum, and command, control, communications, computers and intelligence. The model took a guess that the program was focused on quantum technologies, but that is clearly a difficult program to classify into a single category.

With the programs sorted and classified by the model, the variation in funding between types of emerging technologies became clear.

Hypersonic Boost-Glide Weapons Win Big

Both the official Department of Defense budget press release and the press briefing singled out hypersonic research and development investment. As one of the department's "advanced capabilities enablers," hypersonic weapons, defenses, and related research received $3.2 billion in the FY2021 budget, which is nearly as much as the other three priorities mentioned in the press release combined (microelectronics/5G, autonomy, and artificial intelligence).

In the 2021 budget documents, there were 96 programs (compared with 60 in the 2020 budget) that the model classified as related to hypersonics based on their program descriptions, combining for $3.36 billion, an increase from 2020's $2.72 billion. This increase was almost solely due to increases in three specific programs, while funding for air-breathing hypersonic weapons and combined-cycle engine development was stagnant.

The three programs driving up the hypersonic budget are the Army's Long-Range Hypersonic Weapon, the Navy's Conventional Prompt Strike, and the Air Force's Air-Launched Rapid Response Weapon program. The Long-Range Hypersonic Weapon received a $620.42 million funding increase to field an experimental prototype with residual combat capability. The Air-Launched Rapid Response Weapon's $180.66 million increase was made possible by the removal of funding for the Air Force's Hypersonic Conventional Strike Weapon in FY2021, which saved $290 million compared with FY2020. This was an interesting decision worthy of further analysis, as the two competing programs seemed to differ in their ambition and technical risk; the Air-Launched Rapid Response Weapon program was designed for pushing the art of the possible, while the conventional strike weapon was focused on integrating already mature technologies. Conventional Prompt Strike received the largest 2021 funding request at $1 billion, an increase of $415.26 million over the 2020 request. Similar to the Army program, the Navy's Conventional Prompt Strike increase was fueled by procurement of the Common Hypersonic Glide Body that the two programs share (along with a Navy-designed 34.5-inch booster), as well as testing and integration on guided missile submarines.

To be sure, the increase in hypersonic funding in the 2021 budget request is important for long-range modernization. However, some of the increases were already planned, and the current funding increase largely neglects air-breathing hypersonic weapons. For example, the Navy's Conventional Prompt Strike 2021 budget request was just $20,000 more than anticipated in the 2020 budget. Programs that explicitly mention scramjet research declined from $156.2 million to $139.9 million.

In contrast to hypersonics, research and development funding for many other emerging technologies was stagnant or declined in the 2021 budget. Non-hypersonic emerging technologies increased from $7.89 billion in 2020 to only $7.97 billion in 2021, mostly due to increases in artificial intelligence-related programs.

Biotechnology, Quantum, Lasers Require Increased Funding

Source: Graphic by the author.

Directed-energy weapons funding fell slightly in the 2021 budget, to $1.66 billion from $1.74 billion in 2020. Notably, the Army is procuring three directed-energy prototypes to support the maneuver short-range air defense mission for $246 million. Several other programs are also noteworthy. First, the High Energy Power Scaling program ($105.41 million) will finalize designs and integrate systems into a prototype 300 kW-class high-energy laser, focusing on managing thermal blooming (a distortion caused by the laser heating the atmosphere through which it travels) for 300 and eventually 500 kW-class lasers. Second, the Air Force's Directed Energy/Electronic Combat program ($89.03 million) tests air-based directed-energy weapons for use in contested environments.

Quantum technologies funding increased by $109 million, to $367 million, in 2021. In general, quantum-related programs are more exploratory, focused on basic and applied research rather than fielding prototypes. They are also typically funded by the Office of the Secretary of Defense or the Defense Advanced Research Projects Agency rather than by the individual services, or they are bundled into larger programs that distribute funding to many emerging technologies. For example, several of the top 2021 programs that the model classified as quantum research and development based on their descriptions include the Office of the Secretary of Defense's Applied Research for the Advancement of S&T Priorities ($54.52 million) and the Defense Advanced Research Projects Agency's Functional Materials and Devices ($28.25 million). The increase in Department of Defense funding for quantum technologies is laudable, but given the potential disruptive ability of quantum technologies, the United States should further increase its federal funding for quantum research and development, guarantee stable long-term funding, and incentivize young researchers to enter the field. The FY2021 budget's funding increase is clearly a positive step, but quantum technologies' revolutionary potential demands more funding than the category currently receives.

Biotechnologies increased from $969 million in 2020 to $1.05 billion in 2021 (my guess is that the model overestimated the funding for emerging biotech programs by including research programs related to soldier health and medicine that involve established technologies). Analyses of defense biotechnology typically focus on the defense applications of human performance enhancement, synthetic biology, and gene-editing technology research. Previous analyses, including one from 2018 in War on the Rocks, have lamented the lack of a comprehensive strategy for biotechnology innovation, as well as funding uncertainties. The Center for Strategic and International Studies argued, "Biotechnology remains an area of investment with respect to countering weapons of mass destruction but otherwise does not seem to be a significant priority in the defense budget." These concerns appear to have been well-founded. Funding has stagnated despite the enormous potential offered by biotechnologies like nanotubes, spider silk, engineered probiotics, and bio-based sensors, many of which could be critical enablers as components of other emerging technologies. For example, this estimate includes the interesting Persistent Aquatic Living Sensors program ($25.7 million), which attempts to use living organisms to detect submarines and unmanned underwater vehicles in littoral waters.

Programs classified as autonomous or swarming research and development declined from $3.5 billion to $2.8 billion in 2021. This includes the Army Robotic Combat Vehicle program (essentially stagnant at $86.22 million, down from $89.18 million in 2020). The Skyborg autonomous attritable drone program (attritable meaning a low-cost, unmanned system that doesn't have to be recovered after launch) requested $40.9 million and also falls into the autonomy category, as do the Air Force's Golden Horde ($72.09 million), the Office of the Secretary of Defense's manned-unmanned teaming Avatar program ($71.4 million), and the Navy's Low-Cost UAV Swarming Technology (LOCUST) program ($34.79 million).

The programs sorted by the model into the artificial intelligence category increased from $1.36 billion to $1.98 billion in 2021. This increase is driven by an admirable proliferation of smaller programs: 161 programs under $50 million, compared with 119 in 2020. However, as the Department of Defense reported that artificial intelligence research and development received only $841 million in the 2021 budget request, it is clear that the random forest model is picking up some false positives for artificial intelligence funding.

Some critics argue that federal funding risks duplicating artificial intelligence efforts in the commercial sector. There are several problems with this argument, however. First, a 2017 report on U.S. artificial intelligence strategy argued, "There also tends to be shortfalls in the funding available to research and start-ups for which the potential for commercialization is limited or unlikely to be lucrative in the foreseeable future." Second, there are a number of technological, process, personnel, and cultural challenges in the transition of artificial intelligence technologies from commercial development to defense applications. Finally, the Trump administration's anti-immigration policies hamstring U.S. technological and industrial base development, particularly in artificial intelligence, as immigrants are responsible for one-quarter of startups in the United States.

The Neglected Long Term

While there are individual examples of important programs that advance the U.S. military's long-term competitiveness, particularly for hypersonic weapons, the overall 2021 budget fails to shift its research and development funding toward emerging technologies and basic research.

Given that the overall budget was essentially flat, it should not come as a surprise that research and development funding for emerging technologies was mostly flat as well. But the United States already spends far more on defense than any other country, and even with a flat budget, the allocation of funding for emerging technologies does not reflect an increased focus on long-term planning for high-end competition compared with the 2020 budget. Specifically, the United States should increase its funding for emerging technologies other than hypersonics (directed energy, biotech, and quantum information sciences), as well as for basic scientific research, even if it requires tradeoffs in other areas.

The problem isn't necessarily the year-to-year changes between the FY2020 and FY2021 budgets. Instead, the problem is that proposed FY2021 funding for emerging technologies continues the previous year's underwhelming support for research and development relative to the Department of Defense's strategic goals. This is the critical point of my assessment of the budget: despite multiple opportunities to align funding with strategy, emerging technologies and basic research have not received the scale of investment that the National Defense Strategy argues they deserve.

Chad Peltier is a senior defense analyst at Janes, where he specializes in emerging defense technologies, Chinese military modernization, and data science. This article does not reflect the views of his employer.

Image: U.S. Army (Photo by Monica K. Guthrie)

Go here to read the rest:
Put Your Money Where Your Strategy Is: Using Machine Learning to Analyze the Pentagon Budget - War on the Rocks

Posted in Machine Learning

Data to the Rescue! Predicting and Preventing Accidents at Sea – JAXenter

Watch Dr. Yonit Hoffman's Machine Learning Conference session

Accidents at sea happen all the time. Their costs in terms of lives, money and environmental destruction are huge. Wouldn't it be great if they could be predicted and perhaps prevented? Dr. Yonit Hoffman's Machine Learning Conference session discusses new ways of preventing sea accidents with the power of data science.

Does machine learning hold the key to preventing accidents at sea?

With more than 350 years of history, the marine insurance industry is the first data science profession to try to predict accidents and estimate future risk. Yet the old ways no longer work; new waves of data and algorithms can offer significant improvements and are going to revolutionise the industry.

In her Machine Learning Conference session, Dr. Yonit Hoffman will show that it is now possible to predict accidents, and how data on a ship's behaviour, such as location, speed, maps and weather, can help. She will show how fragments of information on ship movements can be gathered and taken all the way to machine learning models. In this session, she discusses the challenges, including introducing machine learning to an industry that still uses paper and quills (yes, really!) and explaining the models using SHAP.
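
For readers unfamiliar with SHAP, the sketch below shows the general pattern of explaining a tree-based model's predictions with it. The features and data are invented stand-ins, not Windward's actual model:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Invented vessel-behaviour features: speed, distance from shore, wave height
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # stand-in "accident" label

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer attributes each individual prediction to the input
# features, showing how much each one pushed the output up or down
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)  # per-feature contributions for the first five vessels
```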

Dr. Yonit Hoffman is a Senior Data Scientist at Windward, a world leader in maritime risk analytics. Before investigating supertanker accidents, she researched human cells and cancer at the Weizmann Institute, where she received her PhD and MSc. in Bioinformatics. Yonit also holds a BSc. in computer science and biology from Tel Aviv University.

Go here to see the original:
Data to the Rescue! Predicting and Preventing Accidents at Sea - JAXenter

Posted in Machine Learning