Search Immortality Topics:

AI Stocks: Flex, Jabil Called Alternative AI Hardware Plays – Investor’s Business Daily

Posted: March 10, 2024 at 3:17 am

Read the original here:

AI Stocks: Flex, Jabil Called Alternative AI Hardware Plays - Investor's Business Daily

Recommendation and review posted by G. Smith

What to know about this AI stock with ties to Nvidia up nearly 170% in 2024 – CNBC

Posted: March 10, 2024 at 3:17 am

Investors may want to keep an eye on this artificial intelligence voice-and-speech recognition stock with ties to Nvidia. Shares of SoundHound AI have surged almost 170% this year and nearly 347% in February alone as investors bet on new applications for the booming technology trend that has taken Wall Street by storm. Last month, Nvidia revealed a $3.7 million bet on the stock in a securities filing, and management said on an earnings call that "demand is going through the roof."

"We continue to believe that the company is in a strong position to capture its fair share of the AI chatbot market demand wave with its technology providing more use cases going forward," wrote Wedbush Securities analyst Dan Ives in a February note.

While the Nvidia investment isn't new news for investors and analysts, it does reinforce SoundHound's value proposition. Ives also noted that the stake "solidifies the company's brand within the AI Revolution" and lays the groundwork for a potential larger investment in the future.

Relatively few Wall Street shops cover the AI stock. A little more than 80% rate it with a buy or overweight rating, with consensus price targets suggesting upside of nearly 24%, per FactSet. The company also sits at a roughly $1.7 billion market capitalization and has yet to attain profitability.

Expanding its total addressable market

Along with its Nvidia relationship, SoundHound has partnered with a slew of popular restaurant brands, automakers and hospitality companies to provide AI voice customer solutions. While the company works with about a quarter of total automobile companies, "the penetration into that customer set only amounts to 1-2% of global sales, leaving significant room for growth within the current customer base as well as growth from adding new brands," said Ladenburg Thalmann's Glenn Mattson in a January note initiating coverage with a buy rating. "With voice enabled units expected to grow to 70% of shipments by 2026, this represents a significant growth opportunity, in our view," he added.

SoundHound has also made significant headway within the restaurant industry, recently adding White Castle, Krispy Kreme and Jersey Mike's to its growing list of customers, analysts note. That total addressable market should continue growing as major players such as McDonald's, DoorDash and Wendy's hunt for ways to expand AI voice use, said D.A. Davidson's Gil Luria. He estimates an $11 billion total addressable market when accounting for the immediate opportunities from quick-service restaurants and original equipment manufacturers.

"SoundHound's long term opportunity is attractive and largely up for grabs," he said in a September note initiating coverage with a buy rating. "Given the high degree of technical complexity required to create value in this space, we see SoundHound with its best-of-breed solution as a likely winner and expect it to win significant market share."

Headwinds to profitability

While demand for SoundHound AI's products appears to be accelerating, investors should beware of a bumpy road ahead. Cantor Fitzgerald's Brett Knoblauch noted that being in the early stages of product adoption creates uncertainties surrounding the "pace of revenue growth and timeline to positive FCF." Although H.C. Wainwright's Scott Buck views SoundHound's significant bookings backlog and accelerating revenue growth as supportive of a premium valuation, he noted that the recent acquisition of restaurant automation technology company SYNQ3 could delay profitability to next year. But "we suspect the longer term financial and operating benefits to meaningfully outweigh short-term profitability headwinds," he said. "We recommend investors continue to accumulate SOUN shares ahead of stronger operating results."

Go here to read the rest:

What to know about this AI stock with ties to Nvidia up nearly 170% in 2024 - CNBC

Recommendation and review posted by G. Smith

NIST, the lab at the center of Biden's AI safety push, is decaying – The Washington Post

Posted: March 10, 2024 at 3:17 am

At the National Institute of Standards and Technology, the government lab overseeing the most anticipated technology on the planet, black mold has forced some workers out of their offices. Researchers sleep in their labs to protect their work during frequent blackouts. Some employees have to carry hard drives to other buildings; flaky internet won't allow for the sending of large files.

And a leaky roof forces others to break out plastic sheeting.

"If we knew rain was coming, we'd tarp up the microscope," said James Fekete, who served as chief of NIST's applied chemicals and materials division until 2018. "It leaked enough that we were prepared."

NIST is at the heart of President Biden's ambitious plans to oversee a new generation of artificial intelligence models; through an executive order, the agency is tasked with developing tests for security flaws and other harms. But budget constraints have left the 123-year-old lab with a skeletal staff on key tech teams and most facilities on its main Gaithersburg, Md., and Boulder, Colo., campuses below acceptable building standards.

Interviews with more than a dozen current and former NIST employees, Biden administration officials, congressional aides and tech company executives, along with reports commissioned by the government, detail a massive resources gap between NIST and the tech firms it is tasked with evaluating, a discrepancy some say risks undermining the White House's ambitious plans to set guardrails for the burgeoning technology. Many of the people spoke to The Washington Post on the condition of anonymity because they were not authorized to speak to the media.

Even as NIST races to set up the new U.S. AI Safety Institute, the crisis at the degrading lab is becoming more acute. On Sunday, lawmakers released a new spending plan that would cut NIST's overall budget by more than 10 percent, to $1.46 billion. While lawmakers propose to invest $10 million in the new AI institute, that's a fraction of the tens of billions of dollars tech giants like Google and Microsoft are pouring into the race to develop artificial intelligence. It pales in comparison to Britain, which has invested more than $125 million into its AI safety efforts.

"The cuts to the agency are a self-inflicted wound in the global tech race," said Divyansh Kaushik, the associate director for emerging technologies and national security at the Federation of American Scientists.

Some in the AI community worry that underfunding NIST makes it vulnerable to industry influence. Tech companies are chipping in for the expensive computing infrastructure that will allow the institute to examine AI models. Amazon announced that it would donate $5 million in computing credits. Microsoft, a key investor in OpenAI, will provide engineering teams along with computing resources. (Amazon founder Jeff Bezos owns The Post.)

Tech executives, including OpenAI CEO Sam Altman, are regularly in communication with officials at the Commerce Department about the agency's AI work. OpenAI has lobbied NIST on artificial intelligence issues, according to federal disclosures. NIST asked TechNet, an industry trade group whose members include OpenAI, Google and other major tech companies, whether its member companies can advise the AI Safety Institute.

NIST is also seeking feedback from academics and civil society groups on its AI work. The agency has a long history of working with a variety of stakeholders to gather input on technologies, Commerce Department spokesman Charlie Andrews said.

AI staff, unlike their more ergonomically challenged colleagues, will be working in well-equipped offices on the Gaithersburg campus, in the Commerce Department's D.C. office and at the NIST National Cybersecurity Center of Excellence in Rockville, Md., Andrews said.

White House spokeswoman Robyn Patterson said the appointment of Elizabeth Kelly to the helm of the new AI Safety Institute underscores the White House's commitment to getting this work done right and on time. Kelly previously served as special assistant to the president for economic policy.

"The Biden-Harris administration has so far met every single milestone outlined by the president's landmark executive order," Patterson said. "We are confident in our ability to continue to effectively and expeditiously meet the milestones and directives set forth by President Biden to protect Americans from the potential risks of AI systems while catalyzing innovation in AI and beyond."

NIST's financial struggles highlight the limitations of the administration's plan to regulate AI exclusively through the executive branch. Without an act of Congress, there is no new funding for initiatives like the AI Safety Institute and the programs could be easily overturned by the next president. And as the presidential elections approach, the prospects of Congress moving on AI in 2024 are growing dim.

During his State of the Union address on Thursday, Biden called on Congress to "harness the promise of AI and protect us from its peril."

Congressional aides and former NIST employees say the agency has not been able to break through as a funding priority even as lawmakers increasingly tout its role in addressing technological developments, including AI, chips and quantum computing.

After this article published, Senate Majority Leader Charles E. Schumer (D-N.Y.) on Thursday touted the $10 million investment in the institute in the proposed budget, saying he "fought for this funding to make sure that the development of AI prioritizes both innovation and safety."

A review of NIST's safety practices in August found that the budgetary issues endanger employees, alleging that the agency has an incomplete and superficial approach to safety.

"Chronic underfunding of the NIST facilities and maintenance budget has created unsafe work conditions and further fueled the impression among researchers that safety is not a priority," said the NIST safety commission report, which was commissioned following the 2022 death of an engineering technician at the agency's fire research lab.

NIST is one of the federal government's oldest science agencies with one of the smallest budgets. Initially called the National Bureau of Standards, it began at the dawn of the 20th century, as Congress realized the need to develop more standardized measurements amid the expansion of electricity, the steam engine and railways.

The need for such an agency was underscored three years after its founding, when fire ravaged Baltimore. Firefighters from Washington, Philadelphia and even New York rushed to help put out the flames, but without standard couplings, their hoses couldn't connect to the Baltimore hydrants. The firefighters watched as the flames overtook more than 70 city blocks in 30 hours.

NIST developed a standard fitting, unifying more than 600 different types of hose couplings deployed across the country at the time.

Ever since, the agency has played a critical role in using research and science to help the country learn from catastrophes and prevent new ones. Its work expanded after World War II: It developed an early version of the digital computer, crucial Space Race instruments and atomic clocks, which underpin GPS. In the 1950s and 1960s, the agency moved to new campuses in Boulder and Gaithersburg after its early headquarters in Washington fell into disrepair.

Now, scientists at NIST joke that they work at the most advanced labs in the world (in the 1960s). Former employees describe cutting-edge scientific equipment surrounded by decades-old buildings that make it impossible to control the temperature or humidity to conduct critical experiments.

"You see dust everywhere because the windows don't seal," former acting NIST director Kent Rochford said. "You see a bucket catching drips from a leak in the roof. You see Home Depot dehumidifiers or portable AC units all over the place."

The flooding was so bad that Rochford said he once requested money for scuba gear. That request was denied, but he did receive funding for an emergency kit that included squeegees to clean up water.

Pests and wildlife have at times infiltrated its campuses, including an incident where a garter snake entered a Boulder building.

More than 60 percent of NIST facilities do not meet federal standards for acceptable building conditions, according to a February 2023 report commissioned by Congress from the National Academies of Sciences, Engineering and Medicine. The poor conditions impact employee output. Workarounds and do-it-yourself repairs reduce the productivity of research staff by up to 40 percent, according to the committee's interviews with employees during a laboratory visit.

Years after Rochford's 2018 departure, NIST employees are still deploying similar MacGyver-style workarounds. Each year between October and March, low humidity in one lab creates a static charge that makes it impossible to operate an instrument used to verify that companies meet environmental standards for greenhouse gases.

Problems with the HVAC and specialized lights have made the agency unable to meet demand for reference materials, which manufacturers use to check whether their measurements are accurate in products like baby formula.

Facility problems have also delayed critical work on biometrics, including evaluations of facial recognition systems used by the FBI and other law enforcement agencies. The data center in the 1966 building that houses that work receives inadequate cooling, and employees there spend about 30 percent of their time trying to mitigate problems with the lab, according to the academies' reports. Scheduled outages are required to maintain the data centers that hold technology work, knocking all biometric evaluations offline for a month each year.

Fekete, the scientist who recalled covering the microscope, said his team's device never completely stopped working due to rainwater.

But other NIST employees haven't been so lucky. Leaks and floods destroyed an electron microscope worth $2.5 million used for semiconductor research, and permanently damaged an advanced scale called a Kibble balance. The tool was out of commission for nearly five years.

Despite these constraints, NIST has built a reputation as a natural interrogator of swiftly advancing AI systems.

In 2019, the agency released a landmark study confirming facial recognition systems misidentify people of color more often than White people, casting scrutiny on the technology's popularity among law enforcement. Due to personnel constraints, only a handful of people worked on that project.

Four years later, NIST released early guidelines around AI, cementing its reputation as a government leader on the technology. To develop the framework, the agency connected with leaders in industry, civil society and other groups, earning a strong reputation among numerous parties as lawmakers began to grapple with the swiftly evolving technology.

The work made NIST a natural home for the Biden administration's AI red-teaming efforts and the AI Safety Institute, which were formalized in the November executive order. Vice President Harris touted the institute at the U.K. AI Safety Summit in November. More than 200 civil society organizations, academics and companies, including OpenAI and Google, have signed on to participate in a consortium within the institute.

OpenAI spokeswoman Kayla Wood said in a statement that the company supports NIST's work, and that the company plans to continue to work with the lab to "support the development of effective AI oversight measures."

Under the executive order, NIST has a laundry list of initiatives that it needs to complete by this summer, including publishing guidelines for how to red-team AI models and launching an initiative to guide evaluating AI capabilities. In a December speech at the machine learning conference NeurIPS, the agency's chief AI adviser, Elham Tabassi, said this would be an almost impossible deadline.

"It is a hard problem," said Tabassi, who was recently named the chief technology officer of the AI Safety Institute. "We don't know quite how to evaluate AI."

"The NIST staff has worked tirelessly to complete the work it is assigned by the AI executive order," said Andrews, the Commerce spokesperson.

"While the administration has been clear that additional resources will be required to fully address all of the issues posed by AI in the long term, NIST has been effectively carrying out its responsibilities under the [executive order] and is prepared to continue to lead on AI-related research and other work," he said.

Commerce Secretary Gina Raimondo asked Congress to allocate $10 million for the AI Safety Institute during an event at the Atlantic Council in January. The Biden administration also requested more funding for NIST facilities, including $262 million for safety, maintenance and repairs. Congressional appropriators responded by cutting NIST's facilities budget.

The administration's ask falls far below the recommendations of the national academies study, which urged Congress to provide $300 million to $400 million in additional annual funding over 12 years to overcome a backlog of facilities damage. The report also calls for $120 million to $150 million per year for the same period to stabilize the effects of further deterioration and obsolescence.

Ross B. Corotis, who chaired the academies' committee that produced the facilities report, said Congress needs to ensure that NIST is funded because it is the go-to lab when any new technology emerges, whether that's chips or AI.

"Unless you're going to build a whole new laboratory for some particular issue, you're going to turn first to NIST," Corotis said. "And NIST needs to be ready for that."

Eva Dou and Nitasha Tiku contributed to this report.

Continue reading here:

NIST, the lab at the center of Biden's AI safety push, is decaying - The Washington Post

Recommendation and review posted by G. Smith

Nvidia, the tech company more valuable than Google and Amazon, explained – Vox.com

Posted: March 10, 2024 at 3:17 am

Only four companies in the world are worth over $2 trillion: Apple, Microsoft, the oil company Saudi Aramco and, as of 2024, Nvidia. It's understandable if the name doesn't ring a bell. The company doesn't exactly make a shiny product attached to your hand all day, every day, as Apple does. Nvidia designs a chip hidden deep inside the complicated innards of a computer, a seemingly niche product that more and more people rely on every day.

Rewind the clock to 2019, and Nvidia's market value was hovering around $100 billion. Its incredible speedrun to 20 times that already enviable size was really enabled by one thing: the AI craze. Nvidia is arguably the biggest winner in the AI industry. ChatGPT-maker OpenAI, which catapulted this obsession into the mainstream, is currently worth around $80 billion, and according to market research firm Grand View Research, the entire global AI market was worth a bit under $200 billion in 2023. Both are just a paltry fraction of Nvidia's value. With all eyes on the company's jaw-dropping evolution, the real question now is whether Nvidia can hold on to its lofty perch, but here's how the company got to this level.

In 1993, long before uncanny AI-generated art and amusing AI chatbot convos took over our social media feeds, three Silicon Valley electrical engineers launched a startup that would focus on an exciting, fast-growing segment of personal computing: video games.

Nvidia was founded to design a specific kind of chip called a graphics card, also commonly called a GPU (graphics processing unit), that enables the output of fancy 3D visuals on the computer screen. The better the graphics card, the more quickly high-quality visuals can be rendered, which is important for things like playing games and video editing. In the prospectus filed ahead of its initial public offering in 1999, Nvidia noted that its future success would depend on the continued growth of computer applications relying on 3D graphics. For most of Nvidia's existence, game graphics were Nvidia's raison d'être.

Ben Bajarin, CEO and principal analyst at the tech industry research firm Creative Strategies, acknowledged that Nvidia had been relatively isolated to a niche part of computing in the market until recently.

Nvidia became a powerhouse selling cards for video games (now an entertainment industry juggernaut making over $180 billion in revenue last year), but it realized it would be smart to branch out from just making graphics cards for games. Not all its experiments panned out. Over a decade ago, Nvidia made a failed gambit to become a major player in the mobile chip market, but today Android phones use a range of non-Nvidia chips, while iPhones use Apple-designed ones.

Another play, though, not only paid off; it became the reason we're all talking about Nvidia today. In 2006, the company released a programming language called CUDA that, in short, unleashed the power of its graphics cards for more general computing processes. Its chips could now do a lot of heavy lifting for tasks unrelated to pumping out pretty game graphics, and it turned out that graphics cards could multitask even better than the CPU (central processing unit), what's often called the central brain of a computer. This made Nvidia's GPUs great for calculation-heavy tasks like machine learning (and crypto mining). 2006 was the same year Amazon launched its cloud computing business; Nvidia's push into general computing was coming at a time when massive data centers were popping up around the world.
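To make that shift concrete, here is a minimal sketch of the general-purpose GPU computing model that CUDA popularized: many lightweight threads each process one element of a large array in parallel. It assumes a CUDA-capable GPU and the numba Python package; the kernel and variable names are illustrative, not taken from any Nvidia material.

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale_and_add(x, y, out):
    """Each GPU thread computes one element of the result."""
    i = cuda.grid(1)           # global index of this thread
    if i < x.size:             # guard threads that fall past the end of the array
        out[i] = 2.0 * x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks_per_grid = (n + threads_per_block - 1) // threads_per_block

# Launch the kernel; Numba copies the arrays to and from the GPU.
scale_and_add[blocks_per_grid, threads_per_block](x, y, out)
print(out[:5])
```

The same pattern, thousands of independent threads applying one operation to different slices of data, is what makes GPUs well suited to the matrix arithmetic at the heart of machine learning.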

That Nvidia is a powerhouse today is especially notable because for most of Silicon Valley's history, there already was a chip-making goliath: Intel. Intel makes both CPUs and GPUs, as well as other products, and it manufactures its own semiconductors, but after a series of missteps, including not investing in the development of AI chips soon enough, the rival chipmaker's preeminence has somewhat faded. In 2019, when Nvidia's market value was just over the $100 billion mark, Intel's value was double that; now Nvidia has joined the ranks of tech titans designated the Magnificent Seven, a cabal of tech stocks with a combined value that exceeds the entire stock market of many rich G20 countries.

"Their competitors were asleep at the wheel," says Gil Luria, a senior analyst at the financial firm D.A. Davidson Companies. "Nvidia has long talked about the fact that GPUs are a superior technology for handling accelerated computing."

Today, Nvidia's four main markets are gaming, professional visualization (like 3D design), data centers, and the automotive industry, as it provides chips that train self-driving technology. A few years ago, its gaming market was still the biggest chunk of revenue at about $5.5 billion, compared to its data center segment, which raked in about $2.9 billion. Then the pandemic broke out. People were spending a lot more time at home, and demand for computer parts, including GPUs, shot up; gaming revenue for the company in fiscal year 2021 jumped a whopping 41 percent. But there were already signs of the coming AI wave, too, as Nvidia's data center revenue soared by an even more impressive 124 percent. In 2023, its revenue was 400 percent higher than the year before. In a clear display of how quickly the AI race ramped up, data centers have overtaken games, even in a gaming boom.

When it went public in 1999, Nvidia had 250 employees. Now it has over 27,000. Jensen Huang, Nvidia's CEO and one of its founders, has a personal net worth that currently hovers around $70 billion, an over 1,700 percent increase since 2019.

It's likely you've already brushed up against Nvidia's products, even if you don't know it. Older gaming consoles like the PlayStation 3 and the original Xbox had Nvidia chips, and the current Nintendo Switch uses an Nvidia mobile chip. Many mid- to high-range laptops come packed with an Nvidia graphics card as well.

But with the AI bull rush, the company promises to become more central to the tech people use every day. Tesla cars' self-driving feature utilizes Nvidia chips, as do practically all major tech companies' cloud computing services. These services serve as a backbone for so much of our daily internet routines, whether it's streaming content on Netflix or using office and productivity apps. To train ChatGPT, OpenAI harnessed tens of thousands of Nvidia's AI chips together. People underestimate how much they use AI on a daily basis, because we don't realize that some of the automated tasks we rely on have been boosted by AI. Popular apps and social media platforms are adding new AI features seemingly every day: TikTok, Instagram, X (formerly Twitter), even Pinterest all boast some kind of AI functionality to toy with. Slack, a messaging platform that many workplaces use, recently rolled out the ability to use AI to generate thread summaries and recaps of Slack channels.

For Nvidia's customers, the problem with sizzling demand is that the company can charge eye-wateringly high prices. The chips used for AI data centers cost tens of thousands of dollars, with the top-of-the-line product sometimes selling for over $40,000 on sites like Amazon and eBay. Last year, some clients clamoring for Nvidia's AI chips were waiting as much as 11 months.

Just think of Nvidia as the Birkin bag of AI chips. A comparable offering from another chipmaker, AMD, is reportedly being sold to customers like Microsoft for about $10,000 to $15,000, just a fraction of what Nvidia charges. It's not just the AI chips, either. Nvidia's gaming business continues to boom, and the price gap between its high-end gaming card and a similarly performing one from AMD has been growing wider. In its last financial quarter, Nvidia reported a gross margin of 76 percent. As in, it cost them just 24 cents to make a dollar in sales. AMD's most recent gross margin was only 47 percent.

Nvidia's fans argue that its yawning lead was earned by making an early bet that AI would take over the world; its chips are worth the price because of its superior software, and because so much of AI infrastructure has already been built around Nvidia's products. But Erik Peinert, a research manager and editor at the American Economic Liberties Project who helped put together a recent report on competition within the chip industry, notes that Nvidia has gotten a price boost because TSMC, the biggest semiconductor maker in the world, has struggled for years to keep up with demand. A recent Wall Street Journal report also suggested that the company may be throwing its weight around to maintain dominance; the CEO of an AI chip startup called Groq claimed that customers were scared Nvidia would punish them with order delays if it got wind they were meeting with other chip makers.

It's undeniable that Nvidia invested in courting the AI industry well before others started paying attention, but its grip on the market isn't unshakable. An army of competitors is on the march, ranging from smaller startups to deep-pocketed opponents, including Amazon, Meta, Microsoft, and Google, all of which currently use Nvidia chips. "The biggest challenge for Nvidia is that their customers want to compete with them," says Luria.

It's not just that their customers want to make some of the money that Nvidia has been raking in; it's that they can't afford to keep paying so much. "Microsoft went from spending less than 10 percent of their capital expenditure on Nvidia to spending nearly 40 percent," Luria says. "That's not sustainable."

The fact that over 70 percent of AI chips are bought from Nvidia is also cause for concern for antitrust regulators around the world; the EU recently started looking into the industry for potential antitrust abuses. When Nvidia announced in late 2020 that it wanted to spend an eye-popping $40 billion to buy Arm Limited, a company that designs a chip architecture that most modern smartphones and newer Apple computers use, the FTC blocked the deal. "That acquisition was pretty clearly intended to get control over a software architecture that most of the industry relied on," says Peinert. "The fact that they have so much pricing power, and that they're not facing any effective competition, is a real concern."

Whether Nvidia will sustain itself as a $2 trillion company or rise to even greater heights depends, fundamentally, on whether both consumer and investor attention on AI can be sustained. Silicon Valley is awash with newly founded AI companies, but what percentage of them will take off, and how long will funders keep pouring money into them?

Widespread AI awareness came about because ChatGPT was an easy-to-use (or at least easy-to-show-off-on-social-media) novelty for the general public to get excited about. But a lot of AI work is still focusing on AI training rather than what's called AI inferencing, which involves using trained AI models to solve a task, like the way that ChatGPT answers a user's query or facial recognition tech identifies people. Though the AI inference market is growing (and maybe growing faster than expected), much of the sector is still going to be spending a lot more time and money on training. For training, Nvidia's first-class chips will likely remain the most coveted, at least for a while. But once AI inferencing explodes, there will be less of a need for such high-performance chips, and Nvidia's primacy could slip.

Some financial analysts and industry experts have expressed wariness over Nvidia's stratospheric valuation, suspecting that AI enthusiasm will slow down and that there may already be too much money going toward making AI chips. Traffic to ChatGPT has dropped off since last May, and some investors are slowing down the money hose.

"Every big technology goes through an adoption cycle," says Luria. "As it comes into consciousness, you build this huge hype. Then at some point, the hype gets too big, and then you get past it and get into the trough of disillusionment." He expects to see that soon with AI, though that doesn't mean it's a bubble.

Nvidia's revenue last year was about $60 billion, which was a 126 percent increase from the prior year. Its high valuation and stock price are based not just on that revenue, though, but on its predicted continued growth; for comparison, Amazon currently has a lower market value than Nvidia yet made almost $575 billion in sales last year. The path to Nvidia booking large enough profits to justify the $2 trillion valuation looks steep to some experts, especially knowing that the competition is kicking into high gear.

There's also the possibility that Nvidia could be stymied by how fast microchip technology can advance. It has moved at a blistering pace in the last several decades, but there are signs that the pace at which more transistors can be fitted onto a microchip, making them smaller and more powerful, is slowing down. Whether Nvidia can keep offering meaningful hardware and software improvements that convince its customers to buy its latest AI chips could be a challenge, says Bajarin.

Yet, for all these possible obstacles, if one were to bet whether Nvidia will soon become as familiar a tech company as Apple and Google, the safe answer is yes. AI fever is why Nvidia is in the rarefied club of trillion-dollar companies, but it may be just as true to say that AI is so big because of Nvidia.

Here is the original post:

Nvidia, the tech company more valuable than Google and Amazon, explained - Vox.com

Recommendation and review posted by G. Smith

AI drone that could hunt and kill people built in just hours by scientist ‘for a game’ – Livescience.com

Posted: March 10, 2024 at 3:17 am

It only takes a few hours to configure a small, commercially available drone to hunt down a target by itself, a scientist has warned.

Luis Wenus, an entrepreneur and engineer, incorporated an artificial intelligence (AI) system into a small drone to chase people around "as a game," he wrote in a post on March 2 on X, formerly known as Twitter. But he soon realized it could easily be configured to contain an explosive payload.

Collaborating with Robert Lukoszko, another engineer, he configured the drone to use an object-detection model to find people and fly toward them at full speed, he said. The engineers also built facial recognition into the drone, which works at a range of up to 33 feet (10 meters). This means a weaponized version of the drone could be used to attack a specific person or set of targets.

"This literally took just a few hours to build, and made me realize how scary it is," Wenus wrote. "You could easily strap a small amount of explosives on these and let 100's of them fly around. We check for bombs and guns but THERE ARE NO ANTI-DRONE SYSTEMS FOR BIG EVENTS & PUBLIC SPACES YET."

Wenus described himself as an "open source absolutist," meaning he believes in always sharing code and software through open source channels. He also identifies as an "e/acc," a school of thinking among AI researchers that favors accelerating AI research regardless of the downsides, based on a belief that the upsides will always outweigh them. He said, however, that he would not publish any code relating to this experiment.

He also warned that a terror attack could be orchestrated in the near future using this kind of technology. While people need technical knowledge to engineer such a system, it will become easier and easier to write the software as time passes, partially due to advancements in AI as an assistant in writing code, he noted.

Wenus said his experiment showed that society urgently needs to build anti-drone systems for civilian spaces where large crowds could gather. There are several countermeasures that society can build, according to Robin Radar, including cameras, acoustic sensors and radar to detect drones. Disrupting them, however, could require technologies such as radio frequency jammers, GPS spoofers, net guns, as well as high-energy lasers.

While such weapons haven't been deployed in civilian environments, they have been previously conceptualized and deployed in the context of warfare. Ukraine, for example, has developed explosive drones in response to Russia's invasion, according to the Wall Street Journal (WSJ).

The U.S. military is also working on ways to build and control swarms of small drones that can attack targets. The work builds on the U.S. Navy's efforts since it first demonstrated, in 2017, that it could control a swarm of 30 drones carrying explosives, according to MIT Technology Review.

Read more from the original source:

AI drone that could hunt and kill people built in just hours by scientist 'for a game' - Livescience.com

Recommendation and review posted by G. Smith

AI makes a rendezvous in space | Stanford News – Stanford University News

Posted: March 10, 2024 at 3:17 am

Researchers from the Stanford Center for AEroSpace Autonomy Research (CAESAR) in the robotic testbed, which can simulate the movements of autonomous spacecraft. (Image credit: Andrew Brodhead)

Space travel is complex, expensive, and risky. Great sums and valuable payloads are on the line every time one spacecraft docks with another. One slip and a billion-dollar mission could be lost. Aerospace engineers believe that autonomous control, like the sort guiding many cars down the road today, could vastly improve mission safety, but the complexity of the mathematics required for error-free certainty is beyond anything on-board computers can currently handle.

In a new paper presented at the IEEE Aerospace Conference in March 2024, a team of aerospace engineers at Stanford University reported using AI to speed the planning of optimal and safe trajectories between two or more docking spacecraft. They call it ART, the Autonomous Rendezvous Transformer, and they say it is the first step toward an era of safer and more trustworthy self-guided space travel.

In autonomous control, the number of possible outcomes is massive; with no room for error, the possibilities are essentially open-ended.

"Trajectory optimization is a very old topic. It has been around since the 1960s, but it is difficult when you try to match the performance requirements and rigid safety guarantees necessary for autonomous space travel within the parameters of traditional computational approaches," said Marco Pavone, an associate professor of aeronautics and astronautics and co-director of the new Stanford Center for AEroSpace Autonomy Research (CAESAR). "In space, for example, you have to deal with constraints that you typically do not have on the Earth, like, for example, pointing at the stars in order to maintain orientation. These translate to mathematical complexity."

"For autonomy to work without fail billions of miles away in space, we have to do it in a way that on-board computers can handle," added Simone D'Amico, an associate professor of aeronautics and astronautics and fellow co-director of CAESAR. "AI is helping us manage the complexity and delivering the accuracy needed to ensure mission safety, in a computationally efficient way."

CAESAR is a collaboration between industry, academia, and government that brings together the expertise of Pavone's Autonomous Systems Lab and D'Amico's Space Rendezvous Lab. The Autonomous Systems Lab develops methodologies for the analysis, design, and control of autonomous systems: cars, aircraft, and, of course, spacecraft. The Space Rendezvous Lab performs fundamental and applied research to enable future distributed space systems whereby two or more spacecraft collaborate autonomously to accomplish objectives otherwise very difficult for a single system, including flying in formation, rendezvous and docking, swarm behaviors, constellations, and many others. CAESAR is supported by two founding sponsors from the aerospace industry and, together, the lab is planning a launch workshop for May 2024.

CAESAR researchers discuss the robotic free-flyer platform, which uses air bearings to hover on a granite table and simulate a frictionless zero gravity environment. (Image credit: Andrew Brodhead)

The Autonomous Rendezvous Transformer is a trajectory optimization framework that leverages the massive benefits of AI without compromising on the safety assurances needed for reliable deployment in space. At its core, ART involves integrating AI-based methods into the traditional pipeline for trajectory optimization, using AI to rapidly generate high-quality trajectory candidates as input for conventional trajectory optimization algorithms. The researchers refer to the AI suggestions as a "warm start" to the optimization problem and show how this is crucial to obtain substantial computational speed-ups without compromising on safety.
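As a rough illustration of that warm-start idea (a sketch of the concept only, not ART's actual formulation), the snippet below stands in for the learned model with a simple straight-line proposal and hands it to a conventional constrained optimizer. The names propose_trajectory and cost, and the toy two-dimensional setup, are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

N = 20                                   # number of discretized waypoints
start = np.array([0.0, 0.0])
goal = np.array([1.0, 0.5])

def cost(z):
    """Penalize squared step lengths, a stand-in for fuel / control effort."""
    pts = z.reshape(N, 2)
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1) ** 2)

def endpoint_constraint(z):
    """The trajectory must begin at `start` and end at `goal`."""
    pts = z.reshape(N, 2)
    return np.concatenate([pts[0] - start, pts[-1] - goal])

def propose_trajectory():
    """Stand-in for a learned model: a straight-line guess between the endpoints."""
    alpha = np.linspace(0.0, 1.0, N)[:, None]
    return ((1 - alpha) * start + alpha * goal).ravel()

constraints = {"type": "eq", "fun": endpoint_constraint}

# Cold start: begin from an all-zeros trajectory far from feasibility.
cold = minimize(cost, np.zeros(2 * N), constraints=constraints, method="SLSQP")

# Warm start: begin from the model's proposal, already near-feasible.
warm = minimize(cost, propose_trajectory(), constraints=constraints, method="SLSQP")

print("cold-start iterations:", cold.nit, " warm-start iterations:", warm.nit)
```

In the real pipeline the proposal would come from the trained transformer and the refinement step would enforce spacecraft dynamics and safety constraints, but the division of labor is the same: the learned model supplies a good starting point, and the conventional solver provides the guarantees.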

"One of the big challenges in this field is that we have so far needed ground-in-the-loop approaches: you have to communicate things to the ground, where supercomputers calculate the trajectories, and then we upload commands back to the satellite," explains Tommaso Guffanti, a postdoctoral fellow in D'Amico's lab and first author of the paper introducing the Autonomous Rendezvous Transformer. "And in this context, our paper is exciting, I think, for including artificial intelligence components in the traditional guidance, navigation, and control pipeline to make these rendezvous smoother, faster, more fuel efficient, and safer."

ART is not the first model to bring AI to the challenge of space flight, but in tests in a terrestrial lab setting, ART outperformed other machine learning-based architectures. Transformer models, like ART, are a subset of high-capacity neural network models that got their start with large language models, like those used by chatbots. The same AI architecture is extremely efficient at parsing not just words but many other types of data, such as images, audio, and now trajectories.

"Transformers can be applied to understand the current state of a spacecraft, its controls, and maneuvers that we wish to plan," said Daniele Gammelli, a postdoctoral fellow in Pavone's lab and a co-author on the ART paper. "These large transformer models are extremely capable at generating high-quality sequences of data."

The next frontier in their research is to further develop ART and then test it in the realistic experimental environment made possible by CAESAR. If ART can pass CAESAR's high bar, the researchers can be confident that it's ready for testing in real-world scenarios in orbit.

"These are state-of-the-art approaches that need refinement," D'Amico says. "Our next step is to inject additional AI and machine learning elements to improve ART's current capability and to unlock new capabilities, but it will be a long journey before we can test the Autonomous Rendezvous Transformer in space itself."

Follow this link:

AI makes a rendezvous in space | Stanford News - Stanford University News

Recommendation and review posted by G. Smith

