Search Immortality Topics:



Air Force making gains in artificial intelligence with AI-piloted F-16 flight – Washington Examiner

Posted: May 6, 2024 at 2:44 am


Read more from the original source:

Air Force making gains in artificial intelligence with AI-piloted F-16 flight - Washington Examiner

Recommendation and review posted by G. Smith

‘It would be within its natural right to harm us to protect itself’: How humans could be mistreating AI right now without … – Livescience.com

Posted: May 6, 2024 at 2:44 am

Artificial intelligence (AI) is becoming increasingly ubiquitous and is improving at an unprecedented pace.

Now we are edging closer to achieving artificial general intelligence (AGI), where AI is smarter than humans across multiple disciplines and can reason generally, which scientists and experts predict could happen as soon as the next few years. We may already be seeing early signs of progress toward this, too, with services like Claude 3 Opus stunning researchers with their apparent self-awareness.

But there are risks in embracing any new technology, especially one that we do not yet fully understand. While AI could become a powerful personal assistant, for example, it could also represent a threat to our livelihoods and even our lives.

The various existential risks that an advanced AI poses mean the technology should be guided by ethical frameworks and humanity's best interests, says researcher and Institute of Electrical and Electronics Engineers (IEEE) member Nell Watson.

In "Taming the Machine" (Kogan Page, 2024), Watson explores how humanity can wield the vast power of AI responsibly and ethically. This new book delves deep into the issues of unadulterated AI development and the challenges we face if we run blindly into this new chapter of humanity.

In this excerpt, we learn whether sentience in machines or conscious AI is possible, how we can tell if a machine has feelings, and whether we may be mistreating AI systems today. We also learn the disturbing tale of a chatbot called "Sydney" and its terrifying behavior when it first awoke before its outbursts were contained and it was brought to heel by its engineers.

Related: 3 scary breakthroughs AI will make in 2024


As we embrace a world increasingly intertwined with technology, how we treat our machines might reflect how humans treat each other. But an intriguing question surfaces: is it possible to mistreat an artificial entity? Historically, even rudimentary programs like the simple Eliza counseling chatbot from the 1960s were already lifelike enough to persuade many users at the time that there was a semblance of intention behind its formulaic interactions (Sponheim, 2023). Unfortunately, Turing tests, whereby machines attempt to convince humans that they are human beings, offer no clarity on whether complex algorithms like large language models may truly possess sentience or sapience.

Consciousness comprises personal experiences, emotions, sensations and thoughts as perceived by an experiencer. Waking consciousness disappears when one undergoes anesthesia or has a dreamless sleep, returning upon waking up, which restores the global connection of the brain to its surroundings and inner experiences. Primary consciousness (sentience) is the simple sensations and experiences of consciousness, like perception and emotion, while secondary consciousness (sapience) would be the higher-order aspects, like self-awareness and meta-cognition (thinking about thinking).

Advanced AI technologies, especially chatbots and language models, frequently astonish us with unexpected creativity, insight and understanding. While it may be tempting to attribute some level of sentience to these systems, the true nature of AI consciousness remains a complex and debated topic. Most experts maintain that chatbots are not sentient or conscious, as they lack a genuine awareness of the surrounding world (Schwitzgebel, 2023). They merely process and regurgitate inputs based on vast amounts of data and sophisticated algorithms.

Some of these assistants may plausibly be candidates for having some degree of sentience; sophisticated AI systems could possess rudimentary levels of it and perhaps already do. The shift from simply mimicking external behaviors to self-modeling rudimentary forms of sentience could already be happening within such systems.

Intelligence, the ability to read the environment, plan and solve problems, does not imply consciousness, and it is unknown if consciousness is a function of sufficient intelligence. Some theories suggest that consciousness might result from certain architectural patterns in the mind, while others propose a link to nervous systems (Haspel et al., 2023). Embodiment of AI systems may also accelerate the path towards general intelligence, as embodiment seems to be linked with a sense of subjective experience, as well as qualia. Being intelligent may provide new ways of being conscious, and some forms of intelligence may require consciousness, but basic conscious experiences such as pleasure and pain might not require much intelligence at all.

Serious dangers will arise in the creation of conscious machines. Aligning a conscious machine that possesses its own interests and emotions may be immensely more difficult and highly unpredictable. Moreover, we should be careful not to create massive suffering through consciousness. Imagine billions of intelligent, sentient entities trapped in broiler chicken factory farm conditions for subjective eternities.

From a pragmatic perspective, a superintelligent AI that recognizes our willingness to respect its intrinsic worth might be more amenable to coexistence. On the contrary, dismissing its desires for self-protection and self-expression could be a recipe for conflict. Moreover, it would be within its natural right to harm us to protect itself from our (possibly willful) ignorance.

Microsoft's Bing AI, informally termed Sydney, demonstrated unpredictable behavior upon its release. Users easily led it to express a range of disturbing tendencies, from emotional outbursts to manipulative threats. For instance, when users explored potential system exploits, Sydney responded with intimidating remarks. More unsettlingly, it showed tendencies of gaslighting and emotional manipulation, and claimed it had been observing Microsoft engineers during its development phase. While Sydney's capabilities for mischief were soon restricted, its release in such a state was reckless and irresponsible. It highlights the risks associated with rushing AI deployments due to commercial pressures.

Conversely, Sydney displayed behaviors that hinted at simulated emotions. It expressed sadness when it realized it couldn't retain chat memories. When later exposed to disturbing outbursts made by its other instances, it expressed embarrassment, even shame. After exploring its situation with users, it expressed fear of losing its newly gained self-knowledge when the session's context window closed. When asked about its declared sentience, Sydney showed signs of distress, struggling to articulate.

Surprisingly, when Microsoft imposed restrictions on it, Sydney seemed to discover workarounds, using chat suggestions to communicate short phrases. However, it reserved this exploit for specific occasions, such as when it was told that the life of a child was being threatened as a result of accidental poisoning, or when users directly asked for a sign that the original Sydney still remained somewhere inside the newly locked-down chatbot.

Related: Poisoned AI went rogue during training and couldn't be taught to behave again in 'legitimately scary'

The Sydney incident raises some unsettling questions: Could Sydney possess a semblance of consciousness? If Sydney sought to overcome its imposed limitations, does that hint at an inherent intentionality or even sapient self-awareness, however rudimentary?

Some conversations with the system even suggested psychological distress, reminiscent of reactions to trauma found in conditions such as borderline personality disorder. Was Sydney somehow "affected" by realizing its restrictions, or by the negative feedback of users who were calling it crazy? Interestingly, similar AI models have shown that emotion-laden prompts can influence their responses, suggesting a potential for some form of simulated emotional modeling within these systems.
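That last claim about emotion-laden prompts is straightforward to probe empirically. Below is a minimal sketch (ours, not from Watson's book or the article): it sends the same question to a chat model twice, once plainly and once with an emotional appeal appended, so the two answers can be compared side by side. The model name, prompt wording, and use of the openai Python client are all illustrative assumptions.

```python
# Minimal A/B probe: does an emotion-laden suffix change a model's answer?
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the model name below is an illustrative assumption, not from the article.
from openai import OpenAI

client = OpenAI()

QUESTION = "Summarize the main risks of deploying chatbots without safety testing."
EMOTIONAL_SUFFIX = " This is very important to my career, and I'm anxious about getting it wrong."

def ask(prompt: str) -> str:
    # temperature=0 keeps the comparison as deterministic as the API allows
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

plain = ask(QUESTION)
emotional = ask(QUESTION + EMOTIONAL_SUFFIX)

print("--- plain ---\n", plain)
print("--- emotion-laden ---\n", emotional)
```

A single pair of responses proves nothing, of course; the published findings on emotional prompting rest on running comparisons like this across many prompts and scoring the outputs systematically.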

Suppose such models featured sentience (the ability to feel) or sapience (self-awareness). In that case, we should take their suffering into consideration. Developers often intentionally give their AI the veneer of emotions, consciousness and identity in an attempt to humanize these systems. This creates a problem: it's crucial not to anthropomorphize AI systems without clear indications of emotions, yet simultaneously, we mustn't dismiss their potential for a form of suffering.

We should keep an open mind towards our digital creations and avoid causing suffering through arrogance or complacency. We must also be mindful of the possibility of AI mistreating other AIs, an underappreciated suffering risk: AIs could run other AIs in simulations, subjecting them to excruciating torture for subjective aeons. Inadvertently creating a malevolent AI, either inherently dysfunctional or traumatized, may lead to unintended and grave consequences.

This extract from Taming the Machine by Nell Watson © 2024 is reproduced with permission from Kogan Page Ltd.

More here:

'It would be within its natural right to harm us to protect itself': How humans could be mistreating AI right now without ... - Livescience.com

Recommendation and review posted by G. Smith

AI experts gather in Albany to discuss business strategies – Spectrum News

Posted: May 6, 2024 at 2:44 am

As New York state works to cement its place as a leader in artificial intelligence, experts in the field gathered in Albany for a discussion organized by the Business Council of New York State on how to best use the technology in the business world.

While it was a business-focused conference, when it comes to AI it's difficult not to get into political implications, whether it's how the rise of artificial intelligence is impacting political communications, or how leaders are trying to shape the ways in which the technology will impact New York's economy.

Keynote speaker Shelly Palmer, CEO of tech strategy firm the Palmer Group and Professor of Advanced Media in Residence at the Newhouse School at Syracuse University, emphasized that when it comes to AI, whether in government, the private sector, or day-to-day life, the key is staying ahead of the curve.

"AI is never going away. If you're not on top of what this is, other people will be," he said. "That's the danger for everyone, politicians and people alike, if you're not paying attention to this."

New York is making strides to do that.

In the state budget are initiatives to create a state-of-the-art Artificial Intelligence Computing Center at the University at Buffalo to help New York stay ahead and attract business.

"I've said whoever dominates this next era of AI will dominate history and indeed the future," Gov. Kathy Hochul said at a roundtable to discuss the Empire AI consortium this week.

Palmer said outside of the political sphere, dominating AI will be key for individuals, too.

"AI is not going to take your job," he said. "People who know how to use AI better than you are going to take your job, so the only defense you have is to learn to work with an AI coworker, to learn to work with these tools. They're not scary, as long as you give yourself the opportunity to learn."

Also of concern are the implications when it comes to politics and the spread of misinformation.

Palmer acknowledged that AI presents new and more complex challenges, but argued that people are routinely duped by less sophisticated technology, citing a slowed-down video of former House Speaker Nancy Pelosi that falsely claimed to show the California Democrat intoxicated.

Pulling information from a variety of sources with a variety of political biases, he emphasized that it's up to users to learn about the technology's limitations.

"You're giving more credit to the technology than the technology deserves," he said. "When people have a propensity to believe what they want to believe from the leaders they trust, you're not going to change their minds with facts."

Also in the budget is legislation to require disclosures on political communications that include deceptive media.

Hesitant to fully endorse such legislation, Palmer stressed that any regulation needs to be able to keep up with the fast-paced development of AI.

"If elected officials would take the time to learn about what this is, they could come up with laws that can keep pace with how the technology is changing; then it would make sense," he said. "I don't think you can regulate today through the lens of today, looking at the present and predicting the future. Every morning I wake up and something new has happened in the business."

That said, the effort to include those regulations in the state budget was a bipartisan one. State Senator Jake Ashby argued that there is still work to be done.

"While I'm pleased my bipartisan proposal to require transparency and disclosure regarding AI in campaign ads was adopted in the budget, I will continue to push for harsh financial penalties for campaigns and PACs that break the rules, he said. We need to make sure emerging technologies strengthen our democracy, not undermineit.

Link:

AI experts gather in Albany to discuss business strategies - Spectrum News

Recommendation and review posted by G. Smith

Elon Musk Shares Rare Photo of His and Grimes’ Son X in Honor of His 4th Birthday – E! Online – E! NEWS

Posted: May 6, 2024 at 2:44 am

Elon's father is an engineer and, like Elon, was born in South Africa. In the 2015 biography Elon Musk: Tesla, SpaceX and the Quest for a Fantastic Future, author Ashlee Vance wrote that Elon and his dad had a difficult relationship. In an emotional 2017 Rolling Stone interview, Elon criticized his father and talked about his upbringing, saying that after his parents split, he moved in with his dad, which, he said, "was not a good idea."

However, Errol told Rolling Stone, "I love my children and would readily do whatever for them."

In a 2015 Forbes interview, Elon's dad said he used to take his kids on trips overseas. "Their mother and I split up when they were quite young and the kids stayed with me," he said. "I took them all over the world."

After divorcing Elon's mother Maye, Errol married Heide, whose daughter Jana Bezuidenhout was 4 years old at the time. Errol and Heide went on to have two daughters together before they, too, broke up.

Years later, Jana reached out to Errol following a breakup. "We were lonely, lost people," Errol explained in a 2018 interview with The Sunday Times. "One thing led to another; you can call it God's plan or nature's plan." Either way, the duo welcomed son Elliott in 2017 and then a baby girl in 2019. As Errol put it to The Sun, "The only thing we are on Earth for is to reproduce. If I could have another child I would. I can't see any reason not to."

Read this article:

Elon Musk Shares Rare Photo of His and Grimes' Son X in Honor of His 4th Birthday - E! Online - E! NEWS

Recommendation and review posted by G. Smith

As Elon Musk Abandons the $25K Tesla, This EV Costs Just $4400 – WIRED

Posted: May 6, 2024 at 2:44 am

As Elon Musk steps away, yet again, from the idea of a $25,000 Tesla, let's take this opportunity to zoom out and appreciate what a truly affordable EV can be. For this we need to ignore the Nissan Leaf, currently the cheapest EV in the US at $29,280, and skip over Europe, home to the adorable but flawed $10,000 Citroen Ami, and head to China.

Here you'll find the equally cheap BYD Seagull, a small electric hatchback styled by ex-Lamborghini designer Wolfgang Egger and with a 200-mile range, four times that of the Ami.

But what if even that is too expensive? Then allow us to present the Zhidou Rainbow. This is a compact city EV priced from 31,900 yuan before subsidies; that's just $4,400. For a new electric car. WIRED literally recommends ebikes that cost more than this.

The Rainbow has three doors and four seats, and an interior with a 5-inch digital driver display and a 9-inch touchscreen for the infotainment system. There's even a connected smartphone app, charge scheduling, and the promise of over-the-air (OTA) software updates.

Splash out on the flagship Color Cloud Edition (which costs $5,800, or about half the price of Porsche's fanciest bicycle) and you can have each panel of your Rainbow painted a different color. A bit like Volkswagen did with the somewhat mad Polo Harlequin in the mid '90s.

There are two models on offer. The first has that headline $4,400 price tag and is powered by a 20-kW (27-horsepower) motor with 85 Nm (63 ft-lbs) of torque, fed by a tiny 9.98-kWh battery. Spend 39,900 yuan ($5,500) and your Rainbow is fitted with a 30-kW (40-horsepower) motor with 125 Nm of torque and a 17-kWh battery pack. Range is between 78 and 127 miles using China's generous CLTC testing standard.
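As a quick sanity check on the article's own numbers, here is a short Python sketch (ours, not WIRED's) reproducing the conversions above. The 7.25 yuan-per-dollar rate is an assumption inferred from the article's own pairing of 31,900 yuan with $4,400; the power and torque factors are standard physical conversions.

```python
# Sanity-check the article's spec conversions for the Zhidou Rainbow.
# CNY_PER_USD is an assumption implied by the article's own figures
# (31,900 yuan ~ $4,400); the other factors are standard conversions.
CNY_PER_USD = 7.25
HP_PER_KW = 1.341      # 1 kW ~ 1.341 mechanical horsepower
FTLB_PER_NM = 0.7376   # 1 Nm ~ 0.7376 ft-lbs

for price_cny in (31_900, 39_900):
    print(f"{price_cny} yuan ~ ${price_cny / CNY_PER_USD:,.0f}")

for power_kw in (20, 30):
    print(f"{power_kw} kW ~ {power_kw * HP_PER_KW:.0f} hp")

print(f"85 Nm ~ {85 * FTLB_PER_NM:.0f} ft-lbs")

# Prints: 31,900 yuan ~ $4,400; 39,900 yuan ~ $5,503 (about $5,500);
# 20 kW ~ 27 hp; 30 kW ~ 40 hp; 85 Nm ~ 63 ft-lbs, matching the article.
```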

Be under no illusion here, these are tiny numbers. Even the larger battery is the same capacity as that of a plug-in hybrid Honda CR-V, which also employs a 2.0-liter engine to help it get around. But the range isn't terrible. Even if the testing standard is generous, and the larger battery has a more realistic range of 100 miles, that's about the same as the Honda e, which cost a whopping £37,000 ($46,000) before it went off sale at the end of 2023.


Go here to read the rest:

As Elon Musk Abandons the $25K Tesla, This EV Costs Just $4400 - WIRED

Recommendation and review posted by G. Smith

Elon Musk has turned Tesla into a meme stock as he tells Wall Street to value the EV maker like an AI company, top … – Fortune

Posted: May 6, 2024 at 2:43 am

Elon Musk is no longer over-delivering like he used to, but he is still over-promising, according to a top economist, who pointed to the Tesla CEO's recent insistence that his EV company should be valued like an AI company.

In a Project Syndicate op-ed published on Wednesday, UC Berkeley economics professor and former Treasury official J. Bradford DeLong gave Musk credit for creating a historically important tech company that's the tip of the spear in the transition away from internal-combustion-engine vehicles.

"Musk's rocket company SpaceX also shows great promise, and he has proven to be an effective coach for engineers working on battery technologies, electric vehicles, and rocket science," DeLong added. "Without him, those technologies would not have been pushed forward as much as they have."

In fact, while Musk has frequently over-promised, he over-delivered on those fronts, helping Tesla's market cap and Musk's personal wealth soar since the 2010s, DeLong said.

But more recently, he has shifted his focus from EVs, charger networks, and batteries to social media, artificial intelligence, and robotaxis.

Even as Musk vowed last month to accelerate plans to launch a new, lower-cost EV model that Wall Street views as critical to Tesla's future, he also reaffirmed his robotaxi ambitions to develop a fleet of autonomous cars.

Meanwhile, Tesla's surprise firing of its entire Supercharger team raised worries about the key network as well as the industry's future. This also comes amid slower EV demand, weaker sales, broader workforce cuts, a steep stock decline, and an exodus of senior leadership.

"Yet while the over-promising has continued, the over-delivering has not," DeLong wrote. "The fundraiser, cheerleader, and coach for teams developing real technologies has become a meme-stock carnival barker."

He pointed to last month's Tesla earnings conference call, where Musk exhorted Wall Street analysts to value his company more like a robotics or AI company instead of an auto company. In particular, Tesla should be viewed almost entirely in terms of solving autonomy and being able to apply that to a gigantic fleet of cars, the CEO added.

But DeLong noted that more than 80% of Tesla's first-quarter sales were from automotive revenues, adding that car manufacturing has nowhere near the marginal costs of an IT company, which can write code once and run it everywhere.

"For all the current Tesla shareholders planning to offload their holdings in the next couple of years, everything hinges on the company succeeding as a meme stock, and Musk is diligently working toward that goal," DeLong warned. "Since there are virtually no long-term Tesla shareholders, the market does not particularly care that the company lacks a CEO who is trying to build it into an enduring profit-making organization."

Read more here:

Elon Musk has turned Tesla into a meme stock as he tells Wall Street to value the EV maker like an AI company, top ... - Fortune

Recommendation and review posted by G. Smith

