

Category Archives: Ai

China tops the U.S. on AI research in over half of the hottest fields: report – Axios

Data: Emerging Technology Observatory Map of Science; Chart: Axios Visuals

China leads the U.S. as a top producer of research in more than half of AI's hottest fields, according to new data from Georgetown University's Center for Security and Emerging Technology (CSET) shared first with Axios.

Why it matters: The findings reveal important nuances about the global race between the U.S. and China to lead AI advances and set crucial standards for the technology and how it is used around the world.

Key findings: CSET's Emerging Technology Observatory team found global AI research more than doubled between 2017 and 2022.

Research in robotics grew more slowly than research in vision and natural language processing, expanding by just 54%, and made up about 15% of all AI research.

What they're saying: "The fact that research is growing so quickly, in so many directions, underscores the need for federal investment in basic measurement evaluation on the scientific techniques we need to ensure that AI getting deployed in the real world is safe, secure and understandable," said Arnold. But appropriations for the National Institute of Standards and Technology, which is tasked with identifying those measurements, were recently cut.

The big picture: The top five producers of sheer numbers of AI research papers in the world are Chinese institutions, led by the Chinese Academy of Sciences.

Yes, but: At the country level, the U.S. had the top spot in producing highly cited articles.

"China is absolutely a world leader in AI research, and in many areas, likely the world leader," Arnold said, adding the country is active across a range of research areas, including increasingly fundamental research.

Caveat: The data only accounts for research papers published in English, and doesn't capture scientific work in other languages.

How it works: CSET's Map of Science groups together articles that cite each other often, because they have topics or concepts in common, into clusters of research. (It doesn't mean all papers on LLMs, for example, are in the top cluster. Some may appear in other clusters.)
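The citation-clustering idea can be sketched with a toy example: treat citations as edges in a graph and pull out connected groups of papers. This is only an illustration with invented paper ids; CSET's actual Map of Science methodology is far more elaborate than a simple connected-components pass.

```python
# Toy sketch of citation-based clustering (invented data; not CSET's pipeline).
from collections import defaultdict

def cluster_by_citation(citations):
    """Group papers so that papers linked by citations land in the same cluster.

    `citations` maps a paper id to the set of paper ids it cites.
    Returns a list of clusters (sets of paper ids)."""
    # Build an undirected adjacency from the directed citation links.
    adj = defaultdict(set)
    for paper, cited in citations.items():
        for c in cited:
            adj[paper].add(c)
            adj[c].add(paper)
    seen, clusters = set(), []
    for paper in adj:
        if paper in seen:
            continue
        # Flood-fill the connected component containing this paper.
        stack, comp = [paper], set()
        while stack:
            p = stack.pop()
            if p in comp:
                continue
            comp.add(p)
            stack.extend(adj[p] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

papers = {
    "llm-survey": {"gpt3-paper", "bert-paper"},
    "gpt3-paper": {"bert-paper"},
    "robot-grasping": {"robot-sim"},
}
print(cluster_by_citation(papers))
```

On this toy input the language-model papers end up in one cluster and the robotics papers in another, mirroring the article's point that a paper's cluster depends on its citation neighborhood, not just its topic label.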

See original here:

China tops the U.S. on AI research in over half of the hottest fields: report - Axios


Writer Meghan O’Gieblyn on AI, Consciousness, and Creativity – Nautilus

These days, we're inundated with speculation about the future of artificial intelligence, and specifically how AI might take away our jobs, or steal the creative work of writers and artists, or even destroy the human species. The American writer Meghan O'Gieblyn also wonders about these things, and her essays offer pointed inquiries into the philosophical and spiritual underpinnings of this technology. She's steeped in the latest AI developments but is also well-versed in debates about linguistics and the nature of consciousness.

O'Gieblyn also writes about her own struggle to find deeper meaning in her life, which has led her down some unexpected rabbit holes. A former Christian fundamentalist, she later stumbled into transhumanism and, ultimately, plunged into the exploding world of AI. (She currently also writes an advice column for Wired magazine about tech and society.)

When I visited her at her home in Madison, Wisconsin, I was curious if I might see any traces of this unlikely personal odyssey.

I hadn't expected her to pull out a stash of old notebooks filled with her automatic writing, composed while working with a hypnotist. I asked O'Gieblyn if she would read from one of her notebooks, and she picked this passage: "In all the times we came to bed, there was never any sleep. Dawn bells and doorbells and daffodils and the side of the road glaring with their faces undone." And so it went: strange, lyrical, and nonsensical, tapping into some part of herself that she didn't know was there.

That led us into a wide-ranging conversation about the unconscious, creativity, the quest for transcendence, and the differences between machine intelligence and the human mind.

Why did you go to a hypnotist and try automatic writing?

I was going through a period of writer's block, which I had never really experienced before. It was during the pandemic. I was working on a book about technology, and I was reading about these new language models. GPT-3 had just been released to researchers, and the algorithmic text was just so wildly creative and poetic.

So you wanted to see if you could do this, without using an AI model?

Yeah, I became really curious about what it means to produce language without consciousness. As my own critical faculty was getting in the way of my creativity, it seemed really appealing to see what it would be like to just write without overthinking everything. I was thinking a lot about the Surrealists and different avant-garde traditions where writers or artists would do exercises either through hypnosis or some sort of random collaborative game. The point was to try to unlock some unconscious creative capacity within you. And it seemed like that was, in a way, what the large language models were doing.

You have an unusual background for a writer about technology. You grew up in a Christian fundamentalist family.

My parents were evangelical Christians. My whole extended family are born again Christians. Everybody I knew growing up believed what we did. I was homeschooled along with all my siblings, so most of our social life revolved around church. When I was 18, I went to Moody Bible Institute in Chicago to study theology. I was planning to go into full-time ministry.

But then you left your faith.

I had a faith crisis when I was in Bible school, which metastasized into a series of doubts about the validity of the Bible and the Christian God. I dropped out of Bible school after two years and pretty much left the faith. I began identifying as agnostic almost right away.

But my sense is you're still extremely interested in questions of transcendence and the spiritual life.

Absolutely. I don't think anyone who grew up in that world ever totally leaves it behind. And my interest in technology grew out of those larger questions. What does it mean to be human? What does it mean to have a soul?

A couple of years after I left Bible school, I read The Age of Spiritual Machines, Ray Kurzweil's book about the singularity and transhumanism. He had this idea that humans could use technology to further our evolution into a new species, what he called post-humanity. It was this incredible vision of transcendence. We were essentially going to become immortal.


There are some similarities to your Christian upbringing.

As a 25-year-old who was just starting to believe that I wasn't going to live forever in heaven, this was incredibly appealing to think that maybe science and technology could bring about a similar transformation. It was a secular form of transcendence. I started wondering: What does it mean to be a self or a thinking mind? Kurzweil was saying our selfhood is basically just a pattern of mental activity that you could upload into digital form.

So Kurzweil's argument was that machines could do anything that the human mind can do, and more.

Essentially. But there was a question that was always elided: Is there going to be some sort of first-person experience? And this comes into play with mind-uploading. If I transform my mind into digital form, am I still going to be me or is it just going to be an empty replica that talks and acts like me, with no subjective experience?

Nobody has a good answer for that because nobody knows what consciousness is. That's what got me really interested in AI, because that's the area in which we're playing out these questions now. What is first-person experience? How is that related to intelligence?

Isn't the assumption that AI has no consciousness or first-person experience? Isn't that the fundamental difference between artificial intelligence and the human mind?

That is definitely the consensus, but how can you prove it? We really don't know what's happening inside these models because they're black box models. They're neural networks that have many hidden layers. It's a kind of alchemy.

A sophisticated large language model like ChatGPT has accumulated a vast reservoir of language by scraping the internet, but does it have any sense of meaning?

It depends on how you define meaning. That's tricky because meaning is a concept we invented, and the definition is contested. For the past hundred years or so, linguists have determined that meaning depends on embodied reference in the real world. To know what the word "dog" means, you have to have seen a dog and belong to a linguistic community where that has some collective meaning.

Language models don't have access to the real world, so they're using language in a very different way. They're drawing on statistical probabilities to create outputs that sound convincingly human and often appear very intelligent. And some computational linguists say, "Well, that is meaning. You don't need any real-world experience to have meaning."


These language models are constructing sentences that make a lot of sense, but is it just algorithmic wordplay?

Emily Bender and some engineers at Google came up with the term "stochastic parrots." "Stochastic" refers to a statistical set of probabilities, using a certain amount of randomness, and they're "parrots" because they're mimicking human speech. These models were trained on an enormous amount of real-world human texts, and they're able to predict what the next word is going to be in a certain context.

To me, that feels very different than how humans use language. We typically use language when we're trying to create meaning with other people.

In that interpretation, the human mind is fundamentally different than AI.

I think it is. But there are people like Sam Altman, the CEO of OpenAI, who famously tweeted, "I am a stochastic parrot, and so r u." There are people creating this technology who believe there's really no difference between how these models use language and how humans use language.

We think we have all these original ideas, but are we just rearranging the chairs on the deck?

I recently asked a computer scientist, "What do you think creativity is?" And he said, "Oh, that's easy. It's just randomness." And if you know how these models work, there is a certain amount of correlation between randomness and creativity. A lot of the models have what's called a temperature gauge. If you turn up the temperature, the output becomes more random and it seems much more creative. My feeling is that there's a certain amount of randomness in human creativity, but I don't think that's all there is.
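The temperature gauge she describes can be sketched with the standard softmax-with-temperature formulation. The scores and candidate words here are invented for illustration; real models apply this over tens of thousands of tokens.

```python
import math

def temperature_probs(logits, temperature):
    """Turn raw next-token scores into sampling probabilities.

    Dividing the scores by a higher temperature flattens the distribution,
    so unlikely tokens get picked more often (the output looks more "creative");
    a lower temperature sharpens it toward the single most likely token."""
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max before exponentiating, for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for three candidate next words after "I walked my ..."
scores = [3.0, 2.0, 0.1]   # e.g. "dog", "cat", "xylophone"
for t in (0.5, 1.0, 2.0):
    probs = temperature_probs(scores, t)
    print(t, [round(p, 3) for p in probs])
```

At temperature 0.5 nearly all of the probability mass lands on the top-scoring word; at 2.0 the distribution flattens and the improbable word gets sampled far more often, which is the "more random, seems more creative" effect she mentions.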

As a writer, how do you think about creativity and originality?

I think about modernist writers like James Joyce or Virginia Woolf, who completely changed literature. They created a form of consciousness on the page that felt nothing like what had come before in the history of the novel. That's not just because they randomly recombined everything they had read. The nature of human experience was changing during that time, and they found a way to capture what that felt like. I think creativity has to have that inner subjective quality. It comes back to the idea of meaning, which is created between two minds.

It's commonly assumed that AI has no thinking mind or subjective experience, but how would we even know if these AI models are conscious?

I have no idea. My intuition is that it would have to say something convincing enough to show that it has experience, which includes emotion but also self-awareness. But we've already had instances where the models have spoken in very convincing terms about having an inner life. There was a Google engineer, Blake Lemoine, who was convinced that the chatbot he was working on was sentient. This is going to be fiercely debated.


A lot of these chatbots do seem to have self-awareness.

They're designed to appear that way. There's been so much money poured into emotional AI. This is a whole subfield of AI: creating chatbots that can convincingly emote and respond to human emotion. It's about maximizing engagement with the technology.

Do you think a very advanced AI would have godlike capacities? Will machines become so sophisticated that we can't distinguish between them and more conventional religious ideas of God?

That's certainly the goal for a lot of people developing this technology. Sam Altman, Elon Musk, they've all absorbed the Kurzweil idea of the singularity. They are essentially trying to create a god with AGI, artificial general intelligence. It's AI that can do everything we can and surpass human intelligence.

But isn't intelligence, no matter how advanced, different than God?

The thinking is that once it gets to the level of human intelligence, it can start doing what we're doing, modifying and improving itself. At that point it becomes a recursive process where there's going to be some sort of intelligence explosion. This is the belief.

But there's another question: What are we trying to design? If you want to create a tool that helps people solve cancer or find solutions to climate change, you can do that with a very narrowly trained AI. But the fact that we are now working toward artificial general intelligence is different. That's creating something that's essentially going to be like a god.

Why do you think Elon Musk and Sam Altman want to create this?

I think they read a lot of sci-fi as kids. [Laughs] I mean, I don't know. There's something very deeply human in this idea of, "Well, we have this capacity, so we're going to do it." It's scary, though. That's why it's called the singularity. You can't see beyond it. It's an event horizon. Once you create something like that, there's really no way to tell what it will look like until it's in the world.

I do feel like people are trying to create a system that's going to give answers that are difficult to come by through ordinary human thought. That's the main appeal of creating artificial general intelligence. It's some sort of godlike figure that can give us the answers to persistent political conflicts and moral debates.

If it's smart enough, can AI solve the problems that we imperfect humans cannot?

I don't think so. It's similar to what I was looking for in automatic writing, which is a source of meaning that's external to my experience. Life is infinitely complex, and every situation is different. That requires a constant process of meaning-making.

Hannah Arendt talks about thinking and then thinking again. You're constantly making and unmaking thought as you experience the world. Machines are rigid. They're trained on the whole corpus of human history. They're like a mirror, reflecting back to us a lot of our own beliefs. But I don't think they can give us that sense of meaning that we're looking for as humans. That's something that we ultimately have to create for ourselves.

This interview originally aired on Wisconsin Public Radio's nationally syndicated show To the Best of Our Knowledge. You can listen to the full interview with Meghan O'Gieblyn here.

Lead image: lohloh / Shutterstock

Posted on May 2, 2024

Steve Paulson is the executive producer of Wisconsin Public Radio's nationally syndicated show To the Best of Our Knowledge. He's the author of Atoms and Eden: Conversations on Religion and Science. You can find his podcast about psychedelics, Luminous, here.


Read more here:

Writer Meghan O'Gieblyn on AI, Consciousness, and Creativity - Nautilus


Podcast: Resisting AI and the Consolidation of Power | TechPolicy.Press – Tech Policy Press

Audio of this conversation is available via your favorite podcast service.

In an introduction to a special issue of the journal First Monday on topics related to AI and power, researchers Jenna Burrell and Jacob Metcalf argue that "what can and cannot be said inside of mainstream computer science publications appears to be constrained by the power, wealth, and ideology of a small cohort of industrialists. The result is that shaping discourse about the AI industry is itself a form of power that cannot be named inside of computer science." The papers in the journal go on to interrogate the epistemic culture of AI safety, the promise of utopia through artificial general intelligence, how to debunk robot rights, and more.

To learn more about some of the ideas in the special issue, Justin Hendrix spoke to Burrell, Metcalf, and two of the other authors of papers included in it: Shazeda Ahmed and Émile P. Torres.

A transcript of the discussion is forthcoming.

Read the original here:

Podcast: Resisting AI and the Consolidation of Power | TechPolicy.Press - Tech Policy Press


JPMorgan Chase Unveils AI-Powered Tool for Thematic Investing – PYMNTS.com

J.P. Morgan Chase reportedly unveiled an artificial intelligence-powered tool designed to facilitate thematic investing.

The tool, called IndexGPT, delivers thematic investment baskets created with the assistance of OpenAI's GPT-4 model, Bloomberg reported Friday (May 3).

IndexGPT creates these thematic indexes by generating a list of keywords associated with a particular theme that are then analyzed using a natural language processing model that scans news articles to identify companies involved in that space, according to the report.
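The two-stage pipeline described above (generate theme keywords, then scan news to surface companies) can be sketched in miniature. Everything here is hypothetical: the function names, tickers, and headlines are invented, and the real IndexGPT implementation is not public.

```python
# Hypothetical sketch of a keyword-driven thematic screen; invented data.
from collections import Counter

def companies_for_theme(keywords, articles):
    """Rank companies by how often theme keywords appear in news about them.

    `keywords` stands in for the GPT-4-generated theme keyword list;
    `articles` is a list of (ticker, article_text) pairs. Returns tickers
    sorted by keyword-hit count, best match first."""
    hits = Counter()
    for company, text in articles:
        tokens = text.lower().split()
        score = sum(tokens.count(kw.lower()) for kw in keywords)
        if score:
            hits[company] += score
    return [company for company, _ in hits.most_common()]

# Example: screen news for a hypothetical "AI hardware" theme.
theme = ["ai", "chips", "accelerator"]
news = [
    ("NVDA", "AI chips power a new accelerator line"),
    ("KO", "Beverage sales rose on strong summer demand"),
]
print(companies_for_theme(theme, news))  # only NVDA matches the theme
```

A production system would use a proper NLP model for entity recognition and topical relevance rather than raw keyword counts, but the shape of the pipeline (theme, then keywords, then company screen) is the same.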

The tool allows for the selection of a broader range of stocks, going beyond the obvious choices that are already well-known, Rui Fernandes, J.P. Morgan's head of markets trading structuring, told Bloomberg.

Thematic investing, which focuses on emerging trends rather than traditional industry sectors or company fundamentals, has gained popularity in recent years, the report said.

Thematic funds experienced a surge in popularity in 2020 and 2021, with retail investors spending billions of dollars on products based on various themes. However, interest in these strategies waned amid poor performance and higher interest rates, per the report.

J.P. Morgan Chase's IndexGPT aims to reignite interest in thematic investing by providing a more accurate and efficient approach, according to the report.

While AI has been widely used in the financial industry for functions such as trading, risk management and investment research, the rise of generative AI tools has opened new possibilities for banks and financial institutions, the report said.

Fernandes said he sees IndexGPT as a first step in a long-term process of integrating AI across the bank's index offering, per the report. J.P. Morgan Chase aims to continuously improve its offerings, from equity volatility products to commodity momentum products, gradually and thoughtfully.

In another deployment of this technology in the investment space, Morgan Stanley said in September that it was launching an AI-powered assistant for financial advisers and their support staff. This tool, the AI @ Morgan Stanley Assistant, facilitates access to 100,000 research reports and documents.

In the venture capital world, AI has become a tool for making savvy investment decisions. VC firms are using the technology to analyze vast amounts of data on startups and market trends, helping them identify the most promising opportunities and make better-informed decisions about where to allocate their funds.


Read more from the original source:

JPMorgan Chase Unveils AI-Powered Tool for Thematic Investing - PYMNTS.com


iOS 18: Here are the new AI features in the works – 9to5Mac

2024 is shaping up to be the Year of AI for Apple, with big updates planned for iOS 18 and more. The rumors and Tim Cook himself make it clear that there are new AI features for Apple's platforms in the works. Here's everything we know about the ways Apple is exploring AI features.

There have been a number of rumors about the various AI features in the works inside Apple. Bloomberg has reported that Apple thinks iOS 18 will be one of the biggest iOS updates ever, headlined by a number of new AI features.

Mark Gurman reported last July that Apple created its own Large Language Model (LLM) system, which has been dubbed Apple GPT. The project uses a framework called Ajax that Apple started building in 2022 to base various machine learning projects on a shared foundation. This Ajax framework will serve as the basis for Apple's forthcoming AI features across all of its platforms.

9to5Mac found evidence of Apple's work on new AI and large language model technology in iOS 17.4. We reported that Apple is relying on OpenAI's ChatGPT API for internal testing to help the development of its own AI models.

Bloomberg has reported that Apple's iOS 18 features will be powered by an entirely on-device large language model, which offers a number of privacy and speed benefits.

Here are some of the rumors about new AI features coming to iOS 18:

Did you know that Apple has actually already launched a number of powerful AI frameworks and models? Here's a recap of those:

During a recent Apple earnings call, Tim Cook offered a rare teaser for a future product announcement. According to Cook, Apple is spending "a tremendous amount of time and effort" on artificial intelligence technologies, and the company is "excited to share the details of our ongoing work in that space later this year."

It's extraordinarily rare for Cook to even remotely hint at Apple's plans for future product announcements. Why did he do it this time? Likely to ease the concerns of investors and analysts worried about Apple falling behind the likes of OpenAI, Google, and Microsoft. Whether the teaser is enough to calm those fears until an actual product announcement materializes remains to be seen.

Also during an earnings call recently, Cook touted the advantages that Apple has which will set its AI apart from the competition:

We believe in the transformative power and promise of AI, and we believe we have advantages that will differentiate us in this new era, including Apple's unique combination of seamless hardware, software, and services integration, groundbreaking Apple Silicon with our industry-leading neural engines, and our unwavering focus on privacy, which underpins everything we create.

In a surprising twist, Bloomberg has reported that Apple is in active negotiations with Google about potentially licensing Gemini, which is Google's set of generative AI models. The report explains that Apple is specifically looking to partner on cloud-based generative AI models.

In this scenario, Apple would rely on a partner such as Google for its cloud-based features. Other features would still be powered on-device by Apple's own technology.

The generative AI features under discussion would theoretically be baked into Siri and other apps. New AI capabilities based on Apple's homegrown models, meanwhile, would still be woven into the operating system. They'll be focused on proactively providing users with information and conducting tasks on their behalf in the background, people familiar with the matter said.

While Apple is said to be in active negotiations for this partnership with Google, the company has also reportedly held talks with OpenAI as well.

In fact, most recently, it was reported that Apple had resumed talks with OpenAI about a partnership. According to reports, Apple would use OpenAI's technology to power an AI-based chatbot in iOS 18.

At this point, the question is which of the many rumors will come to fruition this year.

I'd be surprised if all of these rumored AI features are ready for this year. My assumption is that Apple is working on all of this stuff (and more), but will pare down the final list of features included in iOS 18. Features that don't make the cut will likely come in a later update to iOS 18 or with iOS 19 in 2025.

Apple has officially set WWDC for June 10 this year, and that's where we expect the bulk of its AI announcements to be made.

Where do you want to see Apple direct its attention toward for new AI features this year? Let us know down in the comments.



Original post:

iOS 18: Here are the new AI features in the works - 9to5Mac


Google urges US to update immigration rules to attract more AI talent – The Verge

The US could lose out on valuable AI and tech talent if some of its immigration policies are not modernized, Google says in a letter sent to the Department of Labor.

Google says policies like Schedule A, a list of occupations the government pre-certified as not having enough American workers, have to be more flexible and move faster to meet demand in technologies like AI and cybersecurity. The company says the government must update Schedule A to include AI and cybersecurity and do so more regularly.

"There's wide recognition that there is a global shortage of talent in AI, but the fact remains that the US is one of the harder places to bring talent from abroad, and we risk losing out on some of the most highly sought-after people in the world," Karan Bhatia, head of government affairs and public policy at Google, tells The Verge. He noted that the occupations in Schedule A have not been updated in 20 years.

Companies can apply for permanent residencies, colloquially known as green cards, for employees. The Department of Labor requires companies to get a permanent labor certification (PERM) proving there is a shortage of workers in that role. That process may take time, so the government pre-certified some jobs through Schedule A.

The US Citizenship and Immigration Services lists Schedule A occupations as physical therapists, professional nurses, or immigrants of exceptional ability in the sciences or arts. While the wait time for a green card isn't reduced, Google says Schedule A cuts down the processing time by about a year.

Google says Schedule A is not currently serving its intended purpose, especially as demand for new technologies like generative AI has grown, so AI and cybersecurity must be included on the list. Google says the government should also consider multiple data sources, including accepting public feedback, to regularly update Schedule A so the process is more transparent and to really reflect workforce gaps.

Since the rise of generative AI, US companies have struggled to find engineers and researchers in the AI space. While the US produces a large cohort of AI talent, there is a shortage of AI specialists in the country, Bhatia says. However, the US's strict immigration policies have made it difficult to attract people to work at American companies building AI platforms. He adds Google employees have often had to leave the US while waiting for the PERM process to finish and for their green cards to be approved.

Competition for AI talent has been intense, with companies often poaching engineers and researchers. The Information reported AI developers like Meta have resorted to hiring AI talent without interviews. Wages for AI specialists soared, with OpenAI allegedly paying researchers up to $10 million. President Joe Biden's executive order on AI mandates federal agencies to help increase AI talent in the country.

Continued here:

Google urges US to update immigration rules to attract more AI talent - The Verge
