

Category Archives: Artificial General Intelligence

Moonshot: Coexisting with AI holograms – The Edge Malaysia

This article first appeared in Digital Edge, The Edge Malaysia Weekly on November 13, 2023 - November 19, 2023

Imagine owning a holo-pet that is able to respond to your commands and play with you, whenever and wherever. Or having a holo-friend that can be your best pal without your having to step out of your home.

The complexities of human relationships often make life unpredictable and difficult at times. So, what if we were able to construct an artificial intelligence (AI)-powered companion based on our own preferences, one able to generate real-time responses in our interactions?

AI holographic technology has risen to new heights recently, with the Hypervsn SmartV Digital Avatar released at the start of the year. The AI hologram runs on the SmartV Window Display, a gesture-based 3D display and merchandising system, allowing for real-time interaction with customers.

Closer to home, Universiti Teknologi Malaysia (UTM) has developed its first home-grown real-time holo professor, which can project a speech delivered by a lecturer in another location. With Malaysia breaking boundaries with extended reality (XR) technology, is it possible for the next wave of hologram technology to be fully AI-powered, without constraints?


The idea of interacting with holograms essentially boils down to humans interacting with computers. Interacting with a computer usually means a keyboard and mouse, but holograms take it a step further, making the interaction seamless and more natural.

"So ultimately, it's just humans interacting with computers. But in the next paradigm shift, it is going to be so easy that at times, we won't even know that they are there," says Ivan Gerard Khoo, director of Ministry XR, a spatial computing solutions developer.

With generative AI advancing at a rapid rate, integrating it into holograms would make interacting with the computers around you far more immersive.

Khoo believes AI with holographic technology can push past the barrier of interacting with computers through a device, especially for older communities that might not be tech-savvy.

"We've got a billion apps here, right? But it's still not easy to use for everyone (like the handicapped or the elderly). Imagine all the apps in our phone right now [becoming] accessible in the environment around us. And the evolution has begun as the enabling technologies, although nascent, are here today," says Khoo.

"In fact, a lot of researchers are seeing that we are actually moving towards an artificial general intelligence that may even develop sentience," chimes in Andrew Yew, founder and chief technology officer of Ministry XR.

As promising as artificial sentience may be, Yew notes that no machine has yet passed the Turing test convincingly, the benchmark for determining whether an AI is capable of thinking like a human being.


With minimalism on the rise, the focus turns to the technology and hardware needed to integrate AI into holograms. Is it possible to create a hologram that is not restricted by a display enclosure?

"In movies, you don't need anything and you [are able] to interact with the virtual world just like that. But in order to make it happen, you need hardware to make it work. You need to set up those things in such a way that it has all of that, so that it can trick your mind [and you think it is] holographic but actually, it is not," explains Kapil Chhabra, founder of Silver Wings XR Interactive Solutions Pte Ltd.

Holograms create an illusion of depth using light rays reflected onto a medium: they are three-dimensional images generated by interfering beams of light reflected off real, physical objects.

Now, imagine AI bringing eye-tracking technology into holographic figures, allowing them to make eye contact with humans. Olaf Kwakman, managing partner of Silver Wings XR Interactive Solutions, thinks that it is a brilliant solution as users no longer need glasses. "There's still technology needed but with eye tracking, you can create some kind of projection. And that works beautifully," he says.

"Now, if you make these screens really large and all around you, you can basically project it any way you like. But we're not quite there yet," Kwakman says.
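To make the eye-tracking idea concrete, here is a minimal sketch of the geometry such glasses-free displays rely on; it is our own illustration under assumed names and numbers, not a description of Silver Wings' actual product. Once the viewer's eye is tracked, each virtual point is redrawn where the eye-to-point sight line crosses the screen, so the image appears anchored in space behind the glass.

```python
# Minimal sketch of eye-tracked "projection": given a tracked eye
# position, compute where on the screen a virtual point behind the
# glass must be drawn so it sits on the viewer's line of sight.
# All function names and numbers are illustrative.

def screen_x(eye_x_m: float, eye_dist_m: float,
             point_x_m: float, point_depth_m: float) -> float:
    """X position (m) on the screen plane at which to draw a virtual
    point at lateral offset `point_x_m`, `point_depth_m` behind the
    screen, for an eye at lateral offset `eye_x_m`, `eye_dist_m` in
    front of the screen."""
    # Similar triangles along the eye-to-point sight line: the screen
    # plane sits at depth 0, the eye at -eye_dist_m, the virtual point
    # at +point_depth_m.
    t = eye_dist_m / (eye_dist_m + point_depth_m)
    return eye_x_m + (point_x_m - eye_x_m) * t

# An eye 60 cm from the screen moves 5 cm to the right; a point 30 cm
# "behind" the glass, directly ahead, must be redrawn about 1.7 cm to
# the right so it appears fixed in space as the viewer moves.
print(screen_x(0.05, 0.60, 0.0, 0.30))  # ~0.0167
```

Repeating that calculation every frame for every point in the scene, driven by a live eye-tracker, is what turns an ordinary flat display into "some kind of projection" that holds still as the viewer moves.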

The challenge with projecting holograms onto a medium is doing so in such a way that the medium itself is invisible to the human eye, making the hologram more realistic. Chhabra says this has been a struggle for some time and he hopes it can be solved in the future.

Taking inspiration from the Apple VR headset's pocket-sized, portable battery solution, Kwakman says the device offers very promising augmented reality visualisation, but adds that the hardware needs to evolve into something smaller.

"If you ask me, what's going to happen in the future is that you're not going to wear glasses anymore, you're going to wear some kind of small lens, which you can just put in your eye. And with a lens like that, you can project augmented reality in full," he says.

AI could take realistic 3D holograms to new heights, filling in the gaps and making the interactive experience far more engaging and powerful.

"In order to realise full holographic and 3D visualisation, you need a strong connection as well, because there's a lot of data flowing," says Kwakman.

Holographic solutions see little use because the benefits of the technology are poorly understood, which in turn hampers progress, he adds.

"It's very difficult to envision the advantage that introducing holographic, 3D visualisation solutions can bring to a company, and how it will actually benefit them. And leaders find that troublesome as well, which means that it is sometimes difficult to get the budget for it," says Kwakman.


Having created Malaysia's first home-grown holo professor, Dr Ajune Wanis Ismail, senior lecturer in computer graphics and computer vision at UTM's Faculty of Computing, shares that XR hologram systems can be complex to set up and maintain. Technical issues, such as connectivity problems or software glitches, could disrupt lessons.

AI algorithms are used to enhance the accuracy of holographic content, reducing artifacts and improving image quality. But holographic solutions in extended reality (XR) remain a challenge, as the technology is relatively new and rapidly evolving, with new breakthroughs arriving all the time.

"Building and deploying AI-powered holographic systems can be costly [in terms of hardware and software components]. Incorporating AI into holograms could pose an immense demand on computational power. Most of the existing holograms produce non-real-time content with a video-editing loop, but AI models for holography are computationally intensive," says Ajune.

She emphasises the importance of achieving high-fidelity reconstruction when handling complex, dynamic scenes with objects or viewers in motion.

"Researchers are developing more efficient algorithms and leveraging hardware acceleration [such as graphics processing units] to reduce computational demands," says Ajune, noting that real-time interaction with holographic content demands low latency.
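To see why holographic content is so compute-hungry even before AI enters the picture, consider the classical baseline. The sketch below is an illustrative choice of ours (the standard angular-spectrum propagation method), not UTM's actual pipeline: propagating a single light field for a single frame already costs two full-grid 2-D FFTs, and neural hologram generators add heavy model inference on top of pipelines like this, which is why GPUs matter.

```python
# Hedged sketch of classical (non-AI) hologram computation: propagate a
# complex optical field using the angular-spectrum method. Wavelength,
# pixel pitch and distance are illustrative values.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a complex field by `distance` metres (all units SI)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)  # spatial frequencies, cycles/m
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function; evanescent components (arg < 0) are discarded.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(2j * np.pi / wavelength * distance * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    # Two full-grid FFTs per propagation step - the cost real-time
    # holography pays for every single frame.
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: a point source propagated 10 cm at 532 nm on an 8 µm grid.
aperture = np.zeros((512, 512), dtype=complex)
aperture[256, 256] = 1.0
hologram_plane = angular_spectrum_propagate(aperture, 532e-9, 8e-6, 0.10)
print(np.abs(hologram_plane).max())
```

At video frame rates and megapixel resolutions this baseline alone is demanding; replacing or augmenting it with a neural network multiplies the load, which is the low-latency problem Ajune describes.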

There is no doubt that XR hologram systems are complicated and a challenge to integrate with AI. However, the prospect of replicating environments and enabling real-time global communication without the need for physical presence spurs excitement.

"As we advance into the era of digitalisation, people need to start familiarising themselves with this technology and become proficient users," believes Ajune.

"There is a lot of information out there but the teachers are still sticking to conventional methods of teaching, and the students are not paying attention because they are on their phones [and] learning it [the info] themselves," says Yew.

With AI and XR hologram technology becoming increasingly advanced, it is also pertinent to educate users and raise awareness about digital wellbeing.

There must be sensibility and responsibility from business owners and users in utilising XR and AI technology, as society's mindset drives the continued advancement of such technologies.

"I think [what AI can do] is going to be amazing but at the same time, like many others, I also see the risks there. And sometimes it feels a bit scary, if so much power is given out [of the] hands of humans and with computers being able to do that," says Kwakman.


Go here to read the rest:

Moonshot: Coexisting with AI holograms - The Edge Malaysia


As AutoGPT released, should we be worried about AI? – Cosmos

A new artificial intelligence tool, coming just months after ChatGPT, appears to offer a big leap forward: it can improve itself without human intervention.

The artificial intelligence (AI) tool AutoGPT is an open-source application built on GPT-4, the large language model from OpenAI, the company that brought us ChatGPT last year. AutoGPT promises to overcome the limitations of large language models (LLMs) such as ChatGPT.

ChatGPT exploded onto the scene at the end of 2022 for its ability to respond to text prompts in a (somewhat) human-like and natural way. It has, however, caused concern for occasionally including misleading or incorrect information in its responses, and for its potential to be used for plagiarising assignments in schools and universities.

But it's not these limitations that AutoGPT seeks to overcome.

AI is categorised as weak (narrow) or strong (general). As an AI tool designed to carry out a single task, ChatGPT is considered weak AI.

AutoGPT is created with a view to becoming a strong AI, or artificial general intelligence, theoretically capable of carrying out many different types of task, including tasks it wasn't originally designed to perform.

LLMs are designed to respond to prompts produced by human users: they answer one prompt, then await the next.

AutoGPT is being designed to give itself prompts, creating a loop. Masa, a writer on AutoGPT's website, explains: "It works by breaking a larger task into smaller sub-tasks and then spinning off independent Auto-GPT instances in order to work on them. The original instance acts as a kind of project manager, coordinating all of the work carried out and compiling it into a finished result."
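In code terms, that loop is simple to sketch. The snippet below is our minimal reading of Masa's description, not AutoGPT's actual source; `llm` is a hypothetical placeholder to be wired to a real language-model API.

```python
# Minimal sketch of the coordinator pattern described above: a manager
# instance decomposes a goal into sub-tasks, delegates each to a fresh
# worker instance, and compiles the results.

def llm(prompt: str) -> str:
    """Placeholder for a language-model call (e.g. a chat-completion API)."""
    raise NotImplementedError("wire this up to a real LLM endpoint")

def plan_subtasks(goal: str) -> list[str]:
    # Ask the model to break the goal into small sub-tasks, one per line.
    plan = llm(f"Break this goal into small, independent sub-tasks, one per line:\n{goal}")
    return [line.strip() for line in plan.splitlines() if line.strip()]

def run_worker(subtask: str) -> str:
    # Each sub-task gets its own independent instance (here, a fresh prompt).
    return llm(f"Complete this sub-task and report the result:\n{subtask}")

def manager(goal: str) -> str:
    # The original instance acts as project manager: plan, delegate, compile.
    results = [run_worker(task) for task in plan_subtasks(goal)]
    return llm("Compile these sub-task results into one finished result:\n\n"
               + "\n\n".join(results))
```

The self-prompting happens because every `llm` call is generated by the program itself rather than typed by a user; that loop is what distinguishes AutoGPT-style agents from a plain chatbot.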

But is a self-improving AI a good thing? Many experts are worried about the trajectory of artificial intelligence research.

The respected and influential British Medical Journal has published an article titled "Threats by artificial intelligence to human health and human existence", in which the authors explain three key reasons we should be concerned about AI.


Threats identified by the international team of doctors and public health experts, including those from Australia, relate to misuse of AI and the impact of the ongoing failure to adapt to and regulate the technology.

The authors note the significance of AI and its potential to have a transformative effect on society. But they also warn that artificial general intelligence in particular poses an existential threat to humanity.

First, they warn of the ability of AI to clean, organise and analyse massive data sets, including personal data such as images. Such capabilities could be used to manipulate and distort information and for AI surveillance. The authors note that such surveillance is "in development in more than 75 countries ranging from liberal democracies to military regimes, [which] have been expanding such systems".

Second, they say Lethal Autonomous Weapon Systems (LAWS), "capable of locating, selecting, and engaging human targets without the need for human supervision", could lead to killing at an industrial scale.

Finally, the authors raise concern over the loss of jobs that will come from the spread of AI technology in many industries. Estimates are that tens to hundreds of millions of jobs will be lost in the coming decade.

"While there would be many benefits from ending work that is repetitive, dangerous and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behaviour," they write.

The authors highlight artificial general intelligence as a threat to the existence of human civilisation itself.

"We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power, whether deliberately or not, in ways that could harm or subjugate humans, is real and has to be considered.

"With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing. The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimise risk and harm and maximise benefit," they write.

See the rest here:

As AutoGPT released, should we be worried about AI? - Cosmos


Opinion | We Need a Manhattan Project for AI Safety – POLITICO

At the heart of the threat is what's called the "alignment problem": the idea that a powerful computer brain might no longer be aligned with the best interests of human beings. Unlike fairness or job loss, there aren't obvious policy solutions to alignment. It's a highly technical problem that some experts fear may never be solvable. But the government does have a role to play in confronting massive, uncertain problems like this. In fact, it may be the most important role it can play on AI: to fund a research project on the scale it deserves.

There's a successful precedent for this: The Manhattan Project was one of the most ambitious technological undertakings of the 20th century. At its peak, 129,000 people worked on the project at sites across the United States and Canada. They were trying to solve a problem that was critical to national security, and which nobody was sure could be solved: how to harness nuclear power to build a weapon.

Some eight decades later, the need has arisen for a government research project that matches the original Manhattan Project's scale and urgency. In some ways the goal is exactly the opposite of the first Manhattan Project, which opened the door to previously unimaginable destruction. This time, the goal must be to prevent unimaginable destruction, as well as merely difficult-to-anticipate destruction.

Don't just take it from me. Expert opinion only differs over whether the risks from AI are unprecedentedly large or literally existential.

Even the scientists who set the groundwork for today's AI models are sounding the alarm. Most recently, the "Godfather of AI" himself, Geoffrey Hinton, quit his post at Google to call attention to the risks AI poses to humanity.

That may sound like science fiction, but it's a reality that is rushing toward us faster than almost anyone anticipated. Today, progress in AI is measured in days and weeks, not months and years.

As little as two years ago, the forecasting platform Metaculus put the likely arrival of "weak" artificial general intelligence (a unified system that can compete with the typical college-educated human on most tasks) sometime around the year 2040.

Now forecasters anticipate AGI will arrive in 2026. Strong AGIs with robotic capabilities that match or surpass most humans are forecast to emerge just five years later. With the ability to automate AI research itself, the next milestone would be a superintelligence with unfathomable power.

Don't count on the normal channels of government to save us from that.

Policymakers cannot afford a drawn-out interagency process or notice-and-comment period to prepare for what's coming. On the contrary, making the most of AI's tremendous upside while heading off catastrophe will require our government to stop taking a backseat role and act with a nimbleness not seen in generations. Hence the need for a new Manhattan Project.

A "Manhattan Project for X" is one of those clichés of American politics that seldom merits the hype. AI is the rare exception. Ensuring AGI develops safely and for the betterment of humanity will require public investment into focused research, high levels of public and private coordination and a leader with the tenacity of General Leslie Groves, the project's infamous overseer, whose aggressive, top-down leadership style mirrored that of a modern tech CEO.

(Photo caption: "Ensuring AGI develops safely and for the betterment of humanity will require a leader with the tenacity of General Leslie Groves," Hammond writes. AP Photo)

I'm not the only person to suggest it: AI thinker Gary Marcus and the legendary computer scientist Judea Pearl recently endorsed the idea as well, at least informally. But what exactly would that look like in practice?

Fortunately, we already know quite a bit about the problem and can sketch out the tools we need to tackle it.

One issue is that large neural networks like GPT-4 (the generative AIs causing the most concern right now) are mostly a black box, with reasoning processes we can't yet fully understand or control. But with the right setup, researchers can in principle run experiments that uncover particular circuits hidden within the billions of connections. This is known as mechanistic interpretability research, and it's the closest thing we have to neuroscience for artificial brains.
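As a toy illustration of what such an experiment looks like mechanically, the sketch below uses PyTorch forward hooks to record one layer's activations so its units can be studied in isolation. The two-layer network is a stand-in of our own; nothing here is specific to GPT-4, whose scale is precisely what makes the real version of this work expensive.

```python
# Record a model's hidden activations with forward hooks - a basic
# building block of mechanistic-interpretability experiments.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
)

activations = {}

def save_activation(name):
    # Returns a hook that stores a sub-module's output on each forward pass.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Attach the hook to the hidden layer we want to inspect.
model[1].register_forward_hook(save_activation("hidden_relu"))

x = torch.randn(8, 16)
_ = model(x)

# Which hidden units fired for this batch? Scaled up to billions of
# connections, questions like this are what "circuit" experiments probe.
print(activations["hidden_relu"].gt(0).float().mean(dim=0))
```

Running the same kind of probe on a frontier model means storing activations across many layers and thousands of dimensions for enormous inputs, which is why access to large models and serious compute is the bottleneck.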

Unfortunately, the field is still young, and far behind in its understanding of how current models do what they do. The ability to run experiments on large, unrestricted models is mostly reserved for researchers within the major AI companies. The dearth of opportunities in mechanistic interpretability and alignment research is a classic public goods problem. Training large AI models costs millions of dollars in cloud computing services, especially if one iterates through different configurations. The private AI labs are thus hesitant to burn capital on training models with no commercial purpose. Government-funded data centers, in contrast, would be under no obligation to return value to shareholders, and could provide free computing resources to thousands of potential researchers with ideas to contribute.

The government could also ensure research proceeds in relative safety and provide a central connection for experts to share their knowledge.

With all that in mind, a Manhattan Project for AI safety should have at least five core functions:

1. It would serve a coordination role, pulling together the leadership of the top AI companies (OpenAI and its chief competitors, Anthropic and Google DeepMind) to disclose their plans in confidence, develop shared safety protocols and forestall the present arms-race dynamic.

2. It would draw on their talent and expertise to accelerate the construction of government-owned data centers managed under the highest security, including an air gap, a deliberate disconnection from outside networks, ensuring that future, more powerful AIs are unable to escape onto the open internet. Such facilities would likely be overseen by the Department of Energy's Artificial Intelligence and Technology Office, given its existing mission to accelerate the demonstration of trustworthy AI.

3. It would compel the participating companies to collaborate on safety and alignment research, and require models that pose safety risks to be trained and extensively tested in secure facilities.

4. It would provide public testbeds for academic researchers and other external scientists to study the innards of large models like GPT-4, greatly building on existing initiatives like the National AI Research Resource and helping to grow the nascent field of AI interpretability.

5. And it would provide a cloud platform for training advanced AI models for within-government needs, ensuring the privacy of sensitive government data and serving as a hedge against runaway corporate power.

The alternative to a massive public effort like this (attempting to kick the can on the AI problem) won't cut it.

The only other serious proposal right now is a pause on new AI development, and even many tech skeptics see that as unrealistic. It may even be counterproductive. Our understanding of how powerful AI systems could go rogue is immature at best, but stands to improve greatly through continued testing, especially of larger models. Air-gapped data centers will thus be essential for experimenting with AI failure modes in a secured setting. This includes pushing models to their limits to explore potentially dangerous emergent behaviors, such as deceptiveness or power-seeking.

The Manhattan Project analogy is not perfect, but it helps to draw a contrast with those who argue that AI safety requires pausing research into more powerful models altogether. The project didn't seek to decelerate the construction of atomic weaponry, but to master it.

Even if AGIs end up being farther off than most experts expect, a Manhattan Project for AI safety is unlikely to go to waste. Indeed, many less-than-existential AI risks are already upon us, crying out for aggressive research into mitigation and adaptation strategies. So what are we waiting for?

Originally posted here:

Opinion | We Need a Manhattan Project for AI Safety - POLITICO


I created a billion-pound start-up business, Elon Musk & Jeff Bezos asked to meet me, here's the secret to… – The Sun

A DAD who created a billion-pound start-up business has revealed the secret to his success.

Emad Mostaque, 40, is the founder and CEO of artificial intelligence giant Stability AI and has recently been in talks with the likes of Elon Musk and Jeff Bezos.

But the London dad-of-two has worked hard to get where he is today - and doesn't plan on stopping any time soon.

Emad has gone from developing AI at home to help his autistic son, to employing 150 people across the globe for his billion-pound empire.

The 40-year-old usually calls Notting Hill home, but has started travelling to San Francisco for work.

On his most recent trip, Emad met with Bezos, the founder and CEO of Amazon, and made a deal with Musk, the CEO of Twitter.

He says the secret to his success in the AI world is using it to help humans, not overtake them.

Emad told The Times: "I have a different approach to everyone else in this space, because I'm building narrow models to augment humans, whereas almost everyone else is trying to build an AGI [artificial general intelligence] to pretty much replace humans and look over them."

Emad is from Bangladesh but his parents moved to the UK when he was a boy and settled the family in London's Walthamstow.

The dad said he was always good at numbers in school but struggled socially as he has Asperger's and ADHD.

The 40-year-old studied computer science and maths at Oxford, then became a hedge fund manager.

But when Emad's son was diagnosed with autism he quit to develop something to help the youngster.

Emad recalled: "We built an AI to look at all the literature and then extract what could be the case, and then the drug repurposing."

He says that homemade AI allowed his family to create an approach that took his son to "a better, more cheerful place".

And, as a result, Emad inspired himself.

He started a charity that aims to give tablets loaded with AI tutors to one billion children.

He added: "Can you imagine if every child had their own AI looking out for them, a personalised system that teaches them and learns from them?

"In 10 to 20 years, when they grow up, those kids will change the world.

Emad also founded the billion-pound start-up Stability AI in recent years, and it's one of the companies behind Stable Diffusion.

The tool has taken the world by storm in recent months with its ability to create images that could pass as photos from a mere text prompt.

Today, Emad is continuing to develop AI - and he says it is one of the most important inventions in history.

He described it as somewhere between "fire and the internal combustion engine".

The rest is here:

I created a billion-pound start-up business, Elon Musk & Jeff Bezos asked to meet me, here's the secret to... - The Sun


AI At The Crossroads: Navigating Job Displacement, Ethical Concerns, And The Future Of Work – Forbes

Artificial intelligence (AI) is gaining more attention as its role in the future of work becomes increasingly apparent.

Last week, the Writers Guild of America (WGA) went on strike over the proposed use of AI, specifically ChatGPT, in television and film writing. The guild argued that the use of AI would replace jobs, increase compensation disparities and lead to greater job insecurity for writers, reported Time. While this was happening, Geoffrey Hinton, the 75-year-old scientist widely seen as the godfather of AI, announced his resignation from Google, warning of the growing dangers in the field.

The BBC reported that Hinton, whose research on neural networks and deep learning has paved the way for AI systems like ChatGPT (which, according to the Wall Street Journal, is causing a stock-market ruckus), expressed regret over his work and raised concerns about bad actors' potential misuse of AI. Hinton's departure comes at a time when AI advancements are accelerating at an unprecedented pace. For example, KPMG announced last week that it would make generative AI available to all employees, including partners, for both client-facing and internal work.

Meanwhile, during an interview with the Wall Street Journal, DeepMind CEO Demis Hassabis expressed his belief that a form of artificial general intelligence (AGI) could be developed within a few years. Elsewhere, implications for medical leaders are becoming apparent. According to Erwin Loh, writing in BMJ Leader, new technologies like ChatGPT and generative AI "have the potential to transform the way we practice medicine, and revolutionize the healthcare system". Loh's article provided a great explanation of AI technologies in the context of healthcare and also offered insights into how they could be used to improve delivery.

So, it's clear there is enormous potential to revolutionize the world of work. The question now is: how do we make sure that AI works for us rather than against us? After all, the opportunities are vast and growing. For example, research published by MIT Sloan Management Review concluded that "data can help companies better understand and improve the employee experience, leading to a more productive workforce". But it must be remembered that job displacement is a genuine concern. Insider reported that "CEOs get closer to finally saying it: AI will wipe out more jobs than they can count".

One study, conducted by researchers from OpenAI, OpenResearch and the University of Pennsylvania, revealed that around 80% of the US workforce could see at least 10% of their tasks affected by the introduction of GPTs (Generative Pre-trained Transformers), with around 19% of workers seeing at least 50% of their tasks impacted. Having reviewed the study, Natalia Weisz, a professor at Argentina's IAE Business School, concluded in an interview that, unlike in previous technological revolutions, higher-paying occupations with more education requirements, such as degrees and even doctorates, are more exposed than those that do not require a profession. "We are moving into a phase in which traditional professions may very well be disrupted," said Weisz.

"We are living in a time of rapid technological change. We must be mindful to ensure that these advances do not lead to job losses or create an unequal playing field," said Shrenik Rao, editor-in-chief of Madras Courier, in an interview. Rao predicted that "bots could replace journalists and columnists. Illustrators, cartoonists and artists could lose their jobs, too. Instead of telling stories in the public interest, stories will be produced based on what will garner views or clicks".

Rao, who is also a columnist at Haaretz, went on to probe the ethical implications of AI-driven news production. "What will happen with journalistic ethics? Will the news be produced to serve certain political agendas? Will there be an objective filter for news and images?" He concluded that a lack of transparency over how AI is used in journalism could lead to further mistrust in the media.

Governments, industries, and individuals need to engage in a collaborative effort to navigate this brave new world. By fostering open conversations, creating robust regulatory frameworks, and prioritizing education and adaptation, we can ensure that artificial intelligence serves as a force for good, empowering humanity to overcome challenges and reach new heights. Leadership is, therefore, required to ensure that AI is used responsibly and ethically: it is time for all to come together and propel AI forward in a way that works for everyone.

Disclaimer: The author of this article is an Associate Editor at BMJ Leader. This role is independent and distinct from his role as the author of this article. It should be noted that despite his position at BMJ Leader, he had no participation in the review, production, or publication of the academic paper referenced in this article: specifically, the work by Erwin Loh on the potential of AI technologies in healthcare.


Read more:

AI At The Crossroads: Navigating Job Displacement, Ethical Concerns, And The Future Of Work - Forbes


China’s State-Sponsored AI Claims it Will Surpass ChatGPT by End … – Tom’s Hardware

Chinese company iFlytek yesterday went after OpenAI's bread and butter by announcing a product aimed at competing with ChatGPT. The company's "Spark Desk" was described by founder and president Liu Qingfeng as a "cognitive big model" and even as the "dawn of artificial general intelligence." Beyond those buzzwords was also a promise, however: that Spark Desk would surpass OpenAI's ChatGPT by the end of the year.

We should be happy that we can chalk some of the above up to corporate marketing buzzwords. I can assure you my mind will be elsewhere if/when I have to write an article announcing that Artificial General Intelligence (AGI) is here. Perhaps even more so if that AGI were Chinese, as I'm unsure I can trust an AGI that thinks social scoring systems are the bread and butter of its "cognitive big model."

All that aside, however, there are a number of interesting elements to this release. Every day we hear of another ChatGPT spawn, whether officially or unofficially linked to the work of OpenAI. With the tech's impact being what it is (even if that impact is still cloudy and mostly unrealized), it was only natural that every player with enough money and expertise would pursue its own models, adapted to its own public and stakeholders.

Of course, the question is whether iFlytek's Spark Desk can actually deliver on its claims, specifically that of one-upping OpenAI at its own game. The answer will likely depend on multiple factors, and on how you view the subject.

ChatGPT wasn't made for the Eastern public. There's a training-data, linguistic and cultural chasm that separates ChatGPT's impact in the East from its impact in the Western world. And by that definition, it's entirely possible that "Spark Desk" will offer Eastern users a much improved (and more relevant) user experience compared to ChatGPT, given enough maturation time. Perhaps that could even happen before the end of the year. It certainly already offers a better experience for Chinese users in particular, as the country pre-emptively banned ChatGPT from passing beyond its Great Firewall (except in Hong Kong).

The decision to ban ChatGPT likely stifled innovation that it would have otherwise triggered. We need only look to our own news outlets to see the number of industries being impacted by the tech. That's something no country can willingly give up on at a whim; it really was simply a matter of time before a competent competitor was announced.

We'll have to wait for year's end to see whether iFlytek's claims materialize or evaporate. It'll be hard enough to quantitatively compare the two LLMs, especially when their target users are so culturally different. One thing is for sure: OpenAI won't simply rest on its laurels and wait for other industry players to catch up, especially not when there's a target date for that to happen.

The ChatGPT that iFlytek's Spark Desk will have to contend with won't be the same GPT we know today. Perhaps OpenAI's expertise and time-to-market advantages will keep it ahead in the race (and that's what we'd expect); but we also have to remember there are multiple ways to achieve a wanted result. It's been shown that the U.S.'s technological sanctions against China have had less of an effect than hoped for, and that the country is willing to shoulder the burden (and costs) of training cutting-edge technology on outdated, superseded hardware, millions of dollars and hundreds of extra training hours be damned.

A few extra billions could be just enough to bridge the gap. That's China's bet, at least.

See more here:

China's State-Sponsored AI Claims it Will Surpass ChatGPT by End ... - Tom's Hardware
