Search Immortality Topics:



OpenAI revives its robotic research team, plans to build dedicated AI – Interesting Engineering

Posted: June 3, 2024 at 2:39 am

OpenAI being in the news isn't a novelty at all. This time it's bagging headlines for restarting its robotics research group after three years. The ChatGPT developer confirmed the move in an interview with Forbes.

It has been almost four years since OpenAI disbanded the team that researched ways of using AI to teach robots new tasks.

According to media reports, OpenAI is now on the verge of developing a host of multimodal large language models for robotics use cases. A multimodal model is a neural network capable of processing various types of input, not just text. For instance, it can handle data from a robot's onboard sensors.
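To make that idea concrete, here is a minimal, hypothetical sketch (the class name, dimensions, and action space are all illustrative assumptions, not OpenAI's design) of a network that fuses a text instruction with a robot's joint-sensor readings before scoring possible actions:

import torch
import torch.nn as nn

class ToyMultimodalPolicy(nn.Module):
    """Toy model that combines an instruction embedding with sensor readings."""
    def __init__(self, vocab_size=1000, text_dim=64, sensor_dim=7, hidden=128, num_actions=8):
        super().__init__()
        self.text_embed = nn.EmbeddingBag(vocab_size, text_dim)   # pools instruction tokens
        self.sensor_encoder = nn.Linear(sensor_dim, text_dim)     # encodes joint angles etc.
        self.fusion = nn.Sequential(
            nn.Linear(2 * text_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),                        # scores for discrete actions
        )

    def forward(self, token_ids, sensor_readings):
        t = self.text_embed(token_ids)                 # (batch, text_dim)
        s = self.sensor_encoder(sensor_readings)       # (batch, text_dim)
        return self.fusion(torch.cat([t, s], dim=-1))  # (batch, num_actions)

policy = ToyMultimodalPolicy()
tokens = torch.randint(0, 1000, (1, 5))   # a 5-token instruction, e.g. "pick up the red block"
sensors = torch.randn(1, 7)               # 7 joint-angle readings from an arm
print(policy(tokens, sensors).shape)      # torch.Size([1, 8])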

OpenAI had bid goodbye to its original robotics research group. Wojciech Zaremba said, "I actually believe quite strongly in the approach that the robotics [team] took in that direction, but from the perspective of AGI [artificial general intelligence], I think that there was actually some components missing. So when we created the robotics [team], we thought that we could go very far with self-generated data and reinforcement learning."

According to a report in Forbes, OpenAI has been hiring again for its robotics team and has been actively on the lookout for a research robotics engineer. It is seeking an individual skilled in "training multimodal robotics models to unlock new capabilities for our partners' robots, researching and developing improvements to our core models, exploring new model architectures, collecting robotics data, and conducting evaluations."

"We're looking for candidates with a strong research background and experience in shipping AI applications," the company stated.

Earlier this year, OpenAI also invested in humanoid developer Figure AI's Series B fundraising. This investment highlights OpenAI's clear interest in robotics.

Over the past year, OpenAI has significantly invested in the robotics field through its startup fund, pouring millions into companies like Figure AI, 1X Technologies, and Physical Intelligence. These investments underscore OpenAI's keen interest in advancing humanoid robots. In February, OpenAI hinted at a renewed focus on robotics when Figure AI secured additional funding. Shortly after, Figure AI released a video showcasing a robot with basic speech and reasoning skills, powered by OpenAI's model.

Peter Welinder, OpenAI's vice president and a member of the original robotics team, stated, "We've always planned to return to robotics, and we see a path with Figure to explore the potential of humanoid robots powered by highly capable multimodal models."

According to the report, OpenAI doesn't intend to compete directly with other robotics companies. Instead, it aims to develop AI technology that other manufacturers can integrate into their robots. Job listings indicate that new engineers will collaborate with external partners to train advanced AI models. It remains unclear if OpenAI will venture into creating its own robotics hardware, a challenge it has faced in the past. For now, the focus seems to be on leveraging its AI expertise to enhance robotic functionalities.

Apart from this, Apple has also reportedly been collaborating with OpenAI to incorporate ChatGPT technology into its iOS 18 operating system for iPhones, according to various media outlets.

The integration of ChatGPT, an advanced AI developed by OpenAI under Sam Altman's leadership, is set to revolutionize how Siri comprehends and responds to complex queries. This partnership, anticipated to be officially announced at this year's Worldwide Developers Conference (WWDC), has been in the works for several months and has faced internal challenges and resistance from both companies.


Excerpt from:

OpenAI revives its robotic research team, plans to build dedicated AI - Interesting Engineering

Recommendation and review posted by G. Smith

Can AI ever be smarter than humans? | Context – Context

Posted: June 3, 2024 at 2:39 am

Whats the context?

"Artificial general intelligence" (AGI) - the benefits, the risks to security and jobs, and is it even possible?

LONDON - When researcher Jan Leike quit his job at OpenAI last month, he warned the tech firm's "safety culture and processes (had) taken a backseat" while it trained its next artificial intelligence model.

He voiced particular concern about the company's goal to develop "artificial general intelligence", a supercharged form of machine learning that it says would be "smarter than humans".

Some industry experts say AGI may be achievable within 20 years, but others say it will take many decades, if it happens at all.

But what is AGI, how should it be regulated and what effect will it have on people and jobs?

OpenAI defines AGI as a system "generally smarter than humans". Scientists disagree on exactly what this means.

"Narrow" AI includes ChatGPT, which can perform a specific, singular task. This works by pattern matching, akin to putting together a puzzle without understanding what the pieces represent, and without the ability to count or complete logic puzzles.

"The running joke, when I used to work at Deepmind (Google's artificial intelligence research laboratory), was AGI is whatever we don't have yet," Andrew Strait, associate director of the Ada Lovelace Institute, told Context.

IBM has suggested that artificial intelligence would need at least seven critical skills to reach AGI, including visual and auditory perception, making decisions with incomplete information, and creating new ideas and concepts.

Narrow AI is already used in many industries, but has been responsible for many issues, like lawyers citing "hallucinated" - made up - legal precedents and recruiters using biased services to check potential employees.

AGI still lacks definition, so experts find it difficult to describe the risks that it might pose.

It is possible that AGI will be better at filtering out bias and incorrect information, but it is also possible new problems will arise.

One "very serious risk", Strait said, was an over-reliance on the new systems, "particularly as they start to mediate more sensitive human-to-human relationships".

AI systems also need huge amounts of data to train on and this could result in a massive expansion of surveillance infrastructure. Then there are security risks.

"If you collect (data), it's more likely to get leaked," Strait said.

There are also concerns over whether AI will replace human jobs.

Carl Frey, a professor of AI and work at the Oxford Internet Institute, said an AI apocalypse was unlikely and that "humans in the loop" would still be needed.

But there may be downward pressure on wages and middle-income jobs, especially with developments in advanced robotics.

"I don't see a lot of focus on using AI to develop new products and industries in the ways that it's often being portrayed. All applications boil down to some form of automation," Frey told Context.

As AI develops, governments must ensure there is competition in the market, as there are significant barriers to entry for new companies, Frey said.

There also needs to be a different approach to what the economy rewards, he added. It is currently in the interest of companies to focus on automation and cut labour costs, rather than create jobs.

"One of my concerns is that the more we emphasise the downsides, the more we emphasise the risks with AI, the more likely we are to get regulation, which means that we restrict entry and that we solidify the market position of incumbents," he said.

Last month, the U.S. Department of Homeland Security announced a board composed of the CEOs of OpenAI, Microsoft, Google, and Nvidia to advise the government on AI in critical infrastructure.

"If your goal is to minimise the risks of AI, you don't want open source. You want a few incumbents that you can easily control, but you're going to end up with a tech monopoly," Frey said.

AGI does not have a precise timeline. Jensen Huang, the chief executive of Nvidia, predicts that today's models could advance to the point of AGI within five years.

Huang's working definition of AGI is a program that can outperform humans on logic quizzes and exams by 8%.

OpenAI has indicated that a breakthrough in AI is coming soon with Q* (pronounced Q-Star), a secretive project reported in November last year.

Microsoft researchers have said that GPT-4, one of OpenAI's generative AI models, has "sparks of AGI". However, it does not "(come) close to being able to do anything that a human can do", nor does it have "inner motivation and goals" - another key aspect in some definitions of AGI.

But Microsoft President Brad Smith has rejected claims of a breakthrough.

"There's absolutely no probability that you're going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It's going to take years, if not many decades, but I still think the time to focus on safety is now," he said in November.

Frey suggested there would need to be significant innovation to get to AGI, due to both limitations in hardware and the amount of training data available.

"There are real question marks around whether we can develop AI on the current path. I don't think we can just scale up existing models (with) more compute, more data, and get to AGI."

Read the rest here:

Can AI ever be smarter than humans? | Context - Context

Recommendation and review posted by G. Smith

The AI revolution is coming to robots: how will it change them? – Nature.com

Posted: June 3, 2024 at 2:39 am

For a generation of scientists raised watching Star Wars, there's a disappointing lack of C-3PO-like droids wandering around our cities and homes. Where are the humanoid robots fuelled with common sense that can help around the house and workplace?

Rapid advances in artificial intelligence (AI) might be set to fill that hole. "I wouldn't be surprised if we are the last generation for which those sci-fi scenes are not a reality," says Alexander Khazatsky, a machine-learning and robotics researcher at Stanford University in California.

From OpenAI to Google DeepMind, almost every big technology firm with AI expertise is now working on bringing the versatile learning algorithms that power chatbots, known as foundation models, to robotics. The idea is to imbue robots with common-sense knowledge, letting them tackle a wide range of tasks. Many researchers think that robots could become really good, really fast. "We believe we are at the point of a step change in robotics," says Gerard Andrews, a marketing manager focused on robotics at technology company Nvidia in Santa Clara, California, which in March launched a general-purpose AI model designed for humanoid robots.

At the same time, robots could help to improve AI. Many researchers hope that bringing an embodied experience to AI training could take them closer to the dream of artificial general intelligence: AI that has human-like cognitive abilities across any task. "The last step to true intelligence has to be physical intelligence," says Akshara Rai, an AI researcher at Meta in Menlo Park, California.

But although many researchers are excited about the latest injection of AI into robotics, they also caution that some of the more impressive demonstrations are just that: demonstrations, often by companies that are eager to generate buzz. "It can be a long road from demonstration to deployment," says Rodney Brooks, a roboticist at the Massachusetts Institute of Technology in Cambridge, whose company iRobot invented the Roomba autonomous vacuum cleaner.

There are plenty of hurdles on this road, including scraping together enough of the right data for robots to learn from, dealing with temperamental hardware and tackling concerns about safety. "Foundation models for robotics should be explored," says Harold Soh, a specialist in human-robot interactions at the National University of Singapore. But he is sceptical, he says, that this strategy will lead to the revolution in robotics that some researchers predict.

The term robot covers a wide range of automated devices, from the robotic arms widely used in manufacturing, to self-driving cars and drones used in warfare and rescue missions. Most incorporate some sort of AI to recognize objects, for example. But they are also programmed to carry out specific tasks, work in particular environments or rely on some level of human supervision, says Joyce Sidopoulos, co-founder of MassRobotics, an innovation hub for robotics companies in Boston, Massachusetts. Even Atlas - a robot made by Boston Dynamics, a robotics company in Waltham, Massachusetts, which famously showed off its parkour skills in 2018 - works by carefully mapping its environment and choosing the best actions to execute from a library of built-in templates.

For most AI researchers branching into robotics, the goal is to create something much more autonomous and adaptable across a wider range of circumstances. This might start with robot arms that can pick and place any factory product, but evolve into humanoid robots that provide company and support for older people, for example. "There are so many applications," says Sidopoulos.

The human form is complicated and not always optimized for specific physical tasks, but it has the huge benefit of being perfectly suited to the world that people have built. A human-shaped robot would be able to physically interact with the world in much the same way that a person does.

However, controlling any robot - let alone a human-shaped one - is incredibly hard. Apparently simple tasks, such as opening a door, are actually hugely complex, requiring a robot to understand how different door mechanisms work, how much force to apply to a handle and how to maintain balance while doing so. The real world is extremely varied and constantly changing.

The approach now gathering steam is to control a robot using the same type of AI foundation models that power image generators and chatbots such as ChatGPT. These models use brain-inspired neural networks to learn from huge swathes of generic data. They build associations between elements of their training data and, when asked for an output, tap these connections to generate appropriate words or images, often with uncannily good results.

Likewise, a robot foundation model is trained on text and images from the Internet, providing it with information about the nature of various objects and their contexts. It also learns from examples of robotic operations. It can be trained, for example, on videos of robot trial and error, or videos of robots that are being remotely operated by humans, alongside the instructions that pair with those actions. A trained robot foundation model can then observe a scenario and use its learnt associations to predict what action will lead to the best outcome.
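As a rough illustration of that observe-and-predict loop, the sketch below stubs out a hypothetical foundation_model function that maps a camera image and an instruction to a simple end-effector action; the names and action format are assumptions for illustration, and a real system would replace the stub with a large vision-language model fine-tuned on robot demonstrations.

import random

def foundation_model(image, instruction):
    """Stand-in for a trained robot foundation model: returns a discretized
    gripper action (here, random). A real model would be a large
    vision-language network fine-tuned on robot demonstration data."""
    return {
        "dx": random.uniform(-1, 1),   # normalised end-effector displacement
        "dy": random.uniform(-1, 1),
        "dz": random.uniform(-1, 1),
        "gripper": random.choice(["open", "close"]),
    }

def control_loop(camera, instruction, steps=10):
    """Observe the scene, predict an action, execute it, repeat."""
    for _ in range(steps):
        image = camera()                       # current robot-eye view
        action = foundation_model(image, instruction)
        print("executing", action)             # a real system would send this to the arm

fake_camera = lambda: [[0] * 8 for _ in range(8)]   # placeholder 8x8 image
control_loop(fake_camera, "move the drink can onto the picture", steps=3)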

Google DeepMind has built one of the most advanced robotic foundation models, known as Robotic Transformer 2 (RT-2), which can operate a mobile robot arm built by its sister company Everyday Robots in Mountain View, California. Like other robotic foundation models, it was trained on both the Internet and videos of robotic operation. Thanks to the online training, RT-2 can follow instructions even when those commands go beyond what the robot has seen another robot do before [1]. For example, it can move a drink can onto a picture of Taylor Swift when asked to do so, even though Swift's image was not in any of the 130,000 demonstrations that RT-2 had been trained on.

In other words, knowledge gleaned from Internet trawling (such as what the singer Taylor Swift looks like) is being carried over into the robot's actions. "A lot of Internet concepts just transfer," says Keerthana Gopalakrishnan, an AI and robotics researcher at Google DeepMind in San Francisco, California. This radically reduces the amount of physical data that a robot needs to have absorbed to cope in different situations, she says.

But to fully understand the basics of movements and their consequences, robots still need to learn from lots of physical data. And therein lies a problem.

Although chatbots are being trained on billions of words from the Internet, there is no equivalently large data set for robotic activity. This lack of data has left robotics in the dust, says Khazatsky.

Pooling data is one way around this. Khazatsky and his colleagues have created DROID [2], an open-source data set that brings together around 350 hours of video data from one type of robot arm (the Franka Panda 7DoF robot arm, built by Franka Robotics in Munich, Germany), as it was being remotely operated by people in 18 laboratories around the world. The robot-eye-view camera has recorded visual data in hundreds of environments, including bathrooms, laundry rooms, bedrooms and kitchens. This diversity helps robots to perform well on tasks with previously unencountered elements, says Khazatsky.

When prompted to "pick up extinct animal", Google's RT-2 model selects the dinosaur figurine from a crowded table. Credit: Google DeepMind

Gopalakrishnan is part of a collaboration of more than a dozen academic labs that is also bringing together robotic data, in its case from a diversity of robot forms, from single arms to quadrupeds. The collaborators' theory is that learning about the physical world in one robot body should help an AI to operate another, in the same way that learning in English can help a language model to generate Chinese, because the underlying concepts about the world that the words describe are the same. This seems to work. The collaboration's resulting foundation model, called RT-X, which was released in October 2023 [3], performed better on real-world tasks than did models the researchers trained on one robot architecture.

Many researchers say that having this kind of diversity is essential. "We believe that a true robotics foundation model should not be tied to only one embodiment," says Peter Chen, an AI researcher and co-founder of Covariant, an AI firm in Emeryville, California.

Covariant is also working hard on scaling up robot data. The company, which was set up in part by former OpenAI researchers, began collecting data in 2018 from 30 variations of robot arms in warehouses across the world, which all run using Covariant software. Covariant's Robotics Foundation Model 1 (RFM-1) goes beyond collecting video data to encompass sensor readings, such as how much weight was lifted or force applied. This kind of data should help a robot to perform tasks such as manipulating a squishy object, says Gopalakrishnan - in theory, helping a robot to know, for example, how not to bruise a banana.

Covariant has built up a proprietary database that includes hundreds of billions of "tokens" - units of real-world robotic information - which Chen says is roughly on a par with the scale of data that trained GPT-3, the 2020 version of OpenAI's large language model. "We have way more real-world data than other people, because that's what we have been focused on," Chen says. RFM-1 is poised to roll out soon, says Chen, and should allow operators of robots running Covariant's software to type or speak general instructions, such as "pick up apples from the bin".

Another way to access large databases of movement is to focus on a humanoid robot form so that an AI can learn by watching videos of people - of which there are billions online. Nvidia's Project GR00T foundation model, for example, is ingesting videos of people performing tasks, says Andrews. Although copying humans has huge potential for boosting robot skills, doing so well is hard, says Gopalakrishnan. For example, robot videos generally come with data about context and commands - the same isn't true for human videos, she says.

A final and promising way to find limitless supplies of physical data, researchers say, is through simulation. Many roboticists are working on building 3D virtual-reality environments, the physics of which mimic the real world, and then wiring those up to a robotic brain for training. Simulators can churn out huge quantities of data and allow humans and robots to interact virtually, without risk, in rare or dangerous situations, all without wearing out the mechanics. "If you had to get a farm of robotic hands and exercise them until they achieve [a high] level of dexterity, you will blow the motors," says Nvidia's Andrews.

But making a good simulator is a difficult task. "Simulators have good physics, but not perfect physics, and making diverse simulated environments is almost as hard as just collecting diverse data," says Khazatsky.

Meta and Nvidia are both betting big on simulation to scale up robot data, and have built sophisticated simulated worlds: Habitat from Meta and Isaac Sim from Nvidia. In them, robots gain the equivalent of years of experience in a few hours, and, in trials, they then successfully apply what they have learnt to situations they have never encountered in the real world. "Simulation is an extremely powerful but underrated tool in robotics, and I am excited to see it gaining momentum," says Rai.
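The pattern the simulation approach relies on can be sketched in a few lines. This toy one-dimensional "reach the target" world is purely illustrative - it is not the Habitat or Isaac Sim API - but it shows how a simulator can mass-produce (observation, action, next observation) transitions far faster than a physical robot could.

import random

def simulate_episode(steps=50):
    """Roll out one synthetic trajectory in a toy 1-D 'reach the target' world.
    Real simulators produce far richer observations, but the loop is the same:
    roll out, record the transitions, repeat."""
    position, target = 0.0, random.uniform(-5, 5)
    episode = []
    for _ in range(steps):
        action = 0.1 if target > position else -0.1                 # scripted demo policy
        next_position = position + action + random.gauss(0, 0.01)   # imperfect "physics"
        episode.append({"obs": position, "action": action, "next_obs": next_position})
        position = next_position
    return episode

# Churn out a training set far faster than any real robot could collect it.
dataset = [simulate_episode() for _ in range(1000)]
print(len(dataset), "episodes,", sum(len(e) for e in dataset), "transitions")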

Many researchers are optimistic that foundation models will help to create general-purpose robots that can replace human labour. In February, Figure, a robotics company in Sunnyvale, California, raised US$675 million in investment for its plan to use language and vision models developed by OpenAI in its general-purpose humanoid robot. A demonstration video shows a robot giving a person an apple in response to a general request for something to eat. The video on X (the platform formerly known as Twitter) has racked up 4.8 million views.

Exactly how this robot's foundation model has been trained, along with any details about its performance across various settings, is unclear (neither OpenAI nor Figure responded to Nature's requests for an interview). Such demos should be taken with a pinch of salt, says Soh. The environment in the video is conspicuously sparse, he says. Adding a more complex environment could potentially confuse the robot in the same way that such environments have fooled self-driving cars. "Roboticists are very sceptical of robot videos for good reason, because we make them and we know that out of 100 shots, there's usually only one that works," Soh says.

As the AI research community forges ahead with robotic brains, many of those who actually build robots caution that the hardware also presents a challenge: robots are complicated and break a lot. Hardware has been advancing, Chen says, but "a lot of people looking at the promise of foundation models just don't know the other side of how difficult it is to deploy these types of robots," he says.

Another issue is how far robot foundation models can get using the visual data that make up the vast majority of their physical training. Robots might need reams of other kinds of sensory data, for example from the sense of touch or proprioception - a sense of where their body is in space - says Soh. Those data sets don't yet exist. "There's all this stuff that's missing, which I think is required for things like a humanoid to work efficiently in the world," he says.

Releasing foundation models into the real world comes with another major challenge: safety. In the two years since they started proliferating, large language models have been shown to come up with false and biased information. They can also be tricked into doing things that they are programmed not to do, such as telling users how to make a bomb. Giving AI systems a body brings these types of mistake and threat to the physical world. "If a robot is wrong, it can actually physically harm you or break things or cause damage," says Gopalakrishnan.

Valuable work going on in AI safety will transfer to the world of robotics, says Gopalakrishnan. In addition, her team has imbued some robot AI models with rules that layer on top of their learning, such as not to even attempt tasks that involve interacting with people, animals or other living organisms. "Until we have confidence in robots, we will need a lot of human supervision," she says.

Despite the risks, there is a lot of momentum in using AI to improve robots and using robots to improve AI. Gopalakrishnan thinks that hooking up AI brains to physical robots will improve the foundation models, for example giving them better spatial reasoning. Meta, says Rai, is among those pursuing the hypothesis that true intelligence can only emerge when an agent can interact with its world. That real-world interaction, some say, is what could take AI beyond learning patterns and making predictions, to truly understanding and reasoning about the world.

What the future holds depends on who you ask. Brooks says that robots will continue to improve and find new applications, but their eventual use is nowhere near as sexy as humanoids replacing human labour. But others think that developing a functional and safe humanoid robot that is capable of cooking dinner, running errands and folding the laundry is possible, but could just cost hundreds of millions of dollars. "I'm sure someone will do it," says Khazatsky. "It'll just be a lot of money, and time."

Originally posted here:

The AI revolution is coming to robots: how will it change them? - Nature.com

Recommendation and review posted by G. Smith

OpenAI says it’s charting a "path to AGI" with its next frontier AI model – ITPro

Posted: June 3, 2024 at 2:39 am

OpenAI has revealed that it recently started work on training its next frontier large language model (LLM).

The first version of OpenAIs ChatGPT debuted back in November 2022 and became an unexpected breakthrough hit which launched generative AI into public consciousness.

Since then, there have been a number of updates to the underlying model. The first version of ChatGPT was built on GPT-3.5, which finished training in early 2022, while GPT-4 arrived in March 2023. The most recent, GPT-4o, arrived in May this year.

Now OpenAI is working on a new LLM and said it anticipates the system "to bring us to the next level of capabilities on our path to AGI" [artificial general intelligence].

AGI is a hotly contested concept whereby an AI would - like humans - be good at adapting to many different tasks, including ones it has never been trained on, rather than being designed for one particular use.

AI researchers are split on whether AGI could ever exist or whether the search for it may even be based on a misunderstanding of how intelligence works.

OpenAI provided no details of what the next model might do, but as its LLMs have evolved, the capabilities of the underlying models have expanded.


While GPT-3 could only deal with text, GPT-4 is able to accept images as well, while GPT-4o has been optimized for voice communication. Context windows have also increased markedly with each iteration, although the size of the models and technical details remain secret.

Sam Altman, CEO at OpenAI, has stated that GPT-4 cost more than $100 million to train, per Wired, and the model is rumored to have more than one trillion parameters. This would make it one of the biggest, if not the biggest, LLMs currently in existence.

That doesn't necessarily mean the next model will be even larger; Altman has previously suggested the race for ever-bigger models may be coming to an end.

Smaller models working together might be a more useful way of using generative AI, he has said.

And even if OpenAI has started training its next model, don't expect to see the impact of it very soon. Training models can take many months, and that can be just the first step. It took six months of testing after training was finished before OpenAI released GPT-4.

The company also said it will create a new Safety and Security Committee led by OpenAI directors Bret Taylor, Adam D'Angelo, Nicole Seligman, and Altman. This committee will be responsible for making recommendations to the board on critical safety and security decisions for OpenAI projects and operations.

One of its first tasks will be to evaluate and develop OpenAI's processes and safeguards over the next 90 days. After that, the committee will share its recommendations with the board.

Some may raise eyebrows at the safety committee being made up of members of OpenAI's existing board.

Dr Ilia Kolochenko, CEO at ImmuniWeb and adjunct professor of cyber security at Capital Technology University, questioned whether the move will actually deliver positive outcomes as far as AI safety is concerned.

"Being safe does not necessarily imply being accurate, reliable, fair, transparent, explainable and non-discriminative - the absolutely crucial characteristics of GenAI solutions," Kolochenko said. "In view of the past turbulence at OpenAI, I am not sure that the new committee will make a radical improvement."

The launch of the safety committee comes amid greater calls for more rigorous regulation and oversight of LLM development. Most recently, a former OpenAI board member argued that self-governance isn't the right approach for AI firms and that a strong regulatory framework is needed.

OpenAI has made public efforts to calm AI safety fears in recent months. It was among a host of major industry players to sign up to a safe development pledge at the Seoul AI Summit that could see them pull the plug on their own models if they cannot be built or deployed safely.

But these commitments are voluntary and come with plenty of caveats, leading some experts to call for stronger legislation and requirements for tougher testing of LLMs.

Because of the potentially large risks associated with the technology, AI companies should be subject to a similar regulatory framework as pharmaceutical companies, critics argue, where companies have to hit standards set by regulators who can make the final decision on if and when a product can be released.

Read the rest here:

OpenAI says it's charting a "path to AGI" with its next frontier AI model - ITPro

Recommendation and review posted by G. Smith

Responsible AI needs further collaboration – Chinadaily.com.cn – China Daily

Posted: June 3, 2024 at 2:39 am

Wang Lei (standing), chairman of Wenge Tech Corporation, talks to participants at the World Summit on the Information Society. For China Daily

Further efforts are needed to build responsible artificial intelligence by promoting technological openness, fostering collaboration and establishing consensus-driven governance to fully unleash AI's potential to boost productivity across various industries, an executive said.

The remarks were made by Wang Lei, chairman of Wenge Tech Corporation, a Beijing-based AI company recognized by the Ministry of Industry and Information Technology as a "little giant" firm - novel and elite small and medium-sized enterprises that specialize in niche markets. Wang delivered his speech at the recently concluded World Summit on the Information Society.

"AI has made extraordinary progress in recent years. Innovations like ChatGPT and hundreds of other large language models (LLMs) have captured global attention, profoundly transforming how we work and live," said Wang.

"Now we are entering a new era of Artificial General Intelligence (AGI). Enterprise AI has proven to create significant value for customers in fields such as government operations, ESGs, supply chain management, and defense intelligence, excelling in analysis, forecasting, decision-making, optimization, and risk monitoring," he added.

A recent report from the think-tank a16z and IDC reveals that global enterprise investments in AI have surged from an average of $7 million to $18 million, a 2.5-fold increase. In China, the number of LLMs grew from 16 to 318 last year, with over 80 percent focusing on industry-specific applications, Wang noted.

He predicted a promising future for Enterprise AI, with decision intelligence being the ultimate goal. "Complex problems will be broken down into smaller tasks, each resolved by different AI models. AI agents and multi-agent collaboration frameworks will optimize decision-making strategies and action planning, integrating AI into workflows, data streams, and decision-making processes within industry-specific scenarios."
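As a purely illustrative sketch of that decomposition pattern - the agent names and outputs below are hypothetical placeholders, not Wenge's framework - a coordinator might break a problem into sub-tasks, route each to a specialised model, and merge the answers into a single decision:

def forecaster(task):
    return f"forecast for '{task}': demand up 4% next quarter"      # placeholder output

def risk_monitor(task):
    return f"risk check for '{task}': no supplier alerts"

def optimizer(task):
    return f"plan for '{task}': reorder 1,200 units from backup supplier"

AGENTS = {"forecast": forecaster, "risk": risk_monitor, "optimize": optimizer}

def decompose(problem):
    """A coordinator breaks a complex problem into smaller sub-tasks,
    each routed to a specialised model or agent."""
    return [("forecast", problem), ("risk", problem), ("optimize", problem)]

def solve(problem):
    results = [AGENTS[kind](sub) for kind, sub in decompose(problem)]
    return "\n".join(results)     # a real framework would merge these into one decision

print(solve("stabilise Q3 supply chain"))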

Wang proposed a three-step methodology for successful Enterprise AI transformation: data engineering, model engineering, and domain engineering.

"To build responsible AI, we must address several challenges head-on," he emphasized. "Promoting technological openness can reduce regional and industrial imbalances, fostering collaboration can mitigate unfair usage restrictions, and establishing consensus-driven governance can significantly enhance AI safety."

Continue reading here:

Responsible AI needs further collaboration - Chinadaily.com.cn - China Daily

Recommendation and review posted by G. Smith

OpenAI announces new Safety and Security Committee as the AI race hots up and concerns grow around ethics – TechRadar

Posted: June 3, 2024 at 2:39 am

OpenAI, the tech company behind ChatGPT, has announced that it's formed a Safety and Security Committee that's intended to make the firm's approach to AI more responsible and consistent in terms of security.

It's no secret that OpenAI and CEO Sam Altman - who will be on the committee - want to be the first to reach AGI (artificial general intelligence), broadly understood as artificial intelligence that resembles human-like intelligence and can teach itself. Having recently debuted GPT-4o to the public, OpenAI is already training the next-generation GPT model, which it expects to be one step closer to AGI.

GPT-4o debuted to the public on May 13 as a next-level multimodal (capable of processing in multiple modes) generative AI model, able to deal with input and respond with audio, text, and images. It was met with a generally positive reception, but more discussion has since arisen regarding its actual capabilities, implications, and the ethics around technologies like it.

Just over a week ago, OpenAI confirmed to Wired that its previous team responsible for overseeing the safety of its AI models had been disbanded and reabsorbed into other existing teams. This followed the notable departures of key company figures such as OpenAI co-founder and chief scientist Ilya Sutskever, and Jan Leike, co-lead of the AI safety "superalignment" team. Their departures were reportedly related to concerns that OpenAI, and Altman in particular, was not doing enough to develop its technologies responsibly and was forgoing due diligence.

This has seemingly given OpenAI a lot to reflect on, and it's formed the oversight committee in response. In the announcement post about the committee being formed, OpenAI also states that it welcomes "a robust debate at this important moment". The first job of the committee will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days, and then share recommendations with the company's board.

The recommendations that are subsequently adopted will be shared publicly in a manner that is consistent with safety and security.

The committee will be made up of Chairman Bret Taylor, Quora CEO Adam D'Angelo, and Nicole Seligman, a former executive of Sony Entertainment, alongside six OpenAI employees, including Sam Altman, as mentioned, and John Schulman, a researcher and co-founder of OpenAI. According to Bloomberg, OpenAI stated that it will also consult external experts as part of this process.


I'll reserve my judgment for when OpenAI's adopted recommendations are published and I can see how they're implemented, but intuitively, I don't have the greatest confidence that OpenAI (or any major tech firm) is prioritizing safety and ethics as much as they are trying to win the AI race.

That's a shame, and it's unfortunate that, generally speaking, those who are striving to be the best no matter what are often slow to consider the cost and effects of their actions, and how they might impact others in a very real way - even if large numbers of people are potentially going to be affected.

I'll be happy to be proven wrong, and I hope I am. In an ideal world, all tech companies, whether they're in the AI race or not, should prioritize the ethics and safety of what they're doing at the same level that they strive for innovation. So far in the realm of AI, that does not appear to be the case from where I'm standing, and unless there are real consequences, I don't see companies like OpenAI being swayed much to change their overall ethos or behavior.

See the article here:

OpenAI announces new Safety and Security Committee as the AI race hots up and concerns grow around ethics - TechRadar

Recommendation and review posted by G. Smith

