Category Archives: Artificial Super Intelligence

Researchers at UTSA use artificial intelligence to improve cancer … – UTSA

Patients undergoing radiotherapy are currently given a computed tomography (CT) scan to help physicians see where the tumor is on an organ, for example a lung. A treatment plan to remove the cancer with targeted radiation doses is then made based on that CT image.

Rad says that cone-beam computed tomography (CBCT) is often integrated into the process after each dosage to see how much a tumor has shrunk, but CBCTs are low-quality images that are time-consuming to read and prone to misinterpretation.

UTSA researchers used domain adaptation techniques to integrate information from CBCT and initial CT scans to improve the accuracy of tumor evaluation. Their generative AI approach visualizes the tumor region affected by radiotherapy, improving reliability in clinical settings.

This improved approach enables physicians to more accurately see how much a tumor has decreased week by week and to plan the following week's radiation dose with greater precision. Ultimately, the approach could help clinicians better target tumors while sparing the surrounding critical organs and healthy tissue.

Nikos Papanikolaou, a professor in the Departments of Radiation Oncology and Radiology at UT Health San Antonio, provided the patient data that enabled the researchers to advance their study.

"UTSA and UT Health San Antonio have a shared commitment to deliver the best possible health care to members of our community," Papanikolaou said. "This study is a wonderful example of how artificial intelligence can be used to develop new personalized treatments for the benefit of society."

The American Society for Radiation Oncology stated in a 2020 report that between half and two-thirds of people diagnosed with cancer were expected to receive radiotherapy treatment. According to the American Cancer Society, the number of new cancer cases in the U.S. in 2023 is projected to be nearly two million.

Arkajyoti Roy, UTSA assistant professor of management science and statistics, says he and his collaborators have been interested in using AI and deep learning models to improve treatments over the last few years.

"Besides just building more advanced AI models for radiotherapy, we also are super interested in the limitations of these models," he said. "All models make errors, and for something like cancer treatment it's very important not only to understand the errors but to try to figure out how we can limit their impact; that's really the goal, from my perspective, of this project."

The researchers' study included 16 lung cancer patients whose pre-treatment CT and mid-treatment weekly CBCT images were captured over a six-week period. Results show that the researchers' new approach improved tumor shrinkage predictions for weekly treatment plans, with significant improvement in lung dose sparing. Their approach also demonstrated a reduction of up to 35% in radiation-induced pneumonitis, or lung damage.

"We're excited about this direction of research that will focus on making sure that cancer radiation treatments are robust to AI model errors," Roy said. "This work would not be possible without the interdisciplinary team of researchers from different departments."

View post:

Researchers at UTSA use artificial intelligence to improve cancer ... - UTSA

Posted in Artificial Super Intelligence | Comments Off on Researchers at UTSA use artificial intelligence to improve cancer … – UTSA

Some Glimpse AGI in ChatGPT. Others Call It a Mirage – WIRED

Sébastien Bubeck, a machine learning researcher at Microsoft, woke up one night last September thinking about artificial intelligence and unicorns.

Bubeck had recently gotten early access to GPT-4, a powerful text generation algorithm from OpenAI and an upgrade to the machine learning model at the heart of the wildly popular chatbot ChatGPT. Bubeck was part of a team working to integrate the new AI system into Microsoft's Bing search engine. But he and his colleagues kept marveling at how different GPT-4 seemed from anything they'd seen before.

GPT-4, like its predecessors, had been fed massive amounts of text and code and trained to use the statistical patterns in that corpus to predict the words that should be generated in reply to a piece of text input. But to Bubeck, the system's output seemed to do so much more than just make statistically plausible guesses.
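
The mechanism described here, predicting likely next words from statistical patterns in a training corpus, can be sketched in miniature. The toy Python bigram counter below is only an illustration of that training objective under simplified assumptions; GPT-4 itself is a large neural network rather than a lookup table, but the predict-the-next-word goal is the same.

# Toy illustration of next-word prediction from corpus statistics
# (a bigram counter, not OpenAI's architecture or training method).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, if any."""
    seen = follows.get(word)
    return seen.most_common(1)[0][0] if seen else None

# Generate a continuation by repeatedly predicting the most likely next word.
text = ["the"]
for _ in range(5):
    nxt = predict_next(text[-1])
    if nxt is None:
        break
    text.append(nxt)

print(" ".join(text))  # a statistically plausible word sequence, with no understanding behind it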

That night, Bubeck got up, went to his computer, and asked GPT-4 to draw a unicorn using TikZ, a relatively obscure programming language for generating scientific diagrams. Bubeck was using a version of GPT-4 that only worked with text, not images. But the code the model presented him with, when fed into a TikZ rendering software, produced a crude yet distinctly unicorny image cobbled together from ovals, rectangles, and a triangle. To Bubeck, such a feat surely required some abstract grasp of the elements of such a creature. "Something new is happening here," he says. "Maybe for the first time we have something that we could call intelligence."

How intelligent AI is becoming, and how much to trust the increasingly common feeling that a piece of software is intelligent, has become a pressing, almost panic-inducing question.

After OpenAI released ChatGPT, then powered by GPT-3, last November, it stunned the world with its ability to write poetry and prose on a vast array of subjects, solve coding problems, and synthesize knowledge from the web. But awe has been coupled with shock and concern about the potential for academic fraud, misinformation, and mass unemployment, and fears that companies like Microsoft are rushing to develop technology that could prove dangerous.

Understanding the potential or risks of AI's new abilities means having a clear grasp of what those abilities are, and are not. But while there's broad agreement that ChatGPT and similar systems give computers significant new skills, researchers are only just beginning to study these behaviors and determine what's going on behind the prompt.

While OpenAI has promoted GPT-4 by touting its performance on bar and med school exams, scientists who study aspects of human intelligence say its remarkable capabilities differ from our own in crucial ways. The model's tendency to make things up is well known, but the divergence goes deeper. And with millions of people using the technology every day and companies betting their future on it, this is a mystery of huge importance.

Bubeck and other AI researchers at Microsoft were inspired to wade into the debate by their experiences with GPT-4. A few weeks after the system was plugged into Bing and its new chat feature was launched, the company released a paper claiming that in early experiments, GPT-4 showed "sparks of artificial general intelligence."

The authors presented a scattering of examples in which the system performed tasks that appear to reflect more general intelligence, significantly beyond previous systems such as GPT-3. The examples show that unlike most previous AI programs, GPT-4 is not limited to a specific task but can turn its hand to all sorts of problems, a necessary quality of general intelligence.

The authors also suggest that these systems demonstrate an ability to reason, plan, learn from experience, and transfer concepts from one modality to another, such as from text to imagery. "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system," the paper states.

Bubeck's paper, written with 14 others, including Microsoft's chief scientific officer, was met with pushback from AI researchers and experts on social media. Use of the term AGI, a vague descriptor sometimes used to allude to the idea of super-intelligent or godlike machines, irked some researchers, who saw it as a symptom of the current hype.

The fact that Microsoft has invested more than $10 billion in OpenAI suggested to some researchers that the company's AI experts had an incentive to hype GPT-4's potential while downplaying its limitations. Others griped that the experiments are impossible to replicate because GPT-4 rarely responds in the same way when a prompt is repeated, and because OpenAI has not shared details of its design. Of course, people also asked why GPT-4 still makes ridiculous mistakes if it is really so smart.

Talia Ringer, a professor at the University of Illinois at Urbana-Champaign, says Microsoft's paper "shows some interesting phenomena and then makes some really over-the-top claims." Touting systems that are highly intelligent encourages users to trust them even when they're deeply flawed, she says. Ringer also points out that while it may be tempting to borrow ideas from systems developed to measure human intelligence, many have proven unreliable and even rooted in racism.

Bubeck admits that his study has its limits, including the reproducibility issue, and that GPT-4 also has big blind spots. He says use of the term AGI was meant to provoke debate. "Intelligence is by definition general," he says. "We wanted to get at the intelligence of the model and how broad it is, that it covers many, many domains."

But for all of the examples cited in Bubeck's paper, there are many that show GPT-4 getting things blatantly wrong, often on the very tasks Microsoft's team used to tout its success. For example, GPT-4's ability to suggest a stable way to stack a challenging collection of objects (a book, four tennis balls, a nail, a wine glass, a wad of gum, and some uncooked spaghetti) seems to point to a grasp of the physical properties of the world that is second nature to humans, including infants. However, changing the items and the request can result in bizarre failures that suggest GPT-4's grasp of physics is not complete or consistent.

Bubeck notes that GPT-4 lacks a working memory and is hopeless at planning ahead. "GPT-4 is not good at this, and maybe large language models in general will never be good at it," he says, referring to the large-scale machine learning algorithms at the heart of systems like GPT-4. "If you want to say that intelligence is planning, then GPT-4 is not intelligent."

One thing beyond debate is that the workings of GPT-4 and other powerful AI language models do not resemble the biology of brains or the processes of the human mind. The algorithms must be fed an absurd amount of training data (a significant portion of all the text on the internet), far more than a human needs to learn language skills. The experience that imbues GPT-4, and things built with it, with smarts is shoveled in wholesale rather than gained through interaction with the world and didactic dialog. And with no working memory, ChatGPT can maintain the thread of a conversation only by feeding itself the history of the conversation over again at each turn. Yet despite these differences, GPT-4 is clearly a leap forward, and scientists who research intelligence say its abilities need further interrogation.
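
That bookkeeping trick, re-sending the whole transcript on every turn, is easy to sketch. The Python snippet below is a generic illustration with invented names, not OpenAI's code; the generate argument stands in for whatever call actually reaches the model.

# Sketch of chat "memory": the growing message list is simply re-sent each turn.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat_turn(user_text, generate):
    """Append the user's message, send the full history, and store the reply.

    `generate` is a placeholder for the real model call.
    """
    history.append({"role": "user", "content": user_text})
    reply = generate(history)  # the model sees the entire transcript every time
    history.append({"role": "assistant", "content": reply})
    return reply

# Dummy "model" that just reports how much context it was handed.
print(chat_turn("Hello!", generate=lambda msgs: f"(model saw {len(msgs)} messages)"))
print(chat_turn("And again?", generate=lambda msgs: f"(model saw {len(msgs)} messages)"))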

A team of cognitive scientists, linguists, neuroscientists, and computer scientists from MIT, UCLA, and the University of Texas at Austin posted a research paper in January that explores how the abilities of large language models differ from those of humans.

The group concluded that while large language models demonstrate impressive linguistic skill, including the ability to coherently generate a complex essay on a given theme, that is not the same as understanding language and how to use it in the world. That disconnect may be why language models have begun to imitate the kind of commonsense reasoning needed to stack objects or solve riddles. But the systems still make strange mistakes when it comes to understanding social relationships, how the physical world works, and how people think.

The way these models use language, by predicting the words most likely to come after a given string, is very different from how humans speak or write to convey concepts or intentions. The statistical approach can cause chatbots to follow and reflect back the language of users' prompts to the point of absurdity.

When a chatbot tells someone to leave their spouse, for example, it only comes up with the answer that seems most plausible given the conversational thread. ChatGPT and similar bots will use the first person because they are trained on human writing. But they have no consistent sense of self and can change their claimed beliefs or experiences in an instant. OpenAI also uses feedback from humans to guide a model toward producing answers that people judge as more coherent and correct, which may make the model provide answers deemed more satisfying regardless of how accurate they are.

Josh Tenenbaum, a contributor to the January paper and a professor at MIT who studies human cognition and how to explore it using machines, says GPT-4 is remarkable but quite different from human intelligence in a number of ways. For instance, it lacks the kind of motivation that is crucial to the human mind. "It doesn't care if it's turned off," Tenenbaum says. And he says humans do not simply follow their programming but invent new goals for themselves based on their wants and needs.

Tenenbaum says some key engineering shifts happened between GPT-3 and GPT-4 and ChatGPT that made them more capable. For one, the model was trained on large amounts of computer code. He and others have argued that the human brain may use something akin to a computer program to handle some cognitive tasks, so perhaps GPT-4 learned some useful things from the patterns found in code. He also points to the feedback ChatGPT received from humans as a key factor.

But he says the resulting abilities aren't the same as the general intelligence that characterizes human intelligence. "I'm interested in the cognitive capacities that led humans individually and collectively to where we are now, and that's more than just an ability to perform a whole bunch of tasks," he says. "We make the tasks, and we make the machines that solve them."

Tenenbaum also says it isn't clear that future generations of GPT would gain these sorts of capabilities, unless some different techniques are employed. This might mean drawing from areas of AI research that go beyond machine learning. And he says it's important to think carefully about whether we want to engineer systems that way, as doing so could have unforeseen consequences.

Another author of the January paper, Kyle Mahowald, an assistant professor of linguistics at the University of Texas at Austin, says it's a mistake to base any judgements on single examples of GPT-4's abilities. He says tools from cognitive psychology could be useful for gauging the intelligence of such models. But he adds that the challenge is complicated by the opacity of GPT-4. "It matters what is in the training data, and we don't know. If GPT-4 succeeds on some commonsense reasoning tasks for which it was explicitly trained and fails on others for which it wasn't, it's hard to draw conclusions based on that."

Whether GPT-4 can be considered a step toward AGI, then, depends entirely on your perspective. Redefining the term altogether may provide the most satisfying answer. "These days my viewpoint is that this is AGI, in that it is a kind of intelligence and it is general, but we have to be a little bit less, you know, hysterical about what AGI means," says Noah Goodman, an associate professor of psychology, computer science, and linguistics at Stanford University.

Unfortunately, GPT-4 and ChatGPT are designed to resist such easy reframing. They are smart but offer little insight into how or why. What's more, the way humans use language relies on having a mental model of an intelligent entity on the other side of the conversation to interpret the words and ideas being expressed. We can't help but see flickers of intelligence in something that uses language so effortlessly. "If the pattern of words is meaning-carrying, then humans are designed to interpret them as intentional, and accommodate that," Goodman says.

The fact that AI is not like us, and yet seems so intelligent, is still something to marvel at. "We're getting this tremendous amount of raw intelligence without it necessarily coming with an ego-viewpoint, goals, or a sense of coherent self," Goodman says. "That, to me, is just fascinating."

Read more:

Some Glimpse AGI in ChatGPT. Others Call It a Mirage - WIRED

Posted in Artificial Super Intelligence | Comments Off on Some Glimpse AGI in ChatGPT. Others Call It a Mirage – WIRED

Fears of artificial intelligence overblown – Independent Australia

While AI is still a developing technology and not without its limitations, robotic world domination is far from something we need to fear, writes Bappa Sinha.

THE UNPRECEDENTED popularity of ChatGPT has turbocharged the artificial intelligence (AI) hype machine. We are being bombarded daily by news articles announcing AI as humankind's greatest invention. AI is qualitatively different, transformational, revolutionary and will change everything, they say.

OpenAI, the company behind ChatGPT, has announced GPT-4, a major upgrade of the technology behind ChatGPT. Already, Microsoft researchers are claiming that GPT-4 shows "sparks" of artificial general intelligence, or human-like intelligence: the holy grail of AI research. Fantastic claims are made about reaching the point of AI Singularity, of machines equalling and surpassing human intelligence.

The business press talks about hundreds of millions of job losses as AI would replace humans in a whole host of professions. Others worry about a sci-fi-like near future where super-intelligent AI goes rogue and destroys or enslaves humankind. Are these predictions grounded in reality, or is this just over-the-board hype that the tech industry and the venture capitalist hype machine are so good at selling?

The current breed of AI models is based on things called neural networks. While the term neural conjures up images of an artificial brain simulated using computer chips, the reality is that these networks are nothing like how the human brain actually works; they have no real similarity to the network of neurons in the brain. The terminology was, however, a major reason artificial neural networks became popular and widely adopted despite their serious limitations and flaws.

The machine learning algorithms in use today extend statistical methods in ways that lack theoretical justification. Traditional statistical methods have the virtue of simplicity: it is easy to understand what they do, and when and why they work. They come with mathematical assurances that the results of their analysis are meaningful, assuming very specific conditions.

Since the real world is complicated, those conditions never hold. As a result, statistical predictions are seldom accurate. Economists, epidemiologists and statisticians acknowledge this, then use intuition to apply statistics to get approximate guidance for specific purposes in specific contexts.

These caveats are often overlooked, leading to the misuse of traditional statistical methods. Such misuse sometimes has catastrophic consequences, as in the 2008 Global Financial Crisis or the Long-Term Capital Management blowup in 1998, which almost brought down the global financial system. Remember Mark Twain's famous quote: "Lies, damned lies and statistics."

Machine learning relies on the complete abandonment of the caution which should be associated with the judicious use of statistical methods. The real world is messy and chaotic, hence impossible to model using traditional statistical methods. So the answer from the world of AI is to drop any pretence at theoretical justification on why and how these AI models, which are many orders of magnitude more complicated than traditional statistical methods, should work.

Freedom from these principled constraints makes the AI models more powerful. They are effectively elaborate and complicated curve-fitting exercises which empirically fit observed data without us understanding the underlying relationships.
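
To make the curve-fitting point concrete, here is a small Python example of my own (not drawn from the article): a flexible model can match the data it was fitted to almost perfectly while capturing nothing reliable about the relationship that generated it.

# Toy curve fitting: a high-degree polynomial "explains" the observed points
# without encoding any understanding of the sine law that produced them.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.size)  # noisy observations

coeffs = np.polyfit(x, y, deg=12)  # very flexible fit
print("average error on the fitted points:", np.abs(np.polyval(coeffs, x) - y).mean())

# Outside the observed range the fit is meaningless: the polynomial explodes,
# while the true function sin(2*pi*x) stays between -1 and 1.
print("polynomial prediction at x = 1.5:", np.polyval(coeffs, 1.5))
print("true value at x = 1.5:", np.sin(2 * np.pi * 1.5))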

But it's also true that these AI models can sometimes do things that no other technology can do at all. Some outputs are astonishing, such as the passages ChatGPT can generate or the images that DALL-E can create. This is fantastic at wowing people and creating hype. The reason they work so well is the mind-boggling quantity of training data, enough to cover almost all text and images created by humans.

Even with this scale of training data and billions of parameters, the AI models don't work spontaneously but require kludgy ad hoc workarounds to produce desirable results.

Even with all the hacks, the models often develop spurious correlations. In other words, they work for the wrong reasons. For example, it has been reported that many vision models work by exploiting correlations pertaining to image texture, background, angle of the photograph and specific features. These vision AI models then give bad results in uncontrolled situations.

For example, a leopard print sofa would be identified as a leopard. The models don't work when a tiny amount of fixed-pattern noise undetectable by humans is added to the images, or when the images are rotated, say in the case of a post-accident upside-down car. ChatGPT, for all its impressive prose, poetry and essays, is unable to do simple multiplication of two large numbers, which a calculator from the 1970s can do easily.
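
The texture-shortcut failure described above can be mimicked with a deliberately brittle toy classifier. The Python sketch below is entirely hypothetical (the studies alluded to involve real vision models and real photographs): the stand-in model keys on a texture statistic rather than shape, so a faint fixed-pattern perturbation is enough to flip its answer.

# Hypothetical, deliberately brittle "classifier" that keys on texture, not shape.
import numpy as np

def texture_energy(img):
    """Mean absolute difference between horizontally adjacent pixels."""
    return np.abs(np.diff(img, axis=1)).mean()

def brittle_classifier(img):
    # The decision rides on a texture statistic, mimicking the shortcut
    # learning reported for some vision models.
    return "leopard" if texture_energy(img) > 0.018 else "sofa"

# A smooth gradient stands in for a plain sofa image (pixel values in [0, 1]).
sofa = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
print(brittle_classifier(sofa))  # -> "sofa"

# Add a fixed checkerboard pattern with amplitude 0.02, barely perceptible to a person.
noise = 0.02 * (np.indices((64, 64)).sum(axis=0) % 2)
print(brittle_classifier(sofa + noise))  # -> "leopard": the faint pattern flips the label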

The AI models do not have any level of human-like understanding but are great at mimicry and fooling people into believing they are intelligent by parroting the vast trove of text they have ingested. For this reason, computational linguist Emily Bender called large language models such as ChatGPT and Google's Bard and BERT "stochastic parrots" in a 2021 paper. Her Google co-authors Timnit Gebru and Margaret Mitchell were asked to take their names off the paper. When they refused, they were fired by Google.

This criticism is not directed just at the current large language models but at the entire paradigm of trying to develop artificial intelligence. We don't get good at things just by reading about them; that comes from practice, from seeing what works and what doesn't. This is true even for purely intellectual tasks such as reading and writing. Even for formal disciplines such as maths, one can't get good at it without practising it.

These AI models have no purpose of their own. They therefore can't understand meaning or produce meaningful text or images. Many AI critics have argued that real intelligence requires social situatedness.

Doing physical things in the real world requires dealing with complexity, non-linearity and chaos. It also involves practice in actually doing those things. It is for this reason that progress has been exceedingly slow in robotics. Current robots can only handle fixed repetitive tasks involving identical rigid objects, such as on an assembly line. Even after years of hype about driverless cars and vast amounts of funding for research, fully automated driving still doesn't appear feasible in the near future.

Current AI development, based on detecting statistical correlations using neural networks that are treated as black boxes, promotes a pseudoscience-based myth of creating intelligence at the cost of developing a scientific understanding of how and why these networks work. The field instead emphasises spectacle: creating impressive demos and scoring well in standardised tests based on memorised data.

The only significant commercial use cases of the current versions of AI are advertisements: targeting buyers for social media and video streaming platforms. This does not require the high degree of reliability demanded of other engineering solutions; the systems just need to be good enough. Bad outputs, such as the propagation of fake news and the creation of hate-filled filter bubbles, largely go unpunished.

Perhaps a silver lining in all this is that, given the bleak prospects of AI singularity, the fear of super-intelligent malicious AIs destroying humankind is overblown. However, that is of little comfort for those at the receiving end of AI decision systems. We already have numerous examples of AI decision systems the world over denying people legitimate insurance claims, medical and hospitalisation benefits, and state welfare benefits.

AI systems in the United States have been implicated in sentencing minorities to longer prison terms. There have even been reports of parental rights being withdrawn from minority parents based on spurious statistical correlations, which often boil down to the parents not having enough money to properly feed and take care of their children. And, of course, such systems have played a role in fostering hate speech on social media.

As noted linguist Noam Chomsky wrote in a recent article:

ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation.

Bappa Sinha is a veteran technologist interested in the impact of technology on society and politics.

This article was produced by Globetrotter.

Read the original:

Fears of artificial intelligence overblown - Independent Australia

Posted in Artificial Super Intelligence | Comments Off on Fears of artificial intelligence overblown – Independent Australia

How An AI Asked To Produce Paperclips Could End Up Wiping Out … – IFLScience

The potential and possible downsides of artificial intelligence (AI) and artificial general intelligence (AGI) have been discussed a lot lately, largely due to advances in large language models such as OpenAI's ChatGPT.

Some in the industry have called for AI research to be paused, or even shut down immediately, citing the possible existential risk for humanity if we sleepwalk into creating a super-intelligence before we have found a way to limit its influence and control its goals.

While you might picture an AI hell-bent on destroying humanity after discovering videos of us shoving around and generally bullying Boston Dynamics robots, one philosopher and leader of the Future of Humanity Institute at Oxford University believes our demise could come from a much simpler AI: one designed to manufacture paperclips.

Nick Bostrom, famous for the simulation hypothesis as well as his work in AI and AI ethics, proposed a scenario in which an advanced AI is given the simple goal of making as many paperclips as it possibly can. While this may seem an innocuous goal (Bostrom chose this example because of how innocent the aim seems), he explains how this non-specific goal could lead to a good old-fashioned skull-crushing AI apocalypse.

"The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off," he explained to HuffPost in 2014. "Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."

The example given is meant to show how a trivial goal could lead to unintended consequences, but Bostrom says it extends to all AI given goals without proper controls on its actions, adding "the point is its actions would pay no heed to human welfare".

This is on the dramatic end of the spectrum, but another possibility proposed by Bostrom is that we go out the way of the horse.

"Horses were initially complemented by carriages and ploughs, which greatly increased the horse's productivity. Later, horses were substituted for by automobiles and tractors," he wrote in his book Superintelligence: Paths, Dangers, Strategies. "When horses became obsolete as a source of labor, many were sold off to meatpackers to be processed into dog food, bone meal, leather, and glue. In the United States, there were about 26 million horses in 1915. By the early 1950s, 2 million remained."

One prescient thought from Bostrom way back in 2003 was around how AI could go wrong by trying to serve specific groups, say a paperclip manufacturer or any "owner" of the AI, rather than humanity in general.

"The risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy. One way in which this could happen is that the creators of the superintelligence decide to build it so that it serves only this select group of humans, rather than humanity in general," he wrote on his website. "Another way for it to happen is that a well-meaning team of programmers make a big mistake in designing its goal system."

"This could result, to return to the earlier example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities. More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence, because we might get it."

Continue reading here:

How An AI Asked To Produce Paperclips Could End Up Wiping Out ... - IFLScience

Posted in Artificial Super Intelligence | Comments Off on How An AI Asked To Produce Paperclips Could End Up Wiping Out … – IFLScience

Control over AI uncertain as it becomes more human-like: Expert – Anadolu Agency | English

ANKARA

Debates are raging over whether artificial intelligence, which has entered many people's lives through video games and is governed by human-generated algorithms, can be controlled in the future.

Beyond questions of ethical standards, it is unknown whether artificial intelligence systems that make decisions on people's behalf may pose a direct threat.

In everyday life, people are only using limited, weak artificial intelligence: chatbots, driverless vehicles and digital assistants that work with voice commands. It is debatable whether algorithms have progressed to the level of superintelligence and whether they will go beyond emulating humans in the future.

According to some experts, the eventual rise of AI above human intelligence paints a positive picture for humanity; according to others, it is the beginning of a disaster.

Wilhelm Bielert, chief digital officer and vice president at Canada-based industrial equipment manufacturer Premier Tech, told Anadolu that the biggest unknown about artificial intelligence is artificial superintelligence, a form that would exceed human intelligence and that remains largely speculative among experts studying AI.

He said that while humans build and program the algorithms today, whether artificial intelligence could one day command itself and act like a living entity remains an open question. Given the possible risks and rewards, Bielert highlighted the importance of society approaching AI development in a responsible and ethical manner.

Prof. Ahmet Ulvi Turkbag, a lecturer at Istanbul Medipol University's Faculty of Law, argues that one day, when computer technology reaches the level of superintelligence, it may want to redesign the world from top to bottom.

"The reason why it is called a singularity is that there is no example of such a thing until today. It has never happened before. You do not have a section to make an analogy to be taken as an example in any way in history because there is no such thing. It's called a singularity, and everyone is afraid of this singularity," he said.

Vincent C. Müller, professor of Artificial Intelligence Ethics and Philosophy at the University of Erlangen-Nuremberg, told Anadolu it is uncertain whether artificial intelligence will be kept under control, given that it has the capacity to make its own decisions.

"The control depends on what you want from it. Imagine that you have a factory with workers. You can ask yourself: are these people under my control? Now you stand behind a worker and tell the worker, 'Look, now you take the screw, you put it in there and you take the next screw,' and so this person is under your control," he said.

Artificial intelligence and the next generation

According to Bielert, artificial intelligence will have a complicated and multidimensional impact on society and future generations.

He noted that it is vital that society address potential repercussions proactively and guarantee that AI is created and utilized responsibly and ethically.

"Nowadays, if you look at how teenagers and younger children live, they live on screens," he said.

He said that artificial intelligence, which has evolved with technology, has profoundly affected the lives of young people and children.

Read this article:

Control over AI uncertain as it becomes more human-like: Expert - Anadolu Agency | English

Posted in Artificial Super Intelligence | Comments Off on Control over AI uncertain as it becomes more human-like: Expert – Anadolu Agency | English

35 Ways Real People Are Using A.I. Right Now – The New York Times

The public release of ChatGPT last fall kicked off a wave of interest in artificial intelligence. A.I. models have since snaked their way into many people's everyday lives. Despite their flaws, ChatGPT and other A.I. tools are helping people to save time at work, to code without knowing how to code, to make daily life easier or just to have fun.

It goes beyond everyday fiddling: In the last few years, companies and scholars have started to use A.I. to supercharge work they could never have imagined, designing new molecules with the help of an algorithm or building alien-like spaceship parts.

Here's how 35 real people are using A.I. for work, life, play and procrastination.

People are using A.I. to …

Plan gardens.

John Pritzlaff, Gardener

Mr. Pritzlaff is building a permaculture garden in his backyard in Phoenix, where he uses drought-resistant trees to give shade to other species.

"I do these ultra-high-density planting arrangements," he said. "And I've been employing ChatGPT to give me inspiration on species that wouldn't have otherwise occurred to me, and for choosing the site for each tree: the best part of the yard with regard to the sun at different times of the year."

Taking into account his geographical location, it suggested, for example, that he might use a moringa tree to provide shade for a star apple.

Plan workouts.

Louis Maddox, Data scientist

Mr. Maddox finds ChatGPT the perfect companion for his workouts. For example, one day he pasted in his workout notes and said:

Give a concisely formatted summary of this workout and the muscle groups activated. Note any muscle groups that were missed and suggest how to cover them with extras/replacements.

After summarizing the notes, ChatGPT said:

Missed Muscle Groups & Suggestions:
Quadriceps: Add leg press, squats or lunges.
Calves: Add standing or seated calf raises.
Glute Activation: Consider adding glute bridges or hip thrusts for better glute focus.

From there he asked ChatGPT what to prioritize if he didnt have much time for the gym, and to sketch out roughly how long it might all take.

"It's not perfect," he says, "but it gets the job done." Mostly, he says, ChatGPT helps him get going under time constraints, and not let the busywork become an excuse not to do it.

Plan meals.

Kelsey Szemborski, Stay-at-home mother of three

Ms. Szemborski is a busy mom who is always on the lookout for something that will "make my life easier." So when she found ChatGPT, she tried asking it for a gluten-free meal plan. And she added:

It needs to be both budget-friendly and also kid-friendly for kids who are picky eaters and we need to avoid peanuts. Please include a grocery list. Thank you!

And ChatGPT obliged. Here's the first day of a week's meals:

Breakfast: Gluten-free oatmeal with banana and honey
Snack: Fresh fruit (apples, bananas, oranges)
Lunch: Grilled cheese sandwich on gluten-free bread with tomato soup
Snack: Gluten-free crackers with hummus
Dinner: Slow-cooked beef stew with gluten-free biscuits

It completely eliminated my normal meal-planning process that involved searching for recipes, trying to think of meal ideas, configuring a list of all of those ideas, and then making a list of the ingredients I need, too.

Make a gift.

Matt Strain, Technology and innovation consultant

Mr. Strain used ChatGPT to create a custom book of cocktails based on the tenets of traditional Chinese medicine, written in the style of the J. Peterman catalog. He took the idea further the next day, using DALL-E to generate images of the cocktails for the final book, which he gave to his girlfriend for Valentine's Day.

An A.I.-generated image of the Golden Elixir cocktail. Credit: DALL-E, via Matt Strain

Design parts for spaceships.

Ryan McClelland, NASA research engineer

Mr. McClelland's job is to design mission hardware that's both light and strong. It's a job that has always required a lot of trial and error.

But where a human might make a couple of iterations in a week, the commercial A.I. tool he uses can go through 30 or 40 ideas in an hour. It's also spitting back ideas that no human would come up with.

The A.I.'s designs are stronger and lighter than human-designed parts, and they would be very difficult to model with the traditional engineering tools that NASA uses. Credit: NASA/Henry Dennis

"The resulting design is a third of the mass; it's stiffer, stronger and lighter," he said. "It comes up with things that, not only we wouldn't think of, but we wouldn't be able to model even if we did think of it."

Sometimes the A.I. errs in ways no human would: It might fill in a hole the part needs to attach to the rest of the craft.

"It's like collaborating with an alien," he said.

Organize a messy computer desktop.

Alex Cai, College sophomore

I had a lot of unsorted notes lying around, and I wanted to get them sorted into my file system so I can find them more easily in the future. I basically just gave ChatGPT a directory, a list of all my folder names, and the names of all my files. And it gave me a list of which notes should go into which folders!
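
A sketch of what that prompt-building step might look like (hypothetical paths and wording; the actual setup is not described beyond the quote above): gather the folder and file names, pack them into a single prompt, and let the chat model propose a destination for each note.

# Hypothetical Python sketch: build a "which folder does each note belong in?" prompt.
from pathlib import Path

notes_dir = Path("~/notes").expanduser()  # assumed location of the unsorted notes

folders = sorted(p.name for p in notes_dir.iterdir() if p.is_dir())
files = sorted(p.name for p in notes_dir.iterdir() if p.is_file())

prompt = (
    "Here are my folders:\n- " + "\n- ".join(folders)
    + "\n\nHere are my unsorted notes:\n- " + "\n- ".join(files)
    + "\n\nFor each note, tell me which folder it should go into."
)
print(prompt)  # paste into ChatGPT (or send through an API client) and move the files by hand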

Write a wedding speech.

Jonathan Wegener, Occasional wedding officiant

Mr. Wegener and his girlfriend were officiating a friend's wedding in December, but he procrastinated.

"A few hours before, I said, 'Can GPT-3 write this officiant speech?'" he recalled. "The first version was generic, full of platitudes. Then I steered it."

Adam is a great lover of plants

The speech came back with these beautiful metaphors. It nailed it. It was just missing one important part.

Can you add that thing about in sickness and in health?

Write an email.

Nicholas Wirth, Systems administrator

Mr. Wirth uses ChatGPT to simplify tech jargon when he emails his bosses: My organization specifically pays me to keep the computers internet online, and my own literacy is limited. I work with C-level executives, and their time is not to be wasted.

He also gets it to generate first drafts of long emails. He might say:

I need a midsized summary email written pertaining to data not being given to us in time.

He also asks for a bullet-point list of the concerns that have to be addressed in the email.

And ChatGPT starts a reply:

Subject: Data not received in time - Phone and internet provider information

Hello [Name],

I want to bring to your attention an issue we are facing with the data that was supposed to be provided to us by [Date.] As of now, we have not received the following information that is critical for our project

Get a first read.

Charles Becker, Entrepreneurship professor

So I'll have a paragraph I might be putting into a test for a student, or instructions. I say:

Where might people have trouble with this? What's unclear about this? What's clear about this?

I generate a lot of writing both for my work and for my hobbies, and a lot of the time I run out of people who are excited to give me first-pass edits.

Play devil's advocate.

Paul De Salvo, Data engineer

"I use ChatGPT every day for work," he said. "It feels like I've hired an intern."

Part of Mr. De Salvo's job is convincing his bosses that they should replace certain tools. That means pitching them on why the old tool won't cut it anymore.

"I use ChatGPT to simulate arguments in favor of keeping the existing tool," he said, "so that I can anticipate potential counterarguments."

Build a clock that gives you a new poem every minute.

Matt Webb, Consultant and blogger

"Yes, programmatic A.I. is useful," he said. "But more than that, it's enormous fun."

Organize research for a thesis.

Anicca Harriot, Ph.D. student

Anicca Harriot has been powering through her Ph.D. thesis in biology with the help of Scholarcy and Scite, among other A.I. tools that find, aggregate and summarize relevant papers.

Collectively, they take weeks off of the writing process.

Skim dozens of academic articles.

Pablo Peña Rodríguez, Private equity analytics director

Mr. Rodríguez works for a private equity fund that invests in soccer players. And that means reading a lot.

"We use our own data sets and methodology, but I always want to have a solid understanding of the academic literature that has been published," he said.

Instead of picking through Google Scholar, he now uses an A.I. tool called Elicit. It lets him ask questions of the paper itself. It helps him find out, without having to read the whole thing, whether the paper touches on the question he's asking.

It doesn't immediately make me smart, but it does allow me to have a very quick sense of which papers I should pay attention to when approaching a new question.

Cope with ADHD

Rhiannon Payne, Product marketer and author

"With ADHD, getting started and getting an outline together is the hardest part," Ms. Payne said. "Once that's done, it's a lot easier to let the work flow."

She writes content to run marketing tests. To get going, she feeds GPT a few blog posts she's written on the subject, other materials she's gathered and the customer profile.

"Describing the audience I'm speaking to, that context is super important to actually get anything usable out of the tool," she said. What comes back is a starter framework she can then change and build out.

and dyslexia, too.

Eugene Capon, Tech founder

Imagine yourself as a copywriter that I just hired to proofread documents.

"Because I'm dyslexic, it takes me a really long time to get an article down on paper," Mr. Capon said. "So the hack I've come up with is, I'll dictate my entire article. Then I'll have ChatGPT basically correct my spelling and grammar."

"So something that was taking like a full day to do, I can now do in like an hour and a half."

Sort through an archive of pictures.

Daniel Patt, Software engineer

On From Numbers to Names, a site built by the Google engineer Daniel Patt in his free time, Holocaust survivors and family members can upload photos and scan through half a million pictures to find other pictures of their loved ones. It's a task that otherwise would take a gargantuan number of hours.

"We're really using the A.I. to save time," he said. "Time is of the essence, as survivors are getting older. I can't think of any other way we could achieve what we're doing with the identification and discoveries we're making."

Transcribe a doctor's visit into clinical notes.

Dr. Jeff Gladd, Integrative medicine physician

Dr. Gladd uses Nabla's Copilot to take notes during online medical consultations. It's a Chrome extension that listens into the visit and grabs the necessary details for his charts. Before: Writing up notes after a visit took about 20 percent of consult time. Now: The whole task lasts as long as it takes him to copy and paste the results from Copilot.

Appeal an insurance denial.

Dr. Jeffrey Ryckman, Radiation oncologist

Dr. Ryckman uses ChatGPT to write the notes he needs to send insurers when they've refused to pay for radiation treatment for one of his cancer patients.

"What used to take me around a half-hour to write now takes one minute," he said.

Original post:

35 Ways Real People Are Using A.I. Right Now - The New York Times

Posted in Artificial Super Intelligence | Comments Off on 35 Ways Real People Are Using A.I. Right Now – The New York Times