Category Archives: Artificial Intelligence

Small is the new BIG in artificial intelligence – ET BrandEquity

There are similarities between the cold war era and current times. In the former, there was a belief that alliances with stronger nuclear arsenals would wield greater global influence. Similarly, organizations (and nations) in the current era believe that those controlling the AI narrative will control the global narrative. Moreover, scale was, and is, correlated with superiority; there is a belief that bigger is better.

Global superpowers competed in the cold war over whose nuclear systems were largest (the highest-megaton weapons), while in the current era, large technology incumbents and countries compete over who can build the largest model, with the highest number of parameters. OpenAI's GPT-4 took the global pole position last year, brandishing a model rumored to have over 1.5 trillion parameters. The race is not just about prestige; it is rooted in the assumption that larger models understand and generate human language with greater accuracy and nuance.

Democratization of AI

One of the most compelling arguments for smaller language models lies in their efficiency. Unlike their larger counterparts, these models require significantly less computational power, making them accessible to a broader range of users. This democratization of AI technology could lead to a surge in innovation, as small businesses and individual developers gain the tools to implement sophisticated AI solutions without the prohibitive costs associated with large models. Furthermore, the operational speed and lower energy consumption of small models offer a solution to the growing concerns over the environmental impact of computing at scale.

Large language models' popularity can be attributed to their ability to handle a vast array of tasks. Yet this jack-of-all-trades approach is not always necessary or optimal. Small language models can be fine-tuned for specific applications, providing targeted solutions that can outperform the generalist capabilities of larger models. This specialization can lead to more effective and efficient AI applications, from customer service bots tailored to a company's product line to legal assistance tools trained on a country's legal system.
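
To make the specialization point concrete, below is a minimal sketch of fine-tuning a small pretrained model on a labeled text corpus, assuming the Hugging Face transformers and datasets libraries; the model and dataset names are illustrative stand-ins, not anything the article specifies.

from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# A small model (~66M parameters), orders of magnitude below GPT-4 scale.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Stand-in for a domain-specific corpus (e.g., a company's support tickets).
dataset = load_dataset("imdb")

def tokenize(batch):
    # Convert raw text into fixed-length token IDs the model can consume.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="small-model-finetuned",
                           per_device_train_batch_size=16,
                           num_train_epochs=1),
    # A modest subsample; small models often specialize on little data.
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()

On a single commodity GPU, a run like this finishes in minutes, which is precisely the accessibility argument being made here.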

On-device Deployment

The Environmental Imperative

The environmental impact of AI development is an issue that cannot be ignored. The massive energy requirements of training and running large language models pose a significant challenge in the search for sustainable technology development. Small language models offer a path forward that marries the incredible potential of AI with the urgent need to reduce our carbon footprint. By focusing on models that require less power and fewer resources, the AI community can contribute to a more sustainable future.

As we stand on the cusp of technological breakthroughs, it's important to question the assumption that bigger is always better. The future of AI may very well lie in the nuanced, efficient, and environmentally conscious realm of small language models. These models promise to make AI more accessible, specialized, and integrated into our daily lives, all while aligning with the ethical and environmental standards that our global community increasingly seeks to uphold.

Read more:

Small is the new BIG in artificial intelligence - ET BrandEquity - ETBrandEquity

Ways to think about AGI – Benedict Evans

In 1946, my grandfather, writing as Murray Leinster, published a science fiction story called A Logic Named Joe. Everyone has a computer (a "logic") connected to a global network that does everything from banking to newspapers and video calls. One day, one of these logics, Joe, starts giving helpful answers to any request, anywhere on the network: invent an undetectable poison, say, or suggest the best way to rob a bank. Panic ensues ("Check your censorship circuits!") until they work out what to unplug. (My other grandfather, meanwhile, was using computers to spy on the Germans, and then the Russians.)

For as long as we've thought about computers, we've wondered if they could make the jump from mere machines, shuffling punch-cards and databases, to some kind of artificial intelligence, and wondered what that would mean, and indeed, what we're trying to say with the word "intelligence". There's an old joke that AI is whatever doesn't work yet, because once it works, people say that's not AI - it's just software. Calculators do super-human maths, and databases have super-human memory, but they can't do anything else, and they don't understand what they're doing, any more than a dishwasher understands dishes, or a drill understands holes. A drill is just a machine, and databases are super-human but they're just software. Somehow, people have something different, and so, on some scale, do dogs, chimpanzees, octopuses and many other creatures. AI researchers have come to talk about this as "general intelligence", and hence making it would be "artificial general intelligence" - AGI.

If we really could create something in software that was meaningfully equivalent to human intelligence, it should be obvious that this would be a very big deal. Can we make software that can reason, plan, and understand? At the very least, that would be a huge change in what we could automate, and as my grandfather and a thousand other science fiction writers have pointed out, it might mean a lot more.

Every few decades since 1946, there's been a wave of excitement that something like this might be close, each time followed by disappointment and an AI Winter, as the technology approach of the day slowed down and we realised that we needed an unknown number of unknown further breakthroughs. In 1970 the AI pioneer Marvin Minsky claimed that "in from three to eight years we will have a machine with the general intelligence of an average human being", but each time we thought we had an approach that would produce that, it turned out that it was just more software (or just didn't work).

As we all know, the Large Language Models (LLMs) that took off 18 months ago have driven another such wave. Serious AI scientists who previously thought AGI was probably decades away now suggest that it might be much closer. At the extreme, the so-called "doomers" argue there is a real risk of AGI emerging spontaneously from current research, that this could be a threat to humanity, and that urgent government action is needed. Some of this comes from self-interested companies seeking barriers to competition ("This is very dangerous and we are building it as fast as possible, but don't let anyone else do it"), but plenty of it is sincere.

(I should point out, incidentally, that the doomers' existential-risk concern - that an AGI might want to, and be able to, destroy or control humanity, or treat us as pets - is quite independent of more quotidian concerns about, for example, how governments will use AI for face recognition, or AI bias, or AI deepfakes, and all the other ways that people will abuse AI or just screw up with it, just as they have with every other technology.)

However, for every expert who thinks that AGI might now be close, there's another who doesn't. Some think LLMs might scale all the way to AGI, and others think, again, that we still need an unknown number of unknown further breakthroughs.

More importantly, they would all agree that they don't actually know. This is why I used terms like "might" or "may" - our first stop is an appeal to authority (often considered a logical fallacy, for what that's worth), but the authorities tell us that they don't know, and don't agree.

They don't know, either way, because we don't have a coherent theoretical model of what general intelligence really is, nor why people seem to be better at it than dogs, nor how exactly people or dogs are different to crows or indeed octopuses. Equally, we don't know why LLMs seem to work so well, and we don't know how much they can improve. We know, at a basic and mechanical level, about neurons and tokens, but we don't know why they work. We have many theories for parts of these questions, but we don't know the system. Absent an appeal to religion, we don't know of any reason why AGI cannot be created (it doesn't appear to violate any law of physics), but we don't know how to create it or what it is, except as a concept.

And so, some experts look at the dramatic progress of LLMs and say "perhaps!" and others say "perhaps, but probably not!", and this is fundamentally an intuitive and instinctive assessment, not a scientific one.

Indeed, AGI itself is a thought experiment, or, one could suggest, a placeholder. Hence, we have to be careful of circular definitions, and of defining something into existence, certainty or inevitability.

If we start by defining AGI as something that is in effect a new life form, equal to people in every way (barring some sense of physical form), even down to concepts like awareness, emotions and rights; and then presume that, given access to more compute, it would be far more intelligent (and that there even is a lot more spare compute available on earth); and presume that it could immediately break out of any controls - then that sounds dangerous, but really, you've just begged the question.

As Anselm demonstrated, if you define God as something that exists, then you've proved that God exists, but you won't persuade anyone. Indeed, a lot of AGI conversations sound like the attempts by some theologians and philosophers of the past to deduce the nature of god by reasoning from first principles. The internal logic of your argument might be very strong (it took centuries for philosophers to work out why Anselm's proof was invalid), but you cannot create knowledge like that.

Equally, you can survey lots of AI scientists about how uncertain they feel, and produce a statistically accurate average of the result, but that doesn't of itself create certainty, any more than surveying a statistically accurate sample of theologians would produce certainty as to the nature of god - or, perhaps, than bundling enough sub-prime mortgages together can produce AAA bonds, another attempt to produce certainty by averaging uncertainty. One of the most basic fallacies in predicting tech is to say "people were wrong about X in the past, so they must be wrong about Y now", and the fact that leading AI scientists were wrong before absolutely does not tell us they're wrong now, but it does tell us to hesitate. They can all be wrong at the same time.

Meanwhile, how do you know that's what general intelligence would be like? Isaiah Berlin once suggested that even presuming there is in principle a purpose to the universe, and that it is in principle discoverable, there's no a priori reason why it must be interesting. God might be real, and boring, and not care about us, and we don't know what kind of AGI we would get. It might scale to 100x more intelligent than a person, or it might be much faster but no more intelligent (is intelligence just about speed?). We might produce a general intelligence that's hugely useful but no more clever than a dog, which, after all, does have general intelligence, and, like databases or calculators, a super-human ability (scent). We don't know.

Taking this one step further: as I listened to Mark Zuckerberg talking about Llama 3, it struck me that he talks about general intelligence as something that will arrive in stages, with different modalities a little at a time. Maybe people will point at the general intelligence of Llama 6 or ChatGPT 7 and say "That's not AGI, it's just software!" We created the term AGI because AI came just to mean software, and perhaps AGI will be the same, and we'll need to invent another term.

This fundamental uncertainty, even at the level of what we're talking about, is perhaps why all conversations about AGI seem to turn to analogies. If you can compare this to nuclear fission, then you know what to expect, and you know what to do. But this isn't fission, or a bioweapon, or a meteorite. This is software, that might or might not turn into AGI, that might or might not have certain characteristics, some of which might be bad, and we don't know. And while a giant meteorite hitting the earth could only be bad, software and automation are tools, and over the last 200 years automation has sometimes been bad for humanity, but mostly it's been a very good thing that we should want much more of.

Hence, I've already used theology as an analogy, but my preferred analogy is the Apollo Program. We had a theory of gravity, and a theory of the engineering of rockets. We knew why rockets didn't explode, and how to model the pressures in the combustion chamber, and what would happen if we made them 25% bigger. We knew why they went up, and how far they needed to go. You could have given the specifications for the Saturn rocket to Isaac Newton and he could have done the maths, at least in principle: this much weight, this much thrust, this much fuel - will it get there? We have no equivalents here. We don't know why LLMs work, how big they can get, or how far they have to go. And yet we keep making them bigger, and they do seem to be getting close. Will they get there? Maybe, yes!

On this theme, some people suggest that we are in the "empirical stage" of AI or AGI: we are building things and making observations without knowing why they work, and the theory can come later, a little as Galileo came before Newton (there's an old English joke about a Frenchman who says "that's all very well in practice, but does it work in theory?"). Yet while we can, empirically, see the rocket going up, we don't know how far away the moon is. We can't plot people and ChatGPT on a chart and draw a line to say when one will reach the other, even just extrapolating the current rate of growth.

All analogies have flaws, and the flaw in my analogy, of course, is that if the Apollo program went wrong, the downside was not, even theoretically, the end of humanity. A little before my grandfather, here's another magazine writer on unknown risks:

I was reading in the paper the other day about those birds who are trying to split the atom, the nub being that they haven't the foggiest as to what will happen if they do. It may be all right. On the other hand, it may not be all right. And pretty silly a chap would feel, no doubt, if, having split the atom, he suddenly found the house going up in smoke and himself torn limb from limb.

Right Ho, Jeeves, P. G. Wodehouse, 1934

What, then, is your preferred attitude to risks that are real but unknown? Which thought experiment do you prefer? We can return to half-forgotten undergraduate philosophy (Pascal's Wager! Anselm's Proof!), but if you can't know, do you worry, or shrug? How do we think about other risks? Meteorites are a poor analogy for AGI because we know they're real, we know they could destroy mankind, and they have no benefits at all (unless they're very, very small). And yet, we're not really looking for them.

Presume, though, that you decide the doomers are right: what can you do? The technology is in principle public. Open-source models are proliferating. For now, LLMs need a lot of expensive chips (Nvidia sold $47.5bn in the last 12 months and can't meet demand), but on a decades view the models will get more efficient and the chips will be everywhere. In the end, you can't ban mathematics. On a scale of decades, it will happen anyway. If you must use analogies to nuclear fission, imagine we discovered a way that anyone could build a bomb in their garage with household materials - good luck preventing that. (A doomer might respond that this answers the Fermi paradox: at a certain point every civilisation creates AGI, and it turns them into paperclips.)

By default, though, this will follow all the other waves of AI, and become just more software and more automation. Automation has always produced frictional pain, going back to the Luddites, and the UK's Post Office scandal reminds us that you don't need AGI for software to ruin people's lives. LLMs will produce more pain and more scandals, but life will go on. At least, that's the answer I prefer myself.

Here is the original post:

Ways to think about AGI Benedict Evans - Benedict Evans

3 Stocks Poised to Profit from the Rise of Artificial Intelligence – InvestorPlace

While artificial intelligence may be all the rage, the usual suspects in the space have largely flourished handsomely, which strengthens the case for underappreciated AI stocks to buy.

Rather than simply focusing on technology firms with a direct link to digital intelligence, it's useful to consider companies - whether they're tech enterprises or not - that are using AI in their businesses. Yes, the semiconductor space is exciting, but AI is so much more than that.

These less-appreciated ideas just might surprise Wall Street. With that, below are intriguing AI stocks to buy that don't always get the spotlight.

At first glance, agricultural equipment specialist Deere (NYSE:DE) doesn't seem a particularly relevant idea for AI stocks to buy. Technically, you'd be right. After all, this is an enterprise that has roots going back to 1837. That said, an old dog can still learn new tricks.

With so much talk about autonomous mobility, Deere took a page out of that playbook and has invested in an automated tractor. Featuring 360-degree cameras, a high-speed processor, and a neural network that sorts through images and determines which objects are or are not safe to drive over, Deere's invention is the perfect marriage between a traditional industry and innovative methodologies.

Perhaps most importantly, Deere is meeting a critical need. Unsurprisingly, fewer young people are interested in an agriculture-oriented career. Therefore, these automated tractors are entering the market at the right time.

Lastly, DE trades at a modest price/earnings-to-growth (PEG) ratio of 0.54X, lower than the sector median of 0.82X. It's a little bit out there, but Deere is one of the underappreciated AI stocks to buy.
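
For readers unfamiliar with the metric, PEG is simply the price/earnings ratio divided by the expected annual earnings growth rate. Here is a quick Python illustration; the P/E and growth inputs are hypothetical values chosen only to reproduce the ratios quoted above, not Deere's actual figures.

def peg_ratio(pe: float, annual_eps_growth_pct: float) -> float:
    # PEG = (P/E) / expected annual EPS growth (expressed in percent).
    return pe / annual_eps_growth_pct

print(peg_ratio(10.8, 20.0))  # 0.54 -- the quoted figure for DE
print(peg_ratio(16.4, 20.0))  # 0.82 -- the quoted sector median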

While it's just my opinion, grocery store giant Kroger (NYSE:KR) sells itself. No, the grocery industry is hardly the most exciting arena available. At the same time, people have to eat. Further, the company benefits from the trade-down effect: if economic conditions become even more challenging, people will eschew eating out for cooking in. Overall, that would be a huge plus for KR stock.

With that baseline bullish thesis out of the way, Kroger is also an enticing idea among hidden-gem AI stocks to buy. Earlier this year, the company announced that it will use AI technology for content management and product descriptions for marketplace sellers. Last year, Kroger's top executive mentioned AI eight times during an earnings call.

Fundamentally, Kroger should benefit from revenue predictability. While the consensus sales target calls for a 1% decline in the current fiscal year, the high-side estimate is aiming for $152.74 billion. Last year, the print came out to just over $150 billion. With shares trading at only 0.27X trailing-year sales, KR could be a steal.

Billed as a platform for live online learning, Nerdy (NYSE:NRDY) represents a legitimate tech play for AI stocks to buy. Indeed, its corporate profile states that its purpose-built proprietary platform leverages myriad innovations including AI to connect students, users and parents/guardians to tutors, instructors and subject matter experts.

Fundamentally, Nerdy should benefit from two key factors. Number one, the Covid-19 crisis disrupted education, particularly for young students. That could have a cascading effect down the line, making it all the more vital to play catch-up. Nerdy can help in that department.

Number two, U.S. students have continued to fall behind in international tests. It's imperative for social growth and stability that students get caught up, especially in the digital age. Therefore, NRDY is especially attractive.

Finally, analysts anticipate fiscal 2024 revenue to hit $237.81 million, up 23% from last year's tally of $193.4 million. And in fiscal 2025, experts project sales to rise to $293.17 million, up more than 23% from forecast 2024 sales. Therefore, it's one of the top underappreciated AI stocks to buy.
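
As a quick sanity check on those growth figures (revenue in millions of dollars, taken straight from the estimates above):

def yoy_growth_pct(current: float, prior: float) -> float:
    # Year-over-year revenue growth, in percent.
    return (current - prior) / prior * 100

print(round(yoy_growth_pct(237.81, 193.40), 1))  # 23.0 -- fiscal 2024 vs. 2023
print(round(yoy_growth_pct(293.17, 237.81), 1))  # 23.3 -- fiscal 2025 vs. 2024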

On the date of publication, Josh Enomoto did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

A former senior business analyst for Sony Electronics, Josh Enomoto has helped broker major contracts with Fortune Global 500 companies. Over the past several years, he has delivered unique, critical insights for the investment markets, as well as various other industries including legal, construction management, and healthcare. Tweet him at @EnomotoMedia.

Go here to see the original:

3 Stocks Poised to Profit from the Rise of Artificial Intelligence - InvestorPlace

Warren Buffett Discusses Apple, Cash, Insurance, Artificial Intelligence (AI), and More at Berkshire Hathaway’s Annual … – The Motley Fool

Berkshire is bolstering its cash reserves and passing on riskier bets.

Tens of thousands of Berkshire Hathaway (BRK.A, BRK.B) investors flocked to Omaha this past week for the annual tradition of listening to Warren Buffett muse over the conglomerate's business, financial markets, and over 93 years of wisdom on life. But this year's meeting felt different.

Longtime vice chairman Charlie Munger passed away in late November. His wry sense of humor, witty aphorisms, and entertaining rapport with Buffett were missed dearly. But there were other noticeable differences between this meeting and those of past years -- namely, a sense of caution.

Let's dive into the key takeaways from the meeting and how it could influence what Berkshire does next.

The elephant in the room was Berkshire's decision to trim its stake in Apple (AAPL) during the first quarter. Berkshire sold over 116 million shares of Apple in Q1, reducing its position by around 12.9%. It marks the company's largest sale of Apple stock since it began purchasing shares in 2016 -- far larger than the 10 million or so shares Berkshire sold in Q4.

Buffett addressed the sale with the first answer in the Q&A session: "Unless something dramatic happens that really changes capital allocation and strategy, we will have Apple as our largest investment. But I don't mind at all, under current conditions, building the cash position. I think when I look at the alternatives of what's available in equity markets, and I look at the composition of what's going on in the world, we find it quite attractive."

In addition to valuation concerns, market conditions, and a desire to build up the cash position, Buffett also mentioned the federal tax rate on capital gains, which he said is 21%, compared to 35% not long ago and even as high as 52% in the past. Fears that the tax rate could go up, given fiscal policies and the need to cut the federal deficit, are another reason why Buffett and his team decided to book gains on Apple stock now instead of risking a potentially higher tax rate in the future.

Buffett has long spoken about the faith Berkshire shareholders entrust in him and his team to safeguard and grow their wealth. Berkshire is known for being fairly risk-averse, gravitating toward businesses with stable cash flows like insurance, railroads, utilities, and top brands like Coca-Cola (KO 0.29%), American Express (AXP -0.74%), and Apple. Another asset Berkshire loves is cash.

Berkshire's cash and U.S. treasury position reached $182.3 billion at the end of the first quarter, up from $163.3 billion at the end of 2023. Buffett said he expects the cash position to exceed $200 billion by the end of the second quarter.

You may think Berkshire is stockpiling cash because of higher interest rates and a better return on risk-free assets. But shortly before the lunch break, Buffett said that Berkshire would still be heavily in cash even if interest rates were 1% because Berkshire only swings at pitches it likes, and it won't swing at a pitch simply because it hasn't in a while. "It's just that things aren't attractive, and there are certain ways that could change, and we will see if they do," said Buffett.

The commentary is a potential sign that Berkshire is getting even more defensive than usual.

Berkshire's underlying business is doing exceptionally well. Berkshire's Q1 operating income skyrocketed 39.1% compared to the same period of 2023 -- driven by larger gains from the insurance businesses and Berkshire Hathaway Energy (which had an abnormally weak Q1 last year). However, Buffett cautioned that it would be unwise to simply multiply insurance income by four for the full year, considering it was a particularly strong quarter and Q3 tends to be the quarter with the highest risk of claims.

A great deal of the Q&A session was spent discussing the future of insurance and utilities based on new regulations; price increases due to climate change and higher risks of natural disasters; and the potential impact of autonomous driving reducing accidents and driving down the cost of insurance.

Ajit Jain, Berkshire's chairman of insurance operations, answered a question on cybersecurity insurance, saying the market is large and profitable and will probably get bigger but just isn't worth the risk until there are more data points. There was another question on rising insurance rates in Florida, which Berkshire attributed to climate change, increased risks of massive losses, and a difficult regulatory environment, making it harder to do business in Florida.

An advantage is that Berkshire prices a lot of its contracts in one-year intervals, so it can adjust prices if risks begin to ramp and outweigh rewards. Or as Jain put it, "Climate change, much like inflation, done right, can be a friend of the risk bearer."

As for how autonomous driving affects insurance, Buffett said the problem is far from solved, that automakers have been considering insurance for a while, and that insurance can be "a very tempting business when someone hands you money, and you hand them a little piece of paper." In other words, it isn't as easy as it seems. Accident rates have come down, and it would benefit society if autonomous driving allowed them to drop even further, but insurance will still be necessary.

Buffett's response to a question on the potential of artificial intelligence (AI) was similar to his response from the 2023 annual meeting. He compared it to the atomic bomb and called it a genie in a bottle in that it has immense power, but we may regret we ever let it out.

He discussed a personal experience in which he saw an AI-generated video of himself so lifelike that neither his kids nor his wife would be able to tell whether it really was him or his voice, except for the fact that he would never say the things in the video. "If I was interested in investing in scamming, it's going to be the growth industry of all time," he said.

Ultimately, Buffett stayed true to his longtime practice of keeping within his circle of competence, saying he doesn't know enough about AI to predict its future. "It has enormous potential for good and enormous potential for harm, and I just don't know how that plays out."

Despite the cautious sentiment, Buffett's optimism about the American economy and the stock market's ability to compound wealth over time was abundantly clear.

Oftentimes, folks pay too much attention to Berkshire's cash position as a barometer of its views on the stock market. While Berkshire keeping a large cash position is certainly defensive, it's worth understanding the context of its different business units and the history of a particular position like Apple.

Berkshire probably never set out to have Apple make up 40% of its public equity holdings. Taking some risk off the table, especially given the lower tax rate, makes sense for Berkshire, especially if it believes it will need more reserve cash to handle changing dynamics in its insurance business.

In terms of life advice, the 93-year-old Buffett said that it's a good idea to think of what you want your obituary to read and start selecting the education paths, social paths, spouse, and friends to get you where you want to go. "The opportunities in this country are basically limitless," said Buffett.

We can all learn a lot from Buffett's steadfast understanding of Berkshire shareholders' needs and the hard work that goes into selecting a few investments and passing on countless opportunities.

In investing, it's important to align your risk tolerance, investment objectives, and holdings to achieve your financial goals and stay even-keeled no matter what the market is doing. In today's fast-paced world riddled with rapid change, staying true to your principles is more vital than ever.

Read more from the original source:

Warren Buffett Discusses Apple, Cash, Insurance, Artificial Intelligence (AI), and More at Berkshire Hathaway's Annual ... - The Motley Fool

$1 billion artificial intelligence company Replit abandons San Francisco for the Bay Area Peninsula – KGO-TV

Read more:

$1 billion artificial intelligence company Replit abandons San Francisco for the Bay Area Peninsula - KGO-TV

Air Force making gains in artificial intelligence with AI-piloted F-16 flight – Washington Examiner

Read more from the original source:

Air Force making gains in artificial intelligence with AI-piloted F-16 flight - Washington Examiner
