In 2016, an artificial intelligence program called AlphaGo, built by Google’s DeepMind AI lab, made history by defeating a champion Go player.

DeepMind cofounder and CEO Demis Hassabis now says that his engineers are drawing on AlphaGo’s principles to create Gemini, an AI system intended to be more powerful than OpenAI’s ChatGPT.

DeepMind’s Gemini is a large language model that works with text, similar in nature to GPT-4, the model that powers ChatGPT. Hassabis says his team will combine that technology with techniques used in AlphaGo, with the aim of giving the system new capabilities such as planning and problem solving.

“At a high level, you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of large models,” Hassabis explains. “We also have some new innovations that are going to be pretty interesting.” Gemini was first hinted at last month at Google’s developer conference, where the company introduced a slew of new AI initiatives.

AlphaGo was based on a technique developed by DeepMind called reinforcement learning, in which software learns to tackle difficult problems that require decision-making, such as Go or video games, by making repeated attempts and receiving feedback on its performance. It also used a method known as tree search to explore and remember possible moves on the board. The next big step for language models could be to perform more tasks on the internet and on computers.
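For readers unfamiliar with the term, the sketch below shows the core loop of reinforcement learning in miniature. It is a toy illustration only, not DeepMind’s code: AlphaGo combined deep neural networks with Monte Carlo tree search over board positions, while this example uses simple tabular Q-learning on an invented “walk to the goal” game. The point is the shape of the process the paragraph describes: repeated attempts, a feedback signal, and gradually improving decisions.

```python
# Toy reinforcement learning: an agent improves at a tiny game purely by
# repeated tries and reward feedback. (Illustrative only; far simpler than AlphaGo.)
import random

N_STATES = 5          # positions 0..4; position 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

for episode in range(500):              # repeated attempts at the game
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise pick the action that looks best so far.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0   # feedback on performance
        # Nudge the value estimate toward the reward plus the best future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the learned policy steps right toward the goal from every position.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```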

Gemini is still in development, a process Hassabis says will take several months and could cost tens or even hundreds of millions of dollars. OpenAI CEO Sam Altman said in April that creating GPT-4 cost more than $100 million.

Trying to Catch Up
When completed, Gemini could play a significant role in Google’s response to the competitive threat posed by ChatGPT and other generative AI technology. The company pioneered many of the techniques behind the current flood of new AI tools, but it chose to develop and launch products based on them cautiously.

Since ChatGPT’s launch, Google has rushed out its own chatbot, Bard, and woven generative AI into its search engine and many other products. To accelerate its AI research, the company merged Hassabis’ unit, DeepMind, with Google’s principal AI lab, Brain, in April to form Google DeepMind. Hassabis says the new team brings together two powerhouses that have been central to recent AI progress. “If you look at where we are in AI, I would argue that 80 or 90 percent of the innovations come from one or the other,” he says. “Both organizations have done some brilliant things over the last decade.”

Hassabis is no stranger to AI gold rushes that engulf tech giants, though the last time around he was the one who set off the frenzy.

Google acquired DeepMind in 2014 after the startup demonstrated striking results from software that used reinforcement learning to master simple video games. Over the next several years, DeepMind showed how the technique could achieve things that once seemed uniquely human, often with superhuman skill. When AlphaGo defeated Go champion Lee Sedol in 2016, many AI researchers were stunned, because they had predicted it would take decades for machines to master a game of such complexity.

Innovative Thinking
To train a large language model such as OpenAI’s GPT-4, massive amounts of curated text from books, websites, and other sources are fed into machine learning software known as a transformer. Using the patterns in that training data, it learns to predict the letters and words that should follow a piece of text, a simple mechanism that proves remarkably effective at answering questions and generating text or code.
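As a rough illustration of that training objective, the toy script below predicts the next word from counts of which words follow which in a small, made-up snippet of text. A real transformer learns vastly richer statistical patterns over billions of tokens, but the underlying task, predicting what comes next and then generating one token at a time, is the same idea.

```python
# Toy next-word prediction (not GPT-4): learn patterns from training text,
# then generate a continuation one predicted word at a time.
from collections import Counter, defaultdict

training_text = (
    "the model reads text and learns to predict the next word "
    "the model then generates text by predicting one word at a time"
)
words = training_text.split()

# Count which word tends to follow each word in the training data.
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent next word seen after `word` during training."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate a short continuation, one predicted word at a time.
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```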

A further essential step in building ChatGPT and similarly capable language models is using reinforcement learning based on human feedback on an AI model’s answers to fine-tune its performance. DeepMind’s deep experience with reinforcement learning could allow its researchers to give Gemini novel capabilities.
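The sketch below is a deliberately simplified picture of that fine-tuning step, with every dataset, feature, and answer invented for illustration. Real systems train a large neural reward model on human preference rankings and then update the language model with a reinforcement-learning algorithm such as PPO; here the “reward model” is a two-weight linear score trained on a few preference pairs, and the final step simply picks the candidate answer the reward model likes best.

```python
# Toy reinforcement learning from human feedback (illustrative only):
# 1) collect human preferences, 2) fit a reward model, 3) use it to steer answers.
import math

# 1) Human preference data: each pair is (preferred_answer, rejected_answer).
preferences = [
    ("Sure, here is a clear step-by-step answer.", "idk"),
    ("Happy to help. The result is 42 because ...", "42"),
    ("Here is a short, polite explanation.", "figure it out yourself"),
]

def features(answer):
    # Two invented features: answer length and a crude count of "helpful" words.
    helpful_words = sum(w in answer.lower() for w in ("here", "help", "because"))
    return [len(answer.split()) / 10.0, float(helpful_words)]

weights = [0.0, 0.0]   # the "reward model" is just a linear score over the features

def reward(answer):
    return sum(w * f for w, f in zip(weights, features(answer)))

# 2) Train the reward model so preferred answers score higher than rejected ones,
#    using a pairwise logistic objective updated by plain gradient ascent.
for _ in range(200):
    for good, bad in preferences:
        margin = reward(good) - reward(bad)
        grad_scale = 1.0 - 1.0 / (1.0 + math.exp(-margin))  # 1 - sigmoid(margin)
        for i, (fg, fb) in enumerate(zip(features(good), features(bad))):
            weights[i] += 0.1 * grad_scale * (fg - fb)

# 3) Use the reward model to pick among candidate answers, standing in for the
#    RL update that would normally adjust the language model's own weights.
candidates = ["dunno", "Here is the answer, because the data shows ...", "maybe"]
print(max(candidates, key=reward))
```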

Hassabis and his colleagues may also try to improve large language model technology by incorporating ideas from other areas of AI. DeepMind’s researchers work in fields ranging from robotics to neuroscience, and the company has unveiled an algorithm capable of learning to perform manipulation tasks with a wide variety of robot arms.

Learning from physical experience of the world, as people and animals do, is widely considered important to making AI more capable. Some AI researchers see the fact that language models learn about the world indirectly, through text, as a major limitation.

Uncertain Future
Hassabis is charged with accelerating Google’s AI efforts while also managing unknown and potentially grave risks. The recent, rapid advances in language models have left many AI experts, including some of those building the algorithms, worried that the technology could be misused or become difficult to control. Some tech insiders have even called for a pause on the development of more powerful algorithms to avoid creating something dangerous.

According to Hassabis, the immense potential benefits of AI, such as scientific discoveries in areas like health and climate, make it critical that humanity does not stop developing the technology. He also argues that mandating a pause is impractical, because it would be nearly impossible to enforce. “If done correctly, AI will be the most beneficial technology for humanity ever,” he says. “We have to go after those things boldly and bravely.”

That doesn’t mean Hassabis believes AI development should be rushed. DeepMind was exploring the potential perils of AI long before ChatGPT appeared, and Shane Legg, one of the company’s cofounders, has led an internal “AI safety” group for years. Last month, Hassabis joined other high-profile AI figures in signing a statement warning that AI might someday pose a risk comparable to nuclear war or a pandemic.

One of the most difficult tasks right now, Hassabis says, is determining what the risks of more capable AI are likely to be. “I think more research by the field needs to be done—very urgently—on things like evaluation tests,” he says, to establish how capable and controllable new AI models are. To that end, he says, DeepMind may make its systems more accessible to outside scientists. “I would love to see academia have early access to these frontier models,” he says, a sentiment that, if followed through on, could help address concerns that experts outside big companies are being shut out of the newest AI research.

Should you be worried? Hassabis says no one can be certain that AI will become a major threat. But he is convinced that if progress continues at its current pace, there won’t be much time to develop safeguards. “I can see the kinds of things we’re building into the Gemini series right now, and there’s no reason to think they won’t work,” he says.
