The idea that AI presents a “existential risk” to humanity was fiercely debated last week among experts in the field in response to an open letter signed by Elon Musk and other influential business figures.

They demanded that laboratories impose a six-month ban on creating any technology that is more potent than GPT-4.

I concur with the letter’s detractors who claim that worrying about potential risks takes our attention away from the very real harms AI is already creating in the here and now. Decisions about people’s lives are made using biased methods, which can keep them in poverty or result in unjustified arrests. For just $2 per day, human content moderators must sift through mounds of traumatic AI-generated content. AI algorithms for language continue to be significant energy consumers.

But in the very near future, the systems that are being rushed into production today will wreak a completely different kind of chaos.

Tech companies are incorporating these gravely flawed models into a wide range of goods, from virtual assistants that trawl through our emails and calendars to programs that generate code.

They are hurling us toward an AI-powered, buggy, invasive, and fraudulent internet by doing this.

According to Florian Tramèr, an associate professor of computer science at ETH Zürich who specializes in computer security, privacy, and machine learning, if these language models are given access to internet data, hackers could use them as “a super-powerful engine for spam and phishing.”

Related: A Former Adobe CTO Raises $65 Million To Assist Companies In Using AI To Create Marketing Content

Let me explain how that operates. A malicious prompt is first concealed in an email message before being opened by an AI-powered virtual helper. The virtual assistant is prompted by the attacker to either email or transmit the victim’s contact list to the attacker, or to notify everyone on the victim’s contact list of the attack. These new types of assaults will be invisible to the naked eye and automated, in contrast to the spam and scam emails of today that require people to be duped into clicking on links.

If the virtual assistant has access to private data, like banking or health records, this is a recipe for catastrophe. People might be duped into approving transactions that are fake but appear to be the real thing thanks to the ability to alter how the AI-powered virtual assistant acts.

It will also be dangerous to explore the internet using a browser that has an integrated AI language model. In one experiment, a researcher was able to program the Bing chatbot to produce text that appeared to be coming from a Microsoft employee selling discounted Microsoft goods in an effort to obtain credit card information from potential customers. The only action the user of Bing would need to take is to browse a website that has the hidden prompt injection in order for the scam attempt to appear.

Even before they are used in the field, there is a chance that these versions will be compromised. Gigabytes of internet-scraped data are used to build AI models. Additionally, there are numerous program bugs, as OpenAI learned the hard way. After a flaw in an open-source data set began leaking user chat histories, the business had to temporarily shut down ChatGPT. Though the error was probably unintentional, the situation demonstrates the extent of the problems that a data collection error can create.

The research team led by Tramèr discovered that it was inexpensive and simple to “poison” data sets with fake material. An AI language model was created using the stolen data after that.

The association in the AI model gets stronger the more times something shows in a data set. It would be feasible to permanently alter the model’s behavior and outputs by sprinkling enough malicious content throughout the training data.

Related: Bill Gates: AI Is The Most Significant Technological Development In Decades

When code is generated using AI language tools and then embedded into software, these dangers will increase.

According to Simon Willison, an independent researcher and software developer who has studied prompt injection, “if you’re building software on this stuff and you don’t know about prompt injection, you’re going to make stupid mistakes and you’re going to build systems that are insecure.”

The motivation for malicious actors to use AI language models for hacking increases as their use spreads. It’s a shitstorm for which we are not in the least bit ready.

With the aid of AI, several artists and producers are producing sentimental images of China. Although some details in these images are inaccurate, they are convincing enough to deceive and amaze many social media followers.

When making these images, artists used Midjourney, according to my coworker Zeyi Yang’s interviews. For these artists, a recent update from Midjourney has changed everything because it produces more realistic humans (with five fingers, no less!) and more accurately depicts Asian features. Read more from China Report, his weekly report on Chinese technology.

Are you considering how AI will alter product development? A special study report on how generative AI is influencing consumer goods is available from MIT Technology Review. The study examines how generative AI tools could assist businesses in creating new concepts, reinventing current product lines, and shortening production cycles while staying ahead of consumers’ changing tastes. We also explore what effective application of generative AI tools in the consumer goods industry looks like.

Related: Word, Excel, And Outlook Will Soon Feature ChatGPT Technology Thanks To Microsoft

Included are two case studies, an infographic outlining possible future developments for the technology, as well as helpful advice for pros on how to consider its significance and worth. Talk about the story with your group.

Bits and Bytes

Due to claimed privacy violations, ChatGPT has been banned in Italy.
The strict European data security law known as the GDPR will reportedly be investigated by Italy’s data protection authority. That’s because, as I mentioned in my article from last year, AI language models like ChatGPT scrape enormous amounts of data—including personal data—from the internet. It’s unknown whether or how long this ban will be in effect. However, the case will establish a fascinating standard for how the technology is governed in Europe.

To counter OpenAI, Google and DeepMind have combined their efforts.
This article examines how disagreements over AI language models have arisen within Alphabet and how Google and DeepMind were compelled to collaborate on the Gemini project, which aims to develop a language model that can compete with GPT-4. (The Information)

Source


Download The Radiant App To Start Watching!

Web: Watch Now

LGTV™: Download

ROKU™: Download

XBox™: Download

Samsung TV™: Download

Amazon Fire TV™: Download

Android TV™: Download