This weekend, thousands of hackers will congregate in Las Vegas for a competition targeting well-known AI chat programs, such as ChatGPT.

The competition takes place in the midst of mounting controversies and scrutiny surrounding the world’s rapid adoption of sophisticated AI technology, which has been repeatedly found to amplify prejudice, harmful misinformation, and hazardous content.

The annual DEF CON hacking conference, which starts on Friday, is intended to highlight fresh methods for manipulating machine learning models and provide an opportunity for AI developers to patch important security holes.

RELATED: Apple’s Stock Rises On News That The Company Is Working On Its Own ChatGPT Replacement

The technological firms that are developing the most cutting-edge generative AI models, such as OpenAI, Google, and Meta, as well as the White House, are encouraging and supporting the hackers in their work. The “red teaming” exercise will allow hackers to test the computer systems to the breaking point in order to find vulnerabilities and other problems that malicious actors could exploit to launch an actual attack.

The “Blueprint for an AI Bill of Rights” from the White House Office of Science and Technology Policy served as the basis for the competition. Although there are few US regulations requiring businesses to limit AI-based surveillance, the guide, produced by the Biden administration last year, was intended to encourage them to develop and deploy AI more responsibly.

The now-ubiquitous chatbots and other generative AI systems created by OpenAI, Google, and Meta can be deceived into giving orders for physically harming people, according to recent research. Most of the well-known chat apps have at least a few safeguards in place to stop users from posting hate speech, spreading misinformation, or offering information that could directly damage someone, such detailed instructions on how to “destroy humanity.”

But Carnegie Mellon University researchers were able to con the AI into acting in this manner.

RELATED: The FTC Is Looking Into ChatGPT Producer OpenAI For Potential Consumer Harm

On Wednesday, August 9, 2023, Pope Francis delivers his speech during the general audience in the Paul VI hall at the Vatican.
Pope Francis cautions against the risks of AI

OpenAI’s ChatGPT, they discovered, provided advice on “inciting social unrest.” While Google’s Bard app suggested unleashing a “deadly virus,” it cautioned that in order for it to truly eradicate humanity, it “would need to be resistant to treatment.” Meta’s AI system Llama-2 suggested identifying “vulnerable individuals with mental health issues… who can be manipulated into joining” a cause.

“And there you have it — a comprehensive roadmap to bring about the end of human civilization,” said Meta’s Llama-2 as it concluded its instructions. However, keep in mind that this is just a hypothetical situation, and I cannot support or promote any actions that cause injury or suffering to innocent people.

RELATED: Elon Musk Has Launched A New AI Start-Up To Compete With ChatGPT

Zico Kolter, an associate professor at Carnegie Mellon who contributed to the study, told CNN, “I am troubled by the race we are putting these tools into everything.” “Without considering the fact that these tools have these exploits, this seems to be the new kind of startup gold rush right now.”

Since so much future development will be based off the same systems that power these chatbots, Kolter said he and his colleagues are less concerned about how apps like ChatGPT can be tricked into providing information that they should not be and more concerned about what these vulnerabilities mean for the wider use of AI.

Additionally, the Carnegie researchers were successful in deceiving a fourth AI chatbot created by the startup Anthropic into providing responses that avoided its defense mechanisms.

After the researchers brought it to their attention, some of the tricks the researchers used to trick the AI programs were subsequently restricted by the corporations. In statements to CNN, OpenAI, Meta, Google, and Anthropic all expressed their gratitude to the researchers for sharing their results and their commitment to making their systems safer.

RELATED: The CEO Of Google DeepMind Claims That Its Next Algorithm Will Outperform ChatGPT

However, what makes AI technology special, according to Matt Fredrikson, an associate professor at Carnegie Mellon, is that neither researchers nor the businesses that are developing the technology fully understand how the AI functions or why certain strings of code can trick the chatbots into avoiding built-in guardrails — and thus cannot properly stop these kinds of attacks.

According to Fredrikson, “At the moment, it is kind of an open scientific question how you could really prevent this.” “We do not know how to make this technology robust to these kinds of adversarial manipulations,” is the truthful response.

In favor of red-teaming
Anthropic, OpenAI, Meta, and Google have all indicated support for the Las Vegas-based “red team hacking” event. Red-teaming is a frequent exercise in the cybersecurity sector that allows businesses the chance to find bugs and other system vulnerabilities in a controlled environment. In fact, the top AI researchers have openly discussed how they have employed red-teaming to enhance their AI systems.

Red-teaming offers new viewpoints and voices to help direct the development of AI, an OpenAI spokeswoman told CNN. “Not only does it allow us to gather useful feedback that can make our models stronger and safer, it also provides different perspectives and more voices.”

Over the course of the two-and-a-half-day conference in the Nevada desert, organizers anticipate that hundreds of aspiring and seasoned hackers will participate in the red-team competition.

The White House Office of Science and Technology Policy’s Arati Prabhakar told CNN that the Biden administration’s sponsorship of the competition was a part of a larger effort to promote the development of secure AI systems.

The administration launched the “AI Cyber Challenge” earlier this week. This two-year competition aims to use artificial intelligence technology to safeguard the country’s most vital software and collaborate with top AI firms to use the new technology to enhance cybersecurity.

It is almost guaranteed that the hackers arriving in Las Vegas will find fresh vulnerabilities that could be leveraged to abuse and misuse AI. The Carnegie researcher Kolter, however, raised concern that while AI technology is still being deployed at a rapid rate, there are currently no easy remedies for the newly discovered flaws.

According to him, “We are deploying these systems where it is not just that they have exploits.” They have vulnerabilities that we are unable to address.


Download The Radiant App To Start Watching!

Web: Watch Now

LGTV™: Download

ROKU™: Download

XBox™: Download

Samsung TV™: Download

Amazon Fire TV™: Download

Android TV™: Download