The Writers Guild of America’s contract agreement with Hollywood studios was touted as a major victory for writers, but industry experts are concerned that the accord’s artificial intelligence safeguards may prove insufficient.

As it stands, the industry faces a range of open questions about AI and writing, particularly around copyright law, detecting AI usage, and how studios will respond.

AI is also a major sticking point in the ongoing actors’ strike, with discussions breaking down on Thursday due to a dispute between performers and studios over AI guardrails.

RELATED: The Actors’ Union And Hollywood Studios Have Halted Negotiations

Writers and actors have long been concerned about the growing role of AI, primarily because they fear the technology could replace their jobs in Hollywood.

“I hope I’m wrong, but I believe AI will take over the entertainment industry,” Justine Bateman, a writer, director, and actor guild member, told CNBC in July.

The WGA agreement states that AI cannot be used to damage a writer’s reputation or diminish a writer’s pay. The contract does, however, allow studios to train AI on pre-existing content. The WGA’s original proposal in May, when the strike began, would have outright prohibited studios from using any content to train AI.

The Alliance of Motion Picture and Television Producers did not immediately respond to CNBC’s request for comment.

Hollywood studios training AI on pre-existing material could create a whole new set of challenges for writers, since it would let studios use earlier work to develop similar material without the writer’s approval, or even knowledge.

According to Leslie Callif, partner at Beverly Hills entertainment law firm Donaldson & Callif, difficult questions may arise in this murky area.

“One of the biggest issues we’re dealing with is the misappropriation of how AI uses source material and creates new material out of it without permission,” Callif said. “How do you keep this under control? I believe it all boils down to human behavior.”

Allowing studios to train AI with prior content was a “punt” down the road, and studios will undoubtedly “push to use AI as far as possible,” according to Peter Csathy, founder and chairman of media legal advice firm Creative Media.

“The biggest inhibitor is probably existing copyright law,” he said.

RELATED: Since The Beginning Of The Writer And Actor Strikes, Hollywood Has Lost 45,000 Jobs

In the United States, artificial intelligence has upended traditional copyright law.

Authors such as Jodi Picoult and George R.R. Martin sued OpenAI for copyright infringement earlier this year, accusing the startup of exploiting their published writings to train ChatGPT.

“We’re having productive conversations with many creators around the world, including the Authors Guild, and have been working cooperatively to understand and discuss their concerns about AI,” a spokesman for OpenAI told ABC News.

A group of visual artists sued Stability AI, Midjourney, and DeviantArt in January, claiming that Stability AI’s Stable Diffusion software illegally scraped billions of copyrighted images from the internet and allowed Midjourney and DeviantArt AI tools to generate images in the artists’ style.

Content that is not human-generated is not eligible for copyright in the United States, which creates hurdles for studios looking to use AI.

“It’s clear from the U.S. copyright laws that AI-generated content is not capable of protection or exclusivity, and the studios will not have that,” Csathy said in a statement. “They need to own their intellectual property.”

For many years, accusations of copyright infringement have rested on the general notion of substantial similarity. In other words, if one body of work is found to be substantially similar to an earlier body of work, the original artist may be entitled to compensation.

RELATED: Hollywood Strikes Have An Effect Outside Of The Entertainment Business

The Supreme Court ruled earlier this year that photographer Lynn Goldsmith’s photograph of the late pop superstar Prince was protected by copyright after artist Andy Warhol, who died in 1987, had used one of her photos as the starting point for a work in his signature bold, colorful style. Following Prince’s death in 2016, Vanity Fair licensed one of the Warhol artworks created from Goldsmith’s original photo without compensating Goldsmith in any way.

According to Csathy, the verdict is especially relevant to writers.

“In the case [of using AI], if there’s substantial similarity to an existing script and it takes a commercial opportunity away, they could claim copyright infringement and cite the Warhol case,” Csathy said in a statement.

Where have all the AI detectives gone?
Given how swiftly the technology changes, AI regulation notoriously lags behind it. However, some, such as Csathy, believe that detection and protection technology is improving.

The researchers behind “My Art My Choice” aim to prevent copyrighted works from being used to train AI. The method works by applying a protective layer to an image, rendering it unusable to an AI learning model. In the future, the team intends to apply the technology to other modalities.
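The protective-layer idea can be illustrated with a toy adversarial perturbation. The sketch below is a minimal illustration of the general technique, not the actual “My Art My Choice” method: it nudges an image’s pixels, within a tiny per-pixel budget, so that a stand-in feature extractor (here just a random linear map, purely hypothetical) sees shifted features while the image looks unchanged to a human.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "encoder": a fixed random linear map from 64 pixels to 16 features.
# A real system would attack the feature space of an actual vision model.
W = rng.standard_normal((16, 64))

def protect(img, eps=0.03):
    """Add an imperceptible perturbation (at most eps per pixel) that
    pushes the image's features toward an arbitrary target direction."""
    flat = img.ravel()
    target = rng.standard_normal(16)        # arbitrary feature-space direction
    grad = W.T @ target                     # gradient of target @ (W x) w.r.t. x
    perturbed = flat + eps * np.sign(grad)  # one FGSM-style signed step
    return np.clip(perturbed, 0.0, 1.0).reshape(img.shape)

img = rng.uniform(0.1, 0.9, size=(8, 8))    # toy grayscale image in [0, 1]
shielded = protect(img)

# Pixels barely move, but the extracted features shift measurably.
assert np.max(np.abs(shielded - img)) <= 0.03 + 1e-9
assert np.linalg.norm(W @ (shielded - img).ravel()) > 0.01
```

Real protection tools iterate many such steps against the models they target; the principle — small pixel changes, large feature changes — is the same.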

Hugging Face, a machine learning company, announced a collaboration with media verification company Truepic earlier this month to embed a digital “watermark” into photographs that quickly identifies authorship, flags modifications, and labels AI-generated material.
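At its simplest, watermarking means hiding identifying bits inside the media itself. The toy least-significant-bit scheme below is only a sketch of that general idea — Truepic’s actual approach relies on cryptographically signed provenance metadata (the C2PA standard) rather than pixel tricks:

```python
import numpy as np

def embed(pixels, bits):
    """Hide a bit string in the least significant bit of the first len(bits) pixels."""
    out = pixels.copy()
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b  # clear the low bit, then set it to b
    return out

def extract(pixels, n):
    """Read back the n hidden bits."""
    return [int(p) & 1 for p in pixels[:n]]

pixels = np.array([200, 13, 77, 254, 9, 120, 33, 68], dtype=np.uint8)
mark = [1, 0, 1, 1, 0, 1, 0, 0]
stamped = embed(pixels, mark)

# The watermark round-trips, and no pixel value changes by more than 1.
assert extract(stamped, len(mark)) == mark
assert np.max(np.abs(stamped.astype(int) - pixels.astype(int))) <= 1
```

LSB schemes are fragile (re-encoding destroys them), which is one reason production systems favor signed metadata and robust perceptual watermarks instead.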

The developments are reminiscent of Content ID, the digital fingerprinting system that eased early concerns that YouTube would run afoul of copyright law. Introduced in 2007, the system has since been scaled to detect copyright infringement on an enormous scale. According to a July YouTube transparency report, Content ID flagged more than 826 million potential copyright infringements in the second half of 2022, nearly all of them automatically. Payouts to copyright holders resulting from those claims totaled $9 billion.
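Fingerprinting systems like Content ID boil media down to compact perceptual signatures that survive small edits, then compare signatures rather than raw files. The sketch below is a simplified stand-in — a classic “difference hash” on a tiny grayscale thumbnail — not YouTube’s proprietary algorithm:

```python
import numpy as np

def dhash(gray):
    """Difference hash: for a grayscale array downsampled to 8x9, record
    whether each pixel is brighter than its right-hand neighbor, packing
    the 64 comparisons into one integer fingerprint."""
    bits = (gray[:, 1:] > gray[:, :-1]).flatten()
    return sum(1 << i for i, b in enumerate(bits) if b)

def hamming(a, b):
    """Number of differing fingerprint bits; a small distance suggests a match."""
    return bin(a ^ b).count("1")

rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(8, 9)).astype(float)  # toy 8x9 thumbnail
brightened = original + 12.0                                # mild global edit
unrelated = rng.integers(0, 256, size=(8, 9)).astype(float)

# A uniform brightness change preserves all left/right comparisons,
# so the edited copy still matches; an unrelated image does not.
assert hamming(dhash(original), dhash(brightened)) == 0
assert hamming(dhash(original), dhash(unrelated)) > 0
```

Because the hash encodes relative brightness rather than exact pixel values, common edits like brightness shifts or mild compression leave the fingerprint largely intact — the property that makes matching at YouTube scale feasible.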

“The technology is increasing on the detection side,” Csathy said in a statement. “There’s a whole burgeoning industry of forensic AI that’s going to be policing this.”

Despite advancements in content verification and AI detection technology, many are skeptical that this will be adequate to mitigate the threats of AI.

“The courts will say there are hundreds of thousands or millions of works in the training set,” Csathy said in a statement. “How can you possibly claim that there was no fair use of your works and that there was an infringement? It will be a perpetual tug of war. There is no way to completely regulate this technology.”
