The Hollywood Writers’ Strike and Ethical AI | By Ed Watal, Founder & Principal — Intellibus

Less than 12 months ago, the debut of ChatGPT brought the true power of AI into the mainstream. Since then, an unshakeable belief has taken hold that artificial intelligence is here to stay and will forever alter a number of industries.

This also sparked a series of conversations surrounding ethics, fairness, and the negative effects on creativity. Most notable is the pushback on AI-generated creative content from the Writers Guild of America (WGA), which has dominated headlines since mid-summer with no resolution in sight.

The questions raised by the writers' strike have allowed many of us to take a step back and evaluate the role of AI in our lives, work, and academics. The strike has also brought to light the real-world consequences of technological progress, especially when development happens behind closed doors and the ethical considerations are not as visible to all as one would expect.

The WGA's take on AI

Production of new television and films has ground to a halt. The WGA, made up of the writers who create everyone's favorite media, continues to picket in an attempt to gain fair pay, recognition, and a mechanism that lets writers leverage AI without granting AI the rights of a human creator. Programs such as ChatGPT have proven so powerful and intuitive that studio execs, showrunners, and filmmakers have begun to rely on them more and more to generate content and save money.

Titans of the movie and television industry, such as Bob Iger, have even said AI presents “opportunities and benefits for the company.” For all the proposed opportunities brought about by AI, however, the writers maintain that with no regulation, their jobs and ability to create are in jeopardy.

AI is already being used in Hollywood to generate all sorts of content, from episode titles to screenplay outlines. AI is also a major player in the rise of streaming services, where machine learning can suggest content based on viewer preferences and even adjust image quality settings.

One of the main points of contention for the WGA strikers is the lack of fair compensation for their work on streaming services. WGA leadership maintains that the rise of AI and streaming content has negatively impacted their livelihood.

Labor rights in an age of AI

The WGA strike has brought the issue of labor rights for creatives to light. As Hollywood races to be at the forefront of AI advancement, writers and other creatives are being left behind as collateral damage. The Hollywood writers' strike highlights the importance of fair compensation and ethical treatment for creative professionals, paralleling the ethical concerns surrounding AI and workers' rights in automation.

Currently, there are no federal guidelines governing the use of AI in the workplace. It is the Wild West, where experimentation in the name of disruption and innovation reigns supreme. At the same time, this lack of regulation comes with the exploitation of labor, an over-reliance on technology that hasn't quite reached the level of human thought and ability, and, as we've seen with the writers' strike, pushback.

Representation

The lack of representation for diverse communities has been an issue within Hollywood for decades. More recently, however, AI technology has been looped into the representation conversation due to a stark lack of representation in AI training data and in the teams developing the technology. While the WGA is fighting for inclusive storytelling, there is an ongoing debate about the staggering lack of diversity among those developing AI, leading to biases in both the data and the content AI generates.

AI programs learn almost entirely from the data they are trained on. For example, if you teach an AI program what a "human being" looks like using only images of white people, it will inadvertently develop a machine-learned bias. Because of Hollywood's increased reliance on AI-generated content, those biases are carried over (albeit unintentionally) into the content AI generates, feeding into an already noticeable lack of representation within the screenwriting and filmmaking industry.
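To make the skew concrete, consider a purely hypothetical audit of a training set's metadata; the groups and counts below are invented for illustration, not drawn from any real dataset:

```python
from collections import Counter

# Hypothetical training-set metadata: each entry records the demographic
# group of the person depicted in one training image.
training_images = (
    ["white"] * 9_000
    + ["black"] * 400
    + ["asian"] * 350
    + ["latino"] * 250
)

counts = Counter(training_images)
total = sum(counts.values())

# A model trained on this corpus will inherit the skew in what it
# "learns" a human being looks like.
for group, count in counts.most_common():
    print(f"{group:>7}: {count:5d} images ({count / total:.1%})")
```

An audit this simple will not fix a biased model, but it makes the imbalance visible before the model is ever trained.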

Economic fairness

One of the primary concerns regarding AI has been its effect on the economy, especially its potential to worsen pre-existing economic disparities. With the writers' strike now approaching its sixth month, Hollywood's writers have tried to make it clear that compensation was already falling short of livable wages, even before reliance on AI became the status quo.

To move forward ethically with AI, economic fairness must be made a distinct priority. The writers holding the picket line are demanding fair compensation for their hard work and are attempting to draw attention to the notion that the industry's overreliance on AI may not be the tremendous money-saver some executives like Iger believe it to be.

While AI has the potential to generate massive global economic activity, per a McKinsey report, the writers’ strike is calling attention to a continued need for creativity, imagination, and talent within the arts and filmmaking industry as a whole. Even though AI is incredibly useful, it still cannot replace the vast wealth of creativity that can come from the human mind — not yet anyway.

To build an ethical world in which AI can have a part in society, rather than a chokehold on it, collective action is required. Just as the writers and SAG-AFTRA have banded together to draw attention to inequity and injustice, so must researchers, developers, and policy-makers involved in the development of AI.

AI is here to stay, but emphasis must be placed on avenues towards its ethical application, unbiased development, and continued support for creative output and those who build their lives around it.

Questions That Still Need to Be Answered

Ownership and fair use

Currently, the WGA's position is that only humans can create "literary material" and "source material," the terms used to assign ownership and thereby residuals for any given piece of writing. The WGA is not opposed to writers using AI; rather, its position is that even if a writer used AI to generate a script or a storyline, the writer would still be considered the original author.

In theory, one might think this should be acceptable. However, given that generative AI tools have been built on copyrighted materials, the question of what constitutes "fair use" still needs to be answered.

Creating guardrails

Ensuring there is a mechanism to prevent writers or others from feeding copyrighted content to AI would also need to be addressed. This concern is similar to the risk of confidential data leakage, one of the core reasons several companies have banned the use of ChatGPT at work.

Additional guardrails that can track, trace, and potentially block the inputs and prompts a writer provides to an AI would also be required. One approach being proposed by generative AI organizations is to provide companies with an enterprise-specific instance of the technology. However, this approach may be cost-prohibitive for most studios.
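As a rough sketch of how such a guardrail might work in practice (the registry, fingerprinting scheme, and function names below are assumptions rather than any existing product), every prompt could be routed through a gateway that logs it and blocks text matching registered copyrighted scripts:

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical registry of fingerprints of copyrighted script passages
# (in practice, populated from the studios' registered scripts).
COPYRIGHT_FINGERPRINTS = {
    hashlib.sha256(passage.encode("utf-8")).hexdigest()
    for passage in [
        "INT. SPACESHIP - NIGHT. The captain stares into the void.",  # placeholder entry
    ]
}

AUDIT_LOG = []  # in practice, durable append-only storage for tracking and tracing


def submit_prompt(writer_id: str, prompt: str) -> str:
    """Log every prompt and block it if it matches a registered copyrighted passage."""
    fingerprint = hashlib.sha256(prompt.strip().encode("utf-8")).hexdigest()
    AUDIT_LOG.append({
        "writer_id": writer_id,
        "fingerprint": fingerprint,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

    # Exact fingerprints only catch verbatim copies; a real guardrail would
    # need fuzzier similarity checks layered on top of this.
    if fingerprint in COPYRIGHT_FINGERPRINTS:
        return "BLOCKED: prompt matches registered copyrighted material"
    return "ALLOWED: prompt forwarded to the generative AI provider"


if __name__ == "__main__":
    print(submit_prompt("writer-001", "Draft a logline about a heist on Mars."))
```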

Granular attribution chain and residuals

A necessary first step towards allocating residuals fairly would be the ability to structure both the data fed into an AI and the content it generates so that usage can be tracked granularly, down to the paragraph or page of the literary or source material.
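One way to picture that structure is sketched below; the field names and the even split of residuals are illustrative assumptions, not a proposed standard. Every generated passage carries identifiers for the source paragraphs it drew on, so residuals can later be apportioned to the credited authors:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class SourceParagraph:
    """One paragraph of literary or source material, granularly identified."""
    script_id: str   # the registered script in the data commons
    page: int
    paragraph: int
    author_id: str   # the credited human writer, for residual allocation


@dataclass
class GeneratedPassage:
    """A piece of AI-generated text plus the source paragraphs it drew on."""
    text: str
    attributions: list[SourceParagraph] = field(default_factory=list)

    def residual_shares(self) -> dict[str, float]:
        """Split residuals evenly across the credited authors of the sources used."""
        if not self.attributions:
            return {}
        share = 1.0 / len(self.attributions)
        shares: dict[str, float] = {}
        for src in self.attributions:
            shares[src.author_id] = shares.get(src.author_id, 0.0) + share
        return shares
```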

Technological Shift Needed

While there is a lot of technology out there, some fundamental shifts will be needed to make generative AI ethical. One such shift would be the creation of ethical AI platforms built on core principles that govern how data may be used by AI.

At the very core of this is not the ethics of AI but the ethics of data: that is, what data has been or is being fed to the AI. One approach towards an ethical AI platform would be to establish a not-for-profit data commons and self-regulating organization (SRO) for Hollywood to accomplish the following:

  • Establish a scripts repository: A unified data corpus of all scripts across the industry, similar to Wikipedia, would need to be created and mandated by regulation or standards as the data commons source for all scripts (more broadly, all literary and source material).
  • Establish a facial repository: A unified data corpus of the faces of all actors in the industry would have to be created and mandated by regulation or standards as the data commons source for all facial likenesses of actors.
  • Provide standard APIs with access controls: A standard set of APIs over the scripts and facial repositories that can be accessed by all generative AI tool providers (see the sketch after this list).
  • Establish an SRO with enforceability: An organization with the ability to hold writers, studios, and generative AI tool providers accountable for adhering to the rules of the SRO.
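
A minimal sketch of what such an access-controlled API surface could look like appears below; the method names, scopes, and token scheme are illustrative assumptions, not a proposed standard:

```python
class ScriptsRepositoryAPI:
    """Illustrative API over the scripts data commons, with scoped access control."""

    # Hypothetical scopes an SRO could grant to generative AI tool providers.
    REQUIRED_SCOPES = {
        "get_script_paragraph": "scripts:read",
        "register_usage": "scripts:attribute",
    }

    def __init__(self, token_scopes: dict[str, set[str]]):
        # Maps an access token to the scopes the SRO has granted that token.
        self._token_scopes = token_scopes

    def _authorize(self, token: str, operation: str) -> None:
        required = self.REQUIRED_SCOPES[operation]
        if required not in self._token_scopes.get(token, set()):
            raise PermissionError(f"token lacks scope '{required}' for {operation}")

    def get_script_paragraph(self, token: str, script_id: str, page: int, paragraph: int) -> dict:
        """Return one identified paragraph so any downstream use can be attributed."""
        self._authorize(token, "get_script_paragraph")
        # Stubbed: a real implementation would read from the data commons.
        return {"script_id": script_id, "page": page, "paragraph": paragraph, "text": "..."}

    def register_usage(self, token: str, provider_id: str, script_id: str, page: int, paragraph: int) -> None:
        """Record that a provider's model consumed this paragraph, for residual accounting."""
        self._authorize(token, "register_usage")
        print(f"logged: {provider_id} used {script_id}, page {page}, paragraph {paragraph}")


# Example: the SRO grants a provider read access but not attribution-logging access.
api = ScriptsRepositoryAPI({"token-abc": {"scripts:read"}})
print(api.get_script_paragraph("token-abc", "script-001", page=12, paragraph=3))
```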

The downside of the above approach is that all existing AI models would need to be retrained using the new data commons repositories. While this is a one-time cost, it would address multiple concerns, including the use of copyrighted data by generative AI providers, granular attribution of rights to content owners, and the ability to turn off/remove access for future content.
