
Generative AI: Navigating Risks and Ethics

Generative AI has created a buzz with the launch of ChatGPT (a conversational language model) and DALL-E (a text-to-image program).

Generative AI uses machine learning to produce new content, such as text, images, program code, poetry, and artwork. There has been much hype around DALL-E’s potential to transform advertising, film-making, and gaming, and around the possibility of ChatGPT replacing content-production roles such as playwright, professor, programmer, and journalist. Stable Diffusion, a text-to-image model, could help fill gaps in medical imaging data to improve diagnosis.

Potential risks of generative AI have also been raised, especially around copyright. One company has banned AI-generated content, citing liability concerns over copyright infringement. A number of stock libraries have banned AI images at the request of artists and photographers. Educators have raised concerns about plagiarism via ChatGPT, with some cities banning the chatbot from public schools.

In this blog post, we link the risks of generative AI to the discourse around AI ethics, using Deloitte's Trustworthy AI Framework. We focus on four key risk factors of generative AI: uncertainty, explainability, bias, and environmental impact.


Uncertainty: How sure are you that this is the right answer?

When answering a question, humans will often qualify with “I’m not sure, but…” or “This is just a guess…” depending on the level of certainty they have about their answer. By contrast, ChatGPT tends to provide an answer without equivocation.

OpenAI lists among ChatGPT’s limitations: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” A coding Q&A platform has banned ChatGPT for precisely this reason: “the posting of answers created by ChatGPT is substantially harmful to the site… while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good.” Experts have found instances of ChatGPT being “hilariously wrong” in logic and in mathematics. Even in domain areas in which computers are known to outperform humans, such as chess, ChatGPT not only makes irrational moves but also gets the algebraic notations wrong, all with "a perfect poker face."

This raises the risk of spreading misinformation through the chatbot’s false sense of confidence. The informational page for ChatGPT explains that this is challenging because there is “currently no source of truth” and that “training the model to be more cautious causes it to decline questions that it can answer correctly.”

For demo purposes, it may be sensible to err on the side of answering a question incorrectly rather than declaring ignorance. However, in many real-world applications, a wrong answer may carry greater costs and harm than a non-answer. The level of caution required may differ depending on the topic being considered.

Andrew Ng, a renowned computer scientist, criticises ChatGPT’s authoritative speaking style, which departs from that of “real experts”, who are confident on certain topics but are also adept at “explaining the boundaries of our knowledge” and “helping the audience understand the range of possibilities.”

For users to trust generated text, the model cannot be equally confident about all topics. Large language models (LLMs) such as ChatGPT should, in their current form, consider asking clarifying questions, acknowledging limitations in their knowledge base, qualifying low-certainty answers, and – sometimes – returning a simple “I don’t know the answer.”
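As a rough illustration of that behaviour, the sketch below gates answers on a confidence score. The helper answer_with_confidence is hypothetical – a stand-in for whatever uncertainty estimate a real system might expose (for example, averaged token log-probabilities) – so this is a minimal sketch of the pattern, not an existing API.

```python
from typing import Tuple

def answer_with_confidence(question: str) -> Tuple[str, float]:
    """Return a candidate answer and a confidence score in [0, 1]."""
    # Placeholder: a real system would query the model here and derive a
    # confidence estimate (e.g. from averaged token log-probabilities).
    return "Paris is the capital of France.", 0.97

def cautious_answer(question: str, threshold: float = 0.8) -> str:
    """Qualify or withhold answers whose confidence falls below the threshold."""
    answer, confidence = answer_with_confidence(question)
    if confidence >= threshold:
        return answer
    if confidence >= 0.5:
        return f"I'm not sure, but my best guess is: {answer}"
    return "I don't know the answer to that."

print(cautious_answer("What is the capital of France?"))
```

The threshold values here are arbitrary; the point is simply that a system can be designed to hedge or abstain rather than always answer with full confidence.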

How can any LLM have confidence (or know the limits of its knowledge) if there is no single source of truth...? This brings us to explainability.


Explainability: Where did you get that information?

The key challenge in identifying a “truth” for ChatGPT is that it does not have a clear information source. Unlike AI assistants such as Siri or Alexa, which locate answers via an Internet search engine, ChatGPT is trained to construct sentences by making a series of guesses about the statistically likely “token” that comes next. For this reason, LLMs are sometimes called “stochastic parrots.”
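A toy example of what next-token prediction means in practice is sketched below. The vocabulary and the scores are invented for illustration; real models work over tens of thousands of tokens, but the principle – turn scores into probabilities and sample – is the same.

```python
import math
import random

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for the token following "The capital of France is"
candidates = ["Paris", "Lyon", "a", "the"]
logits = [6.0, 2.5, 1.0, 0.5]

probs = softmax(logits)
for token, p in zip(candidates, probs):
    print(f"{token!r}: {p:.3f}")

# The output is sampled, not looked up, which is why the same prompt can
# yield different (and sometimes wrong) completions.
print("Sampled next token:", random.choices(candidates, weights=probs, k=1)[0])
```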

In academic research, we determine the reliability of any information based on its source. Some language models, such as Atlas and RETRO, synthesise multiple sources to provide one answer. These models could assign confidence levels based on a source’s reputation: if the information comes from a questionable source, the model could adjust its answer to express doubt, or it could present multiple possible answers when sources disagree on a topic. It may be worth considering whether these model types would be preferable for specialised topics.
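The sketch below shows one way such source-aware behaviour could look. The retrieved passages, reputation scores, and aggregation rule are invented assumptions for illustration – this is not how Atlas or RETRO actually work internally.

```python
from collections import defaultdict

# (candidate answer, source, reputation score in [0, 1]) – all invented
retrieved = [
    ("Answer A", "peer-reviewed journal", 0.9),
    ("Answer A", "encyclopaedia", 0.7),
    ("Answer B", "anonymous forum post", 0.3),
]

# Accumulate support for each candidate answer, weighted by source reputation
support = defaultdict(float)
for answer, _source, reputation in retrieved:
    support[answer] += reputation

best, best_score = max(support.items(), key=lambda kv: kv[1])
confidence = best_score / sum(support.values())

if confidence > 0.75:
    print(best)
elif len(support) > 1:
    print(f"Sources disagree; the best-supported answer is: {best}")
else:
    print(f"This comes from a low-reputation source, so treat with caution: {best}")
```

Because each answer is tied back to named sources, this style of model also lends itself to the kind of explainability discussed next.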

A lack of an explainability mechanism also affects image generators. Text-to-image generators learn from images scraped from the Internet. A visual media company is suing the makers of a popular AI art tool for using its images without proper licensing. Artists have previously boycotted an AI image-generation app for replicating an artist’s style, a possible copyright infringement. While the legal challenges are complex, an ability to identify the sources of an AI artwork’s “inspiration” could help allocate credit where the AI has largely copied another artwork. Steps are being taken in this direction: Stable Attribution aims to decode an AI-generated image’s potential sources.

Where results are questionable, explainability enables the receiver of information to assess context and gain insight into the assumptions or logic applied.


Bias: Are we learning from the “wrong” or undesirable source?

When training on a large corpus of text or image data, a model naturally replicates any biases present in its sources. While ChatGPT has content-moderation guardrails in place to prevent sexual, hateful, violent, or harmful content, these filters have proven easy to bypass by rephrasing prompts. Galactica, an LLM released shortly before ChatGPT, was shut down after three days because it spewed false and racist information.

An app that generates artistic avatars from user photos faced controversy when women found that the app sexualised their images, including their childhood photos. Sexualisation and pornographic content appeared more extreme for Asian women.

Much work remains to identify and mitigate biases in training data sets – not only for generative AI, but for AI overall. Some scholars have argued for careful curation of training data rather than ingesting massive amounts of convenient, easily scraped Internet sources. Limiting the input data set could, in turn, help limit a system’s environmental impact.


Environmental impact: Is this worth the environmental costs?

Generative AI can have significant environmental costs. Strubell et al. found that the training process for a large transformer model emitted 284 tonnes of CO2. For context, an average human is responsible for close to 5 tonnes of CO2 per year, and the model training was found to emit as much carbon as five cars over their lifetimes.
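A quick back-of-the-envelope check using the figures quoted above makes the scale concrete: one such training run corresponds to decades of a single person's emissions.

```python
training_emissions_t = 284  # tonnes of CO2 for the large transformer (as quoted above)
per_person_per_year_t = 5   # approximate annual CO2 emissions of an average person

print(f"Roughly {training_emissions_t / per_person_per_year_t:.0f} person-years of emissions")
# -> Roughly 57 person-years
```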

While there are ideas proposed to limit the carbon footprint of AI systems, the potential environmental impact of generative AI should be considered as a part of the risk-benefit assessment in any use case.


Closing remarks

Overall, the potential risks and ethical considerations should be weighed fully against the rising hype around generative AI. There are exciting potential applications of these technologies as researchers make massive strides in launching new models. Balancing these strides with proportionate risk management, accountability, and attention to potential misuse will ease concerns and limit unforeseen negative impacts. Such misuse has already been seen with deepfake voice technology, which made recent headlines when an AI voice generator was used to make celebrities' and politicians' voices read offensive messages. Combined with ChatGPT, such text-to-speech technology could create fake and unethical content at an unprecedented scale. Safeguards are being introduced for text-to-speech voice generators; such measures are crucial to engendering the trust needed for generative AI to be used in practice.

Future research should focus on addressing the unique challenges of generative AI, including better defining and measuring uncertainty in predictions, improving explainability mechanisms, detecting undesired biases, understanding the potential environmental impact, and putting in place effective safeguards against misuse.

Robust risk management and governance are needed for organisations to safely and confidently leverage generative AI for innovation. The above risks specific to generative AI should be identified and assessed along with more traditional enterprise risk factors. Governance, controls, and monitoring mechanisms should be proportionate to the risks of each use case to ensure any residual risk is in line with organisational risk appetite.
