Wired’s global editorial director Gideon Lichfield recently described a ChatGPT experiment he conducted. He asked ChatGPT to suggest US cities that a journalist writing a story on predictive policing in local communities should visit. When pressed to justify its suggestions, ChatGPT eventually offered up a list of URLs from various media outlets. According to Lichfield: “Every single one 404’d. They were all made up.”

This story is a sobering wake-up call for anyone planning to entrust their work, research, and thinking to Generative AI.

In an earlier post, we touched upon the ways ChatGPT can simplify your work and act as a sounding board for ideation and research. But we also discussed how important it is to navigate these early days of Generative AI carefully and avoid the technology’s potential pitfalls. Here, we highlight the most common ones and how they could impact you.

Hallucinations that can lead to misinformation

AI hallucinations happen when a model produces unexpected, untrue output that is unsupported by its training data or by the real-world information available to it. PCMag’s K. Thor Jensen suggests this may be because current AI models are “too eager to please”, generating whatever response they judge the prompter wants to hear. Worse still, AIs can be very certain even when they’re very wrong, according to machine-learning researcher Sam Kessler at the University of Oxford.

Biases that can undermine your DEI efforts

Microsoft had to shut down one of its earlier AI experiments, the chatbot Tay, when it began tweeting racist and offensive content learned from its interactions with other users on Twitter. Biases are inevitable in systems trained on the web, since a good deal of internet content is malevolent or bigoted. OpenAI’s CEO, Sam Altman, has also acknowledged ChatGPT’s issues around bias and mentioned that the company is working on recalibrating its default settings to combat this.

Until then, it’s up to us to feed ChatGPT the right prompts and probing questions to ensure its responses show basic common sense and are free from bias. And when dealing with sensitive data, businesses and workers alike will need to take extra care to ensure the technology is secure and error-free.
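
As a rough illustration, here is a minimal sketch of how such instructions can be baked into an API call using OpenAI’s Python SDK. The model name, system-prompt wording, and question are illustrative assumptions, not a vetted configuration:

```python
# A minimal sketch of prompt-level guardrails, using OpenAI's Python SDK.
# The model name and instruction text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Answer only from information you are confident about. "
    "If you are unsure, say 'I don't know' rather than guessing. "
    "Avoid assumptions about people based on gender, race, age, or nationality."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; substitute your model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0,  # lower temperature curbs creative (and invented) output
    )
    return response.choices[0].message.content

print(ask("Which US cities have active predictive-policing programs?"))
```

Prompts like this reduce, but do not eliminate, hallucinated or biased answers, which is why human review remains essential.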

Referencing ChatGPT can create accountability problems, or even lead to copyright infringement

ChatGPT and other Generative AI models are “black box” models: it is impossible to understand the rationale behind their outputs, and no underlying reasoning is provided. That raises a question: if we ordinarily expect human beings to explain their actions and ideas, why shouldn’t we expect the same of Generative AI?

Yet numerous researchers and journalists caution against even asking ChatGPT for references, since it cannot reliably match its answers to real sources. Technology editor David Gewirtz advises us to expect errors in the references more than 50% of the time, and recommends using ChatGPT only for generating writing ideas rather than for actual research and writing. Otherwise, you could end up with information that isn’t backed by solid evidence, or, worse still, inadvertently plagiarize content or commit copyright infringement.
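
Echoing the 404s in Lichfield’s experiment, a short script can at least flag cited URLs that don’t resolve. The sketch below is a minimal illustration using the third-party requests library; the URL list is hypothetical, and a URL that loads still doesn’t prove the page supports ChatGPT’s claim:

```python
# A minimal sketch for sanity-checking URLs that ChatGPT cites.
# Uses the third-party `requests` library (pip install requests).
# Note: a URL that resolves is not proof the page supports the claim.
import requests

def check_references(urls: list[str]) -> None:
    for url in urls:
        try:
            # HEAD is cheap; some servers reject it, so fall back to GET.
            resp = requests.head(url, allow_redirects=True, timeout=10)
            if resp.status_code >= 400:
                resp = requests.get(url, allow_redirects=True, timeout=10)
            status = resp.status_code
        except requests.RequestException as exc:
            status = f"error ({exc.__class__.__name__})"
        print(f"{status}\t{url}")

# Hypothetical list of URLs returned by ChatGPT:
check_references([
    "https://example.com/predictive-policing-story",
    "https://example.com/made-up-article",
])
```

Any reference that survives this check still needs to be read and verified by a human before it is cited.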

What does this mean for organizations?

Nearly half the firms recently surveyed by Gartner said they are drafting a ChatGPT policy, and guidelines are certainly necessary. Among publishers, for instance, Wired has released a policy on using Generative AI which states that it won’t use the technology to generate or edit text and images, though it will use it to generate ideas for stories, headlines, and images. PwC has also launched Responsible AI, which offers organizations diagnostics and customizable frameworks, tools, and processes to develop more reliable, unbiased, and safer AI.

At an individual level, letting Generative AI assume the position of an “expert” and using its output without a layer of human judgment or intervention is fraught with risk. High-stakes tasks should be handled without AI intervention to ensure robust, logical, and bias-free results that respect privacy, security, and personal rights.
