Strengths and weaknesses of Gen AI
Here are some key strengths and weaknesses of Generative AI that we should consider when using Gen AI tools.
Strengths of Generative AI
Diverse outputs
Gen AI can produce what seem like diverse and original outputs. It can capture nuances in language and combine patterns from its training data in ways we may not have seen before. This helps to open up different perspectives and can give us ideas on how to explore topics from a variety of viewpoints.
Levelling the playing field
Gen AI can process and interpret human language in a conversational style, which allows it to generate contextually relevant responses to user prompts. It can also reformulate text to simplify or summarise it, which can help people start to understand more complex ideas. Gen AI can also process and generate text in multiple languages.
If used appropriately, Gen AI can be a great leveller for those who do not speak English as their first language or may not have the same literacy or language skills as others.
Organisational productivity
Gen AI can be fine-tuned for different domains, so it can be made widely available for a variety of tasks, such as chatbots, content generation and language translation. This can help boost organisational productivity: Gen AI can answer questions in a human-like style, reduce effort on tedious and monotonous tasks, provide accessible summaries of complex topics, and produce automatic translations and transcriptions.
Personalisation
Gen AI models can remember previous interactions, which results in more coherent and relevant conversation experiences for users. You can ask some models to remember your writing style or how you want to present your data. You can even ask a chat tool to test your knowledge of any piece of content. Gen AI can also generate responses quickly, which enables fluid interactions and real-time applications.
Industry applications
It is anticipated that most industries and workplaces will use some form of Generative AI in the future to enhance and optimise their work. Gen AI is already being integrated into our daily learning and work tools, such as Copilot within Microsoft Office or the AI content generator in Grammarly. So it is important to develop your skills in using Gen AI effectively and ethically.
Weaknesses of Generative AI
Lack of trust and authenticity
Gen AI can generate information that appears factual but is often inaccurate. These outputs are often called AI hallucinations. We must remember that:
- although Gen AI models appear to understand the content that they use and generate, they do not actually understand it
- the data that Gen AI models are trained on already contain many inaccuracies and biases
- Gen AI can also easily create fake news, misinformation and ‘deep fakes’.
Most AI models are created to provide a likely output based on their prompts and training. Their outputs are designed to appear convincing even when there is no factual basis for them. Consequently, ‘facts’ provided by these tools may appear trustworthy, but that appearance is false. Both input data and prompts can introduce bias into the output. As a result, ALL outputs from AI tools must be independently verified for truthfulness.
Copyright and ownership
Gen AI output imitates or summarises existing content, mostly without the permission of the original content owners. The output's appearance of creativity and originality generates challenges for us. There are issues of copyright, ownership, intellectual property and lack of authoritative legislation in this rapidly evolving area. It is important to keep this in mind when using Gen AI tools.
You should not copy and paste any copyrighted text, or other sensitive or personal data, into an AI tool. The tool could incorporate this data into its training dataset, where it could then be used illegally or unethically.
Carbon footprint
Training Gen AI requires huge amounts of electricity, the generation of which can emit huge amounts of carbon dioxide. This has important consequences for climate change.
For example, the electricity needed to train GPT-4 is estimated at between 51,772 and 62,318 megawatt hours (MWh), generating between 1,035 and 14,994 metric tons of carbon dioxide emissions. The variation depends on where in the world the training takes place, because electricity grids differ in how carbon-intensive they are. As a comparison, a 3,000-mile round-trip flight from London to Boston emits 1 metric ton of carbon dioxide.
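To see why the location of training matters so much, we can sketch the arithmetic: emissions are roughly the energy used multiplied by the carbon intensity of the local electricity grid. The energy range below is the one quoted above; the grid intensities are illustrative assumptions only, chosen to span low-carbon and fossil-heavy grids.

```python
# Back-of-envelope check of the training-emissions figures above.
# The MWh range is from the text; the carbon intensities (tonnes of CO2
# per MWh generated) are illustrative assumptions, not measured values.

energy_low_mwh, energy_high_mwh = 51_772, 62_318  # estimated GPT-4 training energy
flight_t_co2 = 1.0  # London-Boston round trip, as quoted in the text

for intensity_t_per_mwh in (0.02, 0.25, 0.5):  # hypothetical grid intensities
    low = energy_low_mwh * intensity_t_per_mwh
    high = energy_high_mwh * intensity_t_per_mwh
    print(f"at {intensity_t_per_mwh} t CO2/MWh: {low:,.0f}-{high:,.0f} t CO2 "
          f"(up to ~{high / flight_t_co2:,.0f} London-Boston round trips)")
```

Running this shows how the same training run can produce emissions that differ by more than an order of magnitude depending on the grid, which is why published estimates span such a wide range.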
Feedback loop
The output of Gen AI is flooding the internet through tools such as ChatGPT. This poses an interesting risk for future GPT (Generative Pre-trained Transformer) models and leads to the concept of model collapse.
Future models trained on online content that earlier GPT models created will inherit the biases and errors of that content. This self-referential loop compounds the mistakes in the data, contaminating the training data and potentially leading to model collapse, where models forget most of the original data that they learnt from.
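The mechanism can be illustrated with a deliberately simplified toy simulation. This is an assumption-laden sketch, not how real GPT training works: we model the "data" as a bell curve, and each new generation is fitted purely to the previous generation's output. Because generative models tend to over-produce typical samples, we mimic that by dropping tail values before refitting.

```python
import random
import statistics

# Toy sketch of model collapse: each "generation" is trained only on the
# previous generation's synthetic output, keeping just the "typical" values.
random.seed(0)

mu, sigma = 0.0, 1.0  # the original data distribution (mean and spread)
for generation in range(10):
    sample = [random.gauss(mu, sigma) for _ in range(1000)]
    # Over-sampling of likely outputs: keep only values within one
    # standard deviation, losing the rare "tail" data each round.
    typical = [x for x in sample if abs(x - mu) <= sigma]
    mu = statistics.fmean(typical)
    sigma = statistics.pstdev(typical)

print(f"after 10 generations the model's spread is sigma={sigma:.4f}")
# sigma shrinks towards zero: later generations have forgotten the rare,
# diverse values present in the original data.
```

Each generation loses a little of the original diversity, and the loss compounds; after a handful of rounds the model has "forgotten" almost everything except the most typical outputs.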
Ethical, social and human costs
After training, the Gen AI model is often checked and refined in a process known as Reinforcement Learning from Human Feedback (RLHF). In RLHF, human beings review the Gen AI responses and validate them. This ensures that the Gen AI responses are appropriate, accurate and align with the intended purpose.
There are issues of exploitation around this work. For ChatGPT, the RLHF reviewers were mostly workers in global south countries such as Kenya. They were paid less than $3 per hour to review the outputs of ChatGPT and identify any objectionable or toxic materials. This work has had a massive negative impact on many of those involved, some of whom experienced trauma.
Gen AI also tends to output standard answers that replicate the values and biases of the creators of the data used to train the models. This may constrain the development of plural opinions and further marginalise already marginalised voices.