The Misconception of AI's Ability to Summarize Crucial Information

07.03.2024 | Christian Kreutz

There is a prevailing belief that generative AI can sift through vast amounts of information and deliver quick summaries. Whether it's condensing a folder full of large PDFs or finding relevant court cases for lawyers, hopes are high that large language models can assist with these tasks.

In The Economist, Don and Charlie Sull rave about using generative AI for a new form of employee feedback: "Freed from the shackles of traditional surveys, organisations can use AI to gather and process employee feedback from many sources."

The problem, however, lies in underestimating the complexity of summarization and leaving it to a machine to decide what is important. Automated summarization can help me get an initial overview of a document, which is what a good executive summary should do anyway, and large language models can even draw interesting connections between documents. But the result is always only one version of the story, and it cannot replace human judgement when critical decisions have to be made based on the information. Such a summary will hardly ever surface the one piece of information that matters most to you, or that you did not even know to look for.

Summarization requires prioritization, and that should remain a task for humans, especially in delicate projects. Take the citizen participation platform CitizenLab, which proudly announces its ability to reduce citizen-generated feedback by half through AI technology. I doubt citizens or employees would be pleased to learn that their qualitative feedback may in future be prioritized by a machine. Precisely because large language models so rarely uncover the crucial or previously unknown detail, human involvement is all the more necessary.