How ChatGPT Changes The Content We Consume Online

Research from the Gillmore Centre for Financial Technology at Warwick Business School has revealed that Generative AI tools built on Large Language Models (LLMs) inadvertently alter the emotional tone of the text they rephrase.

Their study investigates how the proliferation of LLMs affects measured public sentiment, ultimately arguing that the changes these models introduce to content render existing sentiment-analysis results unreliable.

Changes the tone

The research, which adapts established experiments, adds to the body of knowledge on Generative AI and user-generated content (UGC) by demonstrating that the widespread adoption of LLMs modifies the linguistic characteristics of text.

This effect became apparent through an analysis of 50,000 tweets. The researchers used GPT-4 to rephrase each tweet, then applied the Valence Aware Dictionary and sEntiment Reasoner (VADER) to compare the original tweets with their rephrased counterparts. The study found that LLMs tend to shift sentiment toward neutrality, moving the text away from both positive and negative emotional orientations.
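To make that comparison concrete, here is a minimal sketch of this kind of measurement using the vaderSentiment Python package (the standard VADER implementation). The tweet pairs below are invented placeholders, not data from the study, and the exact pipeline the researchers used is an assumption here.

```python
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Hypothetical (original, LLM-rephrased) tweet pairs; the study used 50,000 real tweets.
pairs = [
    ("This update is absolutely amazing, best release ever!!!",
     "The update introduces several improvements over previous releases."),
    ("Worst customer service I've ever dealt with. Never again.",
     "The customer service experience did not meet expectations."),
]

analyzer = SentimentIntensityAnalyzer()

for original, rephrased in pairs:
    # VADER's compound score ranges from -1 (most negative) to +1 (most positive).
    orig_score = analyzer.polarity_scores(original)["compound"]
    reph_score = analyzer.polarity_scores(rephrased)["compound"]
    # A rephrased score closer to 0 indicates a shift toward neutrality.
    print(f"original: {orig_score:+.3f}  rephrased: {reph_score:+.3f}  "
          f"shift toward neutral: {abs(orig_score) - abs(reph_score):+.3f}")
```

Comparing the magnitude of the compound scores, rather than the raw values, captures the study's central observation: both strongly positive and strongly negative texts drift toward zero after rephrasing.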

“Our findings reveal a notable shift towards neutral sentiment in LLM-rephrased content compared to the original human-generated text,” the researchers explain. “This shift affects both positive and negative sentiments, ultimately reducing the variation in content sentiment.”

“While LLMs do tend to move positive sentiments closer to neutrality, the shift in negative sentiments towards a neutral position is more pronounced. This overall shift towards positivity can significantly impact the application of LLMs in sentiment analysis.”

Consuming content

A substantial body of research relies on user-generated content for tasks ranging from predicting stock prices to assessing service quality. The study highlights a serious problem with this work: when Large Language Models (LLMs) are involved in producing the content, they can make it biased.

The bias arises because LLMs are routinely used to rephrase text or draft new content, so the published text can carry sentiments and opinions that diverge from what the author would have expressed without the model's help.

“In turn, our research proposes a mitigation method aimed at reducing bias and enhancing the reliability of UGC,” the authors continue. “This involves predicting or estimating the sentiment of original tweets by analyzing the sentiments of their rephrased counterparts.”
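The paper quoted above does not spell out its exact estimator, so the sketch below stands in for the idea with a simple linear regression: fit a map from the VADER score of a rephrased tweet back to the score of its original, using a calibration set of known pairs, then apply that map when only the rephrased text is available. The calibration pairs, the helper function, and the choice of regression model are all illustrative assumptions, not the authors' method.

```python
# pip install vaderSentiment scikit-learn numpy
import numpy as np
from sklearn.linear_model import LinearRegression
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def compound(text: str) -> float:
    """VADER compound sentiment score in [-1, 1]."""
    return analyzer.polarity_scores(text)["compound"]

# Calibration set: pairs where both the original and the LLM rephrasing are known.
# These examples are placeholders; in practice this would be a large labeled sample.
calibration = [
    ("Absolutely love this phone, the camera is incredible!",
     "The phone's camera performs well."),
    ("Terrible flight, delayed five hours with no apology.",
     "The flight was delayed by five hours."),
    ("Pretty decent meal, nothing special though.",
     "The meal was adequate."),
]

X = np.array([[compound(reph)] for _, reph in calibration])  # rephrased scores
y = np.array([compound(orig) for orig, _ in calibration])    # original scores

# Fit a simple linear map from rephrased sentiment back to original sentiment.
model = LinearRegression().fit(X, y)

# Given only a rephrased tweet, estimate what the original sentiment likely was.
new_rephrased = "The service response time could be improved."
estimate = model.predict([[compound(new_rephrased)]])[0]
print(f"estimated original sentiment: {estimate:+.3f}")
```

A linear map is the simplest possible choice; because the neutralizing shift is reportedly stronger for negative text, a practical estimator might need separate calibration for positive and negative ranges, or a nonlinear model.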

However, further investigation is needed to determine whether other linguistic features of UGC, such as emotion, sentence structure, or the ratio of specific word types within a sentence, also change when AI is used to rewrite it.
