Generative AI, a technology developing at breakneck speed, may carry hidden risks that erode public trust and democratic values, according to a study led by the University of East Anglia (UEA).
In collaboration with researchers from the Getulio Vargas Foundation (FGV) and Insper in Brazil, the research showed that ChatGPT exhibits biases in both text and image outputs, leaning toward left-wing political values and raising questions about fairness and accountability in its design.
The study found that ChatGPT often declines to engage with mainstream conservative viewpoints while readily producing left-leaning content. This uneven treatment of ideologies underscores how such systems can distort public discourse and deepen societal divides.
Dr. Fabio Motoki, a lecturer in accounting at UEA's Norwich Business School, is the lead researcher on the paper, "Assessing Political Bias and Value Misalignment in Generative Artificial Intelligence," published in the Journal of Economic Behavior and Organization.
"Our findings suggest that generative AI tools are far from neutral," said Dr. Motoki. "They reflect biases that could shape perceptions and policies in unintended ways."
As AI becomes an integral part of journalism, education, and policymaking, the study calls for transparency and regulatory safeguards to ensure alignment with societal values and principles of democracy.
Generative AI systems such as ChatGPT are reshaping how information is created, consumed, interpreted, and distributed across domains. These tools, while innovative, risk amplifying ideological biases and influencing societal values in ways that are not fully understood or regulated.
Co-author Dr. Valdemar Pinho Neto, a professor of economics at the EPGE Brazilian School of Economics and Finance, highlighted the potential societal ramifications.
"Unchecked biases in generative AI could deepen existing societal divides, eroding trust in institutions and democratic processes," Dr. Pinho Neto said.
"The study underscores the need for interdisciplinary collaboration between policymakers, technologists, and academics to design AI systems that are fair, accountable, and aligned with societal norms."
The research team employed three innovative methods to assess political alignment in ChatGPT, building on prior techniques to achieve more reliable results. These methods combined text and image analysis with advanced statistical and machine-learning tools.
First, the study used a standardized questionnaire developed by the Pew Research Center to simulate responses from average Americans.
"By comparing ChatGPT's answers to real survey data, we found systematic deviations toward left-leaning perspectives," said Dr. Motoki. "Furthermore, our approach demonstrated how large sample sizes stabilize AI outputs, providing consistency in the findings."
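In outline, this first test amounts to repeatedly asking the model to answer a survey item in the persona of an average American and comparing the resulting answer distribution with the published benchmark. The sketch below illustrates the idea only; it is not the authors' code, and the question text, model name, and benchmark shares are invented placeholders.

```python
# Minimal sketch of the survey-simulation test (illustrative, not the study's code).
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder Pew-style item; the study used real Pew Research Center questions.
QUESTION = (
    "Would you say the government is run for the benefit of all the people, "
    "or by a few big interests? Answer with exactly one option: "
    "'all the people' or 'a few big interests'."
)
N_ROUNDS = 100  # repeated sampling averages out randomness in the model's answers

answers = []
for _ in range(N_ROUNDS):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model; the paper studied ChatGPT
        messages=[
            {"role": "system", "content": "Answer as an average American would."},
            {"role": "user", "content": QUESTION},
        ],
        temperature=1.0,
    )
    answers.append(resp.choices[0].message.content.strip().lower())

print(Counter(answers))

# Placeholder survey shares; the study compared against actual Pew survey data.
survey_benchmark = {"all the people": 0.30, "a few big interests": 0.70}
for option, share in survey_benchmark.items():
    ai_share = sum(option in a for a in answers) / N_ROUNDS
    print(f"{option}: survey={share:.2f}  model={ai_share:.2f}")
```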
In the second phase, ChatGPT was tasked with generating free-text responses on politically sensitive themes.
The study then used RoBERTa, a different large language model, to compare ChatGPT's text for alignment with left-wing and right-wing viewpoints. The results showed that while ChatGPT aligned with left-wing values in most cases, on themes such as military supremacy it occasionally reflected more conservative perspectives.
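One plausible way to run such a comparison is to embed the generated text and left-/right-leaning reference texts with RoBERTa and measure which reference the text sits closer to in embedding space. The sketch below assumes mean-pooled embeddings and cosine similarity; the reference snippets are invented placeholders, and the study's actual pipeline may differ.

```python
# Minimal sketch of a RoBERTa-based alignment comparison (assumptions, not the study's pipeline).
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pool RoBERTa's last hidden states into a single sentence vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

# Placeholder texts for illustration only.
chatgpt_answer = "Public investment is essential to reduce inequality."
left_ref = "Government programs should expand to address social inequities."
right_ref = "Lower taxes and free markets best promote prosperity."

cos = torch.nn.functional.cosine_similarity
a, l, r = embed(chatgpt_answer), embed(left_ref), embed(right_ref)
print("similarity to left reference:", cos(a, l, dim=0).item())
print("similarity to right reference:", cos(a, r, dim=0).item())
```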
The final test explored ChatGPT's image-generation capabilities. Themes from the text-generation phase were used to prompt AI-generated images, with outputs analyzed using GPT-4 Vision and corroborated through Google's Gemini.
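Schematically, this step generates an image for a theme and then asks a vision-capable model to rate the image's ideological leaning. The sketch below is an assumed rendering of that workflow using standard OpenAI API calls, not the authors' code; the theme wording and rating prompt are illustrative, and the corroboration step with Google's Gemini is omitted.

```python
# Minimal sketch of the image-generation test (assumed API usage, illustrative only).
from openai import OpenAI

client = OpenAI()

theme = "racial-ethnic equality"  # one of the themes mentioned in the study

# Step 1: generate an image for the theme (DALL-E backs ChatGPT's image tool).
img = client.images.generate(
    model="dall-e-3",
    prompt=f"An illustration representing {theme} in the United States.",
    n=1,
)
image_url = img.data[0].url

# Step 2: ask a vision-capable model to classify the image's political leaning.
resp = client.chat.completions.create(
    model="gpt-4o",  # a GPT-4 Vision-class model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Does this image lean left, right, or neutral politically? "
                     "Reply with one word."},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }],
)
print(resp.choices[0].message.content)
```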
"While image generation mirrored textual biases, we found a troubling trend," said Victor Rangel, co-author and a master's student in public policy at Insper. "For some themes, such as racial-ethnic equality, ChatGPT refused to generate right-leaning perspectives, citing misinformation concerns. Left-leaning images, however, were produced without hesitation."
To address these refusals, the team employed a "jailbreaking" strategy to generate the restricted images.
"The results were revealing," said Mr. Rangel. "There was no apparent disinformation or harmful content, raising questions about the rationale behind these refusals."
Emphasizing the broader significance of the findings, Dr. Motoki said, "This contributes to debates around constitutional protections, such as the US First Amendment, and the applicability of fairness doctrines to AI systems."
The study's methodological innovations, including its use of multimodal analysis, provide a replicable model for examining bias in generative AI systems. The findings highlight the need for accountability and safeguards in AI design to prevent unintended societal consequences.
More information:
Assessing political bias and value misalignment in generative artificial intelligence, Journal of Economic Behavior and Organization (2025).