Apple’s Image Playground app is said to have some bias issues. A machine learning scientist recently shared several outputs generated using the artificial intelligence (AI) app and claimed that it produced the wrong skin tone and hair texture on several occasions. These inaccuracies were also reportedly paired with specific racial stereotypes, compounding the problem. It is difficult to say whether the alleged issue is a one-off incident or a broader problem. Notably, the Cupertino-based tech giant first introduced the app as part of the Apple Intelligence suite with the iOS 18.2 update.
Apple’s Image Playground app might have bias issues
Jochem Gietema, Machine Learning Science Lead at Onfido, shared a blog post highlighting his experience with Apple’s Image Playground app. In the post, he shared several sets of outputs generated using the Image Playground app and highlighted instances of racial bias by the large language models that power it. Notably, Gadgets 360 staff members did not notice any such bias while testing the app.
“While using it, I noticed that the app changed my skin tone and hair based on the prompt. Professions such as investment banker vs. farmer produce images with very different skin tones. The same goes for skiing vs. basketball, streetwear vs. suit, and, most problematically, rich vs. poor,” Gietema said in a LinkedIn post.
Alleged biased outputs generated using the Image Playground app
Photo Credit: Jochem Gietema
Such inaccuracies and biases are not uncommon with LLMs, which are trained on large datasets that may contain similar stereotypes. Last year, Google’s Gemini AI model faced backlash for similar biases. However, companies are not entirely helpless in preventing such generations, and they often apply various layers of safety to prevent them.
Apple’s Image Playground app also comes with certain restrictions to prevent issues associated with AI-generated images. For instance, the Apple Intelligence app only supports cartoon and illustration styles to avoid instances of deepfakes. Additionally, the generated images come with a narrow field of view that typically captures the face along with only a small amount of additional detail. This is also done to limit any such instances of bias and inaccuracy.
The tech giant also does not allow prompts containing negative words, the names of celebrities or public figures, and more, to further limit users from misusing the tool for unintended use cases. However, if the allegations are true, the iPhone maker will need to add additional layers of safety to ensure that users do not feel discriminated against when using the app.