AI image generators can be made more culturally sensitive and accurate by feeding them just a small number of photographs provided by people living in countries around the world.
The images used to train these artificial intelligence systems “are mostly about the Western world”, says Jean Oh at Carnegie Mellon University in Pennsylvania. As a result of this kind of limited training, generative AI image creators, such as Stable Diffusion, often misrepresent or stereotype non-Western cultures.
…