Adobe’s AI image creation tool Firefly fell into the same pitfalls as Google’s Gemini AI by creating a woke revision of history, raising concerns about the limitations and biases inherent in generative AI systems.
As Semafor reports, while tech giants race to develop cutting-edge generative AI tools, there are growing concerns about the potential for these systems to perpetuate harmful bias and spread misinformation. Adobe’s recently launched AI image creation tool Firefly has been embroiled in a controversy reminiscent of Google’s Gemini AI, highlighting the challenges businesses face in controlling these powerful but imperfect technologies.
Like Gemini before it, Firefly has attracted criticism for producing historically inaccurate and racially insensitive images. When prompted to create scenes depicting America’s Founding Fathers and the Constitutional Convention, the AI tool gave Black men and women roles they did not historically occupy. Similarly, it produced images of Black soldiers fighting for Nazi Germany during World War II and depicted Black Vikings, repeating the same mistakes that led to Gemini’s downfall.
Source: Semafor/Reed Albergotti/Screenshot
The root cause of these problems lies in the underlying technology used in generative AI systems. Although companies such as Adobe and Google have implemented various guardrails and filtering mechanisms to prevent the generation of harmful or offensive content, the models’ training data and algorithms can still produce biased or ahistorical output.
Known for its traditionally structured approach, Adobe has taken steps to train its algorithms on licensed stock images and public domain content with the aim of avoiding copyright infringement issues. But that has not prevented Firefly from falling into the same trap as Gemini, which illustrates the inherent limitations of current AI technology.
Critics have accused these AI tools of rewriting history to fit today’s politics, and some on the right have accused the companies of injecting a woke agenda into their systems. Others argue, however, that the problem stems not from political bias but from technical flaws inherent in the architecture of large language models.
Read more at Semafor here.
Lucas Nolan is a reporter for Breitbart News covering free speech and online censorship issues.