What an interesting problem that I had never considered!
**Farid published a study on “nepotistic” AI, in which generative AI tools are trained on other AI-generated outputs, producing highly distorted images: “Once poisoned, the models struggle to fully heal even after retraining on only real images,” his study found. “The slightly less kind word is inbreeding...”**
Thanks for the great and useful synopsis of all three topics.