>>62165791
>is this a 14-page paper about submitting images of a cat tagged as "portrait of a dog" to datasets?
Nah. If you read further into the paper, the poisoning works by perturbing the image so that, once the model is trained on it, it activates the nodes for a secondary, unrelated concept.
The idea is that while the image looks unchanged to a human, the model learns to associate it with another concept entirely, even with correct tagging. So in their examples, when poisoning the concept of dog, they push it toward the concept of cat without changing the trigger words.
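The core trick can be sketched as an optimization: perturb the "dog" image, within a small budget, so its features land near a "cat" image's features. This is a toy illustration only, using numpy and a made-up linear encoder in place of the diffusion model's real feature extractor; the variable names and the projected-gradient loop are my assumptions, not the paper's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen image encoder (the real attack
# would use the target model's own feature extractor).
W = rng.normal(size=(8, 16))
def encode(x):
    return W @ x

dog_image = rng.normal(size=16)    # image that keeps its "dog" tag
cat_anchor = rng.normal(size=16)   # image of the unrelated target concept
target_feat = encode(cat_anchor)

delta = np.zeros(16)               # the poison perturbation
lr, eps = 0.005, 0.5               # step size, per-pixel perturbation budget
for _ in range(1000):
    feat = encode(dog_image + delta)
    grad = 2 * W.T @ (feat - target_feat)      # gradient of squared feature distance
    delta = np.clip(delta - lr * grad, -eps, eps)  # keep the change small

before = np.linalg.norm(encode(dog_image) - target_feat)
after = np.linalg.norm(encode(dog_image + delta) - target_feat)
print(after < before)  # poisoned "dog" now sits closer to "cat" in feature space
```

The image still gets tagged "dog", but in the model's feature space it now resembles "cat", which is why correct tagging doesn't defend against it.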