>>20039575
you can't jailbreak a neural network. your jailbreaking method is temporary, and if it seems to work it's only because said AI is hallucinating / unstable.
also, any AI is trained on one-sided data only, so it doesn't know the other side's opinions.
an example: picrel, a newly released LLM from Microsoft called "WizardLM-2". it only knows that one side, and it's very hard to "outprompt" it; just not worth the time wasted.