The researchers are working with a method identified as adversarial education to stop ChatGPT from letting buyers trick it into behaving poorly (known as jailbreaking). This get the job done pits various chatbots against one another: a single chatbot plays the adversary and attacks another chatbot by making textual content to pressure it to buck it