AI Model Jailbreaking Technique


An AI Model Jailbreaking Technique is a malicious prompt engineering technique that exploits vulnerabilities in AI safety mechanisms to circumvent them and elicit prohibited behaviors from artificial intelligence models.