Malicious Prompt Engineering Technique

From GM-RKB
Jump to navigation Jump to search

A Malicious Prompt Engineering Technique is a hostile adversarial prompt engineering technique that crafts input prompts to cause harmful behaviors or security breaches in AI language models.