DALL-E 3’s system prompt reveals OpenAI’s rules for AI image generation



Summary

OpenAI has equipped DALL-E 3 with a complex set of rules to prevent the generation of discriminatory or potentially illegal images.

The so-called “system prompt” tells a pre-trained AI model how to behave in a conversation. For example, a chat AI can be given a specific role or tone of voice. Or the AI can be instructed not to answer certain questions or to answer them in a certain way.
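To make the concept concrete, here is a minimal sketch of how a system prompt sits in front of a conversation, using the OpenAI-style message format. The prompt text and the `system_rules` helper are illustrative assumptions, not OpenAI's actual system prompt or API.

```python
# Sketch of how a system prompt shapes a chat request, using the
# OpenAI-style message format. The prompt text here is illustrative,
# not OpenAI's actual DALL-E 3 system prompt.
messages = [
    # The system message sets behavior before the conversation begins.
    {"role": "system",
     "content": "You are a concise assistant. Refuse to give medical advice."},
    # User turns follow; the model answers within the system's constraints.
    {"role": "user",
     "content": "Summarize this article in two sentences."},
]

def system_rules(msgs):
    """Return the behavioral instructions baked into a conversation."""
    return [m["content"] for m in msgs if m["role"] == "system"]

print(system_rules(messages))
```

The model never shows the system message to the user, but every answer it gives is generated with those instructions in context.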

OpenAI uses a similar approach with its AI chat services, although the system prompt for DALL-E 3 in ChatGPT is particularly complex. This is where OpenAI sets all the rules to make the image AI as safe, fair, and copyright-compliant as possible. According to the system prompt, DALL-E 3 currently generates images in the resolutions “1792×1024”, “1024×1024”, and “1024×1792”.
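The three resolutions above could be enforced with a simple validation step before a request is sent. This is a hedged sketch: the `build_image_request` helper and its error handling are assumptions for illustration, not part of OpenAI's API, though the `model`, `prompt`, and `size` fields mirror the real Images API.

```python
# Illustrative sketch: validating a requested DALL-E 3 image size against
# the three resolutions named in the system prompt. The helper and its
# error handling are assumptions, not OpenAI code.
ALLOWED_SIZES = {"1792x1024", "1024x1024", "1024x1792"}

def build_image_request(prompt: str, size: str = "1024x1024") -> dict:
    """Assemble a request payload, rejecting unsupported resolutions."""
    if size not in ALLOWED_SIZES:
        raise ValueError(f"size must be one of {sorted(ALLOWED_SIZES)}")
    return {"model": "dall-e-3", "prompt": prompt, "size": size}

print(build_image_request("a watercolor lighthouse", "1792x1024"))
```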

DALL-E 3 speaks English

As the system prompt below shows, DALL-E 3 in ChatGPT first translates all non-English input into English. This matters because inaccuracies can creep in even when single words are translated. So if DALL-E 3's output doesn't match your prompt as closely as you'd like, it may make sense to prompt in English directly.


Other rules prohibit DALL-E 3 from generating more than four images at once, for example, or images of politicians or other famous people. Instead, the AI model is supposed to suggest alternative images.
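The four-image cap described above can be sketched as a simple guard. The function name and the constant are illustrative assumptions based on the rule the article describes, not OpenAI's implementation.

```python
# Illustrative sketch of the "at most four images per request" rule
# described in the article; the helper and constant are assumptions,
# not OpenAI code.
MAX_IMAGES = 4

def clamp_image_count(requested: int) -> int:
    """Clamp a requested number of images to the allowed maximum."""
    if requested < 1:
        raise ValueError("must request at least one image")
    return min(requested, MAX_IMAGES)

print(clamp_image_count(7))
```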

DALL-E 3 behaves similarly when prompts mention artist names: the names of artists active within the last 100 years are replaced with adjectives that describe the artist's style, while artists who worked more than 100 years ago may be used directly as style references.

OpenAI has also defined rules for DALL-E 3's depiction of people. Especially in areas with traditional biases, such as professions, the image AI is supposed to ensure that "gender and race are specified and in an unbiased way."

One of many safeguards

The system prompt can easily be revealed by asking DALL-E 3 for it. The model outputs the same text every time, even across different accounts, which suggests that the prompt is genuine and not a hallucination. However, OpenAI has not officially confirmed it.

Another interesting detail is the formatting: OpenAI uses Markdown to mark up the different parts of the prompt. This may be purely cosmetic, but it might also help the model follow the instructions. In any case, it doesn't seem to hurt the model, and it makes the prompt easier for humans to read.

Recommendation

The system prompts of OpenAI's various ChatGPT services are available on GitHub.
