OpenAI announces GPT-4 Turbo, assistants, and new API features



Summary

At its developer conference, OpenAI announced GPT-4 Turbo, a cheaper, faster and smarter GPT-4 model. Developers get plenty of new API features at a much lower cost.

The new GPT-4 Turbo model is now available as a preview via the OpenAI API and directly in ChatGPT. According to OpenAI CEO Sam Altman, GPT-4 Turbo is “much faster” and “smarter”.
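For developers, calling the Turbo preview looks like a regular chat completion request. The following is a minimal sketch using the OpenAI Python SDK; the model identifier "gpt-4-1106-preview" (the preview name used at launch) and an API key read from the environment are illustrative assumptions, not details quoted from the announcement.

```python
# Minimal sketch: a chat completion request against the GPT-4 Turbo preview.
# Assumes the openai Python SDK (v1.x) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # preview identifier used at launch (assumption)
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the GPT-4 Turbo announcement in one sentence."},
    ],
)
print(response.choices[0].message.content)
```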

The release of Turbo also explains the rumors about an updated ChatGPT knowledge cutoff: GPT-4 Turbo's training data runs up to April 2023, while the original ChatGPT only had knowledge up to September 2021.

Probably the biggest highlight for developers is the significant price reduction that comes with GPT-4 Turbo: input tokens (text processing) for Turbo are three times cheaper, and output tokens (text generation) are two times cheaper.

The new Turbo model costs $0.01 per 1,000 input tokens (versus $0.03 for GPT-4) and $0.03 per 1,000 output tokens (versus $0.06 for GPT-4). It is also much cheaper than GPT-4 32K, even though its context window is four times larger (see below).
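A quick back-of-the-envelope calculation shows what this means in practice. The request size below (10,000 input tokens, 1,000 output tokens) is an illustrative assumption; the prices are the per-1,000-token figures above.

```python
# Cost comparison for a single request, using the per-1,000-token prices above.
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Cost in USD; prices are given per 1,000 tokens."""
    return (input_tokens / 1000) * input_price + (output_tokens / 1000) * output_price

# Hypothetical request: 10,000 input tokens, 1,000 output tokens.
turbo = request_cost(10_000, 1_000, input_price=0.01, output_price=0.03)  # $0.13
gpt4 = request_cost(10_000, 1_000, input_price=0.03, output_price=0.06)   # $0.36
print(f"GPT-4 Turbo: ${turbo:.2f} vs GPT-4: ${gpt4:.2f}")
```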

Image: OpenAI

Another highlight for developers: OpenAI is extending the GPT-4 Turbo API to include image processing, DALL-E 3 integration, and text-to-speech. The "gpt-4-vision-preview" model can analyze images, DALL-E 3 handles image generation, and a new text-to-speech API creates human-like speech from text.
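A rough sketch of what these three endpoints look like with the OpenAI Python SDK is shown below. The image URL, prompt, voice, and output file name are placeholders; the "dall-e-3" and "tts-1" model identifiers are the names OpenAI uses for the new image and speech models, stated here as assumptions rather than quoted from the announcement.

```python
from openai import OpenAI

client = OpenAI()

# 1) Image analysis with the vision-enabled preview model.
vision = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is in this picture."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(vision.choices[0].message.content)

# 2) Image generation via DALL-E 3.
image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a lighthouse at dawn",
    size="1024x1024",
)
print(image.data[0].url)

# 3) Text-to-speech with the new TTS endpoint.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="GPT-4 Turbo is now available in preview.",
)
speech.stream_to_file("announcement.mp3")
```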


OpenAI is also working on an experimental GPT-4 fine-tuning program and a custom models program for organizations with large proprietary datasets.

GPT-4 Turbo has a much larger context window

Probably the most important technical change is an increase in the so-called context window, i.e. the number of tokens that GPT-4 Turbo can process at once and take into account when generating output. Previously, the context window maxed out at 32,000 tokens; GPT-4 Turbo handles 128,000 tokens, the equivalent of up to around 100,000 words.
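The word estimate follows from the common rule of thumb of roughly 0.75 English words per token; the short sketch below just makes that arithmetic explicit. The ratio itself is an approximation, not an official OpenAI figure.

```python
# Rough token-to-word conversion using the ~0.75 words-per-token rule of thumb.
WORDS_PER_TOKEN = 0.75  # approximation for English text

for context_tokens in (32_000, 128_000):
    words = int(context_tokens * WORDS_PER_TOKEN)
    print(f"{context_tokens:,} tokens ≈ {words:,} words")
# 32,000 tokens ≈ 24,000 words
# 128,000 tokens ≈ 96,000 words
```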

OpenAI also confirms the GPT-4 "All Tools" model, which is available now and had been spotted in the wild before the conference. It automatically switches between tools such as Advanced Data Analysis for program code or DALL-E 3 for image generation, depending on the user's request. Previously, users had to manually select the appropriate mode before entering data.
