OpenAI Unveils GPT-4o mini: Powerful AI at an Affordable Price


Summary

Introducing Cost-Effective GPT-4o mini, Accessible to Everyone!

A cost-effective AI model that excels in text and multimodal reasoning!

On July 18, 2024, OpenAI announced GPT-4o mini, its most cost-efficient small model. This release significantly expands the potential applications of AI by making intelligent technology remarkably affordable. GPT-4o mini delivers impressive performance, scoring 82% on the MMLU benchmark and outperforming previous models in chat preference tasks. Priced at 15 cents per million input tokens and 60 cents per million output tokens, it is significantly cheaper than earlier models and more than 60% cheaper than GPT-3.5 Turbo.

Video Credits: TechCrunch
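
To put the pricing in perspective, here is a minimal cost-estimation sketch based on the per-token rates listed above; the helper function and example token counts are illustrative and not part of any OpenAI tooling.

```python
# Illustrative helper to estimate GPT-4o mini request costs from the
# published per-token prices (USD per 1M tokens) quoted above.
INPUT_PRICE_PER_M = 0.15   # $0.15 per 1M input tokens
OUTPUT_PRICE_PER_M = 0.60  # $0.60 per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in US dollars."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a request with 2,000 input tokens and 500 output tokens
print(f"${estimate_cost(2_000, 500):.6f}")  # -> $0.000600
```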

Unleash AI Applications with Great Performance!

GPT-4o mini surpasses previous models in benchmarks like MMLU, MGSM, and HumanEval!

Currently, GPT-4o mini supports text and vision through the API, with support for text, image, video, and audio inputs and outputs planned. The model has a 128K-token context window, supports up to 16K output tokens per request, and has a knowledge cutoff of October 2023. Additionally, thanks to the improved tokenizer it shares with GPT-4o, handling non-English text is even more cost-effective.
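
As a rough illustration of the text-plus-vision support described above, here is a minimal sketch of a request to GPT-4o mini using the OpenAI Python SDK; the image URL is a placeholder and the exact parameter names may vary between SDK versions.

```python
# Minimal sketch: a text + image request to GPT-4o mini via the
# OpenAI Python SDK (pip install openai). The image URL is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    max_tokens=1000,  # up to 16K output tokens are supported per request
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample.png"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```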

Across multiple academic benchmarks, GPT-4o mini outperforms previous small models in both textual intelligence and multimodal reasoning. It supports the same range of languages as GPT-4o and shows strong performance in function calling, allowing developers to build applications that interact with external systems (a sketch follows the benchmark breakdown below). Notably, GPT-4o mini also surpasses models like GPT-3.5 Turbo in long-context performance.

Here’s a breakdown of GPT-4o mini’s performance on key benchmarks:

  • Reasoning Tasks: On the MMLU benchmark, GPT-4o mini excels with an 82.0% score compared to Gemini Flash (77.9%) and Claude Haiku (73.8%).
  • Math and Coding Proficiency: Compared to previous models, GPT-4o mini shines in mathematical reasoning and coding tasks. It scored 87.0% on MGSM (math reasoning) and 87.2% on HumanEval (coding proficiency), exceeding the scores of Gemini Flash and Claude Haiku.
  • Multimodal Reasoning: GPT-4o mini exhibits strong performance (59.4%) on the MMMU multimodal reasoning benchmark, outperforming Gemini Flash (56.1%) and Claude Haiku (50.2%).
OpenAI GPT-4o mini model evaluation benchmarks (Image Credits: OpenAI)
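
As mentioned above, GPT-4o mini handles function calling well. Below is a hedged sketch of how a tool definition and call might look with the OpenAI Python SDK; the get_weather function and its schema are purely illustrative and not part of the model or the API itself.

```python
# Sketch of function calling with GPT-4o mini via the OpenAI Python SDK.
# The get_weather tool is a hypothetical external function.
import json
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative external function
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the tool, it returns the function name and
# JSON arguments so the application can invoke the real external system.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
```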

What About Safety and the Future of GPT-4o mini?

From pre-training to post-training, OpenAI implemented safety measures to filter out harmful content and align the model’s behavior with its policies. GPT-4o mini incorporates the same safety mitigations as GPT-4o, assessed by more than 70 external experts in fields such as social psychology and misinformation. OpenAI’s instruction hierarchy method improves the model’s resistance to jailbreaks, prompt injections, and system prompt extractions, making it safer for large-scale applications.

The cost per token of GPT-4o mini has decreased by 99% since the introduction of text-davinci-003 in 2022. As it continues to drive down costs and enhance model capabilities, OpenAI envisions a future where AI is seamlessly integrated into every app and website. GPT-4o mini paves the way for developers to build and scale powerful AI applications more efficiently and affordably, making AI more accessible, reliable, and embedded in our daily digital experiences.


Check out more about Artificial Intelligence (AI) around the globe!

Source: GPT-4o mini: advancing cost-efficient intelligence | OpenAI


