After months of anticipation, Microsoft-backed startup OpenAI has finally unveiled GPT-4, a groundbreaking new artificial intelligence model.
This latest iteration of the technology behind the wildly popular ChatGPT is more powerful than ever and boasts a new “multimodal” capability: GPT-4 can generate content from both text and image prompts.
According to OpenAI, the advancements of the new model are as follows:
- Improved accuracy: GPT-4 is far more accurate than GPT-3.5, according to both developers and beta users, and displays human-level performance on a range of exams. On a simulated bar exam, GPT-4 scored in the top 10 percent of test takers, while its predecessor GPT-3.5 scored in the bottom 10 percent.
- Enhanced capabilities: As a multimodal model, GPT-4 now accepts both text and image inputs, so a single prompt can combine written instructions with a picture (see the code sketch after this list).
- Limitations: GPT-4 still doesn’t learn from experience, and it still occasionally ‘hallucinates’ by presenting made-up facts. Although reliability has improved, OpenAI warns users to check its outputs for inaccuracies.
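Since the list above mentions multimodal prompting, here is a minimal sketch of what a combined text-and-image request might look like with OpenAI's Python SDK. Image input was not exposed through the public API at launch, so the model id and image-input availability below are assumptions for illustration only.

```python
# Minimal sketch of a text + image prompt via OpenAI's Python SDK.
# Assumptions: an image-capable model id ("gpt-4-vision-preview" is a
# placeholder) and API access; image input was not public at launch.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # placeholder model id
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this photo."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```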
How much better is this new model?
Curious about the distinction between GPT-4 and GPT-3.5? Let’s take a closer look at the differences between these two large language models.
The newer GPT-4 model can do more than just text: it also accepts images as input and can recognize the objects in them, and it can generate longer responses of up to about 25,000 words.
GPT-3.5, on the other hand, is limited to text-based prompts and responses of roughly 3,000 words.
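As a rough illustration of those length limits, the sketch below uses OpenAI's tiktoken tokenizer to check whether text fits a model's context window. The token limits are assumptions: the article quotes capacities in words, and roughly 25,000 words corresponds to GPT-4's 32k-token context variant.

```python
# Rough check of whether text fits a model's context window, using
# OpenAI's tiktoken tokenizer. Limits below are illustrative assumptions.
import tiktoken

GPT4_CONTEXT_TOKENS = 32_768   # assumed 32k GPT-4 variant
GPT35_CONTEXT_TOKENS = 4_096   # assumed GPT-3.5 limit

def fits_context(text: str, limit: int, model: str = "gpt-4") -> bool:
    """Return True if `text` encodes to fewer tokens than `limit`."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text)) < limit

prompt = "Summarize the key holdings of the following contract..."
print(fits_context(prompt, GPT4_CONTEXT_TOKENS))
print(fits_context(prompt, GPT35_CONTEXT_TOKENS))
```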
[Image: demonstration of GPT-4's advanced reasoning capabilities. Credit: OpenAI]
[Image: bar exam results, with ChatGPT in the bottom 10% and GPT-4 in the top 10%. Credit: OpenAI]
OpenAI’s GPT-4 technology is gaining traction with developers, writers, and other creative professionals. But what can this revolutionary technology actually do? Here are some examples of its capabilities.
Create a website from a sketch in a few seconds.
GPT4 is capable of turning a picture of a napkin sketch to a fully functioning html/css/javascript website. pic.twitter.com/q6FLZL6oFO
— Lior⚡ (@AlphaSignalAI) March 14, 2023
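The tweet doesn't share the exact prompt, but the workflow could plausibly look like the sketch below: encode a photo of the drawing, ask GPT-4 for a single-file page, and save the reply. The model id and image-input support are assumptions, as above.

```python
# Hypothetical napkin-sketch-to-website workflow. Everything here is an
# assumed reconstruction, not the tweet author's actual code.
import base64
from openai import OpenAI

client = OpenAI()

# Encode the sketch photo as a base64 data URL.
with open("napkin_sketch.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # placeholder model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Turn this sketch into a working single-file "
                     "HTML/CSS/JavaScript page. Return only the code."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
    max_tokens=2000,
)

# Save the generated page to disk.
with open("index.html", "w") as f:
    f.write(response.choices[0].message.content)
```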
Make a meal from a photo of your fridge.
The New York Times demonstrated GPT-4’s image recognition by showing it a photo of refrigerator contents, including hummus, yogurt, strawberries, and carrots, and asking it to suggest meals from that combined text and image prompt.
[Image: GPT-4 suggesting meals based on the fridge photo. Credit: The New York Times]
GPT-Matchmaker
Jake Kozloski, CEO of the dating app Keeper, revealed how he uses GPT-4 to match profiles by comparing users’ preferences and qualities.
How Keeper is using GPT-4 for matchmaking.

It takes profile data & preferences, determines if the match is worth pursuing & automates the followup.

With computer vision for the physical, you can filter on anything and find your ideal partner. pic.twitter.com/fdHj1LgUHo

— Jake Kozloski (@jakozloski) March 14, 2023
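Keeper hasn't published its prompts or data model, so the following is purely an illustrative sketch of the idea: give GPT-4 two profiles and ask for a structured verdict on whether the match is worth pursuing. All field names and prompt wording are assumptions.

```python
# Illustrative matchmaking sketch: compare two (hypothetical) profiles
# with GPT-4. Not Keeper's actual prompts, fields, or scoring.
import json
from openai import OpenAI

client = OpenAI()

profile_a = {"age": 29, "city": "Austin", "wants": "long-term, outdoorsy"}
profile_b = {"age": 31, "city": "Austin", "wants": "long-term, loves hiking"}

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a matchmaking assistant. Given two dating "
                    "profiles, reply with JSON of the form "
                    '{"worth_pursuing": true/false, "reason": "..."}.'},
        {"role": "user",
         "content": f"Profile A: {json.dumps(profile_a)}\n"
                    f"Profile B: {json.dumps(profile_b)}"},
    ],
)
print(response.choices[0].message.content)
```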
GPT-4 is still closed to the public
GPT-4 is currently in closed beta. If you’re not part of the closed beta yet, you can join the waitlist on OpenAI’s website.
Access is currently limited to developers and businesses building on the API, but OpenAI has announced that GPT-4 is coming to ChatGPT soon.