OpenAI Releases GPT-4! The Latest Milestone in Scaling Up Deep Learning

OpenAI has released GPT-4, a promising new image- and text-understanding AI model. The company describes the release as the “latest milestone in its effort to scale up deep learning.” GPT-4 is now available to OpenAI’s paying users through ChatGPT Plus (with a usage cap), and developers can join a waitlist to gain access to the API.

API pricing is $0.03 per 1,000 “prompt” tokens (roughly 750 words) and $0.06 per 1,000 “completion” tokens (also roughly 750 words).

OpenAI offers multiple models, each with different capabilities and price points; prices are per 1,000 tokens. You can think of tokens as pieces of words, where 1,000 tokens comes to about 750 words.
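If you’re curious how text maps to tokens, here’s a minimal sketch that counts tokens with tiktoken (OpenAI’s open-source tokenizer) and estimates cost at the rates above; the sample prompt and the assumed completion length are made up for illustration:

```python
# Minimal sketch: count tokens with tiktoken and estimate GPT-4 API
# cost at the rates quoted above. The prompt and the assumed
# completion length are illustrative, not real usage numbers.
# pip install tiktoken
import tiktoken

PROMPT_RATE = 0.03 / 1000      # $ per prompt token
COMPLETION_RATE = 0.06 / 1000  # $ per completion token

# encoding_for_model selects the tokenizer GPT-4 uses
enc = tiktoken.encoding_for_model("gpt-4")

prompt = "What can I make with flour, eggs, milk, and butter?"
prompt_tokens = len(enc.encode(prompt))
completion_tokens = 120  # assumed length of the model's reply

cost = prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE
print(f"{prompt_tokens} prompt tokens; estimated cost ${cost:.4f}")
```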

GPT-4 on Bing Chat! 

Microsoft also confirmed today that Bing Chat, co-developed with OpenAI and released earlier this year on February 7, 2023, runs on GPT-4. When you search the web, Bing Chat is like having a research assistant, personal planner, and creative collaborator at your side.
Other early adopters include Stripe, Duolingo, Morgan Stanley, and Khan Academy.

Also: read our ChatGPT Guide!

New Features and Improvements in GPT-4

GPT-4 performs at “human level” on a variety of professional and academic benchmarks, can generate text, and accepts both text and image inputs, an upgrade over GPT-3.5, which accepts text only. For instance, GPT-4 passes a simulated bar exam with a score in the top 10% of test takers, while GPT-3.5 scored in the bottom 10%.

OpenAI spent six months “iteratively aligning” GPT-4, drawing on lessons from an internal adversarial testing program as well as from ChatGPT, resulting in its “best-ever results” on factuality, steerability, and refusing to go outside of guardrails. Like prior GPT models, GPT-4 was trained on publicly available data, including data from public webpages, as well as data licensed by OpenAI.

GPT-4 can accept images as inputs and generate captions, classifications, and more!

One of GPT-4’s most intriguing features is its ability to understand both images and text. GPT-4 can caption, and even interpret, relatively complex images. For example, consider the image below as input.

Input image (demonstrating GPT-4’s image-understanding ability)

If you ask, “What can I make with these ingredients?”, GPT-4 responds with a clear answer like the one below.

Output
There are many options for what you can make with these ingredients! Some possibilities include:

  • Pancakes or waffles
  • Crepes
  • French toast
  • Omelette or frittata
  • Quiche
  • Custard or pudding
  • Cake or cupcakes
  • Muffins or bread
  • Cookies or biscuits

These are just a few examples, but the possibilities are endless!

Be My Eyes powered by GPT-4

The image-understanding capability isn’t currently available to all OpenAI customers; for the time being, OpenAI is testing it with a single partner, Be My Eyes. Be My Eyes’ new Virtual Volunteer feature, powered by GPT-4, can answer questions about photographs sent to it.

Be My Eyes’ Virtual Volunteer is powered by OpenAI’s new GPT-4 language model, which includes a dynamic new image-to-text capability. Users can send photographs via the app to the AI-powered Virtual Volunteer, which can answer questions about the image and provide instant visual assistance for a wide range of tasks. Read more about Be My Eyes here!

What’s Steerability? 

Steerability tooling may be one of GPT-4’s more significant improvements. With GPT-4, OpenAI introduces a new API capability: “system” messages, which allow developers to prescribe style and task by giving explicit directions. System messages, which will eventually come to ChatGPT as well, are essentially instructions that set the tone and parameters for the AI’s subsequent interactions.
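As a rough sketch of what that looks like in OpenAI’s Python library (using its interface at the time of GPT-4’s launch; the tutor persona and question are invented for illustration):

```python
# Minimal sketch of a GPT-4 call with a "system" message, using
# OpenAI's Python library (pre-1.0 interface). The persona and
# question are made-up examples; set your own API key before running.
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your real key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The system message fixes tone and task up front...
        {
            "role": "system",
            "content": "You are a patient Socratic tutor. Guide the "
                       "student with questions; never state the answer "
                       "outright.",
        },
        # ...and user turns are interpreted within those parameters.
        {"role": "user", "content": "How do I solve 2x + 3 = 7?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```

Because the system message sits at the top of the conversation, every later turn is interpreted in light of it, which is what makes the model’s behavior “steerable.”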

“GPT-4 generally lacks knowledge of events that have occurred after the vast majority of its data cuts off (September 2021), and does not learn from its experience. 
It can sometimes make simple reasoning errors which do not seem to comport with competence across so many domains, or be overly gullible in accepting obvious false statements from a user. And sometimes it can fail at hard problems the same way humans do, such as introducing security vulnerabilities into code it produces.”

OpenAI

OpenAI does acknowledge that it has made progress in some areas. For instance, GPT-4 is now less likely to respond to requests for instructions on how to synthesize hazardous substances. According to the company, GPT-4 is 29% more likely to respond to sensitive requests, such as those for medical advice or information about self-harm, in accordance with OpenAI’s policies, and is 82% less likely overall to respond to requests for “disallowed” content than GPT-3.5.

GPT-4 and GPT-3.5 compared
Image credit: OpenAI

“We look forward to GPT-4 becoming a valuable tool in improving people’s lives by powering many applications. 
There’s still a lot of work to do, and we look forward to improving this model through the collective efforts of the community building on top of, exploring, and contributing to the model.”

OpenAI

What’s ChatGPT Plus?

ChatGPT Plus is a paid subscription to ChatGPT, available at $20 per month. Its benefits include access to ChatGPT even during periods of high demand, faster response speeds, and priority access to new features.

There is no doubt that GPT-4 is an impressive upgrade over GPT-3.5, with new features and more. We look forward to seeing how it helps the world by improving people’s lives!

Thank you for reading! Let us know your thoughts on GPT-4 through Artzone.ai’s social media pages!

Follow us on Instagram, Facebook, Twitter!