OpenAI introduces GPT-4o model, check out latest features
Photo Courtesy: OpenAI website

| @indiablooms | 14 May 2024, 09:03 am

Tech giant OpenAI announced a major update on Monday, introducing its latest model, GPT-4o.

The announcement was made during the OpenAI Spring Update event, hosted by the company's CTO, Mira Murati.

Company chief Sam Altman said in a statement: "First, a key part of our mission is to put very capable AI tools in the hands of people for free (or at a great price). I am very proud that we’ve made the best model in the world available for free in ChatGPT, without ads or anything like that."

"Second, the new voice (and video) mode is the best computer interface I’ve ever used. It feels like AI from the movies; and it’s still a bit surprising to me that it’s real. Getting to human-level response times and expressiveness turns out to be a big change," he said.

"The original ChatGPT showed a hint of what was possible with language interfaces; this new thing feels viscerally different. It is fast, smart, fun, natural, and helpful," he said.

The company said developers can now also access GPT-4o in the API as a text and vision model. GPT-4o is 2x faster, half the price, and has 5x higher rate limits compared to GPT-4 Turbo.
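As a sketch of what a combined text-and-vision request to GPT-4o might look like, the snippet below assembles a Chat Completions-style payload. The function name and image URL are illustrative placeholders, not part of OpenAI's announcement; an actual call would additionally require an authenticated SDK client.

```python
# Minimal sketch of a GPT-4o text-and-vision request payload, assuming the
# Chat Completions message format. build_gpt4o_request is a hypothetical
# helper; the image URL is a placeholder.
def build_gpt4o_request(prompt: str, image_url: str) -> dict:
    """Assemble a chat request mixing a text prompt with one image input."""
    return {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_gpt4o_request(
    "What is in this image?",
    "https://example.com/photo.jpg",  # placeholder URL
)
# The payload would then be sent via an SDK call such as
# client.chat.completions.create(**request), given an authenticated client.
```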

Prior to GPT-4o, one could use Voice Mode to talk to ChatGPT with latencies of 2.8 seconds (GPT-3.5) and 5.4 seconds (GPT-4) on average.

"To achieve this, Voice Mode is a pipeline of three separate models: one simple model transcribes audio to text, GPT-3.5 or GPT-4 takes in text and outputs text, and a third simple model converts that text back to audio," reads the OpenAI website.
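The three-model pipeline described above can be sketched as a simple composition. The stage functions below are placeholder stand-ins for the real models, used only to show how the stages chain and why each hop adds latency:

```python
def speech_to_text(audio: bytes) -> str:
    # Placeholder for the first model: transcribes audio to text.
    return audio.decode("utf-8")

def chat_model(text: str) -> str:
    # Placeholder for GPT-3.5/GPT-4: takes text in, returns text out.
    return f"Echo: {text}"

def text_to_speech(text: str) -> bytes:
    # Placeholder for the third model: converts the reply back to audio.
    return text.encode("utf-8")

def voice_mode(audio_in: bytes) -> bytes:
    # The old Voice Mode chained three models in sequence; each hop adds
    # latency, which is how the averages reached 2.8s (GPT-3.5) and
    # 5.4s (GPT-4) end to end.
    text_in = speech_to_text(audio_in)
    reply = chat_model(text_in)
    return text_to_speech(reply)
```

GPT-4o, by contrast, handles audio within a single model, which removes the per-stage hand-offs that drove up response times.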
