Why yesterday’s ChatGPT announcement was world-changing 🤯🤯

We are going to see a ChatGPT-like reaction from the public, just like Nov 2022. Here are some use cases that are going to lead to this:

  • Language barriers: Imagine a grandparent who speaks only their native language, a child who speaks both the native language and English, and a grandchild who speaks only English. Now the AI can act like that bilingual child, bridging the gap between grandparent and grandchild. This is going to be huge for immigrants all around the world, as well as for language learners and multicultural couples.
  • Health use cases: the model is going to be used for medical diagnostics, detecting symptoms from live facial recognition and voice tone recognition. Depression detection is already being done, but GPT-4o takes these capabilities to another level. This is important for Canada, because we have a large nursing shortage.
  • AI as a judge: best shown in the rock-paper-scissors match demo. Because people attribute higher knowledge to ChatGPT, they are going to let it be the judge in arguments and conversations. This is something akin to the AGI scenarios that regular people didn’t get to experience back in Nov 2022.
  • AI as a friend: while rigorous testing has not been done yet, the demos show the technology is at a level where it can easily act like a friend, and we are going to see a lot of media coming out on this.

  • Price wars: The best model is now both better and cheaper than all other competitors by a considerable margin (link). This is likely to push some competitors out of the market.
  • Another reason competition is going to get harder is that GPT-4o is going to be free for chat.openai.com users (not enterprise). That means OpenAI is going to have a much stronger loyal base, because so many people will now use the best model for free and will be more resistant to switching to other AI providers.
  • Over the span of a year, OpenAI has reduced the enterprise cost of their model twice. The overall price per 1M tokens (input and output tokens combined) has gone from $37 to $15, and now from $15 to $7.50. This is now cheaper than the best models from Google ($10.50 for Gemini 1.5 Pro) and Anthropic ($30 for Claude Opus). Cohere’s top model is $6, and open-source models are much cheaper (Llama 3 is $0.90 per 1M tokens). See the quick comparison sketch below.
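
To put these blended per-1M-token figures side by side, here is a minimal Python sketch using only the numbers quoted above. Treating each provider as a single blended price, and the 50M-token monthly volume, are simplifying assumptions for illustration; real pricing charges input and output tokens at different rates.

```python
# Blended price per 1M tokens (input + output combined), as quoted above.
# A single blended number per provider is a simplification; actual pricing
# bills input and output tokens at different rates.
blended_price_per_1m = {
    "OpenAI model (a year ago)": 37.0,
    "OpenAI model (pre-GPT-4o)": 15.0,
    "GPT-4o": 7.5,
    "Gemini 1.5 Pro": 10.5,
    "Claude Opus": 30.0,
    "Cohere top model": 6.0,
    "Llama 3 (open source)": 0.9,
}

def monthly_cost(tokens_per_month: int, price_per_1m: float) -> float:
    """Dollar cost for a given monthly token volume at a blended rate."""
    return tokens_per_month / 1_000_000 * price_per_1m

# Hypothetical workload: 50M tokens per month.
volume = 50_000_000
for model, price in sorted(blended_price_per_1m.items(), key=lambda kv: kv[1]):
    print(f"{model:<28} ${monthly_cost(volume, price):>9,.2f} / month")
```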

  • Screen understanding and latency: The voice assistant responds in real time, and that is a big engineering feat. On top of that, the screen-sharing feature is something other competitors don’t have. It was best presented in the Khan Academy OpenAI demo, where GPT-4o helps a kid work through a math problem. There are so many use cases this opens up.

*Who am I?*

I’m Mehrdad, the lead AI engineer at a Canadian energy company with $4 billion in revenue.