Recent discussions from OpenAI leaders, including Sam Altman, have reignited debates about the proximity of artificial general intelligence (AGI) and even artificial superintelligence (ASI). This episode explores key statements, shifting perspectives on AGI’s timeline, and the implications for society and innovation in 2025.

Brought to you by:
Vanta – Simplify compliance – https://vanta.com/nlw

The AI Daily Brief helps you understand the most important news and discussions in AI.

Learn how to use AI with the world's biggest library of fun and useful tutorials. Use code 'youtube' for 50% off your first month.

Subscribe to the podcast version of The AI Daily Brief wherever you listen:
Subscribe to the newsletter:
Join our Discord:


A bunch of chatter on Twitter has people wondering: is AGI coming a lot sooner than we think?

Welcome back to The AI Daily Brief. Today's episode is fairly interesting, and something that I would not have expected to be digging into right now. However, the discourse has shifted in a fairly significant way in the last couple of days, and the conversation is entirely around AGI and ASI.

Now, one of the perpetual questions in the AI space is just how far along are we? How does the current state of technology stack up to this mythological artificial general intelligence, or even artificial superintelligence, that represent poles, benchmarks, and goals for the future? As we've seen over the last few months, in many cases the answers to these questions have significant financial implications. Until recently, for example, Microsoft and OpenAI's deal had a covenant that effectively nullified the deal when the OpenAI board said that AGI had been achieved. Now, recently, if you've been listening to the show, you'll know that the definitions were tightened up to be basically revenue-based, but still, the point is that there are big stakes here. In addition to the financial stakes of the conversation, there's just a broader question of what it means for the world.

And when it comes to how far along we are, or more specifically how close to AGI we are, most would have thought that over the last few months we had hit a setback. Everyone has been racing to explore new paths to scaling, like test-time compute, as the pre-training paradigm of scaling seems to be producing diminishing results.

Anyway, that was the setup of the conversation heading into the last few days. But then some interesting things started to happen. First, on Saturday, January 4th, OpenAI CEO Sam Altman wrote: "I always wanted to write a six-word story. Here it is: near the singularity; unclear which side." Now, Altman came online later and tried to clarify, saying it's "supposed to be either about 1) the simulation hypothesis or 2) the impossibility of knowing when the critical moment in the takeoff actually happens, but I like that it works in a lot of other ways too." Capturing the collective "are you kidding me, dude," The Intern account writes: "Dude, you cannot just tweet this, lol. This is like if Putin hopped on Twitter and said he's dropping a six-word story: might press the button, maybe not."

And if it had just been that, maybe we could write this off as Sam being Sam and his penchant for cryptic hints getting the better of him around the holiday. But that was far from the only indicator we've seen that OpenAI folks seem to think that the trajectory has changed. Agent safety researcher Stephen McAleer at OpenAI says: "I kind of miss doing AI research back when we didn't know how to create superintelligence." The company has also been dropping hints about exactly what that process is. During the reveal of o3, researchers joked about asking the model to improve itself; Sam Altman cut them off and said, "maybe we shouldn't do that." Chubby shared that tweet and said: "One OpenAI researcher said this yesterday, and today Sam said we're near the singularity. WTF is going on? They've all gotten so much more bullish since they've started the o-series RL loop. 1) Sam's essay, ASI in a few thousand days" (referring to the essay, which we read for the end-of-year Long Reads, by the way), "2) Sam's post from today, 3) yesterday, this post from OpenAI researcher" McAleer.

There was also this thread from Joshua Achiam, the head of mission alignment. On January 5th, he wrote: "The world isn't grappling enough with the seriousness of AI and how it will upend or negate a lot of the assumptions many seemingly robust equilibria are based upon: domestic politics, international politics, market efficiency, the rate of change of technological progress, social graphs, the emotional dependency of people on other people, how we live, how healthy we are, our ability to use technology to change our own bodies and minds. Every single facet of the human experience is going to be impacted. It's extremely strange to me that more people are not aware, or interested, or even fully believe in the kind of changes that are likely to begin in this decade and continue well through the century. It will not be an easy century. It will be a turbulent one. If we get it right, the joy, fulfillment, and prosperity will be unimaginable. We might fail to get it right if we don't approach the challenge head-on."

Now, capping this off was a blog post from Sam Altman posted last night, January 5th. The post was simply called "Reflections." He kicks it off: "The second birthday of ChatGPT was only a little over a month ago, and now we have transitioned into the next paradigm of models that can do complex reasoning. New Years get people in a reflective mood, and I wanted to share some personal thoughts about how it has gone so far, and some of the things I've learned along the way. As we get closer to AGI, it feels like an important time to look at the progress of our company. There is still so much to understand, still so much we don't know, and it's still so early. But we know a lot more than we did when we started."

Sam then walks through a bit of the history of the company: how surprised they were when ChatGPT took off when it was launched in November of 2022, and how messy the company-building process has been. Sam took some time to discuss getting fired by the board; he basically reflects upon it as a learning experience for him and the company. But the real meat of it, and the thing that everyone's talking about, is the last few paragraphs. Altman concludes: "We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents join the workforce and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly distributed outcomes. We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity. This sounds like science fiction right now, and somewhat crazy to even talk about. That's all right; we've been there before, and we're okay with being there again. We're pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important."

Now, for months Altman has been sort of resetting the goalposts on AGI, saying that, in terms of the way we've thought about it in the past, we'll probably be there sooner than we think, but it'll probably have less impact than we would have thought. Clearly, this resets the goalposts even further, to really put the aim of OpenAI at superintelligence, not just AGI. Professor Ethan Mollick points out that this is not coming from Sam alone and reflects what he's been hearing as well. He tweeted the last part of the essay and wrote: "This bit of Sam Altman's newest post is similar in tone to a post by the CEO of Anthropic, and what many (not all) researchers from every lab have been saying publicly and privately. You do not have to believe them, but I think they believe what they are saying, for what it's worth."

For many, then, the conversation is: what do we do now? What are the implications of AGI being here faster than we think? This is now a key question, and one that we will be exploring a lot more, it seems, in 2025. That's going to do it for today's AI Daily Brief. Until next time, peace.