JP Morgan CEO Jamie Dimon envisions a future transformed by AI, where shorter workweeks become the norm. With over 300 AI use cases in production at JP Morgan, Dimon believes the technology could redefine work and health, paving the way for major societal shifts. Plus, updates on Microsoft Recall, Rabbit’s AI wearable, and advances in test-time computing for reasoning models.

Brought to you by:
Vanta – Simplify compliance – https://vanta.com/nlw

The AI Daily Brief helps you understand the most important news and discussions in AI.

Learn how to use AI with the world’s biggest library of fun and useful tutorials: Use code ‘youtube’ for 50% off your first month.

Subscribe to the podcast version of The AI Daily Brief wherever you listen:
Subscribe to the newsletter:
Join our Discord:


On this short Thanksgiving work week, the JP Morgan CEO says that this might be the norm in the AI future. Welcome back to The AI Daily Brief headlines edition, all the daily AI news you need in around five minutes.

In the US this is a short week for Thanksgiving, which is on Thursday, and the CEO of JP Morgan, Jamie Dimon, thinks that this might not be that different from a normal work week in the future. In an interview on Bloomberg TV, Dimon said, "Your children are going to live to 100 and not have cancer because of technology, and literally they'll probably be working three and a half days a week." Dimon's predictions are backed up, at least in practice, by just how much JP Morgan has gone in on AI. He has previously called it "critical to our company's future success," dedicated an entire section to AI in his shareholder letter this year, and said that JP Morgan already has more than 300 use cases in production.

One of the things I talk about a lot here is the need for a discussion around a new social contract, and this is exactly what I mean by that. One of the major questions, if AI does replace a huge amount of human labor, is whether we're going to stay on the same paradigm of how much we work and just fill the same amount of time with our new, more advanced tasks, or if we actually get

comfortable saying that working less is enough to have contributed to society. It's hard to have that conversation in the abstract, and it's hard to imagine a future that doesn't look exactly like today, but these are the types of conversations we are simply going to have to have in the years ahead.

My prediction for the AI future, which feels insane to those of us who have grown up in a very specific era of computing but will feel completely and absolutely normal for people in the future, is the type of constant surveillance represented by Microsoft's Recall. The controversial feature takes screenshots of everything you do on your computer to create a searchable database of memories. AI is used to index images and text from the screenshots and power the search function. The company announced this some time ago but has finally released the first preview version. A couple of things to note: one, they say that Recall is entirely optional; you have to opt into it. Second, they say that they have provided strong controls over privacy and security. Users have full access to all of the screenshots and can manually delete them as necessary. The feature can also be configured to exclude certain apps and websites from being recorded; can't imagine what they're thinking with

that. And Microsoft also says that personal details like credit card info, passwords, and ID documents can be automatically detected so those snapshots are not saved. Finally, Microsoft won't have any access to any Recall snapshots; they're not sent to the cloud or used to train Microsoft's AI models.

Now, the reason I think this is going to become completely normalized is, one, that the general pattern on the internet has been that we get more comfortable with surveillance, but two, if they can solve for these security and privacy concerns, the number of applications this opens up is immense. This is so potentially useful, and even beyond just the search application that Microsoft is thinking about, there are many, many opportunities to use this sort of total information about what someone is doing to create customized products and services around them, and that could be really, really powerful. The question, of course, is just what people get comfortable with, and for that we will simply have to wait and see.

Next up, an update on an AI-powered device that had a ton of excitement and then fell off quite quickly. AI wearable Rabbit says that they're now rolling out a new agentic upgrade. Users of the Rabbit R1 can now teach it to perform certain tasks with a feature they're calling

Teach Mode. The R1 can learn through demonstration; for example, a user could show the device how to fetch social media updates or save a song to your Spotify account. Using the web interface, users create a lesson by describing the task and then recording themselves performing it. The R1 can then recall the lesson on command. This is something they had promised when they first announced the R1 but which wasn't immediately available. The feature has come online after the R1 received a big update last month with the addition of automated website browsing. Some reports criticize Teach Mode for being a little laborious but also recognize that no-code agent training is a powerful idea.

Fortune, meanwhile, highlighted how common experimental and unpredictable AI features are becoming. Rabbit CEO Jesse Lyu defended the practice, stating, "You have to kind of encounter all the edge cases and tweak on the fly," and continued, "That's just the whole nature of developing with AI models." He pointed out that Rabbit doesn't have a 10-year runway or the ability to fully test edge cases, saying, "We have to make sure that we take our shot and move fast." The question, of course, is whether "move fast and break things" is an appropriate mantra for the generative AI world.

Lastly today, another Chinese lab has claimed a major

breakthrough in the use of reasoning models and inference-time scaling. Last week, DeepSeek claimed they had produced a text-based reasoning model that exceeded the capabilities of OpenAI's o1-preview model. They said they merely took OpenAI's chain-of-thought logic and added more time, demonstrating that this approach to scaling could be viable. Now a consortium of Chinese universities has produced an image-capable model called LLaVA-o1 based on the same principles. Chain-of-thought prompting has been used for visual language models, or VLMs, in the past, but it generally produced only marginal gains. The issue has been that VLMs can struggle when the chain of thought is not sufficiently systematic or structured, often getting lost and losing track of the specific problem they're trying to solve. The researchers behind this new model wrote, "We observe that VLMs often initiate responses without adequately organizing the problem and the available information. Moreover, they frequently deviate from logical reasoning toward conclusions, instead presenting a conclusion prematurely and subsequently attempting to justify it."

The researchers of LLaVA-o1 took a similar approach to OpenAI's o1, breaking the reasoning process down into four steps. The model first provides a high-

level summary of the problem it's being asked to solve. Next, the model captions the image input, describing the relevant parts and focusing on elements related to the question. The model then performs structured logical reasoning to produce a preliminary answer. Finally, the model presents a summary of the answer based on the prior reasoning step. Only the final step is visible to the user, with the rest taking place behind the scenes. The researchers wrote, "It is the structured output design of LLaVA-o1 that makes this approach feasible, enabling efficient and accurate verification at each stage. This validates the effectiveness of structured output in improving inference-time scaling."

And while there is excitement about the possibilities, Ria Shakur also pointed out, "Love how the LLaVA-o1 paper releases zero information about their training. No code, no annotated dataset. Super helpful." Still, there is clearly a lot going on with the inference and test-time compute approach to scaling, so anticipate some more developments there.

That, however, is going to do it for today's AI Daily Brief headlines edition. Next up, the main episode.
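As a footnote, the four-stage pipeline described above (summary, image caption, reasoning, conclusion, with only the last stage shown to the user) can be sketched in a few lines. This is a minimal illustration, not the paper's actual code: the stage names match the episode's description, but the `run_stage` function is a hypothetical stand-in for what would be a real VLM call.

```python
# Sketch of an LLaVA-o1-style structured reasoning pipeline.
# The stage list follows the four steps described in the episode;
# run_stage is a placeholder where a real system would prompt a VLM.

STAGES = ["summary", "caption", "reasoning", "conclusion"]

def run_stage(stage, question, prior_outputs):
    """Stand-in for one model call; returns a tagged stage output."""
    return f"[{stage}] output for: {question}"

def structured_answer(question):
    outputs = []
    for stage in STAGES:
        # Each stage sees the question plus all earlier stage outputs,
        # keeping the chain of thought systematic rather than free-form.
        outputs.append(run_stage(stage, question, outputs))
    # Only the final (conclusion) stage is visible to the user;
    # the summary, caption, and reasoning stages stay behind the scenes.
    return outputs[-1], outputs[:-1]

answer, hidden = structured_answer("What is shown in the chart?")
```

The point of the structured output, per the researchers, is that each stage can be verified independently, which is what makes inference-time scaling effective here.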