We've asked the entire world to move from calculator technology (punch the same keys, you get the same result) to stochastic technology: ask the same question, you'll get a slightly different result. That has never happened before. This is the biggest shift in the use of the tools that we have since the advent of the computer. We're asking an entire cohort of the workforce to move to a stochastic mindset, and the only way you get that is by having a risk-reward ratio you're comfortable with. It's like: you know what, I'm not asking it to be right 100% of the time. I'm asking it to give me a draft that saves me time many, many, many times over, and that distribution of ROI is something I'm comfortable exploring and iterating on. I think that's really one of the predictors we see in people who've tried ChatGPT, or people who are just curious about new technology: they expect that some of it's going to be a bit broken, but the upside scenario to them is so clear and so 10x that they're willing to make
that trade-off, or take that local risk, to get things started.

Welcome to Training Data. This week we welcome Gabriel Hubert and Stanislas Polu, the co-founders of Dust, a unified product to build, share and deploy personalized AI assistants at work. Founded in early 2023 after they spent years at Stripe and OpenAI, second-time founders Gabe and Stan started Dust with the view that one model will not rule them all, and that multi-model integration will be key to getting the most value out of AI assistants. They were early to be convinced that access to the proprietary data you have in data silos would be key to unlocking the full power of AI, and they know that you want to keep that data private. We've worked together for 18 months, and their predictions have been consistently prescient, so today we decided to ask them about those predictions. We'll get into their perspective on how they see the model landscape evolving, on the importance of product focus over building proprietary models, and on how AI can augment rather than replace human capabilities.

Stan, Gabriel, welcome to Training Data.

Thank you, glad to be here.

Yeah, thanks
Konstantine, super happy to be here.

Guys, the first thing I want to ask: you started this company in early 2023. At the time it seemed like one model might rule them all, and that model was, I think, probably GPT-3.5 (I don't know if 4 had come out yet), but that was way ahead of the curve and people were super blown away. You guys came out with a pretty contrarian view that there would actually be many models, and that the ability to stitch those together and do advanced workflows on top of them would be important. So far you've been completely right. How did you get the confidence to make that decision a year and a half ago?

Yeah, I think on the multi-model part, it was clear that many labs were already emerging. It was not clear to the general audience, but for the people who knew the dynamics of the market it was clear that many labs were emerging. It was kind of natural to us that there would be competition in that space, and as
a result there would be value in enabling people to quickly switch from one model to another to get the best value depending on their use cases.

Yeah, and from the usage standpoint, the point about being able to quickly evaluate and compare is obviously important. Looking ahead, or already in some of the conversations we're having, it seems that the levels of scrutiny, security, and sensitivity around the data being processed may also shape different use cases. So, excitingly, we're seeing people think about running smaller models on device for some use cases, and you could imagine a world where you want to be able to switch between an API call to a frontier model, for something that's less sensitive but where cutting-edge reasoning capabilities are absolutely crucial, and some smaller classification or summarization effort that could be done locally, while the interface you use for your agent or your assistant remains the same. And that switching requires the ability to have a layer on top of the models.

So you guys have been right about this every time as you've called it, and so many of your predictions over the past couple of years of partnership have been non-obvious and then correct. I think this is still not obvious: that there will be many models, that you'll have some local models and some API calls, and that you as a customer actually want to choose between them, or want some control. First of all, why do you think that will be the case, as in, why will there be multiple models? Secondly, why doesn't that get abstracted away by some sort of router
mechanism, some hypervisor layer? And does that happen, and would you be that hypervisor layer? Help me understand that.

I think there are really two modes of operation in how we think about the future; it's basically a bimodal distribution of futures. There's one where the technology as it stands today keeps progressing rapidly, in which case there's still going to be competition between the rather big labs, because there will be an incredible need for GPUs to build those larger and larger models, since the only way we know to make those models better today is mostly scale. In that world, this dynamic of being able to switch to the best model at time T will remain true for a long time, I guess until we reach wherever it is that's at the end of that dynamic. And then there's the hypothesis, and we can talk about it later in more detail, that maybe the technology plateaus, in which case it's not going to be one model, it's going to be a gazillion models, and eventually everybody will have their own model; eventually, on your MacBook M6, you'll be able to train a GPT-6 in a few hours, in a couple of years. And then the router kind of disappears, because the technology is really commoditized in terms of just producing the tokens, and every company will have their own model. We would have our own model in that world.

Well, we've got to push you on that. You're building a business where you sort of win regardless of which one of those worlds we go into.
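The switching layer described above, picking a local model or a frontier API per request, can be sketched in a few lines. The model names, task labels, and routing rules here are illustrative assumptions, not Dust's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Request:
    task: str        # e.g. "reasoning", "classification", "summarization"
    sensitive: bool  # does the prompt touch restricted data?
    prompt: str

def route(req: Request) -> str:
    """Pick a backend per request: keep sensitive or simple work on a
    local small model, send hard reasoning to a frontier API."""
    if req.sensitive:
        return "local-small"   # data never leaves the device
    if req.task in ("classification", "summarization"):
        return "local-small"   # cheap and latency-friendly
    return "frontier-api"      # best available reasoning

# The assistant interface stays the same; only the backend changes.
print(route(Request("reasoning", sensitive=False, prompt="...")))  # frontier-api
```

The point of the sketch is that the policy lives above the models, so swapping which model sits behind "frontier-api" never touches the user-facing interface.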
Which one of those worlds do you think we're going into?

This one is definitely tricky. It's interesting, because in terms of the capabilities of the models, we've seen, or had the perception, that the ecosystem was moving very quickly over the past two years: we've seen larger contexts, support for audio, support for images, and so on. But at the same time, the one core thing that matters for changing the world is the reasoning capability of those models, and the current reasoning capability of those models has actually been pretty flat over the past two years. They are at the level of GPT-4 as of the end of its training, which is roughly slightly over two years ago, if I remember correctly, for the end of the internal training. So that means that over the past two years, in terms of reasoning capabilities, it's been somewhat flat.

Well, hang on. There's the Kevin Scott point of view, which is that there's actually exponential progress, but you only get to sample that progress every so often, and so in the absence of a recent sample, people interpret it as having been flat when in fact there's just sample bias: you can't see it. So do you think he's right, that there's actually exponential progress and we just haven't gotten to see it yet, or do you think it's actually asymptoting, and reasoning breakthroughs have not progressed at the rate one might hope?

As far as I'm concerned, I have a strong feeling that it hasn't been moving as fast as I would have expected in my most optimistic views of the technology, and so that's why I'm
allowing myself to ask the question, or simply to consider the different scenarios.

One of your predictions for 2024 was also that we would have a major reasoning breakthrough. Do you think it's coming?

Yeah, it's going to be a tough one, because this one hasn't come, for sure. GPT-5, or GPT n+1, or Claude n+1, it doesn't matter who cracks it first, hasn't come yet. And there are many reasons to believe it might not be a core technological limitation; you can make many hypotheses as to why it might just take time. The scale of the clusters required to train the next generation of models is humongous, and it involves a lot of complexity from an infrastructure and, really, a programming standpoint, because GPUs fail. When you scale to that many GPUs per cluster, they fail pretty much all the time, and the training is very synchronous across a cluster. So it might just be the case that scaling up to the next order of magnitude of GPUs needed is very, very hard, and that it isn't an inherent limitation, just a phase where we learn how to go from RAID 1 to RAID 5, but for GPUs, basically.

Stan, you were at OpenAI at a pretty critical point in time. People know you from the Dust experience, but one thing that has to be remembered about Stan is that you were a critical researcher at OpenAI from 2019 through late 2022. You got a bunch of wonderful publications, some of them related to mathematics and AI. You worked on these with Ilya
Sutskever and the crew at OpenAI. Do you think that mathematics will be essential to this type of reasoning breakthrough, or is it orthogonal, something we're actually going to learn on textual language data?

I remain quite convinced that it's a great environment to study. It was a thesis we had at the time with Guillaume Lample, who then founded Mistral; he was working at FAIR on exactly the same subjects, and our motivation was really shared at the time. We were friendly competitors in the same space, but really aligned on the ideas. The idea there was that mathematics, and in particular formal mathematics, which gives you perfect verification, is a very unique environment to study and push reasoning capabilities, because you have a verifier: you're not constrained by being able to verify the predictions of the model, which in an informal setup would require humans checking them, to some extent. And so that very property is probably something that has to unlock something at some point. It hasn't yet, for many reasons, but at some point it should. So I remain extremely bullish on math, and on formal math and LLM studies.

Yeah, I remember one of the ways you presented it to me when I was still very much ramping up: math as the door to software, and software as the door to the rest. And you started with some of the critical systems that were the very only ones to have been hand-proven and hand-verified, as an example of how much more costly it was to do it by hand than by machine, and an indication of the future gains we could expect from being able to extend and democratize that.

You guys see a lot of action through the Dust API calls. When you build a Dust assistant, you're able to choose what type of underlying model to use; you're able to call many different models. Me, as a user, I often call not just Claude 3 but GPT-4, and I call the Dust assistant, and in my
custom assistants I select one of many options. What have you guys seen in terms of trends? What's performing really well? I've personally been super impressed by the Anthropic models as of late, but you have a much closer view of that.

I mean, a word of caveat on trends: you're going to have the usual cognitive biases, the grass is always greener, people wanting to switch just to see what it looks like on the other side. So when you're observing those switches, you're not necessarily observing a conviction that the model on the other side is better; you're just observing that people want to try. But it is true that we've gotten great feedback on Claude's latest Sonnet release, and empirically we're seeing some stickiness on that model in our user base. I think the word on the street is that for some coding applications, Codestral is actually performing very, very well. We haven't yet made it available through Dust... it was yesterday? Ah, there we go. Sorry, this is what you get for being in San Francisco and recording at 7 o'clock in the morning. So yeah, Codestral apparently is really interesting on some coding capabilities.

And then you have to mix that in with the actual experience people are getting: reasoning cannot be fully separated from latency. Latency at some points last year could basically be a way to tell the time in San Francisco; you could literally see latency in the API as people were waking up on the west coast. People have use cases that may be more or less tolerant of that. So we cover the Gemini models, Anthropic's models, OpenAI's and Mistral's right now, and we have seen some interest in moving away from the default, which when we first launched was OpenAI's models; not to say that GPT-4 isn't performing very, very well.

Over the past year there's been a lot of enthusiasm about open source models, and it's actually one of your predictions, Stan. You have these great predictions
every year about AI; I always really enjoy reading them. One of them was that at some point this year an open source model would take the brief lead for LLM quality. That doesn't seem to have happened yet, and it also seems like, not the enthusiasm around, but rather the lead or acceleration of the open source models in comparison to the closed source models has maybe slowed down a little bit. Maybe that's back to the Kevin Scott point about sampling at discrete times as opposed to continuous times, and we just haven't seen it yet. But where do you think the open source ecosystem is going to go? Will it actually, at some point, surpass the closed source ecosystem?

That echoes what we said earlier; it's really in that bimodal distribution. There's one branch where open source goes nowhere, and there's one branch where open source wins the whole thing, because if the technology plateaus, open source obviously catches up, and eventually everybody can train their own high-quality model themselves, and at that point there is no value in going for a proprietary model. So I think there's a scenario where open source really is the winner at the end, which would be a fun turn of events, obviously. In the current dynamic, it's true that open source has been lagging behind so far. I think the one that has to be called out is really Facebook, or Meta FAIR, because they have what it takes to train an excellent model, and so far they have been releasing every model very openly. So it's exciting to see what will come out of them in the next four months, to maybe make the prediction true.

The caveat to that is that, assuming the best models are the largest, which is a somewhat safe assumption, though one that can be discussed, it means that such a model will be humongous to some extent. So even if it's open source, nobody will be able to run it; it'll just cost too much money. You'll need GPUs just to do inference. And so that will really constrain the usage of those models, even if they're better, in the current state of affairs, in terms of the cost of running them.

It's a point about consumption. That's interesting, because it means you might still have a world where there's a lot of demand for API-based inference, regardless of whether the model on the other end is closed, hosted, open weights, whatever, just because of the technical ability to perform it.

One of your founding assumptions, kind of related to model quality and model performance, and this goes
back almost two years now, was that even then the models were powerful enough, and potentially economically viable enough, that you could unlock a huge range of unique and compelling applications on top, and that the bottleneck even at that point was not necessarily model quality so much as the product and engineering that can happen on top of the model. I don't know if that's a consensus point of view today; we still hear a lot of people who are sort of waiting for the models to get better. For what it's worth, we happen to agree with you, but the question is: what did you see in '22 that gave you that point of view? And if we fast-forward to today, what has your lived experience been deploying this stuff into the enterprise, in terms of where the product and engineering unlocks need to happen to bring it to fruition?

My triggering point for leaving OpenAI was seeing and playing with GPT-4, and it came from two very contradictory motivations. The first was, as I said before: it is crazy useful, nobody knows about it, nobody can use it yet, and still it exists, and it's almost already in the API. At the time it was GPT-3.5 in the API, which was kind of a slightly smaller version of GPT-4 but trained on the same data. It was a crazy good model; it was basically Codex, the base model; it was much better than ChatGPT; and it was available in the API. And yet the ARR of OpenAI was ridiculously small at the time, by all standards of what we see today. So that was part of the motivation, and it was mixed with the fact that I was starting to feel, I mean, I had the intuition, that it would be hard to invent an artificial mathematician with the current technology. So I was seeing, not a dead end, but a very long, slow path forward on what I was working on, and at the same time I was already seeing the utility of those models when you use them for your day-to-day tasks. So that was the first motivation. And the very contradictory motivation, which I shared with Gabriel at the time, was: if that technology goes all the way to AGI, it's the last train to build a company, so we'd better do it right now, because otherwise there is no next time. And I absolutely didn't answer your question, but I'll let Gabriel answer one.

I think what got me excited, when we did start brainstorming on the ways to deploy this raw capability in the world,
where it made sense to dig, was an insight on some of the limitations of the hype around fine-tuning at the time. People were talking a lot about fine-tuning; a lot of consultancy firms were selling a lot of slides essentially telling big companies to spend a lot of money on fine-tuning. And the two things that cut it for me were Stan saying, one, it's expensive and you have to do it regularly, and nobody realizes they'll have to do it regularly; and two, it's really not the right approach for most of the things people are excited to fine-tune on. In particular, fine-tuning on your company's data is a bad idea, as opposed to maybe sometimes fine-tuning on specific tasks where you can see gains. The idea was that bringing in the context of a company, which is obviously every real company's obsession ("how does this work for me, how do I get it to work the way I'd like it to work"), was going to happen with technologies that weren't about changing the model itself, but rather about controlling the data it has access to, and controlling the data any of its users have access to. And those are somewhat hybrid models between the new world and the old world. The very old-world version of it is: the keyholders are still the same. The CISO is the one deciding how a new technology is exposed to members of a company, the guardrails that are in place, the observability that's available to the teams to measure its impact and any data leaks. Those are old software problems, but they still need to be solved on very new interfaces, because the interfaces now are these AI assistants, these agents. And then some of the new problems are around access control. Does access control look and feel the same in a world where half of the actions are done by non-humans? In 2020, I might want access to a file, and the question is just: do I have access to the file, yes or no? In 2024, it's: well, maybe an assistant has access to the file and can give me a summary of it that leaves out some of the critical information I should not have access to, but still gives me access to some of the decision points that are important for me to move on with my job. And that set of primitives, that set of nuances, just doesn't really exist in how documents are stored today. So if you think about deploying the capability in a real-world environment where people are still going to have to face those controls and those guardrails, the product layer is actually very thick; the application layer, to build the logic and the usability, to ensure performance but also adoption, is quite thick.
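That new primitive, an assistant answering from a document the requester cannot open directly, can be sketched as a toy. The field names, roles, and clearance table below are invented for illustration, not how any real system stores documents:

```python
# Invented example data: a document with mixed-sensitivity fields.
DOC = {
    "decision": "Ship feature X in Q3",
    "rationale": "Aligns with the platform roadmap",
    "compensation": "<restricted>",
}

# Which fields each role is cleared to see (illustrative only).
CLEARANCE = {
    "employee": {"decision", "rationale"},
    "finance": {"decision", "rationale", "compensation"},
}

def assistant_view(doc: dict, role: str) -> dict:
    """Return only the fields the role may see: the answer carries the
    decision points without granting access to the raw document."""
    allowed = CLEARANCE.get(role, set())
    return {k: v for k, v in doc.items() if k in allowed}

view = assistant_view(DOC, "employee")
# 'decision' and 'rationale' come through; 'compensation' is withheld.
```

The nuance in the conversation is exactly this: the yes/no file-access check of 2020 becomes a field-level filter applied by the assistant at answer time.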
And I think that was the signal to say: all right, there's a lot to do here, we should get started.

Maybe you can dig into that, because when we intersected in Q1-Q2 2023, a lot of people were still starting foundation model companies, and you guys had a very specific opinion, which was: the future is the application layer, there's going to be a lot going on under the hood, and we're just going to be an abstraction layer on top of that and let things happen as they happen; we're going to succeed in any case by building something that people actually use and love. First, how did you have the conviction for that? Secondly, how has that been playing out? What has the hard part been? You mentioned the CISOs and the enterprise and enterprise deployments. You guys have been way ahead of the curve on RAG; everyone was talking about fine-tuning, but you had done so much in terms of retrieval, before it was even called that, really: retrieving and actually making smart decisions around information. Walk us through, step by step, from the idea of the application layer to where you are today.

You can imagine the application-layer conviction existing in a world where you still decide to build a frontier model. The reason we split those two is, one, it seemed like a lot of money for a lot of risk, and I mean a lot of money for a lot of risk, to try to develop a frontier model, or an equivalent of a frontier model, and also to make a bet on the way it was going to be distributed. So our internal slogan was "no GPUs before PMF": we don't see the value in training our own model until we actually know which use cases it's going to get deployed on, and there are much cheaper ways to explore and confirm which use cases are actually going to create most of the value and generate most of the engagement. The second reason was about this data contradiction: the fact that the cutoff dates for training on internet data are hard
to set continuously, the fact that you can't actually get an internal understanding of what happened last week into a frontier model, means that fine-tuning is a hard problem, and it is not a solved problem at scale. If you walk backwards from that conviction, there are many cases where it's not solved, so another technology has to be the one to deliver most of the gains: extracting a small piece of context from the documents where it lives, and feeding it into the scenario, the workflow, that you need help with. The one trend that seemed interesting was that many decisions actually require limited amounts of context and information to be greatly improved. So the context windows at the time, which were small, were already compatible with some scenarios of just bringing the information in. And what we've seen over the last year, of course, is the increase in the size of those context windows, which just makes it easier to expose all the right data, and hopefully no more than the right data, to the reasoning capabilities of the frontier model.
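The retrieval pattern described here, pulling only the most relevant snippets into a finite context window and no more than needed, can be sketched like this. The relevance scoring is naive word overlap purely for illustration; a production system would use embeddings and a vector index:

```python
def score(query: str, doc: str) -> int:
    # Naive relevance: count the lowercase words shared with the query.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_context(query: str, docs: list[str], budget_chars: int) -> str:
    """Rank documents by relevance and pack them into the context
    window until the character budget runs out."""
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    picked, used = [], 0
    for d in ranked:
        if used + len(d) > budget_chars:
            break
        picked.append(d)
        used += len(d)
    return "\n".join(picked)

docs = [
    "Q2 revenue grew 14% driven by enterprise deals.",
    "The cafeteria menu changes on Mondays.",
    "Enterprise churn fell after the onboarding revamp.",
]
ctx = build_context("enterprise revenue growth", docs, budget_chars=120)
# The two on-topic lines fit the budget; the off-topic one is dropped.
```

Growing context windows translate directly into a larger `budget_chars`, which is exactly the easing Gabriel describes: more of the right data fits, with less aggressive filtering.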
What we've experienced is, first of all, that it takes time for people to understand those distinctions. It's hard, and you have to get yourself out of your own bubble regularly to realize that it's true: the future isn't evenly distributed yet, and people have varying assumptions about what it means to roll out AI internally, or roll out the capabilities of these frontier models on their workflows. You have to walk them back to what they really care about, which is always very simple things: I want to work faster, I want to know the stuff I'm missing out on, I want to be more productive or more efficient on some tasks that I find repetitive. And then you only bring in the explanation of what technology is going to solve that when it's absolutely necessary, because 99% of the time people will worry about their experience and how they feel about it more than how it's working under the hood.

The big insight, and we've been leaning into it for a while, and it's great to see some of the market also doing that, is that people are actually really good at recognizing which tool they need from the toolbox. I think we've not respected users enough in saying: you need a single assistant that does absolutely everything, the routing problem should be completely abstracted from you, you should ask this question of the one oracle and the oracle will reply. People are pretty comfortable telling a screwdriver from a hammer, and when they want to get to work and they need a screwdriver, they're very, very disappointed when the one they get is a hammer, and it sounds like a hammer's response. And so specializing agents, specializing assistants, and making that easy to do, design, deploy, monitor, iterate on, improve, all those verbs that require product surface, it quickly became apparent to us that people were very comfortable with that.

The number one question that made us feel we had an insight to hang on to and lean in on was that everybody asking us about Dust was obsessed with the top use case: what are people using it most for, what is the top use case across companies? I could almost see the Amazon eyes trying to decide which diapers.com they're going to verticalize and integrate, like which verticalized use case should we now just build as a specialized version of this. But I think the full story is fragmentation. The story is giving the tools to a team, or to a company, to see opportunities for workflows to be improved on, augmented, and understanding the Lego bricks that are going to help them do that. So rather than encapsulating the technological bricks and abstracting them away from users, exposing them at the right level gives people a ton more autonomy, and really the ability to design things we had never thought of. Some of the scenarios that have come up, we literally could not have imagined ourselves.

That idea makes sense, the fragmentation, and providing people with the Lego blocks to see what sorts of use cases emerge. Just to make it a little bit more real, though, can you share a couple
of use cases that you've seen in your customer base that have been unique, or surprising, or particularly valuable? Just something to make it a little more tangible.

There's obviously a ton that people are thinking about. The category of obvious use cases that have been interestingly and quickly deployed is the enablement of sales teams, support teams, marketing teams, and that is essentially context retrieval and content generation. I need to answer a ticket: I need to understand what the answer is and generate a draft reply. I need to talk to a customer: I need to understand which vertical they're in, and how our product solves their problems, and draft an email to follow up on their objections. I need to prepare a blog post to show how we're differentiated from the market: again, I'm going to plow into what makes us special and generate it in our tone of voice. Those were pretty obvious and quite expected.

What I've been excited to see is two types of things. One is very individualized assistants, personal coaches. People, generally actually quite young people in the first years of their career, asking for advice on a weekly, on a daily basis: how did I do today versus my goals? Where do you think I should focus my attention in the coming days? Can you break down my interactions on Slack and in Notion over the past couple of days and say where I could have been more concise? I'm getting the feedback that I sometimes talk too theoretically; can you point out the ways I can improve on that in these two notes I'm about to send? And that's exciting, because our bet was: we want to make everybody a builder, we want to make everybody able to see that it's not that hard to get started, by reducing the activation energy to see small gains immediately, rather than waiting for the next model or the next version that's going to really solve everything for them. The personal use cases have been great.

The second family of use cases I'm excited by is essentially cross-functional, where the data silos exist because the functions don't speak the same language; they speak the same language, but they don't speak the same language. Understanding what's happened in the codebase when you don't know how to code is powerful. Having an assistant translate into plain English what the last pull request that's been merged does is powerful, to people who were blocked in their work and didn't know who they should bug to actually get an update. So, marketing to engineering, sales to engineering. The other scenario is the reverse: extracting technical information from a long sales call is powerful, because it means the engineer doesn't need the abstraction of a PMM or a PM to get nuggets from the last call with a key account; they can just focus the attention of an assistant on that type of content, on their own project, and get those updates. I'd say that's the family of assistants we're excited by, because they really represent, I think, the future of how we'd love fast-moving, well-performing companies to work: where the data that is useful to you, and to the decisions you should make, is always accessible. You don't need to worry about which function decided on it or created it; you can access it, and that fluidity of information flowing through the company helps you make better and faster decisions, day in, day out. Any other examples I'm missing, Stan, that you're excited by?

No, I think what I wanted to add is the fact that, as you said, the usage is extremely fragmented. We see the same scenario over and over, and so we have data to back that
kind of proposition. We built Dust as a sandbox, which makes it extremely powerful and extremely flexible, but that also has the complexity of making activation of our users non-trivial, because when you have a horizontal, sandbox-like product, people say: yes, but for what? So generally the pilot phase with our users starts with clearly identified use cases: they really try to answer the question, what are the use cases I should care about for my company, and try to identify a couple of them. And we always see the same pattern. The first use case gets deployed, usage starts; we try to move laterally to another use case, the second use case gets deployed, usage picks up a little bit more; and then we generally go through a phase where usage is kind of flat, increasing slowly, until eventually it reaches a critical mass and all of a sudden it skyrockets to something like 70% of the company. That's the pattern of activation of our users. And by the time it skyrockets to 70%, the original use cases that were identified by the stakeholders have become just anecdotal compared to the rest of the usage. That's where we feel Dust provides all its value, and it's very hard for us to know what all those use cases are, because we have examples of companies with a few hundred people and a few hundred assistants. So it's just hard to answer the question, what are the best use cases?

Those are great examples, and that calls to mind an analogy that
I would like to try out on you guys, and you may puke on this analogy, but this is what just showed up in my brain. A lot of those use cases you described, you could imagine some sort of vertical application being built around them. And the analogy that comes to mind is: there are a gazillion vertical applications, and yet where does a lot of work happen? Spreadsheets. Why does it happen in spreadsheets? Everybody knows how to use a spreadsheet. They're there, they're flexible, you can customize them to your heart's content. So the analogy I'm wondering about is that this is almost like the spreadsheet of the future. Some of these applications may get peeled off into vertical-specific applications, but even then, people are still going to come back to the personal agent, because it's just there, it's available, it has access to your data, it's familiar, you know how to use it, and you can build what you want quickly, simply, and effectively. Is that a reasonable analogy for what this kind of is?

I think it's an amazing
analogy, and for another thing I'm thinking about, which is that it took me the longest time to get Stan onto spreadsheets when we started working together, and this is way back, I don't know if it was 20 or 15 years ago. Then at one point Stan uses one for something and says, "Oh wow, this is kind of like a cool REPL interface where you can just get the results of your functions in real time. It's a cool REPL interface for non-engineers, I get it now."

I also think it's interesting because the experimentation cost is very, very low. If you think about the way some of our customers try to describe the gains they're experiencing, or that they're seeing, and their excitement for the future: in some functions we've had 80% productivity gains; in some functions we're seeing 5% productivity gains, and we're not even sure we're measuring them right, but we're seeing gains when the specialization of the assistant is close enough to the actual workflow it's able to augment. The distribution problem of doing that with a verticalized set of assistants is almost impossible to solve. How are you going to get that deep into that function at a time when budgets are tight and decision-making on which technology is going to be a fit is sometimes complicated? And sometimes that's where the performance gains are the most obvious. One of our users has seen something like 8,000 hours a year shaved off two workflows for
an expansion into a country where they decided not to have a full-time team. Sparing you some of the boring details: the ability to review websites, compare them to incorporation documents in a foreign language, and have a policy checker that made a certain number of checkpoints very clear to the agents reviewing the accounts, all in a language and a geography that none of these people were yet familiar with, because they were really exploring the country. Immediate gains: very easy iteration on the first version of the assistant, two weeks to launch it into production, rolled out to three human agents who were then assisted by these assistants, and their CTO sharing, "We're seeing north of 600 hours a month." I'm thinking our pricing is terrible, but what I'm excited by is that that case could not have been explored or discovered with a verticalized sales motion, because I just don't know how you get to that fairly junior person on a specific team and are actually able to pitch them and deploy that quickly. Whereas if you have that common infrastructure that people understand the basics of: not everybody knows how to do a pivot table, but everybody understands that they can just play around with the basic things and probably get help from somebody close to them. That's the other thing we've seen: the map of builders within companies, this heat map of people. What's amazing about it is that it's people who are just excited about iterating, exploring, and testing new
stuff, which I think correlates well to high performance or high potential in the future. It's like dust is heat-seeking for potential and talent across your teams, because the people using it the most are the people who are most comfortable saying, "I don't feel threatened by something that's going to take the boring and repetitive side of my job away from me. I'm excited to have that go away and focus on the high-value tasks."

I think for the first six months I was one of the loudest voices asking, "What is the main use case?" I think you guys heard that many, many times, and then eventually I realized this is a primitive. We're talking about spreadsheets; you could talk about, frankly, a Word document, or an office suite. When I interface with dust, I think about it like Slack, except I'm not slacking my colleagues, I'm slacking assistants, and they actually do this kind of work for me, and I can show them the kind of work. So it feels, Pat, to your point, something like a spreadsheet meets the ergonomics of Slack, in that it's brought to me as opposed to me having to go to it. It took me a while to get there, and now I see how the fragmentation is the power of what you're going after.

And Gabriel, I have a quick question on the psychographic of your user, because of your comment that it's like heat-seeking for the people who are ambitious and innovative. I don't know if you have a name for them, but let's call them the makers: the people who are not
afraid to try new things and build stuff. Have you come up with a systematic way to find those people, or do they tend to find you through word of mouth or some other way? Because LinkedIn profiles don't say "Gabriel, maker," right?

I think it's a super interesting question at a couple of levels, but our motion is dual, right? So, thinking about the things that predict a great outcome with dust: I'm coming out of a call, trying to think about what was most powerful about this call I had yesterday with the Chief People and Systems Officer of a company, who could not stop interrupting me five minutes into my pitch: "Yes, I did a talk on this. Yes, I've already written about this. I've got a blog post on this. Okay, when can I demo? Where do I put my credit card? I'll call you next week." The top-down motion is enthusiasm and optimism about this technology changing most things for most people who spend most of their days in front of a computer. You need that; it's a necessary condition, because I think it unlocks three things. One, it unlocks the belief in a horizontal platform for exploration; two, the ability for security to be in support of the business rather than a blocker; and three, genuinely, sometimes, example-setting. We have founders and leadership teams that are asking, "How have you augmented your own workflows last week?" Leadership meetings are being asked, at offsites, how they're going to get better at answering some of their teams' queries faster with dust. And so
once you have that, then you have the right sandbox, I'd say the right petri dish. I don't think we've fully cracked builder identification, so right now it's more like bait: the product is incredibly easy to use, anybody can create an assistant even if they haven't been labeled a builder by their organization, and it's just the sharing capabilities of their assistants that are somewhat throttled. But we can see, from the way people explore the product, create assistants for themselves, and share them with their teammates in a limited way, a great predictor of that type of personality. And if you asked me to look at LinkedIn and predict who is going to be in that family, I'd say the number one discriminator, to a degree (it's a bit ageist), is people who are maybe earlier in their careers, who have a mix of tasks they obviously know they can get an assistant to help with, so they have use case one just laid out for them; people who have repetitive tasks; and people who've scripted their way out of a lot of repetitive things before.

Just to be explicit, we had this conversation and I think it's okay to say: it is people under 25. As we were saying yesterday, the power users, the people using this all the time at these companies, are the people under 25, because they aren't set in their ways. And to be explicit, that doesn't mean everyone; you can be 70 and constantly innovating in a new way, but in general they don't have the patterns that
they've been set in. And by the way, that's true of a lot of the next generation of productivity tools. Notion, which Pat works really closely with, is an under-25, power-law type of business, and the teammates here under 25 keep pushing me to transfer over to Notion. It's just a different type of thinking, and it feels like a very similar motion at dust.

Yeah, I think the one thing we have that is useful is that the immense B2C success of ChatGPT, as a now obviously world-famous product, has made it really easy to set up pilots by just telling teams: you know what, send a survey out, ask people how often they've used ChatGPT for personal use in the last seven days, rank in descending order, and that's your pilot team. Those are the people you want to have poke holes and kick tires. Because we've asked the entire world to move from calculator technology, where you punch the same keys and get the same result, to stochastic technology, where you ask the same question and get a slightly different result. This has never happened before; this is the biggest shift in the use of the tools we have since the advent of the computer. We're asking an entire cohort of the workforce to move to a stochastic mindset, and the only way you get that is by having a risk-reward ratio you're comfortable with. It's like: you know what, I'm not asking it to be right 100% of the time; I'm asking it to give me a draft that saves me time many, many, many times over, and that distribution of ROI is something I'm comfortable exploring and iterating on. And I think that is really one of the predictors we see in people who've tried ChatGPT, or in people who are just curious about new technology: they expect that some of it's going to be a bit broken, but the upside scenario to them is so clear and so 10x that they're willing to make that trade-off, or take that local risk, to get things started.

So you guys have a lot of very strongly held beliefs,
internally and externally, and the good news is you've consistently been right about those strongly held beliefs. You've named a few of them. You talked about this shift from deterministic to stochastic way before it was mainstream. You talked about rasterization and vectorization; that can be unpacked if you'd like, and would certainly need unpacking on the show if we go down that rabbit hole. You talked about no GPUs before PMF, right? Can you just walk through some of the beliefs that dust lives by? They can either be philosophical, as a couple of these are, or tactical, like the no GPUs before PMF.

Yeah, the first one is really the continued belief that focusing on product is the right thing to do, because it really feels to me like we are only scratching the surface of what we can do with those models. Right now we are starting from the conversational interface, which is why you used the Slack analogy, and I truly believe that the Slack analogy will not sustain over time, because the way we interact with this technology will change. It started with the conversational interface, but it will end in a very different place. Basically, those models are kind of the CPUs of the computer, and the APIs and the tokens are really the Bash interface. What we're doing right now is merely inventing Bash scripts. We have yet to invent the GUI, we have yet to invent multiprocessing, we have yet to invent so many things. We are really at the very beginning of what we can do from a product standpoint with that technology, whether
it evolves or whether it stays like this.

Yeah, one word that I think is going to be important, and that I feel recent news has actually helped confirm, or is an interesting new drop in the bucket for, is this notion: one of our product mottos is augmenting humans, not replacing them. And it's not just the naive version of saying we're not here to get people fired. It's really that we think there is a tremendous upside in giving people who will still have a job in 5 to 10 years' time the best possible exoskeleton, and it's a very different kind of company conversation and product conversation to ask, "How many dollars are we going to take away from your opex line next year?" versus, "This is the number of latent opportunities that you are not able to explore as a business, because your people are dragged down pushing stale slideware around, or not even knowing what dependencies they have on the rest of the company. This is how much friction you've imposed on the smart people you've spent so much money hiring, because half of their day, or part of their week, is spent doing things we should literally not be talking about in 2024." So that's one.

And the thing that comes back to the drop in the bucket: you've been saying that from the beginning, Gabriel, and in the beginning you didn't use the word "productivity"; you didn't want to use that word. I wonder if that's shifted, and if so, the nuance around why you chose not to.

There are two terms I was hesitant on. Productivity
to me sometimes feels like an optimization, when really there are two ways to be productive: there's doing the same things, and there's doing just better things. And I think the mixed effect of productivity is enshrined in effort versus impact. At the end of the day, your boss is never going to be mad if you spent no time doing the things you were assigned but brought in the biggest deal for the company; nobody's actually going to comment on that being a bad decision. Because I think the more you grow in your career, and the closer you are to the leadership of a company, the more you realize it's not about the effort, it's really about the impact. And the impact sometimes comes in unplanned, completely left-field ways, where it's like: of course we needed to focus on this, and it's clear in hindsight, but you need to free up time, space, energy, and mental, cognitive space for that.

The other term was enterprise search. I just feel enterprise search is one we didn't want to put on the website, because retrieval of information is obviously a use case people get very excited about very quickly, but we're just very convinced that looking for the document is a step people are not particularly passionate about. Nobody wakes up in the morning so happy that they're going to find the right document on the first try. People just want to get their job done, and it just so happens that using context from three different documents across seven data silos helps them get it done faster or better. So I think the search bit is just never the job to be done. Nobody really wants to search; they want to complete, they want to prove, they want to test. The search bit is a step we think will get abstracted away, and going back to Stan's point, I think the interfaces and experiences we have with this technology will quite quickly try to forget what the original data source was, potentially, once we've gotten over the
trust hurdles that exist today. The thing this all comes back to is collaboration: collaboration between human and non-human agents. And I think Projects by Anthropic is an amazing example here. We thought about co-edition last summer; we had an amazing intern with us who spent their time working on a co-edition interface: how do you chat with an assistant to make something you're thinking about better, whether it's an app, a project, a document, or a script? This is something the recent release by Anthropic has made very palpable to many more people. That, to me, is the interface and the interaction we need to get right, and that will be the future. So we say augmentation, and we'll stick to it, because I think it really helps us focus on the interfaces that help humans and non-humans make progress faster. It's going to be about proposals: how do I get a human in the loop with a proposal written in just the right way for them to decide whether to swipe left or right on it? It's going to be co-edition: how do I make the language of the human in front of the assistant as easy to interpret and as foolproof as possible, so the final project moves into its final form as quickly as possible? So you need that interface, that interaction between the agent and the human, and you forget that when you
replace too quickly. When you focus on just replacing and removing, you've built something that is essentially fire-and-forget. And you'll see the gains, you'll see the dollar gains, but if you've automated 100% of your customer support tickets, you still need the insights about what people are pissed off about. You still need to understand, and have your finger on the pulse of, why people are stuck; otherwise you're slowing down your product development efforts, and product development efforts today live and die by some of the comments coming in from support tickets. So making that problem go away, maybe cheaper, sure, but also maybe invisible and harder to connect to, is not, I think, a super long-term view of how your product and business are going to serve your customers best, because you still need to think about the ultimate interfaces that will enable the decision-making that makes things better, strategic, and the best option for your customers in the future.

So: keeping the human in the loop, always.

I mean, one way to say it is that it is human-driven. The whole point of all this technology we are building is to serve humans better, and as soon as you remove that, you've made a terrible mistake, because someone else is not going to do that, and they're going to have a better experience with customers and employees and stakeholders, and then they're going to win. Now obviously, there are scenarios in which you're going to catch me
and say, "You know, for this one, we know that humans get it wrong way more often, so we should obviously replace them." This is a complex, nuanced problem, so I'm sure there are certain areas where pure replacement has fully understood value with no negative externality. But I'd venture that we're pretty poor at modeling where value is created and how it's funneled through the parts of our company today, and economists have been great at showing that when you don't price negative externalities well, we end up in pretty messy situations. So this is the question I pose to leaders who ask what they should automate first: well, I don't know, which parts of the company do you worry about the most? And often I find that CEOs are panicked about what their customers say in support tickets, so making that problem go away, making it less visible, might be great for some opex conversations and for your stock price, but could have unforeseen consequences if you haven't funneled it through in the right places. But also, I think there's so much more to do than shave 3% off your balance sheet. The spectrum of opportunity you're giving your team, if this technology is in their hands and they're able to come up with ideas, is broader than just firing people. And I'm not saying you shouldn't do that; I don't want dust to be perceived as naive in this ecosystem, where the disruptive nature of this technology is going to take some people's jobs away, because those jobs were being done by humans for lack of a better alternative. In certain situations you could see those jobs as having been created, as having been framed the way they were, because we were waiting for the robots. But I don't know that that's what leaders of companies are excited by. I think the upside, the future, the way in which we need to be resilient and antifragile for what's to come and
what our competition is going to come up with: those are the places where, I feel, energy and support should be funneled to support teams.

You guys are second-time founders. You started your first company over 10 years ago, you were an early acquisition of Stripe, and you were there super early on. What have you learned and done differently this time as second-time founders?

I think really understanding that a few explosive bets are more likely to get you anywhere meaningful than over-optimizing too early on something that is still meaningless in the market. That's one thing we think about differently: exploring versus exploiting, and all those frameworks. That's one. Another is the trust and empowerment you give to your team. I don't think we were against it before; it's more that we were clueless about how much more empowering you could be. One of the best words from my Stripe years was "paper trail." You'd have two people in a corridor have a conversation, and then one of them would take the time to just write a paper trail in Slack or in a document: "You know what, we just had this exchange and we've moved the needle in this direction." It saved other humans the time and effort of going into a meeting room or figuring out that this decision had been made, and it feeds a graph, a network of trust and respect for your co-workers, that is, I think, second to none in how you can then just achieve more as a
team. So culturally you need to push that to begin with, because people who are earlier in their careers, especially, will not always feel comfortable with how information should be shared; that's one place where leading by example is important. Another is big markets that you really believe in for a long time. We loved the technology when we started our first company 12 or 13 years ago. I was like, this is great, this is amazing, these are QR codes, everybody's going to use them. And it's like, no, you have to wait for a pandemic to sell QR codes; okay, I'll do that next time. So falling in love with the technology without really, fundamentally understanding how big the business could be if it's successful, and asking that question early and unabashedly, is one thing I feel is different. What we kept is our experience together. I think it's a natural advantage to have built a company with a person, because you've explored everything: the beauty, the terrible, the joy, the pain, and
you know pretty much the entire API, in and out, and that enables much more efficient co-founder interaction and collaboration. I think it's a really big, unfair advantage.

I think the biggest thing that is completely different for me, and that Gabriel mentioned, is about empowering people. As a founder, it's on you early on to build, and to create that initial spark, but then, for the sake of the company, you are not the one who has to build; you're the one who has to create an environment for people to be empowered to build those things and explore and create new stuff. The best value you can give... I don't like the word "leadership," which was coming to mind; it's not necessarily leadership, it's really guidance, trying to create an environment where everyone has the chance to do what they want, but in a guided environment, so everything works as a whole. That would be the biggest difference, and something we learned at Stripe the first time around.

So guys, let's move to a lightning round. We've got a couple of questions for you. All right, lightning round question number one. Stan, you share predictions for where the world of AI is going on Twitter from time to time. At this moment, what is your top contrarian prediction for where the world of AI is going? And don't give me a little bit of this,
a little bit of that; let's hear a point of view. What's your top contrarian prediction for where the world of AI is going?

It's a lightning round, so I have to say something. It's going to be tough. I think we're on the verge of, or already in, a pretty tough period.

How so?

The excitement will go down. Maybe it'll take time to get to the next stage of the technology. There's tremendous value to create, but people will not see it yet, and it'll take a long time for it to diffuse through society. So there is a massive amount of value to create, but we may have tough times in front of us.

All right, short-term pessimist, long-term optimist. All right, lightning round question number two, and this is for both of you: who do you admire most in the world of AI?

Ilya is just incredible. I've had the chance to work with him; he's one of my favorite people in AI. He's extremely smart, but
he's not just a genius builder; he's a genius leader, a visionary. Another would be Andrej Karpathy; I actually don't know him, but I admire him a lot. And in terms of pure genius, I think it's Szymon and Jakub at OpenAI. They have crazy last names, so I'll let people look them up, but Szymon and Jakub.

I'm impressed by those who've been around for a while and are acting as good resistance and capacitor elements in the system: providing the friction to remain optimistic, but cautiously so. To me, one of the first things, and I can't remember if it was a tweet or a podcast or an article, was hearing Yann say something like: we can make pretty good decisions with a glass of water and a sandwich, and these things require power-station-sized data centers and are not making great decisions on some things, so we feel something is missing. Elegantly putting that back into perspective has been interesting to me, because it's hard not to cave to the hype. In some ways, pushing for a simple ideal like being open, which I think Yann is doing quite aggressively despite that not always being the easiest decision, and also saying that we probably haven't solved everything, is nice. And from my personal experience, the researchers who have worked for or with him have learned and taken quite a bit from that. So that, and, it's maybe not very French of me to say, but a touch of modesty, a touch of temperance, is something I've appreciated in my discovery of the generative side of artificial intelligence, after 10 years of just doing prediction and classification for fraud, risk, and onboarding at Stripe, and healthcare claims management, and things like that. It's nice to feel there are some people who've seen a lot, done a lot, and are questioning rather than affirming.

All right, that brings me
to the third and final lightning round question. You chose a Frenchman as your most admired, Gabriel, and dust is proudly made in France. Paris has been an epicenter, certainly an epicenter, for all things AI. What's your take on the Parisian ecosystem, and what do you want to say to the French founders listening to this podcast, other than that we're doing it in English?

It's their fault, not ours! Yeah, I think the French ecosystem is awesome, because compared to where it was 12 or 15 years ago, when we started our first company, we now have talent: there's been a generation of scale-ups that went through the market and trained all that talent, and most recently that explosion of AI talent as well, which is super exciting. So it creates a pool of talent with the right skills to create incredible companies. Obviously, tackling the US market from France is a challenge, and that has to be taken into account, of course.

I think if you have ambition, there's a lot more to do, as long as you're not naive, because there are still some realities: you can fight some aspects of narratives, but you can't fight gravity, or at least you shouldn't; you should probably work with gravity way more than you should fight it. But there's a ton more we can do, and I think we have to behave a little more like tech countries like Israel, mixing ruthless ambition with
a recognition of where talent is, how it's already connected, and its high-trust connective tissue, which I think is a great catalyst and accelerant in making great companies happen, with a recognition of where the markets are, where people are buying, where people are paying, and how quickly people are making decisions about shifting to new technologies, especially in this space.

I think the biggest advice is: as a French founder, if you've always been in France, you have that feeling that something magical must be happening in the US, that there must be something special about those people. Well, I'll tell you, I've been at Stripe, I've been at OpenAI, and I'm working with Sequoia: these are all normal humans. They don't have any magical capabilities; they're just like us. So it's really important to be ambitious and believe strongly that you can make it, you can do it, whatever it is, from France just as well as from the US.

Wonderful, that's a good place to end it. Thank you, gentlemen.

Thank you guys.