Inside CVC by u-path

Episode 9: AI Operating Systems, Digital Labor & the End of Pilot Purgatory with Alec Coughlin

u-path Season 1 Episode 9

Enterprise AI strategist Alec Coughlin joins Inside CVC to share hard-earned insights on scaling AI beyond the memo. Alec challenges common misconceptions about “AI-first” strategies and explains why so few companies are truly AI-mature. We explore how leading enterprises like McKinsey, Blue Yonder, and Union Square Ventures are operationalizing intelligent systems—and what corporate innovators and venture investors can learn from them. From automation vs. augmentation to stitching AI agents into workflows, Alec unpacks the risks, opportunities, and mindsets needed to thrive in the age of digital labor.

Support the show

Catch up on all episodes of Inside CVC at www.u-path.com/podcast.

Steve:

What are some of those misconceptions you often hear from corporates and investors that you think really need to be corrected? Why don't we start with that?

Alec:

Yeah, that's a great question. There are several that jump to mind. One is that a lot of people are excited about the AI-first movement, especially within large corporations, right? There are the memos kicked off by Tobi Lütke and others. What's a lot harder is figuring out how to operationalize AI at the enterprise level, at scale, without falling into pilot purgatory. I was out at the Snowflake Summit this past week in San Francisco, and I can't tell you how impressed I was by so many of the talks I attended, where they were unpacking what they've actually been able to achieve by leveraging the Snowflake AI Data Cloud. So one of the biggest misconceptions, which sounds kind of obvious but is really important to call out, is that it's one thing to write a memo, to top-down it and have a mandate and what have you. It's an entirely different thing to operationalize the use of enterprise AI systems and tools within the fabric of your culture, and of course all the things in between. And I do think this calls into question a lot of traditional approaches to quote-unquote top-down mandates versus bottom-up, and I'm happy to get into any of that too. But that's the first one that comes to mind. I've got plenty of others we can get into if you want to go further on that topic.

Philipp:

Yeah, well, while we're on that, a lot of people talk about AI as a tool. But maybe let's talk about it from a strategic perspective, really as a way to transform businesses. How should corporate innovators and corporate venture capitalists think about AI, not just as a tool, but as a strategic force that changes how businesses are built? What are some ways you can help corporates think through this?

Alec:

Yeah. Only 1% to 4% of companies, of corporates, are AI mature, according to McKinsey and BCG. That's a go-to stat I use a lot, because you can ladder so many different dimensions of answering that question onto that scaffolding. To start, I'd suggest anyone listening think about artificial intelligence more the way you would about a human being. Meaning, when you first introduce artificial intelligence into a corporate environment, think of it as a really smart intern. You want to make sure you onboard that intern so it becomes a really smart, increasingly productive member of your team with two to three years of experience. Hopefully it's domain-specific, because that mitigates a lot of the risks. And eventually it progresses further into becoming a domain-specific PhD. The reason I keep saying domain-specific is that you've got to be extremely careful about the way you set your collective expectations for what AI is actually capable of. And to be very direct on this (thank you for acknowledging it; I do have a bit of a reputation for calling it exactly how I see it, whether or not folks agree with what I'm about to say): I believe you want to think about AI as the next generation of the operating system upon which a business will operate. Those that are AI mature have built that foundation, and they're starting to rip market share from those that don't have that foundation in place. To be more specific, I've written some things about an ontology-aware intelligence system that I think personifies that.
And again, going back to the comment about Snowflake, and I'm not here to sling Snowflake per se, but I am here to tell you that what they've built and demonstrated to the market is that you can build an ontology-aware intelligence system using artificial intelligence within an enterprise. And as Robby Gupta said, you've got to re-found your company, and that's the fastest, smartest way to do it. So I really think, to your point, Philipp, it is a force multiplier, but you want to be careful about understanding the risk associated with hallucinations when you bring that kind of horsepower into the mix. The easiest way to mitigate that risk is to make sure you're bringing it into an environment where you have extreme domain expertise, so you can detect a hallucination a mile away, versus doing things outside of your domain of expertise. That's where you can get into trouble.

Philipp:

And while we're talking about this at the meta level, can you give us a concrete example, for our listeners, of an enterprise, a corporate, that has used AI to really differentiate, to redesign a process or create a new service?

Alec:

Yeah, a couple come to mind. One is Lilli, the intelligent application that McKinsey jumped into very early on. I think Cohere and Aidan Gomez might be directly involved in that one. That's kind of exciting, because they've effectively taken their entire knowledge base, 20-plus years of it. You can imagine what that knowledge base looks like, right? Imagine the problems they're solving day to day, case by case, over and over again. Imagine taking all of that and dropping it into a knowledge base where all the data sets talk to each other, then chunking it up and throwing a semantic layer over top of it via a vector database. And then anybody can query it. So the next time someone walks in the door and says, hey, I've got this problem I need help with, I'm thinking about buy versus build versus partner, I'm a big corporate stepping into a new area because I just bought a company, and so on, Lilli dramatically compresses the amount of knowledge and the amount of impact McKinsey can bring to a client straight away. So that would be one. Another, and investors will know this is a little bit in your guys' wheelhouse: Union Square Ventures. I don't know if you've seen the librarian they've stood up. Fred Wilson is one of the best, one of the originals, at thinking and building in public as an investor. They've done something similar, which you can check out. Dan Shipper, who I'm a big fan of, big shout-out to Dan, interviewed the guy that put it together (I forget his name). It's basically a knowledge system similar to Lilli in that regard. Check out the interview Dan did with those guys about six months ago; it was awesome.
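The pattern Alec describes, chunk a knowledge base, embed it behind a semantic layer, and make it queryable, can be sketched in plain Python. This is only an illustrative toy: the "embedding" here is a bag-of-words vector standing in for a real embedding model, and the chunks, function names, and query are all invented for the example.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would call an embedding model.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    na, nb = norm(a), norm(b)
    return dot / (na * nb) if na and nb else 0.0

# "Knowledge base": chunks of past case work, embedded once at index time.
chunks = [
    "buy versus build versus partner analysis for a corporate entering a new market",
    "post merger integration playbook for a newly acquired company",
    "supply chain cost reduction case study",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

def query(question, k=1):
    # Embed the question and return the k most similar chunks.
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

print(query("should we buy, build, or partner after acquiring this company?"))
```

In a production system the index would live in a vector database and the retrieved chunks would be handed to an LLM as context; the retrieval step shown here is the core of the idea.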
And then last but certainly not least, sticking with the Snowflake thread, because it'll give people something coherent and deliberate they can go touch: Blue Yonder is one of the most prolific supply chain management companies in the world. Just recently they launched a bunch of cognitive solutions and some very significant enterprise AI agents, all built off the back of Snowflake plus RelationalAI, because what they've done is introduce a generative-AI-enabled knowledge graph, which, to your point, Philipp, is something tangible. Big shout-out to J.U., and I'm sure I'm going to butcher this, I hope he corrects me, but they basically took 200-or-so thousand lines of legacy code, old-school stuff, and as a result of leveraging the infrastructure I just described, Snowflake and RelationalAI embedded inside as a partner of Blue Yonder, they took the code base down from 200,000 and change to about 10,000. When you compress it that way and elevate the business logic that used to be caught in 200,000 lines of mud into this really clean, fast environment, you can imagine how much faster you can make better decisions. In this environment of global change, where folks need to pivot so quickly on supply chain predictions, scenarios, what-ifs, it's an extraordinarily tangible example. The CEO did a great job at ICON, their annual conference, about a month and a half ago, walking folks through this. Happy to share more on that, but I think that's a pretty good tangible one as well.

Steve:

Alec, automation is a big productivity benefit we talk about with AI. You mentioned Snowflake, and there are a lot of companies out there with varying degrees of AI, whether it's a Databricks, a Snowflake, an AWS. If I'm a business leader, that's a lot of different companies coming at me saying, I've got the latest AI, I've got the best AI, et cetera. It could get overwhelming, a little confusing, in terms of which is the best one to innovate with, or the best one to invest in, and how it's going to drive automation, efficiency, et cetera. I've got two questions. As a business leader with all these great platforms trying to sell me or invite me to adopt their AI, how do I navigate which ones are going to drive the best performance, the right ones to innovate with? And on the heels of that, are there certain areas of the business that you don't think are ripe for AI and automation?

Alec:

Yeah, those are two excellent questions. And first of all, I couldn't agree with you more. When I was out at Summit, one of my top three, worst case top five, takeaways was that everybody sounds the same. Right? And it's not a criticism; it's just an objective observation. It has a lot to do with how fast things have been moving and are moving, and also with our collective vocabulary. Knowing the difference between RAG and GraphRAG and all this whiz-bang stuff, it's hard to keep up. So I get it. It's a lot. First and foremost, I do have a pretty strong point of view, and I think a lot less about automation than I do about augmentation. It's not to suggest that automation isn't interesting. I just think the bigger opportunity is combining human beings and machines, these intelligent systems, to augment domain expertise. That does inherently have dimensions of automation: you're delegating stuff that's important but maybe somewhat tedious to machines that have 24/7 capacity and photographic memories. But I've spent a lot more time thinking about augmentation. That being said, to your point, I'd definitely suggest folks think about it in two ways. First, there's data infrastructure, and then there are the data platforms. You named two, Snowflake and Databricks in particular, so let's use those as the example. If I were a listener trying to think this through and saying, hey, I want to get my data AI-ready, I want to get my data right so I can do some of these awesome things, you're probably going to be looking at those two. Maybe there's a third or a fourth, but you're definitely going to be looking at the Red Sox and the Yankees.
And depending on where you're from: I don't have much to do with the Red Sox because I'm a diehard Yankee fan, but there are people who feel that way about those two, and I'll just leave it there. That being said, the elegance of one platform versus another really matters relative to the amount of technical talent and sophistication on your side of the equation. The types of engineers you're going to need to get the most value out of those systems correlates directly with how well the platform abstracts things away, how low-code or no-code it is, for the less technical listeners. So we're talking about low-code/no-code versus needing a proper engineer to build something custom. I meandered on that question a bit, Steve. But when it comes to automation, you want to automate the things best done by machines: they're on 24/7, they never get tired, they don't forget anything. But be careful not to think about automation as a race to the bottom. Because of the Jevons paradox, the cheaper things get, the more you're going to use them. So if you automate away a bunch of people in a business and think that's a great thing, it might look okay on your profitability for a quarter or maybe six months. But if your competition instead builds a system that takes 20 to 30% of those same people's workload and moves it into an environment where machines can handle it, and then, Pareto principle, the 80/20 rule, those same people get to reinvest that energy and capability into things with a four-to-one ratio on impact, I think you know what's going to happen. So I don't think as much about automation as I do about augmentation. But I think I answered maybe one of those questions.

Steve:

You did touch upon data. And I often say, in the context of data and the systems that data runs through: buy the biggest, baddest engine on the planet, run poor fuel through it, in this case poor data, and you're going to get poor performance. And vice versa: buy premium fuel, meaning premium data, run it through a poor engine, same result. I'm curious, in the context of AI, are there ways a corporation can assess whether its data is ready for AI?

Alec:

Yeah, I think generally speaking you want to make sure you understand your data, your data is accessible, and your data is not siloed, right? It's the nature of those relationships. Data is only as useful, generally speaking, as the system within which it sits, and therefore how well it can be put to work. So for example, and here comes nerdville mode pretty quick, there's a really important distinction between descriptive analytics, predictive analytics, and prescriptive analytics. What does that mean? If my data is really well structured and accessible as it relates to BI, or descriptive analytics, that means I'm really good at looking in the rear-view mirror of my car: what happened, a good clean representation of that. That's good. But what is great is: what is most likely to happen? And even better than that: what should I do about it, in real time? Which goes back to the Blue Yonder cognitive solutions example, where they're stepping into that arena to help folks with tariff-related what-if and scenario stuff. That's where you really want to be thoughtful about getting the most out of what you currently have access to. Folks sometimes fall into the trap of, oh, I'm just going to enrich my data by bringing in all these other sources and it'll all get better that way. Instead, maybe ask: am I getting the most out of the data I have today? If not, what are the constraints, and how do I validate that, perhaps by bringing in some super-specific AI-native tools that can knock some walls down? You can validate just how far your data could go toward creating value in a shorter time span simply by unifying it, moving to a more data-centric or knowledge-centric architecture versus an application-centric one where data is stuck in silos.
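The descriptive / predictive / prescriptive distinction can be made concrete with a toy sales series. Every number, threshold, and rule below is invented purely for illustration; the forecast is a naive least-squares trend, not what any real system like Blue Yonder's would use.

```python
# Monthly units sold (toy data).
sales = [100, 110, 125, 130, 145, 150]

# Descriptive: what happened? Summarize the past.
average = sum(sales) / len(sales)

# Predictive: what is most likely to happen next? (naive least-squares trend)
n = len(sales)
xs = range(n)
x_mean = sum(xs) / n
y_mean = average
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, sales)) / \
        sum((x - x_mean) ** 2 for x in xs)
forecast = y_mean + slope * (n - x_mean)  # projected value for month n+1

# Prescriptive: what should I do about it? (invented reorder rule)
on_hand = 140
action = "reorder stock" if forecast > on_hand else "hold"

print(round(forecast), action)
```

The point of the exercise is the progression: the same unified data supports all three questions, but each successive tier requires the one below it to be clean and accessible first.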

Steve:

Yeah, absolutely. To your point, these data constructs, these data silos, that's where the platforms and applications you've talked about are so important, not only in terms of the solutions they serve up. I would also say a lot of that data is customer data, with everything that comes along with keeping it private. Philipp, I'll hand it over to you.

Philipp:

Thank you. I think the piece around data is very important. Back at one of our summits, we actually had a discussion about what AI is going to do to CVCs and what role AI already plays. And there was a pretty big debate. Some were like, no, keep the AI away, we've done this ourselves for a long time, we're the experts, we don't need support. And others were like, I do everything with AI, everything is supported. So maybe let's talk a bit about the future of investment strategy. Obviously, we've seen in hedge funds that automation in public markets has had some pretty powerful success and changed performance quite a bit. But venture is private markets, with very limited information. How do you think AI is going to change how capital in the venture and startup ecosystem is allocated? And on the other hand, how do you think AI is going to change what CVCs and venture firms look like? You mentioned agents, you mentioned a lot of process automation. We'd love to hear your thoughts on how AI is impacting the space.

Alec:

Yeah, for sure. So I think the idea of an intelligent system is very much a horizontal idea. Whether it's Lilli at McKinsey or the librarian at Union Square Ventures, and there's another group I can't talk about much because it's confidential, it's consistent: everyone tends to be thinking about it with generally the same mental model. Number one: are there workflows within our business that can be radically improved, upgraded, or just effectively automated by introducing these systems? The second bit: if and when you can do that, what percentage of the team's time and resources is now freed up from things they used to have to do? Third: how do we start doing that at scale? And I think that comes down to magic moments. I don't know if you've experienced Google Deep Research or NotebookLM, but once you have that first experience of remixing a whole lot of content, or having a bunch of agents go out and do something in three to seven minutes that would have taken you 22 hours, you start to realize: wow. When it comes to research in particular, which is a lot of investing, number one, there are a lot of opportunities there. Number two, one of the territories I love most is pattern recognition. Think about all the conversations investors have with various stakeholders, entrepreneurs, corporates, et cetera. Imagine creating a knowledge base, going back to the Lilli example, that becomes queryable, and that can also tie back to predictive and prescriptive analytics aligned not to supply chains but to investment decisions and decision making. All the information available to a system can perhaps be processed in a way that was previously either impossible or not economical.
So I think there are a lot of different territories we're all figuring out as we go, and it's very early stages. But the examples I've mentioned are good ones, including Walleye Capital. I keep mentioning Dan Shipper, which tells you how many of his podcasts I watch, and I forget the name of the founder and CIO and CEO of the firm. But if you want to hear a guy that's all in on AI, he's also a PhD and a deeply mathematical guy, so it's no surprise. It's another fantastic episode, 35 minutes. Listen to what Walleye Capital is doing with AI, and I think you get a sense for where things are going.

Philipp:

That's great. I've actually started playing around a little bit myself, using a few different AI tools and LLMs to do an intake form, then automatically take the specific company that came out, compare it to competitors and other companies who at least publicly say they're doing something similar, and then put it into a prioritized list, which gives us a good starting point when we do our investments. So I certainly can see that at some point an AI agent could become a proper team member, managed by a senior and experienced team to improve efficiency and productivity in an investment process.
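The triage step Philipp describes, comparing candidates and producing a prioritized list, can be sketched as a simple scoring pass. The companies, fields, and weights below are entirely invented for illustration; a real pipeline would have an LLM fill these fields from the intake form and public data.

```python
# Candidate companies from an intake form (toy data; all fields illustrative).
candidates = [
    {"name": "Acme AI",  "team_score": 8, "market_fit": 6, "overlap_with_portfolio": 2},
    {"name": "DataCo",   "team_score": 6, "market_fit": 9, "overlap_with_portfolio": 5},
    {"name": "AgentHub", "team_score": 9, "market_fit": 7, "overlap_with_portfolio": 1},
]

# Invented weighting: reward team and market fit, penalize portfolio overlap.
def score(c):
    return 0.5 * c["team_score"] + 0.4 * c["market_fit"] - 0.3 * c["overlap_with_portfolio"]

# Prioritized list: highest score first, as a starting point for human review.
shortlist = sorted(candidates, key=score, reverse=True)
for c in shortlist:
    print(c["name"], round(score(c), 1))
```

The value, as Philipp notes, is not the arithmetic but the starting point: the ranked list is then reviewed and managed by the experienced team.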

Alec:

Yeah, I agree with you 100%. I think we're all going to be managing digital labor if we're not already. I am, of course, but I'm a little bit out there on this stuff. I think we will all be managing digital labor in different forms. I also think we're going to have different AI agents that replicate ourselves in different workflows. There might be a general extension of me that can attend meetings and so on and help with scaling. But if you have a cross-functional role in any business, you're probably going to be much better off if, sooner rather than later, you create a specialized knowledge base and a specialized agent that becomes increasingly good at that one thing. Because the good news with agents is, generally speaking, we don't have to worry much about the cost of spinning another one up. That's why digital labor is so fascinating for those whose data is in a position to capitalize on it, because it's AI-ready.

Steve:

I'm curious on that topic. How much of a barrier do you think a layman's understanding is to stitching agents together? As an example, I'm highly interested in stitching together an AI agent that can do research on a topic, then an AI agent that acts as a writer to write content on that, then an AI agent that acts as a social media manager and puts that stuff out on LinkedIn, maybe even an AI that acts as a responder. It seems pretty straightforward. It sounds pretty cool. But as somebody who's interested, understands the benefit, and has that picture in their head, hey, I can take one of these, put it here, connect it there, I don't know where to begin stitching those things together. It seems pretty easy. Is it that easy? And how much of a barrier is that for an individual like me?

Alec:

Yeah. So going back to that 1% to 4% number, there's a reason for that, right? And the good news is that the capabilities of enterprise AI and the technology, because of all the capital that has flowed in and continues to flow in, and how many incredibly bright people are all in on this space, are so far ahead of where the average company is in its understanding, its journey, and its ability to apply them. There's all sorts of people-and-process stuff, human stuff, that you always want to be super sensitive to. But generally speaking, Steve, if you took eight hours with nothing else to focus on, I think you would, without a doubt, be able to figure this out in very short order. For example, I don't know if you're familiar with LangChain, but they've done a phenomenal job abstracting away so much of the complexity. And I love Harrison Chase and that team; they've just built in public. Everything is on their Twitter, as individuals and as a company: all the examples, the case studies, the use cases, the documentation, in a super friendly way to consume. So yeah, I think you'd be able to do that stuff a lot faster than you might think. And there are other environments too, like some of the folks we've already mentioned in the infrastructure and platform space. But yeah, it's pretty unbelievable. Be careful with the vibe coding; plastic apps are not real. They're really great and amazing prototypes, but be careful going to production with them. Still, all of it is better than PowerPoint, for sure.
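The research → writer → publisher chain Steve describes can be sketched as a sequential pipeline where each "agent" is just a function that enriches a shared state. These are stubs: a real build would swap in LLM calls and APIs, for example via a framework like LangChain, and every function name and field here is illustrative.

```python
from typing import Callable

def research_agent(state: dict) -> dict:
    # Stub: a real agent would search the web or query a knowledge base.
    state["notes"] = f"Three findings about {state['topic']}"
    return state

def writer_agent(state: dict) -> dict:
    # Stub: a real agent would prompt an LLM with the research notes.
    state["draft"] = f"Post: {state['notes']}, summarized for LinkedIn"
    return state

def publisher_agent(state: dict) -> dict:
    # Stub: a real agent would call a social media API.
    state["published"] = True
    return state

def run_pipeline(topic: str, agents: list[Callable[[dict], dict]]) -> dict:
    # Stitching: each agent's output state feeds the next agent.
    state = {"topic": topic}
    for agent in agents:
        state = agent(state)
    return state

result = run_pipeline("enterprise AI", [research_agent, writer_agent, publisher_agent])
print(result["published"], result["draft"])
```

The barrier Alec points to is mostly at the boundaries: real agent frameworks add exactly this kind of state passing, plus error handling and human-in-the-loop checkpoints, on top of the simple chain shown here.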

Steve:

Yeah, I've tried some AI PowerPoint things and they're all right, but they don't quite hit the mark. But the fact that you can even experiment with them is just crazy cool, and how fast that's happened is, I think, a signal for how fast the future is coming. From a practical standpoint, let's talk a little bit about risk. We hear about risk and AI often, sometimes the worst-case, Armageddon-type scenarios. From a practical standpoint, though, what is one risk or blind spot that you think corporate leaders may not be thinking about that needs to be top of their list?

Alec:

Yeah. I think everything should always start with: what's the worst thing that can happen? So, catastrophic risk. A subtle leading indicator of catastrophic risk would be people using AI with outrageous expectations, outside their domain of expertise. That's going to turn into a problem. It could be a small problem; it could become a much bigger one. So the first thing is to make sure you're using it in environments where you already have a pretty significant amount of expertise, kind of like what we talked about, so you can detect when it's doing something it's not supposed to. Second, hallucinations are today part of the evolution of these systems. And that's okay; they're getting better, but they're still there. If you're in the marketing game, a hallucination can be great: that's a big idea, right? Go for it. That's a great place to use it. If you're in a healthcare environment, or you're making investment decisions in real time and using it to figure certain things out, that's a different equation, so I'd be very careful in those territories. But I think it's also good to call this out, for all the doomers out there: everybody knows FOMO, fear of missing out. But somebody recently introduced us to ROMO, the risk of missing out. And I'm jumping ahead here, but the number one risk in general, which I think is very clear now, is the existential risk that businesses of all shapes and sizes face if they're not running at this technology in some dimension of what we're talking about.
Because if and when your competitors are, and they're ripping 20, 40 hours of workload out of workflows with really talented people who can then do something much more meaningful and high-impact with those time savings, I think we all know where that's going to go. So when it comes to risk, there's the subtle risk and the catastrophic risk, but you've got to run at this stuff, because there's no more waiting around. You've got to try it, you know?

Steve:

I'm curious. Last week, maybe the week before, there were some headlines around labor risk, in terms of AI putting a lot of middle managers and white-collar workers at risk. How does that fall within your risk paradigm?

Alec:

You know, I think it's critical that those types of topics are talked about transparently, by stakeholders who are transparent about their goals and their interests, right? The one you're likely speaking to, and I don't want to jump to any conclusions, I think has to do with the timing of capital raising. That also has a tendency to bring out the AGI conversation, because it tends to accelerate valuations and all sorts of other things. There's also regulatory capture to be concerned with. But to bring it back and answer your question, Steve: generally speaking, I think every leader and every company has an obligation to their company and their people to get their hands in the dirt and start using these AI-enabled systems and tools themselves, to have that magical moment where they start to understand the power and utility of these systems, as opposed to, quote-unquote, telling others to use them. Because if and when more leaders get that firsthand, hands-in-the-dirt experience, I don't think they're going to be thinking about how to automate away a bunch of jobs and cause that labor dislocation. I think they're going to be thinking: how do I help my people delegate 20, 30% to the machines, and then light it up on the other side with those savings and kick the living you-know-what out of my competition? When you don't have that firsthand experience, you're susceptible to believing certain things because you just don't know any better. You need to get your hands in the dirt. And once you do, you're like, oh, this is incredible. Let's do more of this.

Steve:

Yeah, absolutely. I want to hand it to Philipp here to wrap us up with some closing questions. But what you just described, leaders being at the front of the line, being in the dirt, evoked for me the image of a kid playing, right? And certainly the impact is of much greater magnitude when we're talking about leaders of corporations. But there is study after study showing that when we as adults do things that are creative and playful, the benefits follow. As you described it, I had this vision in my head of a senior leader playing with AI, having fun, smiling, all of these things that, to your point, have positive impacts on the organization, rather than thinking about the reduction-in-talent piece of it. That's what I had in my head.

Alec:

I'm with you, and I'll pass it back to Philipp. But to that point, your execution velocity and your innovation cycles go through the roof because you're playing an infinite game. You're not like, oh, I've got to do my TPS report. You're like, I'm going to let this rip, you know what I mean? I'm going to build the next feature faster than anybody else on the other team. And all of a sudden the hackathons kick in and all sorts of wild stuff happens, and it's like, this is awesome. Versus if you don't have the experience, you don't know. You just read what you read.

Philipp:

Absolutely. So many questions come to mind right now. A couple of weeks ago we had another guest on the show, the futurist Jonathan Brill, and we talked geopolitics and where the future is going. I just participated in a session in Singapore where AI and robotics were a big topic. And you talked about corporate risk, but there's also country risk, right? If you're not open, if you're not creating the structures and the processes, what does that mean for the economic growth of a region, for example? What stuck with me in Singapore was that the whole conversation was really about the good AI can do in the world, how we can use it to create better outcomes and design it so it's used with the right ethical approach. Sometimes, and I'm German, right, we always like to criticize and look at things with a bit of skepticism. But I do think, with some positive views, it's a ton of fun to actually play with some of these topics. I know we could talk much longer, but I want to bring us to a close with one simple question: stop, start, shift. If you were advising CVCs, what should they stop doing when it comes to AI? What should they start doing? And where should they shift their focus or their beliefs?

Alec:

I love that type of question, and that question in particular is a great one. I think you want to move away from what effectively feels like experimental, one-off, shiny-object-oriented activity. We'll just kind of leave it at that. Because I do believe that we're at a stage now where the LLMs and the frontier models are becoming so incredibly useful. The tectonic plates had been moving and shifting so fast that you didn't really know what you could actually stand on to build, but they're really starting to stabilize. I think we really know what we're getting into. And now you can start thinking about inference and next-level thinking around what you can actually build. So I think you want to think about stuff that's much more cohesive and connected, maybe do less, better, for lack of a better term. So that would be the stop, which might actually be a start. When it comes to prioritizing, and I'm biased because this is my arena, so cognitive bias, here we go: I am very convinced that data infrastructure, data platforms, and anything connected to that ecosystem are fantastic places to focus, because they're hard, they're complicated, they're very important, and they are the direct bedrock upon which you build proper AI systems and value creation. And then for the shift, I think we touched on it, and you did a great job walking through this with me, but I don't think we should necessarily be thinking about AI as a tool. I think it's an enabler. I think it's a system. It's really important to understand that the companies that are AI mature are more or less operating off of an AI-native operating system that has moved far away from siloed data and application-centric architectures, and not even just to data-centric architectures.
They're knowledge-centric architectures, which goes back to what we talked about with Blue Yonder as an example. So that would be the shift: thinking that way. Those would be the big three that come to mind.

Philipp:

And maybe to add one more, because, Steve, you talked about how we get the kid in us out again and become creative. So if you were advising a CEO, a CXO, you name it, and helping them think about what AI is going to do in the next three years, what would you tell them today? What would be your vision of where things are going? And do it in a positive way, from an AI-for-good perspective.

Alec:

Yeah, I stand by it. I know I sound like a broken record, but I do believe we can create ontology-aware, intelligent systems that enable human beings in any environment to unleash their unique human potential within that company, to create extraordinary velocity and value. I know those are a lot of business terms, but here's what I mean. I can't tell you how exciting it is when, for example, I vectorize all of the content that I've published over the last three or four years, 485 different pieces of content associated with enterprise AI, so that I now have a semantic layer over top of it and can interact with it in real time, which I can eventually offer to others as well. The reason that's so important is this: imagine a scenario, a workflow, where you could multiply by ten the wow factor of using NotebookLM, the wow factor of Grok, in particular with Twitter, meets Google DeepMind and research, with a highly specialized, domain-specific and vertical-specific knowledge base. And in real time, you could enable a team that is extremely focused on going to market within the enterprise space to touch all those different dots and connect them to a knowledge base that brings them real-time market insights into what's going on. They're going to be three to six months ahead of where the market is.
And imagine how much fun, to your point, Steve and Philipp, those people are going to have, as opposed to begrudging the idea that they've got to go do a bunch of research or read a bunch of stuff and do this, that, and the other thing. It's kind of a roundabout way of saying: three years is a long time in the world that we're in, but I think that 12 to 18 months from now, people are going to be so much happier with the work that they're doing every day, because the machines plus the humans are going to augment those human capabilities. And people are going to find the sweet spot of their circle of competence at scale. I think this is where the abundance philosophy comes from, in large part. I think what I just said is going to happen, and it's going to be really exciting. It might be bumpy along the way, but I genuinely believe we're going to get there for sure.
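[Editor's note: the "vectorize your content, add a semantic layer" workflow Alec describes can be sketched in a few lines. Real systems use learned embeddings from an embedding model plus a vector database; here a simple bag-of-words vector and cosine similarity stand in so the example is self-contained, and the corpus titles are hypothetical.]

```python
# Toy sketch of a "semantic layer" over published content:
# vectorize each piece, then rank pieces against a question.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Turn a document into a sparse term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A miniature "knowledge base" of published pieces (hypothetical titles/text).
corpus = {
    "agents": "enterprise ai agents stitched into workflows and digital labor",
    "data": "data platforms and infrastructure as the bedrock of ai systems",
    "pilots": "escaping pilot purgatory by operationalizing ai at scale",
}
index = {name: vectorize(doc) for name, doc in corpus.items()}

def query(q: str, top_k: int = 1) -> list:
    """The semantic layer: return the best-matching pieces for a question."""
    qv = vectorize(q)
    ranked = sorted(index, key=lambda n: cosine(index[n], qv), reverse=True)
    return ranked[:top_k]

print(query("how do I get out of pilot purgatory"))  # → ['pilots']
```

In a production version, `vectorize` would call an embedding model and `index` would live in a vector store, but the retrieval loop is the same shape.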

Steve:

Alec, why don't we start our next episode in 18 months with that question? Are people happier today than they were 18 months ago as a result of AI? How's that?

Alec:

I love that. I can guarantee I'll double down on that, Steve. As long as those leaders have their hands in the dirt, those people will be very happy. Those with leaders who don't put their hands in the dirt, I don't know. I would put my hands up like this. Maybe, maybe not.

Steve:

Well, that's where we'll pick up the conversation then. Alec, thank you so much. Such a fascinating conversation. Appreciate you spending a few minutes with us and sharing your perspectives.

Philipp:

Yeah, thanks. Thanks, Alec. We could have gone much, much longer. But let's reconnect in 18 months and see where we are. In the meantime, let's motivate people to roll up their sleeves, get a bit dirty again, build, and get the creative juices going. Really appreciate your time, Alec, and thanks for coming on the show.

Alec:

You bet. Thanks, Philipp. Thanks, Steve. Good to see you guys.

People on this episode