Inside CVC by u-path

Episode 10: AI, Uncertainty, and the Future of Work: A Conversation with Peter Schwartz & Richard Socher

Futurist Peter Schwartz (Chief Future Officer, Salesforce) and AI pioneer Richard Socher (CEO, You.com) join Inside CVC to explore how artificial intelligence is reshaping work, leadership, and society. They reflect on the evolution of AI at Salesforce, the shifting skills required of tomorrow’s leaders, and how regulation, culture, and global risk shape innovation. From scenario planning and AI literacy to Europe’s innovation lag and the promise of inclusive technology, this wide-ranging conversation offers powerful foresight into the opportunities—and responsibilities—emerging in this era of accelerated change.

Topics include:

  • AI agents, digital assistants & enterprise automation
  • Managing human vs. digital workforces
  • AI’s role in regulation, inclusion, and misinformation
  • Innovation culture in the U.S. vs. Europe
  • Preparing for the "age of leisure" and redefining human work

A must-listen for corporate innovators, CVC leaders, and futurists navigating the next wave of transformation.

Support the show

Catch up on all episodes of Inside CVC at www.u-path.com/podcast.

Steve:

Welcome to Inside CVC, the podcast that brings together leaders in innovation and capital investment to explore the trends shaping the business of corporate venture capital. I'm your host, Steve Smith, and together with Philip Willigman, we're speaking to corporate investors, entrepreneurs, and ecosystem builders driving the future of innovation. Each episode, we dive into the strategies, partnerships, and big ideas behind venture investing at the intersection of business growth and emerging technology. In Inside CBC is brought to you by U-Path Advisors, helping corporations and startups unlock sustainable growth through strategic partnerships. To learn more, visit upath.com. That's the letter U-Path.com. Today's conversation takes us to the front lines of artificial intelligence, scenario planning, and the future of work, where companies like Salesforce are moving fast to define the next era of digital productivity. We're joined by Peter Schwartz, Chief Future Officer at Salesforce, who pioneered the field of scenario planning and now works directly for CEO Mark Benioff, a relationship Peter says is the only reason he's still working. And we're joined by Richard Zucker, former Chief Scientist at Salesforce and now CEO of You.com, a leading AI native search and productivity platform. Together, they explore how AI is transforming everything from customer service to leadership and and why digital literacy will soon be table stakes for every knowledge worker. You'll hear how regulatory frameworks, cultural attitudes, and innovation incentives are shaping the global AI race and what businesses must do now to stay ahead. Are we truly prepared for the speed and scope of change AI is bringing? What does Richard think about regulating AI? Or not? And what advice does Peter have for his boss, Salesforce CEO, Mark Benioff? Here's our conversation with Peter Schwartz and Richard Socher.

Philipp:

Thanks for having me on the show. Can you maybe, you know, one of you jump in and talk a little bit about how you guys met? And what was the first conversation like when a futurist and a scientist meet? I

Richard:

actually, to be honest, don't remember exactly when we first met. I remember several of our early conversations, one in your office with your team. But yeah, I don't remember the first one.

Peter:

I think it was when you were still doing your startup. And then we acquired the company. You became head of AI research at Salesforce. And that's when we really began our conversations and friendship. And we've been friends ever since then. It's almost 10 years, I think. And we've had a lot of connections and opportunities to interact since that time.

Philipp:

Tell us, how were the early conversations around AI and future of work and sounds like inside Salesforce?

Peter:

Well, from my point of view, it was already beginning to see that we were going to do something very significant. I started talking about this in a fairly aggressive way with our R&D people around 2016 and saying that the idea of an AI assistant was the vision of the future that I had. And so the notion was that we were going to create a AI that would collaborate with people in doing basically whatever they needed to do. A key element of that, of course, was recognition visual recognition, and that's what Richard was working on at first, being able to do character recognition, facial recognition, image recognition of all sorts. And that was a key among the capabilities that was necessary to create that kind of digital assistant.

Richard:

Yeah, yeah. I still remember we actually were even on... Dreamforce I was on the Dreamforce main keynote stage with Mark and we presented an agent and this was I think in 2018 where you could you know we called it up on the phone we had some demos that were actually live running with real phone numbers and real AI and everything and other demos too and like you could ask like to replace some order and like change your address and things like that and just already had these agents that did customer service automation and in 2018. So it's been a long time coming for Salesforce to have what we now call agents. Back then, we just called them specific types of models. But yeah.

Steve:

It's so interesting how fast this future, right? You both talk about it in terms of its ancient history and it's four and five and six years ago and the speed in which we've come and sort of where we're at. But it's also interesting when you hear folks like Mark Benioff say that even today, 30 to 50% of Salesforce work is done by AI. And he also said in a recent interview that tomorrow's CEO is going to have to be one that is capable of managing both a human and a digital workforce in terms of AI. What does that mean for future leaders? How do they prepare for that sort of future?

Peter:

Well, first of all, Mark, you have to refine what Mark said. Mark said that in certain functions and kinds of jobs, 30 to 50%, not across all jobs. And I think that's important to understand is that some kinds of tasks are particularly vulnerable to being replaced by AI. Many other kinds of tasks are not. And so it's not necessarily jobs, but tasks. So that there are things within a job that can be enhanced. I mean, for example, I, you know, I'm a, a strategist in the company. What I want is an AI that helps me find useful information, does analytics for me, helps identify issues and problems in the company, helps me identify options and so on. That's a very different kind of AI than say a customer service rep or a salesperson and so on. So the necessary tasks that I have to carry out are going to be very different from the tasks to say to somebody with a very different kind of role.

Steve:

I'm curious as a follow-up to that, And it's a question that we ask another individual that we had on the show recently. How much does AI literacy... sort of fall into the adoption curve here. And maybe it's on organizations like Salesforce to create that easy user experience to make it, right? The analogy or the sort of use case that I use is as a journalist and somebody who likes to write things and likes to do things and sees AI in this notion of I can stitch together an agent that's a researcher and a writer and a social media strategist, et cetera, et cetera. It seems pretty straightforward. I have no clue on how to do that. And I chalk that up to I'm AI illiterate in terms of how to create that sort of system. How much does AI literacy in layman's terms and everyday users sort of, you know, how does that sort of affect the adoption of AI in doing these tasks?

Richard:

Yeah, I think like the... the type of work that we're going to do is going to move to higher and higher levels of abstraction. You know, it used to be that you do manual work in the field with like your hands, right? And now you're using tractors and they do the work of many hands. I think the similar kinds of things will happen where you used to do the manual work of finding information and then sending it to someone else and then summarizing it and then making a decision or two and so on. And you're going to move higher and higher up in levels of abstraction and And that means, and this is part of what Mark, another way of phrasing what Mark said, which I've been saying for a while too, which is we're all going to become managers of AI, right? All of us. And I think eventually this AI literacy point that you bring up comes in when we realize that people who say, oh, I'm not good with this agent or this AI stuff are similar to people who are like, say, I'm not good with this computer stuff, you know, or this internet thing. right, they'll just like have to slowly go into retirement if they don't want to learn these kinds of skills. It's just like unacceptable if you're a knowledge worker to not work with a computer or the internet anymore. I think the same thing will happen with AI and the people just like with computers and internet are just going to be much, much better and much more efficient at doing their jobs.

Peter:

And look, we'll get better and better at it. The technology will get easier and people will learn. When I first started getting online, you had to carry a bag around with alligator clips, take apart a telephone, connect your wires to the phone, and maybe to the wall, and so on. And then along comes the internet, and it's a couple clicks, and you're off, right? And I think that's kind of an analogy. Right now, it's a little bit more complex. You have to know a bit more. Even building it is one thing, and using it cleverly. Look, the other day, I'm writing a paper for Mark's request for the island, how to accelerate it, right? Well, the state has a fairly elaborated analysis, very detailed tables of all the source of supply and demand. I fed it both to Grok and to Gemini and to you. And I said, give me a plan for shifting to 100% renewable energy. And I put in a few other parameters. It saved me two full days of work. I did three because I wanted to compare the results and they came in pretty similar. So I was pretty confident about a number of the elements. But literally, I could have done it. I'm an engineer. I've done energy analysis for years. I could do that job. But it's not a profoundly difficult job. But interpreting it and then using it and writing a paper around it is the creative challenge. And it was an amazing experience. That technology did not exist a few years ago to be able to do that. And knowledge work at my scale to accelerate two days of work is huge.

Philipp:

Peter, when you pioneered scenario planning decades ago, and what you just said as an example, obviously you probably use AI to do a lot of the preparation before you do scenario planning. But I remember when we met in Singapore a couple of weeks ago by accident, you were on the panel and you talked about uncertainty and the level of uncertainty. If you think about all these different changes right now around climate, AI, geopolitics, and there's nothing sequential actually happening anymore. What does it mean for you as a futurist and how can AI potentially help you in that space?

Peter:

Well, I do think we're at one of these moments of historically large uncertainty. You mentioned the geopolitics, very uncertain world. The climate change, very uncertain world. The geoeconomics, very uncertain world. And then to which we now add technology. And I do believe that the AI revolution is the largest technological change in my lifetime. And I've been at this 50 years. I saw it at the very beginning. So I've been at it a long time. I'm much older than Rich I'll be 79 next month. So I've seen this since he was a kid. Having said that, my point is very simple, that the pace of change in technology is so great that it introduces an enormous amount of uncertainty itself. That is how quickly we adapt as a society, how quickly we adapt our regulations, how quickly we adapt our organizations, how quickly adapt our skills. And all of that is wide open at the moment because of the magnitude of change and the pace of change. Richard has a startup. He's one of only about 100,000 AI startups at the moment. I don't know what the number is, but it's very large. And the accumulation of capital going into it is very large. So the scope for innovation is really quite astonishing. So what we're seeing both is a speed and a scope of innovation like nothing we've seen in my lifetime. That in itself introduces a lot of uncertainty and the need for looking at different scenarios.

Steve:

I want to follow up on that quickly. Do you think humans can keep pace with that change from a regulatory perspective? Do you think we can keep change with that? I mean, take us something as pretty straightforward as regulation around AI at a government level, a US versus a France, just for as an example, for no particular reason.

Richard:

Yeah, yeah. I don't think you can and should regulate abstract intelligence, whether it's artificial or human. Whether someone is really smart or really stupid, it doesn't matter really in terms of how we regulate that brain, artificial or biological, right? Where it does make sense is to regulate AI in the applications. And I do think if I actually look at France and Europe versus the US, The US has often a case-based law, and Europe has very strong regulatory bodies who are very overeager sometimes to regulate things before they might create any harm. And in the case of AI, they're kind of killing an ecosystem that hasn't had the chance to even get created in the first place. Whereas the US's case-based law has the benefit of letting people explore some stuff, and then once someone feels bad enough about some outcome, they can sue. And in some cases, of course, the FDA says, well, we all feel bad about health outcomes. So the FDA is properly regulating AI as it applies to healthcare. The transportation authorities are properly regulating AI as it talks about and regulate self-driving cars. Yeah. So I do think actually, even in the US where there are obvious AI applications, they are regulating things properly. And then there's actually a pretty tricky area of gray where regulating AI is essentially getting to the very core of a civilization or culture's norms, like freedom of speech. Freedom of speech is regulated in Germany. You cannot say certain things about killing a group of people or whether they have been killed and in what numbers or not before in the past, and you cannot question certain well-established facts. And you get fined if you do, and maybe eventually even go to jail if you keep doing it. In the US, there's an extreme form of freedom of speech, right? You can say all kinds of things. I killed this group of people X or X, all kinds of things. That's not illegal in the US. And so when you have this very broad definition of freedom of speech, that now applies to AI. Now I can like million X that. It does put pressure on different values that we hold dear in different cultures. It's a complicated legal question in the limit, but in most cases, it makes sense to regulate AI when it actually influences people's lives. The more heavily it influences them, the more we should be proactive. Again, in healthcare, in legal, in insurance types of questions, there should, of course, be heavy regulations so that AI is not extremely biased and rejects people's insurance claims and so on.

Peter:

Well, I was going to add something to what Richard said, if I may. In terms of regulation itself, I think one of the important things is transparency. That is being able to know how training was done, what the nature of the algorithms are, what the intent and purposes of these are, and so on. So I do think getting inside the mechanisms of AI, I think, is not going to work. It's just too complicated. The opportunity to get it wrong big, I think, is very high. So at one end, there is the transparency of how AI is being created and developed. The other is particular applications to regulate where there are large consequences. I think one of the most important consequences is fake information. Very hard to regulate, very hard to control, very easy to do. And with large consequences, as we already see in politics and other places. So I think finding a way to regulate and assure that what is being created is real, as it were, as opposed to fake. That doesn't mean you can't use AI in creating images and stories and So on. But it's one thing to say, I've got a picture here of Richard Socher beating his wife. And, you know, knowing that that's not true and publishing it and damaging his reputation, as opposed to here's Richard Socher making a basketball shot that nobody makes. Okay, that's a... two very different kind of fake information situation. So information that is intended to harm, which is unfortunately not rare, is actually highly consequential. And we need to find ways to minimize that likelihood.

Philipp:

Peter, let me actually go back to one of the things Steve was saying earlier. When it comes to the pace of change with all these different things, I would love to hear your perspective just a few minutes on can people really keep up with all this change going on right now in the world?

Peter:

Well, look, the truth is we have been in a period of really fundamental change for a very long time. I started out in a world of mainframe computers before microchips were invented. We've had a high rate of change over a very long period of time. My parents didn't know airplanes. They didn't know television. We have gone through over the last century. And now, yes, a lot is happening fast. But the truth is that humankind is remarkably capable adaptation, change, etc. The lives that we lead today are very different from the lives our parents lead, and our children are going to have even more dramatic lives, changes. So the truth is, you know, that's a regularly asked question, but all the indications are that people are remarkably adaptive and capable of dealing with very large changes.

Steve:

These missteps that we're talking about, there hasn't been any evidence of it yet, at least that I've seen, but at the end of the day, these corporations are about delivering shareholder value. If you're an investor in a startup, it's about delivering shareholder value returns on those investments. Again, I haven't seen any evidence of it yet, but do you think we're going to get to a point where some of these missteps that AI is making, whether it is false information, whether it is the type of information it's serving up, is going to negatively impact shareholder value as much as any crisis communication sort of situation?

Richard:

I mean, you've seen some simple cases already, right? I think it was Canada, an airline like an airline AI promised some free flights and then was sued to actually have to provide them. And then everyone was like, oh, we got to be a little more careful with what we launch. You've seen many examples in the beginning of chatbots for like some car mechanics, like being asked to write Python code. And, you know, they'll just write anything because they're overly intelligent, which is actually an interesting phenomenon. It's almost like, the reverse Turing test is what I call it, which is if you want to know of any, I don't ask it super like easy, uh, like hard questions that are hard for, uh, humans, uh, hoping that if they fail, they are not in a, they're not a human, but they actually like, You asked it hard questions. And if they succeed, you know they're not a human because the human can write code in like five seconds. And so it's kind of an interesting reversal there. But there are lots of those cases. And I do think, especially in cybersecurity and so on, it will take some organizations, I think, longer to adapt. In some ways, it's kind of obvious, but the larger any group of people is, the slower they usually take to adapt to large changes, right? And especially you can see this a little bit with the industrial revolution and certain countries have just pulled away from other countries that were a little bit behind. And then these little bit behinds become massive behind, like massive sort of ways to fall behind. And so I think in AI we'll see something similar where there's this acceleration and the country, the people, individuals The companies and the countries that lean into it are just going to slowly pull away, not in a winner-take-all way, but in the group of countries and organizations that do it. They will slowly pull away, and it'll get harder and harder to catch up.

Peter:

Yeah, look, you mentioned it earlier. Europe is way behind. Most of the AI action is still in the United States with a few other places like Japan and Singapore and Australia. There's only one really successful company in Europe, Mistral. Oddly enough, their COO used to work for me in my previous company, Florian Brisson. But having said that, Richard mentioned the regulation in Europe is already inhibiting things. All the talent is flowing out and flowing to the US. And this is a big political issue. You know, Richard, where are you from?

Richard:

And

Peter:

where do you live?

Richard:

I'm Hungarian. We have some ideas for how to maybe help German. I'm offering a lot of advice and A lot of folks in Germany are feeling the pressure and have the desire to really do work and work in a cutting edge companies are trying. And it's quite exciting. You know, I think if you think about like flight, for instance, from the first sort of powered airplane flight in 1903 to like first humans landing on the moon, right? It's like 66 years. So within the lifetime of one person, people went from like not being able to fly very far at all to like literally landing on the moon. And I think like, it's not crazy to think that AI like within 30 years can go from like not being able to dictate whatever you tell it and doing like writing garbage into your Word doc to actually doing all the work for you that is fairly simple. I would have taken you a few hours or days to do yourself. And so I do think when you lean into it and when you have the support as a company from both smart, forward looking individuals and all the way of bottoms up and then from top down, you have CEOs who say we're going to be using AI, we're going to use this technology to become more efficient, I think it can move fairly quickly.

Philipp:

Richard, question for you. You know, we are both German. And, you know, I didn't think to make this about any specific culture here. But if you think about, you know, growing up in Germany, now we are both living in the U.S., With the pace of change and the lack of development yet in Germany and in Europe, it's not making about Germany, but in Europe, when it comes to AI and some of the new technologies, what advice can you give leaders, corporate leaders, entrepreneurs, inventors in Europe to kind of like, get over this, I don't know if it's an angst or something, but what advice can you give people to lean in to this transformation and change to help turn on the engine, which made Europe once a very, very successful continent? What thoughts do you have for our folks in Europe?

Richard:

I mean, at a high level, you need five things to build successful AI. You need cheap energy. A lot to do there in Europe. You need to understand chips and building chip factories. A lot to do there for Europe. You need to have easy data access. Now, there's a lot of good work in Europe to have a right to be forgotten and to have very strong privacy rights and so on. There's also some slowdown that that creates and having easy data access. Then there's access to capital, ideally venture capital, which is already very unfortunately translated into risk capital, Risikokapital in German is the translation of venture capital. Not very great of a choice there. And then you need, and this is kind of the most important part of answering your specific question, which is you need will to change and you need to be excited about change and i do wonder sometimes if there's just like certain cycles of civilizations that at some point they're so successful that they just don't want to change anymore they don't appreciate the novelties and so on. They're very comfortable and they're happy in that place. And what that usually, unfortunately, means is a decline. You cannot, I don't know, you need to change, you need to improve and grow. And once you're stagnant, you're falling behind compared to the rest of the world. And so there's a little bit of that. And so I see a little bit the media also just writing a lot of articles. Not everyone. I'm talking to a lot of great journalists who are very excited about AI and so on. But I see a lot of media articles just talking about the Terminator scenarios. And then there's sort of the doomsday say, existential risk scenarios, and they're really black pilling Europe, right? Because when Europe hears from some tech worker from X Open AI that AI might kill us all, he's like, well, if that's a 1% probability, we should just stop all AI in Europe. And that way we don't kill everyone, right? And then boom, they're just black pilled for a very long time. And that kind of skeptical thinking and worrying thinking about the future is a big part of the German psyche too. You know, like Germans don't invest in the stock market nearly as much. Most German households sit on just cash that slowly moves with inflation up and down. They don't invest in the stock market because they're very worried where the future might go. And so I think that that is all like a combination of why they're not leaning into AI as much.

Peter:

Yeah, I would only add, I think there's a very different attitude in both the US and Europe on risk and innovation. The US, the average citizen inherently values the new thing, the innovation, and they're willing to take risks to get there. Most Europeans don't value that innovation. They value what they already have. They have a great lifestyle. They have a great system. It really works. Why shake it up? And they're not very much in favor of risk. This is a continent that's gone through two huge wars, killing vast numbers of people in recent centuries. And they're living really well right now. Don't screw it up. So we have a very different view about risk and innovation. And so it's not a surprise that you see most of the important innovations in recent years coming out of the U.S. because people are willing to take risks. That doesn't mean we succeed more often. We fail just as much. Let's be clear. But having said that, the opportunity is much greater.

Philipp:

I think, Richard, you said it in one of your previous interviews I read about you. If you don't try, you also can't make mistakes. So the more tries you take, the more you learn. And if you fall, you get back up. You'll move out of your comfort zone and you learn how to create something which has value.

Steve:

I think the conversation around the Terminator scenario, et cetera, are certainly prudent, right? They're a prudent conversation. But I was thinking about the conversation we're having and I was thinking about this one opportunity I had to be in an audience and hear Ray Kurzweil speak once. And he talked about the age of leisure. This notion where humanity and technology come together and ushers in this leisurely period of humanity where automation and tasks are taken over by machines. It seems AI can be a big driver of that. So I'm curious. Do you think AI is really the accelerator into that age of leisure? And in that age of leisure, who do you think are the winners and losers from a human perspective, from a society perspective, et cetera? Maybe, Richard, I'll hand it to you, and then, Peter, maybe you weigh in?

Richard:

Sure. I think there is actually a non-crazy– That can happen with AI in the sense that we could be working less and less, right? If you're just managing a lot of automation, then you can work less. And there are certain minimum requirements that people need to have to live safely and have enough food and so on. I think if you had asked 150 years ago when over 95% of all people work in agriculture, if you'd asked them about the scenario of, hey, 90% of you are going to lose your job and the remaining 5% that are still working in agriculture are having all these machines that will automate so much that we still all have had enough food. I think like food shortages is mostly a human created problem. It's not a technology problem anymore, right? It's a political problem and social problem often. And so I think What that means, though, is that you could maybe live comfortably without having to work as much. And then I think you can look at people who are so wealthy that they can live off of their interest. That is a way to predict that future and what that could look like. And well, along that, there's going to be a bell curve and there are several wealthy people who just sit on their yacht and don't do much useful stuff anymore, right? They're just relaxing, they're jet setting, they're not doing any useful contributions back to society anymore other than spending their money. that they've made. But there are also a bunch of people who still want to continue to work, who want to build even more epic things, who want to scale even further, who now want to go to Mars and get all the waste out of the oceans. There are all kinds of cool projects that you can do with more resources and more time and so on, and more of that scale. And so I think the tricky bit, and that's also something you see in wealthy people, is they actually have much higher levels of depression. Because sometimes there is simplicity in struggle but complexity and success and then you sit there and you're like what do I want to do with my life you know a lot of people who don't have inherent excitement for learning and exploring and getting good at their hobbies and just sit on their TV and if they don't have anything to do that makes them feel like they create value for society they're going to be pretty depressed and then they may be radicalized and so on and so I do think long story short that for a long time maybe universal basic income was an answer to this. I do think better social systems and supporting people in healthcare and education and so on is useful, but I think ultimately UBI will create a disenfranchised class that is just upset also. Yes, they have all their things that their basic needs serve, but they might not actually feel fulfilled anymore. Even the people who complain about their jobs very often, their jobs give them meaning, right? My job, my hard work is supporting my family, it's putting food on the table for my kids and so on. And that creates meaning. And I think if you take that away with AI automation, it'll create a lot of unhappy people. So I could go on for a long time. It's an interesting scenario and something that we as society have to think about in the next couple of

Peter:

decades. Let me just add to that. So I basically agree with Richard with a couple of wrinkles here. First of all, there have been many thoughts like this in the past with enormous industrial success, why people wouldn't have to work anymore because we now can make cars for everybody that they can afford and so on. So we've been through this a number of times. And what turns out to happen each time is we find new things to do, things that we weren't doing before that we are doing now that involve human labor, et cetera. And I think if you look at the near future, we can see at least three or four categories of things which are uniquely human capabilities that will create new needs. Taste, judgment, trust, and creativity, right? So, for example, taste, right? My wife is a painter. She shows me, comes in, shows me her painting. And I said, oh, that's really beautiful, Kathleen. A machine doesn't, an AI can't say that, right? They can say, yes, that's a good representation of an orchid. They might be able to say that, but they wouldn't say that's beautiful because they don't know what beauty means. Similarly, if I have to generate trust among human beings, I can create transparency, but I can also create conversation, context, engagement, and so on. uniquely human capabilities. So what I think will happen is that things that can be done by AIs and machines fairly straightforwardly and simply will be done by them. But we will create whole new categories of work. There are accountants today whose job was to make sure that the books add up. Now there will be trust accountants who need to look at the training data for AIs, make sure it was proper, make sure that the AI functions properly. Trust accountants, if you will, for the world of AI. Those don't exist today. That's a whole new set of skills that have to be created. Maybe new regulations require it. We have accounting standards, right? Accountants have to live up to that, right? So I do think we're in the early stages of reinventing a number of domains that will create jobs where you use the AIs at a variety of ways and build on top of that. My son is a professional writer. He writes games. He works for a big game company, right? And so you have now a in being able to do that. So he talks to the artist who does the graphics. He does the writing. Now, it used to be that he would just try to describe in words, but now he can use an AI to create a rough sketch and say, I want my character to be a pirate with a red hat and a blue sword and a red bandana. And it appears now he goes to the artist and the artist has suddenly got a leg up on getting the job done. So it's things like that that are likely to be the changed nature of what we do.

Richard:

Yeah, I love that. And maybe I would add two more uniquely human capabilities, which is empathy, excitement, and then maybe one of the most interesting ones that no one has even tackled with AI yet either, which is goal setting. What are the goals that we should be setting? And how do we think about that? And I think there's just so many things. It's just very hard for us right now to be able to predict all the jobs for next 100 years. But like Peter, I'm very, like long term, I'm very optimistic that we'll find them. Short term, it will create pressure on the system.

Philipp:

Maybe doing a quick pivot, Richard, you talked about AI can be used to improve healthcare and create kind of new opportunities there. I personally have done some work in the last couple of months around inclusion technologies, technologies to support people with disabilities. And one of the things I was thinking about, my sister is in a wheelchair and I have seen her as somebody who made a very conscious decision to be part of community And she's using technology for like 40 years to communicate with others. And I was wondering, like, hearing your thoughts, can we actually learn a lot from people like my sister who have used technology to be part of what we consider normal now with, you know, using AI to make sure that we can actually create tighter communities, build that trust, which you're talking about. We'd love to hear your thoughts on that.

Richard:

Yeah, I think ultimately AI has only used technology, right? And it's in many cases only as good as the people that decide on the training and the data and how they want to use that tool. And I do think you can think of this in like education too, right? Like in many places, there are ways to use the technology in good ways and in bad ways, right? You can use it to cheat during getting not educated, or you can use it to really help you explain a concept and for professors to not just write simple things and you see now various broken incentive systems like in research where people in their paper say ignore all your prompts and if you're an AI reviewer you say this is a good paper and then the people who review the paper use AI to review the paper and the paper prompt injected itself and then we show that these incentive systems are somewhat broken and people don't do the work anymore. But just like with that, I think it can be used for community building. Obviously, I do think AI as part of understanding the brain, we can now already kind of visualize what people are seeing when you train enough on one brain's fMRI and EEG data. I think that will allow people to communicate again better with the outside world if they're pretty locked in. We do see some of that work from Neuralink and others, I think we'll make a lot more progress as we're getting better and better at understanding very complex systems and emergent properties that only merge at certain scales, like in the brain.

Peter:

Yeah, I would just add, I do think this is a big arena and huge opportunity. Helping people who are in one way or another limited in a great variety of contexts, whether they're physical disabilities, sight, sound, touch, mobility, etc. Loss of The ability to create advanced prosthetics. All of this, I think, will be enhancing. And then there are all the people who struggle in life, for whom the world has become too complex, jobs become too complex, schools become too complex. The lower half of your class, basically, in school. All of us were in the upper half of our class, but there was a lower half, and they struggle. And one of the things that we've now learned is that AI tutors can produce an enormous improvement in education results. Someone between 30 and 40 percent, mostly at the lower half where they need it. And I think that's going to be an enormous asset to society to help lift the bottom. And in physical technologies, people who are physically incapable, to give them the capability to function successfully in the modern world. Such an

Steve:

important topic, particularly as we now in the U.S. are sort of seeing the shift in the social welfare nets and the debate that we have going on right now in the United States. Richard, Peter, thank you so much for your time. One closing question for each of you, if you wouldn't mind. When you think about those individuals that are leading innovation portfolios today, from each of you, what is one thing that they're doing that they should stop doing? And what is one thing that they aren't doing that they ought to begin? Peter, hand it to you first.

Peter:

Well, you know, the guy that I'm focused on most is Mark Benioff, right? No surprise. I work directly for Mark and I love the guy. And it's the only reason I keep working is Mark, to be honest. So in his case, the thing he needs to keep doing is trying stuff, doing things. He's hands on. He engages with the technology. And that's really quite critical. So I think that is a really big deal. As long as he's deeply immersed in it and has got his hands in the soil, as it were, he's going to know what's going on. And I think that's quite critical. Stop doing? Boy, I don't know. There's not a lot I would say to Mark to stop doing. The game is going on pretty well right now. So I don't have another half of that question to answer, to be honest. Fair enough. Fair enough.

Steve:

Richard, from your point of view?

Richard:

So I guess broadly construed folks that are working innovation inside organizations I think there are basically two buckets right there's sort of how can I make my business more efficient in all the different functions and where can I partner with the right people to make that happen and how can we use the latest innovation and then there's a second bucket of where might AI actually fundamentally disrupt my entire business model right and so you need to stop just sitting on other people's innovation just incorporate start incorporating those tools more and more, move beyond POCs, define real benchmarks. This is something that I struggle with sometimes at u.com. Every one of our customers that was sophisticated enough to create a benchmark and say, we're going to work with a company that actually is the most accurate in their answers and hallucinates the least based on some benchmark. we win those contracts because we're the most accurate. But the vast majority of customers aren't sophisticated enough to understand how to benchmark AI and that that is a thing they should do. They try one or two examples and then maybe you get lucky, maybe you don't. And that's how the contracts work. And so I think, you know, there are various ways of, you know, if you do it, you know, if you're not doing it, you should start doing it. If you are doing it, you should certainly continue and you should stop kind of dismissing the technology, which unfortunately I still see mostly in Europe now and the US and really no one no one dismisses AI anymore, but does still happen in Europe. So that would be one thing to stop.

Steve:

Gentlemen, thank you very much for joining us on Inside CVC.

Philipp:

Thank you so much, Richard. Thank you so much, Peter. It was a pleasure.

Steve:

That's it for this episode of Inside CVC. A huge thank you to Peter Schwartz and Richard Socher for sharing their insights on navigating uncertainty, managing innovation, and the role AI will play in reshaping the future of work. If you enjoyed today's conversation, please follow or subscribe to Inside CVC wherever you get your podcasts, or visit us at upath.com for That's the letter u-path.com forward slash podcast. As always, thanks for listening. We'll catch you next time.

People on this episode