---
title: "Coffee with Crow: Building A Future Where Everyone Can Work with AI"
slug: "ben-pring-jobs-for-the-future-coffee-with-crow-building-a-future-asu-gsv-2026"
author: "Ben Pring, Steve Yadzinski (Jobs for the Future)"
date: "2026-04-14 12:00:00"
category: "Premium"
topics: "ASU+GSV 2026, conference transcript, AI/ML, Workforce Learning, Emerging Technologies, AI in Education, Workforce Development"
summary: "A panel featuring former U.S. Commerce Secretary Gina Raimondo, ASU President Michael Crow, UIC Chancellor Marilyn Parnell, and former NSF Director Panch Natarajan debates how to ensure AI empowers rather than displaces workers."
banner: ""
thumbnail: ""
---
> **ASU+GSV 2026 Summit** | Tuesday, April 14, 2026, 3:00 pm-4:00 pm | Sponsored Partner Programming

<iframe width="560" height="315" src="https://www.youtube.com/embed/vkARK3YSK78" title="Coffee with Crow: Building A Future Where Everyone Can Work with AI" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

## Speakers

- **Ben Pring**, Jobs for the Future
- **Steve Yadzinski**, Jobs for the Future (JFF)

## Key Takeaways

- A panel featuring former U.S. Commerce Secretary Gina Raimondo, ASU President Michael Crow, UIC Chancellor Marilyn Parnell, and former NSF Director Panch Natarajan debates how to ensure AI empowers rather than displaces workers, moderated by Jacqueline Pring.
- Raimondo calls for a "grand bargain" between government and employers, warning that 70% of Americans fear AI and drawing on her father's experience of displacement during the China manufacturing shock.
- Crow delivers a pointed critique of tech industry leaders who focus on macro disruption over individual empowerment, arguing for "principled innovation" centered on individual liberty and family success rather than shareholder value.
- The panel converges on the need for universities to radically evolve -- UIC's data-science-driven advising system using non-cognitive asset surveys is highlighted as a model -- and for industry-university "co-habitation" rather than traditional arms-length partnerships, with Natarajan proposing a CEO framework of Collaboration, Co-creation, and Co-evolution.

## Notable Quotes

> "We need to stop listening to people who are anti-democratic, who do not believe in liberty, that believe that these machines are going to become the masters of us all, and start focusing on the individual."
>
> — **Michael Crow**

> "If AI creates America's first trillionaire and millions of the rest of us are left behind, that's not a success. That's horrific for a democracy."
>
> — **Gina Raimondo**

> "Human capital has got to be seen as the work product of all the companies, all the corporations, all of the government agencies. It's about the empowerment of the individual human."
>
> — **Michael Crow**

> "I find that universities today are too slow, and they are in denial, and they at best adopt and adapt, which is in my view... necessary, but not sufficient."
>
> — **Panch Natarajan**

> "Every incentive -- you fire 10,000 people today, your stock price surges tomorrow. You invest in research and development today, you do well tomorrow... we need to change the incentives."
>
> — **Gina Raimondo**

## Full Transcript

Good morning, everyone. So good to be here. Quick show of hands. Who made it to karaoke last night?

A couple of you. Okay. So we'll segue from last night's karaoke to a lyric: the times, they are a-changin'.

That's what we're talking about here today. What do these changes mean for our workforce and how will universities evolve? Let's jump in with questions for our panelists. Gina, I'm going to start with you this morning.

You served as the U.S. Secretary of Commerce in the Biden administration and as the 75th governor of Rhode Island. In March, you wrote an essay in the New York Times about workforce displacement and you warned that you fear that we might see the kind of displacement that your own father experienced in Rhode Island in the 80s. You call for a new grand bargain between governments and employers.

What would that kind of bargain in a best case scenario deliver for workers? Good morning. It's fabulous to be here. I was just saying to Michael, I came to this a dozen years ago and it was so much smaller, and it's an incredible credit to his leadership how much you have flourished, and to all of you for the work you're doing.

So Michael, thank you for having me and thank you all for what you do. I think we're in a tough spot right now as it relates to AI. I spent my career, especially as Commerce Secretary, focused on U.S. competitiveness. How does the United States compete and outcompete in the world?

And I think AI is a key piece of that. We need to lead the global competition in AI. But right now, 70 plus percent of Americans are afraid of AI. When they hear AI, they hear, I'm going to lose my job.

You know how you know it's bad? I was at a bar a couple weeks ago watching NCAA basketball, and all anybody was chattering about was, what's your post-AI job? I'm going to be a park ranger, someone said.

I can't be thrown out because of AI. Another person was saying, God, I'm paying all this money for college tuition for my kids. What are they going to do when they leave? So, you know, Americans are anxious.

And we have to honor the anxiety, not just with empathy, but with a plan and with action. Because otherwise, politicians are going to overregulate AI, stop it, and that's not competitive. So what does a grand bargain mean? A grand bargain means companies have to really, finally, step up and be key drivers in our entire education system, our entire workforce training system.

An effective workforce training system, which I would argue is not one that we have in America, would have companies continuously defining what are the skills needed today, what are the skills needed tomorrow, and then universities, workforce training partners, other, you know, workforce initiatives can continuously fulfill those needs. And it can't be one and done, right? You can't go to school once and think you're fine forever. It has to be continuous learning so that in the grand bargain, in the AI economy, an American has confidence to know as the labor market changes, as necessary skills change, there's a chance for them in that economy so that they can continue to change.

You mentioned my father. Last thing I'll say is, in the China shock, you know, we sent all our manufacturing to China. People had their blinders on, companies had their blinders on. It was more profitable and more efficient. And guys like my dad, millions of them, just got left behind.

Well, he was 56, he needed a job, and there was no bridge to a job in the services economy for him. That's what I mean by a grand bargain. Be pro AI, be pro innovation, but man, figure out a bridge so people can continuously learn and have a place in a constantly changing economy. Marilyn, you're the chancellor of the University of Illinois Chicago.

You know something about bridge building. You do a lot of that for your students. What's one move that you recommend that higher ed leaders make in the next year to meaningfully improve how universities connect learning and work? Thank you.

And thank you for including me in this conversation. And I just want to forewarn all the people in the front row that I talk a lot with my hands. I have a handheld mic. Be prepared to catch it.

You're in the danger zone. It's like the splash zone at an aquarium. I'm readjusting in case. I'm just getting ready.

So this is all outlined in an opinion piece that I just published in Science Magazine. So I think that higher education has an absolute obligation to make sure that this technological wave of innovation actually includes the full swath of the population. So we've had many technological waves. Those waves have created incredible economic growth and prosperity, but they produced it in a very uneven way where certain people who had access to capital and networks and early access to the technology, they benefited enormously and huge swaths of the population got left behind.

With AI, we have an opportunity to do things differently. And I think that institutions of higher education, especially ones like Arizona State and University of Illinois Chicago that serve so many Pell Grant students, first gen students, students who don't have those sorts of networks and capital assets at their disposal, we have the obligation to do something different. So we, in conversation with employers, talking to employers about what they need, what they want in their hires, in their interns, et cetera, we've developed this idea that there are three things that we're going to make sure that every one of our students, undergrad, professional student, graduate student, has at their disposal vis-a-vis AI. That is, they will be fluent, critical, and ethical users of the technology.

By fluent, I mean able to use the technology, able to incorporate it into process workflows, to be able to use it in a pretty sophisticated way. By critical, I mean not that everybody needs to be able to program large language models, but that people need to understand the basics of how they work: that large language models draw their results from the middle of the distribution, so information on the tails of the distribution is not represented very well. So if we had AI in the era of should-we-wash-our-hands-before-seeing-the-next-patient, AI would have said no, because there was very little evidence, and it was all on the tails of the distribution.

And it's also true that the distribution can be manipulated for political or economic purposes. So people need to be critical users, so they can evaluate the output. And they need to be ethical users: they have to figure out when to use AI, when not to use AI, when to disclose the manner in which AI was used, when to incorporate it into process workflows, and to pay attention to whether, if you incorporate it into a particular process workflow, it will carry in some of the biases that are part and parcel of large language models, and whether you actually want to do that. So we're committed to making sure that all of our students graduate with those capabilities. Those capabilities come very much from our conversations with employers.

So I'm 100 percent behind Secretary Raimondo's idea that higher education and companies, employers, need to be talking to each other robustly. It's something we've always done at UIC, and I know has always been done at Arizona State, at least under your leadership, Michael. Michael, turning to you: when there's a displaced worker who's looking to get upskilled, how might they turn to a university, and what do universities need to do to evolve so that continuous lifelong learning is the standard? Well, one of the things that we need to do is realize that we need lots of different kinds of universities.

We need universities with multiple ways of working, and so the university that we've been building at ASU is one in which we decided on no boundary barrier, so that any learner at any stage of learning in their life, any teacher, any family, any child, anyone K-12, could have access to the assets or the tools that the university might be building. And the key to all of this, and I like Marilyn's example of this way of teaching AI tools to college students, couldn't be more important when you think about the fact that at some point we have to reverse the logic of the tech bros and some of these other guys, who talk about this sort of Death Star image of an unbelievably powerful tool that's going to alter everything and replace everyone at work. I think in the long run, if we can figure out how to do this, each person is going to be individually and particularly and personally empowered. If the tools can be designed in a way where they help the individual to learn, help the individual to make decisions, help the individual to project themselves; if you can build this idea or this concept of an agentic self, and you can have universities then find a way to help facilitate that, help make that happen, help role model that with the right kinds of ethical uses and controls and so forth; then we can achieve what Gina is focused on: rather than going through disruptions, we go through enhancements for the first time with these kinds of technologies.

It means that the universities have got to then rethink their role, at least some of them. It can't be just, well, we're producing the best and the brightest to do this and this and that. You know, that got us to where we are. That got us to this point of very significant internal, national, social, and economic friction within our country because we've made education something which is attainable in a limited way as opposed to a massive way.

We've made education something that is accessible to some, but not enough. We've made these tools for advanced learning into this big, massive disruptor rather than a personal enabler. And so what we need are universities that are much more adaptable, engageable, flexible, in addition to, and I'm not saying in lieu of, but in addition to, other kinds of places. And so what I'm arguing for are places in which some of the universities, with their assets, become deeply embedded at the level of the communities, working with these companies, working with the workers, working with the transition, developing these kinds of tools.

One of the sessions after this, I think the next one, we have will.i.am, who's our partner who's built this course for a hundred or so people in California and Arizona where they're building agentic selves. Well, we're learning a lot in the building of the agentic self, and this agentic self is coming out of the university, coming out of the design and the tools that the university has, the pedagogies that we have, but it's now for the first time focused on bringing one of these massive technological capabilities down to the level of the individual. So a lot of the vision of AI is at the macro level, and it's going to have macro-level impacts, but micro-level empowerment, if we think about it correctly, is where we really need to put our energy.

Panch is the 15th director of the U.S. National Science Foundation. You've thought about these macro and micro level impacts. Right now you're also thinking about what it would mean to design a very AI forward university.

What are you thinking? Unpack that for us. How will this university be distinct from others? Thank you, Jacqueline.

I think, you know, first I want to say that having been a researcher in AI for the last 35, 40 years, it's exciting to see where we are right now, but it's also interesting, because people think that somehow this all happened magically. And I'm trying to make the case here, because I can't help but use this platform to underscore the importance of fundamental discovery research investments. We are here today because of those sustained investments, even through AI winters, over the last five to six decades. And that's what we're seeing right now.

So I want to make that point first, because we have to understand that I don't think we appreciate this enough. Even in this audience, we don't appreciate this enough, I think. I think we need a sea change in this, and then we become the advocates. That's the first point I want to make.

The second point I want to make is that I find, and I'm just going to say this, and I'm probably going to offend a lot of people in the process: I find that universities today are too slow, and they are in denial, and they at best adopt and adapt, which is in my view, as a mathematician, necessary but not sufficient. I fundamentally believe that we need a completely different look.

And what is that "it"? That "it" is I, innovation, and T, transformation. It's a completely different way of looking at the world. And Gina talked about industry.

And so I'm going to sort of frame it. I always like to frame it in some kind of a thing, you know, sort of with letters. So I'm going to use the word CEO. I think we have come to a point where we really need the CEO of collaboration first, partner, partner, partner first.

And there should not be any distinction even within the university, between the disciplines. We have talked about that enough these days in universities. But with other sectors, industry being one of them; of course, there are other sectors beyond industry. All of them need to play a role in this.

So it's that CEO. And that collaboration then leads to co-creation. I think we need to co-create. Thinking that it can somehow all be done within the university, as fond of it as I am, being a tenured faculty member myself, I think is not sufficient.

I think for us to think somehow we are going to create it out of the university just as faculty members is not sufficient. It's not going to work. So it's a different world. So co-creation, and then co-design, co-delivery of the curriculum, and for which we need co-habitation.

It cannot be where you have a university with four walls and you think somehow it's all going to happen there because somebody from industry is going to come and partner with you. That is insufficient in my view. So the co-habitation idea is something that we need to really welcome and embrace. So it's a different world.

And if we do the work with co-habitation, then what happens is co-evolution. You co-evolve, and that's how it's going to be. The co-evolution is going to happen not just with regular faculty, but with all kinds of faculty across the university environment: professors of practice, teaching faculty, tenure-track faculty, research faculty, practitioner folks in industry, and other folks who live in the real world outside of the university.

All of them have to play a role in this as co-creators. It cannot be just us. So I think this is something that I'm really keen to build. And what can we do for AI, with AI, by AI?

Yes. But what is X plus AI, or AI plus X, through that? And what does it look like? That's what we need: these completely transformed individuals that we need to, I think, make possible.

And so I think it's new models, new ways in which we do this, new delivery processes. And it's not just four years or six years in the university; it's all the way from K to gray. So it's got to be a complete spectrum of people that we need to get involved in this, right from day one. So I think it's a very different mindset that we all need to possess.

And this audience is one that has always looked ahead of the curve, 17 years now, having been at the very first one, and also the one that Gina mentioned, with Gina here, and a few other events in the past. And I was prohibited from participating in the last five years. So it was agony watching from outside. The things that you do... Those were the best meetings, those last five years.

But I had the good fortune of working with my dear friend Gina Raimondo, and I did a lot of things in CHIPS and Science, and in what Commerce and NSF did through the 27 AI Institutes that we launched all across the country. And I want to just stop with this. When we launched the AI Institutes, I said it's got to be completely partner, partner, partner. Five AI Institutes were fully paid for by USDA, believe it or not.

Five institutes in agriculture, in that context. So go look at all the 27 institutes and the regional innovation engine programs with AI. You will see the transformation starting to happen, because to what Marilyn said, talent and ideas are democratized; opportunities are not.

So it's a very poignant moment for us to see what we can do with what AI can do. Panch, thank you. It's good to have you back. Marilyn: partner, partner, partner, co-habitate, co-create.

What would it look like in Chicago to do that? You and I talked a little bit about how AI empowered with student data could be really meaningful for your students, especially first-generation students; you're so committed to finding ways to help them succeed. How are you starting to envision what it might look like to leverage AI to enable student success? Yeah.

I'll give you a specific example. Before I do that, I just want to riff off some of the things that were said. You know, one of the things I'm really tired of hearing is that entry-level jobs are going to be eliminated, because that takes responsibility off the universities and it makes students feel depressed. Entry-level jobs will still exist.

The requirements for those entry-level jobs are changing significantly, and the universities have an obligation to our students, and our K-12 system has an obligation to students, and apprenticeship programs, all of that, have an obligation to make sure that we're equipping people with the skills that they need. So we have a deep AI intervention, if you will, in our advising system, and I'll give this example: we have a platform into which all of their data from high school pours. If they submitted test scores of any type, those get poured into it. Anything from their time at the university gets poured into it.

And the other thing that we do is it's referred to as a non-cognitive assets survey. And everybody who joins UIC has to take this non-cognitive assets survey before they can sign up for their first set of classes. And those ask questions like, if your car broke down and it was going to cost $300 to fix it, would you be able to get it fixed? If you were sick and you needed to go to the doctor, is there someone you could call who would take you to the doctor?

Those non-cognitive assets that tell us so much about the kinds of supports that students need. So non-cognitive assets survey, their high school grades, their grades from the university, their test scores, any notes from any advisor who's spoken with them at all can all be put onto this electronic platform. And then we use data science to crunch through this information on a regular basis to identify what the match is between what we believe the students need and the assets that we have at the university. So we're matching that up and rolling that out to students before they come in and say, I think I'm having problems in my calculus class.

So I can look at your high school background in math, I can see that you're an engineer, and I can say, I think it's a really good idea that you get started in the tutoring center for your calculus class from week one, instead of waiting until you have problems with your first midterm, something like that. So, this is, I don't know whether I would call it an AI system or an AI-informed data science system. I know you and I can have a discussion about whether it's data science with AI as part of it, or AI with data science as part of it. I'm a data scientist, so I think it's data science with AI as part of it.

But, you know, there's this very proactive investment in understanding the context of the students' lives: if they're commuting students, if they're first gen, all of those sorts of things. And it has substantially improved our retention rates, decreased time to graduation, and increased our graduation rates. And incorporated into all of that is where they are working on campus, whether they're developing some of their professional skills there, where they're interning, where they're looking for jobs, because we're interested in getting students to and through college and then placed into a good job. And that placed-into-a-good-job piece is really important to us.

So, it's a deep and proactive intervention. It seems to be working; the students like it, and the advisors like it, because they spend less time entering data and analyzing it and more time one-on-one with the students, which was the reason they became advisors in the first place. So that's one of the things I say about AI: we can use it to do the rote work that really isn't why people went into education, to free people's time up to do the work they did go into education for, which is to interact with people and help transform their lives through the educational process. Thank you. Michael, a question for you, then I'll be coming to Gina.

ASU has advanced the idea of practicing principled innovation. How should that framework shape the way that universities adopt and leverage AI, especially when the pressure to move quickly is so intense? Well, we have to stop listening to people who are anti-democratic, who do not believe in liberty, who believe that these machines are going to become the masters of us all, and we need to start focusing on the individual and the empowerment of their individual liberty: the liberty to learn, liberty to advance, liberty to move their families forward, liberty to express their ideas. And here's what I mean by that, and I apologize for going so far back to the root.

So, you can see these people talking about these technologies all the time, and they are heartless. They are not interested in the success of the family. They're not interested in the success of the democracy. Some of them aren't even interested in the success of the country.

They're interested in the success of their investors and there's nothing wrong with that as an objective. And I'm stating this in somewhat harsh terms because it's been unbelievable to me the extent to which there has not been a focus on the empowerment of the person. So, the principled innovation objective that we have in all of this is the empowerment of the child, the success of the family, the success of the worker, the advancement of the person to an opportunity where they have more opportunity rather than less opportunity. I literally have had people come up to me saying, well you're just going to have to get ready because you're going to be wiped out.

And I'm like, either you're a psycho or a fool. I don't know which one you are. You're probably just a greedy little bastard. And so, we have to bring some principles to bear about the success of our idea: an individual-liberty-empowered democracy, which is built around the individual and their life and their family and their progress.

The more we focus on that, the better we will end up at the end of the day and it's not simple, it's very difficult with these macro tools that are available to everyone. We have people running around saying, you know, universities are worthless and universities are not important and don't send your kid. We have some of these jokers that are funding people not to go to college on purpose thinking that it's some kind of a game. So, they find some genius in the 8th grade and they empower that kid to do something.

Good, good. I'm glad you empowered that person to do something. There's another few billion of us who aren't these geniuses in the 8th grade. We have to find a way to make our family work.

And so, what I'm interested in in terms of principled innovation is how we advance with the notion of these core values, these core structures of what it is we're trying to build, and how we use our technologies to achieve those things. I've probably read too many philosophers, but there's one whose every word I've probably read: Philip Kitcher. And the best book that he wrote is Science, Truth, and Democracy, in which he outlines that in a democracy, as you advance science and technological knowledge, if you do not do it to achieve the purposes of the democracy, you are carrying out an amoral act which leads to immoral outcomes. Now, that's heavy stuff for 9 o'clock in the morning. And so, principled innovation means asking: what are we building all these things for? We're building them for the success of the person, the achievement of their life, and that's what we need to focus on.

Thank you. Gina, turning to you. What does it look like for employers to practice principled innovation? Yeah, so much to unpack in what Michael just said and I agree with it.

You know, that assumes the people you're talking about have a desire to be moral actors. Yeah, some don't. Exactly. You know, that's not the goal, right?

I mean, I often say, if AI creates America's first trillionaire and, you know, the first trillion-dollar IPO, and millions of the rest of us are left behind, that's not a success. That's horrific for a democracy. It certainly makes us less competitive, not more competitive. And to really get to the issues that you're talking about, the political system in which we're making these decisions needs massive change, and that's the thing I worry about, by the way. The thing I worry about is that unless we make the changes necessary to empower the average individual, the average company, the average startup, and instead just focus on, like I said, creating trillionaires and trillion-dollar companies, this will not end well.

There are people, including me, worried about the race with China for AI. The picture I just described may land America in a place where we have the best-designed chips, the most innovative AI companies, the most sophisticated models, and we still lose the global AI race. And that's our challenge, right? You have to ask, what does it really mean?

What will it really take for America to lead the global AI competition? It is much more than just plentiful energy, plentiful chips, plentiful data centers. It's the stuff we're talking about. And what does it mean for employers? Honestly, I agree with you. It's exciting to think about what a well-trained individual can do with AI capability, how much more productive they'll be, how much more creative they'll be.

Emphasis on well-trained. You know, for people worried about job loss, I actually think we'll see an explosion of new company creation if we do it right, if we do it right. So, what's the role of a company? Companies need to really understand that they need to be in the driver's seat with respect to our future.

The future is theirs to create. If they simply sit back and conceive of their job as maximizing short-term shareholder value, I think this will wind up very poorly. In fact, it is in their interest to make sure our education system works, what do you say... I don't like your analogy, because I'm never going gray. So, it works for everyone, from birth to death, of all, you know, geographies, ages, stages, et cetera.

This isn't a business-versus-worker problem. This is an everybody problem, and they'd better get in the driver's seat and start driving along with you and the rest of us. By the way, last time I checked, agents don't walk into the store and buy things; people do. How are a deep recession, social upheaval, and a frayed democracy good for a business?

Answer, it's not. So, what do they need to do? Yeah, keep the pedal to the metal on innovation for sure, but also keep the pedal to the metal on people strategies so every American is brought along. And don't think that that's somebody else's problem.

Don't think that's the government's problem, or a university president's problem. No, it's your problem too. Making sure that everyone can move along. Panch, this really aligns with much of what you were committed to as director of NSF: broadening access to AI.

How do we ensure that this next generation of AI-enabled learning systems expands opportunity rather than concentrating it among a few future trillionaires? Again, I go back to this industry point, right? I don't think we have done a good enough job of articulating the case to industry, as much as we complain about industry not participating in our process. Let's not forget that, because I've talked to many CEOs of companies.

It's not that they don't want this to happen. Let's be very clear. It's not, as the accusation goes, that all they care about is profit in the next quarter. Yes, that is partly true, but it is not just that.

They do realize that democratizing access is important. So this is what we picked up at NSF when we created the National AI Research Resource, called NAIRR. It is now the precursor to what is becoming the Genesis project, because I infected my good friend Dario Gil with this thinking too. So the point that we tried to make there is the following.

If we rely on the companies to produce all the unbelievable innovations and training and so on and so forth because they have the access to the unbelievable infrastructure that they have, then we in the universities and other environments, including small startups, will never be able to thrive in the AI era. So how do you get them to partner with us? And that's what we did with the National AI Research Resource. We had 25 companies who committed their hardware, software, data, applications, models, everything because I said let's work in co-creating these things.

Yes, you have your objective, but let's also participate in this, because you will not get the next generation of talent, or even the current generation of talent being up-skilled and re-skilled, unless we participate in this, or the next frontiers in research to propel the innovation engine. So we got them to commit. I mean, I was amazed, because we at NSF put in 30 million dollars for a pilot, and within a few months we got another 120 million dollars that people just came in with, and now it's scaling a lot more. So I think we need to also take responsibility.

Instead of just pointing, I always say when you point a finger there are three fingers pointing back at you. So let's not forget that. With industry, I think we have not made the case for why they should invest and co-create with us. And the model we have been offering, come give money to the university and we will do it for you, is not good enough, to the previous point that I made.

We have to work with them in a truly collaborative partnership mode on the problems that we want to solve together. And that's a different mindset which we don't have, in large part, in universities these days. And so we need to change our way of thinking, quite honestly. We have to change it dramatically in order for us to get the participation and the true engagement.

Not just throwing a few shekels here and there, but really serious investments. Yeah, just very quickly on that. It's not because CEOs are, you know, bad people. There's no incentive.

Every incentive runs the other way: you fire 10,000 people today, your stock price surges tomorrow. You invest in research and development today, you do well tomorrow. For the things that we're all talking about, there's pretty much no incentive for a CEO to do them. Now, as I said before, I think there is, if they think, you know, beyond today, this quarter, this year.

But that's hard for a CEO to do. So I also think, with my concept of the grand bargain, and of course I've spent 15 years in government, the public sector has a job to do here, which is to change incentives. Because if there are economic incentives for companies to do this, to retrain, redeploy, lean into everything that we're talking about instead of just hitting the easy button and laying folks off, I think they'll do it. Many of them want to do it, but we do have to change the incentives to make that happen.

So one way to focus on that, and I agree with both these comments, is to stop viewing human capital as just a commodity and start viewing people as human beings. The second you start thinking about human beings as this particular part of the economy, everyone, including the companies, has responsibility for the empowerment of these people. And then you calculate the net return, the net benefit to the entire system of doing that. It will be a measurable return if we stop acting in such selfish, narrow interest.

And so human capital has got to be seen as the work product of all the companies, all the corporations, all of the government agencies. It's about human capital development. It's about the empowerment of the individual human. If we could get that worked into the model, then we would find a way forward, as opposed to, let's abandon these workers as we move our factories overseas from Detroit and Philadelphia and all these other places, and then wonder why the people are mad or upset.

Well, they're mad or upset because they were attracted to be workers there, and then the company said, well, you know, we found some cheaper workers somewhere else, too bad for you, you should move there. That shouldn't be the way the world works. If everyone had been working to build that individual to the highest adaptability level, everyone would win, including those corporations when they come back later and say, where are all the workers, I can't find anybody qualified to work for me.

Well, of course you can't, because you abandoned the person to focus on your company, not the person. Okay, show of hands in the audience, we're going to do thumbs up or down. At this moment in time, are you feeling hopeful about this discussion that we're having right now? Are we on the right track?

Are we going to align our education systems with employers? Who's feeling hopeful? Who's not sure? Who's not feeling so hopeful?

Okay, in our last couple of minutes, I'm going to give our panelists a little bit of time to see if maybe we can get this group a little more hopeful. What values, and this is a question for everyone, we'll start with Panch and come down, what values must guide the work ahead, and how might those values be tested in the AI era? I think the values, as was articulated very nicely, are about empowering every human being, with their latent potential expressed in its fullest form and their creative juices, you know, working for the country, working for their family, working for society. I think it can be done.

This is a very positive thing when you talk about all the things that we discussed. These are problems that can be solved. What is required is a mindset shift, or in some cases a paradigm shift. That's all that is required.

It can be done, it should be done, it will be done, it must be done, and therefore it will happen. So I am very positive, but it requires all of us to say that we are not going to just succumb to doing the same old thing and hoping for a better outcome. Marilyn? So, you let Panch go first. I would say, going back to my very first response up here, it has to be about inclusive opportunity.

We have to make sure everyone understands that the outcomes that grew income inequality and wealth inequality in the United States through all the technological waves to date were not predetermined. They didn't have to be that way. If we make the decision today, with AI, that we are going to provide inclusive opportunity, thinking explicitly about it and learning from the history of these technological waves, we can provide that inclusive opportunity.

I am in fact incredibly optimistic, but I'm also incredibly certain about the activation energy we need to put into convincing people that this is not a situation where you can say, universities, for example, have been around for 200 years and will be around for another 200, this will all blow over, we don't have to change. This is a pivotal decade.

This is a pivotal decade, and I think if we realize that, act on it, and believe that inclusive opportunity is the core value in this new era we find ourselves in, we can transform our communities, our universities, our corporate-university relationships, our whole country in incredible ways. Yeah, just three quick points. One, I agree with everything they've said. To fix this, it's a matter of incentives, innovation, and urgency, and we've got to hit all three hard right now.

The second thing is, you know, things change in times of crisis. We can see a crisis coming down the track. How dumb can we be to let it happen and only then deal with it? System change, that's the big thing, we need system change.

Big things happen in times of crisis, period. Okay, we're smart enough to get ahead of it and say, let's use this moment, this critical decade, to make those changes. In terms of the value, I agree with what's been said. I think it's a basic value of humility. How would you design these systems if you weren't sure how it was going to work out for you?

With empathy, with humanity, with kindness, with a recognition that we're all connected.

My final takeaway is I want to eat what he eats for breakfast, because if I had a tenth of that energy, wow. I think I'll follow on Gina's point. I remember one of these books I was forced to read somewhere along the way by a guy named John Rawls, another philosopher.

He had this thing called the veil of ignorance. Behind the veil of ignorance, you are basically unable to determine where you're going to end up in life as an individual human when you cross the veil. The empowerment of the tools that we have right now would say that everyone will have access to literally everything, everyone will have access to some kind of an assistant that will help them. I mean everyone, everyone.

That's where the value system here is: everyone, the empowerment of every human. The veil of ignorance then becomes a thing where everyone will say, let's empower everyone, and then let's have the greatest chance possible. The veil of ignorance held that because you don't know where you're going to end up, you should treat everyone equally and respect all, since you don't know where you'll land on the other side. Well, that was Professor Rawls' version, writing decades ago.

He didn't have an understanding of a tool that would enable and empower human beings to have unlimited equal opportunity. Unlimited equal opportunity. We've never had that in the history of our species. Unlimited computational tools that can work with every individual, learn the way every individual learns, create a learning pathway for every single person.

And so we're at the point now where we need a completely new philosophy, the philosophy that every single human being, with this massive bioelectric supercomputer that we have between our two ears, can be empowered in ways that we've never experienced in our history. So the value is this whole notion of individual empowerment. That is the overwhelming opportunity that sits in front of us. Friends, the times they are a-changing, but we're going to make sure that this change is grounded in accessibility, humility, kindness.

Thank you very much. Let's give our panelists a warm round of applause.

---

*This transcript was put together by our friend [Philippos Savvides](https://scaleu.org) from Arizona State University. The original transcript and additional summit resources are available on [GitHub](https://github.com/savvides/asu-gsv-2026-summit-intelligence). Licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).*
