ASU+GSV 2026 Summit | Tuesday, April 14, 2026, 2:10 pm-2:50 pm | The Forum
Speakers
- Josh Allen, Walmart
- Naria Santa Lucia, Microsoft
- Taylor Stockton, US Dept of Labor
Key Takeaways
- This panel brought together Taylor Stockton (DOL Chief Innovation Officer), Josh Allen (Walmart Academy), and Naria Santa Lucia (Microsoft Elevate) to discuss AI's impact on the labor economy and workforce.
- The panelists pushed back against the dominant narrative of mass job loss, arguing the real story is job transformation and skill redistribution rather than elimination.
- Taylor Stockton highlighted DOL's "Make America AI Ready" initiative -- a text-message-based AI literacy course that already has 30,000 signups -- and an $86 million investment in industry-driven upskilling.
- Josh Allen from Walmart described concrete examples like the VisPick system that cut inventory scanning from 60 minutes to 20, freeing associates for customer-facing work, and noted that a 40-year Walmart associate was the first in the company to complete their OpenAI certification.
- The panel identified several key tensions: the impact on administrative workers (disproportionately women), the speed of AI skill obsolescence making traditional credentialing cycles too slow, and the critical role of managers in driving AI adoption.
- All agreed that the biggest challenge is not job loss but ensuring the benefits of AI are distributed equitably rather than creating a have/have-not divide.
Notable Quotes
"We're not seeing mass job loss today... the bigger things we are tracking are that AI is creating new opportunities in terms of new jobs, new AI-driven productivity gains, new opportunities for entrepreneurship."
— Taylor Stockton (DOL)
"You don't become a leader by watching a video. So you're not gonna become AI proficient by doing a micro certification. This has to be embedded in your jobs."
— Josh Allen (Walmart)
"That hockey stick adoption of AI was kind of a disservice... everybody got it, and we didn't have the systems in place to use it or think about how to really train our workforce for it."
— Naria Santa Lucia (Microsoft)
"My biggest push to state leaders, workforce nonprofits, higher ed institutions, is to really think from the ground up about how to do this 10X faster in a way that can actually respond to the changes happening in the labor market."
— Taylor Stockton (DOL)
"I loaded my in-depth personality results into AI, and then just said, how will you interact with me based on this? My product comes out much faster now than it used to."
— Josh Allen (Walmart)
Full Transcript
Hey, everybody. Thanks so much for being here today. We're going to get things kicked off. We're here today to talk about sort of AI's impact on the labor economy and workforce.
And we truly could not have a more interesting panel to talk about that today. So I'll do a quick introduction of myself. I'm the least interesting person on this stage. I'm a partner at a law firm.
It's called Cooley. Most of my work is focused on AI regulation. But these are the folks we're here to see today. So one thing I think that's going to kind of frame this conversation is that we're all worried about job loss with the impact of AI.
But I think what this panel might do for you today, and then I'll let them kind of carry it forward, is complicate that picture a little bit. That it's not necessarily about job loss, that it's also about redistribution of where work and skill sets are valued and where we need to kind of understand that change in the workforce. So first to my left, I have Taylor Stockton. Taylor's at the Department of Labor and currently serves as the chief innovation officer.
So he leads the department's exploration of how AI and other emerging technologies are going to affect the labor economy and the workforce, and also just think about how innovation can support workers. Next to Taylor, we have Josh Allen. Josh is at Walmart. He leads content and strategy for Walmart Academy, essentially shaping the enterprise learning across Walmart stores, Sam's Club, Supply Chain, and their campus roles as well.
And so he drives the strategy behind onboarding, educating, leadership development, and operational capacity. And then last and certainly not least, we have Naria Santa Lucia at Microsoft. Naria's at Microsoft Elevate, and there she leads Microsoft's global capacity building solutions team, shaping education and workforce systems as they transition to the AI economy. So with that, I kind of want to throw the first question to you, Taylor.
In prep, we talked about pushing back on this narrative of job loss, and that really maybe the question is more about redistribution. So I want to ask, how is DOL balancing these potential macro shocks of job loss with optimism about where new skill sets will be valuable? Well, I would frame it as less about optimism versus pessimism, because I think people can get in their camps and try to stick there, and I think it's about realism, of making sure that we're looking at the empirical evidence of what is actually happening to make sure that we're solving for the right problems, right? Because if we're solving for mass job loss, well, we're not seeing mass job loss today.
At the Department of Labor, we think that there may be some level of impact, but right now I think the bigger things that we are tracking are that, for example, AI is creating a lot of different new opportunities in the economy in terms of new jobs, new kind of AI-driven productivity gains, new opportunities for entrepreneurship, and so one of the things that we want to solve for up front is to say we don't want this to be a game of the haves and the have-nots in terms of which Americans benefit from all of those opportunities. First, let's prioritize thinking through how can we make AI literacy as widely accessible across the country, regardless of geography, regardless of education or income level, and so, for example, that's why in February, DOL released our AI literacy framework. In March, we launched our Make America AI Ready initiative, and so a lot of these things to say one of the problems we're trying to solve for is making these skills more accessible such that the benefits will be more widely available, but then the second piece and the last piece I would just name is that even within some of these job changes, again, certainly I don't want to pretend that there will not be some level of job loss in the future, but in the immediate term, what we are seeing much more of is the transformation of jobs, is saying a percentage of the skills and the percentage of the tasks within roles are changing, and so how do we do upskilling within the existing workforce, and so, again, one of the things that DOL did to invest in this area was invest about $86 million in industry-driven upskilling, including around AI skills, and so just the overall push that I would make is less of optimism versus pessimism, more about what does the empirical evidence say about what are the problems today that we should be solving for and making sure we have a targeted approach. 
I think that makes sense, and Naria, I think this sort of lines up with thoughts I've heard from you, which is that you agree sort of from your perspective, you haven't seen those macro shocks yet, but you have seen a sort of shift in the skill set that's becoming important.
Can you talk about kind of what are those new skills that you think are becoming important and how Microsoft is sort of training the workforce to build towards those skills? Great. Thank you so much, and thanks for the panel. Thanks to so many partners in the room.
I see Code.org. I see JFF. I can't see other people because I have really bad eyesight, but it's really amazing to have folks in here, and I want to say thank you to the administration, because with the AI literacy framework and also this new initiative on AI, I think we're leading the way, so it's really great to see that. On the macro shock point, I definitely agree with Taylor and with the question.
I think there's a lot of hype in the system right now from some folks who, because they need to get their product out there and moving, are kind of hyping up job losses everywhere, and that is just not the case right now. The macro shocks have not really hit home, but some roles are changing, and I think some of them are canary-in-the-coal-mine roles that we should think about. One of them that we were just referencing at Microsoft: there's a recent study about gig economy workers, especially those in writing and in translation. Those roles have gone down by 20% and 23% since generative AI has been on the scene.
Software engineers, there's a lot of interesting conversation about those roles. We do know that the bundle of skills needed for those roles are really shifting. Forward deployed engineers, you need to connect dots, but you can vibe code. You need to understand the underlying concepts, but there are new skills that are really emerging for software engineers for sure.
We're also definitely seeing impact like any other tech transformation, and I think that is the other point to discuss, too, that this is not anything new. It's like any other tech transformation that we've seen, electricity, the Internet, it's impacting women more, it's impacting people who are in the global south that don't have access to these certain kind of public goods that can really accelerate adoption. We are seeing potential impacts on early career. These are all things that we have seen over and over again in any transition, and I think it's all up to us in this room to really think thoughtfully about how to make that transition most effective.
What are we trying to do at Microsoft? This is in partnership with many of you in the room. We haven't launched it yet, and I'm not breaking any news or anything, but we're really thinking about what is our role as an LLM provider to really help economies transition? We think that we have a few things at our fingertips that we can bring forward in partnership with others.
First, of course, data, not only accessing public data, but our LinkedIn economic graph data as well as our own telemetry, and are there ways that we can surface that for governments, even at the county level, to really think about how jobs are changing, how skills are changing? We definitely know that this needs to be done in coalition with others, so can we convene and work together across the public and private sectors to really join forces? And then, you know, how do we then think about who is potentially most exposed? Who doesn't have access to new opportunities?
Who doesn't have adjacent skills that they can take into another role, and then can we work together in coalition to really start mapping out new pathways for those individuals? And then, to the last part of your question about what skills, I always go with the durable skills, the human skills, right, because while AI literacy is on the rise as the number one skill employers are looking for, so are collaboration, communication, empathy, those skills. So if there's any one skill we all need to learn, it's lifelong learning and learning how to learn, for sure. I think there's a sort of implicit assumption built into some of these answers that we've had so far, which is that maybe it's actually the highly educated, white-collar workforce that's going to be impacted the most.
And I wonder if that is something that we all kind of agree with, or, Josh, I can imagine maybe you have a different perspective, given the kind of shape of Walmart's workforce. Yeah, go ahead. You know, and I always hate to give this answer of "it depends," but I think it's both, really. Just in different ways, and I think traditionally automation has impacted physical jobs.
So this is really one of the fewer times that we're starting to see it impact knowledge workers. So, you know, gen AI can operate in things like language and problem solving and just areas that haven't been able to be automated or impacted in that way. So, you know, I think really for frontline jobs, what we're seeing is we have vision on kind of what to pull and what to stock. You know, we have the automation and the integration of AI into the roles, and what that is doing at the frontline level is giving us time for other activities.
So I'm sure many of you have been to a Walmart store and just thought, I need to talk to an associate or find an associate to do something. And I think that the AI that we're seeing for our frontline associates is giving them time back to do things like help customers. So I think the biggest thing that we're seeing though for both populations, frontline and our knowledge workers, is that it's raising that floor pretty quickly.
So I think it's compressing the skill gap, the experience gap we see. So I think people are much more effective, much more quickly than they were in the past for both frontline and our knowledge populations. Sean, can I just jump in there too? I think I would push a little bit.
I think that certainly the impact on some of these white collar jobs is going to be perhaps more so in the short term. But I think actually the larger impact in the long term may be on some of the blue collar jobs and the kind of physical labor jobs because of the combination and intersection between AI and other technologies like robotics. And so I would hesitate to say, let's just put the kind of physical blue collar jobs aside. I think this is ultimately something where all of these questions, I think we naturally search for simple answers of here is the one single impact that will be the case across the economy.
And the reality is in many industries, there will be very specific impacts. It might not be a question of are entry level jobs okay in general, right? There may be certain industries where entry level jobs actually grow dramatically. And there may be some entry level jobs where they shrink over time.
And so I think it's really important to focus on specific industries and specific occupations. I mean, last quick data point that I would just share from our conversations with some of the AI labs, I think one of the insights that we've learned from them is even looking at within one single industry, within direct competitors and how they are using AI internally, it can be vastly, vastly different in terms of companies who have decided to increase their workforce or decrease their workforce based on their acceleration of adoption. And so I think varied effects are gonna be the case in looking at some of these impacts across industries. You know, and sorry, real quick, I'll just add that jobs are being created with that.
So you gave the example of the robotics. You know, as we automate our supply chain facilities, we need more technicians. So we have upskilling programs; we already had technicians that could work on certain things, and we now have a different level of technician that has to work on our automation and robotics.
So, you know, I don't think we can fully know all of the new jobs being created. And as we talk today, I'm just thinking, like, how many chimney sweepers do we have anymore? There's all of these jobs that will go away and the new jobs that are gonna replace them. Sean's gonna kill us, 'cause we keep chiming in.
No, this is a talk arena. I don't have to do anything. Here, go for it. I think it's a really important question about like, hey, is it knowledge workers or not?
'Cause you hear that a lot. Like, oh, this transition's happening to knowledge workers. And we're all spot on in that, you know, it's both. It's a yes-and, but then I think the interventions are different, right?
Because I think one of the most exposed job categories right now is administrative work. And that's why women are disproportionately affected, 'cause right now more women are in administrative work. So we need to make sure we're shining a light on that and not just assuming it's only knowledge work, just the lawyers and the accountants; and as a recovering lawyer myself, you know, in many ways the interventions can be very, very different.
Maybe you don't have to do something wholesale new, or you already know how to learn because you have that, you know, kind of higher education degree or whatever. So I think it really is very important for us to be very specific about the occupation and the job and the skill, and then we can design the interventions related to that. Yeah, can we actually take a minute to talk about maybe some of those methods and interventions? We talked about uptake within particular industries or companies.
What are you seeing as the effective interventions? You know, labs are rolling out, AI labs are rolling out sort of micro certification programs. You know, obviously there's a role for primary education at an earlier stage. What are the interventions that you're seeing being effective?
You know, I'll start it. So I think one of them is the micro certification. We've given access to all of our associates, starting with the Google AI Professional Certificate, and we've been working on a certificate with OpenAI.
You know, I think Microsoft has been doing upskilling for us as well. So, looking at all of these micro certificates, you know, I look at this as a foundation for associates. And I gave the example this morning: I had a 40-year associate who was the first one to complete our OpenAI certification. So first one in the entire company.
She had not used AI before this, and, you know, gave this kind of testimony in one of our videos of just the confidence that came out of it and how she's integrated it into her role. So I think that the micro certifications are good for exposure, especially for people that haven't had that exposure before. And the nice thing about those as well is you don't have to admit that you don't know something. You know, you can act on it independently.
There's no judgment involved and then you can come out with confidence. I do think it's just part of it though. So, you know, in order for, and I'll give the example of like, you don't become a leader by watching a video. So you're not gonna become AI proficient by doing a micro certification.
This has to be embedded in your jobs. You know, it has to be in the flow of work and you have to have the practice afterwards. So, you know, I think there are conditions that organizations can create to reward these things, to give access to tools and make sure that it's embedded into your work. I agree with what you said, Josh, but the layer I would add on top of it, which has been a challenge for some companies in thinking about how to offer different types of credentials around AI skills specifically, is that AI skills themselves are changing so quickly, right?
If you learned how to use AI and kind of proficiency in AI five years ago, what does that look like versus now? What does that look like versus six months, 12 months, 18 months in the future? And so I know that there's some companies out there, including OpenAI, I'm sure many others who are thinking about how do you make sure that you timestamp the kind of credentials that you have based on the current models that are out there, based on some of the current capabilities out there. And so I think it's something that a lot of people in this sector can think about is how to incorporate that agility element given the skills themselves are changing so quickly.
And then the last thing I'll say is just to add on the on-the-job learning piece, a huge priority for the Department of Labor right now is expanding registered apprenticeships. But I think registered apprenticeships themselves are really part of a broader spectrum that we wanna double down on of work-based learning in general. And part of the reason why is that I think especially compared to some of this classroom learning, which obviously still has value, but on-the-job learning is so much better at keeping up with what is actually happening on the job because you're learning on the job itself. And so we continue in the federal government to really think through how to expand both the investment in on-the-job learning, but also the credentials and recognition of the value of that learning by employers such that individuals can move with those skills and experiences throughout the economy.
I'll just try. I agree with all, I think on-the-job learning, learning in the flow of work, certifications, micro-credentials, short-form content, all these things, these are all really great interventions. I think community colleges, higher education, K-12, all of us together, we need to rethink how we're kind of preparing our students for the future. Employers should really continue to double down on how they're helping their workforce.
But I don't know, I do think we're still in this world where we don't even know yet what we don't know, given the tools available to us. Some of these things, like micro-credentials, they're great, but they're still, I think, in that old paradigm that we had before. So I have no silver bullet, I don't know what it is, but we were in a conversation with Pearson and ServiceNow yesterday, and ServiceNow was talking about how they have a graph kind of following you. Maybe the dream at the end of the day is you can have your own personal coach giving you feedback throughout the day, and really just these new and different ways that we're gonna learn, ways I don't know that we know yet, but the power of AI to provide that is pretty stunning.
Yeah, I wanna ask, Taylor, sort of a macro labor question, which is, we're talking about upskilling workers, changing the focus of a career trajectory at a late point in a career. Is there some portion of the labor force that we just need to accept is not gonna do that? And how do we respond to that segment of the labor force? Or do you have some reason to believe that really there's no section of the labor force that isn't in a position to kind of change their trajectory, change their skillset?
Yeah, I would push and say that there is no segment of the labor market that can't persevere and kind of think about the existing skills and experiences that they have and how that industry or occupation might transform over time. I think, again, that's why one of our biggest pushes at the Department of Labor has been kind of widespread AI literacy, thinking about AI skills development and what AI skills individuals in each industry and occupation can build. But maybe the other thing I would say is acknowledging, as others have, and I think Naria did, that there is a level of uncertainty here and that's okay. And so I think part of the Department of Labor's view on what we should do despite that is to say, how do we build more agility to be better at capturing in real time what is happening, and then having vehicles to quickly translate what we are seeing into actions?
Because there's only so much that we can predict about the opportunities or the challenges that will be there. One quick plug that I'll give, because it will relate to all three of us, is DOL is launching the AI Workforce Hub, which is an R&D lab around how we support workers in the age of AI. And a big part of that is saying, let's work with the private sector to collect data on some of these critical questions around AI adoption, around productivity gains, around job loss or job creation. And I'm excited that both Walmart and Microsoft are going to be a part of that hub.
Yeah, I think that's sort of an important question to think about, the certainty you need to allocate the resources to a particular tool or to a particular AI system. So I wonder, Naria and Josh, if you can talk a little bit about how you think about kind of getting to that level of certainty that, OK, it's now worth it to invest company resources into this particular system or tool. Because obviously, from the perspective of Microsoft and Walmart, you have resources to play around in a sandbox a little bit. There are probably a lot of folks here who are smaller organizations who these are kind of bet-the-company decisions when they decide to invest.
So how do you think about that selection process? I'm going to let the L&D person go first. So we've been traditionally fairly tool-agnostic here. We have access to Copilot, we have access to ChatGPT and Claude and Gemini.
So I will say that that decision of which one we use is probably going to sit with tech. But I've kind of been involved in some of those discussions as well. I think the important thing is just access in general. So I don't know that it necessarily matters what the tool is.
But I do think you need to provide some sort of sandbox for associates. And we have. So actually, in our academy training for our team leads, hourly supervisors, we've redone all of our activities to require the use of AI to solve the problems. And these aren't populations that are historically using AI to solve everyday problems.
And they have access to an AI sandbox where they go in and they get exposure to this. So I don't know. I mean, I can't directly answer your question. I don't know that there's a silver bullet on tool.
But I think it's just giving some sort of access and exposure. It's interesting because we were working on this document for the Microsoft kind of memo about how AI and jobs are coming together. And we were trying to show how software engineering roles were shifting. And it had to be update.
I mean, by the every 12 hours, like the data was like we had to refresh it. So the point is so valid about that skills are changing so quickly. And like, hey, why don't we just wait because these skills are going to change anyway. And this is where I do think if there are ways where you can have a sandbox where people are continuously kind of encouraged to play around, one of the pieces of information the data is definitely showing managers have to really support it.
Managers are the key on like AI adoption to like talk about it, to celebrate when it's used. And we're definitely seeing teams that have managers that are pushing it definitely making more progress. So I think that's something that managers can do to kind of say, hey, let's try this together. It's okay if it's not perfect.
I also do think that, you know, if there are ways where people who kind of are the AI adopters do have a light shine on them, where they're celebrated, that's another really good way. But if you can learn in the flow of work, I think that would be the best option. If you don't have to take people off of their current projects, but can continue to have a feedback loop where you're learning in the flow of work, then it's not something that, you know, you're worried about investing in when the skills will change in the next few days. One last point to emphasize from Naria's points, because I think it's important: what we are tracking at DOL, I think the biggest challenge of AI and work that we see is the speed of change, to her point of just how some of these playbooks and some of these approaches are needing to change so quickly.
And the reason that we believe that's going to be such a challenge and that there's a need to build this type of agility that we're thinking about internally, but we're also encouraging all our stakeholders to think about, is because a lot of the traditional cycles of government, of education systems, of workforce systems, certainly of places like higher ed, are on much different timelines, right? Sometimes one year, three year, five year, ten year timelines to respond to the context that's there. And so one of my biggest pushes to state leaders, to workforce nonprofits, to higher ed institutions, is to look at what are our processes right now and our cycle times right now that we need to meaningfully look at to not just say how do we do this 10% faster or 20% faster, but really, really think from the ground up about how to do this 10X faster in a way that can actually respond to the changes that are happening in the labor market. Do you have any thoughts about some of those barriers maybe that the department is thinking about removing to shorten those cycle times in certain scenarios or instances?
Certainly so. The big workforce strategy that we released last year is called America's Talent Strategy, and one of the five pillars that we identified in it is this idea of flexibility: to allow state leaders and grantees that we fund to actually innovate in these different ways and to move more quickly, they need to have a level of flexibility. And so one of the pieces of guidance that we put out was flexibility waivers, encouraging states to apply for waivers that would waive the existing rigidity of rules that are set by law. And so we are encouraging states to say, what are the rules and the rigid inputs that are standing in your way, that are not allowing you to focus on the most important thing, which is outcomes, outcomes for workers.
And so that's certainly one example of a way that we're trying to encourage that level of flexibility. I think that's actually a great segue into another topic I want to discuss and think about is how do we think about distributing the benefits of AI to the workforce and the workers as opposed to, I mean, you can allocate these benefits kind of in different ways, right? They accrue to the company in increased margins and decreased costs, or they accrue to the worker in, you know, more time with their family, more time to go fishing, doing whatever. How is the department thinking about allocating those benefits and what are sort of the policy pushes you're making to benefit both sides?
I would say two things. I know I've harped on this a lot already, but I'll harp on it again. I think our view of one of the ways to do that is to make sure as many Americans as possible have the foundational AI skills to start benefiting from the real opportunities that exist today, right? Those benefits are being distributed right now in the economy, for example, to the small business owners and the entrepreneurs who are taking advantage of this technology to accomplish more than they ever could have before.
And so, again, that focus on AI skill development is one part. But maybe another element of the answer that I would speak to is kind of making sure that workers have, and this is something that we're working with businesses and labor unions on, making sure that workers have a voice at the table of how some of these decisions are being made of how AI is being used in the workforce in ways that can lead to both better value for the company, but also better value for workers. Because I think right now, if you just talk anecdotally to a lot of businesses, the ones that have been most successful at actually creating outcomes with AI are ones where there is, yes, some level of top-down approach to say, you know, executives who are looking across the business, what high-value use cases do we see, but also have a really, really agile bottoms-up approach to say, what about the workers who are in the trenches of some of these roles who are seeing the parts of the workflow that perhaps could be automated or could be systematized in a better way? And so I think whether you call it a framework of top-down and bottoms-up or whether it's worker voice at the table, we think that at the Department of Labor, labor and labor unions being at the table for some of these decisions is going to be an essential part to having value across the economy.
How about a four-day work week? Not likely. Josh, I wanted to ask you about two concepts we talked about sort of as we were prepping. You described a trust gap and a capability gap.
I wonder if you could describe those concepts and then talk about a little bit how you're seeing them manifest at Walmart and what are the solutions to those issues? Yeah. So I think the trust gap is about belief, and the capability gap is about skill. So on the trust gap, I don't think it's resistance, it's hesitation.
And I think this plays out. I remember early versions of GPT, you know, I got in, and at first I was like, this is amazing, and then I got the same answer for like every question I asked, and I was like, oh, this is not that good yet. And I think what ends up happening is you get a wrong answer, and a lot of people are just going to discount it. So instead of correcting it, instead of coming back, sometimes people are like, I give up. And I don't know how many times I've heard people say, I'll get an answer, and then I'll copy it into my Word document or email or whatever and fix it all myself. And it kind of defeats the purpose there.
And I mean, what I've told my team, I'm like, it's never going to be right the first time. Give it feedback. Give it your voice. Give it all of these things.
So I think what ends up happening is people, if you don't trust it, you're going to double-check everything. You're going to stop using it. That has to come along. And I think it comes along with, fundamentally, how do these things work?
And how do you implement it? I do think the capability gap is different. And I described this a little bit with my associate who hadn't used AI, went through the certification, and started to use AI again. I think what you're going to see with the capability gap is people using it for super basic functions, not using the right prompts, not using it for all of its abilities.
So I think what you find is trust comes through experience and reinforcement. And the capability gap closes through practice and application. Any perspectives from the Microsoft side, Naria, on this issue of trust and capability? Well, I'm going to say, Taylor, you just ruined my whole life.
No four-day work week. So I'm crushed. I'm crushed. No, just kidding.
I do, well, it's funny. As you were chatting, it made me think of something, which is that hockey stick adoption of AI that you saw. I was like, is this right? Because I actually checked it.
I thought originally it took 16 months for us to get to 100 million users of ChatGPT. But then ChatGPT said yesterday it was two months. I don't know about that. Back to the trust.
But anyway, the adoption curve was super steep. So everybody got it. Everybody got it. And they did exactly what you're talking about.
They wrote a poem or planned their vacation. Like, wait, that restaurant's no good. And then they stopped using it. And I think what happened, which was really different than any other technology kind of transformation, like not everybody had a PC right away.
It took a long time. And there was a buildup. And people wanted it. And suddenly, we all had it.
And we didn't have the systems in place to use it, or think about how to really train our workforce for it, or think about how it's going to integrate into school. So I think the way that the adoption hockey stick went up was kind of a disservice and played into that trust and capability, kind of where we are now. Which is interesting, because now the systems are so much better and so much more advanced that I hope people do come back and take a look and think about how it can impact their roles. I would just say, for us, the trust part is the number one thing for Microsoft. There is not going to be anything that's more important or top of mind than making sure that our large language models, that Copilot, is safe.
It is responsibly deployed. We have a very rigorous Office of Responsible AI. They have set AI principles. They stop products from shipping if they're not up to standard.
And I do hope, and I think all the AI companies, all the LLMs, I know we're all trying in this area. And we definitely encourage everybody to do that, because once that kind of trust is gone, I think we lose our license to operate. So I would just add that on the trust piece. I think these also might be sort of self-solving problems, as workers see a task that they thought of as day-to-day drudgery all of a sudden disappear because now the files are all renamed by the AI system they were using.
Josh, you mentioned the sandboxes and also associates at the frontline store level using these AI systems. Are there particular tasks that you're noticing, or that you're encouraging associates to use it for, at the level of something we can all appreciate, like a particular task you can get rid of on a day-to-day basis? You know, I gave the example at a high level, but I'll go a little more specific. So the way we used to do inventory management, you would scan all of the boxes of inventory up top, and then it would kind of beep whether to bring it down or not.
With AI, we're able to read these labels and determine whether it needs to come down or not. So instead of scanning every single individual box, you're just holding your phone up, and it's gonna give a green check mark if you should pull it down. That system, which we call VisPick, took what was about an hour-long task and cut it down to about 20 minutes. So this is something that's just embedded into the work.
And what happens is you get the time back to, you know, for managers, you're leading your teams, for associates, you're helping customers. So, I mean, it's giving time back for growth. It's giving time back for more human activities than kind of the manual activities. Great.
We've got about five minutes left. I have a couple of sort of rapid-fire questions that we can go through. And then to close, and I'll give you a preview now, I wanna hear about a tool you're using in your daily life that you're loving today and that maybe we all can try. So first, well, Josh, I just gave you one.
So I'll give you a break for a second. How about Taylor, I'll start with you. From the department and policy-level perspective, what is one of the signs of success that you're looking for to prove that AI is helping workers negotiate this new economy? Is there a metric or a data point that you're really looking at and hoping to see value in?
I think it's two parts. I think the first part is the AI skill development side and I keep harping on it, but I truly think that everything starts there of if we make sure that as many Americans as possible across the country have the foundational skills, we believe that many different diverse benefits could come from that. And so that's kind of part number one. But then I think part number two is saying, how do we actually make sure that once we make sure Americans have these foundational skills, that they are actually then reaping the benefits from them?
And so how many Americans used AI to start new businesses? How many Americans used AI to not only have higher productivity, but to have higher earnings from that productivity? So I think it's those two parts: making sure they start out with the skills, but then making sure they truly are capturing the actual benefits in the real economy. Is there a way that you're measuring that uptake or adoption? How do you know when that's happened, or how do you know the department is being successful in pushing it?
I think there's a number of different initiatives that we have and truthfully the metrics for success will probably be different for each. But just to go back to one of them, I briefly mentioned we launched the Make America AI Ready initiative in March, which is an AI 101 course that's completely over text message. And so anyone in America, even if they don't have a laptop or access to the internet, can text the word ready to the number 20202. And in just 10 minutes a day, they can have bite-sized content over the course of a week that introduces them to the foundational elements of AI literacy.
And just in the past couple of weeks, we've already had 30,000 Americans sign up for that course. We think we can get to many, many more, but that's an example of one where I hope we can get to a million Americans who've taken that foundational course, as one of the initiatives that puts them on the pathway to some of these benefits. All right, and then Josh and Naria, I'll close for you guys with just: what's one skill set that in five years you think will be more important than it is today, given the increase in AI usage? Naria hit this one a little bit.
I'm gonna cheat and say two, but I'm gonna work them together. And it's judgment powered by adaptability. I think resiliency. I like that one.
Was my lightning round supposed to be short? Maybe I went too long. Well, I followed up. No, no, no, I had a follow-up on that.
So you're excused for that. All right, and then last we'll end, because I think, kind of as we said, one way to increase uptake is just to shine a light on people who are actually using AI in their day-to-day lives. And we can all kind of share these stories and talk about how we're actually using it to solve sort of that trust and capability gap. So any particular tool or AI kind of hot tip you've got for the audience?
Okay, well, I haven't tried it yet, but my boss freaked me out yesterday because there's a new Copilot with an Obsidian tool. It's the ugliest UI right now, it's like a DOS screen, but he showed me how he'd kind of fed all of his meetings and Teams into it and was like, hey, what should I be thinking about in this meeting with Naria? And all this stuff came up, and it was pretty fantastic.
So I'm gonna be doing that. I need to upskill to do that, but I will do that. But the tool, it's not a Microsoft tool, but it's integrated in Teams, is Wispr Flow. I feel like as I'm getting old, my fingerprints are going away and I can't type on my phone anymore at all.
But this is a really good product. It works perfectly, so try it out. I was actually gonna say Obsidian was gonna be mine as well. It's a great program, good for organizing and getting value out of your notes, and it also has great API connections with other AI systems, so yeah.
Not a tool, but I'll just say something that I've done that I don't know if other people have. I have in-depth personality results on myself, loaded them into AI, and then just said, how will you interact with me based on this? So I've been trying to use AI to get as efficient as possible, to use my voice. And actually I ask it every once in a while to make sure it remembers.
But there are certain things that it's just doing because of the way I like to process information and my role, all of these things, so my work product comes out much faster now than it used to. I'm not gonna endorse a specific vendor as a government official, but the use case that I have been using at work is AI for graphic design. A fun fact about DOL's recently launched Make America AI Ready initiative is that both the logo and one of our LinkedIn launch posts for it were created by AI. So that's how I'm using it.
There we go, okay, great. Thank you so much to our really incredible panel. Amazing kind of diverse perspectives on such an important issue, so thanks for being here. Thank you.
This transcript was put together by our friend Philippos Savvides from Arizona State University. The original transcript and additional summit resources are available on GitHub. Licensed under CC BY 4.0.