---
title: "Bull Market for Teachers... Architects of Human Potential"
slug: "aneesh-sohoni-teach-for-america-bull-market-for-teachers-architects-of-asu-gsv-2026"
author: "Aneesh Sohoni, Nonie Lesaux, Aylon Samouha"
date: "2026-04-14 12:00:00"
category: "Premium"
topics: "ASU+GSV 2026, conference transcript"
summary: "This session explored whether AI could create a \"bull market\" for teaching by making the profession more accessible, attractive, and effective, featuring Nonie Lesaux (Harvard), Aneesh Sohoni (Teach For America), and Aylon Samouha (Transcend)."
banner: ""
thumbnail: ""
---
> **ASU+GSV 2026 Summit** | Tuesday, April 14, 2026, 2:00 pm-2:35 pm | StarTrack

<iframe width="560" height="315" src="https://www.youtube.com/embed/Ih5ktv2lLyY" title="Bull Market for Teachers... Architects of Human Potential" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

## Speakers

- **Aneesh Sohoni**, Teach For America
- **Nonie Lesaux**, Harvard
- **Aylon Samouha**, Transcend

## Key Takeaways

- This session explored whether AI could create a "bull market" for teaching by making the profession more accessible, attractive, and effective, featuring Nonie Lesaux (Harvard), Aneesh Sohoni (Teach For America), and Aylon Samouha (Transcend).
- Lesaux framed three promises of AI for teachers: practicing teaching with feedback, supporting self-regulation and executive functions through planning tools, and reducing administrative burden to create more cohesive learning environments.
- Sohoni offered concrete examples including a Michigan teacher who used AI to insert incorrect steps into math problems to force metacognition, and a Texas teacher managing 52 languages who used AI translation to maintain rigor while improving access.
- The panel warned strongly against repeating historical patterns of "massive investment, rapid deployment, then persistent inequities," with Lesaux insisting that in the absence of robust AI research, educators must default to 100 years of learning science.
- Both panelists emphasized that ed tech builders must co-design with teachers and ground products in cognitive science rather than optimizing for reach, users, or minutes on platform.

## Notable Quotes

> "I might dare say that in the age of AI, teaching might be one of the best first jobs in the AI economy."
>
> — **Aneesh Sohoni**

> "Access to information or even interaction with a trained bot, a highly trained bot, is not the same as learning and growth... We have known for 100 years that learning is inherently social."
>
> — **Nonie Lesaux**

> "Not reach, not number of users, not minutes on the platform, but what truly do we wanna see at the level of learning that is about human development?"
>
> — **Nonie Lesaux**

> "One of my biggest fears is we're creating too often tools and platforms without an understanding of the practitioner's perspective and point of view."
>
> — **Aneesh Sohoni**

> "The hardware in here isn't changing, and so we have to ground in the science of learning."
>
> — **Aylon Samouha**

## Full Transcript

Hello, everybody. What a beautiful room. I hope everyone's having a wonderful conference. We have a short program for you, so we're going to jump right in.

Of course, one of the big topics, AI. What is it good for? What are the perils? Is it going to save the world?

Is it going to ruin the world? In education, rightly, we're centering students and the experience that they have. How might it help them learn? How might it extend learning?

Also, how might they do cognitive offloading in ways that we don't want? We're not going to focus as much on that today. Today, we're going to think about AI through the lens of the educator role. What does AI mean for teachers?

And specifically, is there a hopeful path here where maybe AI could actually turn into a bull market for teaching, making the job more accessible, more attractive, more doable? And so with that, I want to jump right in. And Nonie, Aneesh, thanks for joining us. And we'll start with you, Nonie.

What do you think is the real promise for teaching when you think about AI? Yeah, thank you. Thrilled to be here with you both. Where are we on the promise for teachers?

And I'm so glad we're focusing on teachers, particularly after 20 years of working in states and communities and districts to improve literacy outcomes and support educators in this difficult and complex work. I think big picture, what we want to keep our eye on is there are at least three things we want to see more of in schools. We want more human competencies, right? We need more critical thinking, more metacognition, more extended dialogue and discourse.

We want to see enhanced social-emotional competencies, not just for our students, but also for our teachers in what is one of the most demanding jobs on these competencies, self-regulation, executive functions. And we want to grow attention spans today across the society. And the third thing I would say is we want to increase human connection and hopefully reduce administrative burden. So with that frame, I see three things for teachers, three ideas in the category of promise.

I see opportunities to practice my teaching and receive feedback that I can then discuss with someone in a cohort, ideally, right? And that should have all of us thinking about effective leadership. I would like to see teachers with more support in the domains of self-regulation and executive functions. And by that, I don't mean more training or rote exercises.

I mean tools that drive planning and reflection. And the last thing I will say on the promise side is there is far too much activity in the system, in schools, in classrooms, in districts. There is just far too much activity. And more tools and more approaches that keep the system fragmented and do not drive into more cohesive learning environments for adults as learners and for students will simply continue to reproduce where we are today.

Adding off of that, I think the real promise of AI for teaching is to make the profession more attractive for teachers to spend time on the types of things you were saying, Nonie, around real engagement with students, helping to motivate students to achieve their goals and to really strive for a different paradigm for learning as well. So I know we're talking about teachers, but I think the underlying assumption in all of this is we're also able to use the tools and technology to drive forward the kind of learning that is aligned to what we know about how students learn, that gives students the opportunity to access learning that meets them where they are as well. And so as I've been out in classrooms, as I've talked to prospective teachers and current teachers, one of the big things I hear is, hey, will this actually help me meet my students where they are? Will this help me spend time on the things that got me interested in teaching in the first place, building relationships with students and really supporting students and families to achieve their ambitions and their goals?

And so I might dare say that in the age of AI, teaching might be one of the best first jobs in the AI economy. Amen. And so let's go even deeper there. So you both gestured to the possibility that AI might offload some of the less fun or impactful tasks that a teacher might have and that AI might help a teacher to literally do her job better.

Could you give us some examples of what some of those might be so we can get really concrete about the promise? I mean, the one example I'll give is we had a teacher in Michigan who was actually very worried about brain rot when it comes to AI and was worried about the lack of critical thinking. And this is top of mind for K-12 students. It's top of mind for Generation Z.

I've seen it firsthand. Data just came out this week actually showing that more and more young people are skeptical of AI. They're skeptical of what it means for their futures and their ability to think. And so we had a teacher who, rather than give students math problems where students would then go use AI to solve the problem, used AI to insert an incorrect step into the math problem and then had the students deduce what did the AI get wrong about how it solved the math problem to make sure they really understood the concept.

Another example I'll give is we had a teacher in Texas who had 52 different languages across her classrooms and was trying to figure out how to create text that students could access. And so she used AI to translate the text and keep it at the level of rigor necessary, but make it accessible, something that would have been impossible for the teacher to do prior to AI as well. Right, so I'll just add two points to that with my own developmental lens, which is one, that first teacher is forcing metacognition, right? So how do we use the tools to force metacognition?

The second is, in the second example, we're talking about access without compromising or manipulating the content, right, that's been developed. Right, and so the learning becomes more personalized, more precise, and matching without changing any of the rigor. Right, so we wanna see accommodations, we wanna see ways to get into text or into content, but we don't necessarily wanna generate it. Yeah, I know we're gonna jump into perils here in a second, but one of my biggest fears on this point is we have learned a lot in the last 20, 25 years about how students learn.

And one of the things we know is you need to maintain a high bar of rigor. You need to make sure the content is aligned to the standards. You need to make sure those standards set a bar of excellence that students can aspire to as well. And one of my biggest fears is a lot of that conversation is missing in some of the tools that are being developed for teachers in classrooms.

And so I fear that we're creating too often tools and platforms without an understanding of the practitioner's perspective and point of view, without an understanding of how students experience the technology as well. And so my big encouragement and advocacy would be to make sure these things happen consistently. We need teachers, school leaders at the table, co-creating and co-designing how the technology gets used in classrooms. Yeah, and I'll just say, just to add right there on that exact thread, we need to bring that science of learning to the table and the practitioners to the table.

We also need to, in the absence of a really robust body of research on the AI in terms of what we know and don't know, we must default to 100 years of research on how people learn. We know that. We know a lot about adult learning and we know a lot about child learning and we have to, at the level of tools and products, et cetera, be very specific about what it is we expect to see in terms of learning outcomes. Not reach, not number of users, not minutes on the platform, but what truly do we wanna see at the level of learning that is about human development?

I love that. And if I may, I'm gonna add just one more example that is grounded in the research, which is an example of a charter school that we've been working with that was paying attention to how much agency and relevance drives motivation and started thinking about, would there be ways for young people to actually act as learning designers themselves with the care of a teacher and guidance, et cetera, in a way that, first of all, helps them master learning, literally, which is probably a skill they're gonna need when they change their jobs 14 times in the next 20 years, but also take some of the work off of the plate of the teacher of being the only learning designer. And so there's hope here. Okay, but as you said, perils.

How did we get this wrong, right? I mean, there've been waves of technologies that haven't materialized and haven't held their promise and all the things, but let's stay here for a moment because we have to understand how we can get this wrong. What comes to mind when you think about the peril? Well, just to put a finer point on our last thread, I think what I go to in my own mind very quickly is that access to information or even interaction with a trained bot, a highly trained bot, is not the same as learning and growth.

That is not it. Those may be springboards by which we want to see some learning mechanism, some developmental mechanism, some kind of rapid burst occur, but I think we can't confuse the tools and the technologies with what we know is effectively a social enterprise. We have known for 100 years that learning is inherently social. It's the relationships, it's the interactions, it's the cognitive stimulation, it's the back and forth.

How actually do we bring the technology into the human loop to enhance that developmental process that is inherently social with peers? Adults and peers will always be the facilitators and the mediators, so how do we now think about our theory of action then? Yeah, building off of this idea that learning really is social, right, and certainly technology can help, my fear is, as a parent of a 10- and a seven-year-old, having talked to other parents, smartphones and social media, there's been plenty of technologies before, right? We had the iPad revolution, we had computers, we had TV way back when.

We sort of posited these as the solutions and the be-all, end-all as to what's going to advance learning, and yet time and time again, let's talk about smartphones and social media, the research is not great on how we handled the introduction of those emerging technologies at the time into a student's learning, both inside and outside of school. And so I think what we have to do is really mine for what have we learned about what are use cases that work to really give students access to this highly relevant and engaging curriculum alongside what they need to know around the possibilities of human potential and the work that they need to do with their educators and teachers to drive forward learning. I think we have a lot of lessons to learn, but we haven't yet shown or demonstrated we can bridge those things, but the opportunity ahead of us is to learn those lessons and carry them forward. Yeah, and I think what we're seeing is a pattern that, to your point, Aneesh, we've seen before: massive investment, rapid deployment and disruption, and then persistent inequities, lots of patterns in our student outcomes that we worry a great deal about. And in this latest wave, with my developmentalist hat on, from 2009 to the present, not only are we worried about the inequalities, the persistent inequalities, we are also clear there has been harm developmentally.

We have adolescents whose brain development in that moment is geared much more towards risk and reward while the prefrontal cortex, which is responsible for decision making and planning and execution, needs a lot of support and work. And so I see this as, to your point, Aylon, about the peril. I am certainly optimistic on the side of if we get our theories of change exactly right and we go into this understanding there are ways we could augment the learning environment and, in fact, make teachers' hard jobs easier, offload some things that give them more time for what matters most, then I think we are back into the promise. But to your point, I think we are at a real critical moment to not repeat that pattern that has been repeated for a couple of decades at least.

So good. Such good reminders. I feel like we're grounding in some realities, starting with, as one of my board members, Bror Saxberg, says, the hardware in here isn't changing, and so we have to ground in the science of learning, like that learning is social, that's not changing, and that we have these human tendencies to overextend and hope, and all that stuff is also all at play. And so let's end here with, I mean, in this room we have classroom leaders, school leaders, system leaders, product makers, investors, funders.

If there's a group of people that can get this right, here they are, right? And so maybe let's end with a shared vision of what would it look like if we got it right? And then everyone's job here is to backwards map and say, what role do I play in that? And so it's five years from now, Aneesh, your prediction came true, and every 22-year-old is hankering to become a teacher because the job is so awesome.

What is true in five years? What is that North Star that we might all collectively rally around? And then of course, what would need to be true to make that happen? I mean, I'll start and you can jump in.

I think if we look five years ahead, we will have been successful if we had teachers, school leaders, educators working alongside technology companies and platform companies to co-create and co-design what is possible. I think we will see that students are able to access a type of individualized learning that is honestly just too hard for a teacher to do alone without the technology, but that the AI tools available to us will be engaging, they'll be relevant, they'll be rigorous, and it'll be exactly what students need, right? And we have some of these tools and technologies already. The technology is also evolving very quickly, so we don't know what'll be true in five years.

What we know is it'll have a set of fundamental principles around teaching and learning that'll matter. And we'll see teachers working with students in ways that they are uniquely able to do, pursuing a set of learning outcomes that are different than maybe the learning outcomes we pursue today. I don't even think we have consensus yet. There are a lot of frameworks on what is the purpose of school and what is the purpose of learning, but we'll have more clarity and consensus around what are we learning for.

And teachers will be able to guide students on that journey to motivate them, to build relationships, to keep them engaged in their own journey so that the learning that we all want to be true, that meets students where they are, is fundamentally possible for them in the future. Double-clicking on everything Aneesh just said, and then I'll put my hat on as dean of the Harvard Graduate School of Education and say that in five years, too, I hope that our evidence base is much more robust and actionable. We at the Harvard Graduate School of Education are launching into this. We are already evaluating, for example, the rollout of Khan Academy at scale in the Philippines.

We need to have researchers at the table as well. We have to come up with much more actionable, evidence-based guidance. We have had a vacuum in certain ways on ed tech. We've gone for scale and efficiency, and I don't think that we've paid nearly enough attention to what it is we're learning for the sake of both the learning principles and designs and what we can expect in terms of measurable outcomes.

We, together, should be a field that can map tools and approaches and strategies to what we expect to see in the learning environment and what we expect to measure at the level of the learner and their outcomes. So excited about that work, too. And I hope that in this world five years from now, we'll be able to point to whole learning environments that are showing what the future can look like. And if I can add one more dimension that I think will be important in five years, it's that we've accepted and built the infrastructure around the notion that it's going to keep changing, right?

That we don't need to have a refresh moment every 10 years, but that the refresh is happening on the daily, right? Based on new information, based on new evidence, coming from my own kids in my own classroom all the way to the kids in the Philippines, right? That we have some way of the shared decision-making to help us evolve together in real time. And what you hear in both of these comments, too, is that teachers are not the end users.

That's not the idea, right? In this kind of a dynamic system, this is a really healthy ecosystem of learners, including the adults. Ultimately, I think what we're getting at is AI is yet another tool, right? It's a technology.

I remember when I started in the classroom, I went from a chalkboard and overhead projector to a smart board, right? And that was a big deal. We were all so excited. And so, yeah, it was a tool, right?

It didn't fundamentally change what I had to teach my students. It didn't fundamentally change the rigor with which I had to show up, the relationships I had to build. And so I hope in five years, we really capture the opportunity to use it as a tool, but not as a replacement for what we know is uniquely human. Yeah, and I've loved for the last couple of days that lots of these conversations here are about centering the human in the technology.

And I hear that, and I hear that as kind of novel at this moment. Yes, yes, in this pivotal moment where we're at a crossroads and we could repeat our mistakes or we could really try to get it right. And many of the things that you brought up I think are pushing us in the right direction. Thank you so much for this rich conversation and for your expertise, and thank you for joining us.

Thank you. Thank you.

---

*This transcript was put together by our friend [Philippos Savvides](https://scaleu.org) from Arizona State University. The original transcript and additional summit resources are available on [GitHub](https://github.com/savvides/asu-gsv-2026-summit-intelligence). Licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).*
