
Cracking Higher Ed: Why EdTech Startups Miss the Mark — Philippos Savvides at SXSWedu 2026

ibl.ai · March 18, 2026
Premium

Philippos Savvides from ASU's ScaleU program presented a diagnostic framework at SXSWedu 2026 that explains why most EdTech startups fail to sell into higher education — and what founders should do instead. We break down every idea in detail.

This post is based on the open-source framework "Cracking Higher Ed: Why Startups Miss the Mark" by our friend Philippos Savvides, Head of ScaleU at ASU Enterprise Partners. Philippos presented this framework at SXSWedu 2026 and generously published the full content under a CC BY 4.0 license. What follows is our detailed exploration of his ideas, with commentary on how they connect to what we see building AI platforms for institutions.

The Problem Nobody Wants to Admit

EdTech startups don't fail because their products are bad. They fail because they validate with the wrong stakeholders, measure the wrong metrics, and mistake enthusiasm for actual demand.

The numbers are stark. The average higher education sales cycle runs 12 to 18 months. EdTech VC funding dropped to $2.4 billion in 2024 — the lowest since 2014. Most startups simply don't have enough runway to close a single institutional deal. The margin for error is nearly zero.
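To make the runway problem concrete, here is a back-of-the-envelope sketch. Every figure below is a hypothetical placeholder, not data from the talk:

```python
# Back-of-the-envelope runway math. All figures are hypothetical
# placeholders, not numbers from the framework or the talk.
seed_round = 1_500_000    # dollars raised
monthly_burn = 90_000     # team, infrastructure, sales travel

runway_months = seed_round / monthly_burn   # ~16.7 months
sales_cycle = (12, 18)                      # typical higher-ed cycle, per the talk

print(f"Runway: {runway_months:.1f} months; "
      f"sales cycle: {sales_cycle[0]}-{sales_cycle[1]} months")
# A single 18-month procurement cycle can outlast the entire round:
# the first one or two deals have to close, or the company dies waiting.
```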

Philippos built a framework to help founders distinguish real institutional demand from misleading signals before they run out of money. It comes down to five core principles and a five-question diagnostic that every EdTech founder should internalize.

Principle 1: Map Your Product to the Student Journey

Every EdTech product corresponds to a specific stage in the learner lifecycle. That stage determines who your buyer is, what evidence they need, and how crowded the market already is.

Philippos breaks the student journey into six phases: pre-enroll, apply, onboard, select and enroll, course experience, and graduate and beyond. The critical insight is where founders actually build versus where the real opportunities lie.

The course experience phase — active learning, assessment, instruction, support — is where 80% of EdTech founders concentrate. It's also the most saturated, with 15 or more product categories already competing. Meanwhile, the first four phases (pre-enroll, apply, onboard, select and enroll) remain dramatically underserved. Pain in these phases is measurable at scale, yet solutions are sparse.

The pre-enroll phase is where prospective students try to figure out actual costs, compare programs, and understand what an accelerated format really means in terms of weekly hours. The apply phase is where transcript processing bottlenecks and slow admissions decisions cause institutions to lose applicants to faster competitors. Onboarding is where transfer credit confusion leads students to waste money on redundant courses. Course selection is where students commit based on a course title alone, with no access to syllabi, difficulty ratings, or workload expectations.

Each of these represents a validated, painful, and underfunded problem. Yet founders keep building another learning analytics dashboard.

Principle 2: Define the Job for the Person, Not the Institution

You're not selling to "universities." You're selling to a specific person who has a specific task to complete within a specific journey phase. Philippos applies the Jobs to Be Done methodology rigorously.

A prospective student needs to confirm whether their credits will transfer before they commit. An academic advisor needs to direct 300-plus students toward the right courses. A faculty member needs to deliver meaningful feedback across large sections without burning out. A career services leader needs to match graduates with employer opportunities at scale.

The critical test is straightforward: if you cannot name the specific person, their job, and which journey phase it falls in, you have a hypothesis, not a validated market.

Principle 3: Identify the Buyer, Not Just the User

This is where most founders get burned. Different stakeholders need different things from the same product. Philippos uses the example of a retention analytics platform to illustrate.

An academic advisor wants rapid at-risk student identification and needs to see time-savings data from a pilot. A VP of Student Affairs wants a lower DFW rate (the share of students earning a D, an F, or a withdrawal) to justify the budget and needs measurable outcome improvements. A provost wants board-ready strategic metrics and needs year-over-year retention data.

Same product, three completely different value propositions and evidence requirements. The distinction that matters: users give you access and feedback. Buyers control procurement authority and budgets. If you only validate with users, you'll generate plenty of enthusiasm but zero purchase orders.

Principle 4: Separate Noise from Signal

Faculty enthusiasm is noise. A budget owner identifying a specific line item is signal. High signup volume is noise. A pilot that measures buyer-critical outcomes is signal. A dean's endorsement is noise. Procurement beginning before the pilot ends is signal. Multiple champions are noise. Your product working regardless of whether any single advocate stays in their role is signal.

The pattern is consistent: noise comes from users and advocates. Signal comes from buyers and formal processes. Philippos is blunt about this — if you're collecting testimonials from professors who love your product but have no purchasing authority, you're measuring the wrong thing.

Principle 5: Build the Moat That AI Cannot Replicate

This principle has become more urgent since generative AI exploded onto the scene. Standalone software without structural protection faces an 18-month substitution risk. If your product is essentially content generation, summarization, or templating, a foundation model vendor can replicate it within 12 to 18 months.

Philippos identifies four defensibility strategies that actually hold up.

First, proprietary data networks — longitudinal datasets that improve through use and that new entrants simply cannot replicate on day one. Second, deep integration — LTI and SIS write-back capabilities that create switching costs exceeding the product's price. Third, supply-side network effects — where an expanding contributor pool increases value for everyone already on the platform. Fourth, regulated access — FERPA and HIPAA compliance requirements or credentialing gates that LLM wrappers cannot bypass.

The reframing Philippos suggests is powerful: lead with the institutional dependency you create, not the underlying technology. Technology can be copied. Dependency cannot.

The Five-Question Diagnostic

Philippos distills the framework into five questions that founders should ask before, during, and after every pilot.

Question 1: Who struggles, and what is the struggling moment? If you can't name a specific person with a specific problem at a specific point in the journey, you don't have a validated opportunity. And be careful — if the answer is "students drop out" or "retention is low," you're describing a symptom. Keep asking why until you reach the decision point where the failure actually originated.

Question 2: What solutions have they already tried, and why did those fail? Remember that your competitors include spreadsheets, graduate students, and simply ignoring the problem. If "nobody's tried anything," either the job isn't painful enough to fund or you're talking to the wrong person. Real jobs always have workarounds, even terrible ones.

Question 3: Does a budget line item exist? This is the only filter that separates "interesting" from "funded." If the budget sits with someone other than the person experiencing the pain, you need job statements for both the user and the buyer.

Question 4: What happens if they do nothing? "We lose accreditation" is a job. "Students might learn better" is a hypothesis. Help the buyer quantify the cost of inaction — "we lose students" is perception, while "$X in tuition revenue lost from post-discovery dropouts" is a number someone can act on.

Question 5: Who else must say yes? If the answer is "just me," your pilot will succeed and your deal will die. IT needs risk mitigation. Legal needs compliance. Finance needs ROI. Faculty governance controls curriculum decisions. Accessibility review is mandatory. Each approver has their own job that needs to be addressed before they'll say yes.
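Taken together, the five questions behave like a gate that a pilot either clears or does not. Here is a minimal sketch of that gate in code; the field names, example values, and pass/fail logic are our own illustration, not part of Philippos's published materials:

```python
# A minimal sketch of the five-question diagnostic as a pilot-readiness
# gate. Field names, example values, and thresholds are our own
# illustration, not Philippos's published framework.
from dataclasses import dataclass, field

@dataclass
class PilotDiagnostic:
    struggling_person: str | None = None   # Q1: who, at which journey phase?
    failed_workarounds: list[str] = field(default_factory=list)  # Q2
    budget_line_item: str | None = None    # Q3: a named budget, not "interest"
    cost_of_inaction: float = 0.0          # Q4: quantified, in dollars
    required_approvers: list[str] = field(default_factory=list)  # Q5

    def unanswered(self) -> list[str]:
        gaps = []
        if not self.struggling_person:
            gaps.append("Q1: no specific person at a specific journey phase")
        if not self.failed_workarounds:
            gaps.append("Q2: no prior workarounds, so the job may not be painful enough")
        if not self.budget_line_item:
            gaps.append("Q3: no budget line item, so this is interesting but unfunded")
        if self.cost_of_inaction <= 0:
            gaps.append("Q4: the cost of doing nothing is not quantified")
        if len(self.required_approvers) <= 1:
            gaps.append("Q5: only one 'yes' identified; the deal will stall in approvals")
        return gaps

pilot = PilotDiagnostic(
    struggling_person="transfer student, onboard phase",
    failed_workarounds=["manual credit evaluation", "calling the registrar"],
    budget_line_item=None,          # users love it; no buyer identified yet
    cost_of_inaction=250_000,       # hypothetical tuition lost to redundant courses
    required_approvers=["VP Student Affairs"],
)
for gap in pilot.unanswered():
    print(gap)
```

Run against the example pilot above, the gate flags exactly the two failures Philippos warns about most: no budget owner (Question 3) and a single approver (Question 5).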

The Jobs Atlas: What People Actually Need at Each Phase

Beyond the framework, Philippos catalogs fifteen validated jobs across the student lifecycle — each with a specific person, a struggling moment, the solutions they've already tried, why those solutions failed, and a clear job statement.

Pre-Enroll Phase

Prospective students can't determine actual costs until after enrollment. Published rates exclude per-credit fees, online surcharges, and textbook costs. Financial aid remains unknown until after commitment. The job: "Help me understand actual cost before committing."

Prospective students comparing programs have no access to syllabi, workload expectations, difficulty ratings, or textbook costs. They rely on Rate My Professor, Reddit, and guessing from course titles. The job: "Help me know course demands before enrolling."

Students unfamiliar with accelerated formats can't understand what compressed sessions actually mean for their daily and weekly workload. Marketing describes the format in calendar terms ("7.5 weeks") rather than workload terms ("15-plus hours weekly"). The job: "Help me understand what the pace actually feels like."

Apply Phase

When students apply to multiple programs simultaneously, the fastest admissions response wins enrollment regardless of program quality. The job: "Help me get a decision fast before I commit elsewhere."

Transfer students with credits from multiple institutions can't complete credit articulation before enrollment deadlines. Manual evaluation simply cannot scale to multi-institution complexity. The job: "Help me know which credits count before deciding."

Onboard Phase

Newly admitted students with prior credits don't know how those credits map to requirements until after they've committed. They discover redundancy and gaps post-enrollment and waste money on courses they didn't need. The job: "Help me see my degree standing before spending."

After acceptance, students receive no proactive guidance on next steps. There's an information void between acceptance and the first class. The job: "Help me know what to do between acceptance and first class."

Students about to start accelerated programs have no practical preparation for the pace, workload management, or online tool navigation. Generic orientation modules don't simulate reality. The job: "Help me be ready before week one arrives."

Select and Enroll Phase

Students must commit to courses based on title alone, with no syllabi or difficulty data available before registration. Information arrives post-enrollment, and expectation mismatches cause drops. The job: "Help me pick courses I won't drop."

Required courses are full, offered only once per year, or have scheduling conflicts. Online programs inherit seat-limited scheduling models from in-person formats. The job: "Help me get the courses I need when I need them."

Students can't determine the right course sequence. Advising is hard to access and inconsistent across departments. Degree audit tools are unintuitive and don't clearly communicate remaining requirements. The job: "Help me know exactly what's next without decoding the system."

Course Experience Phase

Faculty teaching large online sections face a structural constraint: class sizes make personalized feedback impossible. They're forced into templated responses. Students disengage. Students report "self-teaching" and "absent instructors." Faculty independently report burnout. These describe the same problem from different sides. The job for faculty: "Help me give meaningful feedback without unsustainable hours."

Students needing academic support can't access effective tutoring or don't know it exists. Support services are disconnected from the course experience and hard to find mid-struggle. The job: "Help me get help when I'm stuck, not after I've fallen behind."

Graduate and Beyond Phase

The degree is complete but the connection between credential and career outcome is unclear. Career services at scale lack the capacity and employer relationships needed for online graduates. The job: "Help me turn my degree into the career outcome I enrolled for." Philippos notes this is the least researched and least served phase of the entire lifecycle — the gap between "graduated" and "got the desired job or promotion" remains wide open.

Four Patterns Founders Miss

Philippos identifies four structural dynamics that remain invisible in surface-level discovery interviews. These are arguably the most valuable part of the entire framework.

Pattern 1: The Information Sequencing Trap

Online programs front-load commitment (apply, enroll, pay) and back-load information (syllabi, credit evaluation, true costs, workload expectations). Students commit on incomplete information, discover mismatches, then either struggle through or withdraw.

Founders building retention interventions and success dashboards miss the upstream cause. The student never had enough information to make a good decision in the first place. A student who enrolls without knowing that a 7.5-week term means 15-plus hours per week experiences workload shock in week one and drops. A retention tool that flags that student as at-risk in week two is solving the wrong problem — the failure happened months earlier during pre-enrollment.

The takeaway: if your product catches problems after a bad decision has already been made, ask whether helping with better decisions earlier is possible. Upstream opportunities are less crowded and higher leverage.

Pattern 2: The Upstream Cause and Downstream Symptom Split

Pain emerges in one phase but originates one or two phases earlier. Student interviews reveal suffering at the current phase, but the real opportunity exists where information gaps or broken processes created the inevitable struggle.

Week-two course drops trace back to missing workload information at registration. Redundant course discovery post-enrollment traces back to transfer credits being evaluated after commitment. Unexpected bills after onboarding trace back to unclear financial aid information pre-enrollment. Pace overwhelm traces back to unexplained accelerated formats. Wrong course sequences trace back to unavailable or inconsistent advising.

When you hear a pain point, ask: what happened one phase earlier that made this inevitable? Build for the cause, not the symptom.

Pattern 3: When Qualitative and Quantitative Evidence Disagree

This one is counterintuitive. Stakeholders unanimously report that X is the problem. Students, faculty, advisors all say it. Then outcome data shows X actually produces better results.

This happens frequently in higher ed. Qualitative data overrepresents struggling individuals because they file complaints. The silent majority who succeeded leave no interview trace. Accelerated formats, for example, are universally perceived as the primary driver of attrition. But controlled outcome analyses in specific disciplines find that students in shorter sessions actually pass at higher rates and withdraw less. Perception is driven by dropouts (a loud signal) while thriving students (the silent majority) don't appear in the pain data.

Unanimous qualitative agreement should trigger quantitative validation, not a rush to build. Consensus can be a bias artifact rather than confirmation.
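A toy simulation makes the mechanism visible. The numbers below are invented for illustration; they are not from any study Philippos cites:

```python
# Purely illustrative: why complaint data and outcome data can disagree.
# All numbers are invented; none come from the framework or any study.
accelerated = {"enrolled": 1000, "withdrew": 80, "complained_about_pace": 60}
standard    = {"enrolled": 1000, "withdrew": 120, "complained_about_pace": 20}

total_complaints = (accelerated["complained_about_pace"]
                    + standard["complained_about_pace"])

for name, cohort in [("accelerated", accelerated), ("standard", standard)]:
    withdrawal_rate = cohort["withdrew"] / cohort["enrolled"]
    # Interviews and complaint logs sample heavily from strugglers:
    share_of_voice = cohort["complained_about_pace"] / total_complaints
    print(f"{name}: withdrawal {withdrawal_rate:.0%}, "
          f"share of complaints {share_of_voice:.0%}")
# accelerated: withdrawal 8%, share of complaints 75%
# standard: withdrawal 12%, share of complaints 25%
```

Interview only the complainants and you will conclude the accelerated format is the problem; the outcome data says the opposite.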

Pattern 4: Same Problem, Two Job Descriptions

Students say "I'm teaching myself" and "feedback is generic." Faculty independently say "class sizes are unsustainable" and "I can't give real feedback to 200 students." Institutions track these as separate problems in separate reports. They are the same structural constraint viewed from different positions.

This pattern matters because of who holds the budget. A tool that solves the faculty capacity problem — helping professors give meaningful feedback at scale — also solves the student experience problem as a byproduct. And the faculty side is where procurement authority sits.

Why This Matters for AI Platform Builders

Philippos's framework validates several things we've observed building AI platforms at ibl.ai. The upstream information gaps he describes — cost transparency, credit articulation, workload expectations, advising consistency — are precisely the kinds of problems that autonomous AI agents can solve at scale. An agent that helps a prospective student understand actual program costs before commitment, or maps transfer credits across institutions in real time, or provides consistent advising at the "select and enroll" phase is addressing the highest-leverage, least-served points in the journey.
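To ground that claim, here is a toy sketch of the transfer-credit case. The articulation table, course codes, and requirement set are hypothetical placeholders meant to show the shape of the upstream product, not ibl.ai's implementation:

```python
# Toy sketch: surface transfer-credit standing *before* commitment.
# The articulation table, course codes, and degree requirements are
# hypothetical placeholders, not a real institution's data or ibl.ai's API.
ARTICULATION = {  # (source institution, course) -> equivalent course here
    ("Mesa CC", "MAT 221"): "MATH 201",
    ("Mesa CC", "ENG 101"): "ENGL 110",
}
DEGREE_REQUIREMENTS = {"MATH 201", "ENGL 110", "CS 150", "STAT 210"}

def degree_standing(transcript: list[tuple[str, str]]) -> dict:
    """Map incoming credits onto requirements and report the gap."""
    satisfied = {ARTICULATION[e] for e in transcript if e in ARTICULATION}
    return {
        "satisfied": sorted(satisfied & DEGREE_REQUIREMENTS),
        "remaining": sorted(DEGREE_REQUIREMENTS - satisfied),
        "unmatched": [e for e in transcript if e not in ARTICULATION],
    }

print(degree_standing([("Mesa CC", "MAT 221"), ("Mesa CC", "BIO 100")]))
# {'satisfied': ['MATH 201'],
#  'remaining': ['CS 150', 'ENGL 110', 'STAT 210'],
#  'unmatched': [('Mesa CC', 'BIO 100')]}
```

The hard engineering lives in the unmatched bucket: that is where an agent with access to syllabi and historical articulation decisions can do what a static lookup table cannot.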

His point about AI defensibility is equally important. Wrappers around language models are not defensible products. What is defensible: proprietary data that improves through use, deep integration into institutional systems like SIS and LMS, and compliance infrastructure that takes years to build correctly. That's exactly the kind of platform architecture that survives the 18-month substitution window he describes.

The framework is a gift to the EdTech community. We're grateful to Philippos for publishing it openly, and we encourage every founder building for higher education to work through his diagnostic before their next pilot.

The full framework, slides, and diagnostic tool are available at github.com/savvides/cracking-higher-ed-sxswedu under a CC BY 4.0 license.
