Product Strategy · 12 min read · By Yury

JTBD Framework: 2026 Update with AI Product Examples

The Jobs-to-Be-Done framework in 2026: how AI products are changing customer jobs, new examples from ChatGPT, Claude, and Perplexity, and how to apply JTBD to your roadmap today.

Quick Answer: Jobs-to-Be-Done (JTBD) is a framework for understanding why customers switch products. It focuses on the progress they’re trying to make, not demographics. In 2026, JTBD has become essential for AI product teams because AI doesn’t just improve existing jobs: it collapses and merges old ones and creates entirely new ones. The framework still works, but the jobs have changed.

The core idea behind JTBD hasn’t changed since Clayton Christensen popularized it: people don’t buy products, they hire them to make progress. But the products people are hiring in 2026 look radically different from even two years ago.

“People don’t want to buy a quarter-inch drill. They want a quarter-inch hole.” — Theodore Levitt, Harvard Business School

The case for JTBD is backed by hard data. According to Clayton Christensen, approximately 95% of new products fail. CB Insights’ 2024 analysis of 431 failed VC-backed startups confirmed the pattern: 43% cited poor product-market fit as the root cause of failure. Understanding the job your customer is hiring for isn’t academic — it’s survival.

AI assistants, search tools, and coding agents have rewritten what “getting help” means. Whether you’re building an AI product or competing against one, you need JTBD to understand the job shifts happening right now. Otherwise you’re flying blind.

Here’s the complete JTBD framework if you need the fundamentals. This post focuses on what’s changed.

What Is Jobs-to-Be-Done in 2026?

JTBD in 2026 is the same core framework with a new set of inputs. The classic formulation still holds: “When [situation], I want to [motivation], so I can [outcome].” But the situations, motivations, and outcomes have shifted because AI products have changed what’s possible.
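The classic template maps cleanly onto a small data structure. The sketch below is purely illustrative — a hypothetical `JobStatement` helper, not part of any JTBD tooling — using the thought-partner job discussed later in this post as its example:

```python
from dataclasses import dataclass

@dataclass
class JobStatement:
    situation: str   # "When [situation]..."
    motivation: str  # "...I want to [motivation]..."
    outcome: str     # "...so I can [outcome]."

    def render(self) -> str:
        # Produce the classic one-sentence JTBD formulation.
        return (f"When {self.situation}, I want to {self.motivation}, "
                f"so I can {self.outcome}.")

job = JobStatement(
    situation="I'm stuck on a problem and need a thought partner",
    motivation="talk through my ideas with someone knowledgeable",
    outcome="move forward without waiting for a meeting",
)
print(job.render())
```

Forcing every job you track into this shape keeps interviews and roadmap discussions anchored to situation, motivation, and outcome rather than to features.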

Before AI assistants, if you needed to understand a complex topic (say, Kubernetes networking) your job-to-be-done was “help me learn this concept.” You’d hire a Google search, a Stack Overflow answer, a YouTube tutorial, or a colleague. Each of these “hires” came with tradeoffs: search was fast but shallow, a colleague was accurate but expensive socially, a tutorial was thorough but slow.

In 2026, a single AI assistant can absorb all of those jobs. And that compression of jobs into a single product is the most important JTBD shift of the decade.

How Have AI Products Changed the JTBD Lens?

Three structural changes make JTBD analysis different in 2026:

Jobs are merging. “Help me research,” “help me summarize,” and “help me draft” used to be three separate jobs that customers hired three separate tools to do. Now a single AI product handles all three in one workflow. Your competitive landscape is wider than you think. You’re competing not just with tools that do the same job, but with tools that absorb your job into a larger workflow.

The “hiring criteria” have shifted. In pre-AI JTBD analysis, customers evaluated products on execution speed, price, and reliability. In 2026, the new hiring criteria include trust in output accuracy, ability to verify results, and how well the tool adapts to context. A customer hiring an AI assistant for legal research has fundamentally different trust requirements than one hiring it for brainstorming.

New jobs are emerging. “Help me prompt effectively,” “help me verify AI output,” and “help me orchestrate multiple AI tools” are jobs that didn’t exist in 2023. Products that recognize these emerging jobs early, and position themselves to be hired for them, are winning.

How Did ChatGPT Win “Help Me Think” Jobs?

ChatGPT’s dominance is a textbook JTBD case study. OpenAI didn’t win by building the best language model. They won by identifying and owning a job that previously had no good solution.

The job: “When I’m stuck on a problem and need a thought partner, I want to talk through my ideas with someone knowledgeable, so I can move forward without waiting for a meeting or scheduling a call.”

Before ChatGPT, this job was primarily hired out to colleagues, mentors, or expensive consultants. The alternatives were slow (schedule a meeting), socially costly (interrupt a coworker), or unavailable (can’t afford a consultant).

ChatGPT’s breakthrough was making “thinking out loud” free, instant, and judgment-free. Users discovered they could brainstorm product names, debug logical arguments, explore business models, and stress-test ideas, all without the social overhead of involving another human.

Why it won: The previous solutions all had high friction. ChatGPT collapsed the friction to near zero. In JTBD terms, it dramatically reduced the “anxieties” associated with hiring a thought partner (will they judge me? do I have to explain context? will they be available?) while keeping the functional outcome intact.

The JTBD lesson: ChatGPT didn’t replace Google. It created a new job category, “help me think,” that no existing product served cleanly. If you’re building an AI product, look for jobs where the current solutions carry unnecessary social, temporal, or financial friction.

How Did Perplexity Win “Help Me Research” Jobs?

Perplexity is a fascinating JTBD case because it didn’t create a new job. It stole an existing one from Google by doing it better along the dimensions that mattered most.

The job: “When I need a factual, well-sourced answer to a specific question, I want to get a synthesized response with references, so I can trust the answer and move on quickly.”

Google owned this job for two decades. But Google’s solution increasingly required customers to do assembly work: click through ten blue links, read multiple pages, mentally synthesize contradictory information, and decide which sources to trust. The job was “get a reliable answer,” but Google was only providing raw materials.

Perplexity reframed the output. Instead of links, it provides a synthesized answer with inline citations. The user can verify sources without doing the synthesis themselves. In JTBD terms, Perplexity reduced the “effort of consumption”: the hidden cost of processing a product’s output into something useful.

Why it won the research job specifically: Perplexity’s citation model maps directly to the trust requirements of the research job. When you’re researching a factual question, you need to know where the answer came from. ChatGPT provides answers without sources. Google provides sources without answers. Perplexity provides both, and that combination is what the research job demands.

The JTBD lesson: If your competitor owns a job but forces customers to do post-processing work, you can win by delivering the finished outcome. Look at where customers take your competitor’s output and manually transform it into something useful. That’s your opportunity.

How Did Claude Win “Help Me Write and Reason” Jobs?

Anthropic’s Claude carved out a distinct JTBD position by excelling at a specific cluster of jobs that require extended reasoning, careful analysis, and long-form output.

The job: “When I need to produce a detailed, well-structured document (a report, analysis, strategy brief, or technical explanation) I want an assistant that can reason through complexity and produce publication-ready output, so I don’t have to spend hours writing and revising.”

This job was previously hired out to junior analysts, freelance writers, or the PM’s own late-night writing sessions. The alternatives were expensive (freelancers), slow (doing it yourself), or inconsistent (junior team members).

Claude’s differentiation in JTBD terms is on the quality dimension of the functional job. Where ChatGPT optimized for conversational speed and Perplexity for factual accuracy, Claude optimized for depth and nuance in written output. Users hiring Claude for the “help me write” job consistently cite its ability to handle ambiguity, follow complex instructions, and produce long-form content that requires minimal editing.

The JTBD lesson: In a market where multiple products can technically do the same job, differentiation comes from which dimension of the job you optimize for. Speed, accuracy, depth, creativity, and trust are all valid dimensions, but you can’t win on all of them simultaneously. Pick the dimension your target users care about most.

How Do You Apply JTBD to Your AI Product Roadmap?

Here’s a practical four-step process for applying JTBD to AI product decisions in 2026:

Step 1: Map the jobs your product currently serves. List every distinct job customers hire your product for. Be specific: “help me write” is too broad. “Help me turn rough meeting notes into a structured follow-up email within 2 minutes” is a job.

Step 2: Identify job mergers and splits. Which jobs are your customers starting to expect as a single workflow? If customers currently hire your product for drafting and a separate tool for editing, that’s a merger opportunity. If customers use your product for two very different jobs, consider whether you’re underserving both.

Step 3: Audit the competition by job, not by product. Don’t ask “who are our competitors?” Ask “for each job we serve, what else do customers hire?” Your AI writing tool competes with ChatGPT, Google Docs’ AI features, Grammarly, freelance writers, and the customer doing it manually. Each competitor wins on a different job dimension.

Step 4: Prioritize by underserved jobs. Use customer interviews to find jobs where current solutions have high friction, low satisfaction, or missing dimensions. Those are your roadmap priorities.
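Step 4’s prioritization can be sketched as a simple opportunity score, a rough adaptation of Ulwick’s outcome-driven formula (importance + unmet need). The job names and 1–10 ratings below are invented for illustration; in practice they would come from your customer interviews:

```python
# Hypothetical job ratings (1-10 scale) gathered from customer interviews.
jobs = [
    {"job": "turn meeting notes into a follow-up email", "importance": 9, "satisfaction": 4},
    {"job": "verify AI output against sources",          "importance": 8, "satisfaction": 3},
    {"job": "brainstorm product names",                  "importance": 5, "satisfaction": 8},
]

def opportunity(job: dict) -> int:
    # Ulwick-style score: importance plus the satisfaction gap
    # (clamped at zero so over-served jobs aren't penalized below importance).
    return job["importance"] + max(job["importance"] - job["satisfaction"], 0)

# Highest-opportunity jobs first: these are the roadmap priorities.
ranked = sorted(jobs, key=opportunity, reverse=True)
for j in ranked:
    print(f'{opportunity(j):>2}  {j["job"]}')
```

A job that is important but poorly served scores highest; a well-served job scores no higher than its raw importance, which is exactly the “underserved jobs first” ordering Step 4 calls for.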

What Are the Best JTBD Interview Questions for 2026?

These eight questions are adapted for AI product research, where switching costs are low and job definitions are evolving rapidly:

  1. “Walk me through the last time you needed to [do the job]. What did you try first?” This reveals the competitive set, often surprisingly broad for AI tools.

  2. “What were you doing right before you opened [product]? What triggered it?” Identifies the situational trigger, which is critical for activation and onboarding.

  3. “If [product] disappeared tomorrow, what would you do instead?” Reveals the true alternative and how differentiated your product actually is.

  4. “What’s the most frustrating part of using [product/competitor] for this task?” Surfaces underserved dimensions of the job.

  5. “Have you ever started this task in one tool and finished it in another? Why?” Reveals job fragmentation and merger opportunities.

  6. “How do you know when the output is good enough?” Critical for AI products where output quality is variable and trust is a key hiring criterion.

  7. “What would you never trust [product] to do, even if it could?” Identifies trust boundaries, which define the edges of your addressable job space.

  8. “Has the way you do this task changed in the last year? How?” Captures job evolution, essential in a market where capabilities change quarterly.

What Are Common JTBD Mistakes in 2026?

  • ❌ Defining jobs too broadly (“help me be more productive”) → ✅ Define jobs by specific situation and outcome (“help me summarize a 30-page report into 5 bullet points before my 2pm meeting”)
  • ❌ Treating AI competitors as a separate category from non-AI alternatives → ✅ Map all alternatives by job, regardless of whether they use AI. A human assistant and an AI assistant can serve the same job
  • ❌ Assuming your product serves one primary job → ✅ Interview users to discover the 3-5 distinct jobs they hire your product for. You’ll be surprised by the variety
  • ❌ Ignoring the emotional and social dimensions of AI jobs → ✅ Track emotional jobs like “feel confident in my decision” and social jobs like “appear knowledgeable to my team.” These drive retention more than functional performance
  • ❌ Running JTBD analysis once and treating it as done → ✅ Re-run job mapping quarterly. In AI markets, new competitors and capabilities emerge fast enough to shift job definitions in months

FAQ

How is JTBD different from user stories?

User stories describe what a user wants to do within your product (“As a user, I want to filter search results by date”). JTBD describes why the user showed up in the first place, independent of any product (“When I’m researching a topic for a presentation, I want to find the most recent credible sources so I don’t cite outdated information”). User stories are scoped to features. JTBD is scoped to the customer’s life. Use JTBD to decide what to build; use user stories to describe how to build it.

Can you use JTBD for AI products that create new behaviors?

Yes, but you need to reframe the question. AI products often don’t solve existing jobs. They create new ones by making previously impossible tasks trivial. ChatGPT didn’t solve “search better.” It created “think out loud with an AI.” When analyzing new-behavior products, look for latent jobs: things people wanted to do but couldn’t because the cost was too high. The job existed; the affordable solution didn’t.

How many jobs should one product serve?

Most successful products serve 2-5 core jobs well. Fewer than two and your market is too narrow. More than five and you’re spreading too thin, especially in AI where user expectations for quality are high. Notion serves roughly four core jobs (write, organize, collaborate, manage projects). ChatGPT serves three to four. If your product serves ten jobs, you’re probably underserving most of them.

How often should you redo JTBD research?

In stable markets, annually is sufficient. In AI-adjacent markets in 2026, quarterly is better. The reason: new AI capabilities ship monthly, and each capability can create, merge, or eliminate jobs. When OpenAI, Anthropic, or Google ships a major model update, the job landscape shifts. Your JTBD research from six months ago may already reflect a competitive landscape that no longer exists.


JTBD has survived every product management trend cycle because it focuses on the one thing that doesn’t change: customers want to make progress. The jobs are evolving faster than ever in 2026, but the framework for understanding them is the same. Start with the job. Build for the progress.

Map Your Customer Jobs for Free → rocknroll.dev/tools/jobs-to-be-done-canvas/


Turn JTBD insights into product specs

Rock-n-Roll takes your customer research and turns it into structured documentation: strategy briefs, solution blueprints, and builder-ready implementation plans.

Start your free project