Definition
Minimum Viable Product (MVP) is a version of a new product that includes only the essential features needed for the first early-adopter customers to use it and provide feedback for future development. The goal is to validate the product idea with the minimum possible effort and time.
Key MVP concepts:
- Minimum: reduced feature set, only the core features that deliver value
- Viable: functional and usable, not broken or incomplete
- Product: something customers can actually use and potentially pay for
MVP formula: MVP = Minimal feature set + Sufficient quality + Real customer feedback
Unlike a Proof of Concept (PoC), which validates technical feasibility internally, an MVP:
- Is released to real customers (though limited)
- Must be production-quality for core functionality
- Aims to validate product-market fit, not just technical feasibility
- Focuses on learning through usage metrics and customer feedback
Concrete AI MVP example: a startup wants to build an AI-powered recruiting assistant. Instead of developing the complete platform (CV parsing, matching, interview scheduling, analytics), the MVP includes only:
- Upload CV and job description
- AI matching score with brief explanation
- Email notification with top 5 candidates
Features excluded from MVP (future roadmap): interview scheduling, ATS integration, advanced analytics, mobile app, multi-language support.
The MVP is launched to 10 early-adopter HR managers. After 4 weeks: 8/10 use it weekly, NPS is 45, and feedback primarily requests interview scheduling integration. Validation: demand exists, the MVP is successful. Next iteration: add the scheduling feature.
The MVP concept was popularized by Eric Ries in “The Lean Startup” (2011) and earlier by Steve Blank in “The Four Steps to the Epiphany” (2005). The core idea is validated learning: incrementally build the product based on real customer feedback, avoiding months or years spent developing features nobody wants.
How it works
An effective MVP follows the Lean Startup methodology: a rapid, iterative Build-Measure-Learn cycle.
Build-Measure-Learn cycle
1. Build
Define and develop the minimal feature set that delivers the core value:
Feature prioritization framework:
Must-have (included in MVP):
- Features without which product doesn’t work
- Core value proposition
- Minimal viable workflow (user can complete task end-to-end)
Should-have (post-MVP, iteration 2-3):
- Features that improve experience but aren’t critical
- Optimization and polish
Nice-to-have (long-term roadmap):
- Differentiating features but not essential
- Advanced functionality
AI writing assistant MVP example:
- Must-have: grammar check, tone adjustment, basic summarization
- Should-have: plagiarism check, style templates, browser extension
- Nice-to-have: multi-language, team collaboration, API access
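The Must/Should/Nice triage above can be sketched as a simple scoring pass. The feature names, value scores, and thresholds below are illustrative assumptions, not from a real backlog:

```python
# Minimal sketch of MoSCoW-style feature triage for an MVP backlog.
# Feature names, scores (0-10), and thresholds are illustrative.

def triage(features, must_threshold=8, should_threshold=5):
    """Bucket features by a rough value score.

    At or above must_threshold: MVP must-have.
    At or above should_threshold: post-MVP should-have.
    Everything else: long-term nice-to-have roadmap.
    """
    buckets = {"must": [], "should": [], "nice": []}
    for name, value in sorted(features.items(), key=lambda kv: -kv[1]):
        if value >= must_threshold:
            buckets["must"].append(name)
        elif value >= should_threshold:
            buckets["should"].append(name)
        else:
            buckets["nice"].append(name)
    return buckets

backlog = {
    "grammar check": 9,
    "tone adjustment": 8,
    "basic summarization": 8,
    "plagiarism check": 6,
    "browser extension": 5,
    "multi-language": 3,
    "API access": 2,
}
print(triage(backlog))
```

The point of the sketch is the cutoff discipline, not the scoring: everything below the must-have line is explicitly deferred, which is what keeps the MVP minimal.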
Development timeline: an MVP should be buildable in 6-12 weeks for a startup, 3-6 months for a corporate team. Beyond that, the risk of overbuilding grows.
Quality bar: core functionality must be production-ready. UI can be basic but not broken. No crashing bugs, acceptable performance.
2. Measure
Release the MVP to early adopters and collect quantitative and qualitative data:
Quantitative metrics (usage analytics):
- Activation: % users completing onboarding and using core feature
- Engagement: frequency (DAU/MAU), session duration, feature usage
- Retention: % users returning after 1 day, 7 days, 30 days
- Conversion: if freemium/trial, % upgrade to paid
Qualitative feedback (customer interviews):
- Value delivered: is problem solved? How much value created?
- Pain points: what frustrates? What’s missing?
- Willingness to pay: would they pay? How much?
- Referral intent: would they recommend to others? (NPS - Net Promoter Score)
Target sample size: 10-50 early adopters for B2B MVP, 100-1000 for B2C MVP (depends on market size and acquisition cost).
Measurement timeline: 2-8 weeks. Long enough to see retention patterns, not so long that iteration is delayed.
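As a sketch, activation and week-1 retention can be computed directly from a raw event log. The user IDs, event names, and dates below are illustrative:

```python
# Sketch: computing activation and week-1 retention from a raw event log.
from datetime import date

# Hypothetical event log: (user_id, event_name, day)
events = [
    ("u1", "signup", date(2024, 1, 1)),
    ("u1", "core_action", date(2024, 1, 1)),
    ("u1", "core_action", date(2024, 1, 8)),
    ("u2", "signup", date(2024, 1, 1)),
    ("u2", "core_action", date(2024, 1, 2)),
    ("u3", "signup", date(2024, 1, 1)),
]

signups = {u: d for u, e, d in events if e == "signup"}
actions = {}
for u, e, d in events:
    if e == "core_action":
        actions.setdefault(u, []).append(d)

# Activation: share of signed-up users who performed the core action at all.
activated = [u for u in signups if u in actions]
activation = len(activated) / len(signups)

# Week-1 retention: share of users who came back 7+ days after signup.
retained = [u for u in signups
            if any((d - signups[u]).days >= 7 for d in actions.get(u, []))]
week1_retention = len(retained) / len(signups)

print(f"activation={activation:.0%}, week-1 retention={week1_retention:.0%}")
```

In practice these numbers come from an analytics tool rather than hand-rolled code, but defining the metrics this concretely (which event counts as activation, which window counts as retained) is what makes the Measure phase comparable across iterations.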
3. Learn
Analyze data and decide next step:
MVP successful (product-market fit signals):
- High activation and retention (above 40% week-1 retention for B2C, above 60% for B2B)
- Positive qualitative feedback (NPS above 30)
- Organic growth (word-of-mouth, referrals)
- Demonstrated willingness to pay
Decision: iterate on MVP, add requested features, scale go-to-market.
MVP mixed results (some traction, not strong):
- Medium engagement, some passionate users, others drop
- Mixed feedback: some love it, others indifferent
Decision: pivot on segment or feature. Focus on user segment showing most traction, or modify core value proposition.
MVP failed (no product-market fit):
- Low activation/retention (below 20% week-1 retention)
- Negative feedback: “doesn’t solve problem”, “too complex”, “better alternative exists”
Decision: major pivot (change target customer, problem, solution) or kill product.
MVP types and approaches
1. Concierge MVP
Manually deliver a service that will eventually be automated. Little or no software; everything human-powered.
AI content moderation example: instead of building ML model, team manually moderates content for first 50 clients. This validates demand and generates training data for future automation.
Pros: very fast (no dev time), maximum learning (human interaction reveals insights)
Cons: not scalable, labor-intensive
When to use: validate that problem exists and solution has value before investing in automation.
2. Wizard of Oz MVP
The user thinks they’re interacting with an automated product, but a human is doing the work behind the scenes.
AI email assistant example: the user writes an email, clicks “improve”, and receives an improved version. In the MVP there’s no AI: a human copywriter behind the scenes rewrites the email in real time.
Pros: realistic user experience, fast validation
Cons: not scalable, requires team on standby
3. Single-feature MVP
Product with literally one core feature, everything else stripped.
Example: the Instagram MVP was only photo upload + filters + feed. No Stories, no Reels, no DMs, no Explore. One core experience, executed perfectly.
Pros: total focus, high quality on core feature
Cons: value proposition must be extremely clear and compelling
4. Piecemeal MVP
Combine existing tools to create MVP without building from scratch.
AI-powered job board example: use Airtable for the database, Zapier for automation, the GPT-4 API for matching, and a landing page on Webflow. No custom development.
Pros: zero/minimal dev cost, very fast
Cons: limited customization, tool costs can be high
5. Landing page MVP (Smoke Test)
Only a landing page describing the product, with a signup CTA. If enough people sign up, demand likely exists.
Example: Dropbox’s pre-launch video MVP. A 3-minute video showed how Dropbox works, with the call-to-action “sign up for the beta”. 75,000 signups in one night. Validation: huge demand.
Pros: minimal cost, fast validation
Cons: doesn’t validate if the solution works, only if there’s interest
Feature scoping for AI/ML MVP
For AI products, MVP scoping is critical because ML development has high uncertainty:
Scoping principles:
1. Start with rule-based, upgrade to ML if necessary
MVP can use simple heuristics. If user adoption is good, invest in ML.
Recommendation engine MVP example: start with popularity-based (“top 10 products overall”). If traction is good, build collaborative filtering. If still good, invest in personalized deep learning.
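A minimal sketch of that popularity-based baseline, assuming a hypothetical purchase log (product names and counts are illustrative):

```python
# Sketch of the rule-based recommendation baseline: recommend the overall
# top-N products before investing in collaborative filtering or ML.
from collections import Counter

def top_n(purchases, n=3, exclude_owned=None):
    """Rank products by global purchase count (non-personalized baseline)."""
    counts = Counter(product for _, product in purchases)
    owned = set(exclude_owned or [])
    return [p for p, _ in counts.most_common() if p not in owned][:n]

# Hypothetical purchase log: (user_id, product_id)
log = [("u1", "A"), ("u2", "A"), ("u3", "A"),
       ("u1", "B"), ("u2", "B"), ("u3", "C")]

print(top_n(log))                       # most popular products overall
print(top_n(log, exclude_owned=["A"]))  # same ranking, minus owned items
```

A baseline this simple ships in a day, yet it produces the usage data (clicks on recommendations, conversion) that justifies, or rules out, the later ML investment.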
2. Pre-trained model vs custom model
MVP should use off-the-shelf pre-trained when possible. Custom training only if strictly necessary.
Sentiment analysis example: MVP uses Hugging Face pre-trained model. If accuracy insufficient, fine-tune on domain-specific data.
3. Human-in-the-loop for complex tasks
MVP can have human review/override for low-confidence predictions.
Fraud detection example: ML model flags suspicious transactions, human reviewer makes final decision. This allows launch with imperfect model, improve gradually.
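The routing logic behind that pattern can be sketched in a few lines; the thresholds below are illustrative, not tuned values:

```python
# Sketch of human-in-the-loop routing: auto-handle confident predictions,
# send the uncertain band to a human reviewer. Thresholds are illustrative.

AUTO_APPROVE = 0.10   # below this fraud score: approve automatically
AUTO_BLOCK = 0.95     # above this fraud score: block automatically

def route(fraud_score):
    """Decide how a transaction is handled given a model's fraud score."""
    if fraud_score < AUTO_APPROVE:
        return "approve"
    if fraud_score > AUTO_BLOCK:
        return "block"
    return "human_review"   # uncertain band: a reviewer makes the final call

scores = [0.02, 0.50, 0.97, 0.12]
decisions = [route(s) for s in scores]
print(decisions)
```

As the model improves, the uncertain band narrows (thresholds move toward each other) and the human share of the workload shrinks, which is exactly the gradual-improvement path the MVP needs.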
4. Limit domain scope
AI MVP can focus on narrow use case, expand later.
Legal contract analysis example: MVP only for NDAs (most common, standardized). Expand to employment contracts, then M&A agreements incrementally.
Use cases
B2C SaaS: Notion MVP
Notion today is an all-in-one workspace (notes, wiki, databases, project management, 30M+ users). The 2016 MVP was much more limited:
MVP feature set:
- Block-based editor (text, heading, bullet)
- Nested pages
- Basic collaboration (share, comment)
- Web + desktop app
Features NOT in MVP:
- Database (added 2018)
- Templates
- API
- Mobile app (arrived later)
- Integrations
Launch strategy:
- Invitation-only beta for 1,000 early adopters (Product Hunt community)
- Focus on intensive qualitative feedback
Metrics tracked:
- Daily active usage
- Pages created per user
- Retention day-7, day-30
Results after 3 months:
- Day-30 retention: 50% (very high for productivity tool)
- NPS: 60+ (extremely positive)
- Top feature request: database functionality
Learning:
- Core value (flexible, beautiful note-taking) validated
- User segment identified: knowledge workers, creatives, startup teams
- Clear roadmap: database feature most requested
Iteration: Notion added databases in 2018, triggering viral growth. The MVP allowed validating the core concept before building complex features.
B2B AI: Jasper (AI copywriting) MVP
Jasper (originally Jarvis) is an AI writing assistant for marketing content. The 2021 MVP:
MVP feature set:
- GPT-3 powered text generation
- 5 templates: blog post intro, product description, Facebook ad, email subject line, AIDA framework
- Simple web UI: input template + context, output text
- Copy-paste workflow (no integrations)
Features NOT in MVP:
- Chrome extension
- Plagiarism checker
- SEO optimization
- Brand voice training
- Team collaboration
- 50+ templates (arrived later)
Launch strategy:
- AppSumo deal to acquire early adopters (lifetime deal $49)
- 5,000 customers first 2 months
Metrics tracked:
- % users generating content beyond trial
- Word count generated per user
- NPS and qualitative feedback
Results after 2 months:
- 80% users generated content beyond first session (high activation)
- Average 10K words/user/month
- NPS: 50+
- #1 feature request: Chrome extension to write directly in Google Docs, WordPress
Learning:
- Core value proposition validated: AI saves time on copywriting
- Target segment: solopreneurs, small marketing agencies, content creators
- Integration with existing tools (Docs, WordPress) is critical for workflow
Iteration: Jasper built the Chrome extension (shipped 3 months after the MVP), then added SEO features, reaching $100M ARR in 18 months. The MVP allowed fast validation and iteration.
Marketplace: Airbnb MVP
Airbnb MVP (2008) was extremely minimal:
MVP feature set:
- Photo upload for apartments
- Listing description
- Booking request via email (no payment processing)
- No reviews, no messaging platform, no identity verification
Launch strategy:
- Founders personally recruited first 10 hosts in San Francisco
- Photographed apartments themselves (no host photos)
- Target event: Democratic National Convention (housing shortage opportunity)
Results:
- 3 bookings first week
- Revenue: $200
- Learning: the concept works, but execution was rough (low photo quality, trust issues)
Iteration:
- Founders personally visited every listing, took professional photos (not scalable but critical for quality)
- Added payment processing
- Added reviews (trust mechanism)
Airbnb iterated for 3+ years before achieving strong product-market fit. The MVP allowed starting with minimal investment and learning incrementally.
Enterprise AI: Scale AI MVP
Scale AI (data labeling platform for ML training) 2016 MVP:
MVP feature set:
- API to submit labeling task (image bounding box, classification)
- Human labeling workforce (crowd-sourced)
- Basic quality control (majority vote)
- Dashboard to view results
Features NOT in MVP:
- ML-assisted labeling
- Advanced QC (consensus algorithms, expert review)
- Video, LiDAR, 3D labeling (images only)
- Custom workflow builder
- Enterprise SSO, compliance features
Launch strategy:
- Y Combinator Demo Day pitch
- Target: self-driving car startups (hot sector, high demand for training data)
- First 5 customers: personal founder outreach
Metrics tracked:
- Tasks completed per day
- Accuracy (measured against ground truth)
- Customer retention month-over-month
Results after 6 months:
- 10 paying customers
- 1M tasks labeled
- Accuracy: 95%+
- Retention: 80% (high for B2B)
Learning:
- Market demand validated (ML teams willing to pay premium for quality labels)
- Quality bar critical: accuracy below 90% unacceptable for autonomous driving
- Speed matters: turnaround time under 24h is competitive advantage
Iteration: Scale added ML-assisted labeling and video/LiDAR support, and raised a Series A. Today it is valued at over $7B. The MVP validated the market and allowed raising the capital to scale.
Corporate innovation: Adobe Firefly MVP (internal)
Adobe Firefly (generative AI for creative content) started as internal MVP:
MVP feature set (internal beta):
- Text-to-image generation (Stable Diffusion based)
- Integration with Adobe Express (web)
- Limited to Adobe employees (500 beta testers)
Features NOT in MVP:
- Photoshop/Illustrator integration
- Video generation
- 3D generation
- Commercial-safe training data (MVP used open datasets)
Internal metrics:
- % employees using weekly
- Images generated per user
- Satisfaction survey
Results after 3 months internal beta:
- 60% weekly active among beta group
- Average 50 images/user/month
- Feedback: sufficient quality, good speed, Creative Cloud tools integration critical
Learning:
- Internal demand validated (employees would use)
- Training data concern: enterprise customers need commercial-safe (no copyright issues)
- Workflow integration more important than standalone tool
Iteration: Adobe rebuilt Firefly with proprietary training data (Adobe Stock), deep Photoshop integration. Public launch 2023. Internal MVP de-risked multi-million investment in proprietary model training.
Practical considerations
MVP timeline and cost
Typical timelines:
Startup MVP:
- Solo founder + no-code tools: 2-4 weeks
- Small team (2-3 dev) + custom dev: 6-12 weeks
- VC-backed team (5-10 people): 3-6 months
Corporate MVP:
- Innovation lab: 3-6 months (includes stakeholder alignment, compliance)
- New product line: 6-12 months (enterprise requirements, existing systems integration)
Budget:
- No-code MVP: 5-20K euros (tools subscription, landing page, ads)
- Simple web app: 30-80K euros (3 months, 2 developers)
- Complex AI MVP: 100-300K euros (6 months, ML engineer + backend + frontend + data)
Rule of thumb: MVP should cost 5-10% of projected full product budget.
MVP success metrics
Activation: % users completing core action in first session
- Good: above 40% for B2C, above 60% for B2B
- Excellent: above 60% B2C, above 80% B2B
Retention:
- Day-1: above 30%
- Week-1: above 20% (B2C), above 40% (B2B)
- Month-1: above 10% (B2C), above 30% (B2B)
NPS (Net Promoter Score):
- Below 0: serious problem, users unhappy
- 0-30: ok but not strong enthusiasm
- 30-50: good, some passionate advocates
- Above 50: excellent, strong word-of-mouth potential
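NPS itself is a simple formula: percent promoters (scores 9-10) minus percent detractors (scores 0-6), ignoring passives (7-8). A minimal sketch with an illustrative survey:

```python
# Standard NPS computation: % promoters (9-10) minus % detractors (0-6),
# on 0-10 "how likely are you to recommend?" survey scores.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Illustrative survey: 5 promoters, 3 passives (7-8), 2 detractors.
survey = [10, 9, 9, 10, 9, 8, 7, 8, 5, 6]
print(nps(survey))  # 50% promoters - 20% detractors = 30
```

Note that NPS can range from -100 (all detractors) to +100 (all promoters), which is why the score bands above start below zero.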
Qualitative signals:
- Users asking “when will feature X be available?” (demand for iteration)
- Users willing to pay without prompting
- Unsolicited positive feedback, testimonials
- Organic referrals
Common mistakes in MVP
1. Overbuilding (too many features)
MVP including 20 features instead of 3 core. Result: 6 months development, delayed validation, scope creep.
Fix: apply the 80/20 rule: 20% of features deliver 80% of the value. Focus on those.
2. Underbuilding (too minimal, broken experience)
MVP so minimal it doesn’t deliver value. User tries once, immediately drops.
Bad MVP example: AI writing tool generating text but 50% is gibberish. User can’t use it.
Fix: “Viable” is the key word. The core feature must work well. Better to have one excellent feature than five mediocre ones.
3. Wrong target audience
Launch MVP to mass market instead of early adopters. Mass market requires polish and completeness. Early adopters tolerate rough edges but want innovation.
Fix: identify the innovators/early-adopter segment. They are forgiving and provide the best feedback.
4. No metrics/feedback plan
Build MVP, launch, but no systematic data collection. Anecdotal feedback only.
Fix: pre-define key metrics, set up analytics (Mixpanel, Amplitude), and schedule customer interviews.
5. Analysis paralysis (too much time analyzing, little iterating)
Spend 3 months analyzing MVP feedback, then 6 months building version 2. Cycle too slow.
Fix: rapid iteration. 2-4 weeks max per iteration cycle. Ship fast, learn fast.
MVP vs other validation methods
When MVP is appropriate:
- Product concept is clear but uncertain if market wants it
- You have access to early adopter segment
- You have resources to build functional product (6-12 weeks dev)
- Risk is medium-high (investment 100K+ if full build)
When to use alternatives:
Landing page test (Smoke test): if you want fast demand validation before building anything. Cost: 1-5K euros, timeline: 1 week.
PoC: if uncertainty is technical feasibility, not market demand. Validate solution works technically before building user-facing product.
Prototype: if uncertainty is UX/design. Interactive mockup to test workflow without backend.
Concierge/Wizard of Oz: if you want to minimize dev entirely. Manually deliver service.
Sequencing: the optimal path often combines several: Landing page test → PoC (if AI/ML) → Concierge MVP → Software MVP → Scale.
Transitioning MVP to full product
After MVP success, typical roadmap:
Phase 1: MVP (month 0-3)
- Core features only
- 10-100 users
- Manual ops acceptable
- Goal: validate PMF
Phase 2: Early product (month 3-9)
- Add top 3-5 requested features
- Improve core feature quality
- 100-1000 users
- Automate critical ops
- Goal: achieve strong retention, NPS 40+
Phase 3: Growth product (month 9-18)
- Feature parity with main competitors
- Scale infrastructure
- 1000-10000 users
- GTM investment (sales, marketing)
- Goal: acquire customers efficiently
Phase 4: Mature product (month 18+)
- Differentiation features
- Platform capabilities (API, integrations)
- 10K+ users
- Optimize for profitability
- Goal: market leadership or defensible niche
Investment scaling: MVP 50K → Early product 300K → Growth product 2M → Mature product 10M+ (cumulative).
MVP is foundation. Avoid temptation to rebuild from scratch: iterate and refactor incrementally.
Common misconceptions
“MVP must be low quality”
MVP must be minimal in scope, not quality. Core feature must work well.
Example: Dropbox MVP had only file sync (minimal scope), but sync was reliable, fast (high quality). If file sync had been buggy, nobody would have used it.
Contrast: if the Dropbox MVP had included sharing, mobile, photo backup, and version history (overscoped) but file sync had been unreliable, failure was guaranteed.
Principle: nail one thing, don’t do ten things poorly.
“MVP is only for tech startups”
The MVP methodology is applicable to:
- Corporate innovation: test a new product line with an internal MVP before full launch
- Physical products: hardware MVP (3D-printed prototype, limited batch)
- Services: launch a consulting service with an MVP offering (one core service, limited geography)
- Non-profit: test program impact with a restricted pilot before scaling
Lean Startup principles are universal: minimize waste, maximize learning, iterate rapidly.
“MVP is a one-time event”
MVP is beginning of iterative process, not end.
Correct sequence:
- MVP v1: validate core hypothesis
- MVP v2: add most-requested feature, improve retention
- MVP v3: expand user segment or use case
- … continue iterating
Airbnb iterated for 3 years. Slack iterated for 2 years. Product-market fit is emergent, not instant.
Mindset: MVP is learning vehicle, not shipping goal. Be prepared to iterate 5-10 cycles.
Related terms
- PoC: Proof of Concept validates technical feasibility before building MVP
- Product-Market Fit: final goal that MVP helps achieve
Sources
- Ries, E. (2011). The Lean Startup. Crown Business
- Blank, S. (2005). The Four Steps to the Epiphany
- Maurya, A. (2012). Running Lean: Iterate from Plan A to a Plan That Works