
This Entire App Was Built by AI

From concept to working product — in a single conversation.

The Prompt

It started with a simple idea: "Create an SC-200 certification prep game." Given only that prompt and a link to a practice-test website for inspiration, the AI took it from there:

  • Scraped the entire Microsoft study guide
  • Memorized the four domains and their exact percentage weights
  • Designed five distinct game modes
  • Invented an achievement system
  • Proposed a leaderboard

The PRD

Before writing a single line of code, the AI generated a Product Requirements Document. It asked five clarifying questions — goal, data source, auth model, game modes, and UI polish level — then produced a structured PRD with 12 user stories, 15 functional requirements, and detailed acceptance criteria. That document became the blueprint for everything you see here.

The Skill Template

The AI didn't improvise the PRD from scratch. It followed a pre-defined skill template — a structured instruction file (.claude/skills/prd/SKILL.md) that tells the AI exactly how to gather requirements and format the output. The template defines the step-by-step process: ask clarifying questions with lettered options, generate user stories with verifiable acceptance criteria, list numbered functional requirements, and define clear non-goals. It even includes an example PRD to establish the quality bar.

Think of it as a reusable playbook. Any future project — not just this SC-200 game — can use the same skill template to produce a structured PRD in minutes. The AI follows the template, asks the right questions, and outputs a document that's ready for implementation.

This PRD skill was inspired by snarktank/ai-dev-tasks — an open-source task generation template for AI-driven development.


name: prd
description: "Generate a Product Requirements Document (PRD) for a new feature. Use when planning a feature, starting a new project, or when asked to create a PRD. Triggers on: create a prd, write prd for, plan this feature, requirements for, spec out."

PRD Generator

Create detailed Product Requirements Documents that are clear, actionable, and suitable for implementation.


The Job

  1. Receive a feature description from the user
  2. Ask 3-5 essential clarifying questions (with lettered options)
  3. Generate a structured PRD based on answers
  4. Save to tasks/prd-[feature-name].md

Important: Do NOT start implementing. Just create the PRD.


Step 1: Clarifying Questions

Ask only critical questions where the initial prompt is ambiguous. Focus on:

  • Problem/Goal: What problem does this solve?
  • Core Functionality: What are the key actions?
  • Scope/Boundaries: What should it NOT do?
  • Success Criteria: How do we know it's done?

Format Questions Like This:

1. What is the primary goal of this feature?
   A. Improve user onboarding experience
   B. Increase user retention
   C. Reduce support burden
   D. Other: [please specify]

2. Who is the target user?
   A. New users only
   B. Existing users only
   C. All users
   D. Admin users only

3. What is the scope?
   A. Minimal viable version
   B. Full-featured implementation
   C. Just the backend/API
   D. Just the UI

This lets users respond with "1A, 2C, 3B" for quick iteration.
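The compact reply format is simple enough to parse mechanically. As an illustration only (the skill relies on the AI reading the reply, not on code), here is a Python sketch with a hypothetical parse_answers helper:

```python
import re

def parse_answers(reply):
    """Turn a compact reply like '1A, 2C, 3B' into {question_number: option_letter}."""
    # Each answer is a question number followed by a lettered option A-E.
    return {int(n): letter.upper() for n, letter in re.findall(r"(\d+)\s*([A-Ea-e])", reply)}

print(parse_answers("1A, 2C, 3B"))  # {1: 'A', 2: 'C', 3: 'B'}
```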


Step 2: PRD Structure

Generate the PRD with these sections:

1. Introduction/Overview

Brief description of the feature and the problem it solves.

2. Goals

Specific, measurable objectives (bullet list).

3. User Stories

Each story needs:

  • Title: Short descriptive name
  • Description: "As a [user], I want [feature] so that [benefit]"
  • Acceptance Criteria: Verifiable checklist of what "done" means

Each story should be small enough to implement in one focused session.

Format:

### US-001: [Title]
**Description:** As a [user], I want [feature] so that [benefit].

**Acceptance Criteria:**
- [ ] Specific verifiable criterion
- [ ] Another criterion
- [ ] Typecheck/lint passes
- [ ] **[UI stories only]** Verify in browser using dev-browser skill

Important:

  • Acceptance criteria must be verifiable, not vague. "Works correctly" is bad. "Button shows confirmation dialog before deleting" is good.
  • For any story with UI changes: Always include "Verify in browser using dev-browser skill" as acceptance criteria. This ensures visual verification of frontend work.

4. Functional Requirements

Numbered list of specific functionalities:

  • "FR-1: The system must allow users to..."
  • "FR-2: When a user clicks X, the system must..."

Be explicit and unambiguous.

5. Non-Goals (Out of Scope)

What this feature will NOT include. Critical for managing scope.

6. Design Considerations (Optional)

  • UI/UX requirements
  • Link to mockups if available
  • Relevant existing components to reuse

7. Technical Considerations (Optional)

  • Known constraints or dependencies
  • Integration points with existing systems
  • Performance requirements

8. Success Metrics

How will success be measured?

  • "Reduce time to complete X by 50%"
  • "Increase conversion rate by 10%"

9. Open Questions

Remaining questions or areas needing clarification.


Writing for Junior Developers

The PRD reader may be a junior developer or AI agent. Therefore:

  • Be explicit and unambiguous
  • Avoid jargon or explain it
  • Provide enough detail to understand purpose and core logic
  • Number requirements for easy reference
  • Use concrete examples where helpful

Output

  • Format: Markdown (.md)
  • Location: tasks/
  • Filename: prd-[feature-name].md (kebab-case)
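The kebab-case filename convention amounts to a tiny slug function. A Python sketch of the idea (prd_filename is a hypothetical helper, not part of the skill itself):

```python
import re

def prd_filename(feature_name):
    """Kebab-case the feature name into the tasks/prd-[feature-name].md convention."""
    # Lowercase, collapse any run of non-alphanumerics to a single hyphen.
    slug = re.sub(r"[^a-z0-9]+", "-", feature_name.lower()).strip("-")
    return f"tasks/prd-{slug}.md"

print(prd_filename("Task Priority System"))  # tasks/prd-task-priority-system.md
```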

Example PRD

# PRD: Task Priority System

## Introduction

Add priority levels to tasks so users can focus on what matters most. Tasks can be marked as high, medium, or low priority, with visual indicators and filtering to help users manage their workload effectively.

## Goals

- Allow assigning priority (high/medium/low) to any task
- Provide clear visual differentiation between priority levels
- Enable filtering and sorting by priority
- Default new tasks to medium priority

## User Stories

### US-001: Add priority field to database
**Description:** As a developer, I need to store task priority so it persists across sessions.

**Acceptance Criteria:**
- [ ] Add priority column to tasks table: 'high' | 'medium' | 'low' (default 'medium')
- [ ] Generate and run migration successfully
- [ ] Typecheck passes

### US-002: Display priority indicator on task cards
**Description:** As a user, I want to see task priority at a glance so I know what needs attention first.

**Acceptance Criteria:**
- [ ] Each task card shows colored priority badge (red=high, yellow=medium, gray=low)
- [ ] Priority visible without hovering or clicking
- [ ] Typecheck passes
- [ ] Verify in browser using dev-browser skill

### US-003: Add priority selector to task edit
**Description:** As a user, I want to change a task's priority when editing it.

**Acceptance Criteria:**
- [ ] Priority dropdown in task edit modal
- [ ] Shows current priority as selected
- [ ] Saves immediately on selection change
- [ ] Typecheck passes
- [ ] Verify in browser using dev-browser skill

### US-004: Filter tasks by priority
**Description:** As a user, I want to filter the task list to see only high-priority items when I'm focused.

**Acceptance Criteria:**
- [ ] Filter dropdown with options: All | High | Medium | Low
- [ ] Filter persists in URL params
- [ ] Empty state message when no tasks match filter
- [ ] Typecheck passes
- [ ] Verify in browser using dev-browser skill

## Functional Requirements

- FR-1: Add `priority` field to tasks table ('high' | 'medium' | 'low', default 'medium')
- FR-2: Display colored priority badge on each task card
- FR-3: Include priority selector in task edit modal
- FR-4: Add priority filter dropdown to task list header
- FR-5: Sort by priority within each status column (high to medium to low)

## Non-Goals

- No priority-based notifications or reminders
- No automatic priority assignment based on due date
- No priority inheritance for subtasks

## Technical Considerations

- Reuse existing badge component with color variants
- Filter state managed via URL search params
- Priority stored in database, not computed

## Success Metrics

- Users can change priority in under 2 clicks
- High-priority tasks immediately visible at top of lists
- No regression in task list performance

## Open Questions

- Should priority affect task ordering within a column?
- Should we add keyboard shortcuts for priority changes?

Checklist

Before saving the PRD:

  • Asked clarifying questions with lettered options
  • Incorporated user's answers
  • User stories are small and specific
  • Functional requirements are numbered and unambiguous
  • Non-goals section defines clear boundaries
  • Saved to tasks/prd-[feature-name].md

Watch the Process

Ryan Carson demonstrates the PRD-to-implementation workflow that powered this project. The AI agent asks clarifying questions, generates a structured plan, and turns it into working features.

In a field prone to overpromising, Ryan Carson's PRD workflow for the Ralph Wiggum AI agent loop is refreshingly straightforward: it's a markdown file you use to develop features. An agent asks three to five clarifying questions, turns your feature description into a plan, and once the PRD exists, complete with user stories, you can implement it.

The Build

The AI built the entire application in one continuous session using Claude Code, Anthropic's CLI tool for software development. It created database migrations, Eloquent models, a seeder with 151 real SC-200 exam questions (including KQL code snippets and scenario-based questions), five reactive Livewire game modes, an achievements system, session-based progress tracking, and a competitive leaderboard — all wired together with 32 passing tests.

The Stack

  • Laravel 12: PHP framework
  • Livewire 4: Reactive components
  • Alpine.js: Client-side interactivity
  • Tailwind CSS: Styling
  • SQLite: Database
  • Pest 4: 32 tests, 66 assertions

By the Numbers

  • 151 questions
  • 5 game modes
  • 13 achievements
  • 32 tests passing
  • 4 exam domains
  • 20 KQL snippets
  • 32 scenarios
  • 0 lines written by a human

What This Means

This project demonstrates what's possible when AI handles the entire software development lifecycle — from requirements gathering and architecture design to implementation and testing. The AI didn't just generate boilerplate; it made real design decisions, wrote technically accurate security exam questions, implemented complex game logic, and ensured code quality with comprehensive tests.

The future of software development isn't about replacing developers — it's about amplifying what's possible. What used to take a team days to build can now be prototyped in a single conversation.

The Source Document

Below is the complete Product Requirements Document that was generated before any code was written. Every feature in this app traces back to this document.

PRD: SC-200 Certification Prep Game

Introduction

An interactive, competitive quiz-based game that helps users prepare for the Microsoft Certified: Security Operations Analyst Associate (SC-200) exam. The game features multiple game modes — Quick Quiz, Domain Practice, Timed Exam Simulation, Daily Challenge, and Streak Mode — with a hardcoded question bank covering all four exam domains. Progress is stored in the browser session (no authentication required). The focus is on fun, competitive gameplay with leaderboards and achievements to keep users engaged while they study.

Source Reference: Vision Training Systems SC-200 Practice Test and Microsoft Official Study Guide.

Goals

  • Provide a fun, competitive study tool that makes SC-200 exam prep engaging
  • Cover all 4 exam domains with proper weighting matching the real exam
  • Support 5 distinct game modes for varied study experiences
  • Track progress, scores, streaks, and achievements in session storage
  • Include detailed explanations for every answer to reinforce learning
  • Feature KQL code snippet questions with syntax highlighting
  • Include scenario-based questions simulating real SOC analyst situations
  • Ship with a hardcoded bank of 150+ questions across all domains and difficulty levels

SC-200 Exam Domains & Weighting

  1. Manage a security operations environment (20-25%): Defender XDR settings, asset/environment management, Sentinel workspace design, data source ingestion
  2. Configure protections and detections (15-20%): Defender security technology protections, Defender XDR detections, Sentinel detections
  3. Manage incident response (25-30%): Defender portal alerts/incidents, Defender for Endpoint, M365 investigations, Sentinel incidents, Security Copilot
  4. Manage security threats (15-20%): Threat hunting with Defender XDR, threat hunting with Sentinel, Sentinel workbooks
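The weight bands translate directly into question counts for the 40-question exam simulation. A rough Python sketch of that arithmetic (question_range is a hypothetical helper; the app itself is PHP/Livewire):

```python
# Official SC-200 domain weight bands, as (low, high) fractions.
WEIGHTS = {
    "Manage a security operations environment": (0.20, 0.25),
    "Configure protections and detections": (0.15, 0.20),
    "Manage incident response": (0.25, 0.30),
    "Manage security threats": (0.15, 0.20),
}

def question_range(total):
    """Round each weight band to whole questions out of `total`."""
    return {
        domain: (round(lo * total), round(hi * total))
        for domain, (lo, hi) in WEIGHTS.items()
    }

for domain, (lo, hi) in question_range(40).items():
    print(f"{domain}: {lo}-{hi} questions")
```

For a 40-question exam this yields 8-10, 6-8, 10-12, and 6-8 questions per domain respectively.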

Functional Requirements

  • FR-1: The system must store 150+ SC-200 questions with domain, subdomain, difficulty, type, question text, optional code snippet, JSON options, and explanation
  • FR-2: Questions must be distributed across all 4 exam domains matching official exam weighting percentages
  • FR-3: The system must support 4 question types: multiple choice, multiple select, scenario-based, and KQL code snippet
  • FR-4: Quick Quiz mode serves 10 random domain-weighted questions with immediate answer feedback and explanations
  • FR-5: Domain Practice mode serves 15 questions filtered by a user-selected domain and optional difficulty level
  • FR-6: Timed Exam Simulation serves 40 domain-weighted questions with a 60-minute countdown, question navigation, flagging, and pass/fail scoring (700/1000 threshold)
  • FR-7: Daily Challenge serves 5 deterministic questions based on the current date, completable once per day per session
  • FR-8: Streak Mode serves questions of escalating difficulty until the user answers incorrectly
  • FR-9: KQL code snippets render in a formatted, dark-themed monospace block
  • FR-10: The achievement system tracks 10+ achievements that unlock automatically based on gameplay actions
  • FR-11: All progress is persisted server-side keyed by Laravel session ID
  • FR-12: The landing page displays game mode selection, session stats, domain mastery bars, and achievements
  • FR-13: Leaderboard displays top 20 scores for Exam Simulation and Streak Mode
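FR-7's determinism is the interesting bit: seeding a random generator with the current date means every session sees the same five questions on a given day, with no coordination needed. A hedged Python sketch of that idea (daily_challenge_ids is hypothetical; the real implementation lives in Laravel):

```python
import random
from datetime import date

def daily_challenge_ids(question_ids, today, count=5):
    """Seed the RNG with the ISO date so every player gets the same
    daily questions (illustrative sketch, not the app's actual code)."""
    rng = random.Random(today.isoformat())
    return rng.sample(question_ids, count)

ids = list(range(1, 152))  # the 151-question bank
# Same date, same picks: the challenge is deterministic per day.
assert daily_challenge_ids(ids, date(2024, 6, 1)) == daily_challenge_ids(ids, date(2024, 6, 1))
```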

Technical Stack

  • Framework: Laravel 12 + Livewire 4 for reactive quiz components
  • UI: Flux UI Free components, Tailwind CSS
  • Data: Hardcoded question bank via Laravel seeder; SQLite database
  • Session: Laravel's built-in session handling (no auth required)
  • Timer: Livewire + Alpine.js for the exam countdown
  • Testing: Pest 4 with 32 feature tests

Non-Goals

  • No user authentication or account creation
  • No admin panel or CMS for managing questions
  • No AI-generated questions
  • No payment or premium tier
  • No multiplayer or real-time competitive features
  • No mobile app (web only, but responsive)

Ready to study?

Start Playing