Juan Andrés Núñez

Frontend Engineer specialized in GenAI and professional educator


Craft · View on GitHub
Last updated: March 26, 2026

AI-First Building Methodology

Craft: Build with AI Without Losing Control

Spec · Build · Close

```mermaid
flowchart LR
    Spec -->|spec.md| Build
    Build -->|tech-plan + code| Close
    Build -.->|iterate| Build
    Close -.->|next feature| Spec
```

An LLM predicts the next token based on statistical patterns. It doesn't understand your project, your business, or your circumstances. It only generates plausible text given a context.

Craft is a 3-phase process for working with AI agents professionally. The goal: for humans to maintain judgment and control while leveraging the agent's speed.

Who is this for

Craft is designed for Individual Contributors. The principles scale to teams, but the focus is on your personal workflow.

It doesn't matter if you're just starting out or if you've been doing this for years. If you want to build with AI without relying on luck, this is for you.

Why it works

Each phase produces concrete, verifiable artifacts. Three consequences follow:

  1. Any competent model can execute it. If the work is well-defined, you don't need the latest frontier model.
  2. You can audit everything. If each task is small, you can review each output. If the plan has 50 steps and 40 files, you can't.
  3. Errors strengthen the system. Each iteration uncovers something the plan didn't foresee. That's normal, not a failure.

This doesn't expire. Specifying, building, reconciling, and closing will remain valid regardless of the tool or model.

1. Spec

Define the what before touching code

The agent doesn't know your preferences or your project's context. You need a context file that the agent reads automatically at the start of each session. The tool doesn't matter. The principle: context must live in a file the agent reads without you asking.

Global and project context

Global: your personal preferences that apply to all your projects. Project: stack, architectural decisions, business edge cases, technical debt, non-obvious relationships.

Don't add the obvious. If the agent can infer it from manifests or the folder structure, it's redundant.
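As a sketch, a project context file might look like this (the file name, stack, and details are hypothetical — use whatever convention your tool reads):

```markdown
# Project context (read by the agent at session start)

## Stack
- Next.js, TypeScript strict, Tailwind. Don't suggest CSS-in-JS.

## Decisions
- Server Components by default; client components only for interactivity.

## Business edge cases
- An "inactive" user can still log in, but sees a read-only dashboard.

## Debt
- `legacy/checkout` predates the design system. Don't refactor it in passing.
```

Note what's absent: nothing the agent could infer from `package.json` or the folder tree.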

Define scope as behaviors

Share your ideas with the agent: what you want to build and why. Discuss it. Let the agent challenge you, challenge it back. Once aligned, ask the agent to translate those requirements into concrete, verifiable behavioral scenarios (GIVEN/WHEN/THEN).

What matters is that each requirement is defined as something that can be verified, not as a vague description. Save these scenarios. This defines the "what" of your application and survives across sessions.
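A scenario in that shape might read like this (the feature and its numbers are a hypothetical example):

```gherkin
Feature: Cart checkout

  Scenario: Quantity exceeds available stock
    Given a product with 3 units in stock
    When the user sets the quantity to 5
    Then the quantity is clamped to 3
    And a "limited stock" notice is shown
```

Each line is checkable: you can point at the running app, or a test, and say whether it holds.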

Spec audit

Before approving a spec, attack it: what happens with empty states? With errors? How many items can there be? Who can see or modify this? If you haven't thought about the edges, the agent will fill the gaps with plausible assumptions — and you won't notice until production.

Time invested in the spec multiplies. Every ambiguity resolved here saves an entire iteration later.

2. Build

Plan, execute, iterate

The scope is in the spec. Here you decide how to build it and in what order.

Planning

Let the agent propose a technical plan: architecture decisions (with trade-offs), atomic tasks grouped by dependencies, and what can be parallelized. Interrogate it: look for hidden dependencies, unnecessary complexity, edge cases.

The plan must include which acceptance criterion each task covers. If a criterion has no task, work is missing. If a task covers no criterion, it's unnecessary.

If you can't review the plan in 1 minute, it's too big. Divide.
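A plan you can review in a minute might look like this (task names and criterion IDs are illustrative):

```markdown
## Plan: cart quantity limits

1. [AC-1] Add `clampQuantity` to cart domain logic — no dependencies
2. [AC-1] Wire the clamp into the quantity input — depends on 1
3. [AC-2] Show the "limited stock" notice — depends on 2, parallel with 4
4. [AC-3] Reject over-stock quantities at the API — no dependencies
```

Every task names the acceptance criterion it covers, so the two audits (missing work, unnecessary work) are a scan, not an investigation.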

Execution

The agent executes the tasks. Review diffs. Interrogate every decision you don't understand. Your professional responsibility is to audit the code — even if you didn't write it, it carries your name.

Test where it matters: business logic, data integrity, authentication, API contracts. You don't need to test everything — test what hurts if it fails.

Iteration

If something fails or you discover a gap, adjust the plan and repeat. The agent logs what changed and why in an iteration log. This is a normal part of the cycle, not a failure.

If a spec gap blocks tasks, the agent asks you, records the decision, and continues. The formal spec isn't touched here — that's Close's job.

Don't plan in detail what you'll build three sessions from now. The detailed plan is only for the immediate work. The rest will change as you build.

Golden rule

Don't commit during Build. All code is committed in Close, once it works and is reconciled.

3. Close

Reconcile, secure, persist

You've built. Now make the record match reality.

Spec reconciliation

Compare what the spec says against what was actually built. If you implemented something not in the spec, add it. If something wasn't implemented, flag it. If something changed during execution, update the criterion. The spec stops being a plan and becomes a record of what exists.

This is key: the spec is updated at the end, once. Not during execution, where each change creates noise.

Pre-commit checks

Before committing, two automated checks:

  1. Security: scan the diff for hardcoded secrets (API keys, tokens, private keys). If found, stop.
  2. Quality: run the project's lint, format, and typecheck tools. Mechanical errors are fixed before committing.
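The security check can be a simple pattern scan over the added lines of the diff. A minimal sketch — the pattern list is illustrative, not exhaustive, and `findSecrets` is a hypothetical helper:

```typescript
// Sketch of check 1: flag added lines in a diff that look like secrets.
// Extend the patterns for the providers you actually use.
const SECRET_PATTERNS: RegExp[] = [
  /api[_-]?key\s*[:=]/i,                      // apiKey = "...", API_KEY: ...
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/,       // PEM private keys
  /ghp_[A-Za-z0-9]{36}/,                      // GitHub personal access token
  /sk-[A-Za-z0-9]{20,}/,                      // generic "sk-..." style key
];

function findSecrets(diff: string): string[] {
  return diff
    .split("\n")
    .filter((line) => line.startsWith("+")) // only lines being added
    .filter((line) => SECRET_PATTERNS.some((p) => p.test(line)));
}

// Typical use: feed it the output of `git diff --cached`
// and abort the commit on any hit.
const hits = findSecrets('+const apiKey = "sk-abc123def456ghi789jkl0"');
if (hits.length > 0) console.error("Possible secret in diff:", hits);
```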

Commits and close

One commit per logical change. Messages that explain the what and the why. Update the project context with non-obvious decisions. If tomorrow you can't understand why something was done, documentation is missing or the commit message is bad.
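For example, a message that carries the why, not just the what (contents are illustrative):

```
feat(cart): clamp quantity to available stock

The UI already prevented over-ordering, but the API did not, so
stock could go negative via direct requests. Clamping server-side
keeps the invariant in one place.
```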

The proof you've done it right: close the conversation, open a new one, and the agent keeps up without you explaining anything.

The iteration loop

The fundamental difference from a linear process: Build runs multiple times. Each iteration uncovers something the plan didn't foresee. Each discovery is logged, the plan adjusts, and you continue.

The spec isn't touched during Build — it's reconciled in Close, once, when reality has stabilized. This avoids noisy versioning and keeps the spec as the source of truth.

```mermaid
flowchart TB
    S[Approved spec] --> P[Plan + tasks]
    P --> E[Execute]
    E --> D{Works?}
    D -->|No| A[Adjust plan] --> E
    D -->|Yes| C[Close: reconcile + commit]
```

Scaling the same principles

The principles don't change with project size. The tools do:

| Principle | Small project | Large project |
| --- | --- | --- |
| Persistent context | One file in the repo | Files per domain |
| Specs as behavior | Scenarios in the context file | BDD framework with `.feature` files |
| Plan per session | In the conversation | Ticket system linked to specs |

Start simple. One file, handwritten scenarios, plan in the conversation. When it grows too much, scale the tool. You already have the principles.


© 2026 FrontendLeap by Juan Andrés Núñez. All rights reserved.