
Juan Andrés Núñez


Frontend Engineer specialized in GenAI and professional educator

CRAFT
Last updated: March 2, 2026


CRAFT: Build with AI Without Losing Control

Contextualize · Refine · Act · Formalize · Teardown

flowchart LR
    Contextualize --> Refine
    Refine --> Act
    Act --> Formalize
    Act -.-> Refine
    Formalize --> Teardown

An LLM predicts the next token based on statistical patterns. It doesn't understand your project, your business, or your circumstances. It only generates plausible text given a context.

CRAFT is a 5-phase process for working with AI agents professionally. The goal: for humans to maintain judgment and control while leveraging the agent's speed.

Who is this for

CRAFT is designed for Individual Contributors. The principles scale to teams, but the focus is on your personal workflow.

It doesn't matter if you're just starting out or if you've been doing this for years. If you want to build with AI without relying on luck, this is for you.

Why it works

Each phase produces atomic and contained tasks:

  1. Any competent model can execute them. If the work is well divided, you don't need the latest frontier model.
  2. You can audit everything. If each task is small, you can review each output. If the plan has 50 steps and 40 files, you can't.

This doesn't expire. Contextualizing, planning, executing, auditing, and cleaning up will remain valid regardless of the tool or model. And the testing cycle makes the system antifragile: each discovered error strengthens it with a new test.

1. Contextualize

Define the rules and scope before you start

The agent doesn't know your preferences or your project's context. You need a context file that the agent reads automatically at the start of each session. In Claude Code it's CLAUDE.md, in Copilot it's AGENTS.md — the tool doesn't matter. The principle: context must live in a file the agent reads without you asking.

Global level

Your personal preferences that apply to all your projects: code style, universal constraints, naming conventions, patterns you always use or always avoid.

Project level

Context specific to this project: stack, architectural decisions, business edge cases, known antipatterns, technical debt, non-obvious relationships in the architecture.

Don't add the obvious. If the agent can infer it from package.json or the folder structure, it's redundant.
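As a sketch, a project-level context file might look like this (the stack, decisions, and edge cases here are invented for illustration; adapt the file name to your tool):

```markdown
# Project context

## Stack
React + TypeScript, Vite, TanStack Query.

## Architectural decisions
- Server state lives in TanStack Query; never mirror it in component state.
- All dates cross the API as ISO 8601 strings, never as Date objects.

## Known edge cases
- Invoices created before the 2024 migration have no taxId; treat it as optional.
```

Every line earns its place: nothing here can be inferred from the code or the manifest.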

Define scope as behaviors

Share your ideas with the agent as bullet points: what you want to build and why. Discuss it. Let the agent challenge you, challenge it back. Once aligned, ask the agent to translate those requirements into concrete, verifiable behavioral scenarios.

In practice, the agent will use a GIVEN/WHEN/THEN syntax (the BDD standard — Gherkin). You don't need to know the syntax — the agent does. What matters is that each requirement is defined as something that can be verified, not as a vague description.
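For instance, a requirement like "registered users can reset their password" might come back as a scenario like this (the details are invented for illustration):

```gherkin
Feature: Password reset
  Scenario: Registered user requests a reset link
    Given a registered user with email "ana@example.com"
    When she requests a password reset
    Then a single-use reset link is sent to that email
    And the link expires after 30 minutes
```

Each line is something you can later verify, manually or with a test.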

Save these scenarios in your context file. This defines the "what" of your application and survives across sessions.

As the project grows, a single file may not be enough. Frameworks like OpenSpec organize specs into independent files the agent reads on demand. The principles are the same — persistent context, specs as behavior — only the scale changes.

Automatic context tools

There are tools (MCPs) that inject context to the agent automatically: updated documentation, framework schemas, etc. Use them if they fit your stack. They reduce manual work, but don't replace the human context in your project file.

2. Refine

Plan only the current session

The full scope is already defined in your specs. Here you decide which slice to execute now.

Select the scenarios you'll implement this session. Let the agent propose a tactical plan (the "how"). Interrogate it: look for hidden dependencies, unnecessary complexity, edge cases. Refine until the plan makes sense to you.

If you can't review the plan in 1 minute, it's too big. Divide.

Don't plan in detail what you'll build in 3 sessions. The detailed plan is only for the immediate work. The rest will change when you build.

3. Act

Execute and validate

An AI agent is like an excellent, overcaffeinated junior developer: fast but overconfident. Validate absolutely everything.

Review diffs for modified files and full files for new ones. Interrogate every decision you don't understand. Run linters. Your professional responsibility is to audit the code — even if you didn't write it, it carries your name.

If the plan turns out to be wrong during execution, go back to Refine. This isn't a failure — it's a normal part of the cycle.

Tests for key business logic

Test what matters: critical logic, flows where a silent error would have real consequences, code where "if it fails, it hurts".

When you find a bug in domain logic: create a test that fails, apply the fix, verify it passes. If it doesn't pass, your hypothesis was wrong — investigate further.

The agent fills in what it doesn't know with plausible assumptions. You have the real context: the business, the user, the history. That's your advantage.

4. Formalize

Document the state, not abstract lessons

You've audited. Now preserve the work so the next session starts without friction.

Granular commits

One commit per logical change. Messages that explain the what and the why. If something goes wrong, you can revert without losing everything.

Update the context file

  • Mark completed scenarios — keep the original spec intact, only add a completion marker.
  • If building revealed that a future scenario needs a different approach, update it now.
  • Add non-obvious technical decisions the agent will need in the next session.
  • Don't document what can be inferred from code or manifest files.
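A completion marker can be minimal. For example (the format is invented; any convention works as long as the original spec text stays intact):

```markdown
Scenario: Registered user requests a reset link  <!-- done -->
```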

If tomorrow you can't understand why something was done, documentation is missing or the commit message is bad.

5. Teardown

Close the session

If the four previous steps are done well, you can close the conversation and start fresh. The context file remains as the source of truth. The agent reads it, understands where you are, and continues.

That's the proof you've done it right: close the conversation, open a new one, and the agent picks up where you left off without you explaining anything.

Clearing is the default. If you can't close the session without the agent getting lost, go back to Formalize: documented context is missing.

Compacting (summarizing the conversation without closing it) only makes sense when you're in the middle of complex debugging with diagnostic context that isn't worth documenting yet. If you always depend on compacting, you're not documenting enough in Formalize.

Scaling the same principles

The principles don't change with project size. The tools do:

| Principle | Small project | Large project |
| --- | --- | --- |
| Persistent context | One file in the repo | Files per domain (OpenSpec) |
| Specs as behavior | Scenarios in the context file | BDD framework (Cucumber, Behat) with .feature files |
| Plan per session | In the conversation | Ticket system linked to specs |

Start simple. One file, handwritten scenarios, plan in the conversation. When it grows too much, scale the tool. You already have the principles.

© 2026 FrontendLeap by Juan Andrés Núñez. All rights reserved.