5. Build a Skill
Note: The video covers material not in the guide below — please watch in full.
Action Step
Complete this before moving on.
In the same session from the previous training, ask Claude to reflect on everything it just did and walk you through the steps before writing anything. Then have it interview you with clarifying questions about how the skill should work. Once you have answered its questions, have it write the skill file. Test it by opening a brand new session, loading just the skill file, and telling Claude: "Read this and tell me what you need from me." Submit a screenshot of your skill file with the explorer sidebar visible.
Training Guide
Don't close this session.
You just pulled a transcript from Fireflies, extracted every action item, pushed tasks into Teamwork, and wrote a recap email. The full workflow is still loaded — your conversation history, the transcript, the task list, the prompts that worked. All of it sitting in your context window.
If someone handed you a different transcript tomorrow, you'd have to remember all of that. Which prompts did you use? What order did you do things in? What fields did Teamwork need? How did you structure the recap?
You're not going to remember. So capture it while it's fresh.
(The AI already has the full workflow in its memory. You're going to ask it to write it down)
Step 1: Ask Claude to Reflect
Claude was there for all of it. So ask directly:
"Look at everything we just did in this session — pulling the transcript from Fireflies, extracting action items, formatting tasks for Teamwork, pushing them via the API, and writing the recap email. I want to turn this entire workflow into a reusable skill. Walk me through what the steps were, in order, before we write anything."
Claude lays out the steps it just performed. Read through them. Did it capture the right sequence? Did it miss a step you did manually? Did it include the parts where you corrected it?
(Now define what the skill should actually do)
Step 2: Define the Skill
The workflow you just did has a clear shape. Every time someone processes a call transcript, the same things happen:
- Input: A transcript (from Fireflies or pasted in)
- Clarifying questions: Which project is this for? Which Teamwork board? Who attended the call? Who's the client contact?
- Extract: Action items, decisions made, open questions, owner assignments
- Format: Tasks structured for Teamwork — task name, assignee, due date, description
- Output: Tasks pushed to Teamwork + a client recap email
Tell Claude what the skill needs to cover:
"Here's what I want the skill to do. When I invoke it with a transcript, it should: first, ask me clarifying questions — which project, which Teamwork board, who attended, who's the client contact. Then extract action items, decisions, and open questions with owners. Then format those as Teamwork tasks. Then draft a client recap email. Write this as a skill file — a markdown file with clear numbered steps that another AI session could follow."
See how you're customizing your own program here? That's what a skill is. You're not writing code. You're writing instructions based on work you already did.
(Now let it build the actual file)
Step 3: Generate the Skill File
Claude writes the skill. It produces a markdown file with a description at the top, the clarifying questions it asks, and numbered steps for the full workflow.
Review what it writes:
- Does it include the clarifying questions upfront? If the skill dives into extraction without asking which project or who attended, it'll produce generic output.
- Are the Teamwork formatting requirements captured? You learned during the last training what fields Teamwork needs. If those aren't in the skill, you'll re-specify them every time.
- Is the recap email structure defined? Not just "write a recap" — the tone, the sections, what to include.
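To make the review concrete, here is a sketch of what a skill file with this structure might look like. The headings, field names, and step wording are illustrative assumptions — your actual file should reflect the workflow you just ran, not this template:

```markdown
# Post-Call Processing Skill

## Description
Turns a call transcript into Teamwork tasks and a client recap email.

## Clarifying Questions (ask all of these before extracting anything)
1. Which project is this call for?
2. Which Teamwork board should the tasks go to?
3. Who attended the call?
4. Who is the client contact?

## Steps
1. Read the full transcript.
2. Extract action items, decisions made, and open questions. Assign an owner to every action item.
3. Format each action item as a Teamwork task: task name, assignee, due date, one-sentence description.
4. Push the tasks to the specified Teamwork board.
5. Draft a client recap email: brief summary, decisions made, action items with owners, next steps.
```

The point of the description line at the top is that a fresh session can tell what the skill does without reading the whole file.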
If something's missing, tell Claude to add it. If something's too vague, tell it to be more specific. You just did the work — you know what matters.
Save the file somewhere in your course folder. Name it something clear — post-call-processing-skill.md works.
Step 4: Test It
Open a fresh prompt — or if your context is getting heavy, start a new session. Load the skill file by copying its path and pasting it in. Then give it a different transcript.
"Here's my post-call processing skill: [paste path]. I have a new transcript to process. Here's the file: [paste transcript path]. Follow the skill."
Watch what happens. Does it ask the clarifying questions? Does it extract action items in the right format? Does the recap email match what you'd actually send?
Next time someone hands you a transcript, you invoke this skill. The AI already knows the process — what to extract, how to format it, where to send it.
(But it probably won't be perfect on the first try)
Step 5: Iterate
The AI is a prediction machine, not a rule follower. It reads your skill and pattern-matches against it. Most of the time it follows closely. Sometimes it drifts.
Maybe it skipped a clarifying question. Maybe it formatted tasks differently than you specified. Maybe the recap email had the right content but the wrong tone. This is normal — you ran into this in the fundamentals when you learned about skill adherence.
Don't just fix the output. Fix the skill. If it skipped a question, make that question more prominent. If it changed the format, add an example of what the output should look like. If it's missing nuance, add it.
Ask the model: "The skill missed [specific thing]. How should I rewrite the instructions so future sessions follow this more reliably?" Claude is good at diagnosing its own failure modes. Use that.
Every revision makes the skill tighter. The third version will be dramatically better than the first. Build, test, refine.
A Note on Variability
The skill won't produce identical output every time. Give it the same transcript twice and you'll get slightly different results — different phrasing, different ordering, maybe a different interpretation of who owns an action item.
AI isn't deterministic the way a spreadsheet formula is. It generates output based on probabilities. The skill increases consistency — a lot — but it doesn't guarantee it. You still review the output. You still apply judgment.
The more specific your skill, the less the output varies. Numbered steps, explicit formatting, concrete examples — these all increase adherence. "Extract the important stuff" leaves too much room for interpretation. "Extract every action item with: task name, owner, due date, and one-sentence description" gives the AI a pattern to match.
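One way to apply this inside the skill file itself: include a concrete output example for the model to pattern-match against. The fields and values below are made up for illustration — use whatever your Teamwork board actually requires:

```markdown
## Output Format (example)
- Task: Send revised SOW to client
  - Assignee: Jordan
  - Due date: 2025-06-14
  - Description: Incorporate the scope changes agreed on the call and send for signature.
```

An example like this does more work than a paragraph of formatting rules, because the model copies the shape directly.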
Submission
Take a screenshot of the skill file you created, with the explorer sidebar visible, and submit it.
What You Just Did
You took a workflow that lived only in a conversation and turned it into a reusable asset. Next time, you load the skill and go.
This is the compounding effect from the first training in this section. Every skill you build makes the next engagement faster. The kickoff prep, the presentation workflow — each one stacks.
(The skill you just built lives on your machine. Next up: put it somewhere the whole team can use it)
Comment in Slack
Post your answer in your onboarding channel.
What was your biggest takeaway from this training?