A skill is, in effect, a system prompt that can learn and be extended. It consists of a SKILL.md file with YAML frontmatter (name and description) followed by Markdown instructions; scripts, reference docs, and assets can optionally be bundled alongside it. The description in the frontmatter is the most important part, because it determines when Claude activates the skill. You don't have to write this file yourself, though: Claude does it for you.
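For illustration, a minimal SKILL.md might look like this (the skill name, description, and instructions are invented for the example, not taken from a real skill):

```markdown
---
name: changelog-writer
description: Drafts and formats changelogs from commit history. Use when the user asks to write, update, or review a changelog or release notes.
---

# Changelog Writer

When asked for a changelog:
1. Group changes by type: features, fixes, breaking changes.
2. Write each entry from the user's perspective, not the implementer's.
3. Lead with breaking changes, because readers scan for them first.
```

Note how each instruction carries its reasoning ("because readers scan for them first") rather than a bare rule.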
You launch the Skill Creator in chat with this command:
/anthropic-skills:skill-creator
The process follows a clear cycle:
1. Set the intent — first you clarify: what should the skill do? When should it fire? What output format is expected? Do we need test cases?
2. Interview & research — the Skill Creator asks targeted questions about edge cases, input/output formats, success criteria, and dependencies.
3. Write SKILL.md — based on your answers, a draft of the skill file is created. Important: the description should be intentionally a bit "generous" so the skill activates in related contexts too.
4. Test — 2–3 realistic test prompts are created and run, once with and once without the skill.
5. Evaluate — the results are shown in an interactive viewer with both qualitative outputs and quantitative benchmarks (pass rate, tokens, runtime).
6. Improve & repeat — based on your feedback the skill gets reworked. The key here is to explain the "why" behind every instruction instead of just setting rigid rules.
7. Optimise the description — finally the skill description can be auto-optimised: the Skill Creator generates 20 test queries, measures trigger accuracy, and iteratively improves the text.
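The auto-optimisation loop in step 7 boils down to measuring trigger accuracy over labeled test queries. A minimal sketch of that metric (the keyword matcher here is a naive stand-in for Claude's actual activation decision, and all names are illustrative):

```python
# Sketch of the metric behind description optimisation: what fraction of
# labeled test queries trigger (or correctly don't trigger) the skill.
# would_trigger is a toy keyword heuristic, not Claude's real activation logic.

def would_trigger(description: str, query: str) -> bool:
    """Fire if any word of the description appears in the query (toy heuristic)."""
    desc_words = set(description.lower().split())
    return any(word in desc_words for word in query.lower().split())

def trigger_accuracy(description: str, labeled: list[tuple[str, bool]]) -> float:
    """Share of queries where triggering matched the expected label."""
    hits = sum(would_trigger(description, q) == expected for q, expected in labeled)
    return hits / len(labeled)

queries = [
    ("compare our pricing page with competitor X", True),
    ("write a haiku about autumn", False),
]
print(trigger_accuracy("compares competitor pricing and messaging", queries))  # 1.0
```

The real Skill Creator runs the 20 generated queries against Claude and records whether the skill activated; only the accuracy arithmetic carries over from this sketch.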
Always explain the "why" behind instructions, instead of relying on rigid MUST/NEVER rules. Keep the SKILL.md under 500 lines and move heavy content into reference files. Generalise from individual examples so the skill stays broadly applicable.
You launch the skill with: /competitive-analysis
This skill delivers a complete framework for competitor analysis. It covers the entire process: from systematic research across primary and secondary sources (websites, review portals, analyst reports, SEO data) through to structured evaluation.
Particularly useful are the built-in frameworks: the Messaging Matrix compares taglines, value props, and tone across several competitors. The Narrative Analysis uncovers which story structures competitors use (who's the "villain"? What transformation is promised?). The Content Gap Analysis identifies topics and formats you cover and your competition doesn't, and vice versa.
A highlight is the battlecard creation: structured one-pagers for sales and marketing with honest strengths and weaknesses of competitors, objection handling, and tactical "landmine" questions.
Trigger terms (fire the skill automatically): competitive analysis, battlecard, competitor comparison, content gaps, positioning.
You launch the skill with: /discover-brand
This skill automatically searches all connected enterprise platforms for brand materials. It works with Notion, Confluence, Google Drive, Box, SharePoint, Figma, Gong, Granola, and Slack.
The flow has four phases: first a broad sweep across all platforms, then categorising and ranking the sources by relevance, then detailed analysis of the best sources, and finally a structured Discovery Report with all found brand elements, conflicts between sources, and open questions.
Clever detail: the skill checks up front which platforms are connected and warns if important document platforms are missing. Open questions (e.g. conflicting style guides from different years) are presented with a recommendation instead of blocking the process.
After discovery you can move straight into guideline generation, turning the found materials into a consistent brand-voice document.
Trigger terms: brand discovery, find brand materials, style guide search, brand audit, brand voice discovery.
You launch the skill with: /roadmap-management
This skill supports creating, prioritising, and communicating product roadmaps. It offers four roadmap formats: Now/Next/Later (the simplest and often most effective), Quarterly Themes (for strategic alignment), OKR-Aligned (for OKR-driven orgs), and Timeline/Gantt (for detailed engineering planning).
Four prioritisation frameworks to pick from: RICE score (quantitative and defensible), MoSCoW (good for scope negotiations), ICE score (fast prioritisation), and the Value-vs-Effort matrix (visual and team-friendly).
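The RICE score itself is simple arithmetic: Reach times Impact times Confidence, divided by Effort. A quick sketch (the scale conventions in the comments follow common RICE usage, not anything specific to this skill):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach * Impact * Confidence) / Effort.

    Common conventions: reach in people per quarter, impact on a 0.25-3 scale,
    confidence as 0.0-1.0, effort in person-months.
    """
    return (reach * impact * confidence) / effort

# 2000 users/quarter, high impact (2), 80% confidence, 4 person-months:
print(rice_score(2000, 2, 0.8, 4))  # 800.0
```

The division by effort is what makes RICE defensible: two features with identical value get ranked apart by their cost.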
What I find especially valuable are the sections on Dependency Mapping (identifying and managing technical, team, external, and knowledge dependencies), Capacity Planning (with the 70/20/10 rule of thumb for features, tech debt, and unplanned work), and Change Communication (when and how to communicate roadmap changes to stakeholders without causing whiplash).
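The 70/20/10 rule is easy to turn into concrete numbers for a quarter; a small sketch (the 40 engineer-weeks figure is just an example):

```python
def capacity_split(total_weeks: float) -> dict[str, float]:
    """Split capacity per the 70/20/10 rule: features, tech debt, unplanned work."""
    shares = {"features": 0.70, "tech_debt": 0.20, "unplanned": 0.10}
    return {bucket: round(total_weeks * share, 1) for bucket, share in shares.items()}

print(capacity_split(40))  # {'features': 28.0, 'tech_debt': 8.0, 'unplanned': 4.0}
```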
Trigger terms: create roadmap, prioritise features, RICE score, Now/Next/Later, roadmap changes communication.
You launch the skill with: /contract-review
This skill analyses contracts against an internal negotiation playbook. It identifies deviations, classifies their severity, and generates concrete redline suggestions.
The review process is methodical: first it identifies the contract type and which side you are negotiating for. Then the entire contract is read before individual clauses are evaluated, since clauses influence each other.
The skill contains detailed analysis frameworks for the most important clause types: limitation of liability, indemnification, intellectual property, data protection, term and termination, and venue. Deviations are classified with a traffic-light system: GREEN (acceptable), YELLOW (negotiable), RED (escalation required).
Hands-on detail: redline suggestions come with concrete alternative wording, rationale, priority, and fallback position. The three-tier negotiation strategy (must-haves, should-haves, nice-to-haves) helps you negotiate tactically.
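The shape of such a redline suggestion can be sketched as a small data structure (field names, types, and the sample values are illustrative, not the skill's actual output format):

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    """Traffic-light classification of a clause deviation."""
    GREEN = "acceptable"
    YELLOW = "negotiable"
    RED = "escalation required"

@dataclass
class RedlineSuggestion:
    clause: str
    severity: Severity
    suggested_wording: str   # concrete alternative text
    rationale: str           # why the change is needed
    tier: str                # "must-have", "should-have", or "nice-to-have"
    fallback: str            # position to retreat to if rejected

suggestion = RedlineSuggestion(
    clause="Limitation of liability",
    severity=Severity.RED,
    suggested_wording="Liability is capped at 12 months of fees paid.",
    rationale="Uncapped liability exceeds playbook limits.",
    tier="must-have",
    fallback="Cap at 24 months of fees paid.",
)
```

Having the tier and fallback attached to every suggestion is what enables the three-tier negotiation strategy: you know in advance what you can trade away.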
Important note: the skill assists with contract analysis but does not replace qualified legal advice.
Trigger terms: contract review, redline, clause analysis, negotiation playbook.
You launch the skill with: /user-research
This skill supports planning, running, and analysing user research studies. It offers a compact overview of common methods with sample-size and timing recommendations: user interviews (5–8 participants, 2–4 weeks), usability testing (5–8, 1–2 weeks), surveys (100+, 1–2 weeks), card sorting (15–30, 1 week), diary studies (10–15, 2–8 weeks), and A/B testing.
The structured interview guide splits sessions into five phases: warm-up, context, deep dive, reaction (to concepts/prototypes), and wrap-up.
For analysis, four frameworks are offered: Affinity Mapping (group observations by theme), Impact/Effort Matrix (prioritise findings), Journey Mapping (visualise the user experience), and Jobs to be Done (understand what users "hire" the product for).
The standard deliverables are a research plan, interview guide, synthesis report, and a highlight reel with key quotes.
Trigger terms: user research plan, interview guide, usability test, survey design, research questions, user research.
