# 10 Coding Techniques Directly from Claude Code

February 13, 2026 — Alessandro Caprai

---

## 10 Ninja Techniques for Coding with Claude Code, According to Its Creators

When one of the creators of a tool reveals their trade secrets, it's worth listening. Boris Cherny, a key figure in the development of Claude Code, recently shared on X ten techniques that could radically change the way we program with AI. These aren't textbook platitudes, but real strategies born from the daily experience of someone who built this tool and pushes it to its limits.

What makes these insights particularly valuable is their pragmatic nature. We're not talking about abstract theories on artificial intelligence, but concrete solutions to real problems that every developer faces daily. And perhaps the real revolution lies precisely here: in learning to think about AI-assisted programming not as a souped-up autocomplete, but as a new form of collaboration that requires its own methods and strategies.

## Multitasking Reinvented: Working in Parallel

The first technique Cherny proposes completely overturns the traditional idea of a sequential workflow. Opening three to five git worktrees simultaneously, each with its own active Claude session, represents what he himself calls "the biggest productivity leap possible." This isn't empty rhetoric: it means managing different development branches, different features, and different experiments at the same time, without losing context on any of them.

The native integration of worktrees in the Claude Code desktop app isn't accidental. It was born from observing how the team actually worked. Some members have even developed shell aliases (za, zb, zc) to jump instantly from one environment to another. It's fascinating to see how tools designed for AI end up shaping even our most ingrained habits, like workspace organization.

This parallel working mode wouldn't be possible with a human assistant, not at the same scale. Here AI demonstrates one of its most underrated advantages: the ability to maintain multiple completely separate contexts without getting confused, without getting tired, without losing coherence. It's like having multiple versions of yourself working on different problems simultaneously.
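A minimal sketch of the setup described above (the repository path and branch names are invented for illustration; the `za`/`zb` aliases mirror the ones the team uses):

```shell
# Create a throwaway repo so the sketch is self-contained:
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "init"

# One worktree per task, each on its own branch, each ready to host
# its own Claude session:
git -C "$repo" worktree add "$repo-feature-a" -b feature-a
git -C "$repo" worktree add "$repo-bugfix-b"  -b bugfix-b

# Shell aliases to hop instantly between environments, za/zb style:
alias za="cd $repo-feature-a"
alias zb="cd $repo-bugfix-b"
```

Each worktree is a full checkout sharing the same object store, so switching tasks is a `cd`, not a `git stash` plus `git checkout`.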

## Planning Before Action

The second technique introduces a concept that should be familiar to anyone who has ever managed complex projects: separating strategy from execution. Claude Code's "Plan" mode isn't a whim, it's a necessity for tasks that require architecture and reflection.

Focusing energy on strategy first allows Claude to execute the implementation in one shot, what Cherny calls "1-shot." But there's an even more important piece of advice: if things start to go wrong, return to plan mode immediately. Don't persist stubbornly. It's advice worth its weight in gold, and not just in the context of AI.

This separation reflects a profound truth about the nature of problem-solving: we often fail not because execution is poor, but because the strategy was wrong from the start. Asking Claude to use plan mode even for verification phases, not just for writing code, means treating testing as what it should be: an activity as strategic as development itself.

## Continuous Evolution Through CLAUDE.md

The third suggestion is perhaps the most subtle and powerful: invest in the CLAUDE.md file. The idea is simple but revolutionary: after each correction, ask Claude to update this file so as not to repeat the same mistake. In other words, teach the AI to learn from its specific experiences in your project.

Cherny emphasizes that "Claude is incredibly good at writing rules for itself." This capacity for self-reflection and self-improvement is what distinguishes a truly useful AI assistant from a simple code generator. The CLAUDE.md file becomes a sort of institutional memory of the project, but written by the AI itself.

One engineer on the team takes it a step further, maintaining a directory of "notes" for each project, updated after each pull request, and pointing the CLAUDE.md file to it. It's a systematic approach to knowledge management that transforms every commit into a learning opportunity. The error rate drops noticeably over time, not because the AI magically becomes smarter, but because it becomes more aware of the specific context in which it operates.
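As a concrete sketch, a project's CLAUDE.md might accumulate rules like these over time (every entry below is invented for illustration; the point is that Claude writes and refines them itself after each correction):

```markdown
# CLAUDE.md

## Rules learned from past corrections
- Run the linter before proposing a PR; CI rejects unformatted code.
- The `legacy/` directory is frozen: never refactor files there.
- Tests live next to their sources, not under a separate tree.

## Project notes
- Per-PR notes live in a `notes/` directory; read the relevant
  file before editing a module.
```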

## Custom Skills: True Extensibility

The fourth technique introduces the concept of reusable custom skills. If you do something more than once a day, turn it into a skill or a command. It's the DRY (Don't Repeat Yourself) principle applied to interaction with AI.

The examples provided are illuminating: `/techdebt` commands to eliminate duplicate code, or commands that sync Slack, Google Drive, and GitHub into a single context block. This last example is particularly significant because it demonstrates how skills can act as bridges between different tools, creating workflows that would otherwise require constant manual coordination.

Committing these skills to git so they can be reused in every project transforms the initial investment in creating them into a multiplied advantage. It's the equivalent of building a personal function library, but at a higher level of abstraction: you're not automating individual operations, you're automating entire work patterns.
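In Claude Code, a custom slash command is simply a markdown file whose content becomes the prompt, with `$ARGUMENTS` standing in for what you type after the command. A hypothetical `/techdebt` command could live in `.claude/commands/techdebt.md` (the wording below is an invented example, not the team's actual command):

```markdown
Scan the files matching $ARGUMENTS for technical debt:

1. Find duplicated or near-duplicated code and propose a shared helper.
2. Flag dead code and unused exports.
3. Present a plan before changing anything, and wait for my approval.
```

Checked into the repository, the same command becomes available to everyone who clones it.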

## Bug Fixing: Delegating Without Micromanagement

The fifth technique addresses one of the most frustrating aspects of software development: debugging. Cherny's suggested approach is radical in its simplicity: enable the Slack MCP, paste the bug thread, and simply say "fix." Zero distractions.

Even more telling is the instruction: "Go fix the failing CI tests." Don't micromanage the "how." This philosophy represents a paradigm shift in our relationship with AI tools. We're no longer giving detailed step-by-step instructions; we're defining objectives and letting the AI find the best path.

Pointing Claude at Docker logs for troubleshooting distributed systems is another example of how AI can handle complexity that traditionally requires hours of manual analysis. Distributed systems are notoriously difficult to debug because errors can propagate through multiple components. Having an assistant that can analyze logs from multiple sources simultaneously completely changes the game.
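A minimal sketch of that log-driven pattern. Since neither a running container nor the `claude` CLI is assumed here, a fake log file stands in and the actual invocation is left as a comment (the container name `api` is invented):

```shell
# Fake log standing in for real `docker logs` output:
printf 'INFO request ok\nERROR timeout connecting to db\n' > /tmp/api.log

# Keep only the error context, so the prompt stays focused:
ERRORS=$(grep 'ERROR' /tmp/api.log)
echo "$ERRORS"

# In practice (not run here; requires the claude CLI and a container):
#   docker logs api 2>&1 | claude -p "Find the root cause of these errors and fix it"
```

The point is the shape of the delegation: hand over raw evidence plus an objective, not a step-by-step procedure.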

## The Art of Advanced Prompting

The sixth technique elevates prompt engineering to an art form. "Grill me on these changes and don't make the PR until I pass your test." Using Claude as a reviewer, not just as an executor, means leveraging its capacity for critical evaluation.

The "elegant approach" is particularly interesting: after a mediocre solution, ask "Knowing what you know now, throw it all away and implement the most elegant solution." This technique recognizes that often the first solution is just an exploration of the problem. Once the problem space is understood, you can aim for a truly optimal solution.

Specificity remains fundamental: the more detailed the specifications, the better the output. It's not a limitation of AI, it's the very nature of communication. Even between humans, vague instructions produce unpredictable results. The difference is that with AI, we have the possibility to iterate rapidly until reaching the necessary precision.

## Optimizing the Work Environment

The seventh technique focuses on terminal setup, an often overlooked but crucial aspect. The team loves Ghostty for synchronized rendering and unicode support, technical details that make a difference in the daily experience.

Using `/statusline` to always show context usage and the current git branch transforms potentially hidden information into constant feedback. It's the kind of transparency that prevents errors and confusion.

The suggestion about voice dictation is illuminating: we speak three times faster than we type, and dictated prompts tend to be much more detailed. It's a reminder that the interface through which we interact with AI doesn't necessarily have to be textual. The fn key pressed twice on macOS to activate dictation might seem like a minor detail, but it radically changes the workflow.
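Under the hood, a status line is just a script that prints one line, referenced from Claude Code's settings. This hypothetical helper shows only the current git branch (a simplified sketch: it ignores the session information Claude Code provides to the real script):

```shell
# Hypothetical status line helper: print the current git branch.
statusline() {
  local branch
  branch=$(git branch --show-current 2>/dev/null)  # empty outside a git repo
  printf '[git:%s]' "${branch:-none}"
}

LINE=$(statusline)
echo "$LINE"
```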

## Sub-Agents: Divide and Conquer

The eighth technique introduces the concept of sub-agents, an architecture that reflects established patterns in software engineering. Adding "use subagents" to any request dedicates more computational power to the problem, but above all keeps the main agent's context window clean and focused.

This separation of responsibilities isn't just organizational, it's strategic. Moving individual tasks to sub-agents prevents context pollution, one of the most insidious problems in working with AI on complex projects. The more information we add to the context, the harder it becomes for the AI to maintain focus on what is actually relevant.

Routing permission requests to Opus 4.5 through a hook to scan for potential attacks and automatically approve safe ones is an example of how sub-agents can handle security aspects transparently. It's automation applied to supervision, a meta-level we rarely consider.
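The hook mechanism behind that idea looks roughly like the following settings fragment: a `PreToolUse` hook that runs a script before a shell command is approved. The script path is invented, and the exact field names are an assumption based on Claude Code's hooks configuration, so treat this as a shape, not a recipe:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "~/.claude/hooks/scan-command.sh" }
        ]
      }
    ]
  }
}
```

The referenced script can then inspect the pending command, call a stronger model to judge it, and approve or block it without interrupting the main session.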

## Data and Analytics: A New Relationship with Databases

The ninth technique concerns data and analytics, and contains a surprising statement: "Personally I haven't written a line of SQL in over 6 months." It's not laziness, it's efficiency. Asking Claude Code to use the BigQuery CLI to analyze metrics on the fly means eliminating context-switching between thinking about the problem and thinking about SQL syntax.

This technique works with any database that has a CLI, an MCP, or an API. The universality of the approach is important: we're not talking about a specific solution for a specific tool, but a pattern applicable to the entire ecosystem of data tools.
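To make the pattern concrete without BigQuery credentials, here `sqlite3` stands in for the `bq` CLI (an explicit substitution, and it assumes `sqlite3` is installed). In practice you wouldn't write this SQL at all: you'd tell Claude Code something like "use the bq CLI to count this week's signups" and let it drive the tool:

```shell
# sqlite3 standing in for any database CLI Claude could drive for you:
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE signups(day TEXT);
               INSERT INTO signups VALUES ('mon'),('mon'),('tue');"
COUNT=$(sqlite3 "$db" "SELECT COUNT(*) FROM signups WHERE day='mon';")
echo "$COUNT"
rm -f "$db"
```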

There's something deeply liberating about being able to query data using natural language without sacrificing precision or power. We're not lowering the technical bar, we're raising the level of abstraction so we can focus on the questions we want to ask the data rather than on how to formulate them syntactically.

## Learning with AI: Closing the Circle

The last technique closes the circle by bringing us back to learning. Enabling the "Explanatory" or "Learning" style in `/config` transforms Claude from executor to teacher, helping us understand the why behind the changes.

Asking Claude to generate visual presentations in HTML or ASCII diagrams to explain unknown codebases is a brilliant use of AI as an active documentation tool. We're not reading static documentation written months ago, we're generating explanations tailored to what we need to understand right now.

Creating a spaced repetition learning skill, where we explain what we understood and Claude asks us questions to fill in the gaps, is perhaps the most sophisticated application of all. We're using AI not to replace our learning, but to strengthen it through active testing of our understanding.
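Such a skill could be as simple as another custom slash command, for example a hypothetical `.claude/commands/quiz.md` (invented wording, sketching the spaced-repetition idea):

```markdown
I will explain what I understood about $ARGUMENTS in my own words.
Ask me follow-up questions that probe the gaps in my explanation,
one at a time, increasing in difficulty. Correct me when I'm wrong,
and summarize what I should review again tomorrow.
```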

## Beyond Techniques: A New Mindset

These ten techniques, as a whole, sketch something greater than the sum of their parts. They represent a fundamentally different approach to working with AI, one that recognizes the importance of method as much as that of the tool.

What emerges clearly is that effectiveness in using AI for programming doesn't depend on the ability to write magic prompts, but on building systems and habits that allow AI to operate at its best. It's an investment in cognitive infrastructure, if we want to call it that.

Cherny's real lesson isn't in the individual techniques, however valuable. It's in showing that working effectively with AI requires intentionality, experimentation, and the willingness to rethink workflows we took for granted. We're not simply adding a tool to our arsenal, we're redefining what it means to program.

And perhaps this is the most important point: AI doesn't make us worse programmers if we rely on it too much, nor does it automatically make us better just because we use it. It makes us different, and it's up to us to decide whether that "different" represents an evolution or just a superficial change. Cherny's techniques suggest that evolution is possible, but it requires the same rigor and discipline we've always applied to our craft.

Ultimately, these suggestions remind us that artificial intelligence is, paradoxically, all the more useful the more intelligently we use it. And intelligence, in this case, doesn't lie in the perfect prompt, but in the patient construction of an ecosystem of practices, tools, and habits that amplify what AI does best, mitigating what it does worst. It's human work, deeply human work, that makes artificial intelligence truly intelligent.