AI LLM Productivity Claude Development

Integrating Claude into Your Dev Workflow

Practical tips for using LLMs to write tests, docs, and boilerplate without losing quality.

Quantums Team

April 07, 2026

7 min read

Beyond the Hype

There have been two years of breathless coverage about AI replacing developers. That's not what we've experienced at Quantums. What we've found is something more useful and less dramatic: LLMs are genuinely excellent at specific, well-defined tasks, and genuinely poor at others. Once you know the difference, they become a serious productivity multiplier.

Where LLMs Actually Excel

Writing tests: Given a function signature and a short description of the behaviour, Claude can generate comprehensive unit test cases — happy path, edge cases, error conditions — in seconds. The tests still need review, but it usually gets 80% of the way there and the remaining 20% is fast to fix.
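
To make that concrete, here's the shape of output we typically get — a hypothetical parser and the xUnit suite covering happy path, edge case, and error condition. The function and test names are illustrative, not from our codebase:

```csharp
using System;
using System.Globalization;
using Xunit;

// Hypothetical example: a tiny parser we might ask Claude to test.
public static class PriceParser
{
    // Parses strings like "19.99 GBP" into an amount; throws on empty input.
    public static decimal Parse(string input)
    {
        if (string.IsNullOrWhiteSpace(input))
            throw new ArgumentException("Input is empty.", nameof(input));
        var parts = input.Split(' ');
        return decimal.Parse(parts[0], CultureInfo.InvariantCulture);
    }
}

public class PriceParserTests
{
    [Fact] // happy path
    public void Parse_ValidInput_ReturnsAmount() =>
        Assert.Equal(19.99m, PriceParser.Parse("19.99 GBP"));

    [Fact] // edge case: amount with no currency suffix
    public void Parse_AmountOnly_StillParses() =>
        Assert.Equal(5m, PriceParser.Parse("5"));

    [Fact] // error condition
    public void Parse_EmptyInput_Throws() =>
        Assert.Throws<ArgumentException>(() => PriceParser.Parse(""));
}
```

The review pass is mostly checking that the edge cases it chose are the ones that matter for your domain, and adding the one or two it missed.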

Boilerplate generation: DTOs, view models, interface implementations, migration SQL from entity definitions. The kind of code that's tedious but mechanical. An hour of work becomes a 30-second conversation.
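
As an illustration of the "tedious but mechanical" category, here's a hypothetical entity and the kind of DTO-plus-mapping code we hand off (the names are invented for this example):

```csharp
using System;

// Hypothetical entity class — this is what we'd paste into the prompt.
public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; } = "";
    public string Email { get; set; } = "";
    public DateTime CreatedUtc { get; set; }
}

// The generated output: a flat, serialization-friendly projection
// with a mapping method. Mechanical, but exactly right.
public record CustomerDto(Guid Id, string Name, string Email, DateTime CreatedUtc)
{
    public static CustomerDto FromEntity(Customer c) =>
        new(c.Id, c.Name, c.Email, c.CreatedUtc);
}
```

Multiply this by a dozen entities and the time saved is real.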

Documentation: XML doc comments, README sections, API docs. Give it the function, give it context, and it writes documentation that's often better than what developers would write in a hurry.
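
For example, given a helper like the (hypothetical) one below, the model produces XML doc comments that cover the parameter, return value, and thrown exceptions — the parts developers skip when rushed:

```csharp
using System;
using System.Linq;

public static class SlugHelper
{
    /// <summary>
    /// Converts an arbitrary title into a URL-safe slug.
    /// </summary>
    /// <param name="title">The human-readable title to convert.</param>
    /// <returns>A lower-case, hyphen-separated slug.</returns>
    /// <exception cref="ArgumentNullException">
    /// Thrown when <paramref name="title"/> is null.
    /// </exception>
    public static string ToSlug(string title)
    {
        if (title is null) throw new ArgumentNullException(nameof(title));
        // Replace anything that isn't a letter or digit with '-'.
        var chars = title.Trim().ToLowerInvariant()
            .Select(c => char.IsLetterOrDigit(c) ? c : '-');
        // Collapse the runs of '-' produced by punctuation and spaces.
        return string.Join("-",
            new string(chars.ToArray()).Split('-', StringSplitOptions.RemoveEmptyEntries));
    }
}
```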

Regex and string parsing: An area where most developers lose 20 minutes searching Stack Overflow. "Write me a regex that matches ISO 8601 timestamps with optional timezone offset" just works.
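
Here's roughly what comes back for that prompt — a pragmatic pattern, not a spec-complete ISO 8601 validator (it won't reject an impossible date like 2026-13-99, for instance):

```csharp
using System.Text.RegularExpressions;

public static class Iso8601
{
    // Matches e.g. 2026-04-07T09:30:00, 2026-04-07T09:30:00Z,
    // and 2026-04-07T09:30:00.123+01:00 (optional fraction and offset).
    private static readonly Regex Pattern = new(
        @"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:\d{2})?$",
        RegexOptions.Compiled);

    public static bool IsMatch(string s) => Pattern.IsMatch(s);
}
```

The useful part isn't just the pattern — it's that you can immediately ask "now make the seconds optional" and get the revised version in the same conversation.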

Explaining unfamiliar code: "Explain what this SQL window function is doing" or "What are the performance implications of this LINQ query?" — invaluable for understanding legacy code or unfamiliar libraries quickly.

The Prompting Patterns That Work

After a year of daily use, here are the patterns that consistently produce good output:

Be specific about the context. "Write a unit test" is weak. "Write xUnit tests for this C# method that parses a JWT, covering: valid token, expired token, invalid signature, missing claims" is strong.

Include the actual code, not a description of it. Paste the function. Paste the entity class. Don't make the model guess at implementation details.

Ask for the constraints explicitly. "Use Moq for mocking, no third-party assertion libraries, target .NET 8" — the model will use whatever is most common in its training data otherwise.

Iterate, don't restart. Start with a rough output and refine it. "Good, but add a test case for when the database throws a timeout exception" works much better than starting over with a longer prompt.

Where LLMs Fail (And Why)

Complex architectural decisions: "Should I use CQRS for this project?" will get you a wishy-washy "it depends" answer with no real guidance, because it genuinely does depend on your specific context that the model doesn't have.

Debugging complex runtime bugs: Works reasonably on simple logic bugs, but anything involving concurrency, distributed system state, or subtle framework behaviour needs a human who can run the code and observe it.

Keeping up with very recent changes: Training cutoffs mean the model may suggest deprecated APIs or miss breaking changes from the last 6 months. Always verify against current official docs.

Our Actual Workflow

For a typical feature at Quantums: we write the entity, the interface, and the controller action ourselves (the design decisions). We then use Claude to generate the repository implementation, the DTOs, the unit tests, and the XML documentation. Code review catches anything that's off. The feature ships roughly 40% faster, and the test coverage is actually better than it was before — because generating tests no longer feels like a chore.