Overview
LLM coding assistants are powerful but require discipline. Following best practices ensures you get maximum value while avoiding common pitfalls like subtle bugs, security issues, and over-reliance on generated code.
The Golden Rules
- REVIEW: Never merge LLM-generated code without reading every line
- TEST: Run existing tests after AI changes; write new tests for new features
- UNDERSTAND: If you cannot explain the code, do not merge it
- INCREMENTAL: Apply changes incrementally, committing after each working step
- VERIFY: Use type checking as a first-pass verification of AI changes
- SECURITY: Check for injection, auth bypass, and data exposure in generated code
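The TEST rule in practice: when an assistant generates a new helper, pair it with small tests that probe edge cases before committing. A minimal sketch (the `slugify` function and its behavior are illustrative, not from any particular assistant):

```python
import re

# Hypothetical AI-generated helper: convert a title to a URL slug.
def slugify(title: str) -> str:
    # Lowercase, collapse runs of non-alphanumerics into one hyphen,
    # and strip leading/trailing hyphens.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Tests written alongside the generated code (TEST rule), deliberately
# covering degenerate input, not just the happy path:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --  ") == ""  # degenerate input collapses to empty
```

Committing the tests in the same step as the generated code (INCREMENTAL rule) keeps each commit independently verifiable.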
Common Pitfalls
Blind Trust
LLMs generate plausible-looking code that may have subtle bugs. Always verify logic, especially around edge cases and error handling.
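A sketch of the kind of subtle bug that survives a casual read: both chunking functions below look reasonable, but the first silently drops a trailing partial chunk. The functions are hypothetical, written here only to illustrate why edge-case tests matter:

```python
# Plausible-looking version an assistant might emit: the range bound
# stops before the last partial chunk, silently dropping it.
def chunk_buggy(items, size):
    return [items[i:i + size] for i in range(0, len(items) - size + 1, size)]

# Correct version: iterate to the end; slicing handles the short tail.
def chunk_fixed(items, size):
    return [items[i:i + size] for i in range(0, len(items), size)]

# An edge-case test (length not a multiple of size) exposes the difference:
assert chunk_fixed([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
assert chunk_buggy([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4]]  # lost the 5
```

Both versions pass a test on an even-length list, which is exactly why "looks right, passes one test" is not enough.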
Security Issues
LLMs may not apply security best practices consistently. Check for SQL injection, XSS, auth bypass, and exposed secrets.
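For SQL injection specifically, the pattern to reject is string interpolation into a query; the fix is a parameterized query. A self-contained sketch using `sqlite3` (the table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Vulnerable pattern assistants sometimes emit: f-string interpolation.
vulnerable = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe pattern: a '?' placeholder lets the driver bind the value.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

assert vulnerable == [("admin",)]  # injection matched every row
assert safe == []                  # literal string matched nothing
```

When reviewing generated data-access code, grep for f-strings and `+` concatenation near `execute` calls; the same review applies to shell commands and HTML templating for XSS.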
Over-Engineering
LLMs tend to add unnecessary abstractions and features. Remove what you do not need. Simpler is better.
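A sketch of the pattern: an assistant wraps a single lookup in a strategy-class hierarchy when a dict is enough. Both versions below are hypothetical and behave identically:

```python
# Over-engineered shape an assistant might produce for tiered discounts:
class DiscountStrategy:
    def apply(self, price: float) -> float:
        raise NotImplementedError

class GoldDiscount(DiscountStrategy):
    def apply(self, price: float) -> float:
        return price * 0.8

class SilverDiscount(DiscountStrategy):
    def apply(self, price: float) -> float:
        return price * 0.9

STRATEGIES = {"gold": GoldDiscount(), "silver": SilverDiscount()}

def price_with_strategy(tier: str, price: float) -> float:
    return STRATEGIES[tier].apply(price)

# The simpler equivalent, adequate until real behavioral variation appears:
RATES = {"gold": 0.8, "silver": 0.9}

def price_simple(tier: str, price: float) -> float:
    return price * RATES[tier]

assert price_with_strategy("gold", 100) == price_simple("gold", 100) == 80.0
```

The class hierarchy earns its keep only when tiers need genuinely different logic; until then, the dict is easier to read, test, and delete.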
When to Code Manually
Security-critical code, performance-critical hot paths, and novel algorithms often benefit from manual implementation. Use LLMs for boilerplate, tests, and well-understood patterns where the risk of subtle bugs is lower.