AI Debugging Prompts

Structured prompts for debugging with LLMs. Reproduce, isolate, and fix bugs systematically using AI assistants.

Overview

Effective LLM debugging requires structured context. The more information you provide about the bug, the better the AI can help you isolate and fix it. A well-structured bug report prompt saves multiple rounds of back-and-forth.

Bug Report Prompt Template

Prompt
I have a bug in my [language/framework] application.

EXPECTED: [what should happen]
ACTUAL: [what actually happens]
STEPS TO REPRODUCE:
1. [step 1]
2. [step 2]

ERROR MESSAGE:
```
[paste error/stack trace]
```

RELEVANT CODE:
```
[paste the relevant code]
```

ALREADY TRIED:
- [what you tried and why it didn't work]

Help me identify the root cause and suggest a fix.

Isolation Techniques

  • Binary search — ask the LLM to help narrow down which code path causes the issue
  • Logging strategy — ask where to add console.log/debug statements
  • Minimal reproduction — ask the LLM to create a minimal test case
  • Hypothesis testing — propose potential causes and ask the LLM to evaluate each

Common Bug Patterns LLMs Catch

Async/Await Mistakes

Missing await, unhandled promise rejections, race conditions in concurrent code.
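A minimal sketch of the most common variant, a missing `await` inside a try/catch. The function names here (`fetchUser`, `getNameBuggy`, `getNameFixed`) are illustrative, not from any real codebase:

```typescript
// A stand-in async call that rejects for invalid input.
async function fetchUser(id: number): Promise<string> {
  if (id < 0) throw new Error("invalid id");
  return `user-${id}`;
}

// Buggy: without `await`, the promise is returned before it settles,
// so the catch block never runs and the rejection propagates to the caller.
async function getNameBuggy(id: number): Promise<string> {
  try {
    return fetchUser(id); // missing await
  } catch {
    return "fallback"; // unreachable for rejections
  }
}

// Fixed: `return await` keeps the rejection inside this try/catch.
async function getNameFixed(id: number): Promise<string> {
  try {
    return await fetchUser(id);
  } catch {
    return "fallback";
  }
}
```

Pasting both versions into a prompt with the question "why does my catch block never fire?" typically gets the `return await` fix immediately.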

React State Issues

Stale closures, incorrect dependency arrays, state updates on unmounted components.
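The stale-closure mechanic can be shown without React at all: each render reads state into a per-render constant, so a callback created during an earlier render keeps seeing that render's value. The sketch below simulates two "renders" with plain functions — `simulate`, `render`, and `state` are hypothetical stand-ins for component state, not React APIs:

```typescript
// Simulates the React stale-closure pattern: each render() call snapshots
// state into its own `count` constant, the way `const [count] = useState(...)`
// gives every render a fresh binding.
function simulate(): number[] {
  const seen: number[] = [];
  let state = 0;

  function render(): () => void {
    const count = state;            // per-render snapshot of state
    return () => seen.push(count);  // handler closes over THIS render's count
  }

  const staleHandler = render(); // handler created on the first render
  state = 5;                     // state update triggers a re-render...
  render();                      // ...but the old handler was never re-created
  staleHandler();                // pushes 0, not 5 — the closure is stale
  return seen;
}
```

In real components the fix is usually to list the value in the effect's dependency array (or use a functional state update) so the handler is re-created when the value changes.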

Type Errors

Null/undefined access, incorrect type assertions, missing type narrowing.
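The null/undefined case is the easiest to show concretely. A hedged sketch, with a hypothetical `User` type: accessing an optional property without narrowing throws at runtime (and fails under `strictNullChecks`), while an explicit check narrows the type so the access is safe.

```typescript
interface User {
  name?: string; // optional: may be undefined
}

// Buggy version (won't compile under strictNullChecks):
//   function shout(user: User) { return user.name.toUpperCase(); }
//   // 'user.name' is possibly 'undefined'

// Fixed with type narrowing: after the guard, user.name is narrowed
// from `string | undefined` to `string`.
function shout(user: User): string {
  if (user.name === undefined) return "ANONYMOUS";
  return user.name.toUpperCase();
}
```

When pasting a `TypeError: Cannot read properties of undefined` stack trace into a prompt, include the type definitions involved — the narrowing fix an LLM suggests depends on knowing which fields are optional.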

Frequently Asked Questions

How should I describe a bug to an LLM?

Include: what you expected, what actually happened, steps to reproduce, the relevant code, any error messages, and what you have already tried.

Can LLMs debug runtime errors?

Yes. Provide the error message, stack trace, and relevant code. LLMs are particularly good at identifying common patterns like null reference errors, async/await mistakes, and type mismatches.