Overview
Cursor Rules let you provide project-specific instructions to the LLM powering Cursor. By creating a .cursorrules file, you can define your tech stack, coding conventions, forbidden patterns, and preferred approaches. This dramatically improves the relevance and consistency of AI-generated code.
What to Include
- Tech stack and framework versions (React 19, Tailwind v4, etc.)
- Coding conventions (named exports, no default exports, etc.)
- File naming patterns and directory structure
- Testing framework and testing conventions
- Import style preferences (relative paths, aliases)
- Error handling patterns and logging conventions
- Forbidden patterns or anti-patterns to avoid
Example .cursorrules
```
You are a senior TypeScript developer working on a Next.js 15 app.

## Tech Stack
- Next.js 15 with App Router
- TypeScript strict mode
- Tailwind CSS v4
- Drizzle ORM with PostgreSQL
- Vitest for testing

## Conventions
- Always use named exports
- Use server components by default
- Client components must have the 'use client' directive
- All API routes return typed Response objects
- Use Zod for input validation at API boundaries
- Prefer early returns over nested conditionals

## Forbidden
- No default exports (except page.tsx and layout.tsx)
- No any type; use unknown and narrow
- No barrel files (index.ts re-exports)
- No class components
```

Advanced: Rule Files
For larger projects, create a .cursor/rules/ directory with multiple rule files. Each file can target specific contexts — one for frontend components, one for API routes, one for database queries. Cursor merges them automatically based on what files you are working with.
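As a sketch, a scoped rule file in that directory might look like this. The filename, description text, and glob patterns below are illustrative, not required names; Cursor's rule files use an .mdc format whose frontmatter scopes the rules to matching paths:

```
# .cursor/rules/frontend.mdc
---
description: Conventions for React components
globs: app/**/*.tsx, components/**/*.tsx
---

- Use server components by default
- Client components must have the 'use client' directive
- Style with Tailwind utility classes, not CSS modules
```

Because the globs only match component files, these rules stay out of the model's context when you are editing, say, a database query.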
Tips for Effective Rules
Be Specific
Generic rules like "write clean code" are useless. Specify exact patterns: "use Zod schemas for all API input validation."
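Inside a rules file, the contrast might look like this (wording invented for illustration):

```
# Too generic to act on
- Write clean, maintainable code

# Specific enough to follow
- Validate all API input with Zod schemas at the route boundary
- On validation failure, return a 400 with the flattened Zod error
```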
Include Examples
Show the LLM what good code looks like in your project. A single example is worth 100 words of description.
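For instance, to reinforce the "prefer early returns" convention from the example above, a rules file could embed a snippet like this. The function and its pricing logic are hypothetical, purely to show the shape you want the model to imitate:

```typescript
// Preferred shape: guard clauses instead of nested conditionals.
// The discount tiers here are invented for illustration.
export function memberDiscount(user: { isMember: boolean; years: number }): number {
  if (!user.isMember) return 0;    // guard: non-members get nothing
  if (user.years >= 5) return 0.2; // long-standing members
  return 0.1;                      // all other members
}
```

A concrete snippet like this anchors the convention far better than the sentence "prefer early returns" alone.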
List Forbidden Patterns
Explicitly listing what NOT to do is often more effective than listing what to do. LLMs respond well to constraints.