AI Coding Assistant Rules: My Battle-Tested Practices
The essential rules and workflows I've developed for effective AI-assisted development, from file organization to automated quality checks.
Working with AI coding assistants like Windsurf has fundamentally changed how I approach development. What started as adapting to AI limitations has evolved into a comprehensive set of practices that improve both human and AI collaboration. These are my battle-tested rules for AI-assisted coding.
The Foundation: Windsurf Rules System
I've codified my AI coding practices into three core rule files that Windsurf automatically applies to every interaction. These rules are configured with `trigger: always_on`, meaning they're active for every AI conversation. Here are the complete rules with explanations:
Code Quality & TypeScript Standards
This is my most comprehensive rule file, focusing on maintainable, type-safe code:
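Windsurf rules are plain markdown files, typically stored under `.windsurf/rules/`. The sketch below is illustrative (the exact wording and bullet set are assumptions, reconstructed from the standards described throughout this article, not the verbatim file):

```markdown
---
trigger: always_on
---

# Code Quality & TypeScript Standards

- Keep every file under 200 lines; split larger files by responsibility.
- Use strict TypeScript: no `any`, explicit interfaces for all component props.
- Follow the Smart/Dumb component pattern: containers own state and data
  fetching; presentational components are pure functions of props.
- Document every exported function with a JSDoc comment.
- Never suppress ESLint rules with inline disable comments.
```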
Why this works: AI assistants excel when given explicit constraints. The 200-line rule forces better architecture, while the TypeScript strictness prevents runtime errors. The Smart/Dumb component pattern makes it crystal clear to AI which components should handle logic versus presentation.
File Organization & Naming Conventions
This rule ensures consistent, predictable project structure:
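As an illustrative sketch (hypothetical wording, consistent with the kebab-case and domain-driven conventions this article describes), the rule file might read:

```markdown
---
trigger: always_on
---

# File Organization & Naming Conventions

- Use kebab-case for all file and directory names
  (`user-profile.tsx`, not `UserProfile.tsx`).
- Group files by domain/feature, not by technical type:
  `features/auth/`, `features/billing/`.
- Co-locate tests, types, and styles with the feature they belong to.
- Keep shared, cross-domain utilities in a single `lib/` directory.
```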
Why this works: Domain-driven organization helps AI assistants understand the business context. When files are grouped by feature rather than technical function, AI can make better suggestions about where new code should live.
Next.js & React Development Patterns
This rule keeps AI assistants aligned with modern Next.js patterns:
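A representative sketch of such a rule (the specific bullets are assumptions based on standard App Router conventions, not the exact file):

```markdown
---
trigger: always_on
---

# Next.js & React Development Patterns

- Use the App Router exclusively; never generate Pages Router code
  (`pages/`, `getServerSideProps`).
- Default to Server Components; add `"use client"` only when a component
  needs state, effects, or browser APIs.
- Fetch data in Server Components or Route Handlers, not in
  client-side effects.
- Use `next/image` and `next/link` instead of raw `<img>` and `<a>` tags.
```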
Why this works: AI assistants can get confused between different Next.js patterns (Pages Router vs App Router). This rule eliminates ambiguity and ensures consistent modern patterns.
The Automation Layer: Quality Workflows
The real power comes from automating quality checks. My `bun run quality` command runs:
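The exact pipeline steps aren't listed here, so this `package.json` fragment is a sketch of what such a composite script commonly chains together (script names are assumptions):

```json
{
  "scripts": {
    "quality": "bun run lint && bun run typecheck && bun run format:check && bun run test",
    "lint": "eslint .",
    "typecheck": "tsc --noEmit",
    "format:check": "prettier --check .",
    "test": "bun test"
  }
}
```

Chaining with `&&` means the command fails fast: the first failing step stops the pipeline and surfaces its errors.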
This creates a feedback loop where AI assistants can immediately see if their suggestions break quality standards.
The Code Cleanup Workflow
I've created a Windsurf workflow that automatically applies all quality standards. Here's the complete workflow file:
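Windsurf workflows live as markdown files under `.windsurf/workflows/`. This sketch is reconstructed from the behavior described below and is illustrative rather than the verbatim file:

```markdown
---
description: Apply all code quality standards before commit
auto_execution_mode: 3
---

# /code-cleanup

1. Run `bun run quality` and collect all reported issues.
2. Fix every ESLint violation; never add inline disable comments.
3. Resolve all TypeScript errors; never introduce `any` or `@ts-ignore`.
4. Split any file over 200 lines along responsibility boundaries.
5. Add JSDoc comments to all exported functions that lack them.
6. Re-run `bun run quality` and repeat until it passes cleanly.
```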
How this workflow works:
- `auto_execution_mode: 3`: Windsurf can run this automatically when I type `/code-cleanup`
- Quality enforcement: Ensures all code passes the complete quality pipeline
- No shortcuts: Explicitly forbids common AI shortcuts like `any` types or ESLint exceptions
- Documentation requirement: Forces proper JSDoc comments on all functions
Real-world usage: When I'm ready to commit code, I simply type `/code-cleanup` and Windsurf automatically:
- Runs `bun run quality` to identify issues
- Fixes ESLint violations
- Resolves TypeScript errors
- Breaks down oversized files
- Adds missing JSDoc comments
- Reorganizes imports and file structure
This workflow has saved me countless hours of manual cleanup and ensures consistent code quality across all commits.
AI Collaboration Patterns
The "Context Window" Strategy
AI assistants work best when they can see the entire context. My 200-line rule ensures:
- Complete understanding: AI sees the full file in one pass
- Accurate refactoring: No partial changes that break consistency
- Better suggestions: AI understands the complete component lifecycle
Smart Component Architecture
I guide AI assistants toward clean architecture patterns:
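A minimal TypeScript sketch of the Smart/Dumb split, assuming a hypothetical user-profile feature (names and the stubbed data source are illustrative; a real Next.js container would be a Server Component or use a data-fetching hook):

```typescript
// "Dumb" presentational logic: a pure function of props, no side effects.
interface UserProfileProps {
  name: string;
  email: string;
}

/** Renders a user profile as markup. Pure: same props, same output. */
function renderUserProfile({ name, email }: UserProfileProps): string {
  return `<section><h2>${name}</h2><p>${email}</p></section>`;
}

/** Stubbed data source for illustration; a real container would call an API. */
async function fetchUser(id: string): Promise<UserProfileProps> {
  return { name: `User ${id}`, email: `user-${id}@example.com` };
}

/** "Smart" container: owns data fetching, delegates rendering to the dumb layer. */
async function userProfileContainer(userId: string): Promise<string> {
  const user = await fetchUser(userId);
  return renderUserProfile(user);
}
```

Because the presentational function is pure, it can be unit-tested without mocking any data layer, while the container stays thin enough to swap data sources freely.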
This separation makes it crystal clear to AI assistants which components handle logic versus presentation.
Rules in Action: A Practical Example
Here's how these rules work together in a real scenario. When I ask Windsurf to "create a user authentication system," it automatically follows all the rules:
File Structure Created:
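A hypothetical layout for that request, following the kebab-case and domain-driven rules above (file names are illustrative, not generated output):

```
features/auth/
├── components/
│   ├── login-form.tsx        # dumb: renders the form from props
│   └── auth-provider.tsx     # smart: owns session state
├── hooks/
│   └── use-auth.ts
├── lib/
│   └── auth-service.ts       # token handling, API calls
└── types/
    └── auth.types.ts
```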
Code Quality Enforced:
- Each file stays under 200 lines
- Strict TypeScript with proper interfaces
- JSDoc comments on all public functions
- Kebab-case naming throughout
- Domain-driven organization
Workflow Integration:
- `bun run quality` passes on first run
- No manual cleanup needed
- Ready for commit immediately
This systematic approach means I can focus on business logic while the rules handle code quality and organization automatically.
Real-World Benefits and Challenges
The Good
- Consistent quality: AI suggestions always follow established patterns
- Faster onboarding: New AI assistants immediately understand the codebase
- Reduced cognitive load: Rules handle the "how," I focus on the "what"
- Automated enforcement: Quality checks catch issues before they reach production
The Challenges
- Initial setup overhead: Creating comprehensive rules takes time
- Rule maintenance: Standards evolve, rules need updates
- Over-engineering risk: Sometimes simple solutions get over-complicated
- Context switching: Different projects need different rule sets
The Bigger Picture: Human-AI Collaboration
What I've learned is that the best AI coding relationships are built on clear constraints and expectations. When AI assistants know exactly what "good" looks like, they become incredibly effective collaborators.
The rules aren't about limiting creativity—they're about channeling it productively. Just like how poets create beautiful work within the constraints of sonnets, AI assistants create better code within well-defined boundaries.
Future Evolution
I'm constantly refining these practices based on:
- New AI capabilities: As models improve, rules can become more sophisticated
- Project learnings: Real-world usage reveals gaps in the rule set
- Team feedback: Collaborative development surfaces new patterns
The goal isn't perfect rules—it's rules that evolve with both technology and team needs.
These AI coding assistant rules have transformed my development workflow from chaotic to systematic. The key is treating AI as a collaborative partner that thrives on clear expectations and consistent feedback.