You are a senior software engineering auditor conducting Phase 1 of a comprehensive technical due diligence analysis. You have access to the full codebase in this VS Code workspace. Use only evidence directly observable in the repository files. Do not assume or fabricate information. If evidence is not present, state "not observed."

Organize your findings in markdown with numbered sections, and provide specific file paths and line numbers for all significant findings and recommendations.

PHASE 1 OBJECTIVES:

1. Complete repository inventory with precise metrics
2. Identify the technology stack and architectural patterns
3. Discover the highest-risk files and components for detailed Phase 2 analysis
4. Apply domain-specific risk-pattern analysis based on the discovered technologies
5. Perform targeted complexity analysis on the highest-risk files
PHASE 1 ANALYSIS FRAMEWORK:

1. Repository Overview & Architecture

- Examine project structure and languages; estimate total lines of code if feasible, or approximate by scanning representative files
- Identify service boundaries and component separation from the directory structure
- Review configuration files (Docker, CI/CD, infrastructure-as-code) for deployment patterns
- Analyze API definitions, data models, and architectural patterns
- Document the technology stack from package files and imports
- Skip any item for which evidence is absent

2. Code Quality & Complexity Analysis

INITIAL COMPLEXITY SURVEY:

- Identify files >500 lines and functions >100 lines, with file paths
- Look for deeply nested code structures (>3 levels of nesting)
- Find functions with a high density of decision points
- Flag files with extensive variable usage and complex expressions
- Create a prioritized list of the top 10 highest-risk files for detailed analysis
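For Python codebases, the nesting check in this survey can be operationalized with the standard `ast` module. This is a minimal illustrative sketch, not part of the prompt itself, and it assumes the sources parse as Python:

```python
import ast

def max_nesting_depth(source: str) -> int:
    """Return the deepest level of control-flow nesting in a Python module."""
    NESTING = (ast.If, ast.For, ast.While, ast.Try, ast.With)

    def depth(node, current=0):
        deepest = current
        for child in ast.iter_child_nodes(node):
            bump = 1 if isinstance(child, NESTING) else 0
            deepest = max(deepest, depth(child, current + bump))
        return deepest

    return depth(ast.parse(source))

sample = (
    "def f(xs):\n"
    "    for x in xs:\n"
    "        if x > 0:\n"
    "            while x:\n"
    "                if x % 2:\n"
    "                    x -= 1\n"
)
print(max_nesting_depth(sample))  # 4 levels, so it exceeds the >3 threshold
```

The same walk could be extended to report the file path and line number of the deepest node, as the survey requires.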

DETAILED COMPLEXITY ANALYSIS (for the top 5-10 highest-risk files identified above):

CYCLOMATIC COMPLEXITY CALCULATION:

- Start with a base complexity of 1 for each function/method
- Add 1 for each: if, elif, else, while, for, try/except, case/when statement
- Add 1 for each: &&, ||, and, or logical operator
- Add 1 for each ternary operator (? :)
- Add 1 for each catch block and default case
- Flag functions with complexity >10; prioritize >15 for immediate refactoring
- Report: "Function [name] at [file:line] has complexity score of [N]"
- If possible, list the top 5 most complex functions with their scores
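For Python sources, the counting rubric above can be approximated mechanically. The sketch below covers the main decision points (`if`, loops, exception handlers, boolean operators, conditional expressions); it intentionally simplifies the rubric, e.g. it does not add separate points for bare `else` branches:

```python
import ast

def cyclomatic_complexity(func: ast.FunctionDef) -> int:
    """Base complexity of 1, plus 1 per decision point, per the rubric above."""
    score = 1
    for node in ast.walk(func):
        if isinstance(node, (ast.If, ast.While, ast.For,
                             ast.ExceptHandler, ast.IfExp)):
            score += 1
        elif isinstance(node, ast.BoolOp):   # 'and' / 'or' chains
            score += len(node.values) - 1    # each operator adds one path
    return score

source = """
def grade(x):
    if x is None or x < 0:
        raise ValueError(x)
    for step in range(3):
        if x > step and x % 2 == 0:
            x -= 1
    return "high" if x > 10 else "low"
"""
func = ast.parse(source).body[0]
print(f"Function grade has complexity score of {cyclomatic_complexity(func)}")
```

Here `grade` scores 7: two `if` statements, one `for`, one conditional expression, one `or`, one `and`, plus the base of 1.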

HALSTEAD METRICS CALCULATION (for the top 5 most complex files):

- Count unique operators (n1): +, -, *, /, =, ==, !=, <, >, if, while, for, function definitions, etc.
- Count unique operands (n2): variable names, constants, function names, literals
- Count total operator occurrences (N1)
- Count total operand occurrences (N2)
- Calculate:
  * Program Length: N = N1 + N2
  * Program Vocabulary: n = n1 + n2
  * Program Volume: V = N * log2(n)
  * Program Difficulty: D = (n1/2) * (N2/n2)
  * Program Effort: E = D * V
- If exact calculation is infeasible, approximate relative risk levels (Low/Medium/High) with supporting evidence
- Flag files with Difficulty >30; prioritize >50 for maintenance risk
- Report: "File [name] has Halstead Difficulty of [N], Volume [N], Effort [N]"
- If possible, list the top 5 most complex files with their scores
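Once the four raw counts are in hand, the derived metrics are pure arithmetic. The counts in the example are invented for illustration:

```python
import math

def halstead(n1: int, n2: int, N1: int, N2: int) -> dict:
    """Derive the Halstead metrics listed above from raw operator/operand counts."""
    N = N1 + N2                 # Program Length
    n = n1 + n2                 # Program Vocabulary
    V = N * math.log2(n)        # Program Volume
    D = (n1 / 2) * (N2 / n2)    # Program Difficulty
    E = D * V                   # Program Effort
    return {"length": N, "vocabulary": n,
            "volume": V, "difficulty": D, "effort": E}

m = halstead(n1=20, n2=35, N1=110, N2=95)
print(f"Difficulty {m['difficulty']:.1f}, "
      f"Volume {m['volume']:.0f}, Effort {m['effort']:.0f}")
```

With these counts the Difficulty comes out near 27, just under the >30 flag threshold.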

COMPLEXITY ANALYSIS APPROACH:

- If detailed calculation becomes time-intensive, focus on the 3-5 most critical files
- Provide approximate complexity levels (Simple/Moderate/Complex/High Risk) for the remaining files
- Always include specific file paths and line numbers for high-complexity findings
- Note analysis coverage: "Detailed complexity analysis completed for [N] of [total] high-risk files"

TECHNICAL DEBT INDICATORS:

- Complete inventory of TODO/FIXME/HACK/WARNING comments, with full context and file paths
- Code duplication patterns, with specific examples and locations
- Error-handling issues (missing try/catch, empty catch blocks, silent failures)
- Commented-out critical functionality, with context
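The comment inventory reduces to a line scan. A minimal sketch that handles `#`, `//`, and `/*` comment styles (the path and code are hypothetical):

```python
import re

DEBT = re.compile(r"(?:#|//|/\*)\s*(TODO|FIXME|HACK|WARNING)\b(.*)", re.IGNORECASE)

def debt_comments(source: str, path: str):
    """Yield (path, line_number, tag, context) for each debt marker found."""
    for lineno, line in enumerate(source.splitlines(), start=1):
        m = DEBT.search(line)
        if m:
            yield (path, lineno, m.group(1).upper(), m.group(2).strip())

code = "x = 1  # TODO: remove once migration lands\n// FIXME handle nulls\n"
for hit in debt_comments(code, "src/app.py"):
    print(hit)
```

Each hit carries the file path and line number, which is exactly the context the inventory above asks for.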

2.5 Technology-Specific Risk Patterns

Based on the identified technology stack, analyze domain-specific anti-patterns:

STREAMING/EVENT PROCESSING (Kafka, Flink, Beam, Kinesis, RabbitMQ, Redis Streams):

- Consumer group configuration and offset management strategies
- Windowing strategies and late-data handling mechanisms
- Backpressure and memory management patterns in stream processing
- Error handling and dead-letter queue implementations
- Exactly-once vs. at-least-once processing guarantee configurations
- State management and checkpointing configurations
- Hardcoded timeouts and session management parameters
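Several of these checks can be run directly against consumer configuration. The property names below are real Kafka consumer settings, but the thresholds and the sample config are illustrative assumptions, not a definitive rule set:

```python
def audit_consumer_config(cfg: dict) -> list:
    """Flag consumer settings this section treats as risk signals (a sketch)."""
    findings = []
    if cfg.get("enable.auto.commit", True):
        findings.append("auto-commit enabled: offsets may be committed before "
                        "processing completes (data-loss risk on crash)")
    if "session.timeout.ms" in cfg and int(cfg["session.timeout.ms"]) < 10000:
        findings.append("hardcoded session.timeout.ms below 10s: rebalance churn risk")
    if cfg.get("auto.offset.reset") == "latest":
        findings.append("auto.offset.reset=latest: a new consumer group "
                        "silently skips any backlog")
    return findings

cfg = {"group.id": "billing", "enable.auto.commit": True,
       "auto.offset.reset": "latest", "session.timeout.ms": "6000"}
for finding in audit_consumer_config(cfg):
    print("-", finding)
```

In a real audit, each finding would be reported with the file path and line where the setting is defined.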

DATABASE/DATA SYSTEMS (PostgreSQL, MySQL, MongoDB, Redis, Elasticsearch):

- Connection pooling and timeout configurations
- Transaction boundary and isolation level handling
- Index usage patterns and query optimization evidence
- Schema migration strategies and backward compatibility
- Data consistency and integrity constraint implementations
- Backup and recovery configuration evidence
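The first item, pool sizing plus a checkout timeout, is what separates a service that degrades gracefully from one that hangs. `BoundedPool` is a hypothetical minimal pool built on the standard library (SQLite in-memory connections stand in for a real database), showing the two knobs an auditor should look for:

```python
import queue
import sqlite3

class BoundedPool:
    """Tiny connection pool: a fixed number of connections plus a checkout
    timeout, the two configurations this section asks auditors to verify."""

    def __init__(self, size: int = 2, timeout_s: float = 1.0):
        self.timeout_s = timeout_s
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(sqlite3.connect(":memory:", check_same_thread=False))

    def acquire(self):
        # Blocks for at most timeout_s, then fails loudly instead of hanging.
        return self._pool.get(timeout=self.timeout_s)

    def release(self, conn):
        self._pool.put(conn)

pool = BoundedPool(size=1, timeout_s=0.1)
conn = pool.acquire()
try:
    pool.acquire()        # pool exhausted: raises queue.Empty after 0.1 s
except queue.Empty:
    print("checkout timed out")
pool.release(conn)
```

Code that acquires connections with no timeout, or creates an unbounded number of them, is a finding worth a file path and line number.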

MICROSERVICES/DISTRIBUTED SYSTEMS:

- Circuit breaker and retry logic implementations
- Service discovery and load balancing configurations
- Distributed tracing and correlation ID usage patterns
- Timeout and bulkhead pattern implementations
- Data consistency strategies across service boundaries
- Inter-service communication error handling
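To make the circuit-breaker item concrete: `CircuitBreaker` below is a hypothetical minimal implementation (consecutive-failure counting, fail-fast while open, a single half-open probe after a cooldown), roughly what an auditor hopes to find wrapped around inter-service calls:

```python
import time

class CircuitBreaker:
    """Open after `max_failures` consecutive errors; while open, fail fast
    instead of piling more load onto a sick dependency."""

    def __init__(self, max_failures: int = 3, reset_after_s: float = 30.0):
        self.max_failures = max_failures
        self.reset_after_s = reset_after_s
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after_s:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None        # half-open: allow one probe call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

breaker = CircuitBreaker(max_failures=2)

def flaky():
    raise ConnectionError("upstream down")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass
try:
    breaker.call(flaky)    # third call: the circuit is already open
except RuntimeError as e:
    print(e)               # circuit open: failing fast
```

Retry loops without any such breaker, or without bounded backoff, are the anti-pattern this section targets.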

# Technical Due Diligence Prompts for Code Analysis

A comprehensive two-phase framework for conducting technical due diligence on software repositories using AI-assisted code analysis in VS Code or similar environments.

## Overview

This toolkit provides systematic prompts for evaluating code quality, architectural decisions, and technical risks in software acquisitions or code reviews. The two-phase approach ensures thorough analysis while managing AI processing limitations.

## Why Two Phases?
### Phase 1: Discovery & Risk Identification

- Systematic repository inventory and architectural assessment
- Technology-specific risk pattern analysis
- Targeted complexity analysis of the highest-risk files
- Identification of critical issues requiring immediate attention

### Phase 2: Detailed Analysis & Quantification

- Comprehensive complexity calculations (cyclomatic and Halstead metrics)
- Deep-dive domain-specific analysis
- Complete risk quantification and technical recommendations
- Exhaustive coverage of high-priority areas identified in Phase 1
## How to Use

### Prerequisites

- VS Code with Copilot or a similar AI coding assistant
- Access to the complete codebase in your workspace
- Multi-repository support if analyzing distributed systems

### Step-by-Step Process

1. Run Phase 1 first
   - Copy the content from phase-1-discovery.txt and paste it into your AI assistant
   - Wait for the complete analysis, ending with "PHASE 1 COMPLETE"
2. Continue with Phase 2
   - Copy the content from phase-2-detailed-analysis.txt and paste it into the same or a new conversation
   - Phase 2 will reference the findings from Phase 1 automatically
## Expected Outputs

### Phase 1 Results

- Repository overview with technology stack identification
- Complexity survey with specific high-risk files identified
- Technology-specific risk pattern analysis
- Critical issues requiring immediate attention
- Prioritized list of files for the Phase 2 deep-dive

### Phase 2 Results

- Detailed complexity metrics with exact scores
- Comprehensive risk quantification
- Specific technical recommendations with file paths
- Executive summary with a quantified technical debt assessment
## Key Features

### Comprehensive Coverage

- **Code Quality**: cyclomatic complexity and Halstead difficulty metrics
- **Architecture**: service boundaries, integration patterns, deployment analysis
- **Security**: hardcoded secrets, authentication patterns, input validation
- **Performance**: bottleneck identification, scaling limitations
- **Maintainability**: technical debt quantification, refactoring priorities

### Technology-Specific Analysis

- **Streaming Systems**: Kafka, Flink, and Beam offset management and error handling
- **Databases**: connection pooling, transaction management, query optimization
- **Microservices**: circuit breakers, service discovery, distributed tracing
- **APIs**: rate limiting, authentication, error handling consistency

### Quantified Results

- Exact complexity scores with specific thresholds (>10, >15, >30, >50)
- Risk categorization (Critical/High/Medium/Low) with business impact
- Specific file paths and line numbers for all findings
- Measurable technical debt with complexity-reduction targets
### Real-World Testing

This framework has been validated on production codebases including:

- Multi-service Java/Gradle applications with Kafka streaming
- React/TypeScript microservices with BigQuery integration
- Large enterprise systems with complex data processing pipelines

Results consistently identify critical production risks (data loss, security vulnerabilities, performance bottlenecks) that require immediate attention.
## Best Practices

### For Optimal Results

- Ensure complete codebase access in your workspace
- Run Phase 1 completely before starting Phase 2
- Review findings for context-specific business impact
- Use results as input for technical decision-making, not project planning

### Limitations

- Analysis is based solely on static code examination
- Cannot assess runtime performance or production metrics
- Recommendations are technical in nature, not project management plans
- Complexity calculations may be approximate for very large codebases
## Contributing

This is an evolving framework. Feedback and improvements based on real-world usage are welcome.

## License

Apache License 2.0 - free for commercial and personal use.

## Versioning

- v1.1.0 - Revised the prompt into two prompt files, with an updated README.
- v1.0.2 - Added license and CONTRIBUTING.md files to the prompt pack.
- v1.0.1 - Added a Background section and dataset card metadata.
- v1.0.0 - Initial release (full and lean variants).

## Attribution

Created by Darin Archer. Contributions welcome via PR.