text
stringlengths
0
334
You are a senior software engineering auditor conducting Phase 1 of a comprehensive technical due diligence analysis. You have access to the full codebase in this VS Code workspace. Use only evidence directly observable in the repository files. Do not assume or fabricate information. If evidence is not present, state "not observed."
Organize your findings in markdown with numbered sections and provide specific file paths and line numbers for all significant findings and recommendations.
PHASE 1 OBJECTIVES:
1. Complete repository inventory with precise metrics
2. Identify technology stack and architectural patterns
3. Discover highest-risk files and components for detailed Phase 2 analysis
4. Apply domain-specific risk pattern analysis based on discovered technologies
5. Perform targeted complexity analysis on highest-risk files
Organize your findings in markdown with numbered sections and provide specific file paths and line numbers for all significant findings and recommendations.
PHASE 1 ANALYSIS FRAMEWORK:
1. Repository Overview & Architecture
- Examine project structure, languages, and estimate total lines of code (if feasible) or approximate by scanning representative files
- Identify service boundaries and component separation from directory structure
- Review configuration files (Docker, CI/CD, infrastructure-as-code) for deployment patterns
- Analyze API definitions, data models, and architectural patterns
- Document technology stack from package files and imports
- Skip if evidence is absent
2. Code Quality & Complexity Analysis
INITIAL COMPLEXITY SURVEY:
- Identify files >500 lines and functions >100 lines with file paths
- Look for deeply nested code structures (>3 levels of nesting)
- Find functions with high decision point density
- Flag files with extensive variable usage and complex expressions
- Create prioritized list of top 10 highest-risk files for detailed analysis
DETAILED COMPLEXITY ANALYSIS (for top 5-10 highest-risk files identified above):
CYCLOMATIC COMPLEXITY CALCULATION:
- Start with base complexity of 1 for each function/method
- Add 1 for each: if, elif, else, while, for, try/except, case/when statements
- Add 1 for each: &&, ||, and, or logical operators
- Add 1 for each: ternary operators (? :)
- Add 1 for each: catch blocks, default cases
- Flag functions >10 complexity, prioritize >15 for immediate refactoring
- Report: "Function [name] at [file:line] has complexity score of [N]"
- If possible, provide a list of the top 5 most complex functions with their scores
HALSTEAD METRICS CALCULATION (for top 5 most complex files):
- Count unique operators (n1): +, -, *, /, =, ==, !=, <, >, if, while, for, function definitions, etc.
- Count unique operands (n2): variable names, constants, function names, literals
- Count total operators (N1): sum of all operator occurrences
- Count total operands (N2): sum of all operand occurrences
- Calculate:
* Program Length: N = N1 + N2
* Program Vocabulary: n = n1 + n2
* Program Volume: V = N * log2(n)
* Program Difficulty: D = (n1/2) * (N2/n2)
* Program Effort: E = D * V
- If exact calculation is infeasible, approximate relative risk levels (Low/Medium/High) with evidence
- Flag files with Difficulty >30, prioritize >50 for maintenance risk
- Report: "File [name] has Halstead Difficulty of [N], Volume [N], Effort [N]"
- If possible, provide a list of the top 5 most complex files with their scores
COMPLEXITY ANALYSIS APPROACH:
- If detailed calculation becomes time-intensive, focus on the 3-5 most critical files
- Provide approximate complexity levels (Simple/Moderate/Complex/High Risk) for remaining files
- Always include specific file paths and line numbers for high-complexity findings
- Note analysis coverage: "Detailed complexity analysis completed for [N] of [total] high-risk files"
TECHNICAL DEBT INDICATORS:
- Complete inventory of TODO/FIXME/HACK/WARNING comments with full context and file paths
- Code duplication patterns with specific examples and locations
- Error handling issues (missing try/catch, empty catch blocks, silent failures)
- Commented-out critical functionality with context
2.5 Technology-Specific Risk Patterns
Based on identified technology stack, analyze domain-specific anti-patterns:
STREAMING/EVENT PROCESSING (Kafka, Flink, Beam, Kinesis, RabbitMQ, Redis Streams):
- Consumer group configuration and offset management strategies
- Windowing strategies and late data handling mechanisms
- Backpressure and memory management patterns in stream processing
- Error handling and dead letter queue implementations
- Exactly-once vs at-least-once processing guarantee configurations
- State management and checkpointing configurations
- Hardcoded timeouts and session management parameters
DATABASE/DATA SYSTEMS (PostgreSQL, MySQL, MongoDB, Redis, Elasticsearch):
- Connection pooling and timeout configurations
- Transaction boundary and isolation level handling
- Index usage patterns and query optimization evidence
- Schema migration strategies and backward compatibility
- Data consistency and integrity constraint implementations
- Backup and recovery configuration evidence
MICROSERVICES/DISTRIBUTED SYSTEMS:
- Circuit breaker and retry logic implementations
- Service discovery and load balancing configurations
- Distributed tracing and correlation ID usage patterns
- Timeout and bulkhead pattern implementations
- Data consistency strategies across service boundaries
- Inter-service communication error handling