
Testing Guide

This document describes the testing strategy, test coverage, and how to run tests for Script Manager (sm).

Test Philosophy

Script Manager follows a pragmatic testing approach:

  1. End-to-end tests for user workflows (primary focus)
  2. Integration tests for database and file operations
  3. Unit tests for business logic and utilities
  4. Manual tests for cross-platform compatibility

We prioritize real-world usage scenarios over 100% code coverage. Tests should:

  • ✅ Validate user-facing behavior
  • ✅ Catch regressions
  • ✅ Run fast (complete suite < 30 seconds)
  • ✅ Be maintainable and readable

Test Structure

sm/
├── test.ps1                    # Automated test suite (Windows)
├── test.sh                     # Automated test suite (Linux/Mac) - TODO
├── internal/
│   ├── domain/
│   │   └── *_test.go          # Unit tests for domain models
│   ├── usecase/
│   │   └── *_test.go          # Unit tests for business logic
│   └── infrastructure/
│       ├── database/
│       │   └── *_test.go      # Integration tests for repositories
│       └── runtime/
│           └── *_test.go      # Integration tests for runtime detection
└── cmd/sm/
    └── integration_test.go     # End-to-end CLI tests (Go)

Running Tests

Automated Test Suite (PowerShell)

Full test suite - Tests all features end-to-end:

# Windows
.\test.ps1
# Linux/Mac (coming soon)
./test.sh

What it tests:

  • Runtime detection
  • Script management (add, list, update, remove)
  • Script execution
  • Versioning (create, rollback)
  • Tag management
  • Script renaming
  • Script info display
  • Execution history
  • Configuration management
  • Cleanup operations
  • Chain execution

Output:

=== Script Manager Test Suite v2.0 ===

=== Setup ===
[PASS] Build succeeded

=== Test Group 1: Runtime Detection ===
[PASS] Runtime detection found Python
[PASS] Runtime list shows Python

...

=== TEST SUMMARY ===
Passed: 65
Failed: 0

🎉 ALL TESTS PASSED SUCCESSFULLY!

Go Unit Tests

Run Go unit and integration tests:

# Run all tests
go test ./...

# Run with coverage
go test -cover ./...

# Run specific package
go test ./internal/domain/...

# Verbose output
go test -v ./...

# Run with race detector
go test -race ./...

Manual Testing Checklist

For platform-specific testing:

# 1. First-run experience
rm -rf ~/.local/share/sm  # or %LOCALAPPDATA%\sm on Windows
sm init

# 2. Basic workflow
sm add script.py --name test
sm run test
sm history test

# 3. Cross-language support
sm add script.js --name node-test
sm add script.sh --name bash-test
sm run node-test
sm run bash-test

# 4. Edge cases
sm run nonexistent-script  # Should fail gracefully
sm add /path/to/missing.py # Should fail gracefully

Test Coverage

Current Coverage (v0.5.0)

Feature Area        Test Type     Coverage     Test Count
Runtime Detection   E2E           ✅ Full      2
Script Management   E2E           ✅ Full      9
Script Execution    E2E           ✅ Full      2
Manual Versioning   E2E           ✅ Full      4
Tag Management      E2E           ✅ Full      5
Script Rename       E2E           ✅ Full      2
Script Info         E2E           ✅ Full      2
Execution History   E2E           ✅ Full      2
Script Removal      E2E           ✅ Full      5
Configuration       E2E           ✅ Full      6
Cleanup             E2E           ✅ Full      3
Chain Execution     E2E           ✅ Full      20
Domain Models       Unit          ⚠️ Partial   -
Repositories        Integration   ⚠️ Partial   -
Use Cases           Unit          ⚠️ Partial   -

Total E2E Tests: 62
Total Unit Tests: TBD
Total Integration Tests: TBD

Test Categories

✅ Well-Tested (>90% coverage)

  • CLI commands and workflows
  • Script execution (all languages)
  • Versioning system
  • Tag operations
  • Chain execution

⚠️ Partially Tested (50-90% coverage)

  • Database operations
  • Error handling
  • Edge cases

❌ Needs More Tests (<50% coverage)

  • Concurrent operations
  • Large-scale stress tests (1000+ scripts)
  • Performance benchmarks

Writing Tests

Test Suite (PowerShell)

Location: test.ps1

Structure:

# Test Group Template
Write-Host "`n=== Test Group N: Feature Name ===" -ForegroundColor Cyan

# Test case
$output = .\sm.exe command args 2>&1 | Out-String
Assert-Contains $output "expected text" "Test description"

Assertion Helpers:

  • Assert-Contains - Check if output contains text
  • Assert-NotContains - Check if output doesn't contain text
  • Assert-JsonValid - Validate JSON output
  • Assert-FileExists - Check file existence

Example Test:

# Create test script
"print('test')" | Out-File -Encoding UTF8 test.py

# Add to manager
.\sm.exe add test.py --name test-script 2>&1 | Out-Null

# Verify it's listed
$list = .\sm.exe list 2>&1 | Out-String
Assert-Contains $list "test-script" "Script appears in list"

# Execute it
$run = .\sm.exe run test-script 2>&1 | Out-String
Assert-Contains $run "success" "Script executes successfully"
Assert-Contains $run "test" "Script output captured"

# Cleanup
Remove-Item test.py -ErrorAction SilentlyContinue

Go Unit Tests

Location: *_test.go files

Example:

package domain_test

import (
    "testing"
    "sm/internal/domain"
)

func TestScript_AddTag(t *testing.T) {
    script := domain.NewScript("test", "/path/test.py", domain.LanguagePython)
    
    script.AddTag("automation")
    
    if !script.HasTag("automation") {
        t.Errorf("expected script to have 'automation' tag")
    }
    
    // Idempotent
    script.AddTag("automation")
    if len(script.Tags) != 1 {
        t.Errorf("expected 1 tag, got %d", len(script.Tags))
    }
}

func TestScript_RemoveTag(t *testing.T) {
    script := domain.NewScript("test", "/path/test.py", domain.LanguagePython)
    script.AddTag("automation")
    
    script.RemoveTag("automation")
    
    if script.HasTag("automation") {
        t.Errorf("expected tag to be removed")
    }
    
    // Idempotent
    script.RemoveTag("automation") // Should not panic
}
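The tests above assume idempotent AddTag, RemoveTag, and HasTag methods. A minimal, self-contained sketch of such methods follows — field and method names mirror the tests, but this is an illustration, not the real sm/internal/domain code:

```go
package main

import "fmt"

// Script is a trimmed-down stand-in for the domain model exercised above.
type Script struct {
	Name string
	Path string
	Tags []string
}

// HasTag reports whether the script already carries the given tag.
func (s *Script) HasTag(tag string) bool {
	for _, t := range s.Tags {
		if t == tag {
			return true
		}
	}
	return false
}

// AddTag is idempotent: adding an existing tag is a no-op.
func (s *Script) AddTag(tag string) {
	if !s.HasTag(tag) {
		s.Tags = append(s.Tags, tag)
	}
}

// RemoveTag is idempotent: removing a missing tag is a no-op.
func (s *Script) RemoveTag(tag string) {
	for i, t := range s.Tags {
		if t == tag {
			s.Tags = append(s.Tags[:i], s.Tags[i+1:]...)
			return
		}
	}
}

func main() {
	s := &Script{Name: "test"}
	s.AddTag("automation")
	s.AddTag("automation") // second add is a no-op
	fmt.Println(s.HasTag("automation"), len(s.Tags))
}
```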

Test Best Practices

  1. Use descriptive test names

    // Good
    func TestScriptManager_AddScript_WithDuplicateName_ReturnsError(t *testing.T)
    
    // Bad
    func TestAdd(t *testing.T)
  2. Follow AAA pattern (Arrange, Act, Assert)

    func TestExample(t *testing.T) {
        // Arrange
        script := domain.NewScript(...)
        
        // Act
        err := script.AddTag("test")
        
        // Assert
        if err != nil {
            t.Errorf("unexpected error: %v", err)
        }
    }
  3. Test edge cases

    • Empty inputs
    • Non-existent resources
    • Concurrent access
    • Large datasets
  4. Clean up after tests

    func TestWithTempDB(t *testing.T) {
        db := setupTestDB()
        defer db.Close() // Always cleanup
        
        // Test code
    }
  5. Use table-driven tests for similar cases

    func TestVersionIncrement(t *testing.T) {
        tests := []struct {
            input    string
            expected string
        }{
            {"v1.0.0", "v1.0.1"},
            {"v1.0.9", "v1.0.10"},
            {"v2.5.3", "v2.5.4"},
        }
        
        for _, tt := range tests {
            t.Run(tt.input, func(t *testing.T) {
                result := incrementVersion(tt.input)
                if result != tt.expected {
                    t.Errorf("got %s, want %s", result, tt.expected)
                }
            })
        }
    }
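The table-driven example assumes an incrementVersion helper. A minimal sketch that satisfies those cases — the real implementation in sm may differ:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// incrementVersion bumps the patch component of a "vMAJOR.MINOR.PATCH"
// string. Hypothetical helper matching the table-driven test above.
func incrementVersion(v string) string {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) != 3 {
		return v // leave malformed versions untouched
	}
	patch, err := strconv.Atoi(parts[2])
	if err != nil {
		return v
	}
	parts[2] = strconv.Itoa(patch + 1)
	return "v" + strings.Join(parts, ".")
}

func main() {
	fmt.Println(incrementVersion("v1.0.9")) // v1.0.10
}
```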

CI/CD Integration

GitHub Actions (Example)

.github/workflows/test.yml:

name: Tests

on: [push, pull_request]

jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
        go: ['1.21', '1.22']
    
    runs-on: ${{ matrix.os }}
    
    steps:
    - uses: actions/checkout@v3
    
    - name: Set up Go
      uses: actions/setup-go@v4
      with:
        go-version: ${{ matrix.go }}
    
    - name: Install dependencies
      run: go mod download
    
    - name: Run Go tests
      run: go test -v -race -coverprofile=coverage.out ./...
    
    - name: Run E2E tests (Windows)
      if: matrix.os == 'windows-latest'
      run: .\test.ps1
      shell: pwsh
    
    - name: Upload coverage
      uses: codecov/codecov-action@v3
      with:
        files: ./coverage.out

Pre-commit Hook

.git/hooks/pre-commit:

#!/bin/bash
# Run tests before commit

echo "Running tests..."
go test ./...

if [ $? -ne 0 ]; then
    echo "Tests failed! Commit aborted."
    exit 1
fi

echo "All tests passed!"

Make it executable:

chmod +x .git/hooks/pre-commit

Test Data

Test Scripts Location

Test scripts are created temporarily during test execution and cleaned up after.

Do not commit:

  • *.py test files
  • *.js test files
  • *.chain test files
  • $env:LOCALAPPDATA\sm\* test data

Keep in .gitignore:

# Test artifacts
test-*.py
test-*.js
test-*.sh
*.chain
*-test.py

Database Testing

For tests that need a clean database:

func setupTestDB(t *testing.T) *database.BoltDB {
    tempDir := t.TempDir() // Automatically cleaned up
    dbPath := filepath.Join(tempDir, "test.db")
    
    db, err := database.NewBoltDB(dbPath)
    if err != nil {
        t.Fatalf("failed to create test DB: %v", err)
    }
    
    return db
}

Performance Testing

Benchmarks

# Run benchmarks
go test -bench=. ./...

# With memory allocation stats
go test -bench=. -benchmem ./...

Example benchmark:

func BenchmarkScriptExecution(b *testing.B) {
    executor := setupExecutor()
    
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        executor.Execute("test-script", ExecuteOptions{})
    }
}

Stress Testing

# Add 1000 scripts
for ($i=0; $i -lt 1000; $i++) {
    "print('test $i')" | Out-File "test-$i.py"
    sm add "test-$i.py" --name "test-$i"
}

# Measure list performance
Measure-Command { sm list }

# Cleanup
for ($i=0; $i -lt 1000; $i++) {
    sm remove "test-$i" --force
    Remove-Item "test-$i.py"
}

Troubleshooting Tests

Common Issues

Test fails with "database locked":

  • Ensure each test opens/closes DB properly
  • Use defer db.Close()
  • Don't reuse DB connections across tests

File not found errors:

  • Use absolute paths in tests
  • Check working directory: os.Getwd()
  • Ensure cleanup doesn't delete files too early

Tests pass locally but fail in CI:

  • Check for platform-specific paths
  • Use filepath.Join() instead of string concatenation
  • Test on multiple platforms before pushing

Flaky tests:

  • Add explicit waits for async operations
  • Use deterministic test data (no random values)
  • Ensure proper cleanup between tests

Test Maintenance

When to Update Tests

  • ✅ After adding new features
  • ✅ After fixing bugs (add regression test)
  • ✅ When changing command output format
  • ✅ When deprecating features

Test Review Checklist

  • Tests run successfully locally
  • All new features have tests
  • Edge cases covered
  • Error cases tested
  • Cleanup code present
  • Tests are deterministic (no randomness)
  • Tests run in reasonable time (<30s total)

Contributing

When contributing tests:

  1. Run full test suite before submitting PR
  2. Add tests for new features
  3. Ensure tests are cross-platform compatible
  4. Document any platform-specific behavior
  5. Keep test execution time reasonable

See CONTRIBUTING.md for more details.



Questions?

Open an issue on GitHub with the testing label.