This is page 4 of 4. Use http://codebase.md/stefan-nitu/mcp-xcode-server?page={x} to view the full context.

# Directory Structure

```
├── .claude
│   └── settings.local.json
├── .github
│   └── workflows
│       └── ci.yml
├── .gitignore
├── .vscode
│   └── settings.json
├── CLAUDE.md
├── CONTRIBUTING.md
├── docs
│   ├── ARCHITECTURE.md
│   ├── ERROR-HANDLING.md
│   └── TESTING-PHILOSOPHY.md
├── examples
│   └── screenshot-demo.js
├── jest.config.cjs
├── jest.e2e.config.cjs
├── LICENSE
├── package-lock.json
├── package.json
├── README.md
├── scripts
│   └── xcode-sync.swift
├── src
│   ├── application
│   │   └── ports
│   │       ├── ArtifactPorts.ts
│   │       ├── BuildPorts.ts
│   │       ├── CommandPorts.ts
│   │       ├── ConfigPorts.ts
│   │       ├── LoggingPorts.ts
│   │       ├── MappingPorts.ts
│   │       ├── OutputFormatterPorts.ts
│   │       ├── OutputParserPorts.ts
│   │       └── SimulatorPorts.ts
│   ├── cli.ts
│   ├── config.ts
│   ├── domain
│   │   ├── errors
│   │   │   └── DomainError.ts
│   │   ├── services
│   │   │   └── PlatformDetector.ts
│   │   ├── shared
│   │   │   └── Result.ts
│   │   └── tests
│   │       └── unit
│   │           └── PlatformDetector.unit.test.ts
│   ├── features
│   │   ├── app-management
│   │   │   ├── controllers
│   │   │   │   └── InstallAppController.ts
│   │   │   ├── domain
│   │   │   │   ├── InstallRequest.ts
│   │   │   │   └── InstallResult.ts
│   │   │   ├── factories
│   │   │   │   └── InstallAppControllerFactory.ts
│   │   │   ├── index.ts
│   │   │   ├── infrastructure
│   │   │   │   └── AppInstallerAdapter.ts
│   │   │   ├── tests
│   │   │   │   ├── e2e
│   │   │   │   │   ├── InstallAppController.e2e.test.ts
│   │   │   │   │   └── InstallAppMCP.e2e.test.ts
│   │   │   │   ├── integration
│   │   │   │   │   └── InstallAppController.integration.test.ts
│   │   │   │   └── unit
│   │   │   │       ├── AppInstallerAdapter.unit.test.ts
│   │   │   │       ├── InstallAppController.unit.test.ts
│   │   │   │       ├── InstallAppUseCase.unit.test.ts
│   │   │   │       ├── InstallRequest.unit.test.ts
│   │   │   │       └── InstallResult.unit.test.ts
│   │   │   └── use-cases
│   │   │       └── InstallAppUseCase.ts
│   │   ├── build
│   │   │   ├── controllers
│   │   │   │   └── BuildXcodeController.ts
│   │   │   ├── domain
│   │   │   │   ├── BuildDestination.ts
│   │   │   │   ├── BuildIssue.ts
│   │   │   │   ├── BuildRequest.ts
│   │   │   │   ├── BuildResult.ts
│   │   │   │   └── PlatformInfo.ts
│   │   │   ├── factories
│   │   │   │   └── BuildXcodeControllerFactory.ts
│   │   │   ├── index.ts
│   │   │   ├── infrastructure
│   │   │   │   ├── BuildArtifactLocatorAdapter.ts
│   │   │   │   ├── BuildDestinationMapperAdapter.ts
│   │   │   │   ├── XcbeautifyFormatterAdapter.ts
│   │   │   │   ├── XcbeautifyOutputParserAdapter.ts
│   │   │   │   └── XcodeBuildCommandAdapter.ts
│   │   │   ├── tests
│   │   │   │   ├── e2e
│   │   │   │   │   ├── BuildXcodeController.e2e.test.ts
│   │   │   │   │   └── BuildXcodeMCP.e2e.test.ts
│   │   │   │   ├── integration
│   │   │   │   │   └── BuildXcodeController.integration.test.ts
│   │   │   │   └── unit
│   │   │   │       ├── BuildArtifactLocatorAdapter.unit.test.ts
│   │   │   │       ├── BuildDestinationMapperAdapter.unit.test.ts
│   │   │   │       ├── BuildIssue.unit.test.ts
│   │   │   │       ├── BuildProjectUseCase.unit.test.ts
│   │   │   │       ├── BuildRequest.unit.test.ts
│   │   │   │       ├── BuildResult.unit.test.ts
│   │   │   │       ├── BuildXcodeController.unit.test.ts
│   │   │   │       ├── BuildXcodePresenter.unit.test.ts
│   │   │   │       ├── PlatformInfo.unit.test.ts
│   │   │   │       ├── XcbeautifyFormatterAdapter.unit.test.ts
│   │   │   │       ├── XcbeautifyOutputParserAdapter.unit.test.ts
│   │   │   │       └── XcodeBuildCommandAdapter.unit.test.ts
│   │   │   └── use-cases
│   │   │       └── BuildProjectUseCase.ts
│   │   └── simulator
│   │       ├── controllers
│   │       │   ├── BootSimulatorController.ts
│   │       │   ├── ListSimulatorsController.ts
│   │       │   └── ShutdownSimulatorController.ts
│   │       ├── domain
│   │       │   ├── BootRequest.ts
│   │       │   ├── BootResult.ts
│   │       │   ├── ListSimulatorsRequest.ts
│   │       │   ├── ListSimulatorsResult.ts
│   │       │   ├── ShutdownRequest.ts
│   │       │   ├── ShutdownResult.ts
│   │       │   └── SimulatorState.ts
│   │       ├── factories
│   │       │   ├── BootSimulatorControllerFactory.ts
│   │       │   ├── ListSimulatorsControllerFactory.ts
│   │       │   └── ShutdownSimulatorControllerFactory.ts
│   │       ├── index.ts
│   │       ├── infrastructure
│   │       │   ├── SimulatorControlAdapter.ts
│   │       │   └── SimulatorLocatorAdapter.ts
│   │       ├── tests
│   │       │   ├── e2e
│   │       │   │   ├── BootSimulatorController.e2e.test.ts
│   │       │   │   ├── BootSimulatorMCP.e2e.test.ts
│   │       │   │   ├── ListSimulatorsController.e2e.test.ts
│   │       │   │   ├── ListSimulatorsMCP.e2e.test.ts
│   │       │   │   ├── ShutdownSimulatorController.e2e.test.ts
│   │       │   │   └── ShutdownSimulatorMCP.e2e.test.ts
│   │       │   ├── integration
│   │       │   │   ├── BootSimulatorController.integration.test.ts
│   │       │   │   ├── ListSimulatorsController.integration.test.ts
│   │       │   │   └── ShutdownSimulatorController.integration.test.ts
│   │       │   └── unit
│   │       │       ├── BootRequest.unit.test.ts
│   │       │       ├── BootResult.unit.test.ts
│   │       │       ├── BootSimulatorController.unit.test.ts
│   │       │       ├── BootSimulatorUseCase.unit.test.ts
│   │       │       ├── ListSimulatorsController.unit.test.ts
│   │       │       ├── ListSimulatorsUseCase.unit.test.ts
│   │       │       ├── ShutdownRequest.unit.test.ts
│   │       │       ├── ShutdownResult.unit.test.ts
│   │       │       ├── ShutdownSimulatorUseCase.unit.test.ts
│   │       │       ├── SimulatorControlAdapter.unit.test.ts
│   │       │       └── SimulatorLocatorAdapter.unit.test.ts
│   │       └── use-cases
│   │           ├── BootSimulatorUseCase.ts
│   │           ├── ListSimulatorsUseCase.ts
│   │           └── ShutdownSimulatorUseCase.ts
│   ├── index.ts
│   ├── infrastructure
│   │   ├── repositories
│   │   │   └── DeviceRepository.ts
│   │   ├── services
│   │   │   └── DependencyChecker.ts
│   │   └── tests
│   │       └── unit
│   │           ├── DependencyChecker.unit.test.ts
│   │           └── DeviceRepository.unit.test.ts
│   ├── logger.ts
│   ├── presentation
│   │   ├── decorators
│   │   │   └── DependencyCheckingDecorator.ts
│   │   ├── formatters
│   │   │   ├── ErrorFormatter.ts
│   │   │   └── strategies
│   │   │       ├── BuildIssuesStrategy.ts
│   │   │       ├── DefaultErrorStrategy.ts
│   │   │       ├── ErrorFormattingStrategy.ts
│   │   │       └── OutputFormatterErrorStrategy.ts
│   │   ├── interfaces
│   │   │   ├── IDependencyChecker.ts
│   │   │   ├── MCPController.ts
│   │   │   └── MCPResponse.ts
│   │   ├── presenters
│   │   │   └── BuildXcodePresenter.ts
│   │   └── tests
│   │       └── unit
│   │           ├── BuildIssuesStrategy.unit.test.ts
│   │           ├── DefaultErrorStrategy.unit.test.ts
│   │           ├── DependencyCheckingDecorator.unit.test.ts
│   │           └── ErrorFormatter.unit.test.ts
│   ├── shared
│   │   ├── domain
│   │   │   ├── AppPath.ts
│   │   │   ├── DeviceId.ts
│   │   │   ├── Platform.ts
│   │   │   └── ProjectPath.ts
│   │   ├── index.ts
│   │   ├── infrastructure
│   │   │   ├── ConfigProviderAdapter.ts
│   │   │   └── ShellCommandExecutorAdapter.ts
│   │   └── tests
│   │       ├── mocks
│   │       │   ├── promisifyExec.ts
│   │       │   ├── selectiveExecMock.ts
│   │       │   └── xcodebuildHelpers.ts
│   │       ├── skipped
│   │       │   ├── cli.e2e.test.skip
│   │       │   ├── hook-e2e.test.skip
│   │       │   ├── hook-path.e2e.test.skip
│   │       │   └── hook.test.skip
│   │       ├── types
│   │       │   └── execTypes.ts
│   │       ├── unit
│   │       │   ├── AppPath.unit.test.ts
│   │       │   ├── ConfigProviderAdapter.unit.test.ts
│   │       │   ├── ProjectPath.unit.test.ts
│   │       │   └── ShellCommandExecutorAdapter.unit.test.ts
│   │       └── utils
│   │           ├── gitResetTestArtifacts.ts
│   │           ├── mockHelpers.ts
│   │           ├── TestEnvironmentCleaner.ts
│   │           ├── TestErrorInjector.ts
│   │           ├── testHelpers.ts
│   │           ├── TestProjectManager.ts
│   │           └── TestSimulatorManager.ts
│   ├── types.ts
│   ├── utils
│   │   ├── devices
│   │   │   ├── Devices.ts
│   │   │   ├── SimulatorApps.ts
│   │   │   ├── SimulatorBoot.ts
│   │   │   ├── SimulatorDevice.ts
│   │   │   ├── SimulatorInfo.ts
│   │   │   ├── SimulatorReset.ts
│   │   │   └── SimulatorUI.ts
│   │   ├── errors
│   │   │   ├── index.ts
│   │   │   └── xcbeautify-parser.ts
│   │   ├── index.ts
│   │   ├── LogManager.ts
│   │   ├── LogManagerInstance.ts
│   │   └── projects
│   │       ├── SwiftBuild.ts
│   │       ├── SwiftPackage.ts
│   │       ├── SwiftPackageInfo.ts
│   │       ├── Xcode.ts
│   │       ├── XcodeArchive.ts
│   │       ├── XcodeBuild.ts
│   │       ├── XcodeErrors.ts
│   │       ├── XcodeInfo.ts
│   │       └── XcodeProject.ts
│   └── utils.ts
├── test_artifacts
│   ├── Test.xcworkspace
│   │   ├── contents.xcworkspacedata
│   │   └── xcuserdata
│   │       └── stefan.xcuserdatad
│   │           └── UserInterfaceState.xcuserstate
│   ├── TestProjectSwiftTesting
│   │   ├── TestProjectSwiftTesting
│   │   │   ├── Assets.xcassets
│   │   │   │   ├── AccentColor.colorset
│   │   │   │   │   └── Contents.json
│   │   │   │   ├── AppIcon.appiconset
│   │   │   │   │   └── Contents.json
│   │   │   │   └── Contents.json
│   │   │   ├── ContentView.swift
│   │   │   ├── Item.swift
│   │   │   ├── TestProjectSwiftTesting.entitlements
│   │   │   └── TestProjectSwiftTestingApp.swift
│   │   ├── TestProjectSwiftTesting.xcodeproj
│   │   │   ├── project.pbxproj
│   │   │   ├── project.xcworkspace
│   │   │   │   ├── contents.xcworkspacedata
│   │   │   │   └── xcuserdata
│   │   │   │       └── stefan.xcuserdatad
│   │   │   │           └── UserInterfaceState.xcuserstate
│   │   │   └── xcuserdata
│   │   │       └── stefan.xcuserdatad
│   │   │           └── xcschemes
│   │   │               └── xcschememanagement.plist
│   │   ├── TestProjectSwiftTestingTests
│   │   │   └── TestProjectSwiftTestingTests.swift
│   │   └── TestProjectSwiftTestingUITests
│   │       ├── TestProjectSwiftTestingUITests.swift
│   │       └── TestProjectSwiftTestingUITestsLaunchTests.swift
│   ├── TestProjectWatchOS
│   │   ├── TestProjectWatchOS
│   │   │   ├── Assets.xcassets
│   │   │   │   ├── AccentColor.colorset
│   │   │   │   │   └── Contents.json
│   │   │   │   ├── AppIcon.appiconset
│   │   │   │   │   └── Contents.json
│   │   │   │   └── Contents.json
│   │   │   ├── ContentView.swift
│   │   │   └── TestProjectWatchOSApp.swift
│   │   ├── TestProjectWatchOS Watch App
│   │   │   ├── Assets.xcassets
│   │   │   │   ├── AccentColor.colorset
│   │   │   │   │   └── Contents.json
│   │   │   │   ├── AppIcon.appiconset
│   │   │   │   │   └── Contents.json
│   │   │   │   └── Contents.json
│   │   │   ├── ContentView.swift
│   │   │   └── TestProjectWatchOSApp.swift
│   │   ├── TestProjectWatchOS Watch AppTests
│   │   │   └── TestProjectWatchOS_Watch_AppTests.swift
│   │   ├── TestProjectWatchOS Watch AppUITests
│   │   │   ├── TestProjectWatchOS_Watch_AppUITests.swift
│   │   │   └── TestProjectWatchOS_Watch_AppUITestsLaunchTests.swift
│   │   ├── TestProjectWatchOS.xcodeproj
│   │   │   ├── project.pbxproj
│   │   │   └── project.xcworkspace
│   │   │       └── contents.xcworkspacedata
│   │   ├── TestProjectWatchOSTests
│   │   │   └── TestProjectWatchOSTests.swift
│   │   └── TestProjectWatchOSUITests
│   │       ├── TestProjectWatchOSUITests.swift
│   │       └── TestProjectWatchOSUITestsLaunchTests.swift
│   ├── TestProjectXCTest
│   │   ├── TestProjectXCTest
│   │   │   ├── Assets.xcassets
│   │   │   │   ├── AccentColor.colorset
│   │   │   │   │   └── Contents.json
│   │   │   │   ├── AppIcon.appiconset
│   │   │   │   │   └── Contents.json
│   │   │   │   └── Contents.json
│   │   │   ├── ContentView.swift
│   │   │   ├── Item.swift
│   │   │   ├── TestProjectXCTest.entitlements
│   │   │   └── TestProjectXCTestApp.swift
│   │   ├── TestProjectXCTest.xcodeproj
│   │   │   ├── project.pbxproj
│   │   │   ├── project.xcworkspace
│   │   │   │   ├── contents.xcworkspacedata
│   │   │   │   └── xcuserdata
│   │   │   │       └── stefan.xcuserdatad
│   │   │   │           └── UserInterfaceState.xcuserstate
│   │   │   └── xcuserdata
│   │   │       └── stefan.xcuserdatad
│   │   │           └── xcschemes
│   │   │               └── xcschememanagement.plist
│   │   ├── TestProjectXCTestTests
│   │   │   └── TestProjectXCTestTests.swift
│   │   └── TestProjectXCTestUITests
│   │       ├── TestProjectXCTestUITests.swift
│   │       └── TestProjectXCTestUITestsLaunchTests.swift
│   ├── TestSwiftPackageSwiftTesting
│   │   ├── .gitignore
│   │   ├── Package.swift
│   │   ├── Sources
│   │   │   ├── TestSwiftPackageSwiftTesting
│   │   │   │   └── TestSwiftPackageSwiftTesting.swift
│   │   │   └── TestSwiftPackageSwiftTestingExecutable
│   │   │       └── main.swift
│   │   └── Tests
│   │       └── TestSwiftPackageSwiftTestingTests
│   │           └── TestSwiftPackageSwiftTestingTests.swift
│   └── TestSwiftPackageXCTest
│       ├── .gitignore
│       ├── Package.swift
│       ├── Sources
│       │   ├── TestSwiftPackageXCTest
│       │   │   └── TestSwiftPackageXCTest.swift
│       │   └── TestSwiftPackageXCTestExecutable
│       │       └── main.swift
│       └── Tests
│           └── TestSwiftPackageXCTestTests
│               └── TestSwiftPackageXCTestTests.swift
├── tsconfig.json
└── XcodeProjectModifier
    ├── Package.resolved
    ├── Package.swift
    └── Sources
        └── XcodeProjectModifier
            └── main.swift
```
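
The `src/features/*` tree follows a ports-and-adapters layout: controllers validate incoming MCP requests, use-cases orchestrate the work through ports declared under `src/application/ports`, and infrastructure adapters implement those ports. A minimal sketch of that wiring, with invented names (the real declarations live in the files listed above):

```typescript
// Hypothetical illustration of the controller -> use-case -> adapter layering;
// these names are made up for the sketch, not the repo's actual declarations.
interface BuildCommand {
  execute(projectPath: string, scheme: string): Promise<{ success: boolean }>;
}

// Mirrors features/build/use-cases: depends only on the port, not on xcodebuild.
class BuildProjectUseCase {
  constructor(private readonly command: BuildCommand) {}
  run(projectPath: string, scheme: string) {
    return this.command.execute(projectPath, scheme);
  }
}

// Mirrors features/build/infrastructure: the adapter that actually shells out.
class XcodeBuildCommandAdapter implements BuildCommand {
  async execute(_projectPath: string, _scheme: string) {
    // ...invoke xcodebuild here...
    return { success: true };
  }
}

// A factory (cf. features/build/factories) wires the adapter into the use case.
const useCase = new BuildProjectUseCase(new XcodeBuildCommandAdapter());
void useCase.run('./MyApp.xcodeproj', 'MyApp');
```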

# Files

--------------------------------------------------------------------------------
/src/utils/projects/XcodeBuild.ts:
--------------------------------------------------------------------------------

```typescript
import { execAsync } from '../../utils.js';
import { execSync } from 'child_process';
import { createModuleLogger } from '../../logger.js';
import { Platform } from '../../types.js';
import { PlatformInfo } from '../../features/build/domain/PlatformInfo.js';
import { existsSync, mkdirSync, rmSync } from 'fs';
import path from 'path';
import { config } from '../../config.js';
import { LogManager } from '../LogManager.js';
import { parseXcbeautifyOutput, formatParsedOutput } from '../errors/xcbeautify-parser.js';

const logger = createModuleLogger('XcodeBuild');
const logManager = new LogManager();

export interface BuildOptions {
  scheme?: string;
  configuration?: string;
  platform?: Platform;
  deviceId?: string;
  derivedDataPath?: string;
}

export interface TestOptions {
  scheme?: string;
  configuration?: string;
  platform?: Platform;
  deviceId?: string;
  testFilter?: string;
  testTarget?: string;
}

// Using unified xcbeautify parser for all error handling
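// (xcbeautify renders each issue as a single annotated line, e.g.
// '❌ /path/File.swift:10:5: cannot find Foo in scope'; the exact prefix is an
// assumption about xcbeautify's default output, which parseXcbeautifyOutput consumes)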

/**
 * Handles xcodebuild commands for Xcode projects
 */
export class XcodeBuild {
  /**
   * Validates that a scheme supports the requested platform
   * @throws Error if xcodebuild reports available destinations that do not include the platform
   */
  private async validatePlatformSupport(
    projectPath: string,
    isWorkspace: boolean,
    scheme: string | undefined,
    platform: Platform
  ): Promise<void> {
    const projectFlag = isWorkspace ? '-workspace' : '-project';
    
    let command = `xcodebuild -showBuildSettings ${projectFlag} "${projectPath}"`;
    
    if (scheme) {
      command += ` -scheme "${scheme}"`;
    }
    
    // Use a generic destination to check platform support
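    // e.g. 'generic/platform=iOS Simulator' (assumed form; the exact string comes
    // from PlatformInfo.generateGenericDestination())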
    const platformInfo = PlatformInfo.fromPlatform(platform);
    const destination = platformInfo.generateGenericDestination();
    command += ` -destination '${destination}'`;
    
    try {
      logger.debug({ command }, 'Validating platform support');
      // Just check if the command succeeds - we don't need the output
      await execAsync(command, { 
        maxBuffer: 1024 * 1024, // 1MB is enough for validation
        timeout: 10000 // 10 second timeout for validation
      });
      logger.debug({ platform, scheme }, 'Platform validation succeeded');
    } catch (error: any) {
      const stderr = error.stderr || '';
      const stdout = error.stdout || '';
      
      // Check if error indicates platform mismatch
      if (stderr.includes('Available destinations for') || stdout.includes('Available destinations for')) {
        // Extract available platforms from the error message
        const availablePlatforms = this.extractAvailablePlatforms(stderr + stdout);
        throw new Error(
          `Platform '${platform}' is not supported by scheme '${scheme || 'default'}'. ` +
          `Available platforms: ${availablePlatforms.join(', ')}`
        );
      }
      
      // Some other error: log it and continue; the actual build will surface real failures
      logger.warn({ error: error.message }, 'Platform validation check failed, continuing anyway');
    }
  }
  
  /**
   * Extracts available platforms from xcodebuild error output
   */
  private extractAvailablePlatforms(output: string): string[] {
    const platforms = new Set<string>();
    const lines = output.split('\n');
    
    for (const line of lines) {
      // Look for lines like "{ platform:watchOS" or "{ platform:iOS Simulator"
      const match = line.match(/\{ platform:([^,}]+)/);
      if (match) {
        let platform = match[1].trim();
        // Normalize platform names
        if (platform.includes('Simulator')) {
          platform = platform.replace(' Simulator', '');
        }
        platforms.add(platform);
      }
    }
    
    return Array.from(platforms);
  }

  /**
   * Build an Xcode project or workspace
   */
  async build(
    projectPath: string, 
    isWorkspace: boolean,
    options: BuildOptions = {}
  ): Promise<{ success: boolean; output: string; appPath?: string; logPath?: string; errors?: any[] }> {
    const {
      scheme,
      configuration = 'Debug',
      platform = Platform.iOS,
      deviceId,
      derivedDataPath = './DerivedData'
    } = options;
    
    // Validate platform support first
    await this.validatePlatformSupport(projectPath, isWorkspace, scheme, platform);
    
    const projectFlag = isWorkspace ? '-workspace' : '-project';
    let command = `xcodebuild ${projectFlag} "${projectPath}"`;
    
    if (scheme) {
      command += ` -scheme "${scheme}"`;
    }
    
    command += ` -configuration "${configuration}"`;
    
    // Determine destination
    const platformInfo = PlatformInfo.fromPlatform(platform);
    let destination: string;
    if (deviceId) {
      destination = platformInfo.generateDestination(deviceId);
    } else {
      destination = platformInfo.generateGenericDestination();
    }
    command += ` -destination '${destination}'`;
    
    command += ` -derivedDataPath "${derivedDataPath}" build`;
    
    // Pipe through xcbeautify for clean output
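    // (set -o pipefail propagates xcodebuild's exit status through the pipe,
    // so a failed build is not masked by xcbeautify exiting 0)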
    command = `set -o pipefail && ${command} 2>&1 | xcbeautify`;
    
    logger.debug({ command }, 'Build command');
    
    let output = '';
    let exitCode = 0;
    const projectName = path.basename(projectPath, path.extname(projectPath));
    
    try {
      const { stdout, stderr } = await execAsync(command, { 
        maxBuffer: 50 * 1024 * 1024,
        shell: '/bin/bash'
      });
      
      output = stdout + (stderr ? `\n${stderr}` : '');
      
      // Try to find the built app using find command (more reliable than parsing output)
      let appPath: string | undefined;
      try {
        const { stdout: findOutput } = await execAsync(
          `find "${derivedDataPath}" -name "*.app" -type d | head -1`
        );
        appPath = findOutput.trim() || undefined;
        
        if (appPath) {
          logger.info({ appPath }, 'Found app at path');
          
          // Verify the app actually exists
          if (!existsSync(appPath)) {
            logger.error({ appPath }, 'App path does not exist!');
            appPath = undefined;
          }
        } else {
          logger.warn({ derivedDataPath }, 'No app found in DerivedData');
        }
      } catch (error: any) {
        logger.error({ error: error.message, derivedDataPath }, 'Error finding app path');
      }
      
      logger.info({ projectPath, scheme, configuration, platform }, 'Build succeeded');
      
      // Save the build output to logs
      const logPath = logManager.saveLog('build', output, projectName, {
        scheme,
        configuration,
        platform,
        exitCode,
        command
      });
      
      return {
        success: true,
        output,
        appPath,
        logPath
      };
    } catch (error: any) {
      logger.error({ error: error.message, projectPath }, 'Build failed');
      
      output = (error.stdout || '') + (error.stderr ? `\n${error.stderr}` : '');
      exitCode = error.code || 1;
      
      // Parse errors using the unified xcbeautify parser
      const parsed = parseXcbeautifyOutput(output);
      
      // Log for debugging
      if (parsed.errors.length === 0 && output.toLowerCase().includes('error:')) {
        logger.warn({ outputSample: output.substring(0, 500) }, 'Output contains "error:" but no errors were parsed');
      }
      
      // Save the build output to logs
      const logPath = logManager.saveLog('build', output, projectName, {
        scheme,
        configuration,
        platform,
        exitCode,
        command,
        errors: parsed.errors,
        warnings: parsed.warnings
      });
      
      // Save debug data with parsed errors
      if (parsed.errors.length > 0) {
        logManager.saveDebugData('build-errors', parsed.errors, projectName);
        logger.info({ errorCount: parsed.errors.length, warningCount: parsed.warnings.length }, 'Parsed errors');
      }
      
      // Create error with parsed details
      const errorWithDetails = new Error(formatParsedOutput(parsed)) as any;
      errorWithDetails.output = output;
      errorWithDetails.parsed = parsed;
      errorWithDetails.logPath = logPath;
      
      throw errorWithDetails;
    }
  }
  
  /**
   * Run tests for an Xcode project
   */
  async test(
    projectPath: string,
    isWorkspace: boolean,
    options: TestOptions = {}
  ): Promise<{ 
    success: boolean; 
    output: string; 
    passed: number; 
    failed: number; 
    failingTests?: Array<{ identifier: string; reason: string }>;
    errors?: any[];
    warnings?: any[];
    logPath: string;
  }> {
    const {
      scheme,
      configuration = 'Debug',
      platform = Platform.iOS,
      deviceId,
      testFilter,
      testTarget
    } = options;
    
    // Create a unique result bundle path in DerivedData
    const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
    const derivedDataPath = config.getDerivedDataPath(projectPath);
    let resultBundlePath = path.join(
      derivedDataPath,
      'Logs',
      'Test',
      `Test-${scheme || 'tests'}-${timestamp}.xcresult`
    );
    
    // Ensure result directory exists
    const resultDir = path.dirname(resultBundlePath);
    if (!existsSync(resultDir)) {
      mkdirSync(resultDir, { recursive: true });
    }
    
    const projectFlag = isWorkspace ? '-workspace' : '-project';
    let command = `xcodebuild ${projectFlag} "${projectPath}"`;
    
    if (scheme) {
      command += ` -scheme "${scheme}"`;
    }
    
    command += ` -configuration "${configuration}"`;
    
    // Determine destination
    const platformInfo = PlatformInfo.fromPlatform(platform);
    let destination: string;
    if (deviceId) {
      destination = platformInfo.generateDestination(deviceId);
    } else {
      destination = platformInfo.generateGenericDestination();
    }
    command += ` -destination '${destination}'`;
    
    // Add test target/filter if provided
    if (testTarget) {
      command += ` -only-testing:${testTarget}`;
    }
    if (testFilter) {
      command += ` -only-testing:${testFilter}`;
    }
    
    // Disable parallel testing to avoid timeouts and multiple simulator instances
    command += ' -parallel-testing-enabled NO';
    
    // Add result bundle path
    command += ` -resultBundlePath "${resultBundlePath}"`;
    
    command += ' test';
    
    // Pipe through xcbeautify for clean output
    command = `set -o pipefail && ${command} 2>&1 | xcbeautify`;
    
    logger.debug({ command }, 'Test command');
    
    // Use execAsync instead of spawn to ensure the xcresult is fully written when we get the result
    let output = '';
    let code = 0;
    
    try {
      logger.info('Running tests...');
      const { stdout, stderr } = await execAsync(command, {
        maxBuffer: 50 * 1024 * 1024, // 50MB buffer for large test outputs
        timeout: 1800000, // 30 minute timeout for tests
        shell: '/bin/bash'
      });
      
      output = stdout + (stderr ? '\n' + stderr : '');
    } catch (error: any) {
      // Test failure is expected, capture the output
      output = (error.stdout || '') + (error.stderr ? '\n' + error.stderr : '');
      code = error.code || 1;
      logger.debug({ code }, 'Tests completed with failures');
    }
    
    // Parse compile errors and warnings using the central parser
    const parsed = parseXcbeautifyOutput(output);
    
    // Save the full test output to logs
    const projectName = path.basename(projectPath, path.extname(projectPath));
    const logPath = logManager.saveLog('test', output, projectName, {
      scheme,
      configuration,
      platform,
      exitCode: code,
      command,
      errors: parsed.errors.length > 0 ? parsed.errors : undefined,
      warnings: parsed.warnings.length > 0 ? parsed.warnings : undefined
    });
    logger.debug({ logPath }, 'Test output saved to log file');
    
    // Parse the xcresult bundle for accurate test results
    let testResult = { 
      passed: 0, 
      failed: 0, 
      success: false, 
      failingTests: undefined as Array<{ identifier: string; reason: string }> | undefined,
      logPath
    };
    
    // Try to extract the actual xcresult path from the output
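    // Expected shape (assumed from the regex below): 'Test session results, code
    // coverage, and logs:' followed by an indented .xcresult path on the next line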
    const resultMatch = output.match(/Test session results.*?\n\s*(.+\.xcresult)/);
    if (resultMatch) {
      resultBundlePath = resultMatch[1].trim();
      logger.debug({ resultBundlePath }, 'Found xcresult path in output');
    }
    
    // Also check for the "Writing result bundle at path" message
    const writingMatch = output.match(/Writing result bundle at path:\s*(.+\.xcresult)/);
    if (!resultMatch && writingMatch) {
      resultBundlePath = writingMatch[1].trim();
      logger.debug({ resultBundlePath }, 'Found xcresult path from Writing message');
    }
    
    try {
      // Wait for the xcresult bundle to be created and fully written (up to 10 seconds)
      let waitTime = 0;
      const maxWaitTime = 10000;
      const checkInterval = 200;
      
      // Check both that the directory exists and has the Info.plist file
      const isXcresultReady = () => {
        if (!existsSync(resultBundlePath)) {
          return false;
        }
        // Check if Info.plist exists inside the bundle, which indicates it's fully written
        const infoPlistPath = path.join(resultBundlePath, 'Info.plist');
        return existsSync(infoPlistPath);
      };
      
      while (!isXcresultReady() && waitTime < maxWaitTime) {
        await new Promise(resolve => setTimeout(resolve, checkInterval));
        waitTime += checkInterval;
      }
      
      if (!isXcresultReady()) {
        logger.warn({ resultBundlePath, waitTime }, 'xcresult bundle not ready after waiting, using fallback parsing');
        throw new Error('xcresult bundle not ready');
      }
      
      // Give xcresulttool a moment to prepare for reading
      await new Promise(resolve => setTimeout(resolve, 300));
      
      logger.debug({ resultBundlePath, waitTime }, 'xcresult bundle is ready');
      
      let testReportJson;
      let totalPassed = 0;
      let totalFailed = 0;
      const failingTests: Array<{ identifier: string; reason: string }> = [];
      
      try {
        // Try the new format first (Xcode 16+)
        logger.debug({ resultBundlePath }, 'Attempting to parse xcresult with new format');
        testReportJson = execSync(
          `xcrun xcresulttool get test-results summary --path "${resultBundlePath}"`,
          { encoding: 'utf8', maxBuffer: 50 * 1024 * 1024 }
        );
        
        const summary = JSON.parse(testReportJson);
        logger.debug({ summary: { passedTests: summary.passedTests, failedTests: summary.failedTests } }, 'Got summary from xcresulttool');
        
        // The summary counts are not reliable for mixed XCTest/Swift Testing,
        // so always count from the detailed test nodes instead
        try {
          const testsJson = execSync(
            `xcrun xcresulttool get test-results tests --path "${resultBundlePath}"`,
            { encoding: 'utf8', maxBuffer: 50 * 1024 * 1024 }
          );
          const testsData = JSON.parse(testsJson);
          
          // Helper function to count tests and extract failing tests with reasons
          const processTestNodes = (node: any, parentName: string = ''): void => {
            if (!node) return;
            
            // Count test cases (including argument variations)
            if (node.nodeType === 'Test Case') {
              // Check if this test has argument variations
              let hasArguments = false;
              if (node.children && Array.isArray(node.children)) {
                for (const child of node.children) {
                  if (child.nodeType === 'Arguments') {
                    hasArguments = true;
                    // Each argument variation is a separate test
                    if (child.result === 'Passed') {
                      totalPassed++;
                    } else if (child.result === 'Failed') {
                      totalFailed++;
                    }
                  }
                }
              }
              
              // If no arguments, count the test case itself
              if (!hasArguments) {
                if (node.result === 'Passed') {
                  totalPassed++;
                } else if (node.result === 'Failed') {
                  totalFailed++;
                  
                  // Extract failure information
                  let testName = node.nodeIdentifier || node.name || parentName;
                  let failureReason = '';
                  
                  // Look for failure message in children
                  if (node.children && Array.isArray(node.children)) {
                    for (const child of node.children) {
                      if (child.nodeType === 'Failure Message') {
                        failureReason = child.details || child.name || 'Test failed';
                        break;
                      }
                    }
                  }
                  
                  // Add test as an object with identifier and reason
                  failingTests.push({
                    identifier: testName,
                    reason: failureReason || 'Test failed (no details available)'
                  });
                }
              }
            }
            
            // Recurse through children
            if (node.children && Array.isArray(node.children)) {
              for (const child of node.children) {
                processTestNodes(child, node.name || parentName);
              }
            }
          };
          
          // Parse the test nodes to count tests and extract failing test names with reasons
          if (testsData.testNodes && Array.isArray(testsData.testNodes)) {
            for (const testNode of testsData.testNodes) {
              processTestNodes(testNode);
            }
          }
        } catch (detailsError: any) {
          logger.debug({ error: detailsError.message }, 'Could not extract failing test details');
        }
        
      } catch (newFormatError: any) {
        // Fall back to legacy format
        logger.debug('Falling back to legacy xcresulttool format');
        testReportJson = execSync(
          `xcrun xcresulttool get test-report --legacy --format json --path "${resultBundlePath}"`,
          { encoding: 'utf8', maxBuffer: 50 * 1024 * 1024 }
        );
        
        const testReport = JSON.parse(testReportJson);
        
        // Parse the legacy test report structure
        if (testReport.tests) {
          const countTests = (tests: any[]): void => {
            for (const test of tests) {
              if (test.subtests) {
                // This is a test suite, recurse into it
                countTests(test.subtests);
              } else if (test.testStatus) {
                // This is an actual test
                if (test.testStatus === 'Success') {
                  totalPassed++;
                } else if (test.testStatus === 'Failure' || test.testStatus === 'Expected Failure') {
                  totalFailed++;
                  // Extract test name and failure details
                  if (test.identifier) {
                    const failureReason = test.failureMessage || test.message || 'Test failed (no details available)';
                    failingTests.push({
                      identifier: test.identifier,
                      reason: failureReason
                    });
                  }
                }
              }
            }
          };
          
          countTests(testReport.tests);
        }
      }
      
      testResult = {
        passed: totalPassed,
        failed: totalFailed,
        success: totalFailed === 0 && code === 0,
        failingTests: failingTests.length > 0 ? failingTests : undefined,
        logPath
      };
      
      // Save debug data for successful parsing
      logManager.saveDebugData('test-xcresult-parsed', {
        passed: totalPassed,
        failed: totalFailed,
        failingTests,
        resultBundlePath
      }, projectName);
      
      logger.info({ 
        projectPath, 
        ...testResult, 
        exitCode: code,
        resultBundlePath 
      }, 'Tests completed (parsed from xcresult)');
      
    } catch (parseError: any) {
      logger.error({ 
        error: parseError.message,
        resultBundlePath,
        xcresultExists: existsSync(resultBundlePath) 
      }, 'Failed to parse xcresult bundle');
      
      // Save debug info about the failure
      logManager.saveDebugData('test-xcresult-parse-error', {
        error: parseError.message,
        resultBundlePath,
        exists: existsSync(resultBundlePath)
      }, projectName);
      
      // If xcresulttool fails, try to parse counts from the text output
      const passedMatch = output.match(/Executed (\d+) tests?, with (\d+) failures?/);
      if (passedMatch) {
        const totalTests = parseInt(passedMatch[1], 10);
        const failures = parseInt(passedMatch[2], 10);
        testResult = {
          passed: totalTests - failures,
          failed: failures,
          success: failures === 0,
          failingTests: undefined,
          logPath
        };
      } else {
        // Last resort fallback
        testResult = {
          passed: 0,
          failed: code === 0 ? 0 : 1,
          success: code === 0,
          failingTests: undefined,
          logPath
        };
      }
    }
        
    // Compile errors and warnings were already extracted from the output by the xcbeautify parser above
    
    const result = {
      ...testResult,
      success: code === 0 && testResult.failed === 0,
      output,
      errors: parsed.errors.length > 0 ? parsed.errors : undefined,
      warnings: parsed.warnings.length > 0 ? parsed.warnings : undefined
    };
    
    // Clean up the result bundle if tests passed (keep failed results for debugging)
    if (result.success) {
      try {
        rmSync(resultBundlePath, { recursive: true, force: true });
      } catch {
        // Ignore cleanup errors
      }
    }
    
    return result;
  }
  
  /**
   * Clean build artifacts
   */
  async clean(
    projectPath: string,
    isWorkspace: boolean,
    options: { scheme?: string; configuration?: string } = {}
  ): Promise<void> {
    const { scheme, configuration = 'Debug' } = options;
    
    const projectFlag = isWorkspace ? '-workspace' : '-project';
    let command = `xcodebuild ${projectFlag} "${projectPath}"`;
    
    if (scheme) {
      command += ` -scheme "${scheme}"`;
    }
    
    command += ` -configuration "${configuration}" clean`;
    
    logger.debug({ command }, 'Clean command');
    
    try {
      await execAsync(command);
      logger.info({ projectPath }, 'Clean succeeded');
    } catch (error: any) {
      logger.error({ error: error.message, projectPath }, 'Clean failed');
      throw new Error(`Clean failed: ${error.message}`);
    }
  }
}
```
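
A minimal usage sketch for the `XcodeBuild` class above. The caller, project path, scheme, and import specifiers are illustrative assumptions, not part of the repo:

```typescript
// Hypothetical driver for XcodeBuild; adjust import paths to your layout.
import { XcodeBuild } from './src/utils/projects/XcodeBuild.js';
import { Platform } from './src/types.js';

async function main(): Promise<void> {
  const xcode = new XcodeBuild();

  // build() resolves on success and throws on failure; the thrown error carries
  // the parsed issues (error.parsed) and the saved log path (error.logPath).
  const build = await xcode.build('./MyApp.xcodeproj', false, {
    scheme: 'MyApp',
    configuration: 'Debug',
    platform: Platform.iOS
  });
  console.log(`Built ${build.appPath ?? '(no .app found)'}; log at ${build.logPath}`);

  // test() does not throw on failing tests; failures come back in the result.
  const tests = await xcode.test('./MyApp.xcodeproj', false, { scheme: 'MyApp' });
  console.log(`Tests: ${tests.passed} passed, ${tests.failed} failed`);
  for (const t of tests.failingTests ?? []) {
    console.log(`  ${t.identifier}: ${t.reason}`);
  }
}

main().catch(err => {
  console.error(err.message);
  process.exit(1);
});
```

Note the asymmetry: `build()` throws on any xcodebuild failure, while `test()` treats failing tests as a normal result, so only the build step needs a try/catch.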

--------------------------------------------------------------------------------
/src/utils/projects/SwiftBuild.ts:
--------------------------------------------------------------------------------

```typescript
import { execAsync } from '../../utils.js';
import { createModuleLogger } from '../../logger.js';
import path from 'path';
import { existsSync, readFileSync, unlinkSync } from 'fs';
import { tmpdir } from 'os';
import { XMLParser } from 'fast-xml-parser';
import { LogManager } from '../LogManager.js';
import { parseXcbeautifyOutput, Issue } from '../errors/xcbeautify-parser.js';

const logger = createModuleLogger('SwiftBuild');
const logManager = new LogManager();

export interface SwiftBuildOptions {
  configuration?: 'Debug' | 'Release';
  product?: string;
  target?: string;
}

export interface SwiftRunOptions {
  executable?: string;
  arguments?: string[];
  configuration?: 'Debug' | 'Release';
}

export interface SwiftTestOptions {
  filter?: string;
  configuration?: 'Debug' | 'Release';
}
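
// Example usage (hypothetical package path; mirrors the public API below):
//   const swift = new SwiftBuild();
//   const built = await swift.build('./MyPackage', { configuration: 'Release' });
//   const tests = await swift.test('./MyPackage', { filter: 'MyPackageTests' });
//   console.log(`${tests.passed} passed, ${tests.failed} failed`);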

/**
 * Handles Swift package commands (build, run, test)
 */
export class SwiftBuild {
  /**
   * Parse compile errors from Swift compiler output
   * @unused - Kept for potential future use
   */
  private parseCompileErrors(output: string): Issue[] {
    const errors: Issue[] = [];
    const lines = output.split('\n');
    
    // Swift compiler error format:
    // /path/to/file.swift:10:15: error: message here
    // /path/to/file.swift:20:8: warning: message here
    const errorRegex = /^(.+):(\d+):(\d+):\s+(error|warning):\s+(.+)$/;
    
    // Track unique errors (same as XcodeBuild to avoid duplicates)
    const seenErrors = new Set<string>();
    
    for (const line of lines) {
      const match = line.match(errorRegex);
      if (match) {
        const [, file, lineNum, column, type, message] = match;
        
        // Create unique key to avoid duplicates
        const errorKey = `${file}:${lineNum}:${column}:${message}`;
        
        if (!seenErrors.has(errorKey)) {
          seenErrors.add(errorKey);
          errors.push({
            file,
            line: parseInt(lineNum, 10),
            column: parseInt(column, 10),
            message,
            type: type as 'error' | 'warning',
            rawLine: line
          });
        }
      }
    }
    
    return errors;
  }

  /**
   * Build a Swift package
   */
  async build(
    packagePath: string,
    options: SwiftBuildOptions = {}
  ): Promise<{ success: boolean; output: string; logPath?: string; errors?: Issue[]; warnings?: Issue[] }> {
    const { configuration = 'Debug', product, target } = options;
    
    // Convert to lowercase for swift command
    const configFlag = configuration.toLowerCase();
    let command = `swift build --package-path "${packagePath}" -c ${configFlag}`;
    
    if (product) {
      command += ` --product "${product}"`;
    }
    
    if (target) {
      command += ` --target "${target}"`;
    }
    
    logger.debug({ command }, 'Build command');
    
    try {
      const { stdout, stderr } = await execAsync(command, {
        maxBuffer: 10 * 1024 * 1024
      });
      
      const output = stdout + (stderr ? `\n${stderr}` : '');
      
      // Save log
      const packageName = path.basename(packagePath);
      const logPath = logManager.saveLog('build', output, packageName, {
        configuration,
        product,
        target
      });
      
      logger.info({ packagePath, configuration, logPath }, 'Build succeeded');
      
      return {
        success: true,
        output,
        logPath
      };
    } catch (error: any) {
      logger.error({ error: error.message, packagePath }, 'Build failed');
      
      // Get full output
      const output = (error.stdout || '') + (error.stderr ? `\n${error.stderr}` : '');
      
      // Save log
      const packageName = path.basename(packagePath);
      const logPath = logManager.saveLog('build', output, packageName, {
        configuration,
        product,
        target,
        exitCode: error.code || 1
      });
      
      // Parse errors using unified xcbeautify parser
      const parsed = parseXcbeautifyOutput(output);
      const compileErrors = parsed.errors;
      const buildErrors: Issue[] = [];
      
      // Throw error with output for handler to parse
      const buildError: any = new Error(output);
      buildError.compileErrors = compileErrors;
      buildError.buildErrors = buildErrors;
      buildError.logPath = logPath;
      buildError.output = output;
      throw buildError;
    }
  }
  
  /**
   * Run a Swift package executable
   */
  async run(
    packagePath: string,
    options: SwiftRunOptions = {}
  ): Promise<{ success: boolean; output: string; logPath?: string; errors?: Issue[]; warnings?: Issue[] }> {
    const { executable, arguments: args = [], configuration = 'Debug' } = options;
    
    // Convert to lowercase for swift command
    const configFlag = configuration.toLowerCase();
    let command = `swift run --package-path "${packagePath}" -c ${configFlag}`;
    
    if (executable) {
      command += ` "${executable}"`;
    }
    
    if (args.length > 0) {
      command += ` ${args.map(arg => `"${arg}"`).join(' ')}`;
    }
    
    logger.debug({ command }, 'Run command');
    
    try {
      const { stdout, stderr } = await execAsync(command, {
        maxBuffer: 10 * 1024 * 1024
      });
      
      const output = stdout + (stderr ? `\n${stderr}` : '');
      
      // Save log
      const packageName = path.basename(packagePath);
      const logPath = logManager.saveLog('run', output, packageName, {
        configuration,
        executable,
        arguments: args
      });
      
      logger.info({ packagePath, executable, logPath }, 'Run succeeded');
      
      return {
        success: true,
        output,
        logPath
      };
    } catch (error: any) {
      logger.error({ error: error.message, packagePath }, 'Run failed');
      
      // Get full output - for swift run, build output is in stderr, executable output is in stdout
      // We want to show them in chronological order: build first, then executable
      const output = (error.stderr || '') + (error.stdout ? `\n${error.stdout}` : '');
      
      // Save log
      const packageName = path.basename(packagePath);
      const logPath = logManager.saveLog('run', output, packageName, {
        configuration,
        executable,
        arguments: args,
        exitCode: error.code || 1
      });
      
      // Parse errors using unified xcbeautify parser
      const parsed = parseXcbeautifyOutput(output);
      const compileErrors = parsed.errors;
      const buildErrors: Issue[] = [];
      
      // Throw error with output for handler to parse
      const runError: any = new Error(output);
      runError.compileErrors = compileErrors;
      runError.buildErrors = buildErrors;
      runError.logPath = logPath;
      runError.output = output;
      throw runError;
    }
  }
  
  /**
   * Test a Swift package
   */
  async test(
    packagePath: string,
    options: SwiftTestOptions = {}
  ): Promise<{ 
    success: boolean; 
    output: string;
    passed: number;
    failed: number;
    failingTests?: Array<{ identifier: string; reason: string }>;
    errors?: Issue[];
    warnings?: Issue[];
    logPath: string;
  }> {
    const { filter, configuration = 'Debug' } = options;
    
    // Convert to lowercase for swift command
    const configFlag = configuration.toLowerCase();
    
    // Generate unique xunit output file in temp directory
    const xunitPath = path.join(tmpdir(), `test-${Date.now()}-${Math.random().toString(36).substring(7)}.xml`);
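    // swift test is expected to write Swift Testing results to a '-swift-testing'
    // sibling of the XCTest xunit file (naming assumed; parseXunitFiles reads both)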
    const swiftTestingXunitPath = xunitPath.replace('.xml', '-swift-testing.xml');
    
    let command = `swift test --package-path "${packagePath}" -c ${configFlag}`;
    
    if (filter) {
      command += ` --filter "${filter}"`;
    }
    
    // Add parallel and xunit output for better results
    command += ` --parallel --xunit-output "${xunitPath}"`;
    
    logger.debug({ command, xunitPath, swiftTestingXunitPath }, 'Test command');
    
    // Extract package name for logging
    const packageName = path.basename(packagePath);
    
    let testResult = { passed: 0, failed: 0, success: false, failingTests: undefined as Array<{ identifier: string; reason: string }> | undefined };
    let output = '';
    let exitCode = 0;
    
    try {
      const { stdout, stderr } = await execAsync(command, {
        maxBuffer: 10 * 1024 * 1024
      });
      
      output = stdout + (stderr ? `\n${stderr}` : '');
      
      // Parse XUnit files for test results
      const xunitResults = this.parseXunitFiles(xunitPath, swiftTestingXunitPath, output);
      
      // Use XUnit results if available
      if (xunitResults) {
        testResult = { ...testResult, ...xunitResults };
      } else {
        // Fallback to console parsing if XUnit fails
        const parsedResults = this.parseTestOutput(output);
        testResult = { ...testResult, ...parsedResults };
      }
      
      testResult.success = exitCode === 0 && testResult.failed === 0;
      
      // Clean up XUnit files
      this.cleanupXunitFiles(xunitPath, swiftTestingXunitPath);
      
      logger.info({ 
        packagePath, 
        passed: testResult.passed, 
        failed: testResult.failed,
        failingTests: testResult.failingTests,
        source: xunitResults ? 'xunit' : 'console'
      }, 'Tests completed');
      
      // Save the test output to logs
      const logPath = logManager.saveLog('test', output, packageName, {
        configuration,
        filter,
        exitCode,
        command,
        testResults: testResult
      });
      
      return {
        ...testResult,
        output,
        logPath
      };
    } catch (error: any) {
      logger.error({ error: error.message, packagePath }, 'Tests failed');
      
      // Extract output from error
      output = (error.stdout || '') + (error.stderr ? `\n${error.stderr}` : '');
      exitCode = error.code || 1;
      
      // Parse XUnit files for test results
      const xunitResults = this.parseXunitFiles(xunitPath, swiftTestingXunitPath, output);
      
      // Use XUnit results if available
      if (xunitResults) {
        testResult = { ...testResult, ...xunitResults };
      } else {
        // Fallback to console parsing if XUnit fails
        const parsedResults = this.parseTestOutput(output);
        testResult = { ...testResult, ...parsedResults };
      }
      
      // Clean up XUnit files
      this.cleanupXunitFiles(xunitPath, swiftTestingXunitPath);
      
      // Parse errors using unified xcbeautify parser
      const parsed = parseXcbeautifyOutput(output);
      
      // Save the test output to logs
      const logPath = logManager.saveLog('test', output, packageName, {
        configuration,
        filter,
        exitCode,
        command,
        testResults: testResult,
        errors: parsed.errors.length > 0 ? parsed.errors : undefined,
        warnings: parsed.warnings.length > 0 ? parsed.warnings : undefined
      });
      
      return {
        ...testResult,
        success: false,
        output,
        errors: parsed.errors.length > 0 ? parsed.errors : undefined,
        warnings: parsed.warnings.length > 0 ? parsed.warnings : undefined,
        logPath
      };
    }
  }

  /**
   * Parse test output from console
   */
  private parseTestOutput(output: string): { passed?: number; failed?: number; failingTests?: Array<{ identifier: string; reason: string }> } {
    const result: { passed?: number; failed?: number; failingTests?: Array<{ identifier: string; reason: string }> } = {};
    
    // Parse test counts
    const counts = this.parseTestCounts(output);
    if (counts) {
      result.passed = counts.passed;
      result.failed = counts.failed;
    }
    
    // Parse failing tests
    const failingTests = this.parseFailingTests(output);
    if (failingTests.length > 0) {
      result.failingTests = failingTests;
    }
    
    return result;
  }

  /**
   * Parse test counts from output
   */
  private parseTestCounts(output: string): { passed: number; failed: number } | null {
    // XCTest format: "Executed 1 test, with 1 failure"
    // Look for the last occurrence to get the summary
    const xcTestMatches = [...output.matchAll(/Executed (\d+) test(?:s)?, with (\d+) failure/g)];
    if (xcTestMatches.length > 0) {
      const lastMatch = xcTestMatches[xcTestMatches.length - 1];
      const totalTests = parseInt(lastMatch[1], 10);
      const failures = parseInt(lastMatch[2], 10);
      
      // If we found XCTest results with actual tests, use them
      if (totalTests > 0) {
        return {
          passed: totalTests - failures,
          failed: failures
        };
      }
    }
    
    // Swift Testing format: "✘ Test run with 1 test failed after..." or "✔ Test run with X tests passed after..."
    const swiftTestingMatch = output.match(/[✘✔] Test run with (\d+) test(?:s)? (passed|failed)/);
    if (swiftTestingMatch) {
      const testCount = parseInt(swiftTestingMatch[1], 10);
      const status = swiftTestingMatch[2];
      
      // Only use Swift Testing results if we have actual tests
      if (testCount > 0) {
        if (status === 'failed') {
          return { passed: 0, failed: testCount };
        } else {
          return { passed: testCount, failed: 0 };
        }
      }
    }
    
    return null;
  }

  /**
   * Parse failing test details from output
   */
  private parseFailingTests(output: string): Array<{ identifier: string; reason: string }> {
    const failingTests: Array<{ identifier: string; reason: string }> = [];
    
    // Parse XCTest failures
    const xcTestFailures = this.parseXCTestFailures(output);
    failingTests.push(...xcTestFailures);
    
    // Parse Swift Testing failures
    const swiftTestingFailures = this.parseSwiftTestingFailures(output);
    
    // Add Swift Testing failures, avoiding duplicates
    for (const failure of swiftTestingFailures) {
      if (!failingTests.some(t => t.identifier === failure.identifier)) {
        failingTests.push(failure);
      }
    }
    
    logger.debug({ failingTestsCount: failingTests.length, failingTests }, 'Parsed failing tests from console output');
    return failingTests;
  }

  /**
   * Parse XCTest failure details
   */
  private parseXCTestFailures(output: string): Array<{ identifier: string; reason: string }> {
    const failures: Array<{ identifier: string; reason: string }> = [];
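    // Matches lines like: Test Case '-[MyPackageTests.MyTests testExample]' failed (0.021 seconds).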
    const pattern = /Test Case '-\[(\S+)\s+(\w+)\]' failed/g;
    let match;
    
    while ((match = pattern.exec(output)) !== null) {
      const className = match[1];
      const methodName = match[2];
      const identifier = `${className}.${methodName}`;
      const reason = this.extractXCTestFailureReason(output, className, methodName);
      
      failures.push({ identifier, reason });
    }
    
    return failures;
  }

  /**
   * Extract failure reason for a specific XCTest
   */
  private extractXCTestFailureReason(output: string, className: string, testName: string): string {
    const lines = output.split('\n');
    
    // Try both formats: full class name and just test name
    const patterns = [
      `Test Case '-[${className} ${testName}]' failed`,
      `Test Case '-[${className.split('.').pop()} ${testName}]' failed`
    ];
    
    for (const pattern of patterns) {
      for (let i = 0; i < lines.length; i++) {
        if (lines[i].includes(pattern)) {
          // Check the previous line for error details
          if (i > 0) {
            const prevLine = lines[i-1];
            
            // XCTFail format: "error: ... : failed - <message>"
            if (prevLine.includes('failed -')) {
              const failedMatch = prevLine.match(/failed\s*-\s*(.+)$/);
              if (failedMatch) {
                return failedMatch[1].trim();
              }
            }
            
            // XCTAssert format: may have the full error with escaped quotes
            if (prevLine.includes('error:')) {
              // Try to extract custom message after the last dash
              const customMessageMatch = prevLine.match(/\s-\s([^-]+)$/);
              if (customMessageMatch) {
                return customMessageMatch[1].trim();
              }
              
              // Try to extract the assertion type
              if (prevLine.includes('XCTAssertEqual failed')) {
                // Clean up the XCTAssertEqual format
                const assertMatch = prevLine.match(/XCTAssertEqual failed:.*?-\s*(.+)$/);
                if (assertMatch) {
                  return assertMatch[1].trim();
                }
                // If no custom message, return a generic one
                return 'Values are not equal';
              }
              
              // Generic error format: extract everything after "error: ... :"
              const errorMatch = prevLine.match(/error:\s*[^:]+:\s*(.+)$/);
              if (errorMatch) {
                let reason = errorMatch[1].trim();
                // Clean up escaped quotes and format
                reason = reason.replace(/\\"/g, '"');
                // Remove the redundant class/method prefix if present
                reason = reason.replace(new RegExp(`^-?\\[${className.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')}[^\\]]*\\]\\s*:\\s*`, 'i'), '');
                return reason.trim();
              }
            }
          }
          break;
        }
      }
    }
    
    return 'Test failed';
  }

  /**
   * Parse Swift Testing failure details
   */
  private parseSwiftTestingFailures(output: string): Array<{ identifier: string; reason: string }> {
    const failures: Array<{ identifier: string; reason: string }> = [];
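    // Matches lines like: ✘ Test testFailingTest() recorded an issue at File.swift:12:5: ...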
    const pattern = /✘ Test (\w+)\(\) (?:failed|recorded an issue)/g;
    let match;
    
    // Try to find the suite name from the output
    let suiteName: string | null = null;
    const suiteMatch = output.match(/◇ Suite (\w+) started\./);
    if (suiteMatch) {
      suiteName = suiteMatch[1];
    }
    
    while ((match = pattern.exec(output)) !== null) {
      const testName = match[1];
      
      // Build identifier with module.suite.test format to match XCTest
      let identifier = testName;
      const issuePattern = new RegExp(`✘ Test ${testName}\\(\\) recorded an issue at (\\w+)\\.swift`, 'm');
      const issueMatch = output.match(issuePattern);
      if (issueMatch) {
        const fileName = issueMatch[1];
        // If we have a suite name, use module.suite.test format
        // Otherwise fall back to module.test
        if (suiteName) {
          identifier = `${fileName}.${suiteName}.${testName}`;
        } else {
          identifier = `${fileName}.${testName}`;
        }
      }
      
      const reason = this.extractSwiftTestingFailureReason(output, testName);
      
      failures.push({ identifier, reason });
    }
    
    return failures;
  }

  /**
   * Extract failure reason for a specific Swift test
   */
  private extractSwiftTestingFailureReason(output: string, testName: string): string {
    const lines = output.split('\n');
    
    for (let i = 0; i < lines.length; i++) {
      const line = lines[i];
      
      if (line.includes(`✘ Test ${testName}() recorded an issue`)) {
        // Extract the expectation failure message from the same line
        // Format: "✘ Test testFailingTest() recorded an issue at TestSwiftPackageSwiftTestingTests.swift:12:5: Expectation failed: 1 == 2"
        const issueMatch = line.match(/recorded an issue at .*?:\d+:\d+:\s*(.+)$/);
        if (issueMatch) {
          let reason = issueMatch[1];
          
          // Check if there's a message on the following lines (marked with ↳)
          // Collect all lines between ↳ and the next ✘ marker
          const messageLines: string[] = [];
          let inMessage = false;
          
          for (let j = i + 1; j < lines.length && j < i + 20; j++) {
            const nextLine = lines[j];
            
            // Stop when we hit the next test marker
            if (nextLine.includes('✘')) {
              break;
            }
            
            // Start capturing after we see ↳ (but skip comment lines)
            if (nextLine.includes('↳')) {
              if (!nextLine.includes('//')) {
                const messageMatch = nextLine.match(/↳\s*(.+)$/);
                if (messageMatch) {
                  messageLines.push(messageMatch[1].trim());
                  inMessage = true;
                }
              }
            } else if (inMessage && nextLine.trim()) {
              // Capture continuation lines (indented lines without ↳)
              messageLines.push(nextLine.trim());
            }
          }
          
          // If we found message lines, append them to the reason
          if (messageLines.length > 0) {
            reason = `${reason} - ${messageLines.join(' ')}`;
          }
          
          return reason;
        }
        // Fallback to simpler pattern
        const simpleMatch = line.match(/recorded an issue.*?:\s*(.+)$/);
        if (simpleMatch) {
          return simpleMatch[1];
        }
        break;
      } else if (line.includes(`✘ Test ${testName}() failed`)) {
        // Check if there was an issue line before this
        if (i > 0 && lines[i-1].includes('recorded an issue')) {
          const issueMatch = lines[i-1].match(/recorded an issue.*?:\d+:\d+:\s*(.+)$/);
          if (issueMatch) {
            return issueMatch[1];
          }
        }
        break;
      }
    }
    
    return 'Test failed';
  }
  
  /**
   * Parse XUnit files from both XCTest and Swift Testing
   */
  private parseXunitFiles(xunitPath: string, swiftTestingPath: string, consoleOutput: string): {
    passed: number;
    failed: number;
    failingTests?: Array<{ identifier: string; reason: string }>;
  } | null {
    try {
      const parser = new XMLParser({
        ignoreAttributes: false,
        attributeNamePrefix: '@_'
      });
      
      let totalPassed = 0;
      let totalFailed = 0;
      const allFailingTests: Array<{ identifier: string; reason: string }> = [];
      
      // Parse XCTest XUnit file
      if (existsSync(xunitPath)) {
        const xcTestXml = readFileSync(xunitPath, 'utf8');
        const xcTestResult = parser.parse(xcTestXml);
        const xcTestSuite = xcTestResult.testsuites?.testsuite;
        
        if (xcTestSuite && xcTestSuite['@_tests']) {
          const totalTests = parseInt(xcTestSuite['@_tests'], 10);
          const failures = parseInt(xcTestSuite['@_failures'] || '0', 10);
          
          if (totalTests > 0) {
            totalPassed += totalTests - failures;
            totalFailed += failures;
            
            // Extract failing test identifiers (but not reasons - they're just "failed")
            const testcases = Array.isArray(xcTestSuite.testcase) 
              ? xcTestSuite.testcase 
              : xcTestSuite.testcase ? [xcTestSuite.testcase] : [];
            
            for (const testcase of testcases) {
              if (testcase && testcase.failure) {
                const className = testcase['@_classname'] || '';
                const testName = testcase['@_name'] || '';
                const identifier = `${className}.${testName}`;
                
                // Extract reason from console output
                const reason = this.extractXCTestFailureReason(consoleOutput, className, testName);
                allFailingTests.push({ identifier, reason });
              }
            }
          }
        }
      }
      
      // Parse Swift Testing XUnit file
      if (existsSync(swiftTestingPath)) {
        const swiftTestingXml = readFileSync(swiftTestingPath, 'utf8');
        const swiftTestingResult = parser.parse(swiftTestingXml);
        const swiftTestingSuite = swiftTestingResult.testsuites?.testsuite;
        
        if (swiftTestingSuite && swiftTestingSuite['@_tests']) {
          const totalTests = parseInt(swiftTestingSuite['@_tests'], 10);
          const failures = parseInt(swiftTestingSuite['@_failures'] || '0', 10);
          
          if (totalTests > 0) {
            totalPassed += totalTests - failures;
            totalFailed += failures;
            
            // Extract failing tests with full error messages
            const testcases = Array.isArray(swiftTestingSuite.testcase) 
              ? swiftTestingSuite.testcase 
              : swiftTestingSuite.testcase ? [swiftTestingSuite.testcase] : [];
            
            for (const testcase of testcases) {
              if (testcase && testcase.failure) {
                const className = testcase['@_classname'] || '';
                const testName = testcase['@_name'] || '';
                const identifier = `${className}.${testName}`;
                
                // Swift Testing XUnit includes the full error message!
                const failureElement = testcase.failure;
                let reason = 'Test failed';
                if (typeof failureElement === 'object' && failureElement['@_message']) {
                  reason = failureElement['@_message'];
                  // Decode HTML entities
                  reason = reason
                    .replace(/&amp;/g, '&')
                    .replace(/&lt;/g, '<')
                    .replace(/&gt;/g, '>')
                    .replace(/&quot;/g, '"')
                    .replace(/&#10;/g, '\n')
                    .replace(/&#8594;/g, '→');
                  // Replace newlines with space for single-line display
                  reason = reason.replace(/\n+/g, ' ').trim();
                }
                
                allFailingTests.push({ identifier, reason });
              }
            }
          }
        }
      }
      
      // Return results if we found any tests
      if (totalPassed > 0 || totalFailed > 0) {
        logger.debug({ 
          totalPassed, 
          totalFailed, 
          failingTests: allFailingTests,
          xcTestExists: existsSync(xunitPath),
          swiftTestingExists: existsSync(swiftTestingPath)
        }, 'XUnit parsing successful');
        
        return {
          passed: totalPassed,
          failed: totalFailed,
          failingTests: allFailingTests.length > 0 ? allFailingTests : undefined
        };
      }
      
      return null;
    } catch (error: any) {
      logger.error({ error: error.message }, 'Failed to parse XUnit files');
      return null;
    }
  }
  
  /**
   * Clean up XUnit files after parsing
   */
  private cleanupXunitFiles(xunitPath: string, swiftTestingPath: string): void {
    try {
      if (existsSync(xunitPath)) {
        unlinkSync(xunitPath);
      }
      if (existsSync(swiftTestingPath)) {
        unlinkSync(swiftTestingPath);
      }
    } catch (error: any) {
      logger.debug({ error: error.message }, 'Failed to clean up XUnit files');
    }
  }
  
  /**
   * Clean Swift package build artifacts
   */
  async clean(packagePath: string): Promise<void> {
    const command = `swift package clean --package-path "${packagePath}"`;
    
    logger.debug({ command }, 'Clean command');
    
    try {
      await execAsync(command);
      logger.info({ packagePath }, 'Clean succeeded');
    } catch (error: any) {
      logger.error({ error: error.message, packagePath }, 'Clean failed');
      throw new Error(`Clean failed: ${error.message}`);
    }
  }
}
```

--------------------------------------------------------------------------------
/docs/TESTING-PHILOSOPHY.md:
--------------------------------------------------------------------------------

```markdown
# Comprehensive Testing Philosophy

> "The more your tests resemble the way your software is used, the more confidence they can give you." - Kent C. Dodds

## Table of Contents
1. [Fundamental Principles](#fundamental-principles)
2. [Testing Strategies](#testing-strategies)
3. [Property-Based Testing](#property-based-testing)
4. [Anti-Patterns to Avoid](#anti-patterns-to-avoid)
5. [Practical Guidelines for This Project](#practical-guidelines-for-this-project)
6. [Testing Decision Tree](#testing-decision-tree)
7. [Measuring Test Quality](#measuring-test-quality)
8. [Implementation Checklist](#implementation-checklist)
9. [Test Quality Principles](#test-quality-principles)
10. [Advanced Testing Patterns](#advanced-testing-patterns)
11. [Testing Anti-Patterns](#testing-anti-patterns)
12. [Architecture-Specific Testing](#architecture-specific-testing)
13. [Practical Guidelines](#practical-guidelines)
14. [Jest TypeScript Mocking Best Practices](#jest-typescript-mocking-best-practices)
15. [Troubleshooting Jest TypeScript Issues](#troubleshooting-jest-typescript-issues)
16. [References](#references)

---

## Fundamental Principles


### 1. Parse, Don't Validate - Type Safety at Boundaries

**Principle**: Transform untrusted input into domain types at system boundaries. Once parsed, data is guaranteed valid throughout the system.

#### ✅ Good Example - Parse at Boundary
```typescript
// Parse raw input into domain type at the boundary
export const bootSimulatorSchema = z.object({
  deviceId: z.string().min(1, 'Device ID is required')
});

export type BootSimulatorArgs = z.infer<typeof bootSimulatorSchema>;

class BootSimulatorTool {
  async execute(args: any) {
    // Parse once at boundary
    const validated = bootSimulatorSchema.parse(args);
    // Now 'validated' is guaranteed to be valid BootSimulatorArgs
    // No need to check deviceId again anywhere in the system
    return this.bootDevice(validated);
  }
}
```

#### ❌ Bad Example - Validate Throughout
```typescript
class BootSimulatorTool {
  async execute(args: any) {
    // Checking validity everywhere = shotgun parsing
    if (!args.deviceId) throw new Error('No device ID');
    return this.bootDevice(args);
  }
  
  private async bootDevice(args: any) {
    // Having to check again!
    if (!args.deviceId || args.deviceId.length === 0) {
      throw new Error('Invalid device ID');
    }
    // ...
  }
}
```

### 2. Domain Primitives - Rich Types Over Primitives

**Principle**: Use domain-specific types that enforce invariants at creation time.

#### ✅ Good Example - Domain Primitive
```typescript
// DeviceId can only exist if valid
class DeviceId {
  private constructor(private readonly value: string) {}
  
  static parse(input: string): DeviceId {
    if (!input || input.length === 0) {
      throw new Error('Device ID cannot be empty');
    }
    if (!input.match(/^[A-F0-9-]+$/i)) {
      throw new Error('Invalid device ID format');
    }
    return new DeviceId(input);
  }
  
  toString(): string {
    return this.value;
  }
}

// Usage - type safety throughout
async bootDevice(deviceId: DeviceId) {
  // No need to validate - DeviceId guarantees validity
  await execAsync(`xcrun simctl boot "${deviceId}"`);
}
```

#### ❌ Bad Example - Primitive Obsession
```typescript
// Strings everywhere = no guarantees
async bootDevice(deviceId: string) {
  // Have to validate everywhere
  if (!deviceId) throw new Error('Invalid device');
  // Easy to pass wrong string
  await execAsync(`xcrun simctl boot "${deviceId}"`);
}

// Easy to mix up parameters
function buildProject(projectPath: string, scheme: string, configuration: string) {
  // Oops, swapped parameters - no compile-time error!
  return build(configuration, projectPath, scheme);
}
```

### 3. Test Behavior, Not Implementation

**Principle**: Test what your code does, not how it does it.

#### ✅ Good Example - Behavior Testing
```typescript
test('boots a simulator device', async () => {
  const tool = new BootSimulatorTool();
  const result = await tool.execute({ deviceId: 'iPhone 15' });
  
  // Test the behavior/outcome
  expect(result.content[0].text).toContain('booted');
  expect(result.content[0].text).toContain('iPhone 15');
});

test('handles already booted device gracefully', async () => {
  const tool = new BootSimulatorTool();
  
  // First boot
  await tool.execute({ deviceId: 'iPhone 15' });
  
  // Second boot should handle gracefully
  const result = await tool.execute({ deviceId: 'iPhone 15' });
  expect(result.content[0].text).toContain('already booted');
});
```

#### ❌ Bad Example - Implementation Testing
```typescript
test('calls correct commands in sequence', async () => {
  const tool = new BootSimulatorTool();
  await tool.execute({ deviceId: 'test-id' });
  
  // Testing HOW it works, not WHAT it does
  expect(mockExecAsync).toHaveBeenCalledWith('xcrun simctl list devices --json');
  expect(mockExecAsync).toHaveBeenCalledWith('xcrun simctl boot "test-id"');
  expect(mockExecAsync).toHaveBeenCalledTimes(2);
  expect(mockExecAsync.mock.invocationCallOrder[0]).toBeLessThan(mockExecAsync.mock.invocationCallOrder[1]);
});
```

## Testing Strategies

### The Testing Trophy (Not Pyramid) - Modern Approach

Based on Kent C. Dodds' philosophy: **"Write tests. Not too many. Mostly integration."**

```
       /\
      /e2e\      <- 10%: Critical user paths
     /------\
    /  integ \   <- 60%: Component interactions (THE FOCUS)
   /----------\
  /    unit    \ <- 25%: Complex logic, algorithms
 /--------------\
/     static     \ <- 5%: TypeScript, ESLint
```

**Why Trophy Over Pyramid**: 
- Integration tests provide the best confidence-to-effort ratio
- Modern tools make integration tests fast
- Unit tests often test implementation details
- "The more your tests resemble the way your software is used, the more confidence they can give you"

### When to Use Each Test Type

#### Static Testing (TypeScript, ESLint)
- **Use for**: Type safety, code style, obvious errors
- **Example**: TypeScript ensuring correct function signatures (see the sketch below)
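
With the `DeviceId` domain primitive from above, the compiler rejects raw strings before any test runs (a minimal sketch):

```typescript
// Caught by tsc alone; no test required
async function bootDevice(deviceId: DeviceId): Promise<void> { /* ... */ }

bootDevice(DeviceId.parse('ABC-123')); // compiles

// @ts-expect-error - a raw string is not a DeviceId
bootDevice('ABC-123');
```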

#### Unit Tests - Solitary
- **Use for**: Pure functions, complex algorithms, data transformations
- **Mock**: All dependencies
- **Example**: Testing a sorting algorithm, parsing logic (see the sketch below)
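
A solitary test exercises pure logic with plain values and no mocks at all (a sketch, reusing the `parseDeviceId` behavior described later in this document):

```typescript
test('parses a UUID-format device ID', () => {
  const result = parseDeviceId('550e8400-e29b-41d4-a716-446655440000');
  expect(result).toEqual({
    type: 'uuid',
    value: '550e8400-e29b-41d4-a716-446655440000'
  });
});

test('returns null for empty input', () => {
  expect(parseDeviceId('')).toBeNull();
});
```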

#### Unit Tests - Sociable (Kent Beck's Original Approach)
- **Use for**: Testing small units with their real collaborators
- **Mock**: Only awkward dependencies (network, filesystem)
- **Example**: Testing a service with its real validator

#### ✅ Good Sociable Unit Test
```typescript
test('XcodeProject builds with real configuration', async () => {
  // Use real Configuration and ProjectParser
  const config = new Configuration({ scheme: 'MyApp' });
  const parser = new ProjectParser();
  const project = new XcodeProject('path/to/project', config, parser);
  
  // Only mock the subprocess boundary
  mockExecAsync.mockResolvedValue({ stdout: 'Build succeeded' });
  
  const result = await project.build();
  expect(result.success).toBe(true);
});
```

#### Integration Tests - Narrow (Recommended)
- **Use for**: Testing specific integration points
- **Mock**: External boundaries only (subprocess, network, filesystem)
- **Focus**: Data flow between components

#### ✅ Good Narrow Integration Test
```typescript
test('device information flows correctly through tool chain', async () => {
  // Mock only external boundary
  mockExecAsync.mockResolvedValue({
    stdout: JSON.stringify({ devices: deviceList })
  });
  
  // Test real component interaction
  const tool = new BootSimulatorTool(); // Uses real Devices, real SimulatorDevice
  const result = await tool.execute({ deviceId: 'iPhone 15' });
  
  // Verify outcome, not implementation
  expect(result.content[0].text).toContain('iPhone 15');
});
```

#### Integration Tests - Broad (Use Sparingly)
- **Use for**: Critical paths that must work
- **Mock**: Nothing - use real services
- **Also called**: E2E tests, system tests

#### End-to-End Tests
- **Use for**: Critical user journeys, smoke tests
- **Mock**: Nothing
- **Example**: Actually booting a real simulator

### Contract Testing - API Boundaries

**When to use**: When you have separate services/modules that communicate

#### Consumer-Driven Contract Example
```typescript
// Consumer defines what it needs
const consumerContract = {
  getDevice: {
    request: { deviceId: 'string' },
    response: { 
      id: 'string',
      name: 'string',
      state: "'Booted' | 'Shutdown'"
    }
  }
};

// Provider verifies it can fulfill the contract
test('Devices service fulfills consumer contract', async () => {
  const device = await devices.find('test-id');
  expect(device).toMatchObject({
    id: expect.any(String),
    name: expect.any(String),
    state: expect.stringMatching(/Booted|Shutdown/)
  });
});
```

## Property-Based Testing

**Use for**: Finding edge cases, testing invariants

### Example: Testing Invariants
```typescript
import fc from 'fast-check';

test('device ID parsing is reversible', () => {
  fc.assert(
    fc.property(fc.string(), (input) => {
      try {
        const deviceId = DeviceId.parse(input);
        const serialized = deviceId.toString();
        const reparsed = DeviceId.parse(serialized);
        // Invariant: parse → toString → parse = identity
        return reparsed.toString() === serialized;
      } catch {
        // Invalid inputs should consistently fail
        expect(() => DeviceId.parse(input)).toThrow();
        return true;
      }
    })
  );
});
```

## Anti-Patterns to Avoid

### 1. Testing Private Methods
```typescript
// ❌ BAD: Testing internals
test('private parseDeviceList works', () => {
  const devices = new Devices();
  // @ts-ignore - accessing private method
  const parsed = devices.parseDeviceList(json);
  expect(parsed).toHaveLength(3);
});

// ✅ GOOD: Test through public API
test('finds devices from list', async () => {
  const devices = new Devices();
  const device = await devices.find('iPhone 15');
  expect(device).toBeDefined();
});
```

### 2. Excessive Mocking
```typescript
// ❌ BAD: Mocking everything
test('device boots', async () => {
  const mockDevice = {
    bootDevice: jest.fn(),
    open: jest.fn(),
    id: 'test',
    name: 'Test Device'
  };
  const mockDevices = {
    find: jest.fn().mockResolvedValue(mockDevice)
  };
  const tool = new BootSimulatorTool(mockDevices);
  // This tests nothing real!
});

// ✅ GOOD: Minimal mocking
test('device boots', async () => {
  mockExecAsync.mockResolvedValue({ stdout: '' });
  const tool = new BootSimulatorTool(); // Real components
  await tool.execute({ deviceId: 'iPhone 15' });
  // Tests actual integration
});
```

### 3. Snapshot Testing Without Thought
```typescript
// ❌ BAD: Meaningless snapshot
test('renders correctly', () => {
  const result = tool.execute(args);
  expect(result).toMatchSnapshot();
  // What are we actually testing?
});

// ✅ GOOD: Specific assertions
test('returns success message with device name', async () => {
  const result = await tool.execute({ deviceId: 'iPhone 15' });
  expect(result.content[0].text).toContain('Successfully booted');
  expect(result.content[0].text).toContain('iPhone 15');
});
```

## Practical Guidelines for This Project

### 1. Test Categorization

**Keep as E2E (10%)**:
- Critical paths: build → run → test cycle
- Simulator boot/shutdown with real devices
- Actual Xcode project compilation

**Convert to Integration (60%)**:
- Tool composition tests (Tool → Service → Component)
- Data flow tests
- Error propagation tests

**Convert to Unit (25%)**:
- Validation logic
- Parsing functions
- Error message formatting
- Configuration merging

### 2. Where to Mock

**Always Mock**:
- `execAsync` / `execSync` - subprocess calls (see the sketch after these lists)
- File system operations
- Network requests
- Time-dependent operations

**Never Mock**:
- Your own domain objects
- Simple data transformations
- Validation logic
- Pure functions
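
A minimal sketch of mocking only the subprocess boundary. It assumes a thin hypothetical wrapper module `src/utils/exec` around `child_process`, and uses the `Devices` domain object from the examples above:

```typescript
import { jest } from '@jest/globals';
import { execAsync } from '../utils/exec'; // hypothetical wrapper over child_process

jest.mock('../utils/exec');
const mockExecAsync = execAsync as jest.MockedFunction<typeof execAsync>;

test('finds no devices when the simctl listing is empty', async () => {
  // Only the boundary is faked; everything above it is real
  mockExecAsync.mockResolvedValue({ stdout: '{"devices": {}}', stderr: '' });

  const devices = new Devices(); // real domain object (never mocked)
  const device = await devices.find('iPhone 15');

  expect(device).toBeUndefined(); // nothing in the faked listing
});
```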

### 3. MCP Controller Testing Guidelines

#### What Controllers Are
In our MCP server architecture, Controllers are the presentation layer that:
- Define MCP tool metadata (name, description, input schema)
- Orchestrate the flow: validate input → call use case → format output
- Handle error presentation with consistent formatting

#### Unit Tests for Controllers - ONLY Test the Contract
Controller unit tests should ONLY verify:
1. **MCP Tool Metadata**: Name, description, and schema definition
2. **Error Formatting**: How errors are presented to users (❌ prefix, etc.)
3. **Success Formatting**: How success is presented (✅ prefix, etc.)

**DO NOT TEST**:
- How the controller calls the use case (implementation detail)
- What parameters are passed to the use case
- Whether the use case was called

#### ✅ Good Controller Unit Test
```typescript
describe('BuildXcodeController', () => {
  function createSUT() {
    // Mock only the use case boundary; return the mock for assertions
    const mockExecute = jest.fn<BuildProjectUseCase['execute']>();
    const mockUseCase = { execute: mockExecute } as BuildProjectUseCase;
    const mockPresenter = {} as BuildXcodePresenter;
    const sut = new BuildXcodeController(mockUseCase, mockPresenter);
    return { sut, mockExecute };
  }

  it('should define correct tool metadata', () => {
    const { sut } = createSUT();
    expect(sut.name).toBe('build_xcode');
    expect(sut.description).toBe('Build an Xcode project or workspace');
  });

  it('should format success with ✅ emoji', async () => {
    const { sut, mockExecute } = createSUT();
    mockExecute.mockResolvedValue(BuildResult.succeeded(...));

    const result = await sut.execute({...});

    // Test WHAT the user sees, not HOW it's produced
    expect(result.content[0].text).toContain('✅');
    expect(result.content[0].text).toContain('Build succeeded');
  });
});
```

#### ❌ Bad Controller Unit Test
```typescript
it('should call use case with correct parameters', async () => {
  const { sut, mockExecute } = createSUT();

  await sut.execute({ platform: 'iOS' });

  // Testing HOW, not WHAT - implementation detail!
  expect(mockExecute).toHaveBeenCalledWith(
    expect.objectContaining({ platform: Platform.iOS })
  );
});
```

#### Integration Tests for Controllers - Test Behavior
Integration tests should verify the actual behavior with real components:
```typescript
it('should filter simulators by name', async () => {
  // Mock only external boundary (shell command)
  mockExec.mockImplementation(...);

  // Use real controller with real use case and repository
  const controller = ListSimulatorsControllerFactory.create();

  const result = await controller.execute({ name: 'iPhone 15' });

  // Test actual behavior
  expect(result.content[0].text).toContain('iPhone 15 Pro');
  expect(result.content[0].text).not.toContain('iPhone 14');
});
```

### 4. Test Naming Conventions

#### File Naming Pattern

**Standard**: `[ComponentName].[test-type].test.ts`

```bash
# Unit tests - test a single unit in isolation
XcbeautifyOutputParser.unit.test.ts
DeviceValidator.unit.test.ts
BuildCommandBuilder.unit.test.ts

# Integration tests - test components working together
BuildWorkflow.integration.test.ts
DeviceManagement.integration.test.ts

# E2E tests - test complete user scenarios
BuildAndRun.e2e.test.ts
SimulatorLifecycle.e2e.test.ts

# Contract tests - verify API contracts
DeviceService.contract.test.ts
```

**Why include test type in filename?**
- Immediately clear what type of test without opening file
- Can run specific test types: `jest --testPathPattern='\.unit\.test\.ts$'` (see the config sketch below)
- Different test types have different performance characteristics
- Helps maintain proper test pyramid/trophy distribution
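
One way to wire this up is Jest's `projects` option, so each test type is independently runnable. A minimal sketch as a `jest.config.ts` (a plain `jest.config.js` works too; a TypeScript config needs `ts-node` installed):

```typescript
import type { Config } from 'jest';

const config: Config = {
  projects: [
    { displayName: 'unit', testMatch: ['<rootDir>/src/**/*.unit.test.ts'] },
    { displayName: 'integration', testMatch: ['<rootDir>/src/**/*.integration.test.ts'] },
    { displayName: 'e2e', testMatch: ['<rootDir>/src/**/*.e2e.test.ts'] }
  ]
};

export default config;
```

`jest --selectProjects unit` then runs just that slice.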

#### Directory Structure

```
src/__tests__/
├── unit/
│   ├── domain/           # Pure business logic
│   ├── application/      # Use cases and orchestration
│   ├── infrastructure/   # External adapters
│   └── utils/           # Helper functions
├── integration/         # Component interactions
├── e2e/                # Full system tests
└── contracts/          # API contract tests
```

#### Test Suite Organization

```typescript
// Mirror your source code structure in describe blocks
describe('XcbeautifyOutputParser', () => {  // Class/module name
  describe('parseBuildOutput', () => {      // Method name
    describe('when parsing errors', () => { // Scenario
      it('should extract file information from error line', () => {});
      it('should handle errors without file paths', () => {});
    });
    
    describe('when parsing warnings', () => {
      it('should extract warning details', () => {});
    });
  });
});
```

#### Individual Test Naming

**Pattern**: `should [expected behavior] when [condition]`

```typescript
// ✅ GOOD: Clear behavior and condition
it('should parse error with file information when line contains file path', () => {});
it('should return empty array when input is empty', () => {});
it('should throw ValidationError when device ID is invalid', () => {});
it('should deduplicate identical errors when parsing multi-arch output', () => {});

// ❌ BAD: Vague or implementation-focused
it('works', () => {});
it('parses correctly', () => {});
it('calls parseError method', () => {});
it('test case 1', () => {});
```

**Alternative patterns for specific scenarios**:

```typescript
// Given-When-Then (BDD style)
it('given a shutdown device, when boot is called, then device should be in booted state', () => {});

// Error scenarios - be specific about the error
it('should throw InvalidPathError when project path does not exist', () => {});
it('should throw TimeoutError when device does not respond within 5 seconds', () => {});

// Edge cases - explain what makes it an edge case
it('should handle empty array without throwing', () => {});
it('should process 10,000 items without memory overflow', () => {});
it('should correctly parse Unicode characters in file paths', () => {});

// Regression tests - reference the issue
it('should not crash when device name contains spaces (fixes #123)', () => {});
```

#### Mock and Test Data Naming

```typescript
// Use descriptive prefixes
const mockDeviceRepository = { find: jest.fn() };
const stubLogger = { log: () => {} };
const spyOnExecute = jest.spyOn(executor, 'execute');
const fakeDevice = { id: '123', name: 'Test Device' };

// Test data should describe the scenario
const validConfiguration = createConfiguration({ valid: true });
const invalidConfiguration = createConfiguration({ valid: false });
const minimalDevice = createDevice();  // defaults only
const bootedDevice = createBootedDevice();
const deviceWithError = createDeviceWithError('Boot failed');

// ❌ Avoid generic names
const data = {};      // What kind of data?
const obj = {};       // What object?
const mock1 = {};     // Mock of what?
const testDevice = {}; // All devices in tests are test devices
```

## Testing Decision Tree

```
Is it a pure function?
  Yes → Unit test with examples
  No ↓

Does it integrate with external systems?
  Yes → Mock external boundary, integration test
  No ↓

Is it orchestrating multiple components?
  Yes → Integration test with real components
  No ↓

Is it a critical user path?
  Yes → E2E test
  No ↓

Is the logic complex?
  Yes → Unit test with sociable approach
  No → Maybe doesn't need a test
```

## Measuring Test Quality

### Good Tests Are:
1. **Fast**: Run in milliseconds, not seconds
2. **Deterministic**: Same input → same output
3. **Isolated**: Can run in parallel
4. **Descriptive**: Clear what failed and why
5. **Maintainable**: Don't break on refactoring

### Red Flags:
- Tests that break when refactoring
- Tests with lots of mocks
- Tests that are hard to understand
- Tests that are slow
- Tests that are flaky

## Implementation Checklist

- [ ] Parse inputs at system boundaries using domain validation
- [ ] Create domain primitives for core concepts (DeviceId, BundleId, etc.)
- [ ] Remove integration tests that test implementation
- [ ] Convert E2E tests to integration tests where possible
- [ ] Focus on behavior, not implementation
- [ ] Use Kent C. Dodds' Testing Trophy approach
- [ ] Mock only at system boundaries
- [ ] Add property-based tests for invariants
- [ ] Use contract tests for module boundaries

## Test Quality Principles

### SUT (System Under Test) Pattern

**Principle**: Clearly identify and isolate the system being tested. Use factory methods to create the SUT and test data, making tests more maintainable and resistant to implementation changes.

#### ✅ Good Example - SUT with Factory Methods
```typescript
describe('XcbeautifyOutputParser', () => {
  // Factory method for creating the SUT
  function createSUT(): IOutputParser {
    return new XcbeautifyOutputParser();
  }

  // Factory methods for test data
  function createErrorWithFileInfo(
    file = '/Users/project/App.swift',
    line = 10,
    column = 15,
    message = 'cannot find type'
  ) {
    return `❌ ${file}:${line}:${column}: error: ${message}`;
  }

  describe('parseBuildOutput', () => {
    let sut: IOutputParser;

    beforeEach(() => {
      sut = createSUT(); // Easy to modify creation logic
    });

    it('should parse errors with file information', () => {
      // Arrange - using factory methods
      const output = createErrorWithFileInfo(
        '/Users/project/Main.swift', 
        25, 
        8, 
        'missing return'
      );

      // Act - clear what's being tested
      const result = sut.parseBuildOutput(output);

      // Assert - focused on behavior
      expect(result.errors[0]).toMatchObject({
        file: '/Users/project/Main.swift',
        line: 25,
        column: 8
      });
    });
  });
});
```

#### ❌ Bad Example - Direct Instantiation
```typescript
describe('XcbeautifyOutputParser', () => {
  let parser: XcbeautifyOutputParser;

  beforeEach(() => {
    // Hard to change if constructor changes
    parser = new XcbeautifyOutputParser();
  });

  it('parses errors', () => {
    // Inline test data - hard to reuse
    const output = '❌ /Users/project/App.swift:10:15: error: cannot find type';
    
    // Not clear what's being tested
    const result = parser.parseBuildOutput(output);
    
    // Brittle assertions
    expect(result.errors[0].file).toBe('/Users/project/App.swift');
    expect(result.errors[0].line).toBe(10);
  });
});
```

#### Benefits of SUT Pattern

1. **Maintainability**: Change SUT creation in one place
2. **Clarity**: Clear distinction between SUT and collaborators
3. **Flexibility**: Easy to add constructor parameters
4. **Testability**: Can return mocks/stubs from factory when needed
5. **Documentation**: Factory method name describes what variant is created

#### Factory Method Best Practices

```typescript
// 1. Default values for common cases
function createTestDevice(overrides = {}) {
  return {
    id: 'default-id',
    name: 'iPhone 15',
    state: 'Shutdown',
    ...overrides  // Easy to customize
  };
}

// 2. Descriptive factory names for specific scenarios
function createBootedDevice() {
  return createTestDevice({ state: 'Booted' });
}

function createErrorDevice() {
  return createTestDevice({ state: 'Error', error: 'Boot failed' });
}

// 3. Factory for complex objects with builders
function createParsedOutputWithErrors(errorCount = 1) {
  const errors = Array.from({ length: errorCount }, (_, i) => ({
    type: 'error' as const,
    message: `Error ${i + 1}`,
    file: `/path/file${i}.swift`,
    line: i * 10,
    column: 5
  }));

  return {
    errors,
    warnings: [],
    summary: {
      totalErrors: errorCount,
      totalWarnings: 0,
      buildSucceeded: false
    }
  };
}
```

### FIRST Principles

Good tests follow the FIRST principles:

#### **F - Fast**
Tests should execute in milliseconds, not seconds. A test suite with 2000 tests at 200ms each takes over 6.5 minutes (400 seconds) - unacceptable for rapid feedback.

```typescript
// ✅ FAST: In-memory, no I/O
test('validates device ID format', () => {
  expect(() => DeviceId.parse('')).toThrow();
  expect(() => DeviceId.parse('valid-id')).not.toThrow();
}); // ~1ms

// ❌ SLOW: Network calls, file I/O
test('fetches device from API', async () => {
  const device = await fetch('https://api.example.com/devices/123');
  expect(device.name).toBe('iPhone');
}); // ~500ms
```

#### **I - Independent/Isolated**
Tests should not depend on each other or execution order.

```typescript
// ❌ BAD: Tests depend on shared state
let counter = 0;
test('first test', () => {
  counter++;
  expect(counter).toBe(1);
});
test('second test', () => {
  expect(counter).toBe(1); // Fails if run alone!
});

// ✅ GOOD: Each test is independent
test('first test', () => {
  const counter = createCounter();
  counter.increment();
  expect(counter.value).toBe(1);
});
```

#### **R - Repeatable**
Same input → same output, every time.

```typescript
// ❌ BAD: Time-dependent
test('checks if weekend', () => {
  const isWeekend = checkWeekend();
  expect(isWeekend).toBe(true); // Fails Monday-Friday!
});

// ✅ GOOD: Deterministic
test('checks if weekend', () => {
  const saturday = new Date('2024-01-06');
  const isWeekend = checkWeekend(saturday);
  expect(isWeekend).toBe(true);
});
```

#### **S - Self-Validating**
Tests must clearly pass or fail without human interpretation.

```typescript
// ❌ BAD: Requires manual verification
test('logs output correctly', () => {
  console.log(generateReport());
  // Developer must manually check console output
});

// ✅ GOOD: Automated assertion
test('generates correct report', () => {
  const report = generateReport();
  expect(report).toContain('Total: 100');
  expect(report).toMatch(/Date: \d{4}-\d{2}-\d{2}/);
});
```

#### **T - Timely**
Write tests alongside code, not after.

```typescript
// TDD Cycle: Red → Green → Refactor
// 1. Write failing test first
test('parses valid device ID', () => {
  const id = DeviceId.parse('ABC-123');
  expect(id.toString()).toBe('ABC-123');
});

// 2. Implement minimal code to pass
// 3. Refactor while tests stay green
```
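
Step 2 can be as small as the sketch below; validation arrives only when the next failing test demands it:

```typescript
// Minimal implementation that makes the first test pass
class DeviceId {
  private constructor(private readonly value: string) {}

  static parse(input: string): DeviceId {
    return new DeviceId(input); // just enough to go green
  }

  toString(): string {
    return this.value;
  }
}
```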

### DRY vs DAMP in Tests

**DRY (Don't Repeat Yourself)**: Avoid duplication in production code.
**DAMP (Descriptive And Meaningful Phrases)**: Prioritize readability in test code.

#### When to Choose DAMP Over DRY

```typescript
// ❌ Too DRY - Hard to understand test failures
const testCases = [
  ['input1', 'output1'],
  ['input2', 'output2'],
  ['input3', 'output3']
];

testCases.forEach(([input, output]) => {
  test(`test ${input}`, () => {
    expect(process(input)).toBe(output);
  });
});

// ✅ DAMP - Clear and descriptive
test('handles empty string input', () => {
  const errorSpy = jest.spyOn(console, 'error').mockImplementation(() => {});
  const result = parseDeviceId('');
  expect(result).toBeNull();
  expect(errorSpy).toHaveBeenCalledWith('Device ID cannot be empty');
});

test('handles valid UUID format', () => {
  const result = parseDeviceId('550e8400-e29b-41d4-a716-446655440000');
  expect(result).toEqual({
    type: 'uuid',
    value: '550e8400-e29b-41d4-a716-446655440000'
  });
});

test('handles device name format', () => {
  const result = parseDeviceId('iPhone 15 Pro');
  expect(result).toEqual({
    type: 'name',
    value: 'iPhone 15 Pro'
  });
});
```

**Key Insight**: "DAMP not DRY" means tests should be easy to understand even if that means some code duplication. When a test fails, the reason should be immediately obvious.

#### When to Use beforeEach vs DAMP

**Use beforeEach for:**
- Technical housekeeping that doesn't affect test understanding (e.g., `jest.clearAllMocks()`)
- Mock resets and cleanup operations
- Setting up test infrastructure that's identical across all tests

```typescript
// ✅ GOOD - beforeEach for technical housekeeping
describe('ProjectPath', () => {
  beforeEach(() => {
    jest.clearAllMocks(); // Technical cleanup, not test logic
  });
  
  it('should validate path exists', () => {
    // Test-specific setup visible here
    mockExistsSync.mockReturnValue(true);
    const result = ProjectPath.create('/path/to/project.xcodeproj');
    expect(result).toBeDefined();
  });
});

// ❌ BAD - Adding mockClear in every test
describe('ProjectPath', () => {
  it('should validate path exists', () => {
    mockExistsSync.mockClear(); // Repetitive technical detail
    mockExistsSync.mockReturnValue(true);
    const result = ProjectPath.create('/path/to/project.xcodeproj');
    expect(result).toBeDefined();
  });
  
  it('should reject invalid path', () => {
    mockExistsSync.mockClear(); // Same line in every test!
    mockExistsSync.mockReturnValue(false);
    // ...
  });
});
```

**Apply DAMP (avoid beforeEach) for:**
- Test data setup that varies between tests
- SUT (System Under Test) creation
- Mock configurations specific to test scenarios
- Anything that helps understand what the test is doing

#### SUT Creation Pattern - DAMP Over DRY

For simple SUTs (System Under Test), create them directly in each test for maximum clarity:

```typescript
// ✅ GOOD - Create SUT in each test for clarity
describe('XcbeautifyOutputParser', () => {
  function createSUT(): IOutputParser {
    return new XcbeautifyOutputParser();
  }

  it('should parse error with file information', () => {
    // Everything the test needs is visible here
    const sut = createSUT();
    const output = '❌ /path/file.swift:10:5: error message';
    
    const result = sut.parseBuildOutput(output);
    
    expect(result.issues[0]).toEqual(
      BuildIssue.error('error message', '/path/file.swift', 10, 5)
    );
  });
});

// ❌ BAD - Hidden setup in beforeEach
describe('XcbeautifyOutputParser', () => {
  let sut: IOutputParser;
  
  beforeEach(() => {
    sut = createSUT(); // Setup hidden from test
  });
  
  it('should parse error with file information', () => {
    // Have to look at beforeEach to understand setup
    const output = '❌ /path/file.swift:10:5: error message';
    const result = sut.parseBuildOutput(output);
    // ...
  });
});
```

#### SUT with Dependencies Pattern

When the SUT needs mocked dependencies, return both from the factory:

```typescript
// ✅ GOOD - Factory returns SUT with its mocks
describe('XcodePlatformValidator', () => {
  function createSUT() {
    const mockExecute = jest.fn();
    const mockExecutor: ICommandExecutor = { execute: mockExecute };
    const sut = new XcodePlatformValidator(mockExecutor);
    return { sut, mockExecute };
  }

  it('should validate platform support', async () => {
    // Everything needed is created together
    const { sut, mockExecute } = createSUT();
    
    mockExecute.mockResolvedValue({ exitCode: 0, stdout: '', stderr: '' });
    
    await sut.validate('/path', false, 'MyScheme', Platform.iOS);
    
    expect(mockExecute).toHaveBeenCalled();
  });
});

// ❌ BAD - Separate mock creation leads to duplication
describe('XcodePlatformValidator', () => {
  it('should validate platform support', async () => {
    const mockExecute = jest.fn();
    const mockExecutor = { execute: mockExecute };
    const sut = new XcodePlatformValidator(mockExecutor);
    // ... rest of test
  });
  
  it('should handle errors', async () => {
    // Duplicating mock setup
    const mockExecute = jest.fn();
    const mockExecutor = { execute: mockExecute };
    const sut = new XcodePlatformValidator(mockExecutor);
    // ... rest of test
  });
});
```

**Why this approach?**
1. **Complete visibility** - All setup is visible in the test
2. **Self-contained tests** - Each test is independent and complete
3. **Easy debugging** - When a test fails, everything is right there
4. **Follows AAA pattern** - Arrange is explicit in each test

## Advanced Testing Patterns

### Mutation Testing - Test Your Tests

Mutation testing injects faults into your code to verify that your tests catch them. It literally "tests your tests".

#### How It Works
1. Make small changes (mutations) to your code
2. Run tests against mutated code  
3. Tests should fail ("kill the mutant")
4. If tests pass, you have inadequate coverage

#### Example Mutations
```typescript
// Original code
function isAdult(age: number): boolean {
  return age >= 18;
}

// Mutations:
// 1. Change >= to >
return age > 18;  // Tests should catch this

// 2. Change 18 to 17
return age >= 17; // Tests should catch this

// 3. Change >= to <=
return age <= 18; // Tests should catch this
```
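
Tests that kill all three mutants must pin the boundary down exactly (a minimal sketch):

```typescript
test('18 is an adult (kills the >= to > mutant)', () => {
  expect(isAdult(18)).toBe(true);
});

test('17 is not an adult (kills the 18 to 17 mutant)', () => {
  expect(isAdult(17)).toBe(false);
});

test('19 is an adult (kills the >= to <= mutant)', () => {
  expect(isAdult(19)).toBe(true);
});
```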

#### When to Use
- Mission-critical code
- Security-sensitive functions
- Core business logic
- After major refactoring

### Approval Testing (Golden Master)

Capture existing behavior as a "golden master" and detect any changes.

#### When to Use Approval Tests

```typescript
// ✅ GOOD: Complex output that's hard to assert
test('generates PDF report', async () => {
  const pdf = await generateReport(data);
  expect(pdf).toMatchSnapshot();
  // or, with an approval-testing library's matcher:
  expect(pdf).toMatchApprovedFile('report.approved.pdf');
});

// ✅ GOOD: Legacy code characterization
test('existing calculator behavior', () => {
  const results = [];
  for (let i = 0; i < 100; i++) {
    results.push(legacyCalculator.compute(i));
  }
  expect(results).toMatchSnapshot();
});

// ❌ BAD: Simple values
test('adds two numbers', () => {
  expect(add(2, 2)).toMatchSnapshot(); // Just use toBe(4)!
});
```

#### Key Benefits
- Quick tests for legacy code
- Handles complex outputs (PDFs, images, reports)
- Lets reviewers see changes clearly
- Enables safe refactoring

### Fuzz Testing

Automatically generate random, invalid, or unexpected inputs to find edge cases and security vulnerabilities.

#### Example Implementation
```typescript
import fc from 'fast-check';

test('device ID parser handles any input safely', () => {
  fc.assert(
    fc.property(fc.string(), (input) => {
      // Should never throw unhandled exception
      try {
        const result = parseDeviceId(input);
        // If it returns a result, it should be valid
        if (result) {
          expect(result.id).toBeTruthy();
          expect(result.type).toMatch(/uuid|name/);
        }
        return true;
      } catch (e) {
        // Should only throw expected errors
        expect(e.message).toMatch(/Invalid device ID|Empty input/);
        return true;
      }
    })
  );
});
```

#### What Fuzzing Finds
- Buffer overflows
- SQL injection vulnerabilities
- XSS vulnerabilities  
- Race conditions
- Memory leaks
- Unexpected crashes

### Testing Async Code

#### Common Pitfalls and Solutions

```typescript
// ❌ BAD: Not waiting for promise
test('async operation', () => {
  doAsyncThing(); // Test passes before this completes!
  expect(result).toBe(true);
});

// ❌ BAD: Mixing callbacks and promises
test('async operation', (done) => {
  doAsyncThing().then(result => {
    expect(result).toBe(true);
    done(); // Easy to forget!
  });
});

// ✅ GOOD: async/await
test('async operation', async () => {
  const result = await doAsyncThing();
  expect(result).toBe(true);
});

// ✅ GOOD: Testing race conditions
test('handles concurrent requests', async () => {
  const promises = [
    fetchUser('alice'),
    fetchUser('bob'),
    fetchUser('charlie')
  ];
  
  const results = await Promise.all(promises);
  expect(results).toHaveLength(3);
  expect(new Set(results.map(r => r.id)).size).toBe(3); // All unique
});

// ✅ GOOD: Testing timeouts
test('times out after 5 seconds', async () => {
  jest.useFakeTimers();
  
  const promise = fetchWithTimeout(url, 5000);
  jest.advanceTimersByTime(5001);
  
  await expect(promise).rejects.toThrow('Timeout');
  jest.useRealTimers();
});
```

## Testing Anti-Patterns

### Common Test Smells

#### 1. Mystery Guest
```typescript
// ❌ BAD: External dependency hidden
test('processes user data', async () => {
  const result = await processUser('user-123'); // What's user-123?
  expect(result.name).toBe('Alice'); // Why Alice?
});

// ✅ GOOD: Self-contained test
test('processes user data', async () => {
  const testUser = {
    id: 'user-123',
    name: 'Alice',
    email: '[email protected]'
  };
  await createTestUser(testUser);
  
  const result = await processUser(testUser.id);
  expect(result.name).toBe(testUser.name);
});
```

#### 2. Eager Test
```typescript
// ❌ BAD: Testing too much in one test
test('user workflow', async () => {
  const user = await createUser(data);
  expect(user.id).toBeDefined();
  
  const updated = await updateUser(user.id, newData);
  expect(updated.name).toBe(newData.name);
  
  const deleted = await deleteUser(user.id);
  expect(deleted).toBe(true);
  
  const fetched = await getUser(user.id);
  expect(fetched).toBeNull();
});

// ✅ GOOD: Focused tests
test('creates user with valid data', async () => {
  const user = await createUser(validData);
  expect(user.id).toBeDefined();
  expect(user.name).toBe(validData.name);
});

test('updates existing user', async () => {
  const user = await createTestUser();
  const updated = await updateUser(user.id, { name: 'New Name' });
  expect(updated.name).toBe('New Name');
});
```

#### 3. Excessive Setup (General Fixture)
```typescript
// ❌ BAD: Setting up everything for every test
beforeEach(() => {
  createDatabase();
  seedUsers(100);
  seedProducts(500);
  seedOrders(1000);
  setupMockServers();
  initializeCache();
});

test('gets user by id', async () => {
  // Only needs one user!
  const user = await getUser('user-1');
  expect(user.name).toBe('User 1');
});

// ✅ GOOD: Minimal setup
test('gets user by id', async () => {
  const user = await createTestUser({ name: 'Test User' });
  const fetched = await getUser(user.id);
  expect(fetched.name).toBe('Test User');
});
```

#### 4. Assertion Roulette
```typescript
// ❌ BAD: Multiple assertions without context
test('processes order', () => {
  const order = processOrder(data);
  expect(order.id).toBeDefined();
  expect(order.total).toBe(100);
  expect(order.items).toHaveLength(3);
  expect(order.status).toBe('pending');
  expect(order.customer).toBeDefined();
});

// ✅ GOOD: One structural assertion whose failure output names the offending field
test('processes order into a pending state with correct totals', () => {
  const order = processOrder(data);
  
  expect(order).toMatchObject({
    id: expect.any(String),
    total: 100,
    status: 'pending',
    customer: expect.anything()
  });
  expect(order.items).toHaveLength(3);
});
```

#### 5. Test Code Duplication
```typescript
// ❌ BAD: Copying setup code
test('test 1', () => {
  const device = {
    id: 'test-id',
    name: 'iPhone',
    state: 'Booted'
  };
  // ... test logic
});

test('test 2', () => {
  const device = {
    id: 'test-id',
    name: 'iPhone',
    state: 'Booted'
  };
  // ... test logic
});

// ✅ GOOD: Extract factory function
function createTestDevice(overrides = {}) {
  return {
    id: 'test-id',
    name: 'iPhone',
    state: 'Booted',
    ...overrides
  };
}

test('test 1', () => {
  const device = createTestDevice();
  // ... test logic
});

test('test 2', () => {
  const device = createTestDevice({ state: 'Shutdown' });
  // ... test logic
});
```

## Architecture-Specific Testing

### Hexagonal Architecture (Ports & Adapters)

#### Test Boundaries
```typescript
// Domain (Hexagon Core)
class DeviceService {
  constructor(
    private deviceRepo: DeviceRepository, // Port
    private notifier: NotificationService // Port
  ) {}
  
  async bootDevice(id: string): Promise<void> {
    const device = await this.deviceRepo.find(id);
    if (!device) throw new Error('Device not found');
    
    await device.boot();
    await this.deviceRepo.save(device);
    await this.notifier.notify(`Device ${id} booted`);
  }
}

// Test at port boundary - mock adapters
test('boots device through service', async () => {
  const mockRepo = {
    find: jest.fn().mockResolvedValue(testDevice),
    save: jest.fn().mockResolvedValue(void 0)
  };
  const mockNotifier = {
    notify: jest.fn().mockResolvedValue(void 0)
  };
  
  const service = new DeviceService(mockRepo, mockNotifier);
  await service.bootDevice('test-id');
  
  expect(mockRepo.save).toHaveBeenCalledWith(
    expect.objectContaining({ state: 'Booted' })
  );
  expect(mockNotifier.notify).toHaveBeenCalled();
});

// Contract test for adapter
test('repository adapter fulfills contract', async () => {
  const repo = new MongoDeviceRepository();
  const device = await repo.find('test-id');
  
  // Verify contract shape
  expect(device).toMatchObject({
    id: expect.any(String),
    name: expect.any(String),
    boot: expect.any(Function)
  });
});
```

#### Key Benefits
- Test core logic without infrastructure
- Fast unit tests for business rules
- Contract tests ensure adapters comply
- Easy to swap implementations

### Microservices Testing

#### Testing Strategy Pyramid
```
         /\
        /e2e\       <- Cross-service journeys
       /------\
      /contract\    <- Service boundaries (Pact)
     /----------\
    /integration \  <- Within service
   /--------------\
  /     unit       \ <- Business logic
 /------------------\
```

#### Consumer-Driven Contract Testing
```typescript
// Consumer defines expectations
const deviceServiceContract = {
  'get device': {
    request: {
      method: 'GET',
      path: '/devices/123'
    },
    response: {
      status: 200,
      body: {
        id: '123',
        name: 'iPhone 15',
        state: 'Booted'
      }
    }
  }
};

// Provider verifies it can fulfill
test('device service fulfills contract', async () => {
  const response = await request(app)
    .get('/devices/123')
    .expect(200);
    
  expect(response.body).toMatchObject({
    id: expect.any(String),
    name: expect.any(String),
    state: expect.stringMatching(/Booted|Shutdown/)
  });
});
```

## Practical Guidelines

### Test Organization

#### AAA Pattern (Arrange-Act-Assert)
```typescript
test('should boot simulator when device exists', async () => {
  // Arrange
  const mockDevice = createMockDevice({ state: 'Shutdown' });
  const tool = new BootSimulatorTool();
  mockDevices.find.mockResolvedValue(mockDevice);
  
  // Act
  const result = await tool.execute({ deviceId: 'iPhone-15' });
  
  // Assert
  expect(result.success).toBe(true);
  expect(result.message).toContain('booted');
});
```

#### Given-When-Then (BDD Style)
```typescript
test('boots simulator successfully', async () => {
  // Given a shutdown simulator exists
  const device = givenAShutdownSimulator();
  
  // When I boot the simulator
  const result = await whenIBootSimulator(device.id);
  
  // Then the simulator should be booted
  thenSimulatorShouldBeBooted(result);
});
```

### Test Data Management

#### Builder Pattern
```typescript
class DeviceBuilder {
  private device = {
    id: 'default-id',
    name: 'iPhone 15',
    state: 'Shutdown',
    platform: 'iOS'
  };
  
  withId(id: string): this {
    this.device.id = id;
    return this;
  }
  
  withState(state: string): this {
    this.device.state = state;
    return this;
  }
  
  booted(): this {
    this.device.state = 'Booted';
    return this;
  }
  
  build(): Device {
    return { ...this.device };
  }
}

// Usage
const device = new DeviceBuilder()
  .withId('test-123')
  .booted()
  .build();
```

#### Object Mother Pattern
```typescript
class DeviceMother {
  static bootedIPhone(): Device {
    return {
      id: 'iphone-test',
      name: 'iPhone 15 Pro',
      state: 'Booted',
      platform: 'iOS'
    };
  }
  
  static shutdownAndroid(): Device {
    return {
      id: 'android-test',
      name: 'Pixel 8',
      state: 'Shutdown',
      platform: 'Android'
    };
  }
}

// Usage
const device = DeviceMother.bootedIPhone();
```

## Jest TypeScript Mocking Best Practices

### 1. Always Provide Explicit Type Signatures to jest.fn()

**Principle**: TypeScript requires explicit function signatures for proper type inference with mocks.

#### ❌ Bad - Causes "type never" errors
```typescript
const mockFunction = jest.fn();
mockFunction.mockResolvedValue({ success: true }); // Error: type 'never'
```

#### ✅ Good - Consistent Approach with @jest/globals
```typescript
// Always import from @jest/globals for consistency
import { describe, it, expect, jest, beforeEach } from '@jest/globals';

// Use single type parameter with function signature
const mockFunction = jest.fn<() => Promise<{ success: boolean }>>();
mockFunction.mockResolvedValue({ success: true }); // Works!

// With parameters
const mockBuildProject = jest.fn<(options: BuildOptions) => Promise<BuildResult>>();

// Multiple parameters
const mockCallback = jest.fn<(error: Error | null, data?: string) => void>();

// Optional parameters
const mockExecute = jest.fn<(command: string, options?: ExecutionOptions) => Promise<ExecutionResult>>();
```

#### Using Interface Properties
```typescript
// When mocking interface methods, use the property directly
const mockFindApp = jest.fn<IAppLocator['findApp']>();
const mockSaveLog = jest.fn<ILogManager['saveLog']>();
```

#### Factory Pattern for Mocks
```typescript
function createSUT() {
  const mockExecute = jest.fn<(command: string, options?: ExecutionOptions) => Promise<ExecutionResult>>();
  const mockExecutor: ICommandExecutor = {
    execute: mockExecute
  };
  const sut = new MyService(mockExecutor);
  return { sut, mockExecute }; // Return both for easy access in tests
}
```

**Important**: 
- Always import `jest` from `@jest/globals` for consistent type behavior
- Use single type parameter with complete function signature
- This approach avoids TypeScript errors and provides proper type inference

### 2. Handle instanceof Checks with Object.create()

**Principle**: When code uses `instanceof` checks, create mocks that pass these checks.

#### ❌ Bad - Plain object fails instanceof
```typescript
const mockXcodeProject = {
  buildProject: jest.fn()
};
// Fails: if (!(project instanceof XcodeProject))
```

#### ✅ Good - Use Object.create with prototype
```typescript
const mockBuildProject = jest.fn<(options: any) => Promise<any>>();
const mockXcodeProject = Object.create(XcodeProject.prototype);
mockXcodeProject.buildProject = mockBuildProject;
// Passes: if (project instanceof XcodeProject) ✓
```

### 3. Match Async vs Sync Return Types

**Principle**: Use the correct mock method based on function return type.

#### ❌ Bad - Mixing async/sync
```typescript
const mockSync = jest.fn<() => string>();
mockSync.mockResolvedValue('result'); // Wrong! Use mockReturnValue

const mockAsync = jest.fn<() => Promise<string>>();
mockAsync.mockReturnValue('result'); // Wrong! Use mockResolvedValue
```

#### ✅ Good - Match the return type
```typescript
// Synchronous
const mockSync = jest.fn<() => string>();
mockSync.mockReturnValue('result');

// Asynchronous
const mockAsync = jest.fn<() => Promise<string>>();
mockAsync.mockResolvedValue('result');
```

### 4. Mock Module Imports Correctly

**Principle**: Mock at module level and type the mocks properly.

```typescript
// Mock the module
jest.mock('fs', () => ({
  existsSync: jest.fn()
}));

// Import and type the mock
import { existsSync } from 'fs';
const mockExistsSync = existsSync as jest.MockedFunction<typeof existsSync>;

// Use in tests
beforeEach(() => {
  mockExistsSync.mockReturnValue(true);
});
```

### 5. Never Use Type Casting - Fix the Root Cause

**Principle**: Type casting hides problems. Fix the types properly instead.

#### ❌ Bad - Type casting
```typescript
const mockFunction = jest.fn() as any;
const mockFunction = jest.fn() as jest.Mock;
```

#### ✅ Good - Proper typing
```typescript
type BuildFunction = (path: string) => Promise<BuildResult>;
const mockBuild = jest.fn<BuildFunction>();
```

### 6. Sequential Mock Returns

```typescript
const mockExecAsync = jest.fn<(cmd: string) => Promise<{ stdout: string }>>();
mockExecAsync
  .mockResolvedValueOnce({ stdout: 'First call' })
  .mockResolvedValueOnce({ stdout: 'Second call' })
  .mockRejectedValueOnce(new Error('Third call fails'));
```

### Handling Flaky Tests

#### Identifying Flaky Tests
1. Run tests multiple times (see the probe sketch below)
2. Track failure patterns
3. Look for timing dependencies
4. Check for shared state
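
A quick flakiness probe (a sketch; `bootAndShutdown` is a placeholder for the suspect operation):

```typescript
test.each(Array.from({ length: 50 }, (_, i) => i + 1))(
  'boot/shutdown attempt %i is stable',
  async () => {
    const result = await bootAndShutdown();
    expect(result.state).toBe('Shutdown');
  }
);
```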

#### Common Causes and Fixes
```typescript
// ❌ FLAKY: Race condition
test('concurrent operations', async () => {
  startOperation1();
  startOperation2();
  await wait(100); // Arbitrary wait
  expect(getResult()).toBe('complete');
});

// ✅ FIXED: Proper synchronization
test('concurrent operations', async () => {
  const op1 = startOperation1();
  const op2 = startOperation2();
  await Promise.all([op1, op2]);
  expect(getResult()).toBe('complete');
});

// ❌ FLAKY: External dependency
test('fetches weather', async () => {
  const weather = await fetchWeather('London');
  expect(weather.temp).toBeGreaterThan(0);
});

// ✅ FIXED: Mock external service
test('fetches weather', async () => {
  mockWeatherAPI.mockResolvedValue({ temp: 20, condition: 'sunny' });
  const weather = await fetchWeather('London');
  expect(weather.temp).toBe(20);
});
```

## Troubleshooting Jest TypeScript Issues

### Common Problems and Solutions

#### Problem: "Argument of type X is not assignable to parameter of type 'never'"
**Solution**: Add explicit type signature to jest.fn()
```typescript
// Wrong
const mock = jest.fn();
// Right
const mock = jest.fn<() => Promise<string>>();
```

#### Problem: "Expected 0-1 type arguments, but got 2" (TS2558)
**Solution**: You're using old Jest syntax. Use modern syntax:
```typescript
// Wrong (old syntax)
jest.fn<any, any[]>()
// Right (modern syntax)
jest.fn<(...args: any[]) => any>()
```

#### Problem: instanceof checks failing in tests
**Solution**: Use Object.create(ClassName.prototype) for the mock object

#### Problem: Mock structure doesn't match actual implementation
**Solution**: Always verify the actual interface by reading the source code

#### Problem: Validation errors not caught in tests
**Solution**: Use `await expect(...).rejects.toThrow()` for code that throws
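
For example, with the `bootSimulatorSchema` from the parsing section above (zod's `parse` throws synchronously inside the async method, so assert on the rejected promise):

```typescript
it('should throw when device ID is empty', async () => {
  const tool = new BootSimulatorTool();

  await expect(tool.execute({ deviceId: '' })).rejects.toThrow('Device ID is required');
});
```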

## References

### Core Testing Philosophy
1. "Parse, Don't Validate" - Alexis King
2. "Domain Primitives" - Secure by Design (Dan Bergh Johnsson, Daniel Deogun, Daniel Sawano)
3. "Write tests. Not too many. Mostly integration." - Kent C. Dodds
4. "Test Behavior, Not Implementation" - Martin Fowler
5. "Working Effectively with Legacy Code" - Michael Feathers

### Testing Techniques
6. "Property-Based Testing" - QuickCheck (Koen Claessen and John Hughes)
7. "Consumer-Driven Contracts" - Pact
8. "The Art of Unit Testing" - Roy Osherove
9. "Growing Object-Oriented Software, Guided by Tests" - Steve Freeman and Nat Pryce
10. "xUnit Test Patterns" - Gerard Meszaros

### Modern Approaches
11. "Testing Trophy" - Kent C. Dodds
12. "Mutation Testing" - PITest, Stryker
13. "Approval Tests" - Llewellyn Falco
14. "Hexagonal Architecture" - Alistair Cockburn
15. "FIRST Principles" - Clean Code (Robert C. Martin)
```