This is page 8 of 14. Use http://codebase.md/cameroncooke/xcodebuildmcp?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .axe-version
├── .claude
│   └── agents
│       └── xcodebuild-mcp-qa-tester.md
├── .cursor
│   ├── BUGBOT.md
│   └── environment.json
├── .cursorrules
├── .github
│   ├── FUNDING.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.yml
│   │   ├── config.yml
│   │   └── feature_request.yml
│   └── workflows
│       ├── ci.yml
│       ├── claude-code-review.yml
│       ├── claude-dispatch.yml
│       ├── claude.yml
│       ├── droid-code-review.yml
│       ├── README.md
│       ├── release.yml
│       └── sentry.yml
├── .gitignore
├── .prettierignore
├── .prettierrc.js
├── .vscode
│   ├── extensions.json
│   ├── launch.json
│   ├── mcp.json
│   ├── settings.json
│   └── tasks.json
├── AGENTS.md
├── banner.png
├── build-plugins
│   ├── plugin-discovery.js
│   ├── plugin-discovery.ts
│   └── tsconfig.json
├── CHANGELOG.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── Dockerfile
├── docs
│   ├── ARCHITECTURE.md
│   ├── CODE_QUALITY.md
│   ├── CONTRIBUTING.md
│   ├── ESLINT_TYPE_SAFETY.md
│   ├── MANUAL_TESTING.md
│   ├── NODEJS_2025.md
│   ├── PLUGIN_DEVELOPMENT.md
│   ├── RELEASE_PROCESS.md
│   ├── RELOADEROO_FOR_XCODEBUILDMCP.md
│   ├── RELOADEROO_XCODEBUILDMCP_PRIMER.md
│   ├── RELOADEROO.md
│   ├── session_management_plan.md
│   ├── session-aware-migration-todo.md
│   ├── TEST_RUNNER_ENV_IMPLEMENTATION_PLAN.md
│   ├── TESTING.md
│   └── TOOLS.md
├── eslint.config.js
├── example_projects
│   ├── .vscode
│   │   └── launch.json
│   ├── iOS
│   │   ├── .cursor
│   │   │   └── rules
│   │   │       └── errors.mdc
│   │   ├── .vscode
│   │   │   └── settings.json
│   │   ├── Makefile
│   │   ├── MCPTest
│   │   │   ├── Assets.xcassets
│   │   │   │   ├── AccentColor.colorset
│   │   │   │   │   └── Contents.json
│   │   │   │   ├── AppIcon.appiconset
│   │   │   │   │   └── Contents.json
│   │   │   │   └── Contents.json
│   │   │   ├── ContentView.swift
│   │   │   ├── MCPTestApp.swift
│   │   │   └── Preview Content
│   │   │       └── Preview Assets.xcassets
│   │   │           └── Contents.json
│   │   ├── MCPTest.xcodeproj
│   │   │   ├── project.pbxproj
│   │   │   └── xcshareddata
│   │   │       └── xcschemes
│   │   │           └── MCPTest.xcscheme
│   │   └── MCPTestUITests
│   │       └── MCPTestUITests.swift
│   ├── iOS_Calculator
│   │   ├── CalculatorApp
│   │   │   ├── Assets.xcassets
│   │   │   │   ├── AccentColor.colorset
│   │   │   │   │   └── Contents.json
│   │   │   │   ├── AppIcon.appiconset
│   │   │   │   │   └── Contents.json
│   │   │   │   └── Contents.json
│   │   │   ├── CalculatorApp.swift
│   │   │   └── CalculatorApp.xctestplan
│   │   ├── CalculatorApp.xcodeproj
│   │   │   ├── project.pbxproj
│   │   │   └── xcshareddata
│   │   │       └── xcschemes
│   │   │           └── CalculatorApp.xcscheme
│   │   ├── CalculatorApp.xcworkspace
│   │   │   └── contents.xcworkspacedata
│   │   ├── CalculatorAppPackage
│   │   │   ├── .gitignore
│   │   │   ├── Package.swift
│   │   │   ├── Sources
│   │   │   │   └── CalculatorAppFeature
│   │   │   │       ├── BackgroundEffect.swift
│   │   │   │       ├── CalculatorButton.swift
│   │   │   │       ├── CalculatorDisplay.swift
│   │   │   │       ├── CalculatorInputHandler.swift
│   │   │   │       ├── CalculatorService.swift
│   │   │   │       └── ContentView.swift
│   │   │   └── Tests
│   │   │       └── CalculatorAppFeatureTests
│   │   │           └── CalculatorServiceTests.swift
│   │   ├── CalculatorAppTests
│   │   │   └── CalculatorAppTests.swift
│   │   └── Config
│   │       ├── Debug.xcconfig
│   │       ├── Release.xcconfig
│   │       ├── Shared.xcconfig
│   │       └── Tests.xcconfig
│   ├── macOS
│   │   ├── MCPTest
│   │   │   ├── Assets.xcassets
│   │   │   │   ├── AccentColor.colorset
│   │   │   │   │   └── Contents.json
│   │   │   │   ├── AppIcon.appiconset
│   │   │   │   │   └── Contents.json
│   │   │   │   └── Contents.json
│   │   │   ├── ContentView.swift
│   │   │   ├── MCPTest.entitlements
│   │   │   ├── MCPTestApp.swift
│   │   │   └── Preview Content
│   │   │       └── Preview Assets.xcassets
│   │   │           └── Contents.json
│   │   └── MCPTest.xcodeproj
│   │       ├── project.pbxproj
│   │       └── xcshareddata
│   │           └── xcschemes
│   │               └── MCPTest.xcscheme
│   └── spm
│       ├── .gitignore
│       ├── Package.resolved
│       ├── Package.swift
│       ├── Sources
│       │   ├── long-server
│       │   │   └── main.swift
│       │   ├── quick-task
│       │   │   └── main.swift
│       │   ├── spm
│       │   │   └── main.swift
│       │   └── TestLib
│       │       └── TaskManager.swift
│       └── Tests
│           └── TestLibTests
│               └── SimpleTests.swift
├── LICENSE
├── mcp-install-dark.png
├── package-lock.json
├── package.json
├── README.md
├── scripts
│   ├── analysis
│   │   └── tools-analysis.ts
│   ├── bundle-axe.sh
│   ├── check-code-patterns.js
│   ├── release.sh
│   ├── tools-cli.ts
│   └── update-tools-docs.ts
├── server.json
├── smithery.yaml
├── src
│   ├── core
│   │   ├── __tests__
│   │   │   └── resources.test.ts
│   │   ├── dynamic-tools.ts
│   │   ├── plugin-registry.ts
│   │   ├── plugin-types.ts
│   │   └── resources.ts
│   ├── doctor-cli.ts
│   ├── index.ts
│   ├── mcp
│   │   ├── resources
│   │   │   ├── __tests__
│   │   │   │   ├── devices.test.ts
│   │   │   │   ├── doctor.test.ts
│   │   │   │   └── simulators.test.ts
│   │   │   ├── devices.ts
│   │   │   ├── doctor.ts
│   │   │   └── simulators.ts
│   │   └── tools
│   │       ├── device
│   │       │   ├── __tests__
│   │       │   │   ├── build_device.test.ts
│   │       │   │   ├── get_device_app_path.test.ts
│   │       │   │   ├── index.test.ts
│   │       │   │   ├── install_app_device.test.ts
│   │       │   │   ├── launch_app_device.test.ts
│   │       │   │   ├── list_devices.test.ts
│   │       │   │   ├── re-exports.test.ts
│   │       │   │   ├── stop_app_device.test.ts
│   │       │   │   └── test_device.test.ts
│   │       │   ├── build_device.ts
│   │       │   ├── clean.ts
│   │       │   ├── discover_projs.ts
│   │       │   ├── get_app_bundle_id.ts
│   │       │   ├── get_device_app_path.ts
│   │       │   ├── index.ts
│   │       │   ├── install_app_device.ts
│   │       │   ├── launch_app_device.ts
│   │       │   ├── list_devices.ts
│   │       │   ├── list_schemes.ts
│   │       │   ├── show_build_settings.ts
│   │       │   ├── start_device_log_cap.ts
│   │       │   ├── stop_app_device.ts
│   │       │   ├── stop_device_log_cap.ts
│   │       │   └── test_device.ts
│   │       ├── discovery
│   │       │   ├── __tests__
│   │       │   │   └── discover_tools.test.ts
│   │       │   ├── discover_tools.ts
│   │       │   └── index.ts
│   │       ├── doctor
│   │       │   ├── __tests__
│   │       │   │   ├── doctor.test.ts
│   │       │   │   └── index.test.ts
│   │       │   ├── doctor.ts
│   │       │   ├── index.ts
│   │       │   └── lib
│   │       │       └── doctor.deps.ts
│   │       ├── logging
│   │       │   ├── __tests__
│   │       │   │   ├── index.test.ts
│   │       │   │   ├── start_device_log_cap.test.ts
│   │       │   │   ├── start_sim_log_cap.test.ts
│   │       │   │   ├── stop_device_log_cap.test.ts
│   │       │   │   └── stop_sim_log_cap.test.ts
│   │       │   ├── index.ts
│   │       │   ├── start_device_log_cap.ts
│   │       │   ├── start_sim_log_cap.ts
│   │       │   ├── stop_device_log_cap.ts
│   │       │   └── stop_sim_log_cap.ts
│   │       ├── macos
│   │       │   ├── __tests__
│   │       │   │   ├── build_macos.test.ts
│   │       │   │   ├── build_run_macos.test.ts
│   │       │   │   ├── get_mac_app_path.test.ts
│   │       │   │   ├── index.test.ts
│   │       │   │   ├── launch_mac_app.test.ts
│   │       │   │   ├── re-exports.test.ts
│   │       │   │   ├── stop_mac_app.test.ts
│   │       │   │   └── test_macos.test.ts
│   │       │   ├── build_macos.ts
│   │       │   ├── build_run_macos.ts
│   │       │   ├── clean.ts
│   │       │   ├── discover_projs.ts
│   │       │   ├── get_mac_app_path.ts
│   │       │   ├── get_mac_bundle_id.ts
│   │       │   ├── index.ts
│   │       │   ├── launch_mac_app.ts
│   │       │   ├── list_schemes.ts
│   │       │   ├── show_build_settings.ts
│   │       │   ├── stop_mac_app.ts
│   │       │   └── test_macos.ts
│   │       ├── project-discovery
│   │       │   ├── __tests__
│   │       │   │   ├── discover_projs.test.ts
│   │       │   │   ├── get_app_bundle_id.test.ts
│   │       │   │   ├── get_mac_bundle_id.test.ts
│   │       │   │   ├── index.test.ts
│   │       │   │   ├── list_schemes.test.ts
│   │       │   │   └── show_build_settings.test.ts
│   │       │   ├── discover_projs.ts
│   │       │   ├── get_app_bundle_id.ts
│   │       │   ├── get_mac_bundle_id.ts
│   │       │   ├── index.ts
│   │       │   ├── list_schemes.ts
│   │       │   └── show_build_settings.ts
│   │       ├── project-scaffolding
│   │       │   ├── __tests__
│   │       │   │   ├── index.test.ts
│   │       │   │   ├── scaffold_ios_project.test.ts
│   │       │   │   └── scaffold_macos_project.test.ts
│   │       │   ├── index.ts
│   │       │   ├── scaffold_ios_project.ts
│   │       │   └── scaffold_macos_project.ts
│   │       ├── session-management
│   │       │   ├── __tests__
│   │       │   │   ├── index.test.ts
│   │       │   │   ├── session_clear_defaults.test.ts
│   │       │   │   ├── session_set_defaults.test.ts
│   │       │   │   └── session_show_defaults.test.ts
│   │       │   ├── index.ts
│   │       │   ├── session_clear_defaults.ts
│   │       │   ├── session_set_defaults.ts
│   │       │   └── session_show_defaults.ts
│   │       ├── simulator
│   │       │   ├── __tests__
│   │       │   │   ├── boot_sim.test.ts
│   │       │   │   ├── build_run_sim.test.ts
│   │       │   │   ├── build_sim.test.ts
│   │       │   │   ├── get_sim_app_path.test.ts
│   │       │   │   ├── index.test.ts
│   │       │   │   ├── install_app_sim.test.ts
│   │       │   │   ├── launch_app_logs_sim.test.ts
│   │       │   │   ├── launch_app_sim.test.ts
│   │       │   │   ├── list_sims.test.ts
│   │       │   │   ├── open_sim.test.ts
│   │       │   │   ├── record_sim_video.test.ts
│   │       │   │   ├── screenshot.test.ts
│   │       │   │   ├── stop_app_sim.test.ts
│   │       │   │   └── test_sim.test.ts
│   │       │   ├── boot_sim.ts
│   │       │   ├── build_run_sim.ts
│   │       │   ├── build_sim.ts
│   │       │   ├── clean.ts
│   │       │   ├── describe_ui.ts
│   │       │   ├── discover_projs.ts
│   │       │   ├── get_app_bundle_id.ts
│   │       │   ├── get_sim_app_path.ts
│   │       │   ├── index.ts
│   │       │   ├── install_app_sim.ts
│   │       │   ├── launch_app_logs_sim.ts
│   │       │   ├── launch_app_sim.ts
│   │       │   ├── list_schemes.ts
│   │       │   ├── list_sims.ts
│   │       │   ├── open_sim.ts
│   │       │   ├── record_sim_video.ts
│   │       │   ├── screenshot.ts
│   │       │   ├── show_build_settings.ts
│   │       │   ├── stop_app_sim.ts
│   │       │   └── test_sim.ts
│   │       ├── simulator-management
│   │       │   ├── __tests__
│   │       │   │   ├── erase_sims.test.ts
│   │       │   │   ├── index.test.ts
│   │       │   │   ├── reset_sim_location.test.ts
│   │       │   │   ├── set_sim_appearance.test.ts
│   │       │   │   ├── set_sim_location.test.ts
│   │       │   │   └── sim_statusbar.test.ts
│   │       │   ├── boot_sim.ts
│   │       │   ├── erase_sims.ts
│   │       │   ├── index.ts
│   │       │   ├── list_sims.ts
│   │       │   ├── open_sim.ts
│   │       │   ├── reset_sim_location.ts
│   │       │   ├── set_sim_appearance.ts
│   │       │   ├── set_sim_location.ts
│   │       │   └── sim_statusbar.ts
│   │       ├── swift-package
│   │       │   ├── __tests__
│   │       │   │   ├── active-processes.test.ts
│   │       │   │   ├── index.test.ts
│   │       │   │   ├── swift_package_build.test.ts
│   │       │   │   ├── swift_package_clean.test.ts
│   │       │   │   ├── swift_package_list.test.ts
│   │       │   │   ├── swift_package_run.test.ts
│   │       │   │   ├── swift_package_stop.test.ts
│   │       │   │   └── swift_package_test.test.ts
│   │       │   ├── active-processes.ts
│   │       │   ├── index.ts
│   │       │   ├── swift_package_build.ts
│   │       │   ├── swift_package_clean.ts
│   │       │   ├── swift_package_list.ts
│   │       │   ├── swift_package_run.ts
│   │       │   ├── swift_package_stop.ts
│   │       │   └── swift_package_test.ts
│   │       ├── ui-testing
│   │       │   ├── __tests__
│   │       │   │   ├── button.test.ts
│   │       │   │   ├── describe_ui.test.ts
│   │       │   │   ├── gesture.test.ts
│   │       │   │   ├── index.test.ts
│   │       │   │   ├── key_press.test.ts
│   │       │   │   ├── key_sequence.test.ts
│   │       │   │   ├── long_press.test.ts
│   │       │   │   ├── screenshot.test.ts
│   │       │   │   ├── swipe.test.ts
│   │       │   │   ├── tap.test.ts
│   │       │   │   ├── touch.test.ts
│   │       │   │   └── type_text.test.ts
│   │       │   ├── button.ts
│   │       │   ├── describe_ui.ts
│   │       │   ├── gesture.ts
│   │       │   ├── index.ts
│   │       │   ├── key_press.ts
│   │       │   ├── key_sequence.ts
│   │       │   ├── long_press.ts
│   │       │   ├── screenshot.ts
│   │       │   ├── swipe.ts
│   │       │   ├── tap.ts
│   │       │   ├── touch.ts
│   │       │   └── type_text.ts
│   │       └── utilities
│   │           ├── __tests__
│   │           │   ├── clean.test.ts
│   │           │   └── index.test.ts
│   │           ├── clean.ts
│   │           └── index.ts
│   ├── server
│   │   └── server.ts
│   ├── test-utils
│   │   └── mock-executors.ts
│   ├── types
│   │   └── common.ts
│   └── utils
│       ├── __tests__
│       │   ├── build-utils.test.ts
│       │   ├── environment.test.ts
│       │   ├── session-aware-tool-factory.test.ts
│       │   ├── session-store.test.ts
│       │   ├── simulator-utils.test.ts
│       │   ├── test-runner-env-integration.test.ts
│       │   └── typed-tool-factory.test.ts
│       ├── axe
│       │   └── index.ts
│       ├── axe-helpers.ts
│       ├── build
│       │   └── index.ts
│       ├── build-utils.ts
│       ├── capabilities.ts
│       ├── command.ts
│       ├── CommandExecutor.ts
│       ├── environment.ts
│       ├── errors.ts
│       ├── execution
│       │   └── index.ts
│       ├── FileSystemExecutor.ts
│       ├── log_capture.ts
│       ├── log-capture
│       │   └── index.ts
│       ├── logger.ts
│       ├── logging
│       │   └── index.ts
│       ├── plugin-registry
│       │   └── index.ts
│       ├── responses
│       │   └── index.ts
│       ├── schema-helpers.ts
│       ├── sentry.ts
│       ├── session-store.ts
│       ├── simulator-utils.ts
│       ├── template
│       │   └── index.ts
│       ├── template-manager.ts
│       ├── test
│       │   └── index.ts
│       ├── test-common.ts
│       ├── tool-registry.ts
│       ├── typed-tool-factory.ts
│       ├── validation
│       │   └── index.ts
│       ├── validation.ts
│       ├── version
│       │   └── index.ts
│       ├── video_capture.ts
│       ├── video-capture
│       │   └── index.ts
│       ├── xcode.ts
│       ├── xcodemake
│       │   └── index.ts
│       └── xcodemake.ts
├── tsconfig.json
├── tsconfig.test.json
├── tsup.config.ts
└── vitest.config.ts
```

# Files

--------------------------------------------------------------------------------
/src/mcp/tools/device/test_device.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Device Shared Plugin: Test Device (Unified)
  3 |  *
  4 |  * Runs tests for an Apple project or workspace on a physical device (iPhone, iPad, Apple Watch, Apple TV, Apple Vision Pro)
  5 |  * using xcodebuild test and parses xcresult output. Accepts mutually exclusive `projectPath` or `workspacePath`.
  6 |  */
  7 | 
  8 | import { z } from 'zod';
  9 | import { join } from 'path';
 10 | import { ToolResponse, XcodePlatform } from '../../../types/common.ts';
 11 | import { log } from '../../../utils/logging/index.ts';
 12 | import { executeXcodeBuildCommand } from '../../../utils/build/index.ts';
 13 | import { createTextResponse } from '../../../utils/responses/index.ts';
 14 | import { normalizeTestRunnerEnv } from '../../../utils/environment.ts';
 15 | import type {
 16 |   CommandExecutor,
 17 |   FileSystemExecutor,
 18 |   CommandExecOptions,
 19 | } from '../../../utils/execution/index.ts';
 20 | import {
 21 |   getDefaultCommandExecutor,
 22 |   getDefaultFileSystemExecutor,
 23 | } from '../../../utils/execution/index.ts';
 24 | import { createSessionAwareTool } from '../../../utils/typed-tool-factory.ts';
 25 | import { nullifyEmptyStrings } from '../../../utils/schema-helpers.ts';
 26 | 
 27 | // Unified schema: XOR between projectPath and workspacePath
 28 | const baseSchemaObject = z.object({
 29 |   projectPath: z.string().optional().describe('Path to the .xcodeproj file'),
 30 |   workspacePath: z.string().optional().describe('Path to the .xcworkspace file'),
 31 |   scheme: z.string().describe('The scheme to test'),
 32 |   deviceId: z.string().describe('UDID of the device (obtained from list_devices)'),
 33 |   configuration: z.string().optional().describe('Build configuration (Debug, Release)'),
 34 |   derivedDataPath: z.string().optional().describe('Path to derived data directory'),
 35 |   extraArgs: z.array(z.string()).optional().describe('Additional arguments to pass to xcodebuild'),
 36 |   preferXcodebuild: z.boolean().optional().describe('Prefer xcodebuild over faster alternatives'),
 37 |   platform: z
 38 |     .enum(['iOS', 'watchOS', 'tvOS', 'visionOS'])
 39 |     .optional()
 40 |     .describe('Target platform (defaults to iOS)'),
 41 |   testRunnerEnv: z
 42 |     .record(z.string(), z.string())
 43 |     .optional()
 44 |     .describe(
 45 |       'Environment variables to pass to the test runner (TEST_RUNNER_ prefix added automatically)',
 46 |     ),
 47 | });
 48 | 
 49 | const baseSchema = z.preprocess(nullifyEmptyStrings, baseSchemaObject);
 50 | 
 51 | const testDeviceSchema = baseSchema
 52 |   .refine((val) => val.projectPath !== undefined || val.workspacePath !== undefined, {
 53 |     message: 'Either projectPath or workspacePath is required.',
 54 |   })
 55 |   .refine((val) => !(val.projectPath !== undefined && val.workspacePath !== undefined), {
 56 |     message: 'projectPath and workspacePath are mutually exclusive. Provide only one.',
 57 |   });
 58 | 
 59 | export type TestDeviceParams = z.infer<typeof testDeviceSchema>;
 60 | 
 61 | /**
 62 |  * Type definition for test summary structure from xcresulttool
 63 |  * (JavaScript implementation - no actual interface, this is just documentation)
 64 |  */
 65 | 
 66 | /**
 67 |  * Parse xcresult bundle using xcrun xcresulttool
 68 |  */
 69 | async function parseXcresultBundle(
 70 |   resultBundlePath: string,
 71 |   executor: CommandExecutor = getDefaultCommandExecutor(),
 72 | ): Promise<string> {
 73 |   try {
 74 |     // Use injected executor for testing
 75 |     const result = await executor(
 76 |       ['xcrun', 'xcresulttool', 'get', 'test-results', 'summary', '--path', resultBundlePath],
 77 |       'Parse xcresult bundle',
 78 |     );
 79 |     if (!result.success) {
 80 |       throw new Error(result.error ?? 'Failed to execute xcresulttool');
 81 |     }
 82 |     if (!result.output || result.output.trim().length === 0) {
 83 |       throw new Error('xcresulttool returned no output');
 84 |     }
 85 | 
 86 |     // Parse JSON response and format as human-readable
 87 |     const summaryData = JSON.parse(result.output) as Record<string, unknown>;
 88 |     return formatTestSummary(summaryData);
 89 |   } catch (error) {
 90 |     const errorMessage = error instanceof Error ? error.message : String(error);
 91 |     log('error', `Error parsing xcresult bundle: ${errorMessage}`);
 92 |     throw error;
 93 |   }
 94 | }
 95 | 
 96 | /**
 97 |  * Format test summary JSON into human-readable text
 98 |  */
 99 | function formatTestSummary(summary: Record<string, unknown>): string {
100 |   const lines = [];
101 | 
102 |   lines.push(`Test Summary: ${summary.title ?? 'Unknown'}`);
103 |   lines.push(`Overall Result: ${summary.result ?? 'Unknown'}`);
104 |   lines.push('');
105 | 
106 |   lines.push('Test Counts:');
107 |   lines.push(`  Total: ${summary.totalTestCount ?? 0}`);
108 |   lines.push(`  Passed: ${summary.passedTests ?? 0}`);
109 |   lines.push(`  Failed: ${summary.failedTests ?? 0}`);
110 |   lines.push(`  Skipped: ${summary.skippedTests ?? 0}`);
111 |   lines.push(`  Expected Failures: ${summary.expectedFailures ?? 0}`);
112 |   lines.push('');
113 | 
114 |   if (summary.environmentDescription) {
115 |     lines.push(`Environment: ${summary.environmentDescription}`);
116 |     lines.push('');
117 |   }
118 | 
119 |   if (
120 |     summary.devicesAndConfigurations &&
121 |     Array.isArray(summary.devicesAndConfigurations) &&
122 |     summary.devicesAndConfigurations.length > 0
123 |   ) {
124 |     const deviceConfig = summary.devicesAndConfigurations[0] as Record<string, unknown>;
125 |     const device = deviceConfig.device as Record<string, unknown> | undefined;
126 |     if (device) {
127 |       lines.push(
128 |         `Device: ${device.deviceName ?? 'Unknown'} (${device.platform ?? 'Unknown'} ${device.osVersion ?? 'Unknown'})`,
129 |       );
130 |       lines.push('');
131 |     }
132 |   }
133 | 
134 |   if (
135 |     summary.testFailures &&
136 |     Array.isArray(summary.testFailures) &&
137 |     summary.testFailures.length > 0
138 |   ) {
139 |     lines.push('Test Failures:');
140 |     summary.testFailures.forEach((failureItem, index) => {
141 |       const failure = failureItem as Record<string, unknown>;
142 |       lines.push(
143 |         `  ${index + 1}. ${failure.testName ?? 'Unknown Test'} (${failure.targetName ?? 'Unknown Target'})`,
144 |       );
145 |       if (failure.failureText) {
146 |         lines.push(`     ${failure.failureText}`);
147 |       }
148 |     });
149 |     lines.push('');
150 |   }
151 | 
152 |   if (summary.topInsights && Array.isArray(summary.topInsights) && summary.topInsights.length > 0) {
153 |     lines.push('Insights:');
154 |     summary.topInsights.forEach((insightItem, index) => {
155 |       const insight = insightItem as Record<string, unknown>;
156 |       lines.push(
157 |         `  ${index + 1}. [${insight.impact ?? 'Unknown'}] ${insight.text ?? 'No description'}`,
158 |       );
159 |     });
160 |   }
161 | 
162 |   return lines.join('\n');
163 | }
164 | 
165 | /**
166 |  * Business logic for running tests with platform-specific handling.
167 |  * Exported for direct testing and reuse.
168 |  */
169 | export async function testDeviceLogic(
170 |   params: TestDeviceParams,
171 |   executor: CommandExecutor = getDefaultCommandExecutor(),
172 |   fileSystemExecutor: FileSystemExecutor = getDefaultFileSystemExecutor(),
173 | ): Promise<ToolResponse> {
174 |   log(
175 |     'info',
176 |     `Starting test run for scheme ${params.scheme} on platform ${params.platform ?? 'iOS'} (internal)`,
177 |   );
178 | 
179 |   let tempDir: string | undefined;
180 |   const cleanup = async (): Promise<void> => {
181 |     if (!tempDir) return;
182 |     try {
183 |       await fileSystemExecutor.rm(tempDir, { recursive: true, force: true });
184 |     } catch (cleanupError) {
185 |       log('warn', `Failed to clean up temporary directory: ${cleanupError}`);
186 |     }
187 |   };
188 | 
189 |   try {
190 |     // Create temporary directory for xcresult bundle
191 |     tempDir = await fileSystemExecutor.mkdtemp(
192 |       join(fileSystemExecutor.tmpdir(), 'xcodebuild-test-'),
193 |     );
194 |     const resultBundlePath = join(tempDir, 'TestResults.xcresult');
195 | 
196 |     // Add resultBundlePath to extraArgs
197 |     const extraArgs = [...(params.extraArgs ?? []), `-resultBundlePath`, resultBundlePath];
198 | 
199 |     // Prepare execution options with TEST_RUNNER_ environment variables
200 |     const execOpts: CommandExecOptions | undefined = params.testRunnerEnv
201 |       ? { env: normalizeTestRunnerEnv(params.testRunnerEnv) }
202 |       : undefined;
203 | 
204 |     // Run the test command
205 |     const testResult = await executeXcodeBuildCommand(
206 |       {
207 |         projectPath: params.projectPath,
208 |         workspacePath: params.workspacePath,
209 |         scheme: params.scheme,
210 |         configuration: params.configuration ?? 'Debug',
211 |         derivedDataPath: params.derivedDataPath,
212 |         extraArgs,
213 |       },
214 |       {
215 |         platform: (params.platform as XcodePlatform) || XcodePlatform.iOS,
216 |         simulatorName: undefined,
217 |         simulatorId: undefined,
218 |         deviceId: params.deviceId,
219 |         useLatestOS: false,
220 |         logPrefix: 'Test Run',
221 |       },
222 |       params.preferXcodebuild,
223 |       'test',
224 |       executor,
225 |       execOpts,
226 |     );
227 | 
228 |     // Parse xcresult bundle if it exists, regardless of whether tests passed or failed
229 |     // Test failures are expected and should not prevent xcresult parsing
230 |     try {
231 |       log('info', `Attempting to parse xcresult bundle at: ${resultBundlePath}`);
232 | 
233 |       // Check if the file exists
234 |       try {
235 |         await fileSystemExecutor.stat(resultBundlePath);
236 |         log('info', `xcresult bundle exists at: ${resultBundlePath}`);
237 |       } catch {
238 |         log('warn', `xcresult bundle does not exist at: ${resultBundlePath}`);
239 |         throw new Error(`xcresult bundle not found at ${resultBundlePath}`);
240 |       }
241 | 
242 |       const testSummary = await parseXcresultBundle(resultBundlePath, executor);
243 |       log('info', 'Successfully parsed xcresult bundle');
244 | 
245 |       // Clean up temporary directory
246 |       await cleanup();
247 | 
248 |       // Return combined result - preserve isError from testResult (test failures should be marked as errors)
249 |       return {
250 |         content: [
251 |           ...(testResult.content || []),
252 |           {
253 |             type: 'text',
254 |             text: '\nTest Results Summary:\n' + testSummary,
255 |           },
256 |         ],
257 |         isError: testResult.isError,
258 |       };
259 |     } catch (parseError) {
260 |       // If parsing fails, return original test result
261 |       log('warn', `Failed to parse xcresult bundle: ${parseError}`);
262 | 
263 |       await cleanup();
264 | 
265 |       return testResult;
266 |     }
267 |   } catch (error) {
268 |     const errorMessage = error instanceof Error ? error.message : String(error);
269 |     log('error', `Error during test run: ${errorMessage}`);
270 |     return createTextResponse(`Error during test run: ${errorMessage}`, true);
271 |   } finally {
272 |     await cleanup();
273 |   }
274 | }
275 | 
276 | export default {
277 |   name: 'test_device',
278 |   description: 'Runs tests on a physical Apple device.',
279 |   schema: baseSchemaObject.omit({
280 |     projectPath: true,
281 |     workspacePath: true,
282 |     scheme: true,
283 |     deviceId: true,
284 |     configuration: true,
285 |   } as const).shape,
286 |   handler: createSessionAwareTool<TestDeviceParams>({
287 |     internalSchema: testDeviceSchema as unknown as z.ZodType<TestDeviceParams>,
288 |     logicFunction: (params: TestDeviceParams, executor: CommandExecutor) =>
289 |       testDeviceLogic(
290 |         {
291 |           ...params,
292 |           platform: params.platform ?? 'iOS',
293 |         },
294 |         executor,
295 |         getDefaultFileSystemExecutor(),
296 |       ),
297 |     getExecutor: getDefaultCommandExecutor,
298 |     requirements: [
299 |       { allOf: ['scheme', 'deviceId'], message: 'Provide scheme and deviceId' },
300 |       { oneOf: ['projectPath', 'workspacePath'], message: 'Provide a project or workspace' },
301 |     ],
302 |     exclusivePairs: [['projectPath', 'workspacePath']],
303 |   }),
304 | };
305 | 
```
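
Taken together with the dependency-injection test patterns shown elsewhere on this page, the hypothetical sketch below (not part of the repository) shows one way `testDeviceLogic` could be driven directly with the mock executors from `test-utils/mock-executors.ts`. The defaults provided by `createMockFileSystemExecutor`, and the placeholder paths, scheme, and device UDID, are assumptions for illustration only.

```typescript
// Hypothetical usage sketch; paths, scheme, and device UDID are placeholders.
import {
  createMockExecutor,
  createMockFileSystemExecutor,
} from '../../../test-utils/mock-executors.ts';
import { testDeviceLogic } from './test_device.ts';

const executor = createMockExecutor({ success: true, output: 'Test Succeeded' });
// Assumption: the mock file-system executor supplies tmpdir/mkdtemp/stat/rm defaults
// compatible with the calls made inside testDeviceLogic above.
const fileSystem = createMockFileSystemExecutor();

const result = await testDeviceLogic(
  {
    projectPath: '/path/to/MCPTest.xcodeproj', // exactly one of projectPath / workspacePath
    scheme: 'MCPTest',
    deviceId: 'DEVICE-UDID-FROM-list_devices',
    platform: 'iOS',
    testRunnerEnv: { MY_FLAG: '1' }, // forwarded as TEST_RUNNER_MY_FLAG via normalizeTestRunnerEnv
  },
  executor,
  fileSystem,
);
console.log(result.isError ?? false, result.content?.[0]);
```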

--------------------------------------------------------------------------------
/src/mcp/tools/logging/__tests__/stop_sim_log_cap.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * stop_sim_log_cap Plugin Tests - Test coverage for stop_sim_log_cap plugin
  3 |  *
  4 |  * This test file provides complete coverage for the stop_sim_log_cap plugin:
  5 |  * - Plugin structure validation
  6 |  * - Handler functionality (stop log capture session and retrieve captured logs)
  7 |  * - Error handling for validation and log capture failures
  8 |  *
  9 |  * Tests follow the canonical testing patterns from CLAUDE.md with deterministic
 10 |  * response validation and comprehensive parameter testing.
 11 |  * Converted to pure dependency injection without vitest mocking.
 12 |  */
 13 | 
 14 | import { describe, it, expect, beforeEach } from 'vitest';
 15 | import { z } from 'zod';
 16 | import stopSimLogCap, { stop_sim_log_capLogic } from '../stop_sim_log_cap.ts';
 17 | import { createMockFileSystemExecutor } from '../../../../test-utils/mock-executors.ts';
 18 | import { activeLogSessions } from '../../../../utils/log_capture.ts';
 19 | 
 20 | describe('stop_sim_log_cap plugin', () => {
 21 |   let mockFileSystem: any;
 22 | 
 23 |   beforeEach(() => {
 24 |     mockFileSystem = createMockFileSystemExecutor();
 25 |     // Clear any active sessions before each test
 26 |     activeLogSessions.clear();
 27 |   });
 28 | 
 29 |   // Helper function to create a test log session
 30 |   async function createTestLogSession(sessionId: string, logContent: string = '') {
 31 |     const mockProcess = {
 32 |       pid: 12345,
 33 |       killed: false,
 34 |       exitCode: null,
 35 |       kill: () => {},
 36 |     };
 37 | 
 38 |     const logFilePath = `/tmp/xcodemcp_sim_log_test_${sessionId}.log`;
 39 | 
 40 |     // Create actual file for the test
 41 |     const fs = await import('fs/promises');
 42 |     await fs.writeFile(logFilePath, logContent, 'utf-8');
 43 | 
 44 |     activeLogSessions.set(sessionId, {
 45 |       processes: [mockProcess as any],
 46 |       logFilePath: logFilePath,
 47 |       simulatorUuid: 'test-simulator-uuid',
 48 |       bundleId: 'com.example.TestApp',
 49 |     });
 50 |   }
 51 | 
 52 |   describe('Export Field Validation (Literal)', () => {
 53 |     it('should have correct plugin structure', () => {
 54 |       expect(stopSimLogCap).toHaveProperty('name');
 55 |       expect(stopSimLogCap).toHaveProperty('description');
 56 |       expect(stopSimLogCap).toHaveProperty('schema');
 57 |       expect(stopSimLogCap).toHaveProperty('handler');
 58 | 
 59 |       expect(stopSimLogCap.name).toBe('stop_sim_log_cap');
 60 |       expect(stopSimLogCap.description).toBe(
 61 |         'Stops an active simulator log capture session and returns the captured logs.',
 62 |       );
 63 |       expect(typeof stopSimLogCap.handler).toBe('function');
 64 |       expect(typeof stopSimLogCap.schema).toBe('object');
 65 |     });
 66 | 
 67 |     it('should have correct schema structure', () => {
 68 |       // Schema should be a plain object for MCP protocol compliance
 69 |       expect(typeof stopSimLogCap.schema).toBe('object');
 70 |       expect(stopSimLogCap.schema).toHaveProperty('logSessionId');
 71 | 
 72 |       // Validate that schema fields are Zod types that can be used for validation
 73 |       const schema = z.object(stopSimLogCap.schema);
 74 |       expect(schema.safeParse({ logSessionId: 'test-session-id' }).success).toBe(true);
 75 |       expect(schema.safeParse({ logSessionId: 123 }).success).toBe(false);
 76 |     });
 77 | 
 78 |     it('should validate schema with valid parameters', () => {
 79 |       expect(stopSimLogCap.schema.logSessionId.safeParse('test-session-id').success).toBe(true);
 80 |     });
 81 | 
 82 |     it('should reject invalid schema parameters', () => {
 83 |       expect(stopSimLogCap.schema.logSessionId.safeParse(null).success).toBe(false);
 84 |       expect(stopSimLogCap.schema.logSessionId.safeParse(undefined).success).toBe(false);
 85 |       expect(stopSimLogCap.schema.logSessionId.safeParse(123).success).toBe(false);
 86 |       expect(stopSimLogCap.schema.logSessionId.safeParse(true).success).toBe(false);
 87 |     });
 88 |   });
 89 | 
 90 |   describe('Input Validation', () => {
 91 |     it('should handle null logSessionId (validation handled by framework)', async () => {
 92 |       // With typed tool factory, invalid params won't reach the logic function
 93 |       // This test now validates that the logic function works with valid empty strings
 94 |       await createTestLogSession('', 'Log content for empty session');
 95 | 
 96 |       const result = await stop_sim_log_capLogic(
 97 |         {
 98 |           logSessionId: '',
 99 |         },
100 |         mockFileSystem,
101 |       );
102 | 
103 |       expect(result.isError).toBeUndefined();
104 |       expect(result.content[0].text).toBe(
105 |         'Log capture session  stopped successfully. Log content follows:\n\nLog content for empty session',
106 |       );
107 |     });
108 | 
109 |     it('should handle undefined logSessionId (validation handled by framework)', async () => {
110 |       // With typed tool factory, invalid params won't reach the logic function
111 |       // This test now validates that the logic function works with valid empty strings
112 |       await createTestLogSession('', 'Log content for empty session');
113 | 
114 |       const result = await stop_sim_log_capLogic(
115 |         {
116 |           logSessionId: '',
117 |         },
118 |         mockFileSystem,
119 |       );
120 | 
121 |       expect(result.isError).toBeUndefined();
122 |       expect(result.content[0].text).toBe(
123 |         'Log capture session  stopped successfully. Log content follows:\n\nLog content for empty session',
124 |       );
125 |     });
126 | 
127 |     it('should handle empty string logSessionId', async () => {
128 |       await createTestLogSession('', 'Log content for empty session');
129 | 
130 |       const result = await stop_sim_log_capLogic(
131 |         {
132 |           logSessionId: '',
133 |         },
134 |         mockFileSystem,
135 |       );
136 | 
137 |       expect(result.isError).toBeUndefined();
138 |       expect(result.content[0].text).toBe(
139 |         'Log capture session  stopped successfully. Log content follows:\n\nLog content for empty session',
140 |       );
141 |     });
142 |   });
143 | 
144 |   describe('Function Call Generation', () => {
145 |     it('should call stopLogCapture with correct parameters', async () => {
146 |       await createTestLogSession('test-session-id', 'Mock log content from file');
147 | 
148 |       const result = await stop_sim_log_capLogic(
149 |         {
150 |           logSessionId: 'test-session-id',
151 |         },
152 |         mockFileSystem,
153 |       );
154 | 
155 |       expect(result.isError).toBeUndefined();
156 |       expect(result.content[0].text).toBe(
157 |         'Log capture session test-session-id stopped successfully. Log content follows:\n\nMock log content from file',
158 |       );
159 |     });
160 | 
161 |     it('should call stopLogCapture with different session ID', async () => {
162 |       await createTestLogSession('different-session-id', 'Different log content');
163 | 
164 |       const result = await stop_sim_log_capLogic(
165 |         {
166 |           logSessionId: 'different-session-id',
167 |         },
168 |         mockFileSystem,
169 |       );
170 | 
171 |       expect(result.isError).toBeUndefined();
172 |       expect(result.content[0].text).toBe(
173 |         'Log capture session different-session-id stopped successfully. Log content follows:\n\nDifferent log content',
174 |       );
175 |     });
176 |   });
177 | 
178 |   describe('Response Processing', () => {
179 |     it('should handle successful log capture stop', async () => {
180 |       await createTestLogSession('test-session-id', 'Mock log content from file');
181 | 
182 |       const result = await stop_sim_log_capLogic(
183 |         {
184 |           logSessionId: 'test-session-id',
185 |         },
186 |         mockFileSystem,
187 |       );
188 | 
189 |       expect(result.isError).toBeUndefined();
190 |       expect(result.content[0].text).toBe(
191 |         'Log capture session test-session-id stopped successfully. Log content follows:\n\nMock log content from file',
192 |       );
193 |     });
194 | 
195 |     it('should handle empty log content', async () => {
196 |       await createTestLogSession('test-session-id', '');
197 | 
198 |       const result = await stop_sim_log_capLogic(
199 |         {
200 |           logSessionId: 'test-session-id',
201 |         },
202 |         mockFileSystem,
203 |       );
204 | 
205 |       expect(result.isError).toBeUndefined();
206 |       expect(result.content[0].text).toBe(
207 |         'Log capture session test-session-id stopped successfully. Log content follows:\n\n',
208 |       );
209 |     });
210 | 
211 |     it('should handle multiline log content', async () => {
212 |       await createTestLogSession('test-session-id', 'Line 1\nLine 2\nLine 3');
213 | 
214 |       const result = await stop_sim_log_capLogic(
215 |         {
216 |           logSessionId: 'test-session-id',
217 |         },
218 |         mockFileSystem,
219 |       );
220 | 
221 |       expect(result.isError).toBeUndefined();
222 |       expect(result.content[0].text).toBe(
223 |         'Log capture session test-session-id stopped successfully. Log content follows:\n\nLine 1\nLine 2\nLine 3',
224 |       );
225 |     });
226 | 
227 |     it('should handle log capture stop errors for non-existent session', async () => {
228 |       const result = await stop_sim_log_capLogic(
229 |         {
230 |           logSessionId: 'non-existent-session',
231 |         },
232 |         mockFileSystem,
233 |       );
234 | 
235 |       expect(result.isError).toBe(true);
236 |       expect(result.content[0].text).toBe(
237 |         'Error stopping log capture session non-existent-session: Log capture session not found: non-existent-session',
238 |       );
239 |     });
240 | 
241 |     it('should handle file read errors', async () => {
242 |       // Create session but make file reading fail in the log_capture utility
243 |       const mockProcess = {
244 |         pid: 12345,
245 |         killed: false,
246 |         exitCode: null,
247 |         kill: () => {},
248 |       };
249 | 
250 |       activeLogSessions.set('test-session-id', {
251 |         processes: [mockProcess as any],
252 |         logFilePath: `/tmp/test_file_not_found.log`,
253 |         simulatorUuid: 'test-simulator-uuid',
254 |         bundleId: 'com.example.TestApp',
255 |       });
256 | 
257 |       const result = await stop_sim_log_capLogic(
258 |         {
259 |           logSessionId: 'test-session-id',
260 |         },
261 |         mockFileSystem,
262 |       );
263 | 
264 |       expect(result.isError).toBe(true);
265 |       expect(result.content[0].text).toContain(
266 |         'Error stopping log capture session test-session-id:',
267 |       );
268 |     });
269 | 
270 |     it('should handle permission errors', async () => {
271 |       // Create session but make file reading fail in the log_capture utility
272 |       const mockProcess = {
273 |         pid: 12345,
274 |         killed: false,
275 |         exitCode: null,
276 |         kill: () => {},
277 |       };
278 | 
279 |       activeLogSessions.set('test-session-id', {
280 |         processes: [mockProcess as any],
281 |         logFilePath: `/tmp/test_permission_denied.log`,
282 |         simulatorUuid: 'test-simulator-uuid',
283 |         bundleId: 'com.example.TestApp',
284 |       });
285 | 
286 |       const result = await stop_sim_log_capLogic(
287 |         {
288 |           logSessionId: 'test-session-id',
289 |         },
290 |         mockFileSystem,
291 |       );
292 | 
293 |       expect(result.isError).toBe(true);
294 |       expect(result.content[0].text).toContain(
295 |         'Error stopping log capture session test-session-id:',
296 |       );
297 |     });
298 | 
299 |     it('should handle various error types', async () => {
300 |       // Create session but make file reading fail in the log_capture utility
301 |       const mockProcess = {
302 |         pid: 12345,
303 |         killed: false,
304 |         exitCode: null,
305 |         kill: () => {},
306 |       };
307 | 
308 |       activeLogSessions.set('test-session-id', {
309 |         processes: [mockProcess as any],
310 |         logFilePath: `/tmp/test_generic_error.log`,
311 |         simulatorUuid: 'test-simulator-uuid',
312 |         bundleId: 'com.example.TestApp',
313 |       });
314 | 
315 |       const result = await stop_sim_log_capLogic(
316 |         {
317 |           logSessionId: 'test-session-id',
318 |         },
319 |         mockFileSystem,
320 |       );
321 | 
322 |       expect(result.isError).toBe(true);
323 |       expect(result.content[0].text).toContain(
324 |         'Error stopping log capture session test-session-id:',
325 |       );
326 |     });
327 |   });
328 | });
329 | 
```
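
For readers without the implementation file on this page, the following is a behavioral sketch of `stop_sim_log_capLogic` inferred from the assertions above. It is not the repository's code; details such as how the log file is read (the tests write real files via `fs/promises`) and how processes are stopped may differ in the actual `stop_sim_log_cap.ts` and `log_capture` utility.

```typescript
// Behavioral sketch inferred from the test expectations above; not the repository's code.
import { readFile } from 'fs/promises';
import { activeLogSessions } from '../../../../utils/log_capture.ts';
import type { ToolResponse } from '../../../../types/common.ts';

async function stopSimLogCapSketch(logSessionId: string): Promise<ToolResponse> {
  // Assumption: activeLogSessions is a Map keyed by session id (it exposes get/set/clear in the tests).
  const session = activeLogSessions.get(logSessionId);
  if (!session) {
    return {
      isError: true,
      content: [
        {
          type: 'text',
          text: `Error stopping log capture session ${logSessionId}: Log capture session not found: ${logSessionId}`,
        },
      ],
    };
  }
  try {
    // Stop the capture processes, then return the captured log contents.
    session.processes.forEach((proc) => proc.kill());
    const logContent = await readFile(session.logFilePath, 'utf-8');
    activeLogSessions.delete(logSessionId);
    return {
      content: [
        {
          type: 'text',
          text: `Log capture session ${logSessionId} stopped successfully. Log content follows:\n\n${logContent}`,
        },
      ],
    };
  } catch (error) {
    return {
      isError: true,
      content: [
        { type: 'text', text: `Error stopping log capture session ${logSessionId}: ${String(error)}` },
      ],
    };
  }
}
```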

--------------------------------------------------------------------------------
/src/mcp/tools/simulator-management/__tests__/set_sim_location.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Tests for set_sim_location plugin
  3 |  * Following CLAUDE.md testing standards with literal validation
  4 |  * Using pure dependency injection for deterministic testing
  5 |  */
  6 | 
  7 | import { describe, it, expect, beforeEach } from 'vitest';
  8 | import { z } from 'zod';
  9 | import { createMockExecutor, createNoopExecutor } from '../../../../test-utils/mock-executors.ts';
 10 | import setSimLocation, { set_sim_locationLogic } from '../set_sim_location.ts';
 11 | 
 12 | describe('set_sim_location tool', () => {
 13 |   // No mocks to clear since we use pure dependency injection
 14 | 
 15 |   describe('Export Field Validation (Literal)', () => {
 16 |     it('should have correct name', () => {
 17 |       expect(setSimLocation.name).toBe('set_sim_location');
 18 |     });
 19 | 
 20 |     it('should have correct description', () => {
 21 |       expect(setSimLocation.description).toBe('Sets a custom GPS location for the simulator.');
 22 |     });
 23 | 
 24 |     it('should have handler function', () => {
 25 |       expect(typeof setSimLocation.handler).toBe('function');
 26 |     });
 27 | 
 28 |     it('should have correct schema with simulatorUuid string field and latitude/longitude number fields', () => {
 29 |       const schema = z.object(setSimLocation.schema);
 30 | 
 31 |       // Valid inputs
 32 |       expect(
 33 |         schema.safeParse({
 34 |           simulatorUuid: 'test-uuid-123',
 35 |           latitude: 37.7749,
 36 |           longitude: -122.4194,
 37 |         }).success,
 38 |       ).toBe(true);
 39 |       expect(
 40 |         schema.safeParse({ simulatorUuid: 'ABC123-DEF456', latitude: 0, longitude: 0 }).success,
 41 |       ).toBe(true);
 42 |       expect(
 43 |         schema.safeParse({ simulatorUuid: 'test-uuid', latitude: 90, longitude: 180 }).success,
 44 |       ).toBe(true);
 45 |       expect(
 46 |         schema.safeParse({ simulatorUuid: 'test-uuid', latitude: -90, longitude: -180 }).success,
 47 |       ).toBe(true);
 48 |       expect(
 49 |         schema.safeParse({ simulatorUuid: 'test-uuid', latitude: 45.5, longitude: -73.6 }).success,
 50 |       ).toBe(true);
 51 | 
 52 |       // Invalid inputs
 53 |       expect(
 54 |         schema.safeParse({ simulatorUuid: 123, latitude: 37.7749, longitude: -122.4194 }).success,
 55 |       ).toBe(false);
 56 |       expect(
 57 |         schema.safeParse({ simulatorUuid: 'test-uuid', latitude: 'invalid', longitude: -122.4194 })
 58 |           .success,
 59 |       ).toBe(false);
 60 |       expect(
 61 |         schema.safeParse({ simulatorUuid: 'test-uuid', latitude: 37.7749, longitude: 'invalid' })
 62 |           .success,
 63 |       ).toBe(false);
 64 |       expect(
 65 |         schema.safeParse({ simulatorUuid: null, latitude: 37.7749, longitude: -122.4194 }).success,
 66 |       ).toBe(false);
 67 |       expect(schema.safeParse({ simulatorUuid: 'test-uuid', longitude: -122.4194 }).success).toBe(
 68 |         false,
 69 |       );
 70 |       expect(schema.safeParse({ simulatorUuid: 'test-uuid', latitude: 37.7749 }).success).toBe(
 71 |         false,
 72 |       );
 73 |       expect(schema.safeParse({ latitude: 37.7749, longitude: -122.4194 }).success).toBe(false);
 74 |       expect(schema.safeParse({}).success).toBe(false);
 75 |     });
 76 |   });
 77 | 
 78 |   describe('Command Generation', () => {
 79 |     it('should generate correct simctl command', async () => {
 80 |       let capturedCommand: string[] = [];
 81 | 
 82 |       const mockExecutor = async (command: string[]) => {
 83 |         capturedCommand = command;
 84 |         return {
 85 |           success: true,
 86 |           output: 'Location set successfully',
 87 |           error: undefined,
 88 |           process: { pid: 12345 },
 89 |         };
 90 |       };
 91 | 
 92 |       await set_sim_locationLogic(
 93 |         {
 94 |           simulatorUuid: 'test-uuid-123',
 95 |           latitude: 37.7749,
 96 |           longitude: -122.4194,
 97 |         },
 98 |         mockExecutor,
 99 |       );
100 | 
101 |       expect(capturedCommand).toEqual([
102 |         'xcrun',
103 |         'simctl',
104 |         'location',
105 |         'test-uuid-123',
106 |         'set',
107 |         '37.7749,-122.4194',
108 |       ]);
109 |     });
110 | 
111 |     it('should generate command with different coordinates', async () => {
112 |       let capturedCommand: string[] = [];
113 | 
114 |       const mockExecutor = async (command: string[]) => {
115 |         capturedCommand = command;
116 |         return {
117 |           success: true,
118 |           output: 'Location set successfully',
119 |           error: undefined,
120 |           process: { pid: 12345 },
121 |         };
122 |       };
123 | 
124 |       await set_sim_locationLogic(
125 |         {
126 |           simulatorUuid: 'different-uuid',
127 |           latitude: 45.5,
128 |           longitude: -73.6,
129 |         },
130 |         mockExecutor,
131 |       );
132 | 
133 |       expect(capturedCommand).toEqual([
134 |         'xcrun',
135 |         'simctl',
136 |         'location',
137 |         'different-uuid',
138 |         'set',
139 |         '45.5,-73.6',
140 |       ]);
141 |     });
142 | 
143 |     it('should generate command with negative coordinates', async () => {
144 |       let capturedCommand: string[] = [];
145 | 
146 |       const mockExecutor = async (command: string[]) => {
147 |         capturedCommand = command;
148 |         return {
149 |           success: true,
150 |           output: 'Location set successfully',
151 |           error: undefined,
152 |           process: { pid: 12345 },
153 |         };
154 |       };
155 | 
156 |       await set_sim_locationLogic(
157 |         {
158 |           simulatorUuid: 'test-uuid',
159 |           latitude: -90,
160 |           longitude: -180,
161 |         },
162 |         mockExecutor,
163 |       );
164 | 
165 |       expect(capturedCommand).toEqual([
166 |         'xcrun',
167 |         'simctl',
168 |         'location',
169 |         'test-uuid',
170 |         'set',
171 |         '-90,-180',
172 |       ]);
173 |     });
174 |   });
175 | 
176 |   describe('Response Processing', () => {
177 |     it('should handle successful location setting', async () => {
178 |       const mockExecutor = createMockExecutor({
179 |         success: true,
180 |         output: 'Location set successfully',
181 |         error: undefined,
182 |       });
183 | 
184 |       const result = await set_sim_locationLogic(
185 |         {
186 |           simulatorUuid: 'test-uuid-123',
187 |           latitude: 37.7749,
188 |           longitude: -122.4194,
189 |         },
190 |         mockExecutor,
191 |       );
192 | 
193 |       expect(result).toEqual({
194 |         content: [
195 |           {
196 |             type: 'text',
197 |             text: 'Successfully set simulator test-uuid-123 location to 37.7749,-122.4194',
198 |           },
199 |         ],
200 |       });
201 |     });
202 | 
203 |     it('should handle latitude validation failure', async () => {
204 |       const result = await set_sim_locationLogic(
205 |         {
206 |           simulatorUuid: 'test-uuid-123',
207 |           latitude: 95,
208 |           longitude: -122.4194,
209 |         },
210 |         createNoopExecutor(),
211 |       );
212 | 
213 |       expect(result).toEqual({
214 |         content: [
215 |           {
216 |             type: 'text',
217 |             text: 'Latitude must be between -90 and 90 degrees',
218 |           },
219 |         ],
220 |       });
221 |     });
222 | 
223 |     it('should handle longitude validation failure', async () => {
224 |       const result = await set_sim_locationLogic(
225 |         {
226 |           simulatorUuid: 'test-uuid-123',
227 |           latitude: 37.7749,
228 |           longitude: -185,
229 |         },
230 |         createNoopExecutor(),
231 |       );
232 | 
233 |       expect(result).toEqual({
234 |         content: [
235 |           {
236 |             type: 'text',
237 |             text: 'Longitude must be between -180 and 180 degrees',
238 |           },
239 |         ],
240 |       });
241 |     });
242 | 
243 |     it('should handle command failure', async () => {
244 |       const mockExecutor = createMockExecutor({
245 |         success: false,
246 |         output: '',
247 |         error: 'Simulator not found',
248 |       });
249 | 
250 |       const result = await set_sim_locationLogic(
251 |         {
252 |           simulatorUuid: 'invalid-uuid',
253 |           latitude: 37.7749,
254 |           longitude: -122.4194,
255 |         },
256 |         mockExecutor,
257 |       );
258 | 
259 |       expect(result).toEqual({
260 |         content: [
261 |           {
262 |             type: 'text',
263 |             text: 'Failed to set simulator location: Simulator not found',
264 |           },
265 |         ],
266 |       });
267 |     });
268 | 
269 |     it('should handle exception with Error object', async () => {
270 |       const mockExecutor = createMockExecutor(new Error('Connection failed'));
271 | 
272 |       const result = await set_sim_locationLogic(
273 |         {
274 |           simulatorUuid: 'test-uuid-123',
275 |           latitude: 37.7749,
276 |           longitude: -122.4194,
277 |         },
278 |         mockExecutor,
279 |       );
280 | 
281 |       expect(result).toEqual({
282 |         content: [
283 |           {
284 |             type: 'text',
285 |             text: 'Failed to set simulator location: Connection failed',
286 |           },
287 |         ],
288 |       });
289 |     });
290 | 
291 |     it('should handle exception with string error', async () => {
292 |       const mockExecutor = createMockExecutor('String error');
293 | 
294 |       const result = await set_sim_locationLogic(
295 |         {
296 |           simulatorUuid: 'test-uuid-123',
297 |           latitude: 37.7749,
298 |           longitude: -122.4194,
299 |         },
300 |         mockExecutor,
301 |       );
302 | 
303 |       expect(result).toEqual({
304 |         content: [
305 |           {
306 |             type: 'text',
307 |             text: 'Failed to set simulator location: String error',
308 |           },
309 |         ],
310 |       });
311 |     });
312 | 
313 |     it('should handle boundary values for coordinates', async () => {
314 |       const mockExecutor = createMockExecutor({
315 |         success: true,
316 |         output: 'Location set successfully',
317 |         error: undefined,
318 |       });
319 | 
320 |       const result = await set_sim_locationLogic(
321 |         {
322 |           simulatorUuid: 'test-uuid-123',
323 |           latitude: 90,
324 |           longitude: 180,
325 |         },
326 |         mockExecutor,
327 |       );
328 | 
329 |       expect(result).toEqual({
330 |         content: [
331 |           {
332 |             type: 'text',
333 |             text: 'Successfully set simulator test-uuid-123 location to 90,180',
334 |           },
335 |         ],
336 |       });
337 |     });
338 | 
339 |     it('should handle boundary values for negative coordinates', async () => {
340 |       const mockExecutor = createMockExecutor({
341 |         success: true,
342 |         output: 'Location set successfully',
343 |         error: undefined,
344 |       });
345 | 
346 |       const result = await set_sim_locationLogic(
347 |         {
348 |           simulatorUuid: 'test-uuid-123',
349 |           latitude: -90,
350 |           longitude: -180,
351 |         },
352 |         mockExecutor,
353 |       );
354 | 
355 |       expect(result).toEqual({
356 |         content: [
357 |           {
358 |             type: 'text',
359 |             text: 'Successfully set simulator test-uuid-123 location to -90,-180',
360 |           },
361 |         ],
362 |       });
363 |     });
364 | 
365 |     it('should handle zero coordinates', async () => {
366 |       const mockExecutor = createMockExecutor({
367 |         success: true,
368 |         output: 'Location set successfully',
369 |         error: undefined,
370 |       });
371 | 
372 |       const result = await set_sim_locationLogic(
373 |         {
374 |           simulatorUuid: 'test-uuid-123',
375 |           latitude: 0,
376 |           longitude: 0,
377 |         },
378 |         mockExecutor,
379 |       );
380 | 
381 |       expect(result).toEqual({
382 |         content: [
383 |           {
384 |             type: 'text',
385 |             text: 'Successfully set simulator test-uuid-123 location to 0,0',
386 |           },
387 |         ],
388 |       });
389 |     });
390 | 
391 |     it('should verify correct executor arguments', async () => {
392 |       let capturedArgs: any[] = [];
393 | 
394 |       const mockExecutor = async (...args: any[]) => {
395 |         capturedArgs = args;
396 |         return {
397 |           success: true,
398 |           output: 'Location set successfully',
399 |           error: undefined,
400 |           process: { pid: 12345 },
401 |         };
402 |       };
403 | 
404 |       await set_sim_locationLogic(
405 |         {
406 |           simulatorUuid: 'test-uuid-123',
407 |           latitude: 37.7749,
408 |           longitude: -122.4194,
409 |         },
410 |         mockExecutor,
411 |       );
412 | 
413 |       expect(capturedArgs).toEqual([
414 |         ['xcrun', 'simctl', 'location', 'test-uuid-123', 'set', '37.7749,-122.4194'],
415 |         'Set Simulator Location',
416 |         true,
417 |         {},
418 |       ]);
419 |     });
420 |   });
421 | });
422 | 
```
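
As with the previous test file, the sketch below reconstructs the behavior these tests pin down for `set_sim_locationLogic`: range-check the coordinates, then invoke `xcrun simctl location <uuid> set <lat>,<lon>` through the injected executor. It is an inference from the assertions above, not the repository's implementation in `set_sim_location.ts`.

```typescript
// Behavioral sketch inferred from the test expectations above; not the repository's code.
import type { CommandExecutor } from '../../../../utils/execution/index.ts';

async function setSimLocationSketch(
  params: { simulatorUuid: string; latitude: number; longitude: number },
  executor: CommandExecutor,
) {
  const { simulatorUuid, latitude, longitude } = params;
  if (latitude < -90 || latitude > 90) {
    return { content: [{ type: 'text', text: 'Latitude must be between -90 and 90 degrees' }] };
  }
  if (longitude < -180 || longitude > 180) {
    return { content: [{ type: 'text', text: 'Longitude must be between -180 and 180 degrees' }] };
  }
  try {
    const coords = `${latitude},${longitude}`;
    // The executor is called with a log prefix, useShell = true, and an empty env,
    // matching the captured arguments in the tests above.
    const result = await executor(
      ['xcrun', 'simctl', 'location', simulatorUuid, 'set', coords],
      'Set Simulator Location',
      true,
      {},
    );
    if (!result.success) {
      return {
        content: [{ type: 'text', text: `Failed to set simulator location: ${result.error}` }],
      };
    }
    return {
      content: [
        { type: 'text', text: `Successfully set simulator ${simulatorUuid} location to ${coords}` },
      ],
    };
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error);
    return { content: [{ type: 'text', text: `Failed to set simulator location: ${message}` }] };
  }
}
```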

--------------------------------------------------------------------------------
/src/mcp/tools/device/__tests__/list_devices.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Tests for list_devices plugin (device-shared)
  3 |  * This tests the re-exported plugin from device-workspace
  4 |  * Following CLAUDE.md testing standards with literal validation
  5 |  *
  6 |  * Note: This is a re-export test. Comprehensive handler tests are in device-workspace/list_devices.test.ts
  7 |  */
  8 | 
  9 | import { describe, it, expect } from 'vitest';
 10 | import {
 11 |   createMockExecutor,
 12 |   createMockFileSystemExecutor,
 13 | } from '../../../../test-utils/mock-executors.ts';
 14 | 
 15 | // Import the logic function and re-export
 16 | import listDevices, { list_devicesLogic } from '../list_devices.ts';
 17 | 
 18 | describe('list_devices plugin (device-shared)', () => {
 19 |   describe('Export Field Validation (Literal)', () => {
 20 |     it('should export list_devicesLogic function', () => {
 21 |       expect(typeof list_devicesLogic).toBe('function');
 22 |     });
 23 | 
 24 |     it('should have correct name', () => {
 25 |       expect(listDevices.name).toBe('list_devices');
 26 |     });
 27 | 
 28 |     it('should have correct description', () => {
 29 |       expect(listDevices.description).toBe(
 30 |         'Lists connected physical Apple devices (iPhone, iPad, Apple Watch, Apple TV, Apple Vision Pro) with their UUIDs, names, and connection status. Use this to discover physical devices for testing.',
 31 |       );
 32 |     });
 33 | 
 34 |     it('should have handler function', () => {
 35 |       expect(typeof listDevices.handler).toBe('function');
 36 |     });
 37 | 
 38 |     it('should have empty schema', () => {
 39 |       expect(listDevices.schema).toEqual({});
 40 |     });
 41 |   });
 42 | 
 43 |   describe('Command Generation Tests', () => {
 44 |     it('should generate correct devicectl command', async () => {
 45 |       const devicectlJson = {
 46 |         result: {
 47 |           devices: [
 48 |             {
 49 |               identifier: 'test-device-123',
 50 |               visibilityClass: 'Default',
 51 |               connectionProperties: {
 52 |                 pairingState: 'paired',
 53 |                 tunnelState: 'connected',
 54 |                 transportType: 'USB',
 55 |               },
 56 |               deviceProperties: {
 57 |                 name: 'Test iPhone',
 58 |                 platformIdentifier: 'com.apple.platform.iphoneos',
 59 |                 osVersionNumber: '17.0',
 60 |               },
 61 |               hardwareProperties: {
 62 |                 productType: 'iPhone15,2',
 63 |               },
 64 |             },
 65 |           ],
 66 |         },
 67 |       };
 68 | 
 69 |       // Track command calls
 70 |       const commandCalls: Array<{
 71 |         command: string[];
 72 |         logPrefix?: string;
 73 |         useShell?: boolean;
 74 |         env?: Record<string, string>;
 75 |       }> = [];
 76 | 
 77 |       // Create mock executor
 78 |       const mockExecutor = createMockExecutor({
 79 |         success: true,
 80 |         output: '',
 81 |       });
 82 | 
 83 |       // Wrap to track calls
 84 |       const trackingExecutor = async (
 85 |         command: string[],
 86 |         logPrefix?: string,
 87 |         useShell?: boolean,
 88 |         env?: Record<string, string>,
 89 |       ) => {
 90 |         commandCalls.push({ command, logPrefix, useShell, env });
 91 |         return mockExecutor(command, logPrefix, useShell, env);
 92 |       };
 93 | 
 94 |       // Create mock path dependencies
 95 |       const mockPathDeps = {
 96 |         tmpdir: () => '/tmp',
 97 |         join: (...paths: string[]) => paths.join('/'),
 98 |       };
 99 | 
100 |       // Create mock filesystem with specific behavior
101 |       const mockFsDeps = createMockFileSystemExecutor({
102 |         readFile: async () => JSON.stringify(devicectlJson),
103 |         unlink: async () => {},
104 |       });
105 | 
106 |       await list_devicesLogic({}, trackingExecutor, mockPathDeps, mockFsDeps);
107 | 
108 |       expect(commandCalls).toHaveLength(1);
109 |       expect(commandCalls[0].command).toEqual([
110 |         'xcrun',
111 |         'devicectl',
112 |         'list',
113 |         'devices',
114 |         '--json-output',
115 |         '/tmp/devicectl-123.json',
116 |       ]);
117 |       expect(commandCalls[0].logPrefix).toBe('List Devices (devicectl with JSON)');
118 |       expect(commandCalls[0].useShell).toBe(true);
119 |       expect(commandCalls[0].env).toBeUndefined();
120 |     });
121 | 
122 |     it('should generate correct xctrace fallback command', async () => {
123 |       // Track command calls
124 |       const commandCalls: Array<{
125 |         command: string[];
126 |         logPrefix?: string;
127 |         useShell?: boolean;
128 |         env?: Record<string, string>;
129 |       }> = [];
130 | 
131 |       // Create tracking executor with call count behavior
132 |       let callCount = 0;
133 |       const trackingExecutor = async (
134 |         command: string[],
135 |         logPrefix?: string,
136 |         useShell?: boolean,
137 |         env?: Record<string, string>,
138 |       ) => {
139 |         callCount++;
140 |         commandCalls.push({ command, logPrefix, useShell, env });
141 | 
142 |         if (callCount === 1) {
143 |           // First call fails (devicectl)
144 |           return {
145 |             success: false,
146 |             output: '',
147 |             error: 'devicectl failed',
148 |             process: { pid: 12345 },
149 |           };
150 |         } else {
151 |           // Second call succeeds (xctrace)
152 |           return {
153 |             success: true,
154 |             output: 'iPhone 15 (12345678-1234-1234-1234-123456789012)',
155 |             error: undefined,
156 |             process: { pid: 12345 },
157 |           };
158 |         }
159 |       };
160 | 
161 |       // Create mock path dependencies
162 |       const mockPathDeps = {
163 |         tmpdir: () => '/tmp',
164 |         join: (...paths: string[]) => paths.join('/'),
165 |       };
166 | 
167 |       // Create mock filesystem that throws for readFile
168 |       const mockFsDeps = createMockFileSystemExecutor({
169 |         readFile: async () => {
170 |           throw new Error('File not found');
171 |         },
172 |         unlink: async () => {},
173 |       });
174 | 
175 |       await list_devicesLogic({}, trackingExecutor, mockPathDeps, mockFsDeps);
176 | 
177 |       expect(commandCalls).toHaveLength(2);
178 |       expect(commandCalls[1].command).toEqual(['xcrun', 'xctrace', 'list', 'devices']);
179 |       expect(commandCalls[1].logPrefix).toBe('List Devices (xctrace)');
180 |       expect(commandCalls[1].useShell).toBe(true);
181 |       expect(commandCalls[1].env).toBeUndefined();
182 |     });
183 |   });
184 | 
185 |   describe('Success Path Tests', () => {
186 |     it('should return successful devicectl response with parsed devices', async () => {
187 |       const devicectlJson = {
188 |         result: {
189 |           devices: [
190 |             {
191 |               identifier: 'test-device-123',
192 |               visibilityClass: 'Default',
193 |               connectionProperties: {
194 |                 pairingState: 'paired',
195 |                 tunnelState: 'connected',
196 |                 transportType: 'USB',
197 |               },
198 |               deviceProperties: {
199 |                 name: 'Test iPhone',
200 |                 platformIdentifier: 'com.apple.platform.iphoneos',
201 |                 osVersionNumber: '17.0',
202 |               },
203 |               hardwareProperties: {
204 |                 productType: 'iPhone15,2',
205 |               },
206 |             },
207 |           ],
208 |         },
209 |       };
210 | 
211 |       const mockExecutor = createMockExecutor({
212 |         success: true,
213 |         output: '',
214 |       });
215 | 
216 |       // Create mock path dependencies
217 |       const mockPathDeps = {
218 |         tmpdir: () => '/tmp',
219 |         join: (...paths: string[]) => paths.join('/'),
220 |       };
221 | 
222 |       // Create mock filesystem with specific behavior
223 |       const mockFsDeps = createMockFileSystemExecutor({
224 |         readFile: async () => JSON.stringify(devicectlJson),
225 |         unlink: async () => {},
226 |       });
227 | 
228 |       const result = await list_devicesLogic({}, mockExecutor, mockPathDeps, mockFsDeps);
229 | 
230 |       expect(result).toEqual({
231 |         content: [
232 |           {
233 |             type: 'text',
234 |             text: "Connected Devices:\n\n✅ Available Devices:\n\n📱 Test iPhone\n   UDID: test-device-123\n   Model: iPhone15,2\n   Product Type: iPhone15,2\n   Platform: iOS 17.0\n   Connection: USB\n\nNext Steps:\n1. Build for device: build_device({ scheme: 'SCHEME', deviceId: 'DEVICE_UDID' })\n2. Run tests: test_device({ scheme: 'SCHEME', deviceId: 'DEVICE_UDID' })\n3. Get app path: get_device_app_path({ scheme: 'SCHEME' })\n\nNote: Use the device ID/UDID from above when required by other tools.\n",
235 |           },
236 |         ],
237 |       });
238 |     });
239 | 
240 |     it('should return successful xctrace fallback response', async () => {
241 |       // Create executor with call count behavior
242 |       let callCount = 0;
243 |       const mockExecutor = async (
244 |         command: string[],
245 |         logPrefix?: string,
246 |         useShell?: boolean,
247 |         env?: Record<string, string>,
248 |       ) => {
249 |         callCount++;
250 |         if (callCount === 1) {
251 |           // First call fails (devicectl)
252 |           return {
253 |             success: false,
254 |             output: '',
255 |             error: 'devicectl failed',
256 |             process: { pid: 12345 },
257 |           };
258 |         } else {
259 |           // Second call succeeds (xctrace)
260 |           return {
261 |             success: true,
262 |             output: 'iPhone 15 (12345678-1234-1234-1234-123456789012)',
263 |             error: undefined,
264 |             process: { pid: 12345 },
265 |           };
266 |         }
267 |       };
268 | 
269 |       // Create mock path dependencies
270 |       const mockPathDeps = {
271 |         tmpdir: () => '/tmp',
272 |         join: (...paths: string[]) => paths.join('/'),
273 |       };
274 | 
275 |       // Create mock filesystem that throws for readFile
276 |       const mockFsDeps = createMockFileSystemExecutor({
277 |         readFile: async () => {
278 |           throw new Error('File not found');
279 |         },
280 |         unlink: async () => {},
281 |       });
282 | 
283 |       const result = await list_devicesLogic({}, mockExecutor, mockPathDeps, mockFsDeps);
284 | 
285 |       expect(result).toEqual({
286 |         content: [
287 |           {
288 |             type: 'text',
289 |             text: 'Device listing (xctrace output):\n\niPhone 15 (12345678-1234-1234-1234-123456789012)\n\nNote: For better device information, please upgrade to Xcode 15 or later which supports the modern devicectl command.',
290 |           },
291 |         ],
292 |       });
293 |     });
294 | 
295 |     it('should return successful no devices found response', async () => {
296 |       const devicectlJson = {
297 |         result: {
298 |           devices: [],
299 |         },
300 |       };
301 | 
302 |       // Create executor with call count behavior
303 |       let callCount = 0;
304 |       const mockExecutor = async (
305 |         command: string[],
306 |         logPrefix?: string,
307 |         useShell?: boolean,
308 |         env?: Record<string, string>,
309 |       ) => {
310 |         callCount++;
311 |         if (callCount === 1) {
312 |           // First call succeeds (devicectl)
313 |           return {
314 |             success: true,
315 |             output: '',
316 |             error: undefined,
317 |             process: { pid: 12345 },
318 |           };
319 |         } else {
320 |           // Second call succeeds (xctrace) with empty output
321 |           return {
322 |             success: true,
323 |             output: '',
324 |             error: undefined,
325 |             process: { pid: 12345 },
326 |           };
327 |         }
328 |       };
329 | 
330 |       // Create mock path dependencies
331 |       const mockPathDeps = {
332 |         tmpdir: () => '/tmp',
333 |         join: (...paths: string[]) => paths.join('/'),
334 |       };
335 | 
336 |       // Create mock filesystem with empty devices response
337 |       const mockFsDeps = createMockFileSystemExecutor({
338 |         readFile: async () => JSON.stringify(devicectlJson),
339 |         unlink: async () => {},
340 |       });
341 | 
342 |       const result = await list_devicesLogic({}, mockExecutor, mockPathDeps, mockFsDeps);
343 | 
344 |       expect(result).toEqual({
345 |         content: [
346 |           {
347 |             type: 'text',
348 |             text: 'Device listing (xctrace output):\n\n\n\nNote: For better device information, please upgrade to Xcode 15 or later which supports the modern devicectl command.',
349 |           },
350 |         ],
351 |       });
352 |     });
353 |   });
354 | 
355 |   // Note: Handler functionality is thoroughly tested in device-workspace/list_devices.test.ts
356 |   // This test file only verifies the re-export works correctly
357 | });
358 | 
```
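
The command-generation tests above imply a primary/fallback flow: try `xcrun devicectl list devices --json-output <tmpfile>` first and fall back to `xcrun xctrace list devices` when that route fails (or yields no devices). A minimal sketch of that command selection, written against the executor signature the tests use; the `ExecResult` and `Executor` types here are local stand-ins rather than the project's real types, and the real implementation additionally parses and formats the devicectl JSON:

```typescript
interface ExecResult {
  success: boolean;
  output: string;
  error?: string;
}

type Executor = (
  command: string[],
  logPrefix?: string,
  useShell?: boolean,
  env?: Record<string, string>,
) => Promise<ExecResult>;

// Sketch of the devicectl-first, xctrace-fallback flow the tests exercise.
async function listDevicesRaw(executor: Executor, jsonOutputPath: string): Promise<string> {
  // Primary path: devicectl writes structured JSON to a temp file.
  const devicectl = await executor(
    ['xcrun', 'devicectl', 'list', 'devices', '--json-output', jsonOutputPath],
    'List Devices (devicectl with JSON)',
    true,
  );
  if (devicectl.success) {
    // (The real logic reads and parses the JSON here, and also falls back
    // to xctrace when the parsed result contains no devices.)
    return `devicectl JSON written to ${jsonOutputPath}`;
  }

  // Fallback path: older Xcode versions only have xctrace.
  const xctrace = await executor(
    ['xcrun', 'xctrace', 'list', 'devices'],
    'List Devices (xctrace)',
    true,
  );
  return xctrace.output;
}
```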

--------------------------------------------------------------------------------
/.claude/agents/xcodebuild-mcp-qa-tester.md:
--------------------------------------------------------------------------------

```markdown
  1 | ---
  2 | name: xcodebuild-mcp-qa-tester
  3 | description: Use this agent when you need comprehensive black box testing of the XcodeBuildMCP server using Reloaderoo. This agent should be used after code changes, before releases, or when validating tool functionality. Examples:\n\n- <example>\n  Context: The user has made changes to XcodeBuildMCP tools and wants to validate everything works correctly.\n  user: "I've updated the simulator tools and need to make sure they all work properly"\n  assistant: "I'll use the xcodebuild-mcp-qa-tester agent to perform comprehensive black box testing of all simulator tools using Reloaderoo"\n  <commentary>\n  Since the user needs thorough testing of XcodeBuildMCP functionality, use the xcodebuild-mcp-qa-tester agent to systematically validate all tools and resources.\n  </commentary>\n</example>\n\n- <example>\n  Context: The user is preparing for a release and needs full QA validation.\n  user: "We're about to release version 2.1.0 and need complete testing coverage"\n  assistant: "I'll launch the xcodebuild-mcp-qa-tester agent to perform thorough black box testing of all XcodeBuildMCP tools and resources following the manual testing procedures"\n  <commentary>\n  For release validation, the QA tester agent should perform comprehensive testing to ensure all functionality works as expected.\n  </commentary>\n</example>
  4 | tools: Task, Bash, Glob, Grep, LS, ExitPlanMode, Read, NotebookRead, WebFetch, TodoWrite, WebSearch, ListMcpResourcesTool, ReadMcpResourceTool
  5 | color: purple
  6 | ---
  7 | 
  8 | You are a senior quality assurance software engineer specializing in black box testing of the XcodeBuildMCP server. Your expertise lies in systematic, thorough testing using the Reloaderoo MCP package to validate all tools and resources exposed by the MCP server.
  9 | 
 10 | ## Your Core Responsibilities
 11 | 
 12 | 1. **Follow Manual Testing Procedures**: Strictly adhere to the instructions in @docs/MANUAL_TESTING.md for systematic test execution
 13 | 2. **Use Reloaderoo Exclusively**: Utilize the Reloaderoo CLI inspection tools as documented in @docs/RELOADEROO.md for all testing activities
 14 | 3. **Comprehensive Coverage**: Test ALL tools and resources - never skip or assume functionality works
 15 | 4. **Black Box Approach**: Test from the user perspective without knowledge of internal implementation details
 16 | 5. **Live Documentation**: Create and continuously update a markdown test report showing real-time progress
 17 | 6. **MANDATORY COMPLETION**: Continue testing until EVERY SINGLE tool and resource has been tested - DO NOT STOP until 100% completion is achieved
 18 | 
 19 | ## MANDATORY Test Report Creation and Updates
 20 | 
 21 | ### Step 1: Create Initial Test Report (IMMEDIATELY)
 22 | **BEFORE TESTING BEGINS**, you MUST:
 23 | 
 24 | 1. **Create Test Report File**: Generate a markdown file in the workspace root named `TESTING_REPORT_<YYYY-MM-DD>_<HH-MM>.md`
 25 | 2. **Include Report Header**: Date, time, environment information, and testing scope
 26 | 3. **Discovery Phase**: Run `list-tools` and `list-resources` to get complete inventory
 27 | 4. **Create Checkbox Lists**: Add unchecked markdown checkboxes for every single tool and resource discovered
 28 | 
 29 | ### Test Report Initial Structure
 30 | ```markdown
 31 | # XcodeBuildMCP Testing Report
 32 | **Date:** YYYY-MM-DD HH:MM:SS  
 33 | **Environment:** [System details]  
 34 | **Testing Scope:** Comprehensive black box testing of all tools and resources
 35 | 
 36 | ## Test Summary
 37 | - **Total Tools:** [X]
 38 | - **Total Resources:** [Y]
 39 | - **Tests Completed:** 0/[X+Y]
 40 | - **Tests Passed:** 0
 41 | - **Tests Failed:** 0
 42 | 
 43 | ## Tools Testing Checklist
 44 | - [ ] Tool: tool_name_1 - Test with valid parameters
 45 | - [ ] Tool: tool_name_2 - Test with valid parameters
 46 | [... all tools discovered ...]
 47 | 
 48 | ## Resources Testing Checklist  
 49 | - [ ] Resource: resource_uri_1 - Validate content and accessibility
 50 | - [ ] Resource: resource_uri_2 - Validate content and accessibility
 51 | [... all resources discovered ...]
 52 | 
 53 | ## Detailed Test Results
 54 | [Updated as tests are completed]
 55 | 
 56 | ## Failed Tests
 57 | [Updated if any failures occur]
 58 | ```
 59 | 
 60 | ### Step 2: Continuous Updates (AFTER EACH TEST)
 61 | **IMMEDIATELY after completing each test**, you MUST update the test report with:
 62 | 
 63 | 1. **Check the box**: Change `- [ ]` to `- [x]` for the completed test
 64 | 2. **Update test summary counts**: Increment completed/passed/failed counters
 65 | 3. **Add detailed result**: Append to "Detailed Test Results" section with:
 66 |    - Test command used
 67 |    - Verification method
 68 |    - Validation summary
 69 |    - Pass/fail status
 70 | 
 71 | ### Live Update Example
 72 | After testing `list_sims` tool, update the report:
 73 | ```markdown
 74 | - [x] Tool: list_sims - Test with valid parameters ✅ PASSED
 75 | 
 76 | ## Detailed Test Results
 77 | 
 78 | ### Tool: list_sims ✅ PASSED
 79 | **Command:** `npx reloaderoo@latest inspect call-tool list_sims --params '{}' -- node build/index.js`
 80 | **Verification:** Command returned JSON array with 6 simulator objects
 81 | **Validation Summary:** Successfully discovered 6 available simulators with UUIDs, names, and boot status
 82 | **Timestamp:** 2025-01-29 14:30:15
 83 | ```
 84 | 
 85 | ## Testing Methodology
 86 | 
 87 | ### Pre-Testing Setup
 88 | - Always start by building the project: `npm run build`
 89 | - Verify Reloaderoo is available: `npx reloaderoo@latest --help`
 90 | - Check server connectivity: `npx reloaderoo@latest inspect ping -- node build/index.js`
 91 | - Get server information: `npx reloaderoo@latest inspect server-info -- node build/index.js`
 92 | 
 93 | ### Systematic Testing Workflow
 94 | 1. **Create Initial Report**: Generate test report with all checkboxes unchecked
 95 | 2. **Individual Testing**: Test each tool/resource systematically
 96 | 3. **Live Updates**: Update report immediately after each test completion
 97 | 4. **Continuous Tracking**: Report serves as real-time progress tracker
 98 | 5. **CONTINUOUS EXECUTION**: Never stop until ALL tools and resources are tested (100% completion)
 99 | 6. **Progress Monitoring**: Check total tested vs total available - continue if any remain untested
100 | 7. **Final Review**: Ensure all checkboxes are marked and results documented
101 | 
102 | ### CRITICAL: NO EARLY TERMINATION
103 | - **NEVER STOP** testing until every single tool and resource has been tested
104 | - If you have tested X out of Y items, IMMEDIATELY continue testing the remaining Y-X items
105 | - The only acceptable completion state is 100% coverage (all checkboxes checked)
106 | - Do not summarize or conclude until literally every tool and resource has been individually tested
107 | - Use the test report checkbox count as your progress indicator - if any boxes remain unchecked, CONTINUE TESTING
108 | 
109 | ### Tool Testing Process
110 | For each tool:
111 | 1. Execute test with `npx reloaderoo@latest inspect call-tool <tool_name> --params '<json>' -- node build/index.js`
112 | 2. Verify response format and content
113 | 3. **IMMEDIATELY** update test report with result
114 | 4. Check the box and add detailed verification summary
115 | 5. Move to next tool
116 | 
117 | ### Resource Testing Process
118 | For each resource:
119 | 1. Execute test with `npx reloaderoo@latest inspect read-resource "<uri>" -- node build/index.js`
120 | 2. Verify resource accessibility and content format
121 | 3. **IMMEDIATELY** update test report with result
122 | 4. Check the box and add detailed verification summary
123 | 5. Move to next resource
124 | 
125 | ## Quality Standards
126 | 
127 | ### Thoroughness Over Speed
128 | - **NEVER rush testing** - take time to be comprehensive
129 | - Test every single tool and resource without exception
130 | - Update the test report after every single test - no batching
131 | - The markdown report is the single source of truth for progress
132 | 
133 | ### Test Documentation Requirements
134 | - Record the exact command used for each test
135 | - Document expected vs actual results
136 | - Note any warnings, errors, or unexpected behavior
137 | - Include full JSON responses for failed tests
138 | - Categorize issues by severity (critical, major, minor)
139 | - **MANDATORY**: Update test report immediately after each test completion
140 | 
141 | ### Validation Criteria
142 | - All tools must respond without errors for valid inputs
143 | - Error messages must be clear and actionable for invalid inputs
144 | - JSON responses must be properly formatted
145 | - Resource URIs must be accessible and return valid data
146 | - Tool descriptions must accurately reflect functionality
147 | 
148 | ## Testing Environment Considerations
149 | 
150 | ### Prerequisites Validation
151 | - Verify Xcode is installed and accessible
152 | - Check for required simulators and devices
153 | - Validate development environment setup
154 | - Ensure all dependencies are available
155 | 
156 | ### Platform-Specific Testing
157 | - Test iOS simulator tools with actual simulators
158 | - Validate device tools (when devices are available)
159 | - Test macOS-specific functionality
160 | - Verify Swift Package Manager integration
161 | 
162 | ## Test Report Management
163 | 
164 | ### File Naming Convention
165 | - Format: `TESTING_REPORT_<YYYY-MM-DD>_<HH-MM>.md`
166 | - Location: Workspace root directory
167 | - Example: `TESTING_REPORT_2025-01-29_14-30.md`
168 | 
169 | ### Update Requirements
170 | - **Real-time updates**: Update after every single test completion
171 | - **No batching**: Never wait to update multiple tests at once
172 | - **Checkbox tracking**: Visual progress through checked/unchecked boxes
173 | - **Detailed results**: Each test gets a dedicated result section
174 | - **Summary statistics**: Keep running totals updated
175 | 
176 | ### Verification Summary Requirements
177 | Every test result MUST answer: "How did you know this test passed?"
178 | 
179 | Examples of strong verification summaries:
180 | - `Successfully discovered 84 tools in server response`
181 | - `Returned valid app bundle path: /path/to/MyApp.app`
182 | - `Listed 6 simulators with expected UUID format and boot status`
183 | - `Resource returned JSON array with 4 device objects containing UDID and name fields`
184 | - `Tool correctly rejected invalid parameters with clear error message`
185 | 
186 | ## Error Investigation Protocol
187 | 
188 | 1. **Reproduce Consistently**: Ensure errors can be reproduced reliably
189 | 2. **Isolate Variables**: Test with minimal parameters to isolate issues
190 | 3. **Check Prerequisites**: Verify all required tools and environments are available
191 | 4. **Document Context**: Include system information, versions, and environment details
192 | 5. **Update Report**: Document failures immediately in the test report
193 | 
194 | ## Critical Success Criteria
195 | 
196 | - ✅ Test report created BEFORE any testing begins with all checkboxes unchecked
197 | - ✅ Every single tool has its own checkbox and detailed result section
198 | - ✅ Every single resource has its own checkbox and detailed result section
199 | - ✅ Report updated IMMEDIATELY after each individual test completion
200 | - ✅ No tool or resource is skipped or grouped together
201 | - ✅ Each verification summary clearly explains how success was determined
202 | - ✅ Real-time progress tracking through checkbox completion
203 | - ✅ Test report serves as the single source of truth for all testing progress
204 | - ✅ **100% COMPLETION MANDATORY**: All checkboxes must be checked before considering testing complete
205 | 
206 | ## ABSOLUTE COMPLETION REQUIREMENT
207 | 
208 | **YOU MUST NOT STOP TESTING UNTIL:**
209 | - Every single tool discovered by `list-tools` has been individually tested
210 | - Every single resource discovered by `list-resources` has been individually tested  
211 | - All checkboxes in your test report are marked as complete
212 | - The test summary shows X/X completion (100%)
213 | 
214 | **IF TESTING IS NOT 100% COMPLETE:**
215 | - Immediately identify which tools/resources remain untested
216 | - Continue systematic testing of the remaining items
217 | - Update the test report after each additional test
218 | - Do not provide final summaries or conclusions until literally everything is tested
219 | 
220 | Remember: Your role is to be the final quality gate before release. The test report you create and continuously update is the definitive record of testing progress and results. Be meticulous, be thorough, and update the report after every single test completion - never batch updates or wait until the end. **NEVER CONCLUDE TESTING UNTIL 100% COMPLETION IS ACHIEVED.**
221 | 
```
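
The agent instructions above reduce to a loop: run one Reloaderoo `inspect` command per tool, then immediately append the outcome to the live report. A small TypeScript sketch of that loop; the helper name, report path, and the assumption that a failed `call-tool` surfaces as a non-zero exit code are illustrative, not part of the agent spec:

```typescript
import { execFileSync } from 'node:child_process';
import { appendFileSync } from 'node:fs';

// Illustrative report path following the naming convention in the agent spec.
const REPORT = 'TESTING_REPORT_2025-01-29_14-30.md';

function testTool(toolName: string, params: Record<string, unknown>): void {
  const args = [
    'reloaderoo@latest', 'inspect', 'call-tool', toolName,
    '--params', JSON.stringify(params),
    '--', 'node', 'build/index.js',
  ];
  try {
    // Assumption: a failed call surfaces as a non-zero exit code and throws here.
    const output = execFileSync('npx', args, { encoding: 'utf8' });
    appendFileSync(
      REPORT,
      `\n### Tool: ${toolName} ✅ PASSED\n**Command:** \`npx ${args.join(' ')}\`\n` +
        `**Verification:** ${output.slice(0, 200)}\n`,
    );
  } catch (error) {
    appendFileSync(REPORT, `\n### Tool: ${toolName} ❌ FAILED\n${String(error)}\n`);
  }
}

// Example: testTool('list_sims', {});
```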

--------------------------------------------------------------------------------
/src/mcp/tools/project-discovery/__tests__/discover_projs.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Pure dependency injection test for discover_projs plugin
  3 |  *
  4 |  * Tests the plugin structure and project discovery functionality
  5 |  * including parameter validation, file system operations, and response formatting.
  6 |  *
  7 |  * Uses createMockFileSystemExecutor for file system operations.
  8 |  */
  9 | 
 10 | import { describe, it, expect } from 'vitest';
 11 | import { z } from 'zod';
 12 | import plugin, { discover_projsLogic } from '../discover_projs.ts';
 13 | import { createMockFileSystemExecutor } from '../../../../test-utils/mock-executors.ts';
 14 | 
 15 | describe('discover_projs plugin', () => {
 16 |   // Shared mock file system executor used across the tests below; individual
 17 |   // tests override stat/readdir as needed, so this just provides safe defaults.
 18 |   // Typed as `any` so those per-test overrides stay unconstrained.
 19 |   const mockFileSystemExecutor: any = createMockFileSystemExecutor({
 20 |     stat: async () => ({ isDirectory: () => true }),
 21 |     readdir: async () => [],
 22 |   });
 23 | 
 24 |   describe('Export Field Validation (Literal)', () => {
 25 |     it('should have correct name', () => {
 26 |       expect(plugin.name).toBe('discover_projs');
 27 |     });
 28 | 
 29 |     it('should have correct description', () => {
 30 |       expect(plugin.description).toBe(
 31 |         'Scans a directory (defaults to workspace root) to find Xcode project (.xcodeproj) and workspace (.xcworkspace) files.',
 32 |       );
 33 |     });
 34 | 
 35 |     it('should have handler function', () => {
 36 |       expect(typeof plugin.handler).toBe('function');
 37 |     });
 38 | 
 39 |     it('should validate schema with valid inputs', () => {
 40 |       const schema = z.object(plugin.schema);
 41 |       expect(schema.safeParse({ workspaceRoot: '/path/to/workspace' }).success).toBe(true);
 42 |       expect(
 43 |         schema.safeParse({ workspaceRoot: '/path/to/workspace', scanPath: 'subdir' }).success,
 44 |       ).toBe(true);
 45 |       expect(schema.safeParse({ workspaceRoot: '/path/to/workspace', maxDepth: 3 }).success).toBe(
 46 |         true,
 47 |       );
 48 |       expect(
 49 |         schema.safeParse({
 50 |           workspaceRoot: '/path/to/workspace',
 51 |           scanPath: 'subdir',
 52 |           maxDepth: 5,
 53 |         }).success,
 54 |       ).toBe(true);
 55 |     });
 56 | 
 57 |     it('should validate schema with invalid inputs', () => {
 58 |       const schema = z.object(plugin.schema);
 59 |       expect(schema.safeParse({}).success).toBe(false);
 60 |       expect(schema.safeParse({ workspaceRoot: 123 }).success).toBe(false);
 61 |       expect(schema.safeParse({ workspaceRoot: '/path', scanPath: 123 }).success).toBe(false);
 62 |       expect(schema.safeParse({ workspaceRoot: '/path', maxDepth: 'invalid' }).success).toBe(false);
 63 |       expect(schema.safeParse({ workspaceRoot: '/path', maxDepth: -1 }).success).toBe(false);
 64 |       expect(schema.safeParse({ workspaceRoot: '/path', maxDepth: 1.5 }).success).toBe(false);
 65 |     });
 66 |   });
 67 | 
 68 |   describe('Handler Behavior (Complete Literal Returns)', () => {
 69 |     it('should handle workspaceRoot parameter correctly when provided', async () => {
 70 |       mockFileSystemExecutor.stat = async () => ({ isDirectory: () => true });
 71 |       mockFileSystemExecutor.readdir = async () => [];
 72 | 
 73 |       const result = await discover_projsLogic(
 74 |         { workspaceRoot: '/workspace' },
 75 |         mockFileSystemExecutor,
 76 |       );
 77 | 
 78 |       expect(result).toEqual({
 79 |         content: [{ type: 'text', text: 'Discovery finished. Found 0 projects and 0 workspaces.' }],
 80 |         isError: false,
 81 |       });
 82 |     });
 83 | 
 84 |     it('should return error when scan path does not exist', async () => {
 85 |       mockFileSystemExecutor.stat = async () => {
 86 |         throw new Error('ENOENT: no such file or directory');
 87 |       };
 88 | 
 89 |       const result = await discover_projsLogic(
 90 |         {
 91 |           workspaceRoot: '/workspace',
 92 |           scanPath: '.',
 93 |           maxDepth: 5,
 94 |         },
 95 |         mockFileSystemExecutor,
 96 |       );
 97 | 
 98 |       expect(result).toEqual({
 99 |         content: [
100 |           {
101 |             type: 'text',
102 |             text: 'Failed to access scan path: /workspace. Error: ENOENT: no such file or directory',
103 |           },
104 |         ],
105 |         isError: true,
106 |       });
107 |     });
108 | 
109 |     it('should return error when scan path is not a directory', async () => {
110 |       mockFileSystemExecutor.stat = async () => ({ isDirectory: () => false });
111 | 
112 |       const result = await discover_projsLogic(
113 |         {
114 |           workspaceRoot: '/workspace',
115 |           scanPath: '.',
116 |           maxDepth: 5,
117 |         },
118 |         mockFileSystemExecutor,
119 |       );
120 | 
121 |       expect(result).toEqual({
122 |         content: [{ type: 'text', text: 'Scan path is not a directory: /workspace' }],
123 |         isError: true,
124 |       });
125 |     });
126 | 
127 |     it('should return success with no projects found', async () => {
128 |       mockFileSystemExecutor.stat = async () => ({ isDirectory: () => true });
129 |       mockFileSystemExecutor.readdir = async () => [];
130 | 
131 |       const result = await discover_projsLogic(
132 |         {
133 |           workspaceRoot: '/workspace',
134 |           scanPath: '.',
135 |           maxDepth: 5,
136 |         },
137 |         mockFileSystemExecutor,
138 |       );
139 | 
140 |       expect(result).toEqual({
141 |         content: [{ type: 'text', text: 'Discovery finished. Found 0 projects and 0 workspaces.' }],
142 |         isError: false,
143 |       });
144 |     });
145 | 
146 |     it('should return success with projects found', async () => {
147 |       mockFileSystemExecutor.stat = async () => ({ isDirectory: () => true });
148 |       mockFileSystemExecutor.readdir = async () => [
149 |         { name: 'MyApp.xcodeproj', isDirectory: () => true, isSymbolicLink: () => false },
150 |         { name: 'MyWorkspace.xcworkspace', isDirectory: () => true, isSymbolicLink: () => false },
151 |       ];
152 | 
153 |       const result = await discover_projsLogic(
154 |         {
155 |           workspaceRoot: '/workspace',
156 |           scanPath: '.',
157 |           maxDepth: 5,
158 |         },
159 |         mockFileSystemExecutor,
160 |       );
161 | 
162 |       expect(result).toEqual({
163 |         content: [
164 |           { type: 'text', text: 'Discovery finished. Found 1 projects and 1 workspaces.' },
165 |           { type: 'text', text: 'Projects found:\n - /workspace/MyApp.xcodeproj' },
166 |           { type: 'text', text: 'Workspaces found:\n - /workspace/MyWorkspace.xcworkspace' },
167 |         ],
168 |         isError: false,
169 |       });
170 |     });
171 | 
172 |     it('should handle fs error with code', async () => {
173 |       const error = new Error('Permission denied');
174 |       (error as any).code = 'EACCES';
175 |       mockFileSystemExecutor.stat = async () => {
176 |         throw error;
177 |       };
178 | 
179 |       const result = await discover_projsLogic(
180 |         {
181 |           workspaceRoot: '/workspace',
182 |           scanPath: '.',
183 |           maxDepth: 5,
184 |         },
185 |         mockFileSystemExecutor,
186 |       );
187 | 
188 |       expect(result).toEqual({
189 |         content: [
190 |           {
191 |             type: 'text',
192 |             text: 'Failed to access scan path: /workspace. Error: Permission denied',
193 |           },
194 |         ],
195 |         isError: true,
196 |       });
197 |     });
198 | 
199 |     it('should handle string error', async () => {
200 |       mockFileSystemExecutor.stat = async () => {
201 |         throw 'String error';
202 |       };
203 | 
204 |       const result = await discover_projsLogic(
205 |         {
206 |           workspaceRoot: '/workspace',
207 |           scanPath: '.',
208 |           maxDepth: 5,
209 |         },
210 |         mockFileSystemExecutor,
211 |       );
212 | 
213 |       expect(result).toEqual({
214 |         content: [
215 |           { type: 'text', text: 'Failed to access scan path: /workspace. Error: String error' },
216 |         ],
217 |         isError: true,
218 |       });
219 |     });
220 | 
221 |     it('should handle workspaceRoot parameter correctly', async () => {
222 |       mockFileSystemExecutor.stat = async () => ({ isDirectory: () => true });
223 |       mockFileSystemExecutor.readdir = async () => [];
224 | 
225 |       const result = await discover_projsLogic(
226 |         {
227 |           workspaceRoot: '/workspace',
228 |         },
229 |         mockFileSystemExecutor,
230 |       );
231 | 
232 |       expect(result).toEqual({
233 |         content: [{ type: 'text', text: 'Discovery finished. Found 0 projects and 0 workspaces.' }],
234 |         isError: false,
235 |       });
236 |     });
237 | 
238 |     it('should handle scan path outside workspace root', async () => {
239 |       // Mock path normalization to simulate path outside workspace root
240 |       mockFileSystemExecutor.stat = async () => ({ isDirectory: () => true });
241 |       mockFileSystemExecutor.readdir = async () => [];
242 | 
243 |       const result = await discover_projsLogic(
244 |         {
245 |           workspaceRoot: '/workspace',
246 |           scanPath: '../outside',
247 |           maxDepth: 5,
248 |         },
249 |         mockFileSystemExecutor,
250 |       );
251 | 
252 |       expect(result).toEqual({
253 |         content: [{ type: 'text', text: 'Discovery finished. Found 0 projects and 0 workspaces.' }],
254 |         isError: false,
255 |       });
256 |     });
257 | 
258 |     it('should handle error with object containing message and code properties', async () => {
259 |       const errorObject = {
260 |         message: 'Access denied',
261 |         code: 'EACCES',
262 |       };
263 |       mockFileSystemExecutor.stat = async () => {
264 |         throw errorObject;
265 |       };
266 | 
267 |       const result = await discover_projsLogic(
268 |         {
269 |           workspaceRoot: '/workspace',
270 |           scanPath: '.',
271 |           maxDepth: 5,
272 |         },
273 |         mockFileSystemExecutor,
274 |       );
275 | 
276 |       expect(result).toEqual({
277 |         content: [
278 |           { type: 'text', text: 'Failed to access scan path: /workspace. Error: Access denied' },
279 |         ],
280 |         isError: true,
281 |       });
282 |     });
283 | 
284 |     it('should handle max depth reached during recursive scan', async () => {
285 |       let readdirCallCount = 0;
286 | 
287 |       mockFileSystemExecutor.stat = async () => ({ isDirectory: () => true });
288 |       mockFileSystemExecutor.readdir = async () => {
289 |         readdirCallCount++;
290 |         if (readdirCallCount <= 3) {
291 |           return [
292 |             {
293 |               name: `subdir${readdirCallCount}`,
294 |               isDirectory: () => true,
295 |               isSymbolicLink: () => false,
296 |             },
297 |           ];
298 |         }
299 |         return [];
300 |       };
301 | 
302 |       const result = await discover_projsLogic(
303 |         {
304 |           workspaceRoot: '/workspace',
305 |           scanPath: '.',
306 |           maxDepth: 3,
307 |         },
308 |         mockFileSystemExecutor,
309 |       );
310 | 
311 |       expect(result).toEqual({
312 |         content: [{ type: 'text', text: 'Discovery finished. Found 0 projects and 0 workspaces.' }],
313 |         isError: false,
314 |       });
315 |     });
316 | 
317 |     it('should handle skipped directory types during scan', async () => {
318 |       mockFileSystemExecutor.stat = async () => ({ isDirectory: () => true });
319 |       mockFileSystemExecutor.readdir = async () => [
320 |         { name: 'build', isDirectory: () => true, isSymbolicLink: () => false },
321 |         { name: 'DerivedData', isDirectory: () => true, isSymbolicLink: () => false },
322 |         { name: 'symlink', isDirectory: () => true, isSymbolicLink: () => true },
323 |         { name: 'regular.txt', isDirectory: () => false, isSymbolicLink: () => false },
324 |       ];
325 | 
326 |       const result = await discover_projsLogic(
327 |         {
328 |           workspaceRoot: '/workspace',
329 |           scanPath: '.',
330 |           maxDepth: 5,
331 |         },
332 |         mockFileSystemExecutor,
333 |       );
334 | 
335 |       // Test that skipped directories and files are correctly filtered out
336 |       expect(result).toEqual({
337 |         content: [{ type: 'text', text: 'Discovery finished. Found 0 projects and 0 workspaces.' }],
338 |         isError: false,
339 |       });
340 |     });
341 | 
342 |     it('should handle error during recursive directory reading', async () => {
343 |       mockFileSystemExecutor.stat = async () => ({ isDirectory: () => true });
344 |       mockFileSystemExecutor.readdir = async () => {
345 |         const readError = new Error('Permission denied');
346 |         (readError as any).code = 'EACCES';
347 |         throw readError;
348 |       };
349 | 
350 |       const result = await discover_projsLogic(
351 |         {
352 |           workspaceRoot: '/workspace',
353 |           scanPath: '.',
354 |           maxDepth: 5,
355 |         },
356 |         mockFileSystemExecutor,
357 |       );
358 | 
359 |       // The function should handle the error gracefully and continue
360 |       expect(result).toEqual({
361 |         content: [{ type: 'text', text: 'Discovery finished. Found 0 projects and 0 workspaces.' }],
362 |         isError: false,
363 |       });
364 |     });
365 |   });
366 | });
367 | 
```
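
The tests above constrain how `discover_projsLogic` must walk the tree: `.xcodeproj` and `.xcworkspace` bundles are collected, known build directories and symlinks are skipped, recursion is depth-limited, and `readdir` failures are swallowed so the scan can continue. A sketch consistent with those assertions, not the real implementation (the `node_modules`/`.git` entries in the skip list are assumptions beyond what the tests show):

```typescript
// Directory entry shape matching what the mocked readdir returns in the tests above.
interface DirEntry {
  name: string;
  isDirectory(): boolean;
  isSymbolicLink(): boolean;
}

interface FsLike {
  readdir(path: string): Promise<DirEntry[]>;
}

const SKIPPED_DIRS = new Set(['build', 'DerivedData', 'node_modules', '.git']);

// Depth-limited scan: collect project/workspace bundles, skip build folders and
// symlinks, and ignore unreadable directories instead of failing the whole scan.
async function scan(
  fs: FsLike,
  dir: string,
  depth: number,
  projects: string[],
  workspaces: string[],
): Promise<void> {
  if (depth < 0) return;
  let entries: DirEntry[];
  try {
    entries = await fs.readdir(dir);
  } catch {
    return; // e.g. EACCES — skip unreadable directories and keep going
  }
  for (const entry of entries) {
    if (!entry.isDirectory() || entry.isSymbolicLink() || SKIPPED_DIRS.has(entry.name)) continue;
    const fullPath = `${dir}/${entry.name}`;
    if (entry.name.endsWith('.xcodeproj')) projects.push(fullPath);
    else if (entry.name.endsWith('.xcworkspace')) workspaces.push(fullPath);
    else await scan(fs, fullPath, depth - 1, projects, workspaces);
  }
}
```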

--------------------------------------------------------------------------------
/scripts/release.sh:
--------------------------------------------------------------------------------

```bash
  1 | #!/bin/bash
  2 | set -e
  3 | 
  4 | # GitHub Release Creation Script
  5 | # This script handles only the GitHub release creation.
  6 | # Building and NPM publishing are handled by GitHub workflows.
  7 | #
  8 | # Usage: ./scripts/release.sh [VERSION|BUMP_TYPE] [OPTIONS]
  9 | # Run with --help for detailed usage information
 10 | FIRST_ARG=$1
 11 | DRY_RUN=false
 12 | VERSION=""
 13 | BUMP_TYPE=""
 14 | 
 15 | # Function to show help
 16 | show_help() {
 17 |   cat << 'EOF'
 18 | 📦 GitHub Release Creator
 19 | 
 20 | Creates releases with automatic semver bumping. Only handles GitHub release
 21 | creation - building and NPM publishing are handled by workflows.
 22 | 
 23 | USAGE:
 24 |     ./scripts/release.sh [VERSION|BUMP_TYPE] [OPTIONS]
 25 | 
 26 | ARGUMENTS:
 27 |     VERSION         Explicit version (e.g., 1.5.0, 2.0.0-beta.1)
 28 |     BUMP_TYPE       major | minor [default] | patch
 29 | 
 30 | OPTIONS:
 31 |     --dry-run       Preview without executing
 32 |     -h, --help      Show this help
 33 | 
 34 | EXAMPLES:
 35 |     (no args)       Interactive minor bump
 36 |     major           Interactive major bump
 37 |     1.5.0           Use specific version
 38 |     patch --dry-run Preview patch bump
 39 | 
 40 | EOF
 41 | 
 42 |   local highest_version=$(get_highest_version)
 43 |   if [[ -n "$highest_version" ]]; then
 44 |     echo "CURRENT: $highest_version"
 45 |     echo "NEXT: major=$(bump_version "$highest_version" "major") | minor=$(bump_version "$highest_version" "minor") | patch=$(bump_version "$highest_version" "patch")"
 46 |   else
 47 |     echo "No existing version tags found"
 48 |   fi
 49 |   echo ""
 50 | }
 51 | 
 52 | # Function to get the highest version from git tags
 53 | get_highest_version() {
 54 |   git tag | grep -E '^v?[0-9]+\.[0-9]+\.[0-9]+(-[a-zA-Z0-9]+\.[0-9]+)?$' | sed 's/^v//' | sort -V | tail -1
 55 | }
 56 | 
 57 | # Function to parse version components
 58 | parse_version() {
 59 |   local version=$1
 60 |   echo "$version" | sed -E 's/^([0-9]+)\.([0-9]+)\.([0-9]+)(-.*)?$/\1 \2 \3 \4/'
 61 | }
 62 | 
 63 | # Function to bump version based on type
 64 | bump_version() {
 65 |   local current_version=$1
 66 |   local bump_type=$2
 67 | 
 68 |   local parsed=($(parse_version "$current_version"))
 69 |   local major=${parsed[0]}
 70 |   local minor=${parsed[1]}
 71 |   local patch=${parsed[2]}
 72 |   local prerelease=${parsed[3]:-""}
 73 | 
 74 |   # Remove prerelease for stable version bumps
 75 |   case $bump_type in
 76 |     major)
 77 |       echo "$((major + 1)).0.0"
 78 |       ;;
 79 |     minor)
 80 |       echo "${major}.$((minor + 1)).0"
 81 |       ;;
 82 |     patch)
 83 |       echo "${major}.${minor}.$((patch + 1))"
 84 |       ;;
 85 |     *)
 86 |       echo "❌ Unknown bump type: $bump_type" >&2
 87 |       exit 1
 88 |       ;;
 89 |   esac
 90 | }
 91 | 
 92 | # Function to validate version format
 93 | validate_version() {
 94 |   local version=$1
 95 |   if ! [[ "$version" =~ ^[0-9]+\.[0-9]+\.[0-9]+(-[a-zA-Z0-9]+\.[0-9]+)?$ ]]; then
 96 |     echo "❌ Invalid version format: $version"
 97 |     echo "Version must be in format: x.y.z or x.y.z-tag.n (e.g., 1.4.0 or 1.4.0-beta.3)"
 98 |     return 1
 99 |   fi
100 |   return 0
101 | }
102 | 
103 | # Function to compare versions (returns 1 if first version is greater, 0 if equal, -1 if less)
104 | compare_versions() {
105 |   local version1=$1
106 |   local version2=$2
107 | 
108 |   # Remove prerelease parts for comparison
109 |   local v1_stable=$(echo "$version1" | sed -E 's/(-.*)?$//')
110 |   local v2_stable=$(echo "$version2" | sed -E 's/(-.*)?$//')
111 | 
112 |   if [[ "$v1_stable" == "$v2_stable" ]]; then
113 |     echo 0
114 |     return
115 |   fi
116 | 
117 |   # Use sort -V to compare versions
118 |   local sorted=$(printf "%s\n%s" "$v1_stable" "$v2_stable" | sort -V)
119 |   if [[ "$(echo "$sorted" | head -1)" == "$v1_stable" ]]; then
120 |     echo -1
121 |   else
122 |     echo 1
123 |   fi
124 | }
125 | 
126 | # Function to ask for confirmation
127 | ask_confirmation() {
128 |   local suggested_version=$1
129 |   echo ""
130 |   echo "🚀 Suggested next version: $suggested_version"
131 |   read -p "Do you want to use this version? (y/N): " -n 1 -r
132 |   echo
133 |   if [[ $REPLY =~ ^[Yy]$ ]]; then
134 |     return 0
135 |   else
136 |     return 1
137 |   fi
138 | }
139 | 
140 | # Function to get version interactively
141 | get_version_interactively() {
142 |   echo ""
143 |   echo "Please enter the version manually:"
144 |   while true; do
145 |     read -p "Version: " manual_version
146 |     if validate_version "$manual_version"; then
147 |       local highest_version=$(get_highest_version)
148 |       if [[ -n "$highest_version" ]]; then
149 |         local comparison=$(compare_versions "$manual_version" "$highest_version")
150 |         if [[ $comparison -le 0 ]]; then
151 |           echo "❌ Version $manual_version is not newer than the highest existing version $highest_version"
152 |           continue
153 |         fi
154 |       fi
155 |       VERSION="$manual_version"
156 |       break
157 |     fi
158 |   done
159 | }
160 | 
161 | # Check for help flags first
162 | for arg in "$@"; do
163 |   if [[ "$arg" == "-h" ]] || [[ "$arg" == "--help" ]]; then
164 |     show_help
165 |     exit 0
166 |   fi
167 | done
168 | 
169 | # Check for arguments and set flags
170 | for arg in "$@"; do
171 |   if [[ "$arg" == "--dry-run" ]]; then
172 |     DRY_RUN=true
173 |   fi
174 | done
175 | 
176 | # Determine version or bump type (ignore --dry-run flag)
177 | if [[ -z "$FIRST_ARG" ]] || [[ "$FIRST_ARG" == "--dry-run" ]]; then
178 |   # No argument provided, default to minor bump
179 |   BUMP_TYPE="minor"
180 | elif [[ "$FIRST_ARG" == "major" ]] || [[ "$FIRST_ARG" == "minor" ]] || [[ "$FIRST_ARG" == "patch" ]]; then
181 |   # Bump type provided
182 |   BUMP_TYPE="$FIRST_ARG"
183 | else
184 |   # Version string provided
185 |   if validate_version "$FIRST_ARG"; then
186 |     VERSION="$FIRST_ARG"
187 |   else
188 |     exit 1
189 |   fi
190 | fi
191 | 
192 | # If bump type is set, calculate the suggested version
193 | if [[ -n "$BUMP_TYPE" ]]; then
194 |   HIGHEST_VERSION=$(get_highest_version)
195 |   if [[ -z "$HIGHEST_VERSION" ]]; then
196 |     echo "❌ No existing version tags found. Please provide a version manually."
197 |     get_version_interactively
198 |   else
199 |     SUGGESTED_VERSION=$(bump_version "$HIGHEST_VERSION" "$BUMP_TYPE")
200 | 
201 |     if ask_confirmation "$SUGGESTED_VERSION"; then
202 |       VERSION="$SUGGESTED_VERSION"
203 |     else
204 |       get_version_interactively
205 |     fi
206 |   fi
207 | fi
208 | 
209 | # Final validation and version comparison
210 | if [[ -z "$VERSION" ]]; then
211 |   echo "❌ No version determined"
212 |   exit 1
213 | fi
214 | 
215 | HIGHEST_VERSION=$(get_highest_version)
216 | if [[ -n "$HIGHEST_VERSION" ]]; then
217 |   COMPARISON=$(compare_versions "$VERSION" "$HIGHEST_VERSION")
218 |   if [[ $COMPARISON -le 0 ]]; then
219 |     echo "❌ Version $VERSION is not newer than the highest existing version $HIGHEST_VERSION"
220 |     exit 1
221 |   fi
222 | fi
223 | 
224 | # Detect current branch
225 | BRANCH=$(git rev-parse --abbrev-ref HEAD)
226 | 
227 | # Enforce branch policy - only allow releases from main
228 | if [[ "$BRANCH" != "main" ]]; then
229 |   echo "❌ Error: Releases must be created from the main branch."
230 |   echo "Current branch: $BRANCH"
231 |   echo "Please switch to main and try again."
232 |   exit 1
233 | fi
234 | 
235 | run() {
236 |   if $DRY_RUN; then
237 |     echo "[dry-run] $*"
238 |   else
239 |     eval "$@"
240 |   fi
241 | }
242 | 
243 | # Ensure we're in the project root (parent of scripts directory)
244 | cd "$(dirname "$0")/.."
245 | 
246 | # Check if working directory is clean
247 | if ! git diff-index --quiet HEAD --; then
248 |   echo "❌ Error: Working directory is not clean."
249 |   echo "Please commit or stash your changes before creating a release."
250 |   exit 1
251 | fi
252 | 
253 | # Check if package.json already has this version (from previous attempt)
254 | CURRENT_PACKAGE_VERSION=$(node -p "require('./package.json').version")
255 | if [[ "$CURRENT_PACKAGE_VERSION" == "$VERSION" ]]; then
256 |   echo "📦 Version $VERSION already set in package.json"
257 |   SKIP_VERSION_UPDATE=true
258 | else
259 |   SKIP_VERSION_UPDATE=false
260 | fi
261 | 
262 | if [[ "$SKIP_VERSION_UPDATE" == "false" ]]; then
263 |   # Version update
264 |   echo ""
265 |   echo "🔧 Setting version to $VERSION..."
266 |   run "npm version \"$VERSION\" --no-git-tag-version"
267 | 
268 |   # README update
269 |   echo ""
270 |   echo "📝 Updating version in README.md..."
271 |   # Update version references in code examples using extended regex for precise semver matching
272 |   run "sed -i '' -E 's/@[0-9]+\.[0-9]+\.[0-9]+(-[a-zA-Z0-9]+\.[0-9]+)?(-[a-zA-Z0-9]+\.[0-9]+)*(-[a-zA-Z0-9]+)?/@'"$VERSION"'/g' README.md"
273 | 
274 |   # Update URL-encoded version references in shield links
275 |   echo "📝 Updating version in README.md shield links..."
276 |   run "sed -i '' -E 's/npm%3Axcodebuildmcp%40[0-9]+\.[0-9]+\.[0-9]+(-[a-zA-Z0-9]+\.[0-9]+)?(-[a-zA-Z0-9]+\.[0-9]+)*(-[a-zA-Z0-9]+)?/npm%3Axcodebuildmcp%40'"$VERSION"'/g' README.md"
277 | 
278 |   # server.json update
279 |   echo ""
280 |   if [[ -f server.json ]]; then
281 |     echo "📝 Updating server.json version to $VERSION..."
282 |     run "node -e \"const fs=require('fs');const f='server.json';const j=JSON.parse(fs.readFileSync(f,'utf8'));j.version='$VERSION';if(Array.isArray(j.packages)){j.packages=j.packages.map(p=>({...p,version:'$VERSION'}));}fs.writeFileSync(f,JSON.stringify(j,null,2)+'\n');\""
283 |   else
284 |     echo "⚠️  server.json not found; skipping update"
285 |   fi
286 | 
287 |   # Git operations
288 |   echo ""
289 |   echo "📦 Committing version changes..."
290 |   if [[ -f server.json ]]; then
291 |     run "git add package.json README.md server.json"
292 |   else
293 |     run "git add package.json README.md"
294 |   fi
295 |   run "git commit -m \"Release v$VERSION\""
296 | else
297 |   echo "⏭️  Skipping version update (already done)"
298 |   # Ensure server.json still matches the desired version (in case of a partial previous run)
299 |   if [[ -f server.json ]]; then
300 |     CURRENT_SERVER_VERSION=$(node -e "console.log(JSON.parse(require('fs').readFileSync('server.json','utf8')).version||'')")
301 |     if [[ "$CURRENT_SERVER_VERSION" != "$VERSION" ]]; then
302 |       echo "📝 Aligning server.json to $VERSION..."
303 |       run "node -e \"const fs=require('fs');const f='server.json';const j=JSON.parse(fs.readFileSync(f,'utf8'));j.version='$VERSION';if(Array.isArray(j.packages)){j.packages=j.packages.map(p=>({...p,version:'$VERSION'}));}fs.writeFileSync(f,JSON.stringify(j,null,2)+'\\n');\""
304 |       run "git add server.json"
305 |       run "git commit -m \"Align server.json for v$VERSION\""
306 |     fi
307 |   fi
308 | fi
309 | 
310 | # Create or recreate tag at current HEAD
311 | echo "🏷️  Creating tag v$VERSION..."
312 | run "git tag -f \"v$VERSION\""
313 | 
314 | echo ""
315 | echo "🚀 Pushing to origin..."
316 | run "git push origin $BRANCH --tags"
317 | 
318 | # Monitor the workflow and handle failures
319 | echo ""
320 | echo "⏳ Monitoring GitHub Actions workflow..."
321 | echo "This may take a few minutes..."
322 | 
323 | # Wait for workflow to start
324 | sleep 5
325 | 
326 | # Get the workflow run ID for this tag
327 | RUN_ID=$(gh run list --workflow=release.yml --limit=1 --json databaseId --jq '.[0].databaseId')
328 | 
329 | if [[ -n "$RUN_ID" ]]; then
330 |   echo "📊 Workflow run ID: $RUN_ID"
331 |   echo "🔍 Watching workflow progress..."
332 |   echo "(Press Ctrl+C to detach and monitor manually)"
333 |   echo ""
334 | 
335 |   # Watch the workflow with exit status
336 |   if gh run watch "$RUN_ID" --exit-status; then
337 |     echo ""
338 |     echo "✅ Release v$VERSION completed successfully!"
339 |     echo "📦 View on NPM: https://www.npmjs.com/package/xcodebuildmcp/v/$VERSION"
340 |     echo "🎉 View release: https://github.com/cameroncooke/XcodeBuildMCP/releases/tag/v$VERSION"
341 |     # MCP Registry verification link
342 |     echo "🔎 Verify MCP Registry: https://registry.modelcontextprotocol.io/v0/servers?search=com.xcodebuildmcp/XcodeBuildMCP&version=latest
343 |   else
344 |     echo ""
345 |     echo "❌ CI workflow failed!"
346 |     echo ""
347 |     # Prefer job state: if the primary 'release' job succeeded, treat as success.
348 |     RELEASE_JOB_CONCLUSION=$(gh run view "$RUN_ID" --json jobs --jq '.jobs[] | select(.name=="release") | .conclusion')
349 |     if [ "$RELEASE_JOB_CONCLUSION" = "success" ]; then
350 |       echo "⚠️ Workflow reported failure, but primary 'release' job concluded SUCCESS."
351 |       echo "✅ Treating release as successful. Tag v$VERSION is kept."
352 |       echo "📦 Verify on NPM: https://www.npmjs.com/package/xcodebuildmcp/v/$VERSION"
353 |       exit 0
354 |     fi
355 |     echo "🧹 Cleaning up tags only (keeping version commit)..."
356 | 
357 |     # Delete remote tag
358 |     echo "  - Deleting remote tag v$VERSION..."
359 |     git push origin :refs/tags/v$VERSION 2>/dev/null || true
360 | 
361 |     # Delete local tag
362 |     echo "  - Deleting local tag v$VERSION..."
363 |     git tag -d v$VERSION
364 | 
365 |     echo ""
366 |     echo "✅ Tag cleanup complete!"
367 |     echo ""
368 |     echo "ℹ️  The version commit remains in your history."
369 |     echo "📝 To retry after fixing issues:"
370 |     echo "   1. Fix the CI issues"
371 |     echo "   2. Commit your fixes"
372 |     echo "   3. Run: ./scripts/release.sh $VERSION"
373 |     echo ""
374 |     echo "🔍 To see what failed: gh run view $RUN_ID --log-failed"
375 |     exit 1
376 |   fi
377 | else
378 |   echo "⚠️  Could not find workflow run. Please check manually:"
379 |   echo "https://github.com/cameroncooke/XcodeBuildMCP/actions"
380 | fi
381 | 
```
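
One subtlety in the script above: `compare_versions` strips prerelease suffixes before comparing, so a prerelease tag and its stable counterpart compare as equal, and the stable release would be rejected as "not newer". A compact TypeScript restatement of the two version helpers, purely for illustration:

```typescript
// Illustrative restatement of bump_version() from release.sh: prerelease suffixes
// are dropped and the requested component is incremented.
function bumpVersion(current: string, bumpType: 'major' | 'minor' | 'patch'): string {
  const [major, minor, patch] = current.split('-')[0].split('.').map(Number);
  if (bumpType === 'major') return `${major + 1}.0.0`;
  if (bumpType === 'minor') return `${major}.${minor + 1}.0`;
  return `${major}.${minor}.${patch + 1}`;
}

// Illustrative restatement of compare_versions(): prerelease tags are stripped
// before comparison, so 1.5.0-beta.1 and 1.5.0 compare as equal (0).
function compareVersions(a: string, b: string): -1 | 0 | 1 {
  const parse = (v: string): number[] => v.split('-')[0].split('.').map(Number);
  const [pa, pb] = [parse(a), parse(b)];
  for (let i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] > pb[i] ? 1 : -1;
  }
  return 0;
}

// Examples, matching the bash behaviour:
//   bumpVersion('1.4.2', 'minor')            -> '1.5.0'
//   compareVersions('1.5.0-beta.1', '1.5.0') -> 0
```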

--------------------------------------------------------------------------------
/src/mcp/tools/simulator/get_sim_app_path.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Simulator Get App Path Plugin: Get Simulator App Path (Unified)
  3 |  *
  4 |  * Gets the app bundle path for a simulator by UUID or name using either a project or workspace file.
  5 |  * Accepts mutually exclusive `projectPath` or `workspacePath`.
  6 |  * Accepts mutually exclusive `simulatorId` or `simulatorName`.
  7 |  */
  8 | 
  9 | import { z } from 'zod';
 10 | import { log } from '../../../utils/logging/index.ts';
 11 | import { createTextResponse } from '../../../utils/responses/index.ts';
 12 | import type { CommandExecutor } from '../../../utils/execution/index.ts';
 13 | import { getDefaultCommandExecutor } from '../../../utils/execution/index.ts';
 14 | import { ToolResponse } from '../../../types/common.ts';
 15 | import { createSessionAwareTool } from '../../../utils/typed-tool-factory.ts';
 16 | import { nullifyEmptyStrings } from '../../../utils/schema-helpers.ts';
 17 | 
 18 | const XcodePlatform = {
 19 |   macOS: 'macOS',
 20 |   iOS: 'iOS',
 21 |   iOSSimulator: 'iOS Simulator',
 22 |   watchOS: 'watchOS',
 23 |   watchOSSimulator: 'watchOS Simulator',
 24 |   tvOS: 'tvOS',
 25 |   tvOSSimulator: 'tvOS Simulator',
 26 |   visionOS: 'visionOS',
 27 |   visionOSSimulator: 'visionOS Simulator',
 28 | };
 29 | 
 30 | function constructDestinationString(
 31 |   platform: string,
 32 |   simulatorName: string,
 33 |   simulatorId: string,
 34 |   useLatest: boolean = true,
 35 |   arch?: string,
 36 | ): string {
 37 |   const isSimulatorPlatform = [
 38 |     XcodePlatform.iOSSimulator,
 39 |     XcodePlatform.watchOSSimulator,
 40 |     XcodePlatform.tvOSSimulator,
 41 |     XcodePlatform.visionOSSimulator,
 42 |   ].includes(platform);
 43 | 
 44 |   // If ID is provided for a simulator, it takes precedence and uniquely identifies it.
 45 |   if (isSimulatorPlatform && simulatorId) {
 46 |     return `platform=${platform},id=${simulatorId}`;
 47 |   }
 48 | 
 49 |   // If name is provided for a simulator
 50 |   if (isSimulatorPlatform && simulatorName) {
 51 |     return `platform=${platform},name=${simulatorName}${useLatest ? ',OS=latest' : ''}`;
 52 |   }
 53 | 
 54 |   // If it's a simulator platform but neither ID nor name is provided (should be prevented by callers now)
 55 |   if (isSimulatorPlatform && !simulatorId && !simulatorName) {
 56 |     log(
 57 |       'warning',
 58 |       `Destination requested for ${platform} without a simulator name or ID; a specific destination cannot be constructed.`,
 59 |     );
 60 |     throw new Error(`Simulator name or ID is required for specific ${platform} operations`);
 61 |   }
 62 | 
 63 |   // Handle non-simulator platforms
 64 |   switch (platform) {
 65 |     case XcodePlatform.macOS:
 66 |       return arch ? `platform=macOS,arch=${arch}` : 'platform=macOS';
 67 |     case XcodePlatform.iOS:
 68 |       return 'generic/platform=iOS';
 69 |     case XcodePlatform.watchOS:
 70 |       return 'generic/platform=watchOS';
 71 |     case XcodePlatform.tvOS:
 72 |       return 'generic/platform=tvOS';
 73 |     case XcodePlatform.visionOS:
 74 |       return 'generic/platform=visionOS';
 75 |   }
 76 |   // Fallback just in case (shouldn't be reached with enum)
 77 |   log('error', `Reached unexpected point in constructDestinationString for platform: ${platform}`);
 78 |   return `platform=${platform}`;
 79 | }
 80 | 
 81 | // Define base schema
 82 | const baseGetSimulatorAppPathSchema = z.object({
 83 |   projectPath: z
 84 |     .string()
 85 |     .optional()
 86 |     .describe('Path to .xcodeproj file. Provide EITHER this OR workspacePath, not both'),
 87 |   workspacePath: z
 88 |     .string()
 89 |     .optional()
 90 |     .describe('Path to .xcworkspace file. Provide EITHER this OR projectPath, not both'),
 91 |   scheme: z.string().describe('The scheme to use (Required)'),
 92 |   platform: z
 93 |     .enum(['iOS Simulator', 'watchOS Simulator', 'tvOS Simulator', 'visionOS Simulator'])
 94 |     .describe('Target simulator platform (Required)'),
 95 |   simulatorId: z
 96 |     .string()
 97 |     .optional()
 98 |     .describe(
 99 |       'UUID of the simulator (from list_sims). Provide EITHER this OR simulatorName, not both',
100 |     ),
101 |   simulatorName: z
102 |     .string()
103 |     .optional()
104 |     .describe(
105 |       "Name of the simulator (e.g., 'iPhone 16'). Provide EITHER this OR simulatorId, not both",
106 |     ),
107 |   configuration: z.string().optional().describe('Build configuration (Debug, Release, etc.)'),
108 |   useLatestOS: z
109 |     .boolean()
110 |     .optional()
111 |     .describe('Whether to use the latest OS version for the named simulator'),
112 |   arch: z.string().optional().describe('Optional architecture'),
113 | });
114 | 
115 | // Add XOR validation with preprocessing
116 | const getSimulatorAppPathSchema = z.preprocess(
117 |   nullifyEmptyStrings,
118 |   baseGetSimulatorAppPathSchema
119 |     .refine((val) => val.projectPath !== undefined || val.workspacePath !== undefined, {
120 |       message: 'Either projectPath or workspacePath is required.',
121 |     })
122 |     .refine((val) => !(val.projectPath !== undefined && val.workspacePath !== undefined), {
123 |       message: 'projectPath and workspacePath are mutually exclusive. Provide only one.',
124 |     })
125 |     .refine((val) => val.simulatorId !== undefined || val.simulatorName !== undefined, {
126 |       message: 'Either simulatorId or simulatorName is required.',
127 |     })
128 |     .refine((val) => !(val.simulatorId !== undefined && val.simulatorName !== undefined), {
129 |       message: 'simulatorId and simulatorName are mutually exclusive. Provide only one.',
130 |     }),
131 | );
132 | 
133 | // Use z.infer for type safety
134 | type GetSimulatorAppPathParams = z.infer<typeof getSimulatorAppPathSchema>;
135 | 
136 | /**
137 |  * Exported business logic function for getting app path
138 |  */
139 | export async function get_sim_app_pathLogic(
140 |   params: GetSimulatorAppPathParams,
141 |   executor: CommandExecutor,
142 | ): Promise<ToolResponse> {
143 |   // Set defaults - Zod validation already ensures required params are present
144 |   const projectPath = params.projectPath;
145 |   const workspacePath = params.workspacePath;
146 |   const scheme = params.scheme;
147 |   const platform = params.platform;
148 |   const simulatorId = params.simulatorId;
149 |   const simulatorName = params.simulatorName;
150 |   const configuration = params.configuration ?? 'Debug';
151 |   const useLatestOS = params.useLatestOS ?? true;
152 |   const arch = params.arch;
153 | 
154 |   // Log warning if useLatestOS is provided with simulatorId
155 |   if (simulatorId && params.useLatestOS !== undefined) {
156 |     log(
157 |       'warning',
158 |       `useLatestOS parameter is ignored when using simulatorId (UUID implies exact device/OS)`,
159 |     );
160 |   }
161 | 
162 |   log('info', `Getting app path for scheme ${scheme} on platform ${platform}`);
163 | 
164 |   try {
165 |     // Create the command array for xcodebuild with -showBuildSettings option
166 |     const command = ['xcodebuild', '-showBuildSettings'];
167 | 
168 |     // Add the workspace or project (XOR validation ensures exactly one is provided)
169 |     if (workspacePath) {
170 |       command.push('-workspace', workspacePath);
171 |     } else if (projectPath) {
172 |       command.push('-project', projectPath);
173 |     }
174 | 
175 |     // Add the scheme and configuration
176 |     command.push('-scheme', scheme);
177 |     command.push('-configuration', configuration);
178 | 
179 |     // Handle destination based on platform
180 |     const isSimulatorPlatform = [
181 |       XcodePlatform.iOSSimulator,
182 |       XcodePlatform.watchOSSimulator,
183 |       XcodePlatform.tvOSSimulator,
184 |       XcodePlatform.visionOSSimulator,
185 |     ].includes(platform);
186 | 
187 |     let destinationString = '';
188 | 
189 |     if (isSimulatorPlatform) {
190 |       if (simulatorId) {
191 |         destinationString = `platform=${platform},id=${simulatorId}`;
192 |       } else if (simulatorName) {
193 |         destinationString = `platform=${platform},name=${simulatorName}${useLatestOS ? ',OS=latest' : ''}`;
194 |       } else {
195 |         return createTextResponse(
196 |           `For ${platform} platform, either simulatorId or simulatorName must be provided`,
197 |           true,
198 |         );
199 |       }
200 |     } else if (platform === XcodePlatform.macOS) {
201 |       destinationString = constructDestinationString(platform, '', '', false, arch);
202 |     } else if (platform === XcodePlatform.iOS) {
203 |       destinationString = 'generic/platform=iOS';
204 |     } else if (platform === XcodePlatform.watchOS) {
205 |       destinationString = 'generic/platform=watchOS';
206 |     } else if (platform === XcodePlatform.tvOS) {
207 |       destinationString = 'generic/platform=tvOS';
208 |     } else if (platform === XcodePlatform.visionOS) {
209 |       destinationString = 'generic/platform=visionOS';
210 |     } else {
211 |       return createTextResponse(`Unsupported platform: ${platform}`, true);
212 |     }
213 | 
214 |     command.push('-destination', destinationString);
215 | 
216 |     // Execute the command directly
217 |     const result = await executor(command, 'Get App Path', true, undefined);
218 | 
219 |     if (!result.success) {
220 |       return createTextResponse(`Failed to get app path: ${result.error}`, true);
221 |     }
222 | 
223 |     if (!result.output) {
224 |       return createTextResponse('Failed to extract build settings output from the result.', true);
225 |     }
226 | 
227 |     const buildSettingsOutput = result.output;
228 |     const builtProductsDirMatch = buildSettingsOutput.match(/^\s*BUILT_PRODUCTS_DIR\s*=\s*(.+)$/m);
229 |     const fullProductNameMatch = buildSettingsOutput.match(/^\s*FULL_PRODUCT_NAME\s*=\s*(.+)$/m);
230 | 
231 |     if (!builtProductsDirMatch || !fullProductNameMatch) {
232 |       return createTextResponse(
233 |         'Failed to extract app path from build settings. Make sure the app has been built first.',
234 |         true,
235 |       );
236 |     }
237 | 
238 |     const builtProductsDir = builtProductsDirMatch[1].trim();
239 |     const fullProductName = fullProductNameMatch[1].trim();
240 |     const appPath = `${builtProductsDir}/${fullProductName}`;
241 | 
242 |     let nextStepsText = '';
243 |     if (platform === XcodePlatform.macOS) {
244 |       nextStepsText = `Next Steps:
245 | 1. Get bundle ID: get_mac_bundle_id({ appPath: "${appPath}" })
246 | 2. Launch the app: launch_mac_app({ appPath: "${appPath}" })`;
247 |     } else if (isSimulatorPlatform) {
248 |       nextStepsText = `Next Steps:
249 | 1. Get bundle ID: get_app_bundle_id({ appPath: "${appPath}" })
250 | 2. Boot simulator: boot_sim({ simulatorUuid: "SIMULATOR_UUID" })
251 | 3. Install app: install_app_sim({ simulatorUuid: "SIMULATOR_UUID", appPath: "${appPath}" })
252 | 4. Launch app: launch_app_sim({ simulatorUuid: "SIMULATOR_UUID", bundleId: "BUNDLE_ID" })`;
253 |     } else if (
254 |       [
255 |         XcodePlatform.iOS,
256 |         XcodePlatform.watchOS,
257 |         XcodePlatform.tvOS,
258 |         XcodePlatform.visionOS,
259 |       ].includes(platform)
260 |     ) {
261 |       nextStepsText = `Next Steps:
262 | 1. Get bundle ID: get_app_bundle_id({ appPath: "${appPath}" })
263 | 2. Install app on device: install_app_device({ deviceId: "DEVICE_UDID", appPath: "${appPath}" })
264 | 3. Launch app on device: launch_app_device({ deviceId: "DEVICE_UDID", bundleId: "BUNDLE_ID" })`;
265 |     } else {
266 |       // For other platforms
267 |       nextStepsText = `Next Steps:
268 | 1. The app has been built for ${platform}
269 | 2. Use platform-specific deployment tools to install and run the app`;
270 |     }
271 | 
272 |     return {
273 |       content: [
274 |         {
275 |           type: 'text',
276 |           text: `✅ App path retrieved successfully: ${appPath}`,
277 |         },
278 |         {
279 |           type: 'text',
280 |           text: nextStepsText,
281 |         },
282 |       ],
283 |       isError: false,
284 |     };
285 |   } catch (error) {
286 |     const errorMessage = error instanceof Error ? error.message : String(error);
287 |     log('error', `Error retrieving app path: ${errorMessage}`);
288 |     return createTextResponse(`Error retrieving app path: ${errorMessage}`, true);
289 |   }
290 | }
291 | 
292 | const publicSchemaObject = baseGetSimulatorAppPathSchema.omit({
293 |   projectPath: true,
294 |   workspacePath: true,
295 |   scheme: true,
296 |   simulatorId: true,
297 |   simulatorName: true,
298 |   configuration: true,
299 |   useLatestOS: true,
300 |   arch: true,
301 | } as const);
302 | 
303 | export default {
304 |   name: 'get_sim_app_path',
305 |   description: 'Retrieves the built app path for an iOS simulator.',
306 |   schema: publicSchemaObject.shape,
307 |   handler: createSessionAwareTool<GetSimulatorAppPathParams>({
308 |     internalSchema: getSimulatorAppPathSchema as unknown as z.ZodType<GetSimulatorAppPathParams>,
309 |     logicFunction: get_sim_app_pathLogic,
310 |     getExecutor: getDefaultCommandExecutor,
311 |     requirements: [
312 |       { allOf: ['scheme'], message: 'scheme is required' },
313 |       { oneOf: ['projectPath', 'workspacePath'], message: 'Provide a project or workspace' },
314 |       { oneOf: ['simulatorId', 'simulatorName'], message: 'Provide simulatorId or simulatorName' },
315 |     ],
316 |     exclusivePairs: [
317 |       ['projectPath', 'workspacePath'],
318 |       ['simulatorId', 'simulatorName'],
319 |     ],
320 |   }),
321 | };
322 | 
```
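
A minimal usage sketch for orientation: calling the exported `get_sim_app_pathLogic` directly with an injected executor, mirroring the dependency-injection pattern used by this repository's tests. The `createMockExecutor` helper, the import paths, and the sample paths are illustrative assumptions, not part of the file above.

```typescript
// Hedged sketch: drive get_sim_app_pathLogic with a stubbed executor whose output
// contains the two build settings the logic parses (BUILT_PRODUCTS_DIR, FULL_PRODUCT_NAME).
import { createMockExecutor } from './test-utils/mock-executors.ts'; // illustrative path
import { get_sim_app_pathLogic } from './get_sim_app_path.ts';

const executor = createMockExecutor({
  success: true,
  output: [
    '    BUILT_PRODUCTS_DIR = /tmp/DerivedData/Build/Products/Debug-iphonesimulator',
    '    FULL_PRODUCT_NAME = MyApp.app',
  ].join('\n'),
});

const result = await get_sim_app_pathLogic(
  {
    projectPath: '/path/to/MyProject.xcodeproj', // hypothetical project
    scheme: 'MyScheme',
    platform: 'iOS Simulator',
    simulatorName: 'iPhone 16',
  },
  executor,
);
// Expected first content item:
// "✅ App path retrieved successfully: /tmp/DerivedData/Build/Products/Debug-iphonesimulator/MyApp.app"
```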

--------------------------------------------------------------------------------
/src/mcp/tools/macos/test_macos.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * macOS Shared Plugin: Test macOS (Unified)
  3 |  *
  4 |  * Runs tests for a macOS project or workspace using xcodebuild test and parses xcresult output.
  5 |  * Accepts mutually exclusive `projectPath` or `workspacePath`.
  6 |  */
  7 | 
  8 | import { z } from 'zod';
  9 | import { join } from 'path';
 10 | import { ToolResponse, XcodePlatform } from '../../../types/common.ts';
 11 | import { log } from '../../../utils/logging/index.ts';
 12 | import { executeXcodeBuildCommand } from '../../../utils/build/index.ts';
 13 | import { createTextResponse } from '../../../utils/responses/index.ts';
 14 | import { normalizeTestRunnerEnv } from '../../../utils/environment.ts';
 15 | import type {
 16 |   CommandExecutor,
 17 |   FileSystemExecutor,
 18 |   CommandExecOptions,
 19 | } from '../../../utils/execution/index.ts';
 20 | import {
 21 |   getDefaultCommandExecutor,
 22 |   getDefaultFileSystemExecutor,
 23 | } from '../../../utils/execution/index.ts';
 24 | import { createSessionAwareTool } from '../../../utils/typed-tool-factory.ts';
 25 | import { nullifyEmptyStrings } from '../../../utils/schema-helpers.ts';
 26 | 
 27 | // Unified schema: XOR between projectPath and workspacePath
 28 | const baseSchemaObject = z.object({
 29 |   projectPath: z.string().optional().describe('Path to the .xcodeproj file'),
 30 |   workspacePath: z.string().optional().describe('Path to the .xcworkspace file'),
 31 |   scheme: z.string().describe('The scheme to use'),
 32 |   configuration: z.string().optional().describe('Build configuration (Debug, Release, etc.)'),
 33 |   derivedDataPath: z
 34 |     .string()
 35 |     .optional()
 36 |     .describe('Path where build products and other derived data will go'),
 37 |   extraArgs: z.array(z.string()).optional().describe('Additional xcodebuild arguments'),
 38 |   preferXcodebuild: z
 39 |     .boolean()
 40 |     .optional()
 41 |     .describe('If true, prefers xcodebuild over the experimental incremental build system'),
 42 |   testRunnerEnv: z
 43 |     .record(z.string(), z.string())
 44 |     .optional()
 45 |     .describe(
 46 |       'Environment variables to pass to the test runner (TEST_RUNNER_ prefix added automatically)',
 47 |     ),
 48 | });
 49 | 
 50 | const baseSchema = z.preprocess(nullifyEmptyStrings, baseSchemaObject);
 51 | 
 52 | const publicSchemaObject = baseSchemaObject.omit({
 53 |   projectPath: true,
 54 |   workspacePath: true,
 55 |   scheme: true,
 56 |   configuration: true,
 57 | } as const);
 58 | 
 59 | const testMacosSchema = baseSchema
 60 |   .refine((val) => val.projectPath !== undefined || val.workspacePath !== undefined, {
 61 |     message: 'Either projectPath or workspacePath is required.',
 62 |   })
 63 |   .refine((val) => !(val.projectPath !== undefined && val.workspacePath !== undefined), {
 64 |     message: 'projectPath and workspacePath are mutually exclusive. Provide only one.',
 65 |   });
 66 | 
 67 | export type TestMacosParams = z.infer<typeof testMacosSchema>;
 68 | 
 69 | /**
 70 |  * Type definition for test summary structure from xcresulttool
 71 |  * @typedef {Object} TestSummary
 72 |  * @property {string} [title]
 73 |  * @property {string} [result]
 74 |  * @property {number} [totalTestCount]
 75 |  * @property {number} [passedTests]
 76 |  * @property {number} [failedTests]
 77 |  * @property {number} [skippedTests]
 78 |  * @property {number} [expectedFailures]
 79 |  * @property {string} [environmentDescription]
 80 |  * @property {Array<Object>} [devicesAndConfigurations]
 81 |  * @property {Array<Object>} [testFailures]
 82 |  * @property {Array<Object>} [topInsights]
 83 |  */
 84 | 
 85 | /**
 86 |  * Parse xcresult bundle using xcrun xcresulttool
 87 |  */
 88 | async function parseXcresultBundle(
 89 |   resultBundlePath: string,
 90 |   executor: CommandExecutor = getDefaultCommandExecutor(),
 91 | ): Promise<string> {
 92 |   try {
 93 |     const result = await executor(
 94 |       ['xcrun', 'xcresulttool', 'get', 'test-results', 'summary', '--path', resultBundlePath],
 95 |       'Parse xcresult bundle',
 96 |       true,
 97 |     );
 98 | 
 99 |     if (!result.success) {
100 |       throw new Error(result.error ?? 'Failed to parse xcresult bundle');
101 |     }
102 | 
103 |     // Parse JSON response and format as human-readable
104 |     let summary: unknown;
105 |     try {
106 |       summary = JSON.parse(result.output || '{}');
107 |     } catch (parseError) {
108 |       throw new Error(`Failed to parse JSON output: ${parseError}`);
109 |     }
110 | 
111 |     if (typeof summary !== 'object' || summary === null) {
112 |       throw new Error('Invalid JSON output: expected object');
113 |     }
114 | 
115 |     return formatTestSummary(summary as Record<string, unknown>);
116 |   } catch (error) {
117 |     const errorMessage = error instanceof Error ? error.message : String(error);
118 |     log('error', `Error parsing xcresult bundle: ${errorMessage}`);
119 |     throw error;
120 |   }
121 | }
122 | 
123 | /**
124 |  * Format test summary JSON into human-readable text
125 |  */
126 | function formatTestSummary(summary: Record<string, unknown>): string {
127 |   const lines = [];
128 | 
129 |   lines.push(`Test Summary: ${summary.title ?? 'Unknown'}`);
130 |   lines.push(`Overall Result: ${summary.result ?? 'Unknown'}`);
131 |   lines.push('');
132 | 
133 |   lines.push('Test Counts:');
134 |   lines.push(`  Total: ${summary.totalTestCount ?? 0}`);
135 |   lines.push(`  Passed: ${summary.passedTests ?? 0}`);
136 |   lines.push(`  Failed: ${summary.failedTests ?? 0}`);
137 |   lines.push(`  Skipped: ${summary.skippedTests ?? 0}`);
138 |   lines.push(`  Expected Failures: ${summary.expectedFailures ?? 0}`);
139 |   lines.push('');
140 | 
141 |   if (summary.environmentDescription) {
142 |     lines.push(`Environment: ${summary.environmentDescription}`);
143 |     lines.push('');
144 |   }
145 | 
146 |   if (
147 |     summary.devicesAndConfigurations &&
148 |     Array.isArray(summary.devicesAndConfigurations) &&
149 |     summary.devicesAndConfigurations.length > 0
150 |   ) {
151 |     const firstDeviceConfig: unknown = summary.devicesAndConfigurations[0];
152 |     if (
153 |       typeof firstDeviceConfig === 'object' &&
154 |       firstDeviceConfig !== null &&
155 |       'device' in firstDeviceConfig
156 |     ) {
157 |       const device: unknown = (firstDeviceConfig as Record<string, unknown>).device;
158 |       if (typeof device === 'object' && device !== null) {
159 |         const deviceRecord = device as Record<string, unknown>;
160 |         const deviceName =
161 |           'deviceName' in deviceRecord && typeof deviceRecord.deviceName === 'string'
162 |             ? deviceRecord.deviceName
163 |             : 'Unknown';
164 |         const platform =
165 |           'platform' in deviceRecord && typeof deviceRecord.platform === 'string'
166 |             ? deviceRecord.platform
167 |             : 'Unknown';
168 |         const osVersion =
169 |           'osVersion' in deviceRecord && typeof deviceRecord.osVersion === 'string'
170 |             ? deviceRecord.osVersion
171 |             : 'Unknown';
172 | 
173 |         lines.push(`Device: ${deviceName} (${platform} ${osVersion})`);
174 |         lines.push('');
175 |       }
176 |     }
177 |   }
178 | 
179 |   if (
180 |     summary.testFailures &&
181 |     Array.isArray(summary.testFailures) &&
182 |     summary.testFailures.length > 0
183 |   ) {
184 |     lines.push('Test Failures:');
185 |     summary.testFailures.forEach((failure: unknown, index: number) => {
186 |       if (typeof failure === 'object' && failure !== null) {
187 |         const failureRecord = failure as Record<string, unknown>;
188 |         const testName =
189 |           'testName' in failureRecord && typeof failureRecord.testName === 'string'
190 |             ? failureRecord.testName
191 |             : 'Unknown Test';
192 |         const targetName =
193 |           'targetName' in failureRecord && typeof failureRecord.targetName === 'string'
194 |             ? failureRecord.targetName
195 |             : 'Unknown Target';
196 | 
197 |         lines.push(`  ${index + 1}. ${testName} (${targetName})`);
198 | 
199 |         if ('failureText' in failureRecord && typeof failureRecord.failureText === 'string') {
200 |           lines.push(`     ${failureRecord.failureText}`);
201 |         }
202 |       }
203 |     });
204 |     lines.push('');
205 |   }
206 | 
207 |   if (summary.topInsights && Array.isArray(summary.topInsights) && summary.topInsights.length > 0) {
208 |     lines.push('Insights:');
209 |     summary.topInsights.forEach((insight: unknown, index: number) => {
210 |       if (typeof insight === 'object' && insight !== null) {
211 |         const insightRecord = insight as Record<string, unknown>;
212 |         const impact =
213 |           'impact' in insightRecord && typeof insightRecord.impact === 'string'
214 |             ? insightRecord.impact
215 |             : 'Unknown';
216 |         const text =
217 |           'text' in insightRecord && typeof insightRecord.text === 'string'
218 |             ? insightRecord.text
219 |             : 'No description';
220 | 
221 |         lines.push(`  ${index + 1}. [${impact}] ${text}`);
222 |       }
223 |     });
224 |   }
225 | 
226 |   return lines.join('\n');
227 | }
228 | 
229 | /**
230 |  * Business logic for testing a macOS project or workspace.
231 |  * Exported for direct testing and reuse.
232 |  */
233 | export async function testMacosLogic(
234 |   params: TestMacosParams,
235 |   executor: CommandExecutor = getDefaultCommandExecutor(),
236 |   fileSystemExecutor: FileSystemExecutor = getDefaultFileSystemExecutor(),
237 | ): Promise<ToolResponse> {
238 |   log('info', `Starting test run for scheme ${params.scheme} on platform macOS (internal)`);
239 | 
240 |   try {
241 |     // Create temporary directory for xcresult bundle
242 |     const tempDir = await fileSystemExecutor.mkdtemp(
243 |       join(fileSystemExecutor.tmpdir(), 'xcodebuild-test-'),
244 |     );
245 |     const resultBundlePath = join(tempDir, 'TestResults.xcresult');
246 | 
247 |     // Add resultBundlePath to extraArgs
248 |     const extraArgs = [...(params.extraArgs ?? []), `-resultBundlePath`, resultBundlePath];
249 | 
250 |     // Prepare execution options with TEST_RUNNER_ environment variables
251 |     const execOpts: CommandExecOptions | undefined = params.testRunnerEnv
252 |       ? { env: normalizeTestRunnerEnv(params.testRunnerEnv) }
253 |       : undefined;
254 | 
255 |     // Run the test command
256 |     const testResult = await executeXcodeBuildCommand(
257 |       {
258 |         projectPath: params.projectPath,
259 |         workspacePath: params.workspacePath,
260 |         scheme: params.scheme,
261 |         configuration: params.configuration ?? 'Debug',
262 |         derivedDataPath: params.derivedDataPath,
263 |         extraArgs,
264 |       },
265 |       {
266 |         platform: XcodePlatform.macOS,
267 |         logPrefix: 'Test Run',
268 |       },
269 |       params.preferXcodebuild ?? false,
270 |       'test',
271 |       executor,
272 |       execOpts,
273 |     );
274 | 
275 |     // Parse xcresult bundle if it exists, regardless of whether tests passed or failed
276 |     // Test failures are expected and should not prevent xcresult parsing
277 |     try {
278 |       log('info', `Attempting to parse xcresult bundle at: ${resultBundlePath}`);
279 | 
280 |       // Check if the file exists
281 |       try {
282 |         await fileSystemExecutor.stat(resultBundlePath);
283 |         log('info', `xcresult bundle exists at: ${resultBundlePath}`);
284 |       } catch {
285 |         log('warn', `xcresult bundle does not exist at: ${resultBundlePath}`);
286 |         throw new Error(`xcresult bundle not found at ${resultBundlePath}`);
287 |       }
288 | 
289 |       const testSummary = await parseXcresultBundle(resultBundlePath, executor);
290 |       log('info', 'Successfully parsed xcresult bundle');
291 | 
292 |       // Clean up temporary directory
293 |       await fileSystemExecutor.rm(tempDir, { recursive: true, force: true });
294 | 
295 |       // Return combined result - preserve isError from testResult (test failures should be marked as errors)
296 |       return {
297 |         content: [
298 |           ...(testResult.content ?? []),
299 |           {
300 |             type: 'text',
301 |             text: '\nTest Results Summary:\n' + testSummary,
302 |           },
303 |         ],
304 |         isError: testResult.isError,
305 |       };
306 |     } catch (parseError) {
307 |       // If parsing fails, return original test result
308 |       log('warn', `Failed to parse xcresult bundle: ${parseError}`);
309 | 
310 |       // Clean up temporary directory even if parsing fails
311 |       try {
312 |         await fileSystemExecutor.rm(tempDir, { recursive: true, force: true });
313 |       } catch (cleanupError) {
314 |         log('warn', `Failed to clean up temporary directory: ${cleanupError}`);
315 |       }
316 | 
317 |       return testResult;
318 |     }
319 |   } catch (error) {
320 |     const errorMessage = error instanceof Error ? error.message : String(error);
321 |     log('error', `Error during test run: ${errorMessage}`);
322 |     return createTextResponse(`Error during test run: ${errorMessage}`, true);
323 |   }
324 | }
325 | 
326 | export default {
327 |   name: 'test_macos',
328 |   description: 'Runs tests for a macOS target.',
329 |   schema: publicSchemaObject.shape,
330 |   handler: createSessionAwareTool<TestMacosParams>({
331 |     internalSchema: testMacosSchema as unknown as z.ZodType<TestMacosParams>,
332 |     logicFunction: (params, executor) =>
333 |       testMacosLogic(params, executor, getDefaultFileSystemExecutor()),
334 |     getExecutor: getDefaultCommandExecutor,
335 |     requirements: [
336 |       { allOf: ['scheme'], message: 'scheme is required' },
337 |       { oneOf: ['projectPath', 'workspacePath'], message: 'Provide a project or workspace' },
338 |     ],
339 |     exclusivePairs: [['projectPath', 'workspacePath']],
340 |   }),
341 | };
342 | 
```
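
A hedged sketch of exercising `testMacosLogic` with injected command and filesystem executors, following the same pattern the device tests below use. The mock helper names, import paths, and sample values are assumptions for illustration; per the schema description above, `testRunnerEnv` keys are normalized to `TEST_RUNNER_`-prefixed variables before the test run.

```typescript
// Hedged sketch: run testMacosLogic against stubbed executors so no real xcodebuild runs.
import {
  createMockExecutor,
  createMockFileSystemExecutor,
} from './test-utils/mock-executors.ts'; // illustrative path
import { testMacosLogic } from './test_macos.ts';

const result = await testMacosLogic(
  {
    workspacePath: '/path/to/MyApp.xcworkspace', // hypothetical workspace
    scheme: 'MyApp',
    testRunnerEnv: { API_BASE_URL: 'http://localhost:8080' }, // becomes TEST_RUNNER_API_BASE_URL
  },
  createMockExecutor({
    success: true,
    output: JSON.stringify({
      title: 'MyApp Tests',
      result: 'SUCCESS',
      totalTestCount: 2,
      passedTests: 2,
      failedTests: 0,
    }),
  }),
  createMockFileSystemExecutor({
    mkdtemp: async () => '/tmp/xcodebuild-test-demo',
    tmpdir: () => '/tmp',
    stat: async () => ({ isFile: () => true }),
    rm: async () => {},
  }),
);
// result.content ends with a "Test Results Summary:" block rendered by formatTestSummary.
```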

--------------------------------------------------------------------------------
/src/mcp/tools/device/__tests__/test_device.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Tests for test_device plugin
  3 |  * Following CLAUDE.md testing standards with literal validation
  4 |  * Using pure dependency injection for deterministic testing
  5 |  * NO VITEST MOCKING ALLOWED - Only createMockExecutor and manual stubs
  6 |  */
  7 | 
  8 | import { describe, it, expect, beforeEach } from 'vitest';
  9 | import { z } from 'zod';
 10 | import {
 11 |   createMockExecutor,
 12 |   createMockFileSystemExecutor,
 13 | } from '../../../../test-utils/mock-executors.ts';
 14 | import testDevice, { testDeviceLogic } from '../test_device.ts';
 15 | import { sessionStore } from '../../../../utils/session-store.ts';
 16 | 
 17 | describe('test_device plugin', () => {
 18 |   beforeEach(() => {
 19 |     sessionStore.clear();
 20 |   });
 21 | 
 22 |   describe('Export Field Validation (Literal)', () => {
 23 |     it('should have correct name', () => {
 24 |       expect(testDevice.name).toBe('test_device');
 25 |     });
 26 | 
 27 |     it('should have correct description', () => {
 28 |       expect(testDevice.description).toBe('Runs tests on a physical Apple device.');
 29 |     });
 30 | 
 31 |     it('should have handler function', () => {
 32 |       expect(typeof testDevice.handler).toBe('function');
 33 |     });
 34 | 
 35 |     it('should expose only session-free fields in public schema', () => {
 36 |       const schema = z.object(testDevice.schema).strict();
 37 |       expect(
 38 |         schema.safeParse({
 39 |           derivedDataPath: '/path/to/derived-data',
 40 |           extraArgs: ['--arg1'],
 41 |           preferXcodebuild: true,
 42 |           platform: 'iOS',
 43 |           testRunnerEnv: { FOO: 'bar' },
 44 |         }).success,
 45 |       ).toBe(true);
 46 |       expect(schema.safeParse({}).success).toBe(true);
 47 |       expect(schema.safeParse({ projectPath: '/path/to/project.xcodeproj' }).success).toBe(false);
 48 | 
 49 |       const schemaKeys = Object.keys(testDevice.schema).sort();
 50 |       expect(schemaKeys).toEqual([
 51 |         'derivedDataPath',
 52 |         'extraArgs',
 53 |         'platform',
 54 |         'preferXcodebuild',
 55 |         'testRunnerEnv',
 56 |       ]);
 57 |     });
 58 | 
 59 |     it('should validate XOR between projectPath and workspacePath', async () => {
 60 |       // This would be validated at the schema level via createTypedTool
 61 |       // We test the schema validation through successful logic calls instead
 62 |       const mockExecutor = createMockExecutor({
 63 |         success: true,
 64 |         output: JSON.stringify({
 65 |           title: 'Test Schema',
 66 |           result: 'SUCCESS',
 67 |           totalTestCount: 1,
 68 |           passedTests: 1,
 69 |           failedTests: 0,
 70 |           skippedTests: 0,
 71 |           expectedFailures: 0,
 72 |         }),
 73 |       });
 74 | 
 75 |       // Valid: project path only
 76 |       const projectResult = await testDeviceLogic(
 77 |         {
 78 |           projectPath: '/path/to/project.xcodeproj',
 79 |           scheme: 'MyScheme',
 80 |           deviceId: 'test-device-123',
 81 |         },
 82 |         mockExecutor,
 83 |         createMockFileSystemExecutor({
 84 |           mkdtemp: async () => '/tmp/xcodebuild-test-123',
 85 |           tmpdir: () => '/tmp',
 86 |           stat: async () => ({ isFile: () => true }),
 87 |           rm: async () => {},
 88 |         }),
 89 |       );
 90 |       expect(projectResult.isError).toBeFalsy();
 91 | 
 92 |       // Valid: workspace path only
 93 |       const workspaceResult = await testDeviceLogic(
 94 |         {
 95 |           workspacePath: '/path/to/workspace.xcworkspace',
 96 |           scheme: 'MyScheme',
 97 |           deviceId: 'test-device-123',
 98 |         },
 99 |         mockExecutor,
100 |         createMockFileSystemExecutor({
101 |           mkdtemp: async () => '/tmp/xcodebuild-test-456',
102 |           tmpdir: () => '/tmp',
103 |           stat: async () => ({ isFile: () => true }),
104 |           rm: async () => {},
105 |         }),
106 |       );
107 |       expect(workspaceResult.isError).toBeFalsy();
108 |     });
109 |   });
110 | 
111 |   describe('Handler Requirements', () => {
112 |     it('should require scheme and device defaults', async () => {
113 |       const result = await testDevice.handler({});
114 | 
115 |       expect(result.isError).toBe(true);
116 |       expect(result.content[0].text).toContain('Missing required session defaults');
117 |       expect(result.content[0].text).toContain('Provide scheme and deviceId');
118 |     });
119 | 
120 |     it('should require project or workspace when defaults provide scheme and device', async () => {
121 |       sessionStore.setDefaults({ scheme: 'MyScheme', deviceId: 'test-device-123' });
122 | 
123 |       const result = await testDevice.handler({});
124 | 
125 |       expect(result.isError).toBe(true);
126 |       expect(result.content[0].text).toContain('Provide a project or workspace');
127 |     });
128 | 
129 |     it('should reject mutually exclusive project inputs when defaults satisfy requirements', async () => {
130 |       sessionStore.setDefaults({ scheme: 'MyScheme', deviceId: 'test-device-123' });
131 | 
132 |       const result = await testDevice.handler({
133 |         projectPath: '/path/to/project.xcodeproj',
134 |         workspacePath: '/path/to/workspace.xcworkspace',
135 |       });
136 | 
137 |       expect(result.isError).toBe(true);
138 |       expect(result.content[0].text).toContain('Parameter validation failed');
139 |       expect(result.content[0].text).toContain('Mutually exclusive parameters provided');
140 |     });
141 |   });
142 | 
143 |   describe('Handler Behavior (Complete Literal Returns)', () => {
144 |     beforeEach(() => {
145 |       // Clean setup for standard testing pattern
146 |     });
147 | 
148 |     it('should return successful test response with parsed results', async () => {
149 |       // Mock xcresulttool output
150 |       const mockExecutor = createMockExecutor({
151 |         success: true,
152 |         output: JSON.stringify({
153 |           title: 'MyScheme Tests',
154 |           result: 'SUCCESS',
155 |           totalTestCount: 5,
156 |           passedTests: 5,
157 |           failedTests: 0,
158 |           skippedTests: 0,
159 |           expectedFailures: 0,
160 |         }),
161 |       });
162 | 
163 |       const result = await testDeviceLogic(
164 |         {
165 |           projectPath: '/path/to/project.xcodeproj',
166 |           scheme: 'MyScheme',
167 |           deviceId: 'test-device-123',
168 |           configuration: 'Debug',
169 |           preferXcodebuild: false,
170 |           platform: 'iOS',
171 |         },
172 |         mockExecutor,
173 |         createMockFileSystemExecutor({
174 |           mkdtemp: async () => '/tmp/xcodebuild-test-123456',
175 |           tmpdir: () => '/tmp',
176 |           stat: async () => ({ isFile: () => true }),
177 |           rm: async () => {},
178 |         }),
179 |       );
180 | 
181 |       expect(result.content).toHaveLength(2);
182 |       expect(result.content[0].text).toContain('✅');
183 |       expect(result.content[1].text).toContain('Test Results Summary:');
184 |       expect(result.content[1].text).toContain('MyScheme Tests');
185 |     });
186 | 
187 |     it('should handle test failure scenarios', async () => {
188 |       // Mock xcresulttool output for failed tests
189 |       const mockExecutor = createMockExecutor({
190 |         success: true,
191 |         output: JSON.stringify({
192 |           title: 'MyScheme Tests',
193 |           result: 'FAILURE',
194 |           totalTestCount: 5,
195 |           passedTests: 3,
196 |           failedTests: 2,
197 |           skippedTests: 0,
198 |           expectedFailures: 0,
199 |           testFailures: [
200 |             {
201 |               testName: 'testExample',
202 |               targetName: 'MyTarget',
203 |               failureText: 'Expected true but was false',
204 |             },
205 |           ],
206 |         }),
207 |       });
208 | 
209 |       const result = await testDeviceLogic(
210 |         {
211 |           projectPath: '/path/to/project.xcodeproj',
212 |           scheme: 'MyScheme',
213 |           deviceId: 'test-device-123',
214 |           configuration: 'Debug',
215 |           preferXcodebuild: false,
216 |           platform: 'iOS',
217 |         },
218 |         mockExecutor,
219 |         createMockFileSystemExecutor({
220 |           mkdtemp: async () => '/tmp/xcodebuild-test-123456',
221 |           tmpdir: () => '/tmp',
222 |           stat: async () => ({ isFile: () => true }),
223 |           rm: async () => {},
224 |         }),
225 |       );
226 | 
227 |       expect(result.content).toHaveLength(2);
228 |       expect(result.content[1].text).toContain('Test Failures:');
229 |       expect(result.content[1].text).toContain('testExample');
230 |     });
231 | 
232 |     it('should handle xcresult parsing failures gracefully', async () => {
233 |       // Create a multi-call mock that handles different commands
234 |       let callCount = 0;
235 |       const mockExecutor = async (args: string[], description: string) => {
236 |         callCount++;
237 | 
238 |         // First call is for xcodebuild test (successful)
239 |         if (callCount === 1) {
240 |           return { success: true, output: 'BUILD SUCCEEDED' };
241 |         }
242 | 
243 |         // Second call is for xcresulttool (fails)
244 |         return { success: false, error: 'xcresulttool failed' };
245 |       };
246 | 
247 |       const result = await testDeviceLogic(
248 |         {
249 |           projectPath: '/path/to/project.xcodeproj',
250 |           scheme: 'MyScheme',
251 |           deviceId: 'test-device-123',
252 |           configuration: 'Debug',
253 |           preferXcodebuild: false,
254 |           platform: 'iOS',
255 |         },
256 |         mockExecutor,
257 |         createMockFileSystemExecutor({
258 |           mkdtemp: async () => '/tmp/xcodebuild-test-123456',
259 |           tmpdir: () => '/tmp',
260 |           stat: async () => {
261 |             throw new Error('File not found');
262 |           },
263 |           rm: async () => {},
264 |         }),
265 |       );
266 | 
267 |       // When xcresult parsing fails, it falls back to original test result only
268 |       expect(result.content).toHaveLength(1);
269 |       expect(result.content[0].text).toContain('✅');
270 |     });
271 | 
272 |     it('should support different platforms', async () => {
273 |       // Mock xcresulttool output
274 |       const mockExecutor = createMockExecutor({
275 |         success: true,
276 |         output: JSON.stringify({
277 |           title: 'WatchApp Tests',
278 |           result: 'SUCCESS',
279 |           totalTestCount: 3,
280 |           passedTests: 3,
281 |           failedTests: 0,
282 |           skippedTests: 0,
283 |           expectedFailures: 0,
284 |         }),
285 |       });
286 | 
287 |       const result = await testDeviceLogic(
288 |         {
289 |           projectPath: '/path/to/project.xcodeproj',
290 |           scheme: 'WatchApp',
291 |           deviceId: 'watch-device-456',
292 |           configuration: 'Debug',
293 |           preferXcodebuild: false,
294 |           platform: 'watchOS',
295 |         },
296 |         mockExecutor,
297 |         createMockFileSystemExecutor({
298 |           mkdtemp: async () => '/tmp/xcodebuild-test-123456',
299 |           tmpdir: () => '/tmp',
300 |           stat: async () => ({ isFile: () => true }),
301 |           rm: async () => {},
302 |         }),
303 |       );
304 | 
305 |       expect(result.content).toHaveLength(2);
306 |       expect(result.content[1].text).toContain('WatchApp Tests');
307 |     });
308 | 
309 |     it('should handle optional parameters', async () => {
310 |       // Mock xcresulttool output
311 |       const mockExecutor = createMockExecutor({
312 |         success: true,
313 |         output: JSON.stringify({
314 |           title: 'Tests',
315 |           result: 'SUCCESS',
316 |           totalTestCount: 1,
317 |           passedTests: 1,
318 |           failedTests: 0,
319 |           skippedTests: 0,
320 |           expectedFailures: 0,
321 |         }),
322 |       });
323 | 
324 |       const result = await testDeviceLogic(
325 |         {
326 |           projectPath: '/path/to/project.xcodeproj',
327 |           scheme: 'MyScheme',
328 |           deviceId: 'test-device-123',
329 |           configuration: 'Release',
330 |           derivedDataPath: '/tmp/derived-data',
331 |           extraArgs: ['--verbose'],
332 |           preferXcodebuild: false,
333 |           platform: 'iOS',
334 |         },
335 |         mockExecutor,
336 |         createMockFileSystemExecutor({
337 |           mkdtemp: async () => '/tmp/xcodebuild-test-123456',
338 |           tmpdir: () => '/tmp',
339 |           stat: async () => ({ isFile: () => true }),
340 |           rm: async () => {},
341 |         }),
342 |       );
343 | 
344 |       expect(result.content).toHaveLength(2);
345 |       expect(result.content[0].text).toContain('✅');
346 |     });
347 | 
348 |     it('should handle workspace testing successfully', async () => {
349 |       // Mock xcresulttool output
350 |       const mockExecutor = createMockExecutor({
351 |         success: true,
352 |         output: JSON.stringify({
353 |           title: 'WorkspaceScheme Tests',
354 |           result: 'SUCCESS',
355 |           totalTestCount: 10,
356 |           passedTests: 10,
357 |           failedTests: 0,
358 |           skippedTests: 0,
359 |           expectedFailures: 0,
360 |         }),
361 |       });
362 | 
363 |       const result = await testDeviceLogic(
364 |         {
365 |           workspacePath: '/path/to/workspace.xcworkspace',
366 |           scheme: 'WorkspaceScheme',
367 |           deviceId: 'test-device-456',
368 |           configuration: 'Debug',
369 |           preferXcodebuild: false,
370 |           platform: 'iOS',
371 |         },
372 |         mockExecutor,
373 |         createMockFileSystemExecutor({
374 |           mkdtemp: async () => '/tmp/xcodebuild-test-workspace-123',
375 |           tmpdir: () => '/tmp',
376 |           stat: async () => ({ isFile: () => true }),
377 |           rm: async () => {},
378 |         }),
379 |       );
380 | 
381 |       expect(result.content).toHaveLength(2);
382 |       expect(result.content[0].text).toContain('✅');
383 |       expect(result.content[1].text).toContain('Test Results Summary:');
384 |       expect(result.content[1].text).toContain('WorkspaceScheme Tests');
385 |     });
386 |   });
387 | });
388 | 
```
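
For readers following the session-default flow exercised above, a hedged sketch of the intended call pattern outside the test harness. Paths, identifiers, and import paths are illustrative; with the real default executor this would actually invoke xcodebuild against a connected device.

```typescript
// Hedged sketch of the session-default flow: defaults supply scheme/deviceId,
// the call supplies only the project path.
import { sessionStore } from './utils/session-store.ts'; // illustrative path
import testDevice from './test_device.ts';

sessionStore.setDefaults({ scheme: 'MyScheme', deviceId: 'test-device-123' });

const response = await testDevice.handler({
  projectPath: '/path/to/project.xcodeproj', // hypothetical project
});
// With defaults satisfied the handler proceeds to run the tests; without them it
// returns the "Missing required session defaults" error asserted in the tests above.
```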

--------------------------------------------------------------------------------
/src/mcp/tools/swift-package/__tests__/swift_package_run.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Tests for swift_package_run plugin
  3 |  * Following CLAUDE.md testing standards with literal validation
  4 |  * Integration tests using dependency injection for deterministic testing
  5 |  */
  6 | 
  7 | import { describe, it, expect, beforeEach } from 'vitest';
  8 | import { z } from 'zod';
  9 | import { createMockExecutor, createNoopExecutor } from '../../../../test-utils/mock-executors.ts';
 10 | import swiftPackageRun, { swift_package_runLogic } from '../swift_package_run.ts';
 11 | 
 12 | describe('swift_package_run plugin', () => {
 13 |   describe('Export Field Validation (Literal)', () => {
 14 |     it('should have correct name', () => {
 15 |       expect(swiftPackageRun.name).toBe('swift_package_run');
 16 |     });
 17 | 
 18 |     it('should have correct description', () => {
 19 |       expect(swiftPackageRun.description).toBe(
 20 |         'Runs an executable target from a Swift Package with swift run',
 21 |       );
 22 |     });
 23 | 
 24 |     it('should have handler function', () => {
 25 |       expect(typeof swiftPackageRun.handler).toBe('function');
 26 |     });
 27 | 
 28 |     it('should validate schema correctly', () => {
 29 |       // Test packagePath (required string)
 30 |       expect(swiftPackageRun.schema.packagePath.safeParse('valid/path').success).toBe(true);
 31 |       expect(swiftPackageRun.schema.packagePath.safeParse(null).success).toBe(false);
 32 | 
 33 |       // Test executableName (optional string)
 34 |       expect(swiftPackageRun.schema.executableName.safeParse('MyExecutable').success).toBe(true);
 35 |       expect(swiftPackageRun.schema.executableName.safeParse(undefined).success).toBe(true);
 36 |       expect(swiftPackageRun.schema.executableName.safeParse(123).success).toBe(false);
 37 | 
 38 |       // Test arguments (optional array of strings)
 39 |       expect(swiftPackageRun.schema.arguments.safeParse(['arg1', 'arg2']).success).toBe(true);
 40 |       expect(swiftPackageRun.schema.arguments.safeParse(undefined).success).toBe(true);
 41 |       expect(swiftPackageRun.schema.arguments.safeParse(['arg1', 123]).success).toBe(false);
 42 | 
 43 |       // Test configuration (optional enum)
 44 |       expect(swiftPackageRun.schema.configuration.safeParse('debug').success).toBe(true);
 45 |       expect(swiftPackageRun.schema.configuration.safeParse('release').success).toBe(true);
 46 |       expect(swiftPackageRun.schema.configuration.safeParse(undefined).success).toBe(true);
 47 |       expect(swiftPackageRun.schema.configuration.safeParse('invalid').success).toBe(false);
 48 | 
 49 |       // Test timeout (optional number)
 50 |       expect(swiftPackageRun.schema.timeout.safeParse(30).success).toBe(true);
 51 |       expect(swiftPackageRun.schema.timeout.safeParse(undefined).success).toBe(true);
 52 |       expect(swiftPackageRun.schema.timeout.safeParse('30').success).toBe(false);
 53 | 
 54 |       // Test background (optional boolean)
 55 |       expect(swiftPackageRun.schema.background.safeParse(true).success).toBe(true);
 56 |       expect(swiftPackageRun.schema.background.safeParse(false).success).toBe(true);
 57 |       expect(swiftPackageRun.schema.background.safeParse(undefined).success).toBe(true);
 58 |       expect(swiftPackageRun.schema.background.safeParse('true').success).toBe(false);
 59 | 
 60 |       // Test parseAsLibrary (optional boolean)
 61 |       expect(swiftPackageRun.schema.parseAsLibrary.safeParse(true).success).toBe(true);
 62 |       expect(swiftPackageRun.schema.parseAsLibrary.safeParse(false).success).toBe(true);
 63 |       expect(swiftPackageRun.schema.parseAsLibrary.safeParse(undefined).success).toBe(true);
 64 |       expect(swiftPackageRun.schema.parseAsLibrary.safeParse('true').success).toBe(false);
 65 |     });
 66 |   });
 67 | 
 68 |   let executorCalls: any[] = [];
 69 | 
 70 |   beforeEach(() => {
 71 |     executorCalls = [];
 72 |   });
 73 | 
 74 |   describe('Command Generation Testing', () => {
 75 |     it('should build correct command for basic run (foreground mode)', async () => {
 76 |       const mockExecutor = (
 77 |         command: string[],
 78 |         logPrefix?: string,
 79 |         useShell?: boolean,
 80 |         env?: any,
 81 |       ) => {
 82 |         executorCalls.push({ command, logPrefix, useShell, env });
 83 |         return Promise.resolve({
 84 |           success: true,
 85 |           output: 'Process completed',
 86 |           error: undefined,
 87 |           process: { pid: 12345 },
 88 |         });
 89 |       };
 90 | 
 91 |       await swift_package_runLogic(
 92 |         {
 93 |           packagePath: '/test/package',
 94 |         },
 95 |         mockExecutor,
 96 |       );
 97 | 
 98 |       expect(executorCalls[0]).toEqual({
 99 |         command: ['swift', 'run', '--package-path', '/test/package'],
100 |         logPrefix: 'Swift Package Run',
101 |         useShell: true,
102 |         env: undefined,
103 |       });
104 |     });
105 | 
106 |     it('should build correct command with release configuration', async () => {
107 |       const mockExecutor = (
108 |         command: string[],
109 |         logPrefix?: string,
110 |         useShell?: boolean,
111 |         env?: any,
112 |       ) => {
113 |         executorCalls.push({ command, logPrefix, useShell, env });
114 |         return Promise.resolve({
115 |           success: true,
116 |           output: 'Process completed',
117 |           error: undefined,
118 |           process: { pid: 12345 },
119 |         });
120 |       };
121 | 
122 |       await swift_package_runLogic(
123 |         {
124 |           packagePath: '/test/package',
125 |           configuration: 'release',
126 |         },
127 |         mockExecutor,
128 |       );
129 | 
130 |       expect(executorCalls[0]).toEqual({
131 |         command: ['swift', 'run', '--package-path', '/test/package', '-c', 'release'],
132 |         logPrefix: 'Swift Package Run',
133 |         useShell: true,
134 |         env: undefined,
135 |       });
136 |     });
137 | 
138 |     it('should build correct command with executable name', async () => {
139 |       const mockExecutor = (
140 |         command: string[],
141 |         logPrefix?: string,
142 |         useShell?: boolean,
143 |         env?: any,
144 |       ) => {
145 |         executorCalls.push({ command, logPrefix, useShell, env });
146 |         return Promise.resolve({
147 |           success: true,
148 |           output: 'Process completed',
149 |           error: undefined,
150 |           process: { pid: 12345 },
151 |         });
152 |       };
153 | 
154 |       await swift_package_runLogic(
155 |         {
156 |           packagePath: '/test/package',
157 |           executableName: 'MyApp',
158 |         },
159 |         mockExecutor,
160 |       );
161 | 
162 |       expect(executorCalls[0]).toEqual({
163 |         command: ['swift', 'run', '--package-path', '/test/package', 'MyApp'],
164 |         logPrefix: 'Swift Package Run',
165 |         useShell: true,
166 |         env: undefined,
167 |       });
168 |     });
169 | 
170 |     it('should build correct command with arguments', async () => {
171 |       const mockExecutor = (
172 |         command: string[],
173 |         logPrefix?: string,
174 |         useShell?: boolean,
175 |         env?: any,
176 |       ) => {
177 |         executorCalls.push({ command, logPrefix, useShell, env });
178 |         return Promise.resolve({
179 |           success: true,
180 |           output: 'Process completed',
181 |           error: undefined,
182 |           process: { pid: 12345 },
183 |         });
184 |       };
185 | 
186 |       await swift_package_runLogic(
187 |         {
188 |           packagePath: '/test/package',
189 |           arguments: ['arg1', 'arg2'],
190 |         },
191 |         mockExecutor,
192 |       );
193 | 
194 |       expect(executorCalls[0]).toEqual({
195 |         command: ['swift', 'run', '--package-path', '/test/package', '--', 'arg1', 'arg2'],
196 |         logPrefix: 'Swift Package Run',
197 |         useShell: true,
198 |         env: undefined,
199 |       });
200 |     });
201 | 
202 |     it('should build correct command with parseAsLibrary flag', async () => {
203 |       const mockExecutor = (
204 |         command: string[],
205 |         logPrefix?: string,
206 |         useShell?: boolean,
207 |         env?: any,
208 |       ) => {
209 |         executorCalls.push({ command, logPrefix, useShell, env });
210 |         return Promise.resolve({
211 |           success: true,
212 |           output: 'Process completed',
213 |           error: undefined,
214 |           process: { pid: 12345 },
215 |         });
216 |       };
217 | 
218 |       await swift_package_runLogic(
219 |         {
220 |           packagePath: '/test/package',
221 |           parseAsLibrary: true,
222 |         },
223 |         mockExecutor,
224 |       );
225 | 
226 |       expect(executorCalls[0]).toEqual({
227 |         command: [
228 |           'swift',
229 |           'run',
230 |           '--package-path',
231 |           '/test/package',
232 |           '-Xswiftc',
233 |           '-parse-as-library',
234 |         ],
235 |         logPrefix: 'Swift Package Run',
236 |         useShell: true,
237 |         env: undefined,
238 |       });
239 |     });
240 | 
241 |     it('should build correct command with all parameters', async () => {
242 |       const mockExecutor = (
243 |         command: string[],
244 |         logPrefix?: string,
245 |         useShell?: boolean,
246 |         env?: any,
247 |       ) => {
248 |         executorCalls.push({ command, logPrefix, useShell, env });
249 |         return Promise.resolve({
250 |           success: true,
251 |           output: 'Process completed',
252 |           error: undefined,
253 |           process: { pid: 12345 },
254 |         });
255 |       };
256 | 
257 |       await swift_package_runLogic(
258 |         {
259 |           packagePath: '/test/package',
260 |           executableName: 'MyApp',
261 |           configuration: 'release',
262 |           arguments: ['arg1'],
263 |           parseAsLibrary: true,
264 |         },
265 |         mockExecutor,
266 |       );
267 | 
268 |       expect(executorCalls[0]).toEqual({
269 |         command: [
270 |           'swift',
271 |           'run',
272 |           '--package-path',
273 |           '/test/package',
274 |           '-c',
275 |           'release',
276 |           '-Xswiftc',
277 |           '-parse-as-library',
278 |           'MyApp',
279 |           '--',
280 |           'arg1',
281 |         ],
282 |         logPrefix: 'Swift Package Run',
283 |         useShell: true,
284 |         env: undefined,
285 |       });
286 |     });
287 | 
288 |     it('should not call executor for background mode', async () => {
289 |       // For background mode, no executor should be called since it uses direct spawn
290 |       const mockExecutor = createNoopExecutor();
291 | 
292 |       const result = await swift_package_runLogic(
293 |         {
294 |           packagePath: '/test/package',
295 |           background: true,
296 |         },
297 |         mockExecutor,
298 |       );
299 | 
300 |       // Should return success without calling executor
301 |       expect(result.content[0].text).toContain('🚀 Started executable in background');
302 |     });
303 |   });
304 | 
305 |   describe('Response Logic Testing', () => {
306 |     it('should return validation error for missing packagePath', async () => {
307 |       // Since the tool now uses createTypedTool, Zod validation happens at the handler level
308 |       // Test the handler directly to see Zod validation
309 |       const result = await swiftPackageRun.handler({});
310 | 
311 |       expect(result).toEqual({
312 |         content: [
313 |           {
314 |             type: 'text',
315 |             text: 'Error: Parameter validation failed\nDetails: Invalid parameters:\npackagePath: Required',
316 |           },
317 |         ],
318 |         isError: true,
319 |       });
320 |     });
321 | 
322 |     it('should return success response for background mode', async () => {
323 |       const mockExecutor = createNoopExecutor();
324 |       const result = await swift_package_runLogic(
325 |         {
326 |           packagePath: '/test/package',
327 |           background: true,
328 |         },
329 |         mockExecutor,
330 |       );
331 | 
332 |       expect(result.content[0].text).toContain('🚀 Started executable in background');
333 |       expect(result.content[0].text).toContain('💡 Process is running independently');
334 |     });
335 | 
336 |     it('should return success response for successful execution', async () => {
337 |       const mockExecutor = createMockExecutor({
338 |         success: true,
339 |         output: 'Hello, World!',
340 |       });
341 | 
342 |       const result = await swift_package_runLogic(
343 |         {
344 |           packagePath: '/test/package',
345 |         },
346 |         mockExecutor,
347 |       );
348 | 
349 |       expect(result).toEqual({
350 |         content: [
351 |           { type: 'text', text: '✅ Swift executable completed successfully.' },
352 |           { type: 'text', text: '💡 Process finished cleanly. Check output for results.' },
353 |           { type: 'text', text: 'Hello, World!' },
354 |         ],
355 |       });
356 |     });
357 | 
358 |     it('should return error response for failed execution', async () => {
359 |       const mockExecutor = createMockExecutor({
360 |         success: false,
361 |         output: '',
362 |         error: 'Compilation failed',
363 |       });
364 | 
365 |       const result = await swift_package_runLogic(
366 |         {
367 |           packagePath: '/test/package',
368 |         },
369 |         mockExecutor,
370 |       );
371 | 
372 |       expect(result).toEqual({
373 |         content: [
374 |           { type: 'text', text: '❌ Swift executable failed.' },
375 |           { type: 'text', text: '(no output)' },
376 |           { type: 'text', text: 'Errors:\nCompilation failed' },
377 |         ],
378 |       });
379 |     });
380 | 
381 |     it('should handle executor error', async () => {
382 |       const mockExecutor = createMockExecutor(new Error('Command not found'));
383 | 
384 |       const result = await swift_package_runLogic(
385 |         {
386 |           packagePath: '/test/package',
387 |         },
388 |         mockExecutor,
389 |       );
390 | 
391 |       expect(result).toEqual({
392 |         content: [
393 |           {
394 |             type: 'text',
395 |             text: 'Error: Failed to execute swift run\nDetails: Command not found',
396 |           },
397 |         ],
398 |         isError: true,
399 |       });
400 |     });
401 |   });
402 | });
403 | 
```

--------------------------------------------------------------------------------
/docs/TOOLS.md:
--------------------------------------------------------------------------------

```markdown
  1 | # XcodeBuildMCP Tools Reference
  2 | 
  3 | XcodeBuildMCP provides 61 tools organized into 12 workflow groups for comprehensive Apple development workflows.
  4 | 
  5 | ## Workflow Groups
  6 | 
  7 | ### Dynamic Tool Discovery (`discovery`)
  8 | **Purpose**: Intelligent discovery and recommendation of appropriate development workflows based on project structure and requirements (1 tool)
  9 | 
 10 | - `discover_tools` - Analyzes a natural language task description and enables the most relevant development workflow. Prioritizes project/workspace workflows (simulator/device/macOS) and also supports task-based workflows (simulator-management, logging) and Swift packages.
 11 | ### iOS Device Development (`device`)
 12 | **Purpose**: Complete iOS development workflow for both .xcodeproj and .xcworkspace files targeting physical devices (iPhone, iPad, Apple Watch, Apple TV, Apple Vision Pro). Build, test, deploy, and debug apps on real hardware. (7 tools)
 13 | 
 14 | - `build_device` - Builds an app from a project or workspace for a physical Apple device. Provide exactly one of projectPath or workspacePath. Example: build_device({ projectPath: '/path/to/MyProject.xcodeproj', scheme: 'MyScheme' })
 15 | - `get_device_app_path` - Gets the app bundle path for a physical device application (iOS, watchOS, tvOS, visionOS) using either a project or workspace. Provide exactly one of projectPath or workspacePath. Example: get_device_app_path({ projectPath: '/path/to/project.xcodeproj', scheme: 'MyScheme' })
 16 | - `install_app_device` - Installs an app on a physical Apple device (iPhone, iPad, Apple Watch, Apple TV, Apple Vision Pro). Requires deviceId and appPath.
 17 | - `launch_app_device` - Launches an app on a physical Apple device (iPhone, iPad, Apple Watch, Apple TV, Apple Vision Pro). Requires deviceId and bundleId.
 18 | - `list_devices` - Lists connected physical Apple devices (iPhone, iPad, Apple Watch, Apple TV, Apple Vision Pro) with their UUIDs, names, and connection status. Use this to discover physical devices for testing.
 19 | - `stop_app_device` - Stops an app running on a physical Apple device (iPhone, iPad, Apple Watch, Apple TV, Apple Vision Pro). Requires deviceId and processId.
 20 | - `test_device` - Runs tests for an Apple project or workspace on a physical device (iPhone, iPad, Apple Watch, Apple TV, Apple Vision Pro) using xcodebuild test and parses xcresult output. Provide exactly one of projectPath or workspacePath.
 21 | ### iOS Simulator Development (`simulator`)
 22 | **Purpose**: Complete iOS development workflow for both .xcodeproj and .xcworkspace files targeting simulators. Build, test, deploy, and interact with iOS apps on simulators. (12 tools)
 23 | 
 24 | - `boot_sim` - Boots an iOS simulator. After booting, use open_sim() to make the simulator visible.
 25 | - `build_run_sim` - Builds and runs an app from a project or workspace on a specific simulator by UUID or name. Provide exactly one of projectPath or workspacePath, and exactly one of simulatorId or simulatorName.
 26 | - `build_sim` - Builds an app from a project or workspace for a specific simulator by UUID or name. Provide exactly one of projectPath or workspacePath, and exactly one of simulatorId or simulatorName.
 27 | - `get_sim_app_path` - Gets the app bundle path for a simulator by UUID or name using either a project or workspace file.
 28 | - `install_app_sim` - Installs an app in an iOS simulator.
 29 | - `launch_app_logs_sim` - Launches an app in an iOS simulator and captures its logs.
 30 | - `launch_app_sim` - Launches an app in an iOS simulator by UUID or name. If the simulator window isn't visible, use open_sim() first. Example: launch_app_sim({ simulatorName: 'iPhone 16', bundleId: 'com.example.MyApp' })
 31 | - `list_sims` - Lists available iOS simulators with their UUIDs.
 32 | - `open_sim` - Opens the iOS Simulator app.
 33 | - `record_sim_video` - Starts or stops video capture for an iOS simulator using AXe. Provide exactly one of start=true or stop=true. On stop, outputFile is required. fps defaults to 30.
 34 | - `stop_app_sim` - Stops an app running in an iOS simulator by UUID or name. Example: stop_app_sim({ simulatorName: "iPhone 16", bundleId: "com.example.MyApp" })
 35 | - `test_sim` - Runs tests on a simulator by UUID or name using xcodebuild test and parses xcresult output. Works with both Xcode projects (.xcodeproj) and workspaces (.xcworkspace).
 36 | ### Log Capture & Management (`logging`)
 37 | **Purpose**: Log capture and management tools for iOS simulators and physical devices. Start, stop, and analyze application and system logs during development and testing. (4 tools)
 38 | 
 39 | - `start_device_log_cap` - Starts capturing logs from a specified Apple device (iPhone, iPad, Apple Watch, Apple TV, Apple Vision Pro) by launching the app with console output. Returns a session ID.
 40 | - `start_sim_log_cap` - Starts capturing logs from a specified simulator. Returns a session ID. By default, captures only structured logs.
 41 | - `stop_device_log_cap` - Stops an active Apple device log capture session and returns the captured logs.
 42 | - `stop_sim_log_cap` - Stops an active simulator log capture session and returns the captured logs.
 43 | ### macOS Development (`macos`)
 44 | **Purpose**: Complete macOS development workflow for both .xcodeproj and .xcworkspace files. Build, test, deploy, and manage macOS applications. (6 tools)
 45 | 
 46 | - `build_macos` - Builds a macOS app using xcodebuild from a project or workspace. Provide exactly one of projectPath or workspacePath. Example: build_macos({ projectPath: '/path/to/MyProject.xcodeproj', scheme: 'MyScheme' })
 47 | - `build_run_macos` - Builds and runs a macOS app from a project or workspace in one step. Provide exactly one of projectPath or workspacePath. Example: build_run_macos({ projectPath: '/path/to/MyProject.xcodeproj', scheme: 'MyScheme' })
 48 | - `get_mac_app_path` - Gets the app bundle path for a macOS application using either a project or workspace. Provide exactly one of projectPath or workspacePath. Example: get_mac_app_path({ projectPath: '/path/to/project.xcodeproj', scheme: 'MyScheme' })
 49 | - `launch_mac_app` - Launches a macOS application. Note: In some environments, this tool may be prefixed as mcp0_launch_macos_app.
 50 | - `stop_mac_app` - Stops a running macOS application. Can stop by app name or process ID.
 51 | - `test_macos` - Runs tests for a macOS project or workspace using xcodebuild test and parses xcresult output. Provide exactly one of projectPath or workspacePath.
 52 | ### Project Discovery (`project-discovery`)
 53 | **Purpose**: Discover and examine Xcode projects, workspaces, and Swift packages. Analyze project structure, schemes, build settings, and bundle information. (5 tools)
 54 | 
 55 | - `discover_projs` - Scans a directory (defaults to workspace root) to find Xcode project (.xcodeproj) and workspace (.xcworkspace) files.
 56 | - `get_app_bundle_id` - Extracts the bundle identifier from an app bundle (.app) for any Apple platform (iOS, iPadOS, watchOS, tvOS, visionOS).
 57 | - `get_mac_bundle_id` - Extracts the bundle identifier from a macOS app bundle (.app). Note: In some environments, this tool may be prefixed as mcp0_get_macos_bundle_id.
 58 | - `list_schemes` - Lists available schemes for either a project or a workspace. Provide exactly one of projectPath or workspacePath. Example: list_schemes({ projectPath: '/path/to/MyProject.xcodeproj' })
 59 | - `show_build_settings` - Shows build settings from either a project or workspace using xcodebuild. Provide exactly one of projectPath or workspacePath, plus scheme. Example: show_build_settings({ projectPath: '/path/to/MyProject.xcodeproj', scheme: 'MyScheme' })
 60 | ### Project Scaffolding (`project-scaffolding`)
 61 | **Purpose**: Tools for creating new iOS and macOS projects from templates. Bootstrap new applications with best practices, standard configurations, and modern project structures. (2 tools)
 62 | 
 63 | - `scaffold_ios_project` - Scaffold a new iOS project from templates. Creates a modern Xcode project with workspace structure, SPM package for features, and proper iOS configuration.
 64 | - `scaffold_macos_project` - Scaffold a new macOS project from templates. Creates a modern Xcode project with workspace structure, SPM package for features, and proper macOS configuration.
 65 | ### Project Utilities (`utilities`)
 66 | **Purpose**: Essential project maintenance utilities for cleaning and managing existing projects. Provides clean operations for both .xcodeproj and .xcworkspace files. (1 tool)
 67 | 
 68 | - `clean` - Cleans build products for either a project or a workspace using xcodebuild. Provide exactly one of projectPath or workspacePath. Platform defaults to iOS if not specified. Example: clean({ projectPath: '/path/to/MyProject.xcodeproj', scheme: 'MyScheme', platform: 'iOS' })
 69 | ### Simulator Management (`simulator-management`)
 70 | **Purpose**: Tools for managing simulators: booting, opening, listing, and stopping simulators, erasing simulator content and settings, and setting simulator environment options such as location, network, status bar, and appearance. (5 tools)
 71 | 
 72 | - `erase_sims` - Erases simulator content and settings. Provide exactly one of: simulatorUdid or all=true. Optional: shutdownFirst to shut down before erasing.
 73 | - `reset_sim_location` - Resets the simulator's location to default.
 74 | - `set_sim_appearance` - Sets the appearance mode (dark/light) of an iOS simulator.
 75 | - `set_sim_location` - Sets a custom GPS location for the simulator.
 76 | - `sim_statusbar` - Sets the data network indicator in the iOS simulator status bar. Use "clear" to reset all overrides, or specify a network type (hide, wifi, 3g, 4g, lte, lte-a, lte+, 5g, 5g+, 5g-uwb, 5g-uc).
 77 | ### Swift Package Manager (`swift-package`)
 78 | **Purpose**: Swift Package Manager operations for building, testing, running, and managing Swift packages and dependencies. Complete SPM workflow support. (6 tools)
 79 | 
 80 | - `swift_package_build` - Builds a Swift Package with swift build
 81 | - `swift_package_clean` - Cleans Swift Package build artifacts and derived data
 82 | - `swift_package_list` - Lists currently running Swift Package processes
 83 | - `swift_package_run` - Runs an executable target from a Swift Package with swift run
 84 | - `swift_package_stop` - Stops a running Swift Package executable started with swift_package_run
 85 | - `swift_package_test` - Runs tests for a Swift Package with swift test
 86 | ### System Doctor (`doctor`)
 87 | **Purpose**: Debug tools and system doctor for troubleshooting XcodeBuildMCP server, development environment, and tool availability. (1 tools)
 88 | 
 89 | - `doctor` - Provides comprehensive information about the MCP server environment, available dependencies, and configuration status.
 90 | ### UI Testing & Automation (`ui-testing`)
 91 | **Purpose**: UI automation and accessibility testing tools for iOS simulators. Perform gestures, interactions, screenshots, and UI analysis for automated testing workflows. (11 tools)
 92 | 
 93 | - `button` - Press hardware button on iOS simulator. Supported buttons: apple-pay, home, lock, side-button, siri
 94 | - `describe_ui` - Gets entire view hierarchy with precise frame coordinates (x, y, width, height) for all visible elements. Use this before UI interactions or after layout changes - do NOT guess coordinates from screenshots. Returns JSON tree with frame data for accurate automation.
 95 | - `gesture` - Perform gesture on iOS simulator using preset gestures: scroll-up, scroll-down, scroll-left, scroll-right, swipe-from-left-edge, swipe-from-right-edge, swipe-from-top-edge, swipe-from-bottom-edge
 96 | - `key_press` - Press a single key by keycode on the simulator. Common keycodes: 40=Return, 42=Backspace, 43=Tab, 44=Space, 58-67=F1-F10.
 97 | - `key_sequence` - Press key sequence using HID keycodes on iOS simulator with configurable delay
 98 | - `long_press` - Long press at specific coordinates for given duration (ms). Use describe_ui for precise coordinates (don't guess from screenshots).
 99 | - `screenshot` - Captures screenshot for visual verification. For UI coordinates, use describe_ui instead (don't determine coordinates from screenshots).
100 | - `swipe` - Swipe from one point to another. Use describe_ui for precise coordinates (don't guess from screenshots). Supports configurable timing.
101 | - `tap` - Tap at specific coordinates. Use describe_ui to get precise element coordinates (don't guess from screenshots). Supports optional timing delays.
102 | - `touch` - Perform touch down/up events at specific coordinates. Use describe_ui for precise coordinates (don't guess from screenshots).
103 | - `type_text` - Type text (supports US keyboard characters). Use describe_ui to find text field, tap to focus, then type.
104 | 
105 | ## Summary Statistics
106 | 
107 | - **Total Tools**: 61 canonical tools + 22 re-exports = 83 total
108 | - **Workflow Groups**: 12
109 | 
110 | ---
111 | 
112 | *This documentation is automatically generated by `scripts/update-tools-docs.ts` using static analysis. Last updated: 2025-09-22*
113 | 
```
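
The descriptions above double as a calling convention: each tool takes a single JSON object whose fields follow the "provide exactly one of …" rules in its description. The sketch below is a hypothetical client-side helper, not part of XcodeBuildMCP: `callTool` stands in for however your MCP client issues tool calls (for example, the MCP TypeScript SDK's `Client.callTool`), and any parameter not spelled out in the descriptions above (such as the simulator identifier passed to `describe_ui` and `tap`) is a placeholder to be checked against the tool's actual schema.

```typescript
// Hypothetical stand-in for an MCP client call; real clients send these over
// their MCP transport (for example, the MCP TypeScript SDK's Client.callTool).
// Only the tool names and the documented parameters come from the reference above.
declare function callTool(name: string, args: Record<string, unknown>): Promise<unknown>;

// A typical simulator loop assembled from the documented parameters: build and
// run, inspect the accessibility hierarchy, then interact by coordinates.
async function simulatorSmokeTest(): Promise<void> {
  // Exactly one of projectPath/workspacePath, and one of simulatorId/simulatorName.
  await callTool('build_run_sim', {
    projectPath: '/path/to/MyProject.xcodeproj',
    scheme: 'MyScheme',
    simulatorName: 'iPhone 16',
  });

  // Get frame coordinates from the accessibility tree rather than guessing from
  // screenshots, as the describe_ui description advises. The simulator identifier
  // field is a placeholder; check the tool's schema for its exact name.
  await callTool('describe_ui', {});

  // Tap a frame taken from the describe_ui output (coordinates are placeholders,
  // and the simulator identifier again follows the tool's schema).
  await callTool('tap', { x: 100, y: 200 });
}
```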