/models

Overview

The /models command opens the model selection interface where you can view available AI models grouped by provider and select which model Apex uses for penetration testing.

Usage

/models

How It Works

The models interface displays all available models from your configured providers, organized by provider name. You can navigate through the list and select your preferred model.

When no model is explicitly selected, Apex automatically picks a default based on your configured provider, prioritized as: Pensar → Anthropic → OpenAI → Google → OpenRouter → Bedrock.

Only models from configured providers are shown. Use /providers to set up additional AI providers, or /auth to connect via Pensar Console.
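The fallback order above amounts to a first-match walk down the priority list. Here is an illustrative sketch of that logic, not Apex's actual code; the `CONFIGURED_PROVIDERS` variable and `default_provider` helper are hypothetical stand-ins for whatever Apex reads from its provider config.

```shell
#!/bin/sh
# Illustrative sketch of the default-provider fallback -- not Apex's real code.
# CONFIGURED_PROVIDERS is a hypothetical stand-in for the saved provider config.
CONFIGURED_PROVIDERS="openai google"

default_provider() {
  # Walk the documented priority order; return the first configured provider.
  for p in pensar anthropic openai google openrouter bedrock; do
    case " $CONFIGURED_PROVIDERS " in
      *" $p "*) echo "$p"; return 0 ;;
    esac
  done
  echo "none"
  return 1
}

default_provider   # with the sample config above, prints "openai"
```

With only OpenAI and Google configured, the higher-priority Pensar and Anthropic entries are skipped and OpenAI wins.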

Model selection screen — models grouped by provider

Interface

When you open /models, you’ll see:

  • Provider sections: Models grouped by provider (Anthropic, OpenAI, Google, etc.)
  • Model list: Available models under each provider
  • Selection indicator: Current model highlighted with a bullet (●)
  • Show more/less: Expand to see all models if a provider has many

Keyboard Navigation

Key        Action
[↑/↓]      Navigate between models
[Enter]    Select highlighted model
[Ctrl+P]   Open provider management
[ESC]      Close and return home

Supported AI Providers

OpenAI

Support for GPT and o-series models from OpenAI.

Available models include:

  • GPT-4.1 / GPT-4.1 mini
  • GPT-4o / GPT-4o mini
  • o3 / o3-mini / o4-mini
  • And more…

Google

Gemini models via Google’s Generative AI API.

Available models include:

  • Gemini 2.5 Pro / Flash
  • Gemini 2.0 Flash
  • And more…

OpenRouter

Access to models from multiple providers through a unified API.

Benefits:

  • Single API for multiple providers
  • Pay-per-use pricing
  • Access to latest models from Anthropic, OpenAI, Google, Meta, and more

AWS Bedrock

Enterprise-grade AI through AWS infrastructure.

Available models:

  • Claude models via Bedrock
  • Amazon Nova models
  • Meta Llama models
  • And more…

Requires AWS credentials and Bedrock model access in your region.

Pensar

Managed inference through Pensar Console — no API key required.

Available models:

  • Claude Opus 4.6 (default for Pensar)
  • Claude Sonnet 4.5 / Opus 4.5
  • Claude Sonnet 4.0 / Opus 4.0
  • Claude Haiku 4.5
  • And more…

Connect via /auth in Apex.

Default Model Per Provider

When you haven’t explicitly selected a model, Apex automatically uses the flagship model for your highest-priority configured provider:

Provider      Default Model
Pensar        Claude Opus 4.6 (Pensar)
Anthropic     Claude Opus 4.6
OpenAI        GPT-5.2 Pro
Google        Gemini 3.1 Pro Preview
OpenRouter    Claude Opus 4.6 (OpenRouter)

This also applies to the headless CLI commands (pensar pentest, pensar targeted-pentest) when the --model flag is not provided.
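That flag-or-default behavior can be sketched as a simple substitution. The `resolve_model` helper and the model ID string below are assumptions for illustration, not Apex's real identifiers; only the `--model` flag name comes from the docs.

```shell
#!/bin/sh
# Sketch of --model fallback for headless runs. resolve_model and the
# model ID below are illustrative assumptions, not Apex's real identifiers.
resolve_model() {
  # $1: value passed via --model, possibly empty
  if [ -n "$1" ]; then
    echo "$1"               # explicit flag wins
  else
    echo "claude-opus-4-6"  # assumed ID for the top provider's flagship default
  fi
}

resolve_model ""            # no flag: falls back to the provider default
resolve_model "gpt-4.1"     # flag given: used as-is
```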

Setting Up Providers

If no providers are configured, the models screen will show:

No providers configured.
Press Ctrl+P to connect a provider.

Click “Connect provider” or press Ctrl+P to open the provider management interface.

Model Selection

  1. Open Models: Run /models to open the model selection interface
  2. Navigate: Use [↑/↓] to highlight your preferred model
  3. Select: Press [Enter] to select the highlighted model
  4. Confirm: The interface closes and your selected model is now active

Model Comparison

Claude Sonnet 4.5 (Recommended)

  • Best for: All penetration testing
  • Speed: Fast
  • Capability: Excellent
  • Cost: Moderate

Claude Opus 4.5 (Maximum Capability)

  • Best for: Complex, thorough tests
  • Speed: Moderate
  • Capability: Maximum
  • Cost: Higher

GPT-4.1 (Alternative Option)

  • Best for: OpenAI users
  • Speed: Fast
  • Capability: Very Good
  • Cost: Moderate

Claude Haiku 4.5 (Budget Option)

  • Best for: Quick tests, development
  • Speed: Very Fast
  • Capability: Good
  • Cost: Lower

Performance Considerations

Model performance impacts:

  • Reasoning quality: How well the AI understands security concepts
  • Context handling: Ability to track complex test scenarios
  • Speed: Time to complete testing runs
  • Cost: Per-token or subscription costs
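For the cost point, a back-of-envelope estimate is simply tokens × the per-million-token rate. Both numbers below are placeholders, not real prices; substitute your run size and your provider's published rate.

```shell
#!/bin/sh
# Rough cost estimate: tokens x (rate per million tokens).
# Both values are placeholders -- check your provider's pricing page.
tokens=250000
rate_per_million=3   # hypothetical USD per 1M tokens
awk -v t="$tokens" -v r="$rate_per_million" \
  'BEGIN { printf "$%.2f\n", t / 1000000 * r }'
```

With the placeholder numbers above, a 250k-token run at $3 per million tokens comes out to $0.75.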

When to Use Each Model

Use Case                      Recommended Model
Production security audits    Claude Sonnet 4.5 or Opus 4.5
Quick development testing     Claude Haiku 4.5
Enterprise with AWS           Claude via Bedrock
Budget-conscious testing      Haiku or GPT-4o mini

Troubleshooting

Issue: No models showing in the selection

Solution:

  1. Press Ctrl+P to open provider management
  2. Configure at least one AI provider with a valid API key
  3. Return to /models to see available models

Issue: Configured provider doesn’t show any models

Solution:

  1. Verify your API key is valid
  2. Check your account has credits/access
  3. For AWS Bedrock, ensure model access is enabled
  4. Try reconfiguring the provider in /providers

Issue: Selected model doesn’t persist

Solution:

  1. Ensure you press [Enter] to confirm selection
  2. Check that you have write permissions to config directory
  3. Restart Apex and try again