/models

Overview

The /models command opens the models configuration interface where you can select and configure which AI model Apex uses for penetration testing. Different models have different capabilities, performance characteristics, and costs.

Usage

$/models

Supported AI Providers

OpenAI

Support for GPT models from OpenAI.

Available Models:

  • GPT-4 - High capability, slower
  • GPT-4 Turbo - Faster, more cost-effective
  • GPT-3.5 Turbo - Budget option

Setup:

$export OPENAI_API_KEY="your-api-key-here"

AWS Bedrock

Enterprise-grade AI through AWS infrastructure.

Available Models:

  • Anthropic Claude (via Bedrock)
  • Other Bedrock-supported models

Setup:

$export AWS_ACCESS_KEY_ID="your-access-key"
$export AWS_SECRET_ACCESS_KEY="your-secret-key"
$export AWS_REGION="us-east-1"

Ensure you have Bedrock access and model permissions configured in your AWS account.
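A quick sanity check that all three variables reached your shell (if the AWS CLI is installed, `aws sts get-caller-identity` is a stronger test of the credentials themselves):

```shell
# Confirm each variable is exported in the shell that will launch Apex.
for var in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_REGION; do
  eval "val=\${$var:-}"
  if [ -n "$val" ]; then
    echo "ok: $var"
  else
    echo "missing: $var"
  fi
done
```

Note that this only verifies the variables are set, not that the credentials are valid or that Bedrock model access has been granted.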

Local Models (vLLM)

Run models locally using vLLM for privacy and cost savings.

Setup:

  1. Start your vLLM server:

    $vllm serve meta-llama/Llama-3-8B-Instruct --port 8000
  2. Configure Apex:

    $export LOCAL_MODEL_URL="http://localhost:8000/v1"
  3. In Apex, use /models and enter your model name in the “Custom local model (vLLM)” input
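Because vLLM exposes an OpenAI-compatible API, you can smoke-test the endpoint before selecting it in Apex. This sketch assumes the server URL and model name from the steps above; adjust both to your setup:

```shell
# Ask the local server for a short completion. The "model" field must
# match the model name passed to `vllm serve`.
curl -s "${LOCAL_MODEL_URL:-http://localhost:8000/v1}/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/Llama-3-8B-Instruct",
       "messages": [{"role": "user", "content": "Reply with OK"}],
       "max_tokens": 5}'
```

A JSON response with a `choices` array indicates the server is ready for Apex.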

Benefits:

  • Privacy: Data never leaves your infrastructure
  • Cost: No per-token API charges
  • Control: Full control over model and infrastructure

Local models may have reduced performance compared to hosted Claude models. Ensure your model has sufficient capabilities for security testing.

Selecting a Model

In the models interface:

  1. Open Models: Run /models to open the model selection interface
  2. Choose Provider: Select your AI provider (Anthropic, OpenAI, AWS Bedrock, or vLLM)
  3. Select Model: Choose the specific model from the available options
  4. Configure (if needed): For vLLM, enter your custom model name
  5. Save: Confirm your selection to use the model for testing

Model Comparison

Claude 3.5 Sonnet (Recommended)

  • Best for: All penetration testing
  • Speed: Fast
  • Capability: Excellent
  • Cost: Moderate

Claude 3 Opus (Maximum Capability)

  • Best for: Complex, thorough tests
  • Speed: Moderate
  • Capability: Maximum
  • Cost: Higher

GPT-4 Turbo (Alternative Option)

  • Best for: OpenAI users
  • Speed: Fast
  • Capability: Very Good
  • Cost: Moderate

Local vLLM (Privacy First)

  • Best for: On-premise testing
  • Speed: Variable
  • Capability: Depends on model
  • Cost: Infrastructure only

Performance Considerations

Model performance impacts:

  • Reasoning quality: How well the AI understands security concepts
  • Context handling: Ability to track complex test scenarios
  • Speed: Time to complete testing runs
  • Cost: Per-token or infrastructure costs

When to Use Each Model

Use Case                      Recommended Model
Production security audits    Claude 3.5 Sonnet or Opus
Quick development testing     Claude 3 Haiku
Enterprise with AWS           Claude via Bedrock
Air-gapped environments       Local vLLM
Budget-conscious testing      Local vLLM or GPT-3.5

Troubleshooting

Issue: No models showing in the selection

Solution: Ensure you’ve set the appropriate API key environment variable:

$export ANTHROPIC_API_KEY="your-key"

Restart Apex after setting the environment variable.

Issue: Cannot connect to local vLLM server

Solution:

  1. Verify vLLM server is running: curl http://localhost:8000/v1/models
  2. Check the URL is correct: echo $LOCAL_MODEL_URL
  3. Ensure no firewall is blocking the connection
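The checks above can be combined into one connectivity probe (assumes `curl` is available; the default URL mirrors the setup section):

```shell
# Probe the vLLM endpoint and report whether it answered.
URL="${LOCAL_MODEL_URL:-http://localhost:8000/v1}"
echo "checking: $URL"
if curl -sf --max-time 5 "$URL/models" > /dev/null; then
  echo "reachable: vLLM server answered at $URL/models"
else
  echo "unreachable: is the server running and the port open?"
fi
```

If the probe fails but the server is running, check for a firewall rule or a mismatch between the port passed to `vllm serve` and the one in LOCAL_MODEL_URL.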

Issue: Authentication failed for API provider

Solution:

  1. Verify API key is correct and not expired
  2. Check your account has credits/access
  3. For AWS, ensure IAM permissions are correct