/models
Overview
The /models command opens the models configuration interface where you can select and configure which AI model Apex uses for penetration testing. Different models have different capabilities, performance characteristics, and costs.
Usage
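Run the command from the Apex prompt:

```
/models
```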
Supported AI Providers
Anthropic Claude (Recommended)
Best performance for penetration testing tasks.
Available Models:
- Claude 3.5 Sonnet (Recommended) - Best balance of speed and capability
- Claude 3 Opus - Maximum capability for complex tests
- Claude 3 Haiku - Fastest, lower cost option
Setup:
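A minimal sketch, assuming Apex reads Anthropic's conventional ANTHROPIC_API_KEY environment variable (the exact variable name Apex expects is not confirmed here):

```bash
# ANTHROPIC_API_KEY is Anthropic's conventional variable name (assumed).
export ANTHROPIC_API_KEY="your-api-key"
```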
Anthropic models are specifically recommended for penetration testing due to their superior reasoning capabilities and context understanding.
OpenAI
Support for GPT models from OpenAI.
Available Models:
- GPT-4 - High capability, slower
- GPT-4 Turbo - Faster, more cost-effective
- GPT-3.5 Turbo - Budget option
Setup:
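Likewise, assuming the conventional OPENAI_API_KEY variable:

```bash
# OPENAI_API_KEY is OpenAI's conventional variable name (assumed).
export OPENAI_API_KEY="your-api-key"
```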
AWS Bedrock
Enterprise-grade AI through AWS infrastructure.
Available Models:
- Anthropic Claude (via Bedrock)
- Other Bedrock-supported models
Setup:
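A sketch using the standard AWS credential environment variables; a configured profile or IAM role works just as well:

```bash
# Standard AWS credential variables; `aws configure` or an IAM role
# is an equivalent alternative.
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_REGION="us-east-1"
```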
Ensure you have Bedrock access and model permissions configured in your AWS account.
vLLM (Local Models)
Run models locally using vLLM for privacy and cost savings.
Setup:
1. Start your vLLM server (see the sketch after this list).
2. Configure Apex to point at the server (also shown below).
3. In Apex, run /models and enter your model name in the “Custom local model (vLLM)” input.
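A minimal sketch of steps 1 and 2, assuming vLLM's OpenAI-compatible server on its default port 8000; the model name is a placeholder, and whether LOCAL_MODEL_URL should include the /v1 suffix depends on how Apex builds requests:

```bash
# Step 1: serve a model with vLLM's OpenAI-compatible API (default port 8000).
# The model name below is a placeholder; use whatever model you intend to run.
vllm serve meta-llama/Meta-Llama-3-8B-Instruct --port 8000

# Step 2: point Apex at the local server before launching it.
export LOCAL_MODEL_URL="http://localhost:8000/v1"
```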
Benefits:
- Privacy: Data never leaves your infrastructure
- Cost: No per-token API charges
- Control: Full control over model and infrastructure
Local models may have reduced performance compared to hosted Claude models. Ensure your model has sufficient capabilities for security testing.
Selecting a Model
The interface lists the models available for your configured providers. Select one to make it the active model for subsequent testing runs.
Model Comparison
Claude 3.5 Sonnet (Recommended)
- Best for: All penetration testing
- Speed: Fast
- Capability: Excellent
- Cost: Moderate
Claude 3 Opus (Maximum Capability)
- Best for: Complex, thorough tests
- Speed: Moderate
- Capability: Maximum
- Cost: Higher
GPT-4 Turbo (Alternative Option)
- Best for: OpenAI users
- Speed: Fast
- Capability: Very Good
- Cost: Moderate
vLLM Local Models (Privacy First)
- Best for: On-premise testing
- Speed: Variable
- Capability: Depends on model
- Cost: Infrastructure only
Performance Considerations
Model performance impacts:
- Reasoning quality: How well the AI understands security concepts
- Context handling: Ability to track complex test scenarios
- Speed: Time to complete testing runs
- Cost: Per-token or infrastructure costs
When to Use Each Model
- Standard penetration tests: Claude 3.5 Sonnet
- Complex, thorough tests: Claude 3 Opus
- Fast, low-cost runs: Claude 3 Haiku or GPT-3.5 Turbo
- OpenAI-based environments: GPT-4 Turbo or GPT-4
- Privacy-sensitive or on-premise testing: local models via vLLM
Troubleshooting
No models available
Issue: No models appear in the selection list
Solution: Ensure you’ve set the appropriate API key environment variable:
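For example, assuming the conventional variable names used above:

```bash
# Set whichever applies to your provider:
export ANTHROPIC_API_KEY="your-api-key"            # Anthropic Claude
export OPENAI_API_KEY="your-api-key"               # OpenAI
export LOCAL_MODEL_URL="http://localhost:8000/v1"  # vLLM (local)
```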
Restart Apex after setting the environment variable.
vLLM connection failed
Issue: Cannot connect to local vLLM server
Solution:
- Verify the vLLM server is running: `curl http://localhost:8000/v1/models`
- Check that the URL is correct: `echo $LOCAL_MODEL_URL`
- Ensure no firewall is blocking the connection
API authentication errors
Issue: Authentication failed for API provider
Solution:
- Verify API key is correct and not expired
- Check your account has credits/access
- For AWS, ensure IAM permissions are correct
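One way to sanity-check AWS access from the same shell, using standard AWS CLI commands:

```bash
# Confirm which identity your credentials resolve to.
aws sts get-caller-identity

# List the Bedrock foundation models visible to that identity
# (requires the bedrock:ListFoundationModels permission).
aws bedrock list-foundation-models --region us-east-1
```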