# AI Engine
A unified AI fabric across every module: 8 providers, 14 specialised methods, configurable temperature and token limits, and per-feature provider overrides.
## Supported providers
| Provider | Default model | API key required |
|---|---|---|
| Vortex (Built-in) | auto (server-side) | No (uses your account) |
| OpenAI | gpt-4o | Yes (sk-proj-…) |
| Anthropic | claude-sonnet-4 | Yes (sk-ant-…) |
| DeepSeek | deepseek-chat | Yes (sk-…) |
| Groq | llama-3.3-70b-versatile | Yes (gsk_…) |
| Ollama (Local) | llama3 | No |
| Custom (OpenAI-compatible) | — | Yes |
| GitHub Copilot | auto | OAuth (your subscription) |
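The "leave blank to use the default" behaviour can be sketched as a small provider registry. This is an illustrative sketch, not the app's actual code; the registry name and function are hypothetical, while the provider names and default models come from the table above.

```python
# Hypothetical registry mirroring the provider table above.
DEFAULT_MODELS = {
    "openai": "gpt-4o",
    "anthropic": "claude-sonnet-4",
    "deepseek": "deepseek-chat",
    "groq": "llama-3.3-70b-versatile",
    "ollama": "llama3",
}

def resolve_model(provider: str, override: str = "") -> str:
    """Return the model to use: an explicit override wins,
    otherwise fall back to the provider's default."""
    if override:
        return override
    if provider in ("vortex", "copilot"):
        return "auto"  # model is chosen server-side for these providers
    if provider == "custom":
        raise ValueError("Custom providers have no default; set a model")
    return DEFAULT_MODELS[provider]
```

For example, `resolve_model("openai")` yields `gpt-4o`, while `resolve_model("groq", "llama-3.1-8b-instant")` honours the override.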
## Specialised methods
Every module hits a tailored entry point that adds the right context (schema, host info, response, attached file):
| Method | Module | Purpose |
|---|---|---|
| `sqlGenerate` | SQL | NL → SQL with schema context |
| `sqlExplain` | SQL | Explain a SQL query |
| `sshGenerate` | SSH | NL → shell command |
| `sshExplain` | SSH | Explain captured terminal output |
| `apiGenerate` | API | NL → request config |
| `apiGenerateCluster` | API | Generate a full cluster from a description |
| `apiGenerateRequest` | API | Generate a single request |
| `apiGenerateClusterFromProject` | API | Extract endpoints from a source repo |
| `apiGenerateResponseDocs` | API | Per-endpoint field docs (source-aware) |
| `apiAnalyze` | API | Analyse a response |
| `ftpAssist` | FTP | File-management advice; generate `.htaccess`, `nginx.conf`, `.env` |
| `emailSummarize` | Email | Summarise a message |
| `emailChat` | Email | Q&A about a specific email |
| `explain` | General | Explain any text/code |
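Each specialised method is, in essence, the generic chat call plus module-specific context. A minimal sketch of that pattern for `sqlGenerate`, with an invented system prompt and a stubbed `chat` callable standing in for the real provider call:

```python
def sql_generate(prompt: str, schema: str, chat=lambda msgs: "SELECT 1;"):
    """Hypothetical sketch: wrap the generic chat call with the
    module's context (here, the database schema). The default `chat`
    is a stub; in practice it would hit the configured provider."""
    messages = [
        {"role": "system",
         "content": f"Translate natural language to SQL.\nSchema:\n{schema}"},
        {"role": "user", "content": prompt},
    ]
    return chat(messages)
```

The other methods follow the same shape, swapping the schema for host info (`sshExplain`), a captured response (`apiAnalyze`), or an attached file.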
## Configuration
- Active provider — set globally, override per feature.
- Model override — leave blank to use the provider default.
- Base URL — required for Ollama and Custom providers.
- Temperature — 0.0 (precise) → 1.0 (creative).
- Max tokens — 256 → 8,192.
- Per-provider API keys — base64-obfuscated inside the AES-256-GCM encrypted config.
- Test connection — verifies each provider before saving.
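Two of these settings can be sketched concretely. The helpers below are illustrative, not the app's actual code: base64 round-tripping of a key (obfuscation only; the surrounding AES-256-GCM encryption of the config file is not shown) and clamping temperature and max tokens to the documented ranges.

```python
import base64

def obfuscate_key(key: str) -> str:
    # Base64 is obfuscation, not encryption: it only keeps keys out of
    # casual view. The config file as a whole is additionally encrypted
    # with AES-256-GCM at rest (that layer is omitted here).
    return base64.b64encode(key.encode()).decode()

def deobfuscate_key(blob: str) -> str:
    return base64.b64decode(blob).decode()

def clamp_settings(temperature: float, max_tokens: int) -> tuple:
    # Clamp to the documented ranges: 0.0-1.0 and 256-8,192.
    return (min(max(temperature, 0.0), 1.0),
            min(max(max_tokens, 256), 8192))
```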
## Usage tracking
The Vortex built-in provider enforces daily/weekly/monthly token quotas server-side, with live progress bars in Settings → AI → Usage. Bring-your-own-key providers are unlimited (you pay your provider directly).
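The quota bookkeeping can be sketched as overlapping daily/weekly/monthly counters. This is a hypothetical illustration of the server-side logic, with invented limits; window resets (midnight, week start, month start) are omitted.

```python
class QuotaTracker:
    """Illustrative sketch of per-window token quotas for the
    built-in provider. Window rollover logic is not shown."""

    def __init__(self, daily: int, weekly: int, monthly: int):
        self.limits = {"day": daily, "week": weekly, "month": monthly}
        self.used = {"day": 0, "week": 0, "month": 0}

    def record(self, tokens: int) -> None:
        # A request counts against every window at once.
        for window in self.used:
            self.used[window] += tokens

    def allowed(self, tokens: int) -> bool:
        # A request must fit within all three windows.
        return all(self.used[w] + tokens <= self.limits[w]
                   for w in self.limits)
```

A request is rejected as soon as any one window would overflow, which is why the Usage screen shows three separate progress bars.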
## What's next
- Provider deep dive — pick the right backend per task.
- Local runtime & embeddings — go fully offline.
- GitHub Copilot integration — reuse your existing subscription.