
AI Engine

A unified AI fabric across all modules — 8 providers, 14 specialised methods, configurable temperature/tokens, and per-feature provider override.

Supported providers

| Provider | Default model | API key required |
| --- | --- | --- |
| Vortex (Built-in) | auto (server-side) | No (uses your account) |
| OpenAI | gpt-4o | Yes (`sk-proj-…`) |
| Anthropic | claude-sonnet-4 | Yes (`sk-ant-…`) |
| DeepSeek | deepseek-chat | Yes (`sk-…`) |
| Groq | llama-3.3-70b-versatile | Yes (`gsk_…`) |
| Ollama (Local) | llama3 | No |
| Custom (OpenAI-compatible) | — | Yes |
| GitHub Copilot | auto | OAuth (your subscription) |
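Since each provider expects a key with a distinct prefix (see the table above), a client-side sanity check is straightforward. A minimal sketch — function names are hypothetical, not the engine's real API:

```python
# Key prefixes taken from the provider table above.
KEY_PREFIXES = {
    "openai": "sk-proj-",
    "anthropic": "sk-ant-",
    "deepseek": "sk-",
    "groq": "gsk_",
}

# Providers that need no API key: Vortex uses your account,
# Ollama runs locally, GitHub Copilot authenticates via OAuth.
KEYLESS = {"vortex", "ollama", "copilot"}

def needs_api_key(provider: str) -> bool:
    return provider not in KEYLESS

def key_looks_valid(provider: str, key: str) -> bool:
    prefix = KEY_PREFIXES.get(provider)
    if prefix is None:
        # Custom (OpenAI-compatible): accept any non-empty key.
        return bool(key)
    return key.startswith(prefix)

print(key_looks_valid("groq", "gsk_abc123"))   # True
print(key_looks_valid("anthropic", "sk-abc"))  # False: expects sk-ant-
```

A prefix check like this only catches obvious paste mistakes; the "Test connection" step below is what actually verifies a key works.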

Specialised methods

Every module hits a tailored entry point that adds the right context (schema, host info, response, attached file):

| Method | Module | Purpose |
| --- | --- | --- |
| sqlGenerate | SQL | NL → SQL with schema context |
| sqlExplain | SQL | Explain a SQL query |
| sshGenerate | SSH | NL → shell command |
| sshExplain | SSH | Explain captured terminal output |
| apiGenerate | API | NL → request config |
| apiGenerateCluster | API | Generate a full cluster from a description |
| apiGenerateRequest | API | Generate a single request |
| apiGenerateClusterFromProject | API | Extract endpoints from a source repo |
| apiGenerateResponseDocs | API | Per-endpoint field docs (source-aware) |
| apiAnalyze | API | Analyze a response |
| ftpAssist | FTP | File-management advice; generate .htaccess, nginx.conf, .env |
| emailSummarize | Email | Summarise a message |
| emailChat | Email | Q&A about a specific email |
| explain | General | Explain any text/code |
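Conceptually, each specialised method is a thin wrapper that injects module-specific context (schema, host info, terminal output) before delegating to one shared provider call. A sketch of that pattern — all names are illustrative, not the engine's actual internals:

```python
def chat(system: str, user: str) -> str:
    # Placeholder for the shared provider call (OpenAI, Ollama, ...).
    return f"[system: {system[:40]}] {user}"

def sql_generate(question: str, schema: str) -> str:
    # NL -> SQL: the schema is injected so the model only uses real tables.
    system = f"Translate natural language to SQL. Available schema:\n{schema}"
    return chat(system, question)

def ssh_explain(terminal_output: str) -> str:
    # Explain captured terminal output.
    system = "Explain this captured terminal output for a sysadmin."
    return chat(system, terminal_output)

reply = sql_generate("list all users", "users(id, name, email)")
print("list all users" in reply)  # True: the question reaches the model
```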

Configuration

  • Active provider — set globally, override per feature.
  • Model override — leave blank to use the provider default.
  • Base URL — required for Ollama and Custom providers.
  • Temperature — 0.0 (precise) → 1.0 (creative).
  • Max tokens — 256 → 8192.
  • Per-provider API keys — base64-obfuscated inside the AES-256-GCM encrypted config.
  • Test connection — verifies each provider before saving.
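The rules above are easy to express in code. A minimal sketch, assuming the stated ranges; the AES-256-GCM layer that encrypts the whole config file is omitted, and only the base64 obfuscation of individual keys is shown (function names are hypothetical):

```python
import base64

def clamp_temperature(t: float) -> float:
    # 0.0 (precise) -> 1.0 (creative)
    return min(max(t, 0.0), 1.0)

def clamp_max_tokens(n: int) -> int:
    # Allowed range: 256 -> 8192
    return min(max(n, 256), 8192)

def requires_base_url(provider: str) -> bool:
    # Base URL is mandatory only for Ollama and Custom providers.
    return provider in {"ollama", "custom"}

def obfuscate_key(key: str) -> str:
    # base64 is obfuscation, not encryption; real protection comes from
    # the AES-256-GCM encryption wrapping the config file.
    return base64.b64encode(key.encode()).decode()

print(clamp_temperature(1.7))    # 1.0
print(clamp_max_tokens(100))     # 256
print(obfuscate_key("sk-demo"))  # c2stZGVtbw==
```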

Usage tracking

The Vortex built-in provider enforces daily/weekly/monthly token quotas server-side, with live progress bars in Settings → AI → Usage. Bring-your-own-key providers are unlimited (you pay your provider directly).
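A server-side quota check across rolling windows can be sketched like this; the limits and names are purely illustrative, not the actual Vortex quotas:

```python
# Hypothetical per-window token limits for the built-in provider.
LIMITS = {"daily": 50_000, "weekly": 250_000, "monthly": 800_000}

class UsageTracker:
    """Tracks token spend and rejects requests that would exceed any window."""

    def __init__(self):
        self.used = {period: 0 for period in LIMITS}

    def can_spend(self, tokens: int) -> bool:
        # A request must fit inside every window simultaneously.
        return all(self.used[p] + tokens <= LIMITS[p] for p in LIMITS)

    def record(self, tokens: int) -> None:
        for p in LIMITS:
            self.used[p] += tokens

tracker = UsageTracker()
tracker.record(49_000)
print(tracker.can_spend(2_000))  # False: would exceed the daily quota
print(tracker.can_spend(500))    # True
```

The progress bars in Settings → AI → Usage would simply render `used[period] / LIMITS[period]` for each window.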
