Architecture
How NOBA works under the hood.
Stack
- Backend: FastAPI (Python 3.10+), 57 API routers, 490+ route decorators
- Frontend: Vue 3 + Vite + Pinia, Chart.js
- Database: SQLite WAL (default), PostgreSQL, or MySQL/MariaDB — zero code changes to switch
- Agents: Zero-dependency Python zipapp (.pyz), 42 command types
- Telemetry: Server-Sent Events for real-time browser streaming
- Tests: 192 test files, 3,900+ tests passing
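The "zero code changes" database switch can be illustrated with a small sketch. NOBA's actual configuration layer is not shown in these docs; assuming a SQLAlchemy-style connection URL, only the URL's scheme changes between backends:

```python
from urllib.parse import urlparse

def backend_for(url: str) -> str:
    """Return the database backend implied by a connection URL.
    Illustrative only -- the URL values below are hypothetical examples."""
    scheme = urlparse(url).scheme.split("+")[0]  # "mysql+pymysql" -> "mysql"
    return {
        "sqlite": "SQLite (WAL)",
        "postgresql": "PostgreSQL",
        "mysql": "MySQL/MariaDB",
    }[scheme]

print(backend_for("sqlite:///noba.db"))             # SQLite (WAL)
print(backend_for("postgresql://noba@db/noba"))     # PostgreSQL
print(backend_for("mysql+pymysql://noba@db/noba"))  # MySQL/MariaDB
```

The application code that consumes the connection stays identical; only the configured URL differs per deployment.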
Data Flow
Browser  <──SSE──>  FastAPI Server  <──WebSocket──>  Remote Agent
   │                     │                                │
xterm.js            SQLite /                        Local psutil
Remote Desktop      PostgreSQL /                    + /proc direct
AI Chat Panel       MySQL                           + integrations
                         │
                  Integration APIs             Agent Capabilities:
                  (Proxmox, Docker, K8s,       42 commands, 3 risk tiers
                  Pi-hole, UniFi, etc.)        Remote desktop (Wayland/X11/Win)
                         │                     Embedded terminal (PTY)
                  Healing Engine               File transfer (50 MB, SHA-256)
                  (6-layer pipeline)           Self-update & self-heal
Deployment Boundary
NOBA is deployed into your environment. The server owns the database, integrations, audit log, automation state, and agent control plane. Remote agents connect back to the server over encrypted WebSocket with polling fallback; operators use the browser UI against the server.
External network access is only needed for the services you configure: monitored integrations, optional identity providers, optional LLM providers, optional notification/SIEM/OTel targets, and the updater checking the public release manifest. The built-in AI assistant is optional and can use a local Ollama endpoint.
Release & Updater Path
NOBA Enterprise releases are built from tagged releases in the enterprise repository. The release workflow publishes packages to GitHub Releases and uploads tarballs/packages into Cloudflare R2 under versioned prefixes.
The public website serves those artifacts from R2 at /download/*. The app updater reads https://www.nobacmd.com/download/latest/version.json, then downloads the tarball path declared in that manifest. The server-side updater rejects unsafe manifest paths, extracts the tarball into a temporary directory, verifies the expected NOBA web layout, and applies the update with backup/rollback handling.
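The manifest-path check mentioned above can be sketched with a stdlib-only allow-list validator. The exact rules NOBA applies are not published; this is an illustrative version that rejects absolute paths, traversal segments, and unexpected characters before the tarball is downloaded:

```python
import re

def safe_tarball_path(path: str) -> bool:
    """Reject unsafe tarball paths declared in version.json.
    Illustrative allow-list check, not NOBA's actual implementation."""
    return (
        not path.startswith("/")                        # no absolute paths
        and ".." not in path.split("/")                 # no traversal segments
        and re.fullmatch(r"[A-Za-z0-9._/-]+", path) is not None  # plain chars only
    )

print(safe_tarball_path("download/1.2.3/noba-1.2.3.tar.gz"))  # True
print(safe_tarball_path("../../etc/passwd"))                   # False
```

Validating before extraction matters because the manifest is fetched over the network: a compromised or corrupted manifest must not be able to direct writes outside the update directory.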
Validation Boundary
Core migration flows have been exercised in a real remote lab spanning two locations, WAN links, four Proxmox hosts, four AD environments, and an Azure development tenant. The validation topology document records that coverage, along with the customer-scale validation still being sought during beta.
Security Model
Authentication & Directory Integration
- AD Sync — ongoing sync from Azure AD / Entra ID or on-prem LDAP
- AD Migration — 7-step wizard for one-time directory imports
- AD Acquisition — 8-step M&A merge wizard with conflict resolution
- WebAuthn (FIDO2) passwordless login with passkeys
- SAML 2.0 SSO (IdP + SP initiated, signed AuthnRequests)
- OIDC / OAuth (Google, GitHub, Microsoft, custom providers)
- LDAP / Active Directory
- SCIM 2.0 automatic user provisioning
- TOTP 2FA and social login
OWASP Compliance
- HSTS, COOP, COEP, CORP, X-Frame-Options DENY
- DOMPurify sanitization on all v-html bindings
- PBKDF2 with 600,000 iterations (auto-upgrades from older hashes)
- Per-IP and per-user rate limiting with automatic lockout
- Clear-Site-Data on logout (cache, cookies, storage)
- Error message sanitization — internal details never exposed
- Fernet encryption for all stored secrets (OAuth client secrets, API keys)
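The documented PBKDF2 work factor can be sketched with the standard library alone. NOBA's storage format (salt length, digest encoding, upgrade detection) is not published here; this sketch only shows the 600,000-iteration hashing and constant-time verification:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # matches the documented work factor

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a PBKDF2-HMAC-SHA256 hash; salt length is an assumption."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("s3cret")
print(verify("s3cret", salt, digest))  # True
print(verify("wrong", salt, digest))   # False
```

The auto-upgrade behavior mentioned above would typically rehash at the new iteration count on the next successful login, since the plaintext is only available then.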
Authorization
- Viewer: read-only dashboard access
- Operator: low- and medium-risk commands, AI chat
- Admin: full access, including high-risk commands and user management
All medium- and high-risk actions are audit-logged with username, IP, timestamp, and parameters.
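The role-to-risk-tier mapping above can be captured in a few lines. This is a hedged sketch of the documented policy, not NOBA's enforcement code:

```python
# Documented roles mapped to the command risk tiers they may execute.
ALLOWED_TIERS = {
    "viewer": set(),                    # read-only, no command execution
    "operator": {"low", "medium"},
    "admin": {"low", "medium", "high"},
}

def authorize(role: str, risk: str) -> bool:
    """Return True if the role may run a command of the given risk tier.
    Unknown roles get no access (fail closed)."""
    return risk in ALLOWED_TIERS.get(role, set())

print(authorize("operator", "medium"))  # True
print(authorize("operator", "high"))    # False
print(authorize("viewer", "low"))       # False
```

Failing closed on unknown roles mirrors the audit-first posture: a misconfigured role denies execution rather than silently granting it.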
Operational Evidence
Enterprise surfaces are designed around auditability: approval decisions, high-risk commands, self-healing activity, identity changes, security posture, drift, retention, and compliance-oriented exports are all visible from the control plane. The Trust Center summarizes detected or configured license, identity, backup, integration, SSO/SCIM, and export state; audit and maintenance remain linked as evidence and control workflows rather than being scored as readiness by default.
Self-Healing Engine
Six-layer pipeline:
- Correlation: Group related alerts across agents and services
- Dependency Analysis: Map impact using service topology
- Planning: Select actions based on risk tier and cooldowns
- Execution: Dispatch locally or via agent WebSocket
- Verification: Re-collect metrics, confirm fix applied
- Learning: Record outcome for future decisions
Actions execute autonomously for low-risk scenarios. High-risk actions are gated by role. Maintenance windows suppress healing during planned work.
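The six layers above compose naturally as a pipeline. The skeleton below is illustrative; the function names, `Incident` shape, and call signatures are assumptions, not NOBA's internal API:

```python
from dataclasses import dataclass, field

@dataclass
class Incident:
    alerts: list
    plan: list = field(default_factory=list)
    verified: bool = False

def heal(incident, *, correlate, analyze, plan, execute, verify, learn):
    """Run one incident through the six documented layers in order."""
    groups = correlate(incident.alerts)            # 1. group related alerts
    impact = analyze(groups)                       # 2. map topology impact
    incident.plan = plan(impact)                   # 3. pick actions by risk/cooldown
    results = [execute(a) for a in incident.plan]  # 4. dispatch locally or via agent
    incident.verified = verify(results)            # 5. re-collect metrics, confirm fix
    learn(incident)                                # 6. record outcome for next time
    return incident
```

Keeping each layer a plain callable makes the gating rules (risk tiers, maintenance windows) testable in isolation: they live inside `plan` and `execute` without touching correlation or learning.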
Remote Agents
Each remote site runs a single agent packaged as a Python zipapp with zero external dependencies; it requires only Python 3.6+ on Linux, Windows, or macOS.
- Connectivity: Primary WebSocket with HTTP polling fallback
- Heartbeat: Every 30 seconds (configurable 5–3600s)
- Reconnect: Exponential backoff from 5s to 60s max
- Remote Desktop: Wayland (Mutter D-Bus + PipeWire), X11 (XTest), Windows (GDI), macOS (CoreGraphics). JPEG/PNG encoding, adjustable quality (10–100) and FPS (1–30). Multi-viewer fan-out.
- Terminal: Full PTY sessions via xterm.js. Restricted shells for non-admin roles.
- File Transfer: Chunked uploads up to 50 MB with SHA-256 verification. Automatic backups before overwrites.
- Self-Update: Atomic replacement with SHA-256 digest comparison. Automatic systemd restart.
- Security: Path traversal prevention, symlink resolution, deny-lists for sensitive files, dangerous group protection.
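The reconnect policy above (exponential backoff from 5 s to a 60 s ceiling) is a classic doubling schedule. A minimal sketch, assuming plain doubling with no jitter since the docs state only the bounds:

```python
def backoff_delays(base: float = 5.0, cap: float = 60.0):
    """Yield reconnect delays: 5, 10, 20, 40, 60, 60, ... seconds.
    Production implementations often add random jitter; whether NOBA's
    agent does is not stated in the docs."""
    delay = base
    while True:
        yield delay
        delay = min(cap, delay * 2)

delays = backoff_delays()
print([next(delays) for _ in range(6)])  # [5.0, 10.0, 20.0, 40.0, 60.0, 60.0]
```

The cap keeps a long outage from pushing retry intervals beyond a minute, so agents reappear quickly once the server is reachable again.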
AI Ops Integration
The AI assistant receives a real-time system prompt built from live infrastructure state: fleet status, resource averages, active alerts, and recent incidents. Suggested actions are parsed from LLM responses and rendered as buttons requiring human confirmation before execution.
Supports Anthropic (Claude), OpenAI, Ollama (local), and any OpenAI-compatible endpoint. Off by default.
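Assembling the live-state system prompt can be sketched as a simple formatter. The field names and template below are assumptions for illustration; NOBA's real prompt structure is not published:

```python
def build_system_prompt(fleet: dict) -> str:
    """Render live infrastructure state into an LLM system prompt.
    The dict keys here ("online", "cpu_avg", ...) are hypothetical."""
    lines = [
        f"Agents online: {fleet['online']}/{fleet['total']}",
        f"Avg CPU: {fleet['cpu_avg']:.0f}%  Avg RAM: {fleet['ram_avg']:.0f}%",
        "Active alerts: " + (", ".join(fleet["alerts"]) or "none"),
    ]
    return "You are NOBA's ops assistant.\n" + "\n".join(lines)

print(build_system_prompt({
    "online": 11, "total": 12,
    "cpu_avg": 34.2, "ram_avg": 61.7,
    "alerts": ["disk_full:web01"],
}))
```

Rebuilding the prompt per request (rather than caching it) is what keeps the assistant's suggestions grounded in current fleet state; the human-confirmation buttons then gate any action the LLM proposes.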