
Personal Project · Infrastructure Engineering

SJVIK Labs Nexus Home Lab

Personal engineering project · Started 2024 · Production · Continuously Evolving

This is not an employer project. The Nexus lab is a personal engineering environment built to apply infrastructure, AI systems, and security operations discipline on private hardware — running 24/7 with real operational constraints. Every service, version, IP, and change is git-versioned. Every deployment goes through a review gate. The lab is the proof of work.

3-node Proxmox cluster — LXC-isolated services throughout

Cluster leader · Always-on

nx-core-01

192.168.10.11 · Proxmox VE 9.1.6
  • LXC 100 · AdGuard Home — private DNS, port 3000 admin
  • LXC 101 · OpenClaw gateway — Brad (main), Telegram, Slack
  • LXC 102 · Syncthing — Obsidian vault backup
  • LXC 103 · LiteLLM proxy — offline Claude Code bridge
AI host · Always-on

nx-ai-01

192.168.10.22 · Proxmox VE 9.1.6
  • LXC 200 · ollama-ai — CPU inference (qwen2.5 7b/14b, coder 7b/14b)
  • LXC 201 · claw-ai — OpenClaw ops worker (Oscar)
Store host · Always-on

nx-store-01

192.168.10.13 · Proxmox VE 9.1.6
  • LXC 210 · claw-store — OpenClaw store worker (Adam)
Workstation · On-demand

nx-pc-01

192.168.10.250 · RTX 3080

GPU Ollama inference node. Provides the highest-quality local model output for tasks that benefit from the RTX 3080. Models unload after 30 minutes of idle time (OLLAMA_KEEP_ALIVE=30m) to conserve VRAM. LAN-accessible at :11434.
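The idle-unload window is set as an environment variable on the Ollama service. On a systemd host that is usually a drop-in override; the path and unit name below are the stock Ollama defaults, shown as a sketch:

```ini
# /etc/systemd/system/ollama.service.d/override.conf  (sketch; stock unit name assumed)
[Service]
# Listen on all interfaces so the LAN can reach :11434, not just loopback
Environment="OLLAMA_HOST=0.0.0.0:11434"
# Unload idle models after 30 minutes to free the RTX 3080's VRAM
Environment="OLLAMA_KEEP_ALIVE=30m"
```

After a `systemctl daemon-reload && systemctl restart ollama`, `ollama ps` shows the remaining keep-alive window for each loaded model.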

GPU models loaded

qwen2.5-coder:14b qwen2.5:14b qwen2.5-coder:7b qwen2.5:7b

26 GB total model footprint

Cluster configuration

All three server nodes form the nexus Proxmox cluster, using the kronosnet (knet) transport with secure authentication (secauth) enabled. With three quorum votes, the cluster tolerates the loss of any single node without losing quorum. The cluster provides centralized management, live migration capability, and unified monitoring across all three hosts. SSH mesh access is key-authenticated across all nodes and LXC containers — no password authentication anywhere in the lab.

Proxmox VE 9.1.6 Kernel 6.17.13-1-pve knet networking secauth enabled SSH key mesh LXC isolation
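For orientation, this is roughly what the totem stanza of a Proxmox-managed corosync config looks like with knet and secauth on. Node names come from the lab; the rest is the default shape PVE generates, not the lab's actual file:

```conf
# /etc/pve/corosync.conf (fragment, illustrative)
totem {
  cluster_name: nexus
  config_version: 1
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
```

Proxmox manages this file cluster-wide; edits go through `pvecm` or the synced `/etc/pve` filesystem rather than direct writes on one node.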

What's running and why each service exists

NETWORK SECURITY · ADGUARD HOME · LXC 100

Private DNS with ad and tracker blocking

AdGuard Home replaces ISP DNS for the entire lab network. All DNS queries are resolved locally first — no external DNS provider sees the lab's internal query traffic. Blocklists filter ad and tracker domains across every device on the network. This is the first layer of the lab's zero-trust design: DNS is a control plane that should be owned, not delegated to a third party.
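The upstream side of that design is a few lines of AdGuard Home configuration. The stanza below is a sketch of the relevant section of AdGuardHome.yaml; the upstream resolvers shown are examples, not the lab's actual choices:

```yaml
# AdGuardHome.yaml (fragment, illustrative): resolve locally, encrypt what leaves
dns:
  upstream_dns:
    - tls://1.1.1.1      # example DNS-over-TLS upstream; any DoT/DoQ resolver works
  bootstrap_dns:
    - 9.9.9.9            # plain-DNS bootstrap used only to resolve the upstream's name
```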

AI GATEWAY · OPENCLAW · LXC 101

Multi-agent AI orchestration — Brad, Oscar, Adam

OpenClaw is the AI gateway and multi-agent orchestration system. Three agents run 24/7: Brad (main gateway, gpt-4.1-mini primary with Ollama fallback chain, Telegram and Slack interfaces), Oscar (ops worker on nx-ai-01, qwen2.5-coder:14b primary), and Adam (store worker on nx-store-01, qwen2.5-coder:14b primary). Worker nodes connect to the gateway via SSH tunnel. The entire system runs on private hardware with no mandatory cloud dependency — Ollama models serve as the fallback for OpenAI downtime. This is a production AI assistant system, not a demo.
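The fallback chain reduces to "walk the candidates in priority order, take the first healthy one." A minimal shell sketch of that selection logic, with the health probe stubbed out (in practice it would be an API liveness check, and OpenClaw's real routing is its own code):

```shell
#!/bin/sh
# Candidate names mirror the lab's chain; is_up is a stub for illustration.
is_up() {
  # Stub: pretend the hosted model is unreachable and the local ones are fine
  [ "$1" != "openai:gpt-4.1-mini" ]
}

pick_model() {
  for m in "$@"; do
    if is_up "$m"; then printf '%s\n' "$m"; return 0; fi
  done
  return 1   # nothing healthy: caller surfaces an error instead of hanging
}

pick_model openai:gpt-4.1-mini ollama:qwen2.5-coder:14b ollama:qwen2.5:7b
# → ollama:qwen2.5-coder:14b
```

The same shape covers OpenAI downtime (hosted entry fails the probe) and full offline operation (only the Ollama entries remain).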

VAULT BACKUP · SYNCTHING · LXC 102

Obsidian vault synchronization

Syncthing provides continuous, encrypted peer-to-peer sync for the Obsidian knowledge vault across all devices. No cloud intermediary — files sync directly between devices via the Syncthing protocol. The LXC on nx-core-01 acts as the always-on relay node, ensuring vault state is preserved even when mobile devices are offline. Version history is maintained on the sync node.
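Version history on the relay is a per-folder setting in Syncthing's config. A fragment of what that looks like in the relay node's config.xml; folder id, path, and retention are assumptions for illustration:

```xml
<!-- config.xml on the relay LXC (fragment, illustrative) -->
<folder id="vault" label="obsidian-vault" path="/data/vault" type="sendreceive">
    <versioning type="staggered">
        <param key="maxAge" val="31536000"/>  <!-- keep staggered versions up to one year -->
    </versioning>
</folder>
```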

OFFLINE AI BRIDGE · LITELLM · LXC 103

LiteLLM proxy — Claude Code offline bridge

LiteLLM runs as a proxy that bridges Claude Code to local Ollama models. When external API access is unavailable or undesirable, Claude Code routes through LiteLLM to Ollama inference on the GPU workstation or CPU Ollama node. This makes the development environment functional with zero internet dependency — the proxy presents an OpenAI-compatible API surface that Claude Code can target directly.
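Mapping an Ollama backend behind LiteLLM's OpenAI-compatible surface takes a few lines of its proxy config. A sketch, with the GPU workstation as the backend; the lab's actual model list and routing may differ:

```yaml
# litellm config.yaml (sketch): expose local Ollama models via an OpenAI-style API
model_list:
  - model_name: qwen2.5-coder:14b
    litellm_params:
      model: ollama/qwen2.5-coder:14b          # LiteLLM's Ollama provider prefix
      api_base: http://192.168.10.250:11434    # GPU workstation (nx-pc-01)
```

Started with `litellm --config config.yaml`, the proxy serves `/v1/chat/completions`, which is the surface Claude Code targets as if it were OpenAI.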

MONITORING · LAB CONTROL CENTER · TYPESCRIPT/NODE.JS

Auth-protected real-time dashboard

The Lab Control Center is a TypeScript/Express application running on the lab with session-authenticated access. It provides live node health data via CPU and memory arc gauges, section status badges showing the operational state of each service, and an activity timeline of recent lab events. Data is collected by shell scripts running on a cron cycle, fed into the dashboard backend, and surfaced in the UI with 30-second auto-refresh. The dashboard makes the lab's operational state visible at a glance — no SSH required for routine health checks.
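Each collector in that cron cycle is small: read a metric, emit a JSON snapshot the backend picks up. A sketch of one such collector; the output path and JSON shape are assumptions, the dashboard's real schema lives with its TypeScript backend:

```shell
#!/bin/sh
# Cron-scheduled node-health collector (illustrative).
OUT=${1:-/tmp/node-health.json}
load1=$(cut -d' ' -f1 /proc/loadavg)                      # 1-minute load average
mem_total=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)   # kB
mem_avail=$(awk '/^MemAvailable/ {print $2}' /proc/meminfo)
printf '{"host":"%s","load1":%s,"mem_used_kb":%d}\n' \
  "$(hostname)" "$load1" "$((mem_total - mem_avail))" > "$OUT"
```

Run every minute from cron, this keeps the gauges within one refresh cycle of reality without any agent running on the monitored host.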

Applied engineering discipline across five domains

📋

Infrastructure as Code

Every service, version, IP address, and configuration decision is documented in git-versioned state files. The repo is the source of truth — not memory, not tribal knowledge.
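Concretely, a state file of that kind might look like the sketch below. The shape is hypothetical, invented here for illustration; the real schema lives in the lab repo:

```yaml
# state/nodes/nx-core-01.yml (hypothetical shape)
node: nx-core-01
ip: 192.168.10.11
pve: 9.1.6
containers:
  - { id: 100, name: adguard,   role: dns }
  - { id: 101, name: openclaw,  role: ai-gateway }
  - { id: 102, name: syncthing, role: vault-backup }
  - { id: 103, name: litellm,   role: offline-bridge }
```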

🤖

AI Systems Integration

Multi-agent orchestration running on private hardware. No mandatory cloud dependency. Fallback chains ensure continuity when external APIs are unavailable.

🔒

Zero-Trust Architecture

LXC isolation: one service per container, no shared namespaces. Private DNS. SSH key authentication only. Services never run on the hypervisor host OS.
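On Proxmox, one-service-per-container shows up directly in each container's config file. An illustrative sketch of what an unprivileged LXC definition looks like; resource sizes, bridge, and addressing here are assumptions, not the lab's values:

```conf
# /etc/pve/lxc/100.conf (illustrative)
arch: amd64
cores: 1
hostname: adguard
memory: 512
net0: name=eth0,bridge=vmbr0,firewall=1,ip=dhcp
ostype: debian
unprivileged: 1
```

`unprivileged: 1` maps container root to an unprivileged host UID, so even a compromised service cannot act as root on the hypervisor.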

📓

Operational Discipline

Change log entry for every infrastructure change. Runbooks with explicit rollback gates. Review-first deployment flow. No undocumented changes.

🛡️

Applied Security Engineering

Private DNS keeps internal query traffic off ISP resolvers. VLAN-segmented LAN. Security-first configuration baseline across all LXC containers and host nodes.

📊

Observability

Live dashboard with real metric collection, not mocked data. Arc gauges, status badges, and activity timeline driven by cron-scheduled shell collectors.

Full stack of tools in active production use

Proxmox VE 9.1.6 LXC OpenClaw AI Gateway Ollama TypeScript Node.js Python Bash AdGuard Home Syncthing LiteLLM Tailscale Git RTX 3080 qwen2.5-coder:14b SSH key mesh