Post 020
The Problem With "Random Automation"
Most homelab automation falls into two categories: scheduled scripts in cron that hope for the best, or one-off webhooks that fire alerts nobody reads. Neither reflects how production environments actually work.
In a real infrastructure team, automation is part of a pipeline. Something happens. The system detects it. It gathers context. It makes a decision. It acts. It reports what it did. That's the model I built.
The Architecture
n8n runs on Node-B (CR90 Corvette) in the Phoenix-Nest VM. It's the orchestration layer - it doesn't do the work itself, but it coordinates everything that does.
Host: Node-B / Phoenix-Nest VM
IP: 192.168.20.50
Port: 5678
Access: https://n8n.tima.dev (via NPM + Authentik SSO)
The stack underneath it:
| System | Role | Connection |
|---|---|---|
| Wazuh | Security event detection | Webhook → n8n |
| InfluxDB | Metrics storage | HTTP API |
| Grafana | Visualization | Linked dashboards |
| Ollama | Local LLM inference | HTTP API (192.168.20.20:11434) |
| Authentik | Identity / SSO | OAuth integration |
| PostgreSQL | Event logs, application state | Direct connection |
| Discord | Operational alerts (Admiral Ackbar) | Webhook |
| UniFi | Network management | REST API |
| Ghost | Blog CMS | Webhook triggers |
The Core Pattern
Every workflow follows the same execution path:
Detect → Enrich → Decide → Act → Report
This isn't a slogan - it's the actual flow encoded in every n8n workflow:
- Detect - Webhook trigger or scheduled check identifies an event
- Enrich - Pull additional context (threat intel, system metrics, user info)
- Decide - Evaluate thresholds, match conditions, apply logic
- Act - Execute the response (block IP, restart service, update config)
- Report - Post a structured alert to Discord with full context
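Stripped of the n8n node wrappers, the pattern can be sketched as plain functions. The event shape, the `score` field, and the 75-point threshold here are illustrative stand-ins, not the production configuration:

```python
# The five stages as plain functions. Event shape and threshold are
# illustrative, not the real n8n node configuration.

def enrich(event: dict) -> dict:
    """Enrich: attach context (the real pipeline queries AbuseIPDB here)."""
    event.setdefault("score", 0)
    return event

def decide(event: dict, threshold: int = 75) -> bool:
    """Decide: does the event clear the action threshold?"""
    return event["score"] > threshold

def act(event: dict) -> str:
    """Act: execute the response (here just recorded, not performed)."""
    return f"blocked {event['ip']}" if decide(event) else "logged only"

def report(event: dict, action: str) -> dict:
    """Report: build the structured alert destined for Discord."""
    return {"ip": event["ip"], "score": event["score"], "action": action}

# Detect: a webhook delivers the raw payload; the rest is a straight chain.
event = enrich({"ip": "203.0.113.9", "score": 91})
alert = report(event, act(event))
```

The point of writing it this way first: each stage has one input and one output, so each becomes exactly one n8n node.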
Security Automation
IP Enrichment Pipeline
The most complete security workflow. When Wazuh fires a webhook on suspicious network activity:
Wazuh webhook fires
→ n8n extracts source IP from alert payload
→ Query AbuseIPDB for threat intelligence score
→ Evaluate: is confidence score > threshold?
→ Yes: Update UniFi firewall block group via API
→ No: Log and skip
→ Format structured alert with enrichment data
→ Post to Discord (#admiral-ackbar)
The Discord alert includes: source IP, country of origin, AbuseIPDB confidence score, abuse categories, and action taken.
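The enrich and decide legs look roughly like this in Python, assuming AbuseIPDB's v2 `/check` endpoint. The key placeholder and the 75-point cutoff are illustrative - the real key lives in an n8n credential:

```python
import requests

ABUSEIPDB_KEY = "stored-as-n8n-credential"  # placeholder, not a real key
THRESHOLD = 75                              # illustrative confidence cutoff

def lookup_ip(ip: str) -> dict:
    """Enrich: query AbuseIPDB's v2 /check endpoint for one IP."""
    resp = requests.get(
        "https://api.abuseipdb.com/api/v2/check",
        headers={"Key": ABUSEIPDB_KEY, "Accept": "application/json"},
        params={"ipAddress": ip, "maxAgeInDays": 90},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]

def should_block(report: dict, threshold: int = THRESHOLD) -> bool:
    """Decide: block only when the confidence score clears the bar."""
    return report.get("abuseConfidenceScore", 0) > threshold
```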
UniFi API Enforcement
The automated blocking leg uses the UniFi API to manage a firewall group:
Step 1: POST /api/login → session cookie
Step 2: GET /api/s/default/rest/firewallgroup → current block list
Step 3: Append new IP to group_members array
Step 4: PUT /api/s/default/rest/firewallgroup/<group_id> → push update
Step 5: Discord confirmation
Prerequisite: Create a Wazuh-Blocked-IPs IP group in UniFi and a firewall rule that blocks all traffic from that group. The n8n workflow only manages group membership - the blocking rule stays static.
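The four API steps translate to a short script against the classic controller API. The credentials, controller URL, and `verify = False` (self-signed controller cert) are assumptions; the membership helper is the idempotency check described under error handling:

```python
import requests

def updated_members(members: list, ip: str) -> list:
    """Idempotency check: skip IPs already in the block list."""
    return members if ip in members else members + [ip]

def add_to_block_group(base: str, user: str, pw: str,
                       group_id: str, ip: str) -> None:
    s = requests.Session()
    s.verify = False  # assumption: self-signed controller certificate
    # Step 1: POST /api/login establishes the session cookie.
    s.post(f"{base}/api/login",
           json={"username": user, "password": pw}, timeout=10)
    # Step 2: fetch the current firewall groups.
    groups = s.get(f"{base}/api/s/default/rest/firewallgroup",
                   timeout=10).json()["data"]
    group = next(g for g in groups if g["_id"] == group_id)
    # Step 3: append the new IP to group_members (idempotently).
    group["group_members"] = updated_members(group["group_members"], ip)
    # Step 4: push the updated group back.
    s.put(f"{base}/api/s/default/rest/firewallgroup/{group_id}",
          json=group, timeout=10)
    # Step 5 (Discord confirmation) is handled by the next n8n node.
```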
CVE Digest → Ollama
Weekly workflow that pulls vulnerability data from the Wazuh API, sends the CVE list to Ollama for plain-English remediation guidance, and posts a formatted report to Discord:
Schedule trigger (weekly)
→ Query Wazuh Vulnerability API for all High/Critical CVEs
→ Group by host and severity
→ Send CVE list to Ollama (llama3:8b) with prompt:
"For each CVE, provide a one-line plain-English description
and the recommended fix command for Debian."
→ Format as Discord embed with per-host sections
→ Post to Discord (#wazuh-command)
This bridges the SIEM and AI platforms into a single workflow - Wazuh detects, Ollama explains, n8n delivers.
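A rough sketch of the two interesting steps - grouping and the Ollama call - assuming Ollama's `/api/generate` endpoint and a simplified CVE record shape (`host`, `severity`, `id`) rather than the actual Wazuh API response:

```python
import requests
from collections import defaultdict

def group_cves(cves: list) -> dict:
    """Group CVE records by host, Critical sorted before High per host."""
    by_host = defaultdict(list)
    for cve in cves:
        by_host[cve["host"]].append(cve)
    for host in by_host:
        by_host[host].sort(key=lambda c: c["severity"] != "Critical")
    return dict(by_host)

def explain(cve_list: str) -> str:
    """Send the grouped list to the local model for remediation guidance."""
    resp = requests.post(
        "http://192.168.20.20:11434/api/generate",
        json={
            "model": "llama3:8b",
            "stream": False,
            "prompt": ("For each CVE, provide a one-line plain-English "
                       "description and the recommended fix command for "
                       f"Debian:\n{cve_list}"),
        },
        timeout=300,  # local inference on a CVE list is slow; be generous
    )
    resp.raise_for_status()
    return resp.json()["response"]
```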
Authentik Failed Login Digest
Daily summary of failed authentication attempts across all Authentik-protected services:
Schedule trigger (daily 8am)
→ Query Authentik Events API (event_type: login_failed)
→ Group by source IP and application
→ If any single IP has > 10 failures: flag as suspicious
→ Format digest with IP geolocation
→ Post to Discord (#chaincode-identity)
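The flagging step is a simple counting pass. The `client_ip` field name mirrors what Authentik's Events API returns, though the exact payload shape is an assumption here:

```python
from collections import Counter

def flag_suspicious(events: list, limit: int = 10) -> list:
    """Return source IPs with more than `limit` failed logins."""
    counts = Counter(e["client_ip"] for e in events)
    return [ip for ip, n in counts.items() if n > limit]
```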
Infrastructure Automation
Proxmox Health Digest
Morning briefing that hits the Proxmox API across all three nodes:
Schedule trigger (daily 8am)
→ Query each Proxmox node API for CPU, RAM, disk, uptime
→ Calculate cluster-wide utilization
→ Flag any node over 80% on any metric
→ Format as a single Discord message
→ Post to Discord (#holonet-telemetry)
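The threshold check reduces to one comparison per node, assuming the fields Proxmox's `/api2/json/nodes` endpoint returns (`cpu` as a fraction, `mem`/`maxmem` and `disk`/`maxdisk` in bytes):

```python
def flag_hot_nodes(nodes: list, limit: float = 0.80) -> list:
    """Flag any node over the limit on CPU, RAM, or disk utilization."""
    return [
        n["node"] for n in nodes
        if max(n["cpu"], n["mem"] / n["maxmem"], n["disk"] / n["maxdisk"]) > limit
    ]
```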
Docker Image Staleness Check
Weekly check for outdated container images:
Schedule trigger (weekly)
→ Query Portainer API for running containers
→ For each container: compare running image tag to latest on Docker Hub
→ Flag any container more than 2 versions behind
→ Format as update report
→ Post to Discord (#specforce-reports)
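The "two versions behind" test is easy to sketch for simple `major.minor` tags. Real image tags are messier, so treat this as the idea rather than a general version parser:

```python
def is_stale(running: str, latest: str, max_behind: int = 2) -> bool:
    """True when the running tag trails latest by more than max_behind
    minor releases, or by any major release."""
    r = tuple(map(int, running.split(".")))
    l = tuple(map(int, latest.split(".")))
    return r[0] < l[0] or (r[0] == l[0] and l[1] - r[1] > max_behind)
```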
SSL Certificate Expiry Monitor
Schedule trigger (weekly)
→ For each *.tima.dev subdomain: check cert expiry date
→ Alert if any cert is under 30 days from expiry
→ Post to Discord (#shield-gate)
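A single expiry probe needs only the standard library; the weekly workflow loops this over each subdomain:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    """Fetch the leaf certificate and count days until its notAfter date."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def needs_alert(days_left: int, threshold: int = 30) -> bool:
    """Alert when a certificate is inside the renewal window."""
    return days_left < threshold
```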
GPU Platform Automation
Ollama Model Management
!models command via Discord → n8n webhook
→ GET http://192.168.20.20:11434/api/tags
→ Format as table: name, size, modified date
→ Return to Discord
GPU Stats on Demand
!gpu command via Discord → n8n webhook
→ Query InfluxDB: last 5m of nvidia_smi measurements
→ Format: temperature, utilization, VRAM, power
→ Return to Discord
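A sketch of the round-trip, assuming InfluxDB 2.x's `/api/v2/query` Flux endpoint, a `telemetry` bucket, and field names from telegraf's `nvidia_smi` input plugin - all of which would differ on a 1.x deployment:

```python
import requests

# Flux query for the last 5 minutes of GPU samples (bucket name assumed).
FLUX = '''
from(bucket: "telemetry")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "nvidia_smi")
  |> last()
'''

def query_gpu(base: str, token: str, org: str) -> str:
    """Run the Flux query; InfluxDB 2.x answers with annotated CSV."""
    resp = requests.post(
        f"{base}/api/v2/query",
        params={"org": org},
        headers={"Authorization": f"Token {token}",
                 "Content-Type": "application/vnd.flux"},
        data=FLUX,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text  # n8n parses the CSV into per-field values

def format_stats(s: dict) -> str:
    """Render one GPU sample as the single-line Discord reply."""
    return (f"temp {s['temperature_gpu']}°C | util {s['utilization_gpu']}% | "
            f"VRAM {s['memory_used']}/{s['memory_total']} MiB | "
            f"{s['power_draw']} W")
```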
Workflow Statistics
| Category | Count | Trigger Type |
|---|---|---|
| Security | 8 | Webhook + scheduled |
| Infrastructure | 12 | Scheduled |
| Observability | 6 | Webhook |
| AI/GPU | 5 | Webhook (Discord commands) |
| Blog/Portfolio | 3 | Webhook (Ghost) |
| Career/Job Search | 4 | Scheduled |
| Study Automation | 6 | Webhook (Discord) |
| Utility | 14 | Mixed |
| Total | 58 | |
Error Handling
Every workflow includes:
- Try/catch on external API calls - if AbuseIPDB is unreachable, the workflow logs the failure and continues without enrichment
- Discord error reporting - if a workflow fails, it posts the error to #tactical-droid so failures are visible
- Idempotency checks - the IP enrichment pipeline checks if an IP is already in the block list before attempting to add it
What I'd Do Differently
- Start with the pattern, not the tools - the Detect → Enrich → Decide → Act → Report framework applies to every automation. Design the flow first, then build the nodes.
- Export and version control workflows - n8n workflows should live in Git. They're currently exported manually; this should be automated via the n8n API.
- Rate limiting on external APIs - AbuseIPDB has a daily query limit. The workflow should cache recent lookups in PostgreSQL to avoid redundant API calls.
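That lookup cache could be as small as one table and a freshness check. The `ip_cache` schema and the 24-hour TTL here are illustrative, not a built feature:

```python
from datetime import datetime, timedelta, timezone

# Illustrative schema:
#   ip_cache(ip text primary key, score int, checked_at timestamptz)
LOOKUP_SQL = "SELECT score, checked_at FROM ip_cache WHERE ip = %s"
UPSERT_SQL = """
INSERT INTO ip_cache (ip, score, checked_at) VALUES (%s, %s, now())
ON CONFLICT (ip) DO UPDATE SET score = EXCLUDED.score, checked_at = now()
"""

def cache_hit(row, max_age_hours: int = 24) -> bool:
    """Reuse a cached AbuseIPDB score if the row exists and is fresh;
    only a miss triggers a real API call against the daily quota."""
    if row is None:
        return False
    age = datetime.now(timezone.utc) - row["checked_at"]
    return age < timedelta(hours=max_age_hours)
```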
Related: Post 021 - BD-1 Discord Bot | Post 023 - CVE Digest Pipeline | Post 024 - Discord as an Ops Console