Wazuh: When to Stop Fighting and Use the Script
The Goal
Deploy a full Wazuh SIEM stack - manager, indexer, and dashboard - on a dedicated LXC container in the homelab. Centralized log collection, threat detection, file integrity monitoring, and vulnerability scanning for the entire Alliance Fleet.
Sounds straightforward. It wasn't.
Attempt 1: Manual Installation (Failed)
The Wazuh documentation provides a step-by-step manual installation guide. I followed it on a fresh Debian 13 LXC container (CT 110, 192.168.20.30, Node-C).
The Indexer
The Wazuh Indexer (based on OpenSearch) is the first component. It handles log storage and search. The manual install requires:
- Generate SSL certificates for inter-component communication
- Configure the indexer with the cert paths
- Initialize the security index
- Verify the cluster health
This worked. Mostly. The certificate generation script ran, the indexer started, and the health check returned green. One component down, two to go.
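That final health check is a plain curl against the indexer's OpenSearch-compatible API. The host, port, and credentials below are this lab's (assumptions for illustration), and the small helper just pulls the status field out of the JSON response without needing jq:

```shell
# Check indexer cluster health (host and credentials are this lab's; adjust):
#   curl -sk -u admin:"$ADMIN_PW" https://192.168.20.30:9200/_cluster/health

# Extract the "status" field from the health-check JSON body:
health_status() {
  printf '%s' "$1" | grep -o '"status" *: *"[a-z]*"' | cut -d'"' -f4
}

health_status '{"cluster_name":"wazuh-cluster","status":"green","number_of_nodes":1}'
# → green
```

Anything other than `green` (or `yellow` on a single-node setup, where replicas can't be assigned) means stop and fix before moving on.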
The Manager
The Wazuh Manager handles agent communication, rule evaluation, and alert generation. Manual install:
apt-get install wazuh-manager
systemctl enable wazuh-manager
systemctl start wazuh-manager
The service started but immediately hit dependency issues. The manager expected specific versions of libraries that weren't in Debian 13's repos. The documentation was written for Debian 12 and Ubuntu 22.04 - Debian 13 (Trixie) introduced package changes that broke the assumed dependency chain.
Attempted fixes:
- Manually installing the missing libraries → version conflicts
- Pinning package versions → broke other dependencies
- Building from source → rabbit hole with no end
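When chasing mismatches like these, the first question is always "is the version the repo offers actually older than what the package wants?" A minimal pure-shell comparison using GNU `sort -V` (not tied to any Wazuh specifics; the libssl versions are hypothetical):

```shell
# ver_lte A B: succeed (exit 0) if version A is less than or equal to B.
# sort -V does version-aware ordering, so "1.10" correctly sorts after "1.9".
ver_lte() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Example: does a hypothetical required minimum fit what the repo ships?
if ver_lte "3.0.0" "3.0.11"; then echo "ok"; else echo "too old"; fi
# → ok
```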
The Dashboard
Never got here on the manual path. The manager issues consumed all the time budget.
Time spent: ~4 hours. Result: partial, unstable installation.
Attempt 2: Manual Installation in tmux (Failed Differently)
Thinking the issue was Debian 13 specifically, I tried a different approach: install inside a tmux session with more careful dependency management, manually resolving each conflict as it appeared.
tmux new -s wazuh-install
This became a step-at-a-time slog inside tmux: running individual installation commands, checking logs between each one, and patching configs by hand. It felt productive because I was learning the internals of how Wazuh's components communicate. But the end state was the same - a partially functional installation that would break on service restart.
The fundamental problem: the manual installation documentation assumes a clean, supported OS version. Debian 13 wasn't officially supported yet, and the dependency tree had just enough mismatches to make the manual path unreliable.
Time spent: ~3 more hours. Result: same partial installation, now with more frustration.
The Pivot Decision
Seven hours across two attempts, and I had an indexer that worked, a manager that sort-of worked, and no dashboard. The sunk cost fallacy was strong - I'd already learned so much about the internals, surely one more fix would get it working.
That's the trap. The goal was a working SIEM, not a PhD in Wazuh packaging. The manual path had value for understanding the architecture, but it was not going to produce a production-ready deployment on this OS version.
Attempt 3: The Community Script (Worked)
# Destroy the broken container
pct stop 110
pct destroy 110
# Create fresh LXC
# (new CT 110, Debian 12 this time, 4GB RAM, 32GB disk)
# Run the community install script
bash <(curl -s https://packages.wazuh.com/4.x/wazuh-install.sh) -a
The community automation script handles everything:
- Certificate generation for all components
- Indexer installation and initialization
- Manager installation with correct dependencies
- Dashboard installation and configuration
- Inter-component authentication setup
- Service startup and health verification
It ran for about 15 minutes. Every component came up. The dashboard was reachable at https://192.168.20.30.
Username: admin
Password: [generated on install - save immediately]
Time spent: ~20 minutes (including container recreation). Result: fully functional Wazuh stack.
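If you do miss the password output, it can usually be recovered from the install artifacts. On my run, the script left a `wazuh-install-files.tar` in the working directory containing `wazuh-passwords.txt`; the archive name, layout, and field format here are assumptions from that one install, so verify against yours:

```shell
# Pull the admin password back out of the script's artifact tarball.
# (Archive name and file layout observed on one 4.x install; verify yours.)
get_admin_password() {
  tar -xf wazuh-install-files.tar wazuh-install-files/wazuh-passwords.txt -O \
    | grep -A1 "'admin'" \
    | grep password \
    | cut -d"'" -f2
}
```

Still: copy the password into the password manager the moment the script prints it, and treat this as the fallback.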
Why the Script Worked
The script was written by people who hit the same dependency issues. It:
- Pins package versions - installs specific, tested versions of every component rather than relying on whatever the OS repos provide
- Handles certificate generation - creates and distributes the SSL certs that manual installation requires you to manage by hand
- Configures inter-component auth - sets up the API credentials between manager, indexer, and dashboard automatically
- Tests health at each step - verifies each component before proceeding to the next
It's not magic. It's the accumulated knowledge of everyone who tried the manual path and documented the fixes.
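The "tests health at each step" behavior boils down to a retry loop around a check command. A generic sketch of that pattern (the curl in the usage comment is illustrative, not the script's actual check):

```shell
# retry N DELAY CMD...: run CMD up to N times, sleeping DELAY seconds
# between attempts; succeed as soon as CMD does.
retry() {
  n=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$n" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Usage (illustrative): wait for the indexer API to answer before
# installing the next component.
# retry 12 5 curl -skf -o /dev/null https://127.0.0.1:9200
```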
What I Learned From the Manual Attempts
The manual attempts weren't wasted time. They taught me:
- How the components communicate - Manager → Indexer via REST API with mutual TLS. Dashboard → Indexer for search queries. Dashboard → Manager API for agent management.
- What the certificates protect - each component authenticates to the others via SSL client certs. A compromised dashboard can't impersonate the manager.
- Where the config files live - /var/ossec/etc/ossec.conf for the manager, /etc/wazuh-indexer/opensearch.yml for the indexer, /etc/wazuh-dashboard/opensearch_dashboards.yml for the dashboard
- What breaks - dependency version mismatches, certificate path errors, API authentication failures. Knowing these failure modes makes troubleshooting faster when something goes wrong later.
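Knowing the paths also makes a pre-restart sanity check cheap. A small sketch that just verifies each config file exists and is non-empty (paths are passed in as arguments so it's not tied to one layout; the three in the usage comment are from this install):

```shell
# check_configs FILE...: report each config; fail if any is absent or empty.
check_configs() {
  rc=0
  for f in "$@"; do
    if [ -s "$f" ]; then
      echo "ok: $f"
    else
      echo "missing or empty: $f" >&2
      rc=1
    fi
  done
  return "$rc"
}

# check_configs /var/ossec/etc/ossec.conf \
#               /etc/wazuh-indexer/opensearch.yml \
#               /etc/wazuh-dashboard/opensearch_dashboards.yml
```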
Post-Install: Agent Deployment
With the server running, the next step was deploying agents across the fleet. That process had its own set of failures - lsb-release dependencies, sudo not found on Proxmox hosts, and silent enrollment failures from missing firewall rules.
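The "silent enrollment failures from missing firewall rules" class of problem is cheap to rule out from the agent side before digging into logs. A sketch using bash's /dev/tcp (1514 for agent events and 1515 for enrollment are Wazuh's default ports; the manager IP is this lab's):

```shell
# port_open HOST PORT: succeed if a TCP connection can be opened.
# Relies on bash's /dev/tcp; timeout keeps a silently-dropped packet
# (the usual firewall symptom) from hanging the check.
port_open() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# From a would-be agent:
# for p in 1514 1515; do
#   port_open 192.168.20.30 "$p" && echo "reachable: $p" || echo "blocked: $p"
# done
```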
Full agent deployment writeup: Post 012
Current State
Wazuh is fully operational:
- Manager: Running on CT 110 (Node-C), receiving data from 10 enrolled agents
- Indexer: Storing and indexing security events
- Dashboard: Accessible at https://wazuh.tima.dev via NPM + Authentik SSO
- Capabilities: Log analysis, file integrity monitoring, vulnerability detection, SCA benchmarks
The alert pipeline flows through n8n to Discord (Post 020), and the CVE data feeds into the Ollama remediation digest (Post 023).
The Lesson
Use the script first. If your goal is a working system, the script is the right tool. The manual path exists to understand the components - and it's valuable for that. But the manual path is not an efficient route to a production deployment, especially on an OS version that's even slightly outside the documented support matrix.
Corollaries:
- New LXC containers for new attempts. Don't try to fix a broken install in-place. Destroy the container, create a fresh one, start clean. Proxmox makes this trivial.
- Match the documented OS. If the docs say Debian 12, use Debian 12. Debian 13 will be supported eventually. Fighting upstream dependency assumptions is not a productive use of lab time.
- Save the generated credentials immediately. The script outputs the admin password once. If you miss it, you're resetting credentials via the API - doable but annoying.
Related: Post 012 - Wazuh Agent Enrollment | Post 023 - CVE Digest Pipeline | Post 020 - n8n Automation