NPM Died, So I Rebuilt It - And Migrated DNS to Cloudflare in the Same Weekend
The Problem
Nginx Proxy Manager stopped accepting logins. Resetting the admin password via SQLite failed, the bcryptjs module wasn't on Node's module path, and the database held remnants of a botched install. I spent an hour trying to fix it before realizing the whole container had been broken since a bad initial deployment.
The right move was to nuke everything and start clean - which turned into a weekend project that also migrated my entire DNS infrastructure to Cloudflare with wildcard SSL for *.tima.dev.
What Went Wrong With the Original Install
The NPM LXC container (CT 101, 192.168.1.101) was deployed via the community-scripts Proxmox helper. On the surface it looked fine - the container ran, the web UI loaded. But under the hood:
SQLite NOW() Bug
The community script version at the time had a known SQLite compatibility issue. SQLite doesn't support NOW() - it uses datetime('now'). The install script's seed data used NOW(), which silently inserted null timestamps into the auth table.
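The mismatch is easy to reproduce from any shell with the `sqlite3` CLI installed (a quick illustration, not taken from the install script itself):

```shell
# SQLite has no NOW() function -- this fails with "no such function: NOW"
sqlite3 :memory: "SELECT NOW();"

# The SQLite-native equivalent works fine:
sqlite3 :memory: "SELECT datetime('now');"
```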
Password Reset Failed
When I tried to reset the admin password via SQLite:
```bash
sqlite3 /data/database.sqlite
```

```sql
DELETE FROM auth WHERE user_id=1;
INSERT INTO auth (created_on, modified_on, user_id, type, meta, secret)
VALUES (datetime('now'), datetime('now'), 1, 'password', '{}',
        '$2y$10$zN.cU/mxem4cMSomTFk9.uXDpgDMpF9MWxTPP3QqEV7xyQZBKNjGy');
```
The bcrypt hash was for changeme, but when I tried to generate a new one:
```bash
node -e "const bcrypt = require('bcryptjs'); bcrypt.hash('changeme', 10, (err, hash) => console.log(hash));"
# Error: Cannot find module 'bcryptjs'
```
The module existed in NPM's app directory (/app/node_modules/bcryptjs), not in the global path. You could fix it with the full path:
```bash
node -e "const bcrypt = require('/app/node_modules/bcryptjs'); bcrypt.hash('changeme', 10, (err, hash) => console.log(hash));"
```
But at this point the install was clearly beyond salvage: database in a non-standard location, broken Node.js module paths, a corrupted auth table.
The Rebuild
Step 1: Destroy the Container
```bash
# From the Proxmox host
pct stop 101
pct destroy 101
```
Step 2: Fresh Install via Community Scripts
```bash
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/nginx-proxy-manager.sh)"
```
The latest version of the script fixes the SQLite compatibility issue. Clean install, clean database, setup wizard works on first access at http://192.168.1.101:81.
Step 3: Initial Configuration
Default credentials:
- Email: `admin@example.com`
- Password: `changeme`
First login forces a password change and email update. Create your real admin account immediately and delete the default.
The Cloudflare DNS Migration
With a fresh NPM instance running, I migrated DNS from the previous setup to Cloudflare. The goal: wildcard SSL for *.tima.dev so every service gets HTTPS without individual certificate management.
Cloudflare DNS Records
| Type | Name | Target | Proxy |
|---|---|---|---|
| A | tima.dev | Netlify IP | DNS only |
| CNAME | holocron-labs | holocron-labs.ghost.io | Proxied (orange cloud) |
| A | *.tima.dev | (homelab IP via Tailscale) | DNS only |
NPM Wildcard Certificate
In NPM → SSL Certificates → Add Let's Encrypt Certificate:
- Domain Names: `*.tima.dev, tima.dev`
- Use DNS Challenge: Yes
- DNS Provider: Cloudflare
- Credentials: Cloudflare API token with Zone:DNS:Edit permissions
The DNS challenge verifies domain ownership via a TXT record that NPM creates automatically through the Cloudflare API. No ports need to be open - the challenge happens entirely via DNS.
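Under the hood this is a standard ACME DNS-01 issuance. A roughly equivalent standalone run with certbot's Cloudflare plugin (`certbot-dns-cloudflare`) looks like the following; the credentials file path and name are assumptions, not from NPM's config:

```shell
# /root/cloudflare.ini (chmod 600) holds one line:
#   dns_cloudflare_api_token = <token with Zone:DNS:Edit>
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/cloudflare.ini \
  -d '*.tima.dev' -d tima.dev
```

While issuance is in flight, you can watch the `_acme-challenge.tima.dev` TXT record appear with `dig` against any public resolver.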
AdGuard Wildcard Rewrite
In AdGuard Home → Filters → DNS Rewrites:
```
*.tima.dev → 192.168.1.101
```
This catches all *.tima.dev queries from internal devices and routes them to NPM. NPM then proxies to the correct backend service based on the subdomain.
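The same rewrite lives in AdGuard Home's on-disk config; a sketch of the fragment (the section name varies by version -- recent releases keep rewrites under `filtering`, older ones under `dns`):

```yaml
# AdGuardHome.yaml fragment (section name assumed for your version)
filtering:
  rewrites:
    - domain: '*.tima.dev'
      answer: 192.168.1.101
```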
Adding All Services
Each service gets a proxy host in NPM with the same pattern:
| Subdomain | Scheme | Backend IP | Port |
|---|---|---|---|
| grafana.tima.dev | http | 192.168.20.40 | 3000 |
| wazuh.tima.dev | https | 192.168.20.30 | 443 |
| auth.tima.dev | https | 192.168.20.10 | 9443 |
| portainer.tima.dev | https | 192.168.20.10 | 9443 |
| uptime.tima.dev | http | 192.168.20.61 | 3001 |
| n8n.tima.dev | http | 192.168.20.50 | 5678 |
| vault.tima.dev | http | 192.168.20.51 | 80 |
| llm.tima.dev | http | 192.168.20.20 | 3000 |
| home.tima.dev | http | 192.168.20.60 | 3000 |
All use the *.tima.dev wildcard cert with Force SSL enabled.
No port conflicts - even though Authentik and Portainer share 192.168.20.10:9443, NPM differentiates by hostname. That's the entire point of a reverse proxy.
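What NPM generates for the two shared-backend hosts is plain nginx virtual hosting -- a simplified sketch (NPM's real output adds cert paths, proxy headers, and includes):

```nginx
# Both names point at 192.168.20.10:9443; nginx selects the
# server block by SNI/Host header, not by destination IP.
server {
    listen 443 ssl;
    server_name auth.tima.dev;
    location / {
        proxy_pass https://192.168.20.10:9443;
    }
}

server {
    listen 443 ssl;
    server_name portainer.tima.dev;
    location / {
        proxy_pass https://192.168.20.10:9443;
    }
}
```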
Cross-VLAN Routing
NPM sits on VLAN 1 (192.168.1.101). Backend services are on VLAN 20 (192.168.20.x). The UniFi firewall needs an explicit allow rule:
```
Name: Allow NPM to Services
Action: Allow
Source: 192.168.1.101
Destination: VLAN 20 (192.168.20.0/24)
Ports: (all service ports)
Protocol: TCP
```
The existing "Allow Management to All" rule covered this in my setup, but if you're running tighter rules, you'll need explicit allows per service port.
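A tighter version scoped to just the ports used by the proxy hosts above would look like this (same rule format; the rule name is made up, and the port list is the deduplicated set from the services table):

```
Name: Allow NPM to Service Ports
Action: Allow
Source: 192.168.1.101
Destination: 192.168.20.0/24
Ports: 80, 443, 3000, 3001, 5678, 9443
Protocol: TCP
```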
Verification
After adding all 9 proxy hosts, the NPM dashboard showed all services Online with Let's Encrypt SSL.
The full chain: browser → https://grafana.tima.dev → AdGuard resolves to NPM → NPM terminates SSL → NPM proxies to 192.168.20.40:3000 → Grafana login page.
The Lesson
Don't debug a broken install. If the container's foundation is compromised - corrupted database, wrong module paths, seed data failures - the time you spend patching it exceeds the time to destroy and rebuild. Proxmox makes this trivial with LXC containers. Treat them as disposable.
The silver lining: being forced to rebuild NPM pushed me to do the Cloudflare migration and wildcard cert setup I'd been putting off. The result is cleaner than what I had before.
Related: Post 009 - NPM + UniFi Firewall Rule Ordering covers a different NPM debugging session involving inter-VLAN traffic drops.