Project Post 003


Before Authentik, every service in the Alliance Fleet had its own login. Proxmox, Grafana, Portainer, n8n, Vaultwarden, OpenWebUI, Wazuh. More than a dozen services, each with its own local username and password. Twelve-plus credentials for a single operator.

The same problems I'd seen managing Active Directory for 200+ users at enterprise scale were reproducing in the homelab: password reuse, inconsistent MFA enforcement, and zero visibility into which services were actually being accessed. A homelab running a secrets manager, infrastructure dashboards with write access, and an SSO-capable reverse proxy can't treat authentication as an afterthought.

This post covers how I deployed Authentik as a centralized identity provider for the fleet: the architecture decisions, the integration work, and the debugging that came with it.


What Was Missing

No central authentication. Adding a new service meant creating another local account with another password. No SSO. Each login was independent, each session isolated.

No MFA enforcement. MFA coverage sat around 30%, applied manually per-app where the service happened to support it. There was no way to enforce MFA as a policy across the fleet.

No audit trail. Login events weren't aggregated. Answering "who accessed what, when" meant checking each service's local logs individually, if the service even logged authentication events at all.

No offboarding path. Revoking access to everything meant manually visiting each service and disabling or deleting accounts. Error-prone, slow, and easy to miss one.

The constraint was the same as the rest of the fleet: self-hosted, no cloud identity provider, and operable by one person.


Why Authentik

The solution needed to support OIDC and SAML natively, enforce MFA at the platform level rather than per-app, and provide a single pane of glass for authentication events.

Over Keycloak: Authentik's flow-based policy engine is more intuitive for a single operator. Keycloak is the enterprise standard, but its admin console assumes a dedicated identity team. Authentik ships with sane defaults and a visual flow editor that makes policy changes feel like editing a flowchart, not writing XML.

Over Authelia: Authelia is primarily a forward-auth gateway, not a full identity provider. Its OIDC provider support is still maturing, and it can't serve as a SAML IdP. For services like Grafana and Wazuh that support native SSO, forward auth alone isn't enough.

Over Okta/Auth0: No cloud dependency. The identity layer should survive an internet outage, and homelab credentials shouldn't live on someone else's infrastructure.

OIDC over LDAP: Every service in the stack supports OAuth2/OIDC natively. LDAP would add a protocol translation layer and ongoing maintenance burden without delivering any capability that OIDC doesn't already cover at this scale.


Architecture

Authentik runs on Home One (VM 200) on Node-C as a Docker Compose stack. PostgreSQL and Redis are co-located on the same host. Deliberately. Splitting the identity provider from its database introduces a partial-failure mode where Authentik is up but can't reach its user directory. At homelab scale with a single operator, co-location is the correct call.

User Login → Authentik (192.168.20.10:9000) → OIDC/SAML → Application
                │
                ├── MFA Challenge (TOTP/WebAuthn)
                ├── Policy Evaluation (group, IP, device)
                └── Audit Log → PostgreSQL

The stack:

Container          Image                        Role
authentik-server   ghcr.io/goauthentik/server   Web UI, API, provider endpoints
authentik-worker   ghcr.io/goauthentik/server   Background tasks, outpost management
postgresql         postgres:15                  User directory, flow config, audit logs
redis              redis:alpine                 Session storage, caching
portainer          portainer/portainer-ce       Container management UI

Initial deployment: generate the secret key (openssl rand -hex 32), set environment variables for both server and worker to point at the shared PostgreSQL and Redis instances, docker compose up -d, then hit the setup wizard at https://192.168.20.10:9443/if/flow/initial-setup/. Create a real admin user, disable akadmin.
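A minimal compose sketch of that stack. This is illustrative, not the full file Authentik ships: the version tag, volume name, and `.env` variable names (`PG_PASS`, `AUTHENTIK_SECRET_KEY`) are placeholders, but the `AUTHENTIK_POSTGRESQL__*` and `AUTHENTIK_REDIS__HOST` environment variables are Authentik's real configuration keys.

```yaml
# Illustrative Docker Compose sketch of the Authentik stack.
# Secrets come from a .env file alongside this compose file.
services:
  postgresql:
    image: postgres:15
    environment:
      POSTGRES_DB: authentik
      POSTGRES_USER: authentik
      POSTGRES_PASSWORD: ${PG_PASS}
    volumes:
      - pg-data:/var/lib/postgresql/data

  redis:
    image: redis:alpine

  server:
    image: ghcr.io/goauthentik/server:2024.2   # pin a real release tag
    command: server
    environment:
      AUTHENTIK_SECRET_KEY: ${AUTHENTIK_SECRET_KEY}   # openssl rand -hex 32
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__NAME: authentik
      AUTHENTIK_POSTGRESQL__USER: authentik
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
      AUTHENTIK_REDIS__HOST: redis
    ports:
      - "9000:9000"
      - "9443:9443"
    depends_on: [postgresql, redis]

  worker:
    image: ghcr.io/goauthentik/server:2024.2
    command: worker   # same image, background-task role
    environment:
      AUTHENTIK_SECRET_KEY: ${AUTHENTIK_SECRET_KEY}
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__NAME: authentik
      AUTHENTIK_POSTGRESQL__USER: authentik
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
      AUTHENTIK_REDIS__HOST: redis
    depends_on: [postgresql, redis]

volumes:
  pg-data:
```

Server and worker share one image and one configuration; only the `command` differs.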


Service Integration

Each integrated service gets its own provider in Authentik. The pattern is consistent: create a provider, select the protocol, set client type to Confidential, register the redirect URI, copy the client ID and secret into the service's config.

OIDC Providers

Used for Grafana, n8n, Vaultwarden, OpenWebUI, Proxmox, and Portainer. Each provider exposes the standard endpoints:

Authorization: http://192.168.20.10:9000/application/o/authorize/
Token:         http://192.168.20.10:9000/application/o/token/
User Info:     http://192.168.20.10:9000/application/o/userinfo/
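The first leg of the authorization-code flow is nothing more than a redirect to that authorize endpoint with the client's registration parameters attached. A sketch of how a service builds that URL, with an illustrative `client_id` and a fixed `state` value (a real client generates `state` randomly per request):

```python
from urllib.parse import urlencode

AUTHORIZE = "http://192.168.20.10:9000/application/o/authorize/"

# Illustrative registration values -- the real ones come from the
# provider created in Authentik.
params = {
    "client_id": "grafana",
    "redirect_uri": "http://192.168.20.40:3000/login/generic_oauth",
    "response_type": "code",       # authorization-code flow
    "scope": "openid profile email",
    "state": "random-csrf-token",  # generated per request in practice
}

login_url = f"{AUTHORIZE}?{urlencode(params)}"
print(login_url)
```

The `redirect_uri` in this query string is exactly what Authentik checks against its registered list; a mismatch here is the `redirect_uri_mismatch` error debugged later in this post.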

Grafana's integration is representative. In Authentik, create an OAuth2/OpenID provider with the redirect URI pointing at Grafana's OAuth callback. In grafana.ini, configure the [auth.generic_oauth] section with the client credentials and Authentik's endpoint URLs.
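The resulting grafana.ini block looks roughly like this; the client secret is a placeholder for the value from the Authentik provider, and the keys shown are Grafana's standard generic-OAuth settings:

```ini
[auth.generic_oauth]
enabled = true
name = Authentik
client_id = grafana
client_secret = <secret from the Authentik provider>
scopes = openid profile email
auth_url = http://192.168.20.10:9000/application/o/authorize/
token_url = http://192.168.20.10:9000/application/o/token/
api_url = http://192.168.20.10:9000/application/o/userinfo/
```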

Portainer follows the same pattern through its Settings → Authentication → OAuth panel, with one caveat: the redirect URIs in Authentik must include both the IP address and the domain (https://192.168.20.10:9443/ and https://portainer.tima.dev/).

SAML Provider

Used for the Wazuh Dashboard. Authentik provides the IdP metadata URL; the dashboard is configured with the SP entity ID. Users authenticate through Authentik and land in Wazuh with role mappings applied via Authentik group membership.
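On the Wazuh side this lands in the OpenSearch Security `config.yml`. A simplified sketch of the SAML auth domain; the metadata URL, entity IDs, dashboard URL, and `roles_key` attribute name are illustrative values for this kind of setup, not copied from the actual deployment:

```yaml
# Sketch of the SAML auth domain in OpenSearch Security's config.yml.
saml_auth_domain:
  http_enabled: true
  order: 1
  http_authenticator:
    type: saml
    challenge: true
    config:
      idp:
        metadata_url: http://192.168.20.10:9000/application/saml/wazuh/metadata/
        entity_id: authentik
      sp:
        entity_id: wazuh-saml          # must match the SP entity ID in Authentik
      kibana_url: https://wazuh.example.lan   # the dashboard's own URL
      roles_key: Roles                 # SAML attribute carrying Authentik groups
      exchange_key: <64+ char random string>
  authentication_backend:
    type: noop
```

The `roles_key` attribute is what carries Authentik group membership into Wazuh's role mappings.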

Forward Auth (Catch-All)

Services that don't natively support SSO sit behind Nginx Proxy Manager with Authentik's outpost. The outpost intercepts unauthenticated requests before they reach the application. Each proxy host gets an advanced config block that routes /outpost.goauthentik.io to Authentik and applies auth_request to everything else. This is the safety net: if a service can't speak OIDC or SAML, it still gets SSO and MFA through the reverse proxy layer.
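The advanced config block on each proxy host follows Authentik's published nginx forward-auth snippet. A trimmed sketch, assuming the embedded outpost at 192.168.20.10:9000; the full snippet in Authentik's docs carries more proxy headers than shown here:

```nginx
# Route outpost paths (login UI, callbacks) to Authentik itself.
location /outpost.goauthentik.io {
    proxy_pass http://192.168.20.10:9000/outpost.goauthentik.io;
    proxy_set_header Host $host;
    proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
    auth_request off;
}

# Everything else requires a valid Authentik session first.
location / {
    auth_request  /outpost.goauthentik.io/auth/nginx;
    error_page    401 = @goauthentik_proxy_signin;
    # ... proxy_pass to the upstream application here ...
}

# On 401, send the browser into Authentik's login flow,
# then back to the originally requested URI.
location @goauthentik_proxy_signin {
    internal;
    return 302 /outpost.goauthentik.io/start?rd=$request_uri;
}
```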


MFA: Enforce at the Flow, Not the App

This is the key architectural decision. MFA is enforced at the Authentik flow level, not at individual applications. Add the MFA stage to the default authentication flow and every application inherits it automatically. No per-app configuration. No gaps.

The authentication flow runs three stages in sequence: identification (username/email), password, then the MFA TOTP challenge. Every login through Authentik, regardless of which service initiated it, hits all three.

Group Structure

Group              MFA                                  Access
homelab-admins     TOTP + optional WebAuthn (YubiKey)   All management surfaces
homelab-users      TOTP                                 SSO with standard access
service-accounts   None (token-based)                   API access, no interactive login
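Gating an individual application by group, beyond the flow-level MFA, is done with an Authentik expression policy. These run inside Authentik's policy engine, not as standalone Python; `ak_is_group_member` is one of Authentik's built-in policy helpers. A sketch restricting an application to the admin group:

```python
# Authentik expression policy (evaluated inside Authentik's policy
# engine, not standalone Python). Bind to an application to restrict it.
return ak_is_group_member(request.user, name="homelab-admins")
```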

The Grafana OAuth Fix

This one cost me time, and the debugging path is worth documenting because the failure mode is counterintuitive.

Symptom: Clicking "Sign in with Authentik" on Grafana returned redirect_uri_mismatch. The registered redirect URI in Authentik and the OAuth config in grafana.ini both showed http://192.168.20.40:3000/login/generic_oauth. Everything appeared to match.

Diagnosis: Browser dev tools → Network tab → inspect the actual OAuth redirect request. The redirect_uri parameter Grafana was sending: http://localhost:3000/login/generic_oauth. Not the configured IP. localhost.

Root cause: Grafana constructs the OAuth redirect_uri dynamically from its [server] section's root_url setting. When root_url is not explicitly set (which it isn't in a fresh install, the line is commented out with a semicolon) Grafana falls back to localhost. Correct from Grafana's own perspective. Completely wrong for any external OAuth callback.

The trap: The root_url setting is in the [server] section, not the [auth.generic_oauth] section. When you're debugging OAuth, you're staring at the auth block. The server section feels unrelated. And the commented-out line in a fresh grafana.ini looks intentional, like Grafana is using sensible defaults. It isn't.

Fix:

[server]
domain = 192.168.20.40
root_url = http://192.168.20.40:3000/

Restart Grafana, clear browser cache, and the OAuth redirect completes. MFA challenge appears (enforced at the Authentik flow level), user info comes through, account is created.


Results

Metric                         Before                      After
Service-specific credentials   12+                         1
MFA coverage                   ~30% (manual, per-app)      100% (enforced, platform-level)
Authentication audit trail     Per-service logs (if any)   Centralized in PostgreSQL
Offboarding                    Manual, per-service         Single disable in Authentik
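That single disable is one toggle in the admin UI, or equivalently one PATCH against Authentik's user API. A sketch of the API form; the user ID and API token are placeholders for a real user's pk and a token created in the Authentik admin UI:

```shell
# Deactivate a user fleet-wide via Authentik's API.
# <USER_ID> and $AUTHENTIK_TOKEN are placeholders.
curl -X PATCH "https://192.168.20.10:9443/api/v3/core/users/<USER_ID>/" \
     -H "Authorization: Bearer $AUTHENTIK_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"is_active": false}' \
     --insecure   # self-signed cert on the homelab endpoint
```

Because every service authenticates through Authentik, setting `is_active: false` revokes interactive access everywhere at once.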

Accepted Limitations

Single point of failure. Home One is a single Authentik instance with no HA. If Node-C goes down, authentication for every integrated service fails. At homelab scale with one operator, this is accepted, but it means the identity layer has the same availability ceiling as the services it protects.

Session timeout tuning. Default session lifetimes are aggressive. Services with long-running sessions (Grafana dashboards, n8n workflow editors) can hit mid-session re-auth prompts. Tuning token lifetimes per-provider is an ongoing adjustment.

Redirect URI complexity. Services accessible via both IP and domain need both registered as redirect URIs in Authentik. Miss one and the OAuth callback fails with a mismatch error. This is easy to forget when adding a new reverse proxy entry.


What This Enables

Authentik is the identity foundation that the rest of the fleet's security posture builds on. Wazuh's SAML integration means security monitoring authenticates through the same flow as everything else. The n8n automation platform authenticates via OIDC, so workflow access is governed by the same policy engine. And when Tailscale's zero-trust overlay controls network-level access, Authentik controls application-level access. Two layers, one operator.

The next project post covers the security monitoring layer: Wazuh SIEM deployment, the 57-rule custom detection library, and the alert pipeline that lands threats in Discord in under 3 seconds.


Part of the Alliance Fleet Infrastructure Series on Holocron Labs