Post 016
The Problem
Twelve services, twelve passwords. No centralized authentication. No audit trail showing who logged into what and when. No way to enforce MFA uniformly - some services supported it, most didn't, and the ones that did each had their own enrollment flow.
In enterprise IT, this is the exact problem that Okta, Azure AD, and Ping solve. I wanted the same outcome - one login, universal MFA, full audit logging - on self-hosted infrastructure.
Why Authentik
Over Keycloak: Keycloak is the enterprise standard, but the configuration overhead is significant for a single-operator lab. Authentik's blueprint system and UI are more maintainable at this scale. The tradeoff: smaller community, less enterprise tooling.
Over Authelia: Authelia has fewer native OIDC integrations and no built-in user directory. For a lab with 15+ services, Authentik's feature set justifies the extra resource overhead.
Over LDAP: Every service in my stack supports OAuth2/OIDC natively. LDAP adds maintenance cost without benefit at this scale. If I were federating with legacy enterprise apps, LDAP would make sense.
Deployment
Authentik runs on Node-B (CR90 Corvette) in the Home One VM alongside PostgreSQL and Redis - its two dependencies. Co-locating all three on one host keeps the identity stack in a single failure domain: a network partition can never separate Authentik from its database, so authentication either works or fails cleanly, with no half-alive state.
Docker Compose
```yaml
services:
  # Minimal PostgreSQL and Redis definitions so the depends_on references below resolve
  postgresql:
    image: postgres:16-alpine
    container_name: postgres
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - ./postgresql:/var/lib/postgresql/data

  redis:
    image: redis:alpine

  authentik-server:
    image: ghcr.io/goauthentik/server:latest
    command: server
    environment:
      AUTHENTIK_SECRET_KEY: ${AUTHENTIK_SECRET_KEY}
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__USER: authentik
      AUTHENTIK_POSTGRESQL__NAME: authentik
      AUTHENTIK_POSTGRESQL__PASSWORD: ${POSTGRES_PASSWORD}
    ports:
      - "9000:9000"
      - "9443:9443"
    depends_on:
      - postgresql
      - redis

  authentik-worker:
    image: ghcr.io/goauthentik/server:latest
    command: worker
    environment:
      AUTHENTIK_SECRET_KEY: ${AUTHENTIK_SECRET_KEY}
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__USER: authentik
      AUTHENTIK_POSTGRESQL__NAME: authentik
      AUTHENTIK_POSTGRESQL__PASSWORD: ${POSTGRES_PASSWORD}
    depends_on:
      - postgresql
      - redis
```
Environment Setup
```shell
# Generate the secret key and append it to .env
echo "AUTHENTIK_SECRET_KEY=$(openssl rand -hex 32)" >> .env

# Create the database role and the database
docker exec -it postgres psql -U postgres \
  -c "CREATE USER authentik WITH PASSWORD '${POSTGRES_PASSWORD}';" \
  -c "CREATE DATABASE authentik OWNER authentik;"
```
Initial Access
`https://192.168.20.10:9443/if/flow/initial-setup/`
The setup wizard creates the akadmin account. After that, create your real admin user and disable the default.
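Disabling akadmin can be scripted against Authentik's API rather than clicked through the UI. A sketch, assuming an API token created under Directory → Tokens (the endpoints are the standard v3 core API; `jq` is an assumption):

```shell
# Look up akadmin's numeric ID
curl -s -H "Authorization: Bearer $AUTHENTIK_TOKEN" \
  "http://192.168.20.10:9000/api/v3/core/users/?username=akadmin" \
  | jq '.results[0].pk'

# Deactivate the account (replace <pk> with the ID from above)
curl -s -X PATCH \
  -H "Authorization: Bearer $AUTHENTIK_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"is_active": false}' \
  "http://192.168.20.10:9000/api/v3/core/users/<pk>/"
```

Deactivating rather than deleting keeps the account's audit history intact.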
MFA Enforcement
MFA is enforced at the Authentik flow level, not at individual applications. This is the key architectural decision - add the MFA stage to the default authentication flow and every application inherits it automatically.
Flow Configuration
Authentik → Flows → default-authentication-flow → Edit Stages:
1. Identification Stage (username/email)
2. Password Stage
3. MFA TOTP Stage ← Added here
Once the TOTP stage is in the default flow, no service can bypass MFA. No per-app configuration needed.
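The same change can be captured declaratively. Authentik's blueprint format expresses the stage binding roughly like this (the flow slug and stage name are the defaults of a stock install; the `order` value is an assumption that places it after the password stage):

```yaml
# Fragment of an Authentik blueprint (entries list)
- model: authentik_flows.flowstagebinding
  identifiers:
    target: !Find [authentik_flows.flow, [slug, default-authentication-flow]]
    order: 30
  attrs:
    stage: !Find [authentik_stages_authenticator_validate.authenticatorvalidatestage,
                  [name, default-authentication-mfa-validation]]
```

Keeping the flow in a blueprint means the MFA requirement survives a rebuild of the instance.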
Group Structure
| Group | MFA | Access |
|---|---|---|
| homelab-admins | TOTP + optional WebAuthn | All management surfaces |
| homelab-users | TOTP | SSO with standard access |
| service-accounts | Token-based | API access, no interactive login |
Service Integrations
Grafana - OIDC
In Authentik: Applications → Providers → Create OAuth2/OpenID Provider.
```
Name: Grafana
Client type: Confidential
Redirect URIs: http://192.168.20.40:3000/login/generic_oauth
```
In Grafana's grafana.ini:
```ini
[server]
domain = 192.168.20.40
root_url = http://192.168.20.40:3000/

[auth.generic_oauth]
enabled = true
name = Authentik
client_id = YOUR_CLIENT_ID
client_secret = YOUR_CLIENT_SECRET
scopes = openid profile email
auth_url = http://192.168.20.10:9000/application/o/authorize/
token_url = http://192.168.20.10:9000/application/o/token/
api_url = http://192.168.20.10:9000/application/o/userinfo/
```
Critical gotcha: The root_url in Grafana's [server] section must be explicitly set. Without it, the OAuth redirect URI defaults to localhost and the callback fails with redirect_uri_mismatch. Full detail in Post 010.
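One refinement worth sketching (an addition, not part of the setup above): Grafana's generic OAuth can map Authentik group membership to Grafana roles with a JMESPath expression, so admins never need manual promotion. This assumes Authentik's profile scope exposes a `groups` claim, which it does on a stock install:

```ini
[auth.generic_oauth]
# Members of homelab-admins (see Group Structure above) become Grafana Admins;
# everyone else lands as Viewer
role_attribute_path = contains(groups[*], 'homelab-admins') && 'Admin' || 'Viewer'
```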
Wazuh Dashboard - SAML
Authentik provides a SAML provider; the Wazuh dashboard is configured with the IdP metadata URL and SP entity ID. Users authenticate via Authentik and land in the dashboard with role mappings applied via Authentik group membership.
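For reference, the dashboard side of that SAML exchange lives in the OpenSearch Security configuration (`config.yml`), not in Wazuh itself. A sketch of the auth domain, with placeholder hosts, entity IDs, and the provider ID as assumptions:

```yaml
saml_auth_domain:
  http_enabled: true
  transport_enabled: false
  order: 1
  http_authenticator:
    type: saml
    challenge: true
    config:
      idp:
        # Authentik's SAML metadata endpoint for this provider (<id> is the provider's numeric ID)
        metadata_url: http://192.168.20.10:9000/api/v3/providers/saml/<id>/metadata/?download
        entity_id: authentik
      sp:
        entity_id: wazuh-saml
      kibana_url: https://<wazuh-dashboard-host>
      # SAML attribute carrying Authentik group names, for role mapping
      roles_key: Roles
      exchange_key: <random-64-char-string>
  authentication_backend:
    type: noop
```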
Portainer - OAuth2
```
Name: Portainer
Client type: Confidential
Redirect URIs:
  https://192.168.20.10:9443/
  https://portainer.tima.dev/
```
In Portainer → Settings → Authentication → OAuth:
```
Authorization URL: http://192.168.20.10:9000/application/o/authorize/
Access Token URL:  http://192.168.20.10:9000/application/o/token/
Resource URL:      http://192.168.20.10:9000/application/o/userinfo/
Client ID:         (from Authentik provider)
Client Secret:     (from Authentik provider)
Scopes:            openid profile email
```
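When an OAuth client misbehaves, it helps to confirm the endpoint URLs straight from Authentik's per-application OIDC discovery document instead of retyping them. The application slug (`portainer` here) is an assumption; `jq` is optional:

```shell
curl -s http://192.168.20.10:9000/application/o/portainer/.well-known/openid-configuration \
  | jq '{authorization_endpoint, token_endpoint, userinfo_endpoint}'
```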
NPM Forward Auth - Catch-All
Services that don't natively support SSO sit behind NPM with Authentik's forward auth outpost. The outpost intercepts unauthenticated requests before they reach the application.
NPM Advanced Config (per proxy host):
```nginx
location /outpost.goauthentik.io {
    proxy_pass http://192.168.20.10:9000/outpost.goauthentik.io;
    proxy_set_header Host $host;
    proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $http_host;
}
auth_request /outpost.goauthentik.io/auth/nginx;
```
Debugging tip: If the outpost returns "configuration error," it's not a network issue - it means the outpost can't match the incoming request to a provider. Verify:
- The provider's External Host matches exactly (including the scheme: `https://` vs `http://`)
- The application is in the outpost's Selected Applications list
- The forward auth request includes the `X-Original-URL` and `X-Forwarded-Host` headers
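A quick way to exercise the outpost directly, bypassing NPM entirely (the hostname in the header is illustrative, not from my actual config):

```shell
curl -s -o /dev/null -w '%{http_code}\n' \
  -H "X-Original-URL: https://homepage.tima.dev/" \
  http://192.168.20.10:9000/outpost.goauthentik.io/auth/nginx
```

An unauthenticated 401 here means the outpost matched the request to a provider and wants a login; a "configuration error" response means it could not match, which narrows the problem to the checklist above rather than the proxy.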
Integrated Services
| Service | Protocol | MFA Enforced | Method |
|---|---|---|---|
| Grafana | OIDC | ✅ | Native OAuth |
| Portainer | OAuth2 | ✅ | Native OAuth |
| Wazuh | SAML | ✅ | Native SAML |
| n8n | OIDC | ✅ | Native OAuth |
| Vaultwarden | OIDC | ✅ | Native OAuth |
| OpenWebUI | OAuth2 | ✅ | Native OAuth |
| Homepage | Forward Auth | ✅ | NPM outpost |
| Uptime Kuma | Forward Auth | ✅ | NPM outpost |
Results
| Metric | Before | After |
|---|---|---|
| Passwords managed | 12+ (unique per app) | 1 |
| MFA coverage | ~30% (manual per-app) | 100% enforced |
| Login audit visibility | None | Complete (every auth event) |
| Offboarding time | Manual checklist | Disable one account |
| Password reset overhead | Frequent | Near zero |
What Broke
OIDC redirect loops - Misconfigured callback URLs. The root_url gotcha in Grafana (Post 010) was the worst offender. Created a checklist for onboarding new apps.
MFA lockouts - During the initial rollout I tested TOTP enrollment on a temporary device and lost the TOTP seed, forcing an MFA reset through the admin API. Recovery codes are now standard practice for every account.
Session timeouts - Initially set too aggressively (1 hour). Users hit re-auth prompts constantly. Tuned to 8-hour sessions with sliding refresh tokens.
Forward auth "configuration error" - The outpost couldn't match requests because the External Host in the provider didn't include the scheme (https://). Added scheme verification to the onboarding checklist.
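On the session-timeout fix above: the duration lives in the User Login stage of the authentication flow, so it can be pinned declaratively alongside the MFA configuration. Roughly, as a blueprint fragment (the stage name is the stock-install default; treat it as an assumption):

```yaml
# Fragment of an Authentik blueprint (entries list)
- model: authentik_stages_user_login.userloginstage
  identifiers:
    name: default-authentication-login
  attrs:
    # timedelta string: 8-hour sessions instead of the browser-session default
    session_duration: hours=8
```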
What I'd Do Differently in Production
Single Authentik instance is a single point of failure. If Authentik goes down, authentication for all services fails. In production: multiple Authentik workers behind a load balancer, HA PostgreSQL backend, Redis Sentinel.
Related: Post 010 - Grafana OAuth with Authentik: The root_url Gotcha | Post 014 - Tailscale + Authentik Zero Trust