
🛃 NFS Userproxy 👮 for Webapps via WebDAV

This userproxy works similarly to IAP (Identity-Aware Proxy) solutions, but without the large 💥 blast radius.

The security layering is: user, www-data, root, (host).

😨 Risk mitigation

  • (Very likely >10%) If a user falls, the other users, www-data, root and the host still stand.

  • (Possible <1%) If www-data falls, all users (esp. their tickets), root and the host still stand, BUT passwords of users who log in after the compromise fall (user credential accumulation). Use OTP to mitigate this.

  • (Very unlikely <0.1%) If root falls, the host principals fall. ALL current tickets fall. Only passwords of users who log in after the compromise fall (user credential accumulation). Use OTP to mitigate this.

This is a design problem of IAP and the same risk you have on user workstations. Always use GSSAPI where you can (but then you don't need a userproxy, lol).

  • If you offer many userproxies (256 MB RAM per proxy is enough), the 💥 blast radius is n users per proxy.

--> A 💥 blast radius of 1️⃣, with 1 user and 1 host per proxy, is doable.

🏆 Best security with 1 proxy, 1 user + OTP. Shorten ticket lifetime on the proxy with a cleanup loop that deletes stale tickets.
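A minimal sketch of such a cleanup loop, assuming FILE ccaches under /var/lib/gssproxy/webclients (with the KCM setup used later in this document you would call kdestroy per user instead); the path and the 10-minute threshold are assumptions:

```shell
#!/bin/bash
# Hypothetical ticket cleanup: delete user ccaches untouched for MAX_AGE_MIN minutes.
# CCACHE_DIR and the 10-minute default are assumptions, adjust to your setup.
CCACHE_DIR="${CCACHE_DIR:-/var/lib/gssproxy/webclients}"
MAX_AGE_MIN="${MAX_AGE_MIN:-10}"
if [ -d "$CCACHE_DIR" ]; then
    find "$CCACHE_DIR" -maxdepth 1 -type f -name 'krb5cc_*' -mmin +"$MAX_AGE_MIN" -delete
fi
```

Run it from cron or a systemd timer every few minutes.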

(host) falls with root

  • If root falls, the host principal is lost (not a surprise), but it might seem possible (surprise, it's not!) to have machine-ID-less mounts using only the user ticket, which has a lifetime while host/ has none. In that case the host would no longer be able to access public NFS mounts inside the domain once all user tickets are invalid.

I tried a mount without a machine keytab, no dice: you need a host principal owned by the machine. Check your NFS security and allow no anonymous reads.
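For the "no anonymous reads" part, a hypothetical /etc/exports line that allows only Kerberos flavors (no sec=sys, so no AUTH_SYS/anonymous access); the hostname pattern and options are assumptions:

```
/homes  *.domain.tld(rw,sec=krb5p:krb5i:krb5,no_subtree_check)
```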

Speed and Space

Forking the backend takes next to no time (<5 ms) and less than 10 MB RAM per user. The process exits after a few seconds (to absorb burst requests), then the memory is free again. Easily 1000 users on 8 GB or so.
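A quick sanity check of that claim, treating the ~10 MB per forked worker as the budget (the numbers are the ones stated above):

```shell
# Back-of-envelope: workers exit after a few seconds, so this is the ceiling
# of *concurrent* backend workers on an 8 GB box, not the total user count.
RAM_MB=8192
PER_WORKER_MB=10
MAX_CONCURRENT=$(( RAM_MB / PER_WORKER_MB ))
echo "$MAX_CONCURRENT"
# → 819
```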

Further work

Extensible to VDI solutions with loginctl linger (more RAM, of course).

Please note this is a work in progress, but right now it works as intended; no bugs with multiple users writing concurrently etc. Testing and feedback are welcome.

current solution

cat webdav.conf 
<IfModule mod_ssl.c>
<VirtualHost *:443>
    ServerName userproxy.domain.tld
    ServerAlias webdav.domain.tld

    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/userproxy.domain.tld/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/userproxy.domain.tld/privkey.pem

    # One attempt to make sure we never have the frontend touch the filesystem
    # this means Location / goes to this Directory, will issue a stat()
    DocumentRoot /var/www/empty

    #userproxy frontend: this is what clients connect to
    <Directory />
        # disallow .htaccess lookups
        AllowOverride None
        # no options at all for the empty frontend root
        Options None
        # new syntax for the old "Allow from all"
        Require all granted
    </Directory>

    # this keeps Apache from walking into the Directory /homes, which would trigger a .htaccess lookup and a stat() as www-data
    <Location /homes>
        AuthType Basic
        AuthName "WebDAV"
        AuthBasicProvider PAM
        AuthPAMService webdav
        Require valid-user

        # header passing authed user
        RequestHeader set X-Remote-User expr=%{REMOTE_USER}
        #SetHandler "proxy:http://127.0.0.1:8080/homes"
        Options -FollowSymLinks

    </Location>

    # PROXYPRESERVEHOST

    # without this the backend would not know who is connecting and just see 127.0.0.1:8080
    # with it the backend gets the request parameter:    Host: userproxy.domain.tld.

    # Gemini explain: Why you need it: WebDAV responses (like PROPFIND) contain XML tags called <D:href>.
    ## These tags tell the client where the files are. If the Backend thinks its name is 127.0.0.1,
    ## it will send links back to the client like http://127.0.0.1:8080/homes/richard/file.txt.
    ## The client (Nautilus) will try to click that and fail because it can't reach 127.0.0.1.
    ## ProxyPreserveHost ensures the Backend knows its "Public Name."

    ProxyPreserveHost On

    #RewriteEngine On
    #RewriteCond %{REQUEST_URI} ^/home/[^/]+$
    #RewriteRule ^(.*)$ $1/ [R=301,L]

    # forwarder
    ## Apache's connection pooling.
    ## The Frontend ProxyPass keeps idle connections open to the Backend (127.0.0.1:8080) to save time.
    ## However, mpm_itk drops root privileges to richard on the first request. If the Frontend reuses
    ## that same connection milliseconds later for a background check without the auth header, or for
    ## another user, the backend child process is already richard. It cannot setuid() again because it
    ## is no longer root, causing the request to fail unexpectedly.

    ## Force the Frontend to close the connection after every single request so the Backend spawns a fresh mpm_itk worker from root every time.

    # For testing you can turn disablereuse on; for burst requests in prod it can stay off. The backend process only runs for a few seconds anyway.
    #ProxyPass /homes http://127.0.0.1:8080/safe-homes nocanon disablereuse=On
    ProxyPass /homes http://127.0.0.1:8080/safe-homes nocanon

    # this is NOT a backfeed, the data comes back via socket automagically. This only does:

    ## If the Backend sends a "Redirect" (like the trailing slash issue we had), it sends a header:
    ## Location: http://127.0.0.1:8080/homes/richard/

    ## If you didn't have ProxyPassReverse, the Frontend would send that exact string to Nautilus. Nautilus would try to connect to http://127.0.0.1:8080 and die.
    ## ProxyPassReverse sees that "Location" header, recognizes the 127.0.0.1:8080 part, and rewrites it to the Frontend's public URL:
    ## Location: https://userproxy.domain.tld/homes/richard/

    ProxyPassReverse /homes http://127.0.0.1:8080/homes

    # I'm currently working on this RequestHeader edit

    # =========================================================
    # WEBDAV INTEROPERABILITY: THE DESTINATION HEADER
    # =========================================================

    # PROBLEM: WebDAV 'MOVE' and 'COPY' methods use a 'Destination' header 
    # to tell the server where to put the new file.
    # The client (Nautilus/WinExplorer) sends an ABSOLUTE URL:
    #   Destination: https://userproxy.domain.tld/homes/richard/new.txt

    # CONFLICT: Our Backend (8080) is plain HTTP. When it sees an HTTPS 
    # destination, it thinks we are asking it to move a file to a 
    # DIFFERENT server (External Cross-Server Move), which it will deny (403/502).

    # SOLUTION: We must "downgrade" the header to HTTP so the Backend 
    # recognizes the destination as its own local filesystem.
    # The 'early' flag ensures this happens before the proxy logic executes.
    RequestHeader edit Destination ^https:// http:// early

    # NOTE: Since we are now using Namespace Matching (/homes -> /homes), 
    # we NO LONGER need to edit the path itself (e.g., /home -> /shadow_homes).
    # This reduces complexity and improves reliability with GVfs.

</VirtualHost>
</IfModule>

<VirtualHost 127.0.0.1:8080>

    # Sadly a per-user DavLockDB won't work; it's recommended to disable SSH access for users.
    # You need to create the parent webdav folder with mode 3770 (setgid and sticky bit on) and chown it anyuser:user_group

    # Still, the DavLock file itself will be created 0750 by the first user, so wait, or pre-create the file and chmod it 770 anyuser:user_group.
    # Even more secure: forbid deletion of those files via the sticky bit on the parent folder, then chown it to www-data:user_group.
    # Now users can write the lock file but not delete it (deletion would make it reappear as 0750, owned by whichever user recreates it).
    #DavLockDB /var/lib/apache2/webdav/%{HTTP:X-Remote-User}/DavLock
    DavLockDB /var/lib/apache2/webdav/DavLock

    LogLevel alert rewrite:trace6 mpm_itk:trace4 \
    core:trace5 \
    dav:trace8 \
    dav_fs:trace8 \
    authz_core:trace5

    ## UseCanonicalName On: Prevents Apache from generating self-referential 
    ## redirects using its internal IP (127.0.0.1). Forces it to use the ServerName.
    ServerName https://userproxy.domain.tld
    UseCanonicalName On

    #DocumentRoot /var/www/empty

    Alias /safe-homes /var/www/empty

    # don't go insane: this just maps the HTTP request path /homes to the actual on-disk folder /homes, lol
    # Alias /homes /homes

    ## IDENTITY BRIDGE: Capture the header from the Frontend.
    ## mpm_itk switches the process UID to Richard BEFORE the Directory walk.
    # was in location which is wrong, drop to user priv asap
    AssignUserIDExpr %{HTTP:X-Remote-User}

    <Directory />
        AllowOverride None
        Options -FollowSymLinks -Indexes
        Require all granted
    </Directory>

    # 3. The Pivot
    <Location /safe-homes>
        Require all granted
        RewriteEngine On

        # Because Alias ran first, the URI string here is "/var/www/empty/richard/"
        # We capture the "/richard/" part and bounce it to "/homes/richard/"
        RewriteRule ^/var/www/empty(.*)$ /homes$1 [L]
    </Location>

    Alias /homes /homes

    #<LocationMatch "^/homes/([^/]+)">
    #<Location /homes>
    <Directory /homes>
        DAV On

        # DirectoryCheckHandler On: Critical for mpm_itk + Kerberos.
        # It tells Apache to proceed even if the parent process (www-data) 
        # can't fully validate the path components.
        # DirectoryCheckHandler On

        # Disable .htaccess searches to prevent "EUID 33" probes on NFS.
        AllowOverride None

        # Options -FollowSymLinks: Prevents lstat() calls by the parent 
        # process. Using +SymLinksIfOwnerMatch is the "safe" compromise.
        Options +Indexes +SymLinksIfOwnerMatch -MultiViews

        Require all granted
    </Directory>
    #</Location>
    #</LocationMatch>

    # better mpm itk info
    # LogLevel mpm_itk:info
    # LogLevel mpm_itk:trace2

    ErrorLog ${APACHE_LOG_DIR}/backend_error.log
    # combined log superseded by the itk_debug CustomLog below (avoids double logging)
    #CustomLog ${APACHE_LOG_DIR}/backend_access.log combined
    # thanks gemini
    ## %P: Process ID
    ## %{tid}P: Thread ID (useful for event/worker MPMs)
    ## %u: Authenticated User (from header)

    LuaHookLog /etc/apache2/get_uid.lua log_uid

    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{X-Remote-User}i\" PID:%P OS_UID:%{system_uid}n" itk_debug
    CustomLog ${APACHE_LOG_DIR}/backend_access.log itk_debug

</VirtualHost>
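The commented DavLockDB notes in the backend vhost describe a shared lock directory with setgid and sticky bits; a hypothetical setup sketch (the group name user_group is an assumption):

```shell
# Hypothetical DavLock directory setup matching the comments in the vhost above.
setup_davlock_dir() {
    local dir="$1" group="$2"
    mkdir -p "$dir"
    chgrp "$group" "$dir" 2>/dev/null || true  # group may not exist on a test box
    # setgid: lock files inherit the group; sticky: users cannot delete each other's files
    chmod 3770 "$dir"
}
setup_davlock_dir "${DAVDIR:-/tmp/davlock-demo}" user_group
```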

current pam

Please note this PAM script runs on the frontend as www-data (uid www-data), NOT root. The hook that takes the tickets to root is sudoed.

YOU NEED (via visudo): www-data ALL=(root) NOPASSWD: /usr/local/bin/kinit-as-user.sh (and the same for /usr/local/bin/takeTicketsAsRoot.sh if you use that hook)
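Assuming both helper scripts from this document, the corresponding sudoers fragment (installed via visudo -f /etc/sudoers.d/webdav) would look like:

```
# /etc/sudoers.d/webdav -- hypothetical, paths taken from this document
www-data ALL=(root) NOPASSWD: /usr/local/bin/kinit-as-user.sh
www-data ALL=(root) NOPASSWD: /usr/local/bin/takeTicketsAsRoot.sh
```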

root@userproxy:/etc/apache2/sites-enabled# cat /etc/pam.d/webdav 
#auth required pam_unix.so
#account required pam_unix.so
#session required pam_unix.so

# gets a user principal
# auth    required    pam_krb5.so minimum_uid=1000 

# I think this is SUSE syntax
# ccache_dir=/var/lib/gssproxy/webclients ccname_template=FILE:%d/krb5cc_%U

# that is wrong anyway; the Debian man page is correct, also for appdefaults in krb5.conf:
# https://manpages.debian.org/unstable/libpam-krb5/pam_krb5.5.en.html
# ccache_dir=/var/lib/gssproxy/webclients ccache=FILE:%d/krb5cc_%U

# This hook knows the password and stores the ticket to ccache_dir=/var/lib/gssproxy/webclients
auth required pam_exec.so expose_authtok /usr/local/bin/webdav-kinit.sh

# 2. Always succeed the "account" check 
# This bypasses the need for the user to be in /etc/passwd
account required    pam_permit.so

current mount hook

  • uses KCM and sudo, triggers mount via autofs.

/usr/local/bin/webdav-kinit.sh

root@userproxy:/etc/apache2/sites-enabled# cat /usr/local/bin/webdav-kinit.sh    
#!/bin/bash
# /usr/local/bin/webdav-kinit.sh

LOG_FILE="/var/log/webdav-kinit.log"

# 1. Capture the timestamp and metadata
# PAM_RHOST: The remote IP of the client (Nautilus/curl/etc)
# PAM_USER:  The user logging in
# PAM_SERVICE: Usually 'webdav' (from your config)
NOW=$(date "+%Y-%m-%d %H:%M:%S")
CLIENT_IP="${PAM_RHOST:-unknown_ip}"
SERVICE="${PAM_SERVICE:-unknown_service}"
PID=$$

MOUNTPOINT="/homes/$PAM_USER"

# plaintext password from pam_exec
read -r PASSWORD

# Log the initial connection attempt
echo "[$NOW] [PID: $PID] AUTH_START: User='$PAM_USER' Client='$CLIENT_IP' Service='$SERVICE'" >> "$LOG_FILE"

USER_UID=$(id -u "$PAM_USER")

#CCACHE="/var/lib/gssproxy/webclients/krb5cc_${USER_UID}"

# better syntax similar to krb5 lines in pam
# not needed with kcm
CCACHE_DIR="/var/lib/gssproxy/webclients"
CCACHE="$CCACHE_DIR/krb5cc_${USER_UID}"

# Request the ticket and force it into the gssproxy path
# echo "$PASSWORD" | kinit -c "FILE:$CCACHE" "$PAM_USER"
# echo "$PASSWORD" | kinit -c "KCM:$USER_UID" "$PAM_USER"

# Note: the password is passed as an argv here and is briefly visible in the process list.
/usr/bin/sudo /usr/local/bin/kinit-as-user.sh "$PASSWORD" "$PAM_USER"

KINIT_RET=$?

# exit 0

# kinit failed (bad password), exit immediately
if [ $KINIT_RET -ne 0 ]; then

    echo "[$NOW] [PID: $PID] KINIT_FAIL: Code=$KINIT_RET" >> "$LOG_FILE"
    exit $KINIT_RET
fi

# 2. Trigger the AutoFS mount
# The Apache frontend runs as www-data, so doing this might result in "Permission denied" 
# to read the folder contents. That does not matter! The simple act of calling stat() 
# forces the kernel to ask AutoFS to resolve the path, which triggers the mount.

echo "[$NOW] [PID: $PID] KINIT_SUCCESS: Ticket saved to $CCACHE" >> "$LOG_FILE"
#echo "[$NOW] [PID: $PID] KINIT_SUCCESS: Ticket saved to KCM" >> "$LOG_FILE"

# hand all tickets over to root
# not needed with kcm

# needs www-data ALL=(root) NOPASSWD: /usr/local/bin/takeTicketsAsRoot.sh (visudo)
# /usr/bin/sudo /usr/local/bin/takeTicketsAsRoot.sh $PAM_USER $CCACHE

MOUNT_CHECK=$(grep -qs "$MOUNTPOINT " /proc/mounts && echo "MOUNTED" || echo "NOT_MOUNTED_YET")

echo "[$NOW] [PID: $PID] MOUNT_STATUS_BEFORE: $MOUNT_CHECK" >> "$LOG_FILE"
echo "------------------------------------------------------------------" >> "$LOG_FILE"

if ! grep -qs "$MOUNTPOINT " /proc/mounts; then
    # Not mounted? Poke it to wake up AutoFS.
    # We suppress errors because www-data might get "Permission Denied" 
    # even if the mount succeeds (which is fine).
    # stat "$MOUNTPOINT" >/dev/null 2>&1
    # stat "$MOUNTPOINT" >> "$LOG_FILE"
    # mount $MOUNTPOINT

    # id less mount
    # KRB5CCNAME="FILE:$CCACHE" mount -t nfs4 -o sec=krb5 "md.it-verband-chemnitz.de:/homes/$PAM_USER" "$MOUNTPOINT"
    # KRB5CCNAME=FILE:$CCACHE mount -t nfs4 -o sec=krb5 md.it-verband-chemnitz.de:/homes/$PAM_USER "$MOUNTPOINT"
    # 
    # this would work rn but you still need the machine principals.
    #/usr/bin/sudo /usr/local/bin/mountAsRoot.sh $CCACHE $PAM_USER "$MOUNTPOINT"

    POKE_OUT=$(/usr/bin/timeout 3s /usr/bin/ls -ld "$MOUNTPOINT" 2>&1)
    POKE_RET=$?

    if [ $POKE_RET -eq 0 ]; then
        echo "[$NOW] [PID: $PID] POKE_SUCCESS: $POKE_OUT" >> "$LOG_FILE"
    elif [ $POKE_RET -eq 124 ]; then
        echo "[$NOW] [PID: $PID] POKE_TIMEOUT: NFS Server did not respond in 3s!" >> "$LOG_FILE"
    else
        echo "[$NOW] [PID: $PID] POKE_MSG: $POKE_OUT (Code: $POKE_RET)" >> "$LOG_FILE"
    fi

fi

MOUNT_CHECK=$(grep -qs "$MOUNTPOINT " /proc/mounts && echo "MOUNTED" || echo "NOT_MOUNTED_YET")

echo "[$NOW] [PID: $PID] MOUNT_STATUS_AFTER: $MOUNT_CHECK" >> "$LOG_FILE"
echo "------------------------------------------------------------------" >> "$LOG_FILE"

# check the davlock directory or create it

# mkdir /var/lib/apache2/webdav/$PAM_USER
# chown $PAM_USER:user_group /var/lib/apache2/webdav/$PAM_USER
# chmod 700 /var/lib/apache2/webdav/$PAM_USER

# timeout 0.5s stat "$MOUNTPOINT" >/dev/null 2>&1 || true

#if ! mountpoint -q "$MOUNTPOINT"; then
    # Instead of stat, just try to 'ls' the PARENT directory 
    # to wake up AutoFS without touching the restricted files.
    # ls /homes >/dev/null 2>&1
#fi

exit 0

current KCM solution needs sudo for the kinit

/usr/local/bin/kinit-as-user.sh

#!/bin/bash
# /usr/local/bin/kinit-as-user.sh

# 1. Grab arguments (the caller passes the password first, then the user)
PASSWORD="$1"
TARGET_USER="$2"

# 2. Run kinit as the specific user. 
# 'runuser' will pass the stdin (your password pipe) through to kinit.
echo "$PASSWORD" | runuser -u "$TARGET_USER" -- kinit -c "KCM:" "$TARGET_USER"
# runuser -u "$TARGET_USER" -- kinit -c "KCM:" "$TARGET_USER" <<< "$PASSWORD"

# 3. Capture the return code of kinit (passed through runuser)
KINIT_RET=$?

# 4. Final exit - propagate the code so PAM/Apache knows whether it worked
exit $KINIT_RET

gssproxy

This had euid = 33 so that www-data could hold the tickets; not needed anymore.

cat /etc/gssproxy/99-network-fs-clients.conf 
[service/network-fs-clients]

# Naming does not matter, it's matched via euid anyway
#[service/nfs-client]
    mechs = krb5
    cred_store = keytab:/etc/krb5.keytab.nfs
    # not needed. This is where gssproxy would store your tickets when no user ccache is found
    # cred_store = ccache:FILE:/var/lib/gssproxy/webclients/krb5cc_%U
    # only needed for fixed keytab users like a htpc but on this proxy never used
    # cred_store = client_keytab:/var/lib/gssproxy/webclients/%U.keytab
    cred_usage = initiate
    allow_any_uid = yes
    trusted = yes
    euid = 0
    # euid = 33
    min_lifetime = 60
    debug_level = 3

Timeouts NFS Mounts

  1. Systemd Automounts

If your mounts are managed via systemd, you can find the global timeout in /etc/systemd/system.conf or by looking at the specific automount unit.
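For fstab-managed NFS mounts the idle timeout can be set per entry; a hypothetical line (server name and path are assumptions):

```
# /etc/fstab -- hypothetical entry
server.domain.tld:/homes/richard  /homes/richard  nfs4  sec=krb5,noauto,x-systemd.automount,x-systemd.idle-timeout=5min  0  0
```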

  2. AutoFS/SSSD Timeout

/etc/autofs.conf (or /etc/default/autofs on older Debian).

timeout = 300 # 5 min (3600 would be 1 h)

  3. SSSD Side

Since sssd_autofs is starting and stopping, ensure the responder stays alive a bit longer to avoid the "startup delay" when a user clicks. In /etc/sssd/sssd.conf:

[autofs]
# Keep the responder alive for 1 hour after the last request
idle_timeout = 3600

TBI @TODO

Machine-ID-less mount: seems impossible, you need a host principal.
