Adding a zfs disk as SMB Share on a Proxmox host

February 2, 2024

UPDATE: I’ve abandoned this setup. The steps below work, but the resulting permissions are a nightmare to manage later. Trying to get anything to run in Docker on this setup was beyond my capability or patience to figure out. With more patience it might work as a NAS setup on the Proxmox host, with the disks passed through for the NAS to handle alone, including ZFS. Instead I have put TrueNAS Scale on the bare metal and simply used its virtualization to stand up a simple Debian VM with Docker Compose, and all is well. TrueNAS handles ZFS nicely and allows snapshots and backups on the ZFS disks.

Use case: running a single simple Proxmox host with VMs/LXCs as guests. I’m trying to keep things simple and secure. I’m not interested in installing a NAS on Proxmox (tried that, with TrueNAS Scale and OMV, and didn’t really like it). I’ve loaded a pair of 8TB spinning disks as a ZFS datastore for the Proxmox host and its guests. I’m looking to share the main datastore to the VMs/LXCs to provide storage for these systems. Proxmox will then take snapshots of this datastore, likely hourly, enabling quite a granular recovery option (at least that’s the plan). Backups of the VMs promise to be more complicated due to this configuration (backing up an SMB share as part of a vzdump is a no-no, or requires complicated setup to manage correctly), but that will come after I figure snapshots out.

setting up zfs as a share on proxmox

on the sharer (proxmox host)

  1. create a zfs pool (this will appear as a single zfs “disk” when it’s done) using the disks added to the system (the system picks up the disks without any intervention from the admin)
  2. in the GUI go to node > Disks > ZFS and click Create: ZFS
  3. fill in the form as needed (I have a simple pair of disks, so I went with “mirror”)
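For reference, the same mirrored pool can be created from the shell. This is a sketch only: the pool name `newdisk` and the disk paths are assumptions, so check yours with `lsblk` before running anything.

```shell
# assumed pool name and disk identifiers -- verify with lsblk first
zpool create newdisk mirror /dev/sdb /dev/sdc
zpool status newdisk   # confirm the mirror came up healthy
```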

create a zfs mountpoint to share

zfs create -o mountpoint=/<zfs pool name>/<share directory> <zfs pool name>/<share directory>
zfs create <zfs pool name>/<share directory>

NOTE the absence and presence of leading slashes (the mountpoint is an absolute path; the dataset name has no leading slash)… this is important if you want this to work. The two commands above are effectively equivalent here, since the default mountpoint for a dataset is already /<zfs pool name>/<share directory>; run one or the other. This creates the new directory and mount point and mounts the “drive”. There are ways to create quotas etc.; you’ll have to do your zfs homework for that.

zfs is “aware of”, or capable of sharing over, either NFS or SMB… since we’re going with SMB we need to “turn sharing on” for our protocol

zfs set sharesmb=on <zfs pool name>/<share directory>
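To confirm the dataset is mounted and sharing is actually on, you can query the properties back. A sketch, using `newdisk/media` as a stand-in for your own pool/dataset names:

```shell
# stand-in dataset name; substitute your own pool/share
zfs get mountpoint,sharesmb newdisk/media
zfs list newdisk/media
```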

https://computingforgeeks.com/how-to-configure-samba-share-on-debian

Make a share directory (if you followed the zfs create above, the mountpoint already exists; /newdisk/media here is an example with a pool named newdisk and a share directory named media)

mkdir /newdisk/media

Now we install Samba (ZFS is already installed and working, as you can see above), plus the Samba client and cifs-utils, which make this easier to do and monitor

apt install samba smbclient cifs-utils


configure samba to share the mountpoint and use the correct network interface of the system

nano /etc/samba/smb.conf


adjust this to correctly point to the network interface the system needs to share out on (vmbr0 in the case of a vanilla Proxmox install), and remove the leading `;` so the line takes effect. All IPs in this article are fictional; you’ll need to adjust them to the relevant IPs of your systems.

; interfaces = 192.168.1.100/24 vmbr0

Add the following near the end of the smb.conf file and ensure it matches your setup. 

[<share-name>]
   comment = Share for doing cool stuff
   path = /<share-path>
   read only = No
   force create mode = 0770
   force directory mode = 0770
   inherit permissions = Yes
   valid users = @smbshare

These settings are up to you; there are a large number of possible options, many of which affect security. I’m not an expert and cannot vouch for these settings. Again, this is a “get it working” situation. This is a somewhat restrictive setup (only users in the smbshare group are allowed), which is deliberate, as I’m planning on sharing this with a specific VM as the main data drive and nothing else.

Now (speaking of groups and permissions) we need permission to get at the samba share. This means a samba group to make permissions easier to manage

groupadd smbshare

change the ownership of the shared directory

chgrp -R smbshare /<zfs pool name>/<share directory>

change the permissions on the shared directory

chmod 2770 /<zfs pool name>/<share directory>

the last digit is a 0 for a moderately restricted setup… you’d put a 2775 there to make it more permissive. Note the leading 2 sets up inheritance of group ownership (the SGID bit, if you want to look this trick up) on anything that’s created here subsequently.
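The SGID trick can be demonstrated locally with throwaway paths. A sketch; the directory and group here are stand-ins, not the actual share:

```shell
# Hypothetical demo of the SGID bit using a throwaway directory.
demo=$(mktemp -d)
chgrp "$(id -gn)" "$demo"     # stand-in for: chgrp -R smbshare <dir>
chmod 2770 "$demo"            # 2 = SGID bit, 770 = rwx for owner+group
touch "$demo/newfile"         # new files inherit the directory's group
stat -c '%a' "$demo"          # shows 2770
stat -c '%G' "$demo/newfile"  # shows the directory's group
```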

And finally, a user to access the Samba share… you can use adduser and walk through the easy way, or you can shortcut the process with useradd, which skips creating a home directory for this user; the password gets set separately either way

add user

useradd -M -s /sbin/nologin sambauser

add user to new group

usermod -aG smbshare sambauser

set password for user

smbpasswd -a sambauser

enable the new user

smbpasswd -e sambauser

finally restart the Samba daemons (smbd serves the shares; nmbd handles NetBIOS name resolution) and see what you’ve got

systemctl restart smbd nmbd

running smbstatus with nothing attached will be disappointing as nothing will show (which is fine).
running testparm will parse your smb.conf, should let you know that all is well, and will show on screen what shares are configured

On the sharee (VM running as a Guest of Proxmox)

test connecting to the samba share (create the mount point first with mkdir -p /mnt/share) using the following (from here https://www.lokarithm.com/2023/05/14/how-to-mount-smb-with-command-line-in-ubuntu-and-debian/)

mount -t cifs -o username=sambauser //servernameOrIP/sharename /mnt/share

if you are challenged for a password and no errors show, then it worked. To see that:

df -h

this will show the list of mounts, including your new one and its reported size; it’s gratifying seeing that work.

now to make this mount every time the system boots up. You can use fstab for this. In my initial research I came to the conclusion that this is unreliable, and for a Debian server that appears to be the case. However, see below to understand why this is and how I solved it for my case.

install cifs-utils

apt install cifs-utils

create a credentials file

nano /<directory where you put the credentials file>/.smbcredentials
username=sambauser
password=<sambauserspassword>
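Since this file holds a plaintext password, it’s worth locking down so only its owner can read it. A minimal sketch, using a temp path as a stand-in for wherever you actually put the file:

```shell
# stand-in path; use your actual credentials file location
cred=$(mktemp)
printf 'username=sambauser\npassword=changeme\n' > "$cred"
chmod 600 "$cred"        # owner read/write only
stat -c '%a' "$cred"     # shows 600
```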

Now, to make this work on reboot we have to add this to fstab. 

nano /etc/fstab

and add the following line to fstab (towards the end of the file) to mount the share

//<IP/hostname of share source>/share /mnt/share cifs vers=3.0,credentials=/<directory where you hid your samba credentials>/.smbcredentials

this enables the mount at boot time of this share.
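Filled in with this article’s fictional addressing, the line might look like the following (the share name, credentials path, and trailing dump/pass fields are assumptions for illustration):

```
//192.168.1.100/media /mnt/share cifs vers=3.0,credentials=/root/.smbcredentials 0 0
```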

To make the system use this version of fstab and mount your drive right now use

mount -a

Now…

df -h

shows your drive is connected and that all is well.

So… this doesn’t work reliably on boot. Sifting through the logs (journalctl -xb) shows the system throwing a “network unreachable” error: the system isn’t waiting for the network before trying to mount the remote drive.

There are MANY articles online on how to fix this timing issue for mounting a drive, and many suggest adding _netdev to the fstab line to make it work. This didn’t work for my install, as it depends on the NetworkManager-wait-online service, which isn’t how this Debian system is configured. This system depends on ifupdown to get the network working; NetworkManager isn’t even installed, let alone running to pay attention to _netdev. NOTE: even after installing NetworkManager, this indicator exits early, as NetworkManager is STILL not the system running the network. I also tried enabling (uncommenting) WAIT_ONLINE_METHOD=ifup in /etc/default/networking. This ALSO didn’t work.

Finally, I came to the same conclusion many others have come to, which is a bit of a hack. It’s a reliable and smart hack, but it still feels hacky to me, since the system should be able to do this on boot (is there a trick to make ifupdown behave like NetworkManager and throw some signal fstab could use to determine when to try mounting a network drive?). I’ve created a “wait-for-ping” service and made the fstab entry depend on this service before trying to mount. This solution came from here and looks like the following.

First create the service

nano /etc/systemd/system/wait-for-ping.service

add the following content

[Unit]
Description=Blocks until it successfully pings <IP address of source system here>
After=network-online.target

[Service]
ExecStartPre=/usr/bin/bash -c "while ! ping -c1 <IP address of source system here>; do sleep 1; done"
ExecStart=/usr/bin/bash -c "echo good to go"
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

now enable the service

systemctl daemon-reload
systemctl enable wait-for-ping.service

One more thing to do: fix fstab to pay attention to this new service when trying to mount the samba share.

//<IP/hostname of share source>/share /mnt/share cifs vers=3.0,credentials=/<path to credentials file>,rw,x-systemd.automount,x-systemd.after=wait-for-ping.service,nofail

a few options have been added in here, as in my adventures solving the automount issue I discovered that there are options one can add:

  • rw allows read and write
  • x-systemd.automount allows the system to automatically mount the drive if you try to use it and it isn’t already mounted. 
  • x-systemd.after=wait-for-ping.service forces fstab to wait patiently for a positive result from wait-for-ping before trying to mount this thing
  • nofail allows the system to continue the boot sequence and not report errors to screen (they are still logged) if the mount fails.

there are additional options as well; see the man pages for fstab to see what’s possible. Also NOTE: fstab entries cause systemd to automatically create mount unit files. To see these files you can run

systemctl cat /mnt/share

for example, this will show the unit file generated for this mount. This is handy for troubleshooting as you work your way through the problems. Networked drives seem to come with a bunch of fussy problems, and there are many solutions for each problem; YMMV.
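systemd derives the generated unit’s name by escaping the mount path; `systemd-escape` (part of systemd) will show you the exact name, assuming the /mnt/share mount point from above:

```shell
# translate a mount point into its systemd mount-unit name
systemd-escape -p --suffix=mount /mnt/share   # prints: mnt-share.mount
```

so `systemctl cat mnt-share.mount` and `systemctl cat /mnt/share` refer to the same generated unit.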

I also tried using autofs. While it handled the automount (if not mounted, mount the drive as configured when it’s accessed), it didn’t solve the problem of mounting at boot time. Since systemd is built in and already running, it made the most sense to me to work with the system to make this happen.

See this link to understand why I haven’t moved /var to this large disk. 
