Arm NAS

Ansible playbook to configure my Arm NASes:

  • Primary NAS: 45Drives HL15 (nas01)
  • Secondary NAS: Raspberry Pi 5 with SATA HAT (nas02)

Hardware

Primary NAS - 45Drives HL15

The current iteration of the HL15 I'm running contains the following hardware:

Some of the above links are affiliate links. I have a series of videos showing how I put this system together:

Secondary NAS - Raspberry Pi 5 with SATA HAT

The current iteration of the Raspberry Pi 5 SATA NAS I'm running contains the following hardware:

Some of the above links are affiliate links. I have a series of videos showing how I put this system together:

Preparing the hardware

The HL15 should not require any special prep, besides having Ubuntu installed. The Raspberry Pi 5 is running Debian (Pi OS) and needs its PCIe connection enabled. To do that:

  1. Edit the boot config: sudo nano /boot/firmware/config.txt

  2. Add in the following config at the bottom and save the file:

    dtparam=pciex1
    dtparam=pciex1_gen=3
    
  3. Reboot

Confirm the SATA drives are recognized with lsblk.
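
For example, to list each drive with its size and model (exact output depends on the drives you have installed):

# List block devices with size, type, and model columns
lsblk -o NAME,SIZE,TYPE,MODEL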

Running the playbook

Ensure you have Ansible installed, and can SSH into the NAS using ssh user@nas-ip-or-address without entering a password, then run:

ansible-playbook main.yml
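
If you want to sanity-check connectivity first, Ansible's ping module works well (this assumes your inventory is already set up for this repo; the --limit host name is just an example):

# Confirm Ansible can reach every host in the inventory
ansible all -m ping

# Optionally limit the playbook run to a single NAS
ansible-playbook main.yml --limit nas01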

Accessing Samba Shares

After the playbook runs, you should be able to access Samba shares, for example the hddpool/jupiter share, by connecting to the server at the path:

smb://nas01.mmoffice.net/hddpool_jupiter
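
On a Linux client, you can also mount the share directly with cifs-utils (a sketch; the mount point and username here are examples, so adjust them to your setup):

sudo mkdir -p /mnt/jupiter
sudo mount -t cifs //nas01.mmoffice.net/hddpool_jupiter /mnt/jupiter -o username=jgeerling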

Until issue #2 is resolved, there is one manual step required to add a password for the jgeerling user (one time). Log into the server via SSH, run the following command, and enter a password when prompted:

sudo smbpasswd -a jgeerling

The same thing goes for the Pi, if you want to access its ZFS volume.

Replication / Backups

Backups of the primary NAS (nas01) to the secondary NAS (nas02) are handled using Sanoid (and its included syncoid replication tool).

Sanoid is configured on nas01 to store a set of monthly, daily, and hourly snapshots. Syncoid runs via cron on nas02 to pull snapshots nightly.

Sanoid should prune snapshots on nas01, and Syncoid on nas02.
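As a rough sketch of how the two halves fit together (the actual policy lives in this repo's templates, so treat the values below as illustrative): on nas01, /etc/sanoid/sanoid.conf defines the snapshot schedule, and on nas02 a root cron job pulls with syncoid.

# Illustrative /etc/sanoid/sanoid.conf on nas01 (retention counts are examples):
[hddpool/jupiter]
        use_template = production

[template_production]
        hourly = 24
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes

# Illustrative nightly pull on nas02 (e.g. from root's crontab):
syncoid root@nas01:hddpool/jupiter hddpool/jupiter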

You can check on snapshot health with:

  • nas01: sudo sanoid --monitor-snapshots && zfs list -t snapshot
  • nas02: zfs list -t snapshot

For example:

jgeerling@nas01:~$ sudo sanoid --monitor-snapshots
OK: all monitored datasets (hddpool/jupiter) have fresh snapshots

Offsite Backups to Amazon Glacier

TODO: See https://github.com/geerlingguy/arm-nas/issues/14

Benchmarks

I like to verify the performance of my NAS storage pools on the device itself, using my disk-benchmark.sh script.

You can run it by copying it to the server, making it executable, and running it with sudo:

wget https://raw.githubusercontent.com/geerlingguy/pi-cluster/master/benchmarks/disk-benchmark.sh
chmod +x disk-benchmark.sh
sudo MOUNT_PATH=/nvmepool/mercury TEST_SIZE=20g ./disk-benchmark.sh

Troubleshooting

Samba Monitoring

If you're having trouble mounting a share or authenticating with Samba, run sudo watch smbstatus to monitor connections to the server. Logs inside /var/log/samba aren't useful by default.
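If you need actual logs, you can raise Samba's verbosity temporarily (standard smb.conf setting; remember to lower it again afterwards, since level 3 logs are chatty, and note the log path shown is the Debian/Ubuntu default):

# In /etc/samba/smb.conf, under [global]:
log level = 3

# Then restart Samba and tail the logs:
sudo systemctl restart smbd
sudo tail -f /var/log/samba/log.smbd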

ZFS Command Line Cheat Sheet

# Check pool health (should return 'all pools are healthy')
zpool status -x

# List all zfs pools and datasets
zfs list

# List all zfs pool info
zpool list

# List single zfs pool info (verbose)
zpool status -v [pool_name]

# List all properties for a pool
zfs get all [pool_name]

# Scrub a pool manually (check progress with `zpool status -v`)
zpool scrub [pool_name]

# Monitor zfs I/O statistics (update every 2s)
zpool iostat 2
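
A couple of snapshot commands are also handy alongside the Sanoid setup above (standard OpenZFS syntax):

# Create a snapshot manually
zfs snapshot [dataset]@[snapshot_name]

# Roll a dataset back to a snapshot (discards changes made since then)
zfs rollback [dataset]@[snapshot_name]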

License

GPLv3 or later

Author

Jeff Geerling