
TrueNAS SCALE: Splitting the Boot Disk During Installation

December 2, 2022 · 5 min · Michael Bäcker (aka BakermanLP)

A few tips for installing TrueNAS SCALE

Tags: truenas, tips


If the boot disk is far too large, you can split it into several partitions during the installation and use the remaining space as a data pool: https://www.reddit.com/r/truenas/comments/lgf75w/scalehowto_split_ssd_during_installation/

Below is a copy of the post linked above.


I had a SCALE installation on two 500 GB SSDs, which is quite a waste given that you can't share anything on the boot-pool. With a bit of digging around I figured out how to partition the install drives and put a second storage pool on the SSDs.

First, a bunch of hints and safety disclaimers:

  • You follow this at your own risk. I have no clue what the current state of SCALE is with respect to replacing failed boot drives etc., and no idea whether that will keep working with this setup in the future.
  • Neither SCALE nor ZFS cares what is already on your disks; if you want to safe-keep a running install, remove that disk completely beforehand.
  • Don't ask me how to go from a single-disk install to a boot-pool mirror with GRUB installed and working on both disks. I tried for a while, then backed up all settings and installed directly onto both SSDs instead.
  • Here's a rescue image with zfs included for the probable case something goes to shit: https://github.com/nchevsky/systemrescue-zfs/tags

The idea here is simple: I want to split my SSDs into a 64 GiB mirrored boot pool and a ~400 GB mirrored storage pool.

  1. Create a bootable USB stick from the latest SCALE ISO (e.g. with dd; see the sketch right after this list).
  2. Boot from this USB stick and select the TrueNAS installer in the first (GRUB) screen. This will take a bit of time as the underlying Debian is loaded into RAM.
  3. When the installer GUI shows up, choose [ ] Shell out of the four options.
  4. We're going to adjust the installer script:
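
For step 1, writing the ISO to the USB stick with dd might look like the following. This is only a sketch: the ISO filename and the target device /dev/sdX are placeholders, so double-check the device name before running it.

# WARNING: dd overwrites the target device completely - make sure /dev/sdX really is the USB stick
dd if=TrueNAS-SCALE.iso of=/dev/sdX bs=4M status=progress conv=fsync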

If you want to take a look at it beforehand, it's in this repo under "/usr/sbin/truenas-install": https://github.com/truenas/truenas-installer

# to get working arrow keys and command recall type bash to start a bash console:
bash
# find the installer script, this should yield 3 hits
find / -name truenas-install
# /usr/sbin/truenas-install is the one we're after
# feel the pain as vi seems to be the only available editor
vi /usr/sbin/truenas-install

We are interested in the create_partitions function, specifically in the sgdisk call that creates the boot-pool partition:

line ~3xx:    create_partitions()
...
# Create boot pool
if ! sgdisk -n3:0:0 -t3:BF01 /dev/${_disk}; then
    return 1
fi

Move the cursor over the second 0 in -n3:0:0 and press 'x' to delete it. Then press 'i' to enter insert mode and type '+64GiB' (or whatever size you want the boot pool to be). Press Esc and type ':wq' to save the changes:

# Create boot pool
if ! sgdisk -n3:0:+64GiB -t3:BF01 /dev/${_disk}; then
    return 1
fi
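
If you'd rather avoid vi, a sed one-liner could apply the same change. This is just a sketch and assumes the sgdisk call in the installer script still looks exactly like the snippet above:

# cap the boot-pool partition at 64 GiB instead of using the whole disk
sed -i 's|-n3:0:0 -t3:BF01|-n3:0:+64GiB -t3:BF01|' /usr/sbin/truenas-install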

You should be out of vi now with the install script updated. Let's run it and install TrueNAS SCALE:

/usr/sbin/truenas-install

The 'GUI' installer should start again. Select '[ ] Install/Upgrade' this time. When prompted to select the drive(s) to install TrueNAS SCALE to, select your desired SSD(s); they were sda and sdb in my case. Set a password or don't (I didn't, because I'm not on a US keyboard layout and the special characters in my passwords always come out wrong when trying to get in later). I also didn't select any swap. Wait for the install to finish and reboot.

  5. Create the storage pool on the remaining space:

Once booted, connect to the web interface and set a password. Enable SSH or connect to the shell under System -> Settings. That shell kept double-typing every key press for me, so I went with SSH.

figure out which disks are in the boot-pool:

zpool status boot-pool
# and
fdisk -l

should tell you which disks they are. They'll have 3 or 4 partitions, compared to the 2 partitions on disks that are only in storage pools. In my case they were /dev/sda and /dev/sdb.

Next we create the partitions on the remaining space of the disks. The new partition is going to be number 4 if you don't have a swap partition set up, or number 5 if you do:

# no swap
sgdisk -n4:0:0 -t4:BF01 /dev/sda
sgdisk -n4:0:0 -t4:BF01 /dev/sdb
# swap
sgdisk -n5:0:0 -t5:BF01 /dev/sda
sgdisk -n5:0:0 -t5:BF01 /dev/sdb

Update the kernel's partition table with the new partitions:

partprobe

and figure out their partition UUIDs:

fdisk -lx /dev/sdX
fdisk -lx /dev/sdY
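
Alternatively, lsblk can show the same partition UUIDs in a more compact form (using the /dev/sda and /dev/sdb example from above; adjust to your disks):

# list partitions with their sizes and PARTUUIDs
lsblk -o NAME,SIZE,TYPE,PARTUUID /dev/sda /dev/sdb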

finally we create the new storage pool called ssd-storage (name it whatever you want):

zpool create -f ssd-storage mirror /dev/disk/by-partuuid/[uuid_from fdisk -lx disk1] /dev/disk/by-partuuid/[uuid_from fdisk -lx disk2]
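
For illustration, with the placeholders filled in the command could look like this. The two UUIDs below are made up; use the PARTUUIDs that fdisk/lsblk reported for the newly created partitions on your disks:

# hypothetical example - replace both PARTUUIDs with your own
zpool create -f ssd-storage mirror \
    /dev/disk/by-partuuid/1a2b3c4d-1111-2222-3333-444455556666 \
    /dev/disk/by-partuuid/7f8e9d0c-7777-8888-9999-aaaabbbbcccc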

export the newly created pool:

zpool export ssd-storage

Then go back to the web interface and import the new ssd-storage pool in the Storage tab.

If something goes horribly wrong, boot up the rescue image and destroy all zpools on the affected boot disks, then open up GParted and delete all partitions on those disks. Note that if you reboot between creating the storage partitions and creating the zpool, the server might not boot, because ghostly remains of an old boot-pool can linger in the newly created partitions. In that case, boot the rescue disk and create the storage pool from there; its ZFS version is (currently) compatible with SCALE's.
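
As a rough sketch of that cleanup from the rescue image (the pool name, device names and partition numbers are assumptions and depend on your layout):

# import and destroy a leftover pool (adjust the pool name)
zpool import -f ssd-storage
zpool destroy ssd-storage
# clear stale ZFS labels from the data partitions
zpool labelclear -f /dev/sda4
zpool labelclear -f /dev/sdb4
# or wipe the whole partition table for a completely clean disk (destroys everything on it!)
sgdisk --zap-all /dev/sda
sgdisk --zap-all /dev/sdb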

Have fun and don't blame me if something goes sideways :P

cheers