Complete Guide to TrueNAS CORE Virtualisation Under Proxmox
My Hardware
I made a decision to go with consumer hardware for this build. Treat these components as a guide only, as my choices may not reflect your requirements.
As always, go with enterprise/server grade hardware if your budget allows.
RAM
I literally spent months researching this issue.
ECC RAM is the go-to choice for TrueNAS builds. However, I decided to go with non-ECC memory; the reason will be revealed in the conclusion. I have two other NASes with ECC memory that are now backup NASes, so I should be protected on that front.
In other words, this new NAS build is an experiment into whether ECC RAM is really important or not.
Always test the RAM to make sure it is working as expected. I'm doing this with memtest. You do not want a stuck bit in your memory ruining all your data, and yes, it can happen.
The minimum memory requirement for TrueNAS is 8 GB, so you need at least 16 GB for your server because of the Proxmox overhead.
ZFS needs heaps of RAM, so I’m going with 2x32GB DDR4 3200MHz with the option of moving to 4x32GB in the future. Ideally go with RAM from the Qualified Vendor List of your motherboard. Unfortunately I couldn’t find any compatible RAM module so I picked a random one.
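As a rough sanity check on sizing, a common ZFS rule of thumb (a guideline, not a hard requirement) is the 8 GB TrueNAS baseline plus roughly 1 GB of RAM per TB of raw pool capacity, plus headroom for the Proxmox host itself. A quick sketch with hypothetical numbers:

```shell
# Rough RAM sizing using the common ZFS rule of thumb:
# 8 GB TrueNAS baseline + ~1 GB per TB of raw pool capacity,
# plus headroom for the Proxmox host. All numbers are hypothetical.
POOL_TB=48                          # e.g. 8 drives x 6 TB raw
HOST_GB=4                           # Proxmox host and other guests
TRUENAS_GB=$((8 + POOL_TB))         # RAM to give the TrueNAS VM
TOTAL_GB=$((TRUENAS_GB + HOST_GB))  # total server RAM to aim for
echo "TrueNAS VM: ${TRUENAS_GB} GB, server total: ${TOTAL_GB} GB"
# → TrueNAS VM: 56 GB, server total: 60 GB
```

By that estimate my 64 GB starting point is comfortable, with the 4x32GB option in reserve.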
CPU: Ryzen
As the picture above suggests, I'm running a Ryzen 5600G. This CPU has a built-in GPU and, to me, feels faster than a Xeon.
Choosing this CPU means I do not need to install a dedicated GPU card and waste a PCI-e slot.
Motherboard
You will need a motherboard with IOMMU and ACS support in the UEFI/BIOS. I am using Asrock B550M Phantom 4. This motherboard has IOMMU but no ACS support.
Missing the latter feature means I lose flexibility. For example, I can pass through the first PCIe 16x slot, but not the second PCIe 8x slot, because the latter's IOMMU group has too many critical devices tied to it.
IOMMU is only needed if you want to pass through your HBA controller into the virtualised TrueNAS. Whilst optional, these two features are really good to have, as they mean you will not be limited by hardware restrictions.
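To see how your board splits devices into IOMMU groups, a small script like the following (run on the Proxmox host, with IOMMU enabled in the BIOS and kernel) is a common approach. Everything that shares a group must be passed through together:

```
#!/bin/sh
# Print each IOMMU group and the PCI devices inside it.
# Devices sharing a group can only be passed through as a set.
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        echo "    $(lspci -nns "${dev##*/}")"
    done
done
```

If your HBA or NIC shares a group with chipset devices, that slot is effectively off-limits for passthrough, which is exactly the limitation I hit on this board.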
Deciding on a suitable AMD motherboard is a minefield. Ideally you’ll want to use a motherboard that works with a 5000 series CPU without a firmware upgrade. Second you need to find a motherboard/CPU combination that works with ECC if you choose to move to that in the future.
After months of research I gave up and just went with random parts. Comment below if you find a good CPU/motherboard that supports ECC.
Host Storage
I'm going with an NVMe drive and an SSD: the NVMe drive hosts the Proxmox operating system and some virtual disks, while the SSD provides additional storage for VMs and containers.
HBA Controller
This is optional, because you can pass through the hard disks from your Proxmox VE host into the TrueNAS CORE virtual machine (VM). Having a controller means I can simply pass the whole card into the VM and forget about creating virtual raw disks for the VM.
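For the no-HBA route, Proxmox can attach a whole physical disk to a VM by its stable ID using `qm set`. A sketch, assuming VM ID 100; the disk ID below is a placeholder, substitute your own:

```
# Find the stable identifier of the disk you want to hand to TrueNAS
ls -l /dev/disk/by-id/

# Attach it to VM 100 as a SCSI device (disk ID here is a placeholder)
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL
```

Note that with this method TrueNAS sees a virtual disk, so SMART data and serial numbers are not passed through the way they are with a real HBA.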
It is also good to have if I ever decide to move away from Proxmox. Because TrueNAS accesses the disks directly through the controller, I will have fewer issues if I want to move to another OS.
Choose this card according to the number of storage drives you want and its IOMMU group. For example, choose a 16i (16 internal) card if you can only assign one passthrough card and you want up to 16 bays. In my case I chose an LSI 8i card, as I can only pass through one PCIe card and my case can only accommodate 8 bays.
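Once the HBA sits in its own IOMMU group, handing it to the VM is a single `qm set` call. A sketch; the PCI address `01:00.0` and VM ID 100 are placeholders from my layout, so check yours with `lspci` first:

```
# Identify the HBA's PCI address on the Proxmox host
lspci | grep -i LSI

# Pass the whole card into VM 100 (assuming it sits at 01:00.0)
qm set 100 -hostpci0 01:00.0
```

After this, TrueNAS owns the controller and every disk behind it, exactly as if it were running on bare metal.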
If you are getting a controller online make sure you get one that looks like this:
Key identification markers are:
- Perforated PCI bracket
- Black heat sink
- IT firmware (P20 and above). You need to flash the card's firmware to IT mode (instead of IR/RAID mode).
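For a typical SAS2008-based card, the flash is usually done with LSI's `sas2flash` utility from an EFI shell or DOS boot disk. A rough sketch only; firmware file names vary per card model, so treat these as placeholders and follow a guide specific to your card:

```
# Erase the existing IR/RAID firmware -- only on the intended card!
sas2flash -o -e 6

# Flash the IT-mode firmware (file name is a placeholder for your card's image)
sas2flash -o -f 2118it.bin

# Verify the card now reports IT firmware
sas2flash -listall
```

Skipping the optional boot ROM during the flash also shaves a few seconds off POST, since the card no longer presents its own boot menu.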
Network Controller
This is optional. Most consumer motherboards only have one Ethernet port (like mine), so a second network card may be necessary. Alternatively, you can share all your traffic on the onboard NIC.
TrueNAS is known to be picky with NICs. Most consumer motherboards use a Realtek NIC, which can be problematic in TrueNAS. Proxmox, being Debian based, is more forgiving, and it is possible to present the Realtek NIC to your TrueNAS VM as a virtual network device, thereby abstracting a lot of issues away from TrueNAS. That being said, Realtek NICs do have performance issues, and it's best to look for a motherboard with an Intel chipset NIC for speed and compatibility reasons.
With this optional network card, ideally you'll want to pass it through directly into your VM as well. However, I am constrained by my IOMMU group limitations and will only be exposing it through an Open vSwitch bridge.
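On Proxmox, an Open vSwitch bridge is declared in `/etc/network/interfaces` after installing the `openvswitch-switch` package. A sketch, assuming the 10 Gb port shows up as `enp5s0`; your interface name will differ:

```
auto enp5s0
iface enp5s0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1

auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports enp5s0
```

The TrueNAS VM then attaches a virtio NIC to `vmbr1` from the VM's Hardware tab, or with something like `qm set <vmid> -net1 virtio,bridge=vmbr1`.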
I'm going with a Chelsio 2x 10 Gbps SFP+ network card.
HDD cage
Again, this is good to have, but not entirely necessary. If going with an HDD cage, first check that your onboard SATA or HBA controller supports hot plug (i.e. you can remove/add HDDs while the computer is still running). There is no point going with HDD cages if your host controller does not support hot plug.
I am using a SilverStone RM21-308 case with 8 bays.
Last but not least: if you are going ahead with a Ryzen 5000 series setup, heed this advice. Buy your 5000 series CPU and motherboard from the same source and have them upgrade the motherboard BIOS before you pick them up. Older motherboards require an older Ryzen CPU in order to flash a newer BIOS, and you'll be stuck in a huge pickle if you don't have a spare old Ryzen CPU lying around for that BIOS upgrade. Make this your retailer's problem, not yours. You can thank me later.
About The Author
3 comments
Comment from: Diogene Visitor
I was writing a similar guide for the teens that come to the IT workshop in our charity for children association and I found your guide. You did fantastic work. This is very helpful. 🙏🙏🙏🙏
Comment from: Diogene Visitor
Be careful in part 10:

    gpart add -a 4k -b 128 -t freebsd-zfs da4

must be

    gpart add -a 4k -t freebsd-zfs da4

If you re-specify the -b option with the same number, you will overwrite the first partition.
Comment from: ECC guy Visitor
Ryzen 5650 Pro CPU, since it has ECC support and is an APU, so no IPMI needed. ASRock Riptide X570 mobo, since it supports ECC, has lots of PCIe slots, and has excellent IOMMU grouping (and costs $120 new atm). NEMIX ECC RAM DDR4 3200 4x32GB, ~$350. One stick was bad and it only cost me time, as the warranty was fully honored. Note that memtest x86 actually didn't detect it unless I tested one stick at a time, though journalctl reported the bad stick accurately post mortem.