So in the attempt to upgrade my storage/file server with some new hardware, I unintentionally (albeit intentionally, back when I ordered the parts months earlier) broke the CompactFlash-to-PATA adapter I was using as a boot drive for the operating system (FreeNAS). In the long stretch since I last updated FreeNAS, the project has moved from FreeBSD 7 to FreeBSD 9 and taken on the new name NAS4Free. Since I obviously needed to reinstall the server’s operating system on a new device anyway, and FreeBSD 9 drastically updated the ZFS feature set in this NAS distribution, it only made sense to drink the Kool-Aid and upgrade to NAS4Free. Unfortunately, in doing so, I have had to lean on a lot more of FreeBSD’s native features to make the system workable to my own standards (despite NAS4Free claiming support for these things).

The install was simple enough, as it has been in the past with FreeNAS. I performed an embedded (plus data partition, no swap) install from a USB external DVD drive1 to a 4GB USB thumb drive plugged into the back of the tower. I had significant issues trying to set up and mount the data partition, despite all the (lack of) instructions I found for it in FAQs and forums. Apparently, assigning the drive in the NAS4Free WebGUI is not sufficient on its own to get the UFS ID assigned to it2; I had to reboot the system before it could properly assign a mount through the WebGUI as well. Don’t mind the fact that I accidentally tried to format the drive and toasted the OS on the thumb drive at least once while troubleshooting this. In the process, I committed the default partition settings to a hard copy, so in the likely event this isn’t documented anywhere else, here are the settings to assign the data partition in the WebGUI:

For a default embedded + data installation with no swap partition (swap, if present, would be numbered after the data partition), the data partition will be on drive daX (da if it is a USB device, ada if it is a PATA/SATA drive; X will be whatever number comes up on your system; mine happens to be da8), partition 2, formatted as UFS, in the MBR partition format. You should be able to assign the drive just fine under the Disk Management menu (this is where you’ll get your drive address, e.g. da0). Initiate a reboot. Then, with the information above (drive assignment, partition number, file system format, and partition format), you should be able to get a mount properly established under the Mount menu.
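
If you want to double-check what the installer actually laid down before fighting with the WebGUI, the layout is visible from the shell too. A minimal sanity check, assuming the thumb drive shows up as da8 like mine (substitute whatever device your system reports):

gpart show da8               # should list an MBR scheme with the data partition as slice 2 (da8s2)
glabel status | grep da8s2   # the ufsid/... entry is the UFS ID NAS4Free is trying to find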

With the operating system and data partition finally sorted out on the thumb drive, it was time to get all of my ZFS arrays up and running, migrate to the new drives (after I established a new array), and reformat the old arrays3. Setting up the zpools in the WebGUI was frustrating me with permissions and datasets, so I said screw it and went back to the command line. Only problem was, I had no idea how to set up 4K sector geometries for ZFS. Reading up a little more over here about ZFS and 4K drives, it appears that manipulating gnop to your advantage is the best way to do this. This is also how NAS4Free achieves it through the WebGUI, but there it appears to persist and reestablish the gnop devices at every boot (judging by the boot logs), which should be unnecessary. To get this working on my system, I basically did the following:

gnop create -S 4096 /dev/da0
gnop create -S 4096 /dev/da1
gnop create -S 4096 /dev/da2
gnop create -S 4096 /dev/da3
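# creating the pool on the .nop providers makes ZFS pick ashift=12 (4K sectors) for the vdev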
zpool create fargo raidz1 /dev/da0.nop /dev/da1.nop /dev/da2.nop /dev/da3.nop

This sets up the 4K test geometries on the drives, and then creates the zpool (in this case a RAIDZ 1-parity setup) “fargo”, which makes the resulting array think the drives have 4K physical sectors. After this has been accomplished, you can export the zpool, remove the test gnop devices, and reimport the pool, thereby preserving the geometry without needing to keep the gnop devices around.

zpool export fargo
gnop destroy /dev/da0.nop /dev/da1.nop /dev/da2.nop /dev/da3.nop
zpool import fargo
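
To confirm the reimport picked up the bare da devices (rather than the now-destroyed .nop providers), a quick status check does the trick:

zpool status fargo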

To check that everything went as expected, you should be able to execute zdb -C yourpoolhere | grep ashift to verify the ashift value (9 for the standard 512-byte (2^9) sector, 12 for the advanced format 4096-byte (2^12) sector).
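
If the gnop trick worked, that grep should turn up something along these lines (a value of 9 means ZFS fell back to 512-byte sectors and the pool needs to be rebuilt to fix it):

zdb -C fargo | grep ashift
                ashift: 12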

Okay, I’ve got all my arrays back in shape and all my data remigrated accordingly onto the new arrays (plus a pair of unused hard drives just sitting in a striped ZFS array until I get another pair to put them into the last 4-drive RAIDZ1). One of the other hardware upgrades was a pair of PCIe gigabit Intel NICs so I could finally get link aggregation set up to enhance transfers to/from the Mac Pro (which also supports link aggregation). Yay! I might finally be able to saturate my array writes! So boom, I go into the WebGUI, set up lagg0 as an LACP aggregation of the two NICs, reboot, reassign my primary and secondary network interfaces, reboot, AND!

And…

And……and………

The WebGUI never came back up. I trucked over to the server, hooked up a VGA cable to one of my Mac Pro displays, and apparently NAS4Free couldn’t get an IP. At all. I had already configured my switch properly, because the Mac Pro was already running in LACP just fine, so I was clueless as to what was going on. Dropping back into the shell and checking ifconfig, blah blah blah, I got this:

lagg0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether 00:00:00:00:00:00
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: no carrier
        laggproto failover

Wait, what? failover?!? That wasn’t right. I went back and checked the WebGUI after I reassigned the primary back to the onboard Realtek NIC, and lo and behold, it still stated the interface was “LACP.” Destroying the interface, rebooting, and recreating it as either failover or LACP gave me the exact same result via ifconfig: laggproto failover. *facepalm* I wanted to beat my head into a wall at this point. Instead, off I went a-Googling.

A few minutes later, I stumbled across a forum post from the FreeNAS 8 forums detailing (a) how the WebGUI is (mostly) useless for this, and (b) how to properly establish the LACP link via the command line. Yay! I finally had a working LACP connection! FUCK! How do I maintain this on boot?!? I knew I needed to fire something into rc.conf to do this (as with every other boot variable I establish), but I was utterly clueless about the NIC variables (among many others…blasted FreeBSD!). A little more Googling got me to this nice, concise post on how to bring up a LACP interface at boot in FreeBSD, which coincidentally also would have told me how to set it up on the command line, had I not just figured that out. Dumping all these variables into the rc.conf WebGUI interface (or directly into /conf/base/etc/rc.conf if you really want) finally got me an established LACP interface on boot:

ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1"
ipv4_addrs_lagg0="172.16.23.124/24"
defaultrouter="172.16.23.100"

Obviously, you’ll need to tweak em0 and em1 to whatever your NIC interfaces are, and the IPs to whatever works on your network, but this will get you an established LACP lagg interface on boot, which you can then finally set as your primary interface.
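
For reference, the one-off version of this from the shell (handy for testing before committing anything to rc.conf) looks roughly like the following; again, em0/em1 and the address are just my setup:

ifconfig em0 up
ifconfig em1 up
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport em0 laggport em1
ifconfig lagg0 inet 172.16.23.124/24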

Four hours into the evening, I finally had everything configured. Now, I swear, I just need to learn how to run netatalk and Samba from the command line, and I might as well ditch NAS4Free and go full-blown FreeBSD 9. *sigh*


1The external DVD drive became necessary when I realized how difficult it would be to reattach a CD/DVD drive to the server’s internal ports with nowhere to mount it. All twelve 5.25″ bays are occupied by 3x 3.5″ HDD drive cages that are custom designed for Antec’s über cases. That, and it’s one less port/bay I have to occupy in what is ultimately a primarily-storage server.
2I had error after error saying “incorrect partition” and “Cannot find UFS ID”, despite plugging in all the correct information after assigning the drive in the Disk Management menu.
3I was previously running two 3-drive RAIDZ1 arrays, which is terrible for optimizing the space in my system (66% storage efficiency, since one drive in every three goes to parity). It worked beautifully when I was running only six drives off of the onboard SATA controller, but once I added an LSI SATA controller with 2x 4-port breakouts, it became immediately clear I should be running 4-drive RAIDZ1 setups instead (75% storage efficiency), so I can run a pair of 4-drive RAIDZ1s on the controller and ultimately finish with one more 4-drive RAIDZ1 through the onboard controller.
