Project: Building a cheap NAS, part 6

Filed Under (Storage, VMWare ESXi 4) by Just An Admin on 04-10-2011

It has been a while. Due to ‘other activities’ we had to put our FreeNAS server project on hold. But we are back, and we have found a nice and cool spot for our server in the server room. A good thing, because the noise is killing us.

SASUC8I in ‘IT’ mode

When looking for a good controller for our setup, we came across several posts advising the SASUC8I controller in ‘IT’ mode, aka Initiator/Target mode, especially when using the ZFS file system. IT mode is generally described as a non-RAID mode of operation for LSI 1068e-based controllers: software gets direct access to all the connected disks, instead of to a logical RAID set defined on the controller. Although I have seen mostly good responses to the IT-mode operation of this card, there have been a few reports of I/O failures when SMART values are queried rapidly (ref: here). Overall this controller is a very good choice for a broad range of operating systems, including FreeNAS.

The best guide available on how to flash your controller can be found here:

As we are going to use ZFS as our file system and want to benefit from the direct disk access that IT mode offers, we flashed the controller with the IT-mode firmware. We borrowed a PCI-E slot from one of our workstations and flashed the new firmware onto the controller.

Installation of FreeNAS

Before placing the storage unit in its rack, we installed FreeNAS on the USB stick. The process is really easy. We downloaded the most recent production version from the FreeNAS website. At the time of writing this is version 8.0.1-RELEASE:



We downloaded the amd64 version, as we want the most recent and powerful build with support for the most modern technologies. After burning the ISO file to a CD, we booted the CD on a notebook that had the USB stick connected. There are a lot of good manuals and guides out there that describe the installation of FreeNAS, so we will not go into that here. The FreeNAS online documentation offers a good starting point:
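Incidentally, before burning an image it is worth verifying the download against the checksum published on the download page. A minimal sketch, assuming a SHA-256 checksum is published (the filename and hash below are placeholders, not the real values):

```shell
# Verify the downloaded ISO against the published checksum.
# NOTE: filename and expected hash are placeholders - take the real
# values from the FreeNAS download page. On FreeBSD, use `sha256 -q`
# instead of the coreutils `sha256sum` shown here.
expected="<hash from the download page>"
actual=$(sha256sum FreeNAS-8.0.1-RELEASE-amd64.iso | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH - do not use this image" >&2
fi
```

A mismatch usually means a corrupted download; fetching the image again is cheaper than debugging a flaky installer.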

The installation itself was a breeze and very straightforward. We designated the USB drive as the boot disk, and after a few minutes of copying we were ready to go.

The First Boot

After the first boot we can see that the network interface on our internal network has acquired an IP address via DHCP. Visiting this IP address in a browser lets us log on to the management console.

Once logged on, our first goal was to create a disk set using the ZFS file system, utilizing all our WD Green disks and the 60 GB SSD as cache. Because we are using WD Green disks, we must select the option “Force 4096 bytes sector size”: these are 4K Advanced Format drives that report 512-byte sectors, and forcing the 4096-byte size keeps the partitions properly aligned.

For added redundancy we will be using RAID-Z2, as we are using 10 new disks, all from the same production date. I know opinions differ on whether disks from the same batch really tend to fail within a short time of each other, but I’m not one to gamble on it. We will assign the SSD (ada2 in my machine) as the cache disk.

All of the 2 TB WD disks will be partitioned and added to the same ZFS volume. FreeNAS automatically gives each disk a 2 GB swap partition and a roughly 1.8 TB data partition.
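As a quick sanity check of what that layout should yield: RAID-Z2 spends two disks’ worth of space on parity, so with 10 disks and ~1.8 TB data partitions the expected usable space, before ZFS metadata overhead, works out to:

```shell
# RAID-Z2 usable capacity: (number of disks - 2 parity) * data partition size.
# 10 disks and 1.8 TB per-disk data partitions are the values from this build.
awk 'BEGIN { printf "%.1f TB usable\n", (10 - 2) * 1.8 }'
# prints: 14.4 TB usable
```

The volume reported by FreeNAS will come in somewhat below this figure once ZFS reserves its own metadata space.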

A first test of our ZFS Volume

After enabling SSH access through the management console, we were able to perform some tests on the machine itself.

To end this post with at least a first impression of the performance, we tested the ZFS volume using a simple command: dd. dd copies raw data, in this case from /dev/zero to a file on the volume (write test) and from that file to /dev/null (read test), which gives a good general impression of the speed. Please note that this is only a rough indication.

dd if=/dev/zero of=/mnt/array/file.img bs=10M count=1000
1000+0 records in
1000+0 records out
10485760000 bytes transferred in 23.391456 secs (448273076 bytes/sec)

dd if=/mnt/array/file.img of=/dev/null bs=10M count=1000
1000+0 records in
1000+0 records out
10485760000 bytes transferred in 16.583663 secs (632294567 bytes/sec)

dd if=/dev/zero of=/mnt/array/file.img bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 0.308952 secs (3393976035 bytes/sec)

dd if=/mnt/array/file.img of=/dev/null bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 0.182142 secs (5756914325 bytes/sec)

Now that is a good first test 🙂 The bs=1M results are ridiculously fast, but that is almost certainly the ZFS cache ‘speaking’: a 1 GB test file is small enough to be served largely from RAM.
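dd reports raw bytes per second, which is awkward to read at a glance. A small awk one-liner converts a dd transfer line to MB/s; here it is fed the first write result from above:

```shell
# Convert a dd transfer line to MB/s (1 MB = 1048576 bytes).
# In dd's output format, $1 = bytes transferred and $5 = elapsed seconds.
echo "10485760000 bytes transferred in 23.391456 secs (448273076 bytes/sec)" |
awk '{ printf "%.1f MB/s\n", $1 / $5 / 1048576 }'
# prints: 427.5 MB/s
```

So the first run works out to roughly 427 MB/s sequential write and, by the same arithmetic, the read test to roughly 600 MB/s.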

In our next post we will do some more extensive tests using IOmeter. If there is any test you would like to see performed, please let us know. I cannot guarantee that we will be able to run them all, but I will at least try.
