As a MacBook Pro user, I, like many others, have long suffered from a lack of internal storage. In particular, the rMBP I use daily has a mere 256 GB of SSD space, which ran short very quickly.
Ever since I started recording videos while flying, things have become even worse. With 50+ GB of footage after each flight, my poor 256 GB SSD quickly ran out, forcing me to purchase a 1 TB external hard drive. However, after a year the external drive also started having trouble keeping up with the amount of data I generate, not to mention that the lack of redundancy and backup made it unsuitable for anything I actually wanted to keep.
So, at some point I decided to build a high-capacity NAS for myself, hoping it would last at least a couple of years before needing another upgrade.
I wrote this post primarily as a note on what I did in case I need to do it again. Hopefully it will also be helpful if you are considering doing something similar.
Alright, we know what we want at this point, but what should we get?
First, I checked commercial solutions like Synology, supposedly the best consumer-grade NAS systems on the market. However, they come with a price: the cheapest 4-bay system will easily cost you $300+ without a single hard drive included. Not to mention the unimpressive hardware specs, which made me question its real-world performance.
That is the point when I thought: why not build a NAS server yourself?
Finding the server
The first step of building a NAS server is finding the right hardware. For this build, a used server should work perfectly fine, as we do not need much processing power for a storage solution. But we do want a server with lots of RAM, some SATA connectors, and good NICs. Since I would be running the server in a living environment, noise level was also a concern.
First I dug around on eBay. I found lots of used Dell PowerEdge R410/R210 machines at less than $100, but having worked in a server room before, I knew those 1U units just make too much noise for me to use inside my home. In general, tower servers tend to be a little quieter, but unfortunately there were only a few listings on eBay, and they were either quite pricey or underpowered.
Next, I checked Craigslist and found a gentleman selling a used HP ProLiant N40L for only $75! I knew those servers usually cost $300 or more even used, so I emailed the seller to make sure it was still available. It was, so I quickly drove to San Mateo and picked it up. I was very happy when I first saw it: minimal wear, and aside from being slightly dusty, everything seemed perfectly fine.
Here are some photos I took when I first got the server:
Here is the spec of the server that I got:
CPU: AMD Turion(tm) II Neo N40L Dual-Core Processor (64-bit)
RAM: 8 GB non-ECC RAM (upgraded by previous owner)
Flash: 4 GB USB Drive
SATA Connectors: 4 + 1
NIC: 1 Gbps on-board NIC
Needless to say, despite being a few years old, this server still has better specs than most of the pre-built NAS systems I could find on the market, especially when it comes to RAM capacity. Later on I even [upgraded](#Hardware upgrade) the RAM to 16 GB of ECC memory for more buffer space and better data protection.
Getting the drives
Now that we have a perfectly capable system, we need some hard drives for our home-built NAS. Obviously, the $75 I paid covered only the server with no hard drives (and I wasn't expecting any).
After some quick research, I found that WD Red drives work best for NAS builds where the drives run constantly, 24x7. I picked up 4x 3 TB WD Red drives from Amazon. In general you can use whatever drives you want, but do make sure they are of similar capacity and speed to avoid weird RAID performance issues later.
Setting up the system
Many people use FreeNAS for their NAS builds, and there is absolutely nothing wrong with that. Although my server is perfectly capable of running FreeNAS, I opted for CentOS, as ZFS on Linux is "production-ready" and I am more familiar with Linux server management. Plus, I do not really need all the fancy UI and features FreeNAS provides; a simple RAIDZ array and AFP sharing are sufficient for me.
Installing CentOS on the USB drive is fairly straightforward. Just select the USB drive as the installation target, and the installation wizard will guide you from there.
Building the RAID
After successfully installing CentOS, I installed ZFS on Linux following the steps from here.
Once that has been completed, I loaded the ZFS Kernel module:
$ sudo modprobe zfs
and created the RAIDZ1 array with the following commands:
$ sudo zpool create data raidz1 \
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T0609145 \
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T0609146 \
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T0609147 \
    ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T0609148
$ sudo zpool add data log ata-SanDisk_Ultra_II_240GB_174204A06001-part5
$ sudo zpool add data cache ata-SanDisk_Ultra_II_240GB_174204A06001-part6
Notice that I used the hard drive IDs instead of their mapped names (sdX) to reduce the chance of a failed mount after boot due to the disks' lettering changing.
I also added a ZIL and an L2ARC cache running on a separate SSD, partitioned as 5 GB for the ZIL and the rest for the L2ARC.
As for RAIDZ1, it can tolerate up to one disk failing. Lots of people argue you should not use it because of the chance that a second disk fails during the RAID rebuild, potentially destroying all the data. I chose it primarily because I already back up all my important data to an offsite location regularly, so failure of the entire array would only affect the availability, not the durability, of my data. If you do not have a good backup solution, then something like RAIDZ2 or RAID10 is generally a much better idea.
You can confirm your pool has been created successfully by running:
$ sudo zpool status
$ sudo zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
data   510G  7.16T   140K  /mnt/data
By default, ZFS will mount your newly created pool directly under /, which is usually undesirable. You can change it by running:
zfs set mountpoint=/mnt/data data
From here, you can choose to create one or more datasets for storing data. I created two datasets, one for Time Machine backup and one for general file storage. I applied a quota limit of 512 GB for the Time Machine backup dataset to make sure it does not grow indefinitely.
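That layout can be sketched as follows (the dataset names here match the share paths that appear later in this post; the quota syntax is standard zfs(8)):

```shell
# One dataset for general file storage, one for Time Machine backups
sudo zfs create data/datong
sudo zfs create data/datong_time_machine_backups

# Cap the backup dataset at 512 GB so Time Machine cannot fill the pool
sudo zfs set quota=512G data/datong_time_machine_backups
```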
zfs set compression=on data
This turns on ZFS's compression support. Compression uses minimal CPU but can dramatically improve your I/O throughput, and is generally recommended.
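Since compression is applied as data is written, you can check how much it is actually saving at any time via the compressratio property:

```shell
sudo zfs get compression,compressratio data
```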
zfs set relatime=on data
This reduces the number of updates to atime, cutting the IOPS generated while accessing files.
By default, ZFS on Linux uses 50% of physical memory for the ARC. For my use case (a low total number of files), this can be safely increased to 90%, as we do not run any other applications on the NAS server.
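The zfs_arc_max value is specified in bytes. As a sanity check, 90% of the memory the kernel actually sees can be computed from /proc/meminfo (a quick sketch, assuming a Linux system; MemTotal is reported in kB):

```shell
awk '/MemTotal/ { printf "%.0f\n", $2 * 1024 * 0.9 }' /proc/meminfo
```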
$ cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=14378074112
and verified that it indeed took effect with arc_summary.py:
$ python arc_summary.py
...
ARC Size:                               100.05%   11.55   GiB
        Target Size: (Adaptive)         100.00%   11.54   GiB
        Min Size (Hard Limit):          0.27%     32.00   MiB
        Max Size (High Water):          369:1     11.54   GiB
...
Setting up recurring tasks
I set up a recurring scrub systemd timer using systemd-zpool-scrub to run once a week, and automatic snapshot cron tasks using zfs-auto-snapshot to create snapshots every 15 minutes, 1 hour, and 1 day.
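Under the hood, zfs-auto-snapshot schedules itself through plain cron entries; the 15-minute one looks roughly like this (a simplified sketch — the exact labels and retention counts are whatever the package ships with):

```shell
$ cat /etc/cron.d/zfs-auto-snapshot
# Snapshot every 15 minutes, keeping only the 4 most recent "frequent" snapshots
*/15 * * * * root zfs-auto-snapshot --quiet --syslog --label=frequent --keep=4 //
```

In my install, the hourly and daily runs are similar one-liners under /etc/cron.hourly and /etc/cron.daily.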
Setting up AFP sharing

Next, I set up Netatalk to share the datasets over AFP. Here is my afp.conf:

$ cat /etc/netatalk/afp.conf
;
; Netatalk 3.x configuration file
;

[Global]
; Global server settings
mimic model = TimeCapsule6,106

; [Homes]
; basedir regex = /home

; [My AFP Volume]
; path = /path/to/volume

; [My Time Machine Volume]
; path = /path/to/backup
; time machine = yes

[Datong's Files]
path = /mnt/data/datong
valid users = datong

[Datong's Time Machine Backups]
path = /mnt/data/datong_time_machine_backups
time machine = yes
valid users = datong
vol dbnest was a huge performance boost for me: by default, Netatalk writes the CNID database to the root file system, which was undesirable since my root file system runs on a relatively slow USB drive. Turning on vol dbnest stores the database at the volume root, which in this case is the ZFS pool with much better performance.
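For reference, vol dbnest is a single switch in the [Global] section of afp.conf (sketch):

```shell
[Global]
; store each volume's CNID database at the volume root
; instead of on the root file system
vol dbnest = yes
```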
Enabling ports on the firewall
$ sudo firewall-cmd --permanent --zone=public --add-service=mdns
$ sudo firewall-cmd --permanent --zone=public --add-port=afpovertcp/tcp
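Note that rules added with --permanent only change the saved configuration; to make them active in the running firewall, reload it:

```shell
sudo firewall-cmd --reload
```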
If everything has been configured correctly, you should see your machine showing up in Finder and Time Machine should work as well.
Monitoring disk health

It is a good idea to monitor the health of your disks so you can act before failures occur. The smartd daemon from smartmontools does exactly that:
$ sudo yum install smartmontools
$ sudo systemctl start smartd
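By default smartd watches the devices it can autodetect; if you want explicit control, /etc/smartd.conf takes one line per device (a sketch — the device name and self-test schedule here are illustrative):

```shell
$ cat /etc/smartd.conf
# Monitor all SMART attributes, enable offline data collection and
# attribute autosave, and run a short self-test daily between 2am and 3am
/dev/sda -a -o on -S on -s (S/../.././02)
```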
Monitoring the UPS

apcupsd monitors the charge of an APC UPS and shuts down the system when the charge gets critically low:
$ sudo yum install epel-release
$ sudo yum install apcupsd
$ sudo systemctl enable apcupsd
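apcupsd's behavior is driven by /etc/apcupsd/apcupsd.conf; for a USB-connected APC unit, the relevant knobs look like this (a sketch — the thresholds are illustrative):

```shell
$ cat /etc/apcupsd/apcupsd.conf
UPSCABLE usb
UPSTYPE usb
DEVICE
# Shut down once charge drops below 10% or estimated runtime falls under 5 minutes
BATTERYLEVEL 10
MINUTES 5
```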
Hardware upgrade

A week after setting up my system, I grew more concerned about the non-ECC memory installed in it (see Why I Chose Non-ECC RAM for my FreeNAS for a discussion of running ZFS with non-ECC RAM). Plus, more RAM for buffering is always a good thing with ZFS. So I purchased 2x Kingston DDR3 8 GB ECC RAM from Amazon at $80 each and replaced the desktop RAM installed by the previous owner. The system booted without any problem on the first try, and I could confirm that ECC support had indeed been enabled:
$ dmesg | grep ECC
[   10.492367] EDAC amd64: DRAM ECC enabled.
The results have been amazing. I am able to consistently saturate the server's 1 Gbps LAN connection when copying files, and Time Machine works flawlessly. Overall, I am very happy with the setup.
Total cost of the build:

1 * HP ProLiant N40L = $75
2 * 8 GB ECC RAM = $174
4 * WD Red 3 TB HDD = $440
Total = $689
Now I call that a pretty good deal.
- 01/07/2018 - Updated zfs_arc_max size to use more of the available system RAM.
- 01/07/2018 - Fixed error in command installing
- 01/07/2018 - Added ZIL and L2ARC cache configuration.