I ran across a very cool use of ZFS and VirtualBox on blogs.sun.com. The author took a snapshot of his VirtualBox VMs and then used ZFS’s cloning feature to instantly create new copies of a VM for testing or other uses. All the common bits are shared by the different clones, so this setup saves gigs of disk storage. Very clever.
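I haven’t tried this myself, but the recipe boils down to a few commands. Something like the following sketch, where the pool and dataset names are made up for illustration:

```shell
# Keep each VM's disk image in its own ZFS dataset (hypothetical names)
zfs create tank/vms/base

# Snapshot the pristine VM, then clone it; each clone shares all
# unchanged blocks with the snapshot, so it costs almost no extra space
zfs snapshot tank/vms/base@pristine
zfs clone tank/vms/base@pristine tank/vms/test1
zfs clone tank/vms/base@pristine tank/vms/test2
```

Each clone is a writable filesystem you can register a new VirtualBox machine against; only the blocks a clone actually changes consume new space.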
Yep. It actually solves the Rubik’s Cube using the light sensor. Awesome.
My 1.25TB Solaris “experiment” is becoming more and more important as I archive my digital life and move more of my media to the AppleTV. Digital video, unsurprisingly, needs a lot of storage. I’ve just about got it filled up and I think I’m ready for more free space. I’ve considered alternatives like Drobo, etc., but I just don’t have a solid desktop machine to host the files besides my Solaris server. Dedicated NAS devices are still relatively expensive (consider Drobo+DroboShare, TeraStation, ReadyNAS, etc. – starting around $700 for an empty chassis). It’s a lot more economical to just add more storage to my existing Solaris server. Plus, the wireless connection from my MacBook to the Solaris box has been rock solid since the last AirPort update from Apple.
There are a couple of ways to do this, of course. I can create a new raidz set of 3-4 drives and then expand the zpool to include this new space. I still maintain data integrity because every drive in the zpool is part of a raidz set. My other choice is to create a new 2-4TB raidz set and then move the data from the old zpool into a new zpool. After that’s done, I would destroy the old zpool and repurpose or sell the 250GB drives. What I don’t want to do is add single drives to the zpool, because then I lose data integrity if that drive fails, since it’s not part of a raidz set.
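The first option is basically a one-liner with zpool add. A sketch, with hypothetical device names (yours will differ):

```shell
# Option 1: grow the existing pool by adding a second raidz set
# (device names are made up for illustration)
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

# ZFS stripes new writes across both raidz sets; every drive is
# still protected by parity within its own set
zpool status tank
```

One thing to keep in mind: once a vdev is added, ZFS (at least as of Solaris 10) has no way to remove it from the pool, so option one is a commitment.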
Right now, I have five 250GB drives in the existing raidz set: four are SATA and one is ATA. For my situation, I’m interested in reducing the total number of drives (to reduce both power and heat) by taking advantage of the pricing on higher-density drives. At press time, 500GB drives are around $70 ($0.14/GB), 1TB drives are $120 ($0.12/GB) and 1.5TB drives are $150 ($0.10/GB). I can do 4x 1TB drives for $480 or 3x 1.5TB drives for $450 and end up with the same usable storage (3TB).
If I wanted to save a little money, I could get 4x 750GB drives for $360 (2.25TB usable). The other drive sizes don’t make sense for trying to replace an existing 1TB array. If you had room for 4 more drives in your box, then I would consider adding a 4x 500GB or 4x 640GB set because the cost per GB is pretty close.
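The raidz arithmetic behind those numbers is simple: one drive’s worth of capacity goes to parity, so usable space is (drives - 1) times drive size. A quick sanity check:

```shell
#!/bin/sh
# raidz1 usable capacity in GB: one drive's worth of blocks is parity
usable_gb() { echo $(( ($1 - 1) * $2 )); }

usable_gb 4 1000   # 4x 1TB    -> 3000 GB usable for $480
usable_gb 3 1500   # 3x 1.5TB  -> 3000 GB usable for $450
usable_gb 4 750    # 4x 750GB  -> 2250 GB usable for $360
```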
Of course, I’m out of SATA ports on my motherboard and almost out of drive bays in the case. If I create a new zpool and move all my data, I’ll need to have both drive sets online at least for a day while I copy 1TB of data. My plan is to get a 4x SATA II PCI card and attach the new drives there. Then I can move the zpool from the old set to the new set. Once that’s done, I’ll probably move the drives to the motherboard SATA connectors and leave the PCI card idle until I need to do something like this again.
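For the copy itself, ZFS’s send/receive can replicate a whole pool in one shot. A sketch with made-up pool and device names (old pool “tank”, new pool “tank2”), assuming a ZFS version new enough for recursive send:

```shell
# Build the new pool on the drives hanging off the PCI card
# (device names are hypothetical)
zpool create tank2 raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0

# Take a recursive snapshot and replicate everything to the new pool
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -Fd tank2

# After verifying the copy, the old pool can be retired
# zpool destroy tank
```

If your ZFS version predates send -R, the same migration can be done one filesystem at a time.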
Cheap SATA cards are plentiful, but hardware compatibility with Solaris 10 is always a crapshoot. I’ve heard enough anecdotal evidence that I’m convinced I can use a card based on the Silicon Image SIL3124 chipset. In fact, the card I have in mind is this specific model. This is one of those areas that is make or break for home-brew Solaris servers. Cheap SATA drives are the whole reason for wanting to build a box like this, but sometimes finding cards with drivers that work can be a showstopper.
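Solaris does ship an si3124 driver, so one way to find out quickly whether a given card works is to drop it in and check whether the driver actually attached:

```shell
# Does the si3124 driver show up bound to the new controller?
prtconf -D | grep -i si3124

# Is the driver module loaded?
modinfo | grep -i si3124

# List the SATA ports the controller exposes
cfgadm -a
```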
If anyone has other recommendations for SATA cards, I’d love to hear about them.