>>>>> "rb" == Ross Becker <ross@XXX> writes: rb> Otherwise, I may dump the JBOD and go back to the rb> controller-based RAID. try smaller vdev's for better performance. for raidz2, n^2 + 2. so try maybe two vdevs of 6 disks each plus a hotspare shared by both. for seek-heavy workloads the consensus expectation for 6 disk raidz2 vdev is indeed 1/4 the performance of raid6. but for streaming reads and writes it's hoped to be about the same. It's interesting what you actually find though. Check that the areca's write cache is still enabled when it's in JBOD mode. Is that checkable? Is there some other mode you can try, where you make 12 RAID0 sets each containing 1 disk? I've a few rants about raid-on-a-card and don't like it, but you're unlikely to be swayed by them because you no doubt think the areca is the most awesome card ever and are probably right. However there are corruption bugs in ZFS which are always a problem, but are worse if you have no zpool-level redundancy, so if you use the areca to do RAID, you'll be more vulnerable to them. also there are a variety of problems where ZFS will declare a pool corrupt and refuse to import it, or will lock up or panic every time it tries to import the pool, and it sounds like sometimes you can get your data back if you edit the code to bypass the ``sanity check'' and import the pool anyway but if you are not a ZFS developer this isn't an option for you, so the pool may as well be dead just because it failed to pass some ``sanity check''. In short, afaict ZFS is approximately like UFS+logging (also ``always consistent on disk''), except without fsck_ufs and with more bugs to work out, and of course with some really important new features. Because of these two problems, no matter what you use to do the raid, if the data is irreplaceable you should store it on two separate filesystems (two zpools on two machines, or a zpool and something else). Finally, when doing backups of your pool to protect it from corruption as just advised, the 'zfs send' format is not robust like 'tar'. If you flip one bit early in the stream, the entire stream is lost (emphatically untrue of tar). And there is no tool to validate a stream without extracting it. There is no mechanism to extract a stream of one version onto a filesystem of another---so far, it always works AFAIK, but if it ever didn't, you're SOL as there's no clear consensus to support this and probably no regression test for it, and there is no zfs-recv-onto-UFS recovery tool analagous to 'ufsrestore' that will extract onto notZFS, or extract a small file from the middle of a gigantic stream. Therefore you can only use zfs send as immediate input to 'zfs recv' and should use GNU tar or something else if you want to write a stream to tape or store streams on some archival filesystem. this is all in the list. Maybe flip through the last month or two of postings. I sure wish I had, before I'd converted a bunch of stuff to ZFS.