
UPDATE: Someone thought it’d be funny to submit this to Hacker News. It looks like I made some ZFS fans pretty unhappy. I’ll address some of the retorts posted on HN that didn’t consist of name-calling and personal attacks at the end of this article. And sorry, “OpenZFSonLinux,” I didn’t “delete the article after you rebuked what it said” as you so proudly posted; what I did was lock the post to private viewing while I added my responses, a process that doesn’t happen quickly when 33 of them exist. It’s good to know you’re stalking my posts though. It’s also interesting that you appear to have created a Hacker News user account solely for the purpose of said gloating. If this post has hurt your feelings that badly then you’re probably the kind of person it was written for.

It should also be noted that this is an indirect response to advice seen handed out on Reddit, Stack Overflow, and similar sites. For the grasping-at-straws-to-discredit-me HN nerds that can’t help but harp on the fact that “ZFS doesn’t use CRCs [therefore the author of this post is incompetent],” would you please feel free to tell that to all the people that say “CRC” when discussing ZFS? Language is made to communicate things and if I said “fletcher4” or “SHA256” they may not know what I’m talking about and think I’m the one who is clueless. Damned if you do, damned if you don’t.


tl;dr: Hard drives already do this, the risks of loss are astronomically low, ZFS is useless for many common data loss scenarios, start backing your data up you lazy bastards, and RAID-5 is not as bad as you think.


Bit rot just doesn’t work that way.

I am absolutely sick and tired of people in forums hailing ZFS (and sometimes btrfs which shares similar “advanced” features) as some sort of magical way to make all your data inconveniences go away. If you were to read the ravings of ZFS fanboys, you’d come away thinking that the only thing ZFS won’t do is install kitchen cabinets for you and that RAID-Z is the Holy Grail of ways to organize files on a pile of spinning rust platters.

In reality, the way that ZFS is spoken of by the common Unix-like OS user shows a gross lack of understanding of how things really work under the hood. It’s like the “knowledge” that you’re supposed to discharge a battery as completely as possible before charging it again: that advice was accurate for old Ni-Cd battery chemistry, but it never went away, and following it will wear out your laptop or cell phone’s lithium-ion cells far faster than if you’d just left them on the charger all the time. Bad knowledge that has spread widely tends to have a very hard time dying. This post shall serve as all of the nails AND the coffin for the ZFS and btrfs feature-worshiping nonsense we see today.

Side note: in case you don’t already know, “bit rot” is the phenomenon where data on a storage medium gets damaged because of that medium “breaking down” over time naturally. Remember those old floppies you used to store your photos on and how you’d get read errors on a lot of them ten years later? That’s sort of like how bit rot works, except bit rot is a lot scarier because it supposedly goes undetected, silently destroying your data and you don’t ever find out until it’s too late and even your backups are corrupted.

“ZFS has CRCs for data integrity”

A certain category of people are terrified of the techno-bogeyman named “bit rot.” These people think that a movie file not playing back or a picture getting mangled is caused by data on hard drives “rotting” over time without any warning. The magical remedy they use to combat this today is the holy CRC, or “cyclic redundancy check.” It’s a certain family of hash algorithms that produce a magic number that will always be the same if the data used to generate it is the same every time.

This is, by far, the number one pain in the ass statement out of the classic ZFS fanboy’s mouth and is the basis for most of the assertions that ZFS “protects your data” or “guards against bit rot” or other similar claims. While it is true that keeping a hash of a chunk of data will tell you if that data is damaged or not, the filesystem CRCs are an unnecessary and redundant waste of space and their usefulness is greatly over-exaggerated by hordes of ZFS fanatics.

Hard drives already do it better

Enter error-correcting codes (ECC). You might recognize that term because it’s also the specification for a type of RAM module that has extra bits for error checking and correction. What the CRC Jesus clan doesn’t seem to realize is that all hard drives since the IDE interface became popular in the 1990s have had ECC built into their design, and every single bit of information stored on the drive is both protected by it and transparently rescued by it once in a while.

Hard drives (as well as solid-state drives) use an error-correcting code to protect against small numbers of bit flips by both detecting and correcting them. If too many bits flip or the flips happen in a very specific way, the ECC in hard drives will either detect an uncorrectable error and indicate this to the computer or the ECC will be thwarted and “rotten” data will successfully be passed back to the computer as if it was legitimate. The latter scenario is the only bit rot that can happen on the physical medium and pass unnoticed, but what did it take to get there? One bit flip will easily be detected and corrected, so we’re talking about a scenario where multiple bit flips happen in close proximity and in such a manner that it is still mathematically valid.

While it is a possible scenario, it is also very unlikely. A drive that has this many bit errors in close proximity is likely to be failing, and the S.M.A.R.T. status should indicate a rising reallocated sector count (or worse) while this sort of failure is going on. If you’re monitoring your drive’s S.M.A.R.T. status (as you should be) and it starts deteriorating, replace the drive!
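
If you want to keep an eye on this yourself, here’s a minimal sketch using smartctl from smartmontools; the device name is just an example, so adjust it for your own drives:

# show the health summary and the attribute table (watch Reallocated_Sector_Ct and friends)
smartctl -a /dev/sda

# kick off an extended (long) self-test; view the result later with 'smartctl -l selftest /dev/sda'
smartctl -t long /dev/sda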

Flipping off your CRCs

Note that in most of these bit-flip scenarios, the drive transparently fixes everything and the computer never hears a peep about it. ZFS CRCs won’t change anything if the drive can recover from the error. If the drive can’t recover and sends back the dreaded uncorrectable error (UNC) for the requested sector(s), the drive’s error detection has already done the job that the ZFS CRCs are supposed to do; namely, the damage was detected and reported.

What about the very unlikely scenario where several bits flip in a specific way that thwarts the hard drive’s ECC? This is the only scenario where the hard drive would lose data silently, therefore it’s also the only bit rot scenario that ZFS CRCs can help with. ZFS with CRC checking will detect the damage despite the drive failing to do so and the damage can be handled by the OS appropriately…but what has this gained us? Unless you’re using specific kinds of RAID with ZFS or have an external backup you can restore from, it won’t save your data, it’ll just tell you that the data has been damaged and you’re out of luck.
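
For the record, here’s roughly what that looks like in practice (pool name is a placeholder): on a plain, non-redundant pool a scrub can only count the damage, and the status output lists the affected files without being able to repair them.

zpool scrub tank       # read everything back and verify the checksums
zpool status -v tank   # reports checksum error counts and lists permanently damaged files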

Hardware failure will kill your data

If your drive’s on-board controller hardware, your data cable, your power supply, your chipset with your hard drive interface inside, your RAM’s physical slot connection, or any other piece of the hardware chain that goes from the physical platters to the CPU has some sort of problem, your data will be damaged. It should be noted that the SATA link protects each transferred frame with a 32-bit CRC, so the transmission from the drive’s controller to the host system’s drive controller is protected against transmission errors. Using ECC RAM only helps with errors in the RAM itself, but data can become corrupted while being shuffled around in other circuits, and the damaged values stored in ECC RAM will be “correct” as far as the ECC RAM is concerned.

The magic CRCs I keep making fun of will help with these failures a little more because the hard drive’s ECC no longer protects the data once the data is outside of a CRC/ECC capable intermediate storage location. This is the only remotely likely scenario that I can think of which would make ZFS CRCs beneficial.

…but again: how likely is this sort of hardware failure to happen without the state of something else in the machine being trashed and crashing something? What are the chances of your chipset scrambling the data only while the other millions of transistors and capacitors on the die remain in a functional and valid working state? As far as I’m concerned, not very likely.

Data loss due to user error, software bugs, kernel crashes, or power supply issues usually won’t be caught by ZFS CRCs at all. Snapshots may help, but they depend on the damage being caught before the snapshot of the good data is removed. If you save something and come back six months later and find it’s damaged, your snapshots might just contain a few months with the damaged file and the good copy was lost a long time ago. ZFS might help you a little, but it’s still no magic bullet.

Nothing replaces backups

By now, you’re probably realizing something about the data CRC gimmick: it doesn’t hold much value for data integrity and it’s only useful for detecting damage, not correcting it and recovering good data. You should always back up any data that is important to you. You should always keep it on a separate physical medium that is ideally not attached to the computer on a regular basis.

Back up your data. I don’t care about your choice of filesystem or what magic software you write that will check your data for integrity. Do backups regularly and make sure the backups actually work.

In all of my systems, I use the far less exciting XFS on Linux with metadata CRCs (once they were added to XFS) on top of a software RAID-5 array. I also keep external backups of all systems updated on a weekly basis. I run S.M.A.R.T. long tests on all drives monthly (including the backups) and about once a year I will test my backups against my data with a tool like rsync that has a checksum-based matching option to see if something has “rotted” over time.
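
For the curious, that routine boils down to a handful of commands; a rough sketch with example paths and device names, not a prescription:

smartctl -t long /dev/sdX               # monthly: extended self-test on every drive, backups included
rsync -a /data/ /mnt/backup/data/       # weekly: update the external backup
rsync -avcn /data/ /mnt/backup/data/    # yearly: checksum-based dry run; lists files whose contents differ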

All of my data loss tends to come from poorly typed ‘rm’ commands. I have yet to encounter a failure mode that I could not bounce back from in the past 10 years. ZFS and btrfs are complex filesystems with a few good things going for them, but XFS is simple, stable, and all of the concerning data loss bugs were ironed out a long time ago. It scales well and it performs better all-around than any other filesystem I’ve ever tested. I see no reason to move to ZFS and I strongly question the benefit of catching a highly unlikely set of bit damage scenarios in exchange for the performance hit and increased management complexity that these advanced features will cost me…and if I’m going to turn those features off, why switch in the first place?


Bonus: RAID-5 is not dead, stop saying it is

A related category of blind zealot is the RAID zealot, often following in the footsteps of the ZFS zealot or even occupying the same meat-suit. They loudly scream about the benefits of RAID-6, RAID-10, and fancier RAID configurations. They scorn RAID-5 for its terrible rebuild times and hype up the fact that “if a second drive dies while rebuilding, you lose everything!” They point at 10TB hard drives, do back-of-the-napkin equations, and tell you about how dangerous and stupid it is to use RAID-5 and how their system that gives you less space on more drives is so much better.

Stop it, fanboys. You’re dead wrong and you’re showing your ignorance of good basic system administration practices.

I will concede that your fundamental points are mostly correct. Yes, RAID-5 can potentially have a longer rebuild time than dual-parity formats like RAID-6. Yes, losing a second drive after one fails or during a rebuild will lose everything on the array. Yes, a 32TB RAID-5 built from five 8TB drives will take a long time to rebuild (about 50 hours at 180 MB/sec). No, this isn’t acceptable in an enterprise server environment. Yes, the infamous RAID-5 write hole (where a stripe and its parity aren’t both updated before a crash or power failure and the data is damaged as a result) is a problem, though a very rare one to encounter in the real world. How do I, the smug techno-weenie advocating for dead old stupid RAID-5, counter these obviously correct points?

  • Longer rebuild time? This is only true if you’re using the drives for something other than rebuilding while it’s rebuilding. What you really mean is that rebuilding slows down less when you interrupt it with other work if you’re using RAID levels with more redundancy. No RAID exists that doesn’t slow down when rebuilding. If you don’t use it much during the rebuild, it’ll go a lot faster. No surprise there!
  • Losing a second drive? This is possible but statistically very unlikely. However, let’s assume you ordered a bunch of bad Seagates from the same lot number and you really do have a second failure during rebuild. So what? You should be backing up the data to an external backup, in which case this failure does not matter. RAID-6 doesn’t mean you can skip the backups. Are you really not backing up your array? What’s wrong with you?
  • RAID-5 in the enterprise? Yeah, that’s pretty much dead because of the rebuild process slowdown being worse. An enterprise might have 28 drives in a RAID-10 because it’s faster in all respects. Most of us aren’t an enterprise and can’t afford 28 drives in the first place. It’s important to distinguish between the guy building a storage server for a rack in a huge datacenter and the guy building a home server for video editing work (which happens to be my most demanding use case).
  • The RAID-5 “write hole?” Use an uninterruptible power supply (UPS). You should be doing this on any machine with important data on it anyway! Assuming you don’t use a UPS, Linux as of kernel version 4.4 has added journaling features for RAID arrays in an effort to close the RAID-5 write hole problem; a rough sketch of setting that up follows this list.
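
For anyone curious what that md journaling looks like, here’s a rough sketch; it assumes mdadm 3.4+ and kernel 4.4+, and every device name below is made up for illustration:

# five-disk RAID-5 with an SSD partition acting as the write journal,
# which absorbs stripe updates so a crash can’t leave data and parity out of sync
mdadm --create /dev/md0 --level=5 --raid-devices=5 \
      --write-journal=/dev/nvme0n1p1 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1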

A home or small business user is better off with RAID-5 if they’re also doing backups like everyone should anyway. With 7200 RPM 3TB drives (the best $/GB ratio in 7200 RPM drives as of this writing) costing around $95 each shipped, I can only afford so many drives. I know that I need at least three for a RAID-5 and I need twice as many because I need to back that RAID-5 up, ideally to another machine with another identically sized RAID-5 inside. That’s a minimum of six drives for $570 to get two 6TB RAID-5 arrays, one main and one backup. I can buy a nice laptop or even build a great budget gaming desktop for that price, but for these storage servers I haven’t even bought the other components yet. To get 6TB in a RAID-6 or RAID-10 configuration, I’ll need four drives instead of three for each array, adding $190 to the initial storage drive costs. I’d rather spend that money on the other parts, and in the rare instance that I must rebuild the array I can read from the backup server to reduce the rebuild’s impact. I’m not worried about a few extra hours of rebuild.

Not everyone has thousands of dollars to allocate to their storage arrays or the same priorities. All system architecture decisions are trade-offs and some people are better served with RAID-5. I am happy to say, however, that if you’re so adamant that I shouldn’t use RAID-5 and should upgrade to your RAID levels, I will be happy to take your advice on one condition.

Buy me the drives with your own money and no strings attached. I will humbly and graciously accept your gift and thank you for your contribution to my technical evolution.

If you can add to the conversation, please feel free to comment. I want to hear your thoughts. Comments are moderated but I try to approve them quickly.


Update to address Hacker News respondents

First off, it seems that several Hacker News comments either didn’t read what I wrote, missed a few things, or read more into it than what I really said. I want to respond to some of the common themes that emerged in a general fashion rather than individually.

I am well aware that ZFS doesn’t exactly use “CRCs” but that’s how a lot of people refer to the error-checking data in ZFS colloquially so that’s the language I adopted; you pointing out that it’s XYZ algorithm or “technically not a CRC” doesn’t address anything that I said…it’s just mental masturbation to make yourself feel superior and it contributes nothing to the discussion.

I was repeatedly scolded for saying that the ZFS checksum feature is useless despite never saying that. I acknowledge that it does serve a purpose and use cases exist. My position is that I believe ZFS checksums constitute a lot of additional computational effort to protect against a few very unlikely hardware errors once the built-in error checking and correction in most modern hardware is removed from the overall picture. I used the word “most” in my “ZFS is useless for many common data loss scenarios” statement for a reason. This glossing over of important details is the reason I refer to such people as ZFS “zealots” or “fanboys.” Rather than taking the time to understand my position fully before responding, they quickly scanned the post for ways to demonstrate my clear ignorance of the magic of ZFS to the world and jumped all over the first thing that stood out.

 

kabdib related an anecdote where the RAM on a hard drive’s circuit board was flipping data bits in the cache area, and the system involved used an integrity check similar to ZFS, which is how the damage was detected. The last line sums up the main point: “Just asserting ‘CRCs are useless’ is putting a lot of trust on stuff that has real-world failure modes.” Remember that I didn’t assert that CRCs are useless; I specifically outlined where the ZFS checksum feature cannot be any more helpful than existing hardware integrity checks, which is not the same thing. I question how common it is for hard drive RAM to flip only the bits in a data buffer/cache area without corrupting other parts of RAM in ways that would cause the drive’s built-in software to fail. I’m willing to bet that there aren’t any statistics out there on such a thing. It’s good that a ZFS-like construct caught your hardware issue, but your obscure hard drive failure anecdote does not necessarily extrapolate out to cover billions of hard drives. Still, if you’re making an embedded device like a video game system and you can afford to add that layer of paranoia to it, I don’t think that’s a bad thing. Remember that the purpose of my post is to address those who blindly advocate ZFS as if it’s the blood of Computer Jesus and magically solves the problems of data integrity and bit rot.

rgbrenner offered indirect anecdotal evidence, repetitions of the lie that I asserted “CRCs are useless,” and then made a ridiculous attempt at insulting me: “If this guy wrote a filesystem (something that he pretends to have enough experience to critique), it would be an unreliable unusable piece of crap.” Well then, “rgbrenner,” all I can say is that if you are so damned smart and have proof of this “unreliable and unusable” state that it’s in, file a bug against the filesystem I wrote and use on a daily basis for actual work so it can be fixed, and feel free to keep the condescending know-it-all attitude to yourself when you do so.

AstralStorm made a good point that I’ve also been trying to make: if your data is damaged in RAM that ZFS never touches, perhaps while it’s being edited in a program, ZFS will have no idea the damage ever happened.

wyoung2 contributed a lot of information that was well-written and helpful. I don’t think I need to add anything to it, but it deserves some recognition since it’s a shining chunk of gold in this particular comment septic tank.

X86BSD said that “Consumer hardware is notoriously busted. Even most of the enterprise hardware isn’t flawless. Firmware bugs, etc.” I disagree. In my experience the vast majority of hardware works as expected. Even most of the computers with every CPU regulator capacitor leaking electrolyte pass extended memory testing and CPU burn-in tests. Hard drives fail a lot more than other hardware does, sure, but even then the ECC does what it’s supposed to do: it detects the error and reports it instead of handing over the broken data that failed the error check. I’d like some hard stats rather than anecdotes, but I’m not even sure they exist due to the huge diversity of failure scenarios that can come about.

asveikau recalls the hard drive random bit flipping problem hitting him as well. I don’t think that this anecdote has value because it’s a hard drive hardware failure. Sure, ZFS can catch it, but let’s remember that any filesystem would catch it because the filesystem metadata blocks will be read back with corruption too. XFS has optional metadata CRCs and those would catch this kind of disk failure so I don’t think ZFS can be considered much better for this failure scenario.

wyoung2 made another lengthy comment that requires me to add some details: I generally work only in the context of Linux md RAID (the raid5 driver specifically) so yes, there is a way to scrub the entire array: ‘echo check > /sys/block/md0/md/sync_action’. Also, if a Linux md RAID encounters a read error on a physical disk, the data is pulled from the remaining disk(s) and written back to the bad block, forcing the drive to either rewrite the data successfully or reallocate the sector which has the same effect; it no longer dumps a whole drive from the RAID on the basis of a single read error unless the attempts to do a “repair write” fail also. I can’t really comment on the anecdotal hardware problems discussed; I personally would not tolerate hardware that is faulty as described and would go well out of my way to fix the problem or replace the whole machine if no end was in sight. (I suppose this is a good time to mention that power supply issues and problems with power regulation can corrupt data…)
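
If you want to try this yourself, a quick sketch (md0 is an example device):

echo check  > /sys/block/md0/md/sync_action    # read-only scrub: mismatches are counted, not corrected
cat /sys/block/md0/md/mismatch_cnt             # how many mismatched stripes the last check found
echo repair > /sys/block/md0/md/sync_action    # scrub and rewrite parity wherever it doesn’t match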

Yet another wyoung2 comment points out one big advantage ZFS has: if you use RAID that ZFS is aware of, ZFS checksums allow ZFS to know what block is actually bad when you check the array integrity. I actually mentioned this in my original post when I referenced RAID that ZFS pairs with. If you use a proper ZFS RAID setup then ZFS checksums become useful for data integrity; my focus was on the fact that without this ZFS-specific RAID setup the “ZFS protects your data” bullet-point is false. ZFS by itself can only tell you about corruption and it’s a dangerous thing to make people think the protection offered by a ZFS RAID setup is offered by ZFS by itself.

At this point I can only assume that rgbrenner just enjoys being a dick. And that, in contrast, AstralStorm understood what I was trying to say to at least some extent.

DiabloD3 quoted me on “RAID is not a replacement for backups” and then mentions ZFS external backup commands. Hey, uh, you realize that the RAID part was basically a separate post, right? In fact, there is not a single mention of ZFS in the RAID section of the post other than as a topic transition mechanism in the first paragraph. I included the RAID part because the ZFS religion and the RAID-over-5-only religion have the same “smell.”

I’ll have to finish this part later. It takes a lot of time to respond to criticism. Stay tuned for more. I have to stop so I can unlock the post and keep OpenZFSonLinux from eating off his own hands with anticipation. As a cliff-hanger, check this out…I enjoyed the stupidity of X86BSD’s second comment about me endangering my customers’ data [implicitly because I don’t use ZFS] so much that I changed my blog to integrate it and embrace how horrible of a person I am for not making my customers use ZFS with checksums on their single-disk Windows machines. If my destiny is to be “highly unethical” then I might as well embrace it.


13 Comments

  1. I hear you chief. It makes me sick too. Too many eExperts.

    Right now I’m building a file server, and wherever I go…I hear NAS, ZFS, FreeNAS, BTRFS.

    I seek stability and reliability. I really love my photos taken around the world with so many stories and “suffering”

    So far I’ve written a paranoid script for copying data, and I can check all data integrity with checksums.

    I’m also paranoid (perhaps it comes from the extreme sports I’ve been teaching for over 30 years).

    So far I’ve found nothing smarter…I think I’ll use standard ext4 on Debian, keep checksums created right when each file first arrives, and run scripts that check the filesystem. If anything gets corrupted, it all starts on the flash cards in the camera and I can’t do anything about that.

    What do you suggest as an optimal, intelligent solution for secure file storage/archiving? I won’t run the file server 24/7. I just want to satisfy my paranoia and sleep better 😀

    • There is no substitute for a good backup. Store your files on two different kinds of media at two different physical locations and sync those media as often as possible. If something goes sour on the server, you’ll have backups of most of it. Since nothing can guarantee data will never bit rot, it’s best to skip things like filesystem checksums and use redundancy instead. For the paranoia you can use a tool like md5deep to make a list of (and verify) data checksums periodically, but I’d only bother doing such a thing very infrequently (maybe every half-year) because it takes a ton of time and if you’re rsync-ing for backup it’s not going to transfer a rotted file unless the source file’s change time is also different.
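
      If you want a concrete starting point, here’s a minimal sketch using plain md5sum (md5deep does the same job with its recursive mode); the paths are examples:

      find /data -type f -print0 | xargs -0 md5sum > /root/data-checksums.md5   # build the list once
      md5sum --check --quiet /root/data-checksums.md5                           # every half-year or so: prints only files that no longer match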

  2. Cheers, finally some logical points regarding ZFS, bit rot, raid, and backing up your data.

  3. I’ve been in threads about the maths behind RAID5 failures. If they were taken at face value, I’d be sitting on six nines likelihood of seeing a URE take an array offline for any given year – but I’ve never seen it happen. People suggest I have to be lying when I say how many RAID5 arrays I have in production without a failure. It’s absurd.

  4. I don’t know much about btrfs so I’ll stick to ZFS-related comments. ZFS does not use CRC; by default it uses the fletcher4 checksum. Fletcher’s checksum is made to approach CRC properties without the computational overhead usually associated with CRC.

    Without a checksum, there is no way to tell if the data you read back is different from what you wrote down. As you said, corruption can happen for a variety of reasons – due to bugs or HW failure anywhere in the storage stack. Just like with other filesystems, not all types of corruption will be caught even by ZFS, especially on the write-to-disk side. However, ZFS will catch bit rot and a host of other corruptions, while non-checksumming filesystems will just pass the corrupted data back to the application. Hard drives don’t do it better; they have no idea if they’ve bit rotted over time, and there are many other components that may and do corrupt data. It’s not as rare as you think. The longer you hold data and the more data you have, the higher the chance you will see corruption at some point.

    I want to do my best to avoid corrupting data and then giving it back to my users so I would like to know if my data has been corrupted (not to mention I’d like it to self-heal as well which is what ZFS will do if there is a good copy available). If you care about your data use a checksumming filesystem period. Ideally, a checksumming filesystem that doesn’t keep the checksum next to the data. A typical checksum is less than 0.14 Kb while a block that it’s protecting is 128 Kb by default. I’ll take that 0.1% “waste of space” to detect corruption all day, any day. Now let’s remember ZFS can also do in-line compression which will easily save you 3-50% of storage space (depending on the data you’re storing) and calling a checksum a “waste of space” is even more laughable.

    I do want to say that I wholeheartedly agree with “Nothing replaces backups” no matter what filesystem you’re using. Backing up between two OpenZFS pools on machines in different physical locations is super easy using zfs snapshotting and send/receive functionality.

    [Admin edit: I got mad when senpai didn’t notice me]

    • It does not matter what algorithm is used for the CRC/checksum/hash. In all cases it is a smaller number generated from data that (if taken as one string of bits) constitutes a massively larger number, and it takes time to compute and storage to keep around. The question is this: is it worth the extra storage and the extra computation times for every single I/O operation performed on the filesystem? I say it isn’t.

      Hard drives DO in fact know if something has bit rotted, assuming the rot isn’t so severe that it extends beyond the error detection capabilities of the on-disk ECC. Whenever a drive reports an “uncorrectable error” it’s actually reporting an on-disk ECC error that was severe enough that the data couldn’t be corrected. In my opinion, on-disk checksums (CRCs, hashes, whatever term is preferred) are targeting a few types of very rare hardware failures (they must mangle data despite all hardware error checking mechanisms AND must not cause any other damage that crashes the program or machine which would process or write that data out to disk) and do so at significant expense (a check must be done for every piece of data that is read from disk). Even ZFS checksums are not foolproof; for example, if data is damaged in RAM or even in a CPU register before being sent to ZFS, the damaged data will still be treated as valid by ZFS because it has no way to know anything is wrong.

      As discussed in my post, ZFS checksums are useless without a working backup of the data to pull from, preferably a ZFS-specific RAID configuration that enables real-time “self-healing” as you’ve mentioned. Without some sort of redundancy…well, what are you going to do? You know it’s damaged but you have no way to fix it.

      You seem to take particular issue with my assertion that checksums are a waste of space. Granted, they’re relatively small compared to file data; however, the space issue pales in comparison to the processing time and additional I/O for storing and retrieving those checksums. If the checksums aren’t beside the data then that 128K read will incur at least one 4K read to fetch the checksum, which is not nearby, resulting in a disk performance hit. Enough read operations with checksum checking at once and streaming read speeds approach the speed of fully random I/O a lot faster than they otherwise would. It also takes CPU time to calculate a hash value over a 128K block; while some are faster than others, all take CPU time, and large enough block sizes will repeatedly blow away CPU D-cache lines during the checksum work, reducing overall system performance. Since many ZFS users seem to pair it with FreeNAS and relatively small, weak systems like NAS enclosures, the implications of all this extra CPU hammering should be obvious. Of course, a Core i7 machine with 16GB of DDR4 RAM might do it so fast that it doesn’t matter as much, but being able to buy a bigger box to minimize the impact of lower efficiency does not change the fact that such a drop exists.

      In computing, we have to choose a set of compromises since rarely does any given solution satisfy speed, precision, reliability, etc. all at the same time. In my opinion, ZFS data checksums are not worth the added cost, particularly since the problem surface area is very small and unlikely to ever happen once the error checking coverage of hard drive ECC, RAM and on-CPU ECC if applicable, and various bus-level transceiver error detection methods are taken away. The beauty of computing is that you are free to make a different trade-off in favor of bit rot paranoia if it makes you sleep better at night. What’s right for me may not be right for you. I do not consider the very tiny risk of highly specific and unlikely corruption circumstances that can be detected to be worth covering ESPECIALLY since the same cosmic rays that can bit-flip the data in a detectable place could just as easily flip it in an undetectable place, but I’m not in your situation and making your choices.

      tl;dr: one of us is less risk-averse, and that’s okay.

  5. Perhaps this is long since dead, but I wanted to give an example where “bitrot” is quite common. Plenty of laptops still have 2.5″ mechanical HDDs; if the drive is spinning and you pick up the laptop, it is quite likely to cause a few kilobytes of sequential broken data. Switch to ZFS, activate copies=2, and the errors which the drive could notice, but not fix, are no longer a problem. Drive abuse to be sure, but quite common nonetheless.

    • It’s pretty hard to cause the damage you’re talking about, but the damage to the disk surface will be caught by the on-disk error correcting code if this happens. It is extremely unlikely that physical damage to the platter surface will cause data damage that can fool the ECC.

  6. And this is basically my thoughts on ZFS ever since I started hearing about it. Now, I’m not sure I would talk about ZFS independently of RAID-Z. The two are basically always paired. I will give them that the array expandability could be nice, but I have never seen a detailed speed test, and we highly value speed.

    That brings me to what I actually wanted to comment on. We will never use RAID-5 ever again. It’s not because of anything you mention, but rather, because the write speed is atrocious. After some problems, we found that our 5-drive array averaged about 5MB/s on write operations, compared to a single drive averaging around 45MB/s. We tracked down the problem to something inherent in RAID-5. The data and parity are saved in different locations on each disk. This means that each write requires the head to write, then seek to another location on the drive, and write again. The reason random I/O is slow is all the head seeking, and RAID-5 forces this for every write. For $100 more we moved to a RAID-10 with slightly less space and 200MB/s writes. This is, of course, for mechanical drives. SSDs are not nearly as heavily affected, but are still slowed by random writes.

    Now, it does seem like to get the most out of modern drives and SMART, something would need to periodically force a read of every used bit on the drive to prevent bad bits from building up undetected. A full backup would do this, but they take forever. zpool scrub would do this. Does a full drive rsync do this as well? It’s much faster than a full backup.

    • I disagree that ZFS + RAID-Z are usually paired. The entire reason for my article is that people constantly sing the praises of ZFS without making it clear that RAID-Z (specifically, as opposed to ZFS on md/LVM RAID) is mandatory for many of the touted integrity features, specifically the magical self-healing that is such a huge draw. I feel that it is dangerous to advocate the features of ZFS without also explaining the requirements for those features to work, yet that’s what you see going on in most “what filesystem for my NAS/server?” threads: “ZFS, it magically stops bit rot and fixes damage! [But I’m not going to tell you about RAID-Z or emphasize good backups, nor about how detecting bit rot is useless without a non-broken backup copy!]”

      Your RAID-5 issue might be the same one I discovered if you’re using the md raid5 driver: very large stripe sizes cause massive write speed degradation and the default Linux md raid5 stripe cache size is too small. You’ll often see raid5 how-to guides say to use larger stripes for faster throughput but they are written by people that don’t understand that RAID-5 must be updated for an entire stripe at a time; it’s a form of write amplification just like SSDs, so even just writing one 4K sector requires reading not only a stripe width worth of sectors (minus the one being updated) from every disk excluding the parity disk but also writing one stripe width of parity in addition to the modified sector. For sequential workloads this tends to be of little consequence but for random writes it is simply a disaster. That’s why Linux caches up the stripe updates and tries to write them out more optimally, but the stripe cache is usually too small. It maxes out at 32768. Try using a 64k stripe width and setting the stripe cache size for all md raid5 arrays to 32768 after booting; you’ll probably notice a big difference in performance.
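
      A quick sketch of what I mean; md0 is an example, and the setting doesn’t survive a reboot, so it belongs in a boot script:

      cat /sys/block/md0/md/stripe_cache_size             # default is a paltry 256 (pages per device)
      echo 32768 > /sys/block/md0/md/stripe_cache_size    # raise it to the maximum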

      RAID-10 has some issues of its own. I tried out md raid10 (far2) and found the overall performance to be quite poor relative to RAID-5. Of course, I didn’t try any sort of tuning knobs so I may not have given it a fair shake; however, I find that a well-tuned RAID-5 with a properly formatted and aligned XFSv5 filesystem performs well enough to easily handle dumping lossless compressed video data to it in real time while still serving up random small reads without issue, so it’s good enough for my situation. I can understand others choosing a different path though, and that’s what is so wonderful about the Linux ecosystem in general: everyone has options and can pick the one that suits them.

      A full-drive rsync will force reading of file data and most of the filesystem metadata but if you really want to force a full disk or array read from end-to-end, there’s an elegant and absurdly simple solution (though it’ll surely starve other tasks trying to perform I/O):

      cat /dev/md0 > /dev/null

      Or if you have the wonderful amazing glorious pv utility and want a progress indicator:

      pv -pterab /dev/md0 > /dev/null
    • 6TB RAID-10 XFS array scrub takes about 8 hours on my file server.

      Read/write over SMB is a tragedy (most likely an Apple vs. Linux vs. Windows issue).

      NFS does 100+ MB/s over gigabit LAN.

      Otherwise the RAID-1 maxes out around 250 MB/s.

  7. Bit rot is a problem now; it isn’t 1995 and you are just incorrect.

    Read the studies on hard drives and what the ACTUAL hard drive manufacturers say.

    The entire reason for the extra checksum and checking/correcting on every read is the sheer size of hard drives now.

    No, hard drive ECC/CRC will not save you. Statistically, for every 12TB of data read there will be a silent read error, and that is what the manufacturers say, not some ZFS zealots.
    The read error rate hasn’t changed much since 1995, and hardly anyone in 1995 would have been reading 12TB.
    You can buy a single 12TB hard drive now; the problem is you cannot read all 12TB without an error.

    Finally, basically all OSes are going down the same route that ZFS did, checksumming data on the fly: Linux has btrfs (use zfsonlinux), Mac OS X has the new APFS, and Microsoft has ReFS.

    Read more here https://web.archive.org/web/20090228135946/http://www.sun.com/bigadmin/content/submitted/data_rot.jsp

    • You are objectively wrong and I can prove it any night of the week. I have a 12TB RAID-5 array sitting eight feet from me. If your “can’t read 12TB without an error” assertion is true for a single drive then five drives should be five times worse off, yet I’ve run a weekly data scrub on the array since I built it and there has not been a single parity mismatch. Even if the drive had a set of bit flips that happened to pass by ECC, the RAID-5 parity check would almost certainly still fail. For the parity check to pass despite the bit flips they’d have to be extremely specific and possibly span multiple disks in that specific manner.

      You also cite an article that cites studies from nearly a decade ago. Storage technology has changed a lot since 2008. The article is ultimately a marketing article, not a technical article. It’s written by a Sun “evangelist” which is a stupid name for “obnoxious marketing guy.”

      ReFS is being disabled as a new FS option in Windows 10 Pro SKUs soon, APFS is slow and has a lot of growing pains, btrfs is wonky in all sorts of ways and not trustworthy…what’s your point with all that other stuff? None of those are ZFS and none of those are seeing mass adoption.

      How do you explain my 12TB RAID-5 scrub consistently passing? Am I just super lucky and somehow blessed by God himself to the point that I never experience these data errors or is your assertion based on grossly outdated knowledge and the bit rot panic hype pushed by ZFS fanboys?

