Sorry for hijacking your thread.
Manny -
When I went to college for the first time, I took a psychology course. All of a sudden I had all this new information, and I started to "diagnose" everyone around me. This person is probably bipolar, that person is probably a narcissist, etc. The truth is I had no idea what I was talking about. I understood the basics, but not nearly enough to understand something as complex as a psychological disorder. You see, a little bit of knowledge gives you a false sense of confidence, but these things are very complicated, and it takes a LOT of learning and experience before you can say for certain how things work in the real world.
Think about this: the Windows NT codebase has been in development for 19 years and contains anywhere from 50 to 150 million lines of code (exactly how much isn't public knowledge), and it was written by thousands of the world's most talented programmers. Don't you think the problems that would concern an inexperienced programmer would be immediately apparent to a senior programmer (who probably has a PhD in CompSci)? Doesn't it make sense that they would have taken these issues into account and addressed them?
> Also I'm not sure that you completely understand how the overall fault chance is calculated for serially connected elements. The simple rule: the overall fault chance is higher than the biggest chance among the elements.

I do understand; it's an academic exercise that you learn in your first year of college.
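For independent components in series, the combined failure probability follows directly from the individual survival probabilities, which is why it always exceeds the worst single element. A minimal sketch (the probabilities below are made up purely for illustration):

```python
from math import prod

def serial_failure_probability(p_fail):
    """Overall failure probability of independently failing
    components in series: the chain fails if ANY element fails."""
    return 1.0 - prod(1.0 - p for p in p_fail)

# Three illustrative links in a chain (RAM, cache layer, disk).
probs = [0.01, 0.02, 0.05]
total = serial_failure_probability(probs)
# The combined risk is higher than the worst single element (0.05).
print(round(total, 5))
```

Note the combined chance is still far from the *sum* of the chances; it only slightly exceeds the largest one when the others are small.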
Maybe I should have mentioned that I have a Bachelor of Science in Systems Engineering & Architecture, not to mention two decades of experience as a network & systems administrator.
> So the ECC you mentioned that is used for storing data on HDDs works well, and sometimes it can even repair a bad block by remapping it without data corruption. But not always.

Even on a drive with bad blocks, a journaled file system (NTFS, Btrfs, ext3/4, ReiserFS, ZFS) will either self-correct or remap the bad blocks. Yes, it's possible for a drive to develop so many bad blocks that the filesystem can't address them, but that's just a symptom of a failing drive, and you can't avoid that. This is mostly an issue for non-journaled file systems such as FAT (FAT16, FAT32, exFAT). So yes, you understand there is a danger here, but you don't seem to get what that means in the real world. This is a case of experience, and you really need to manage a lot of devices over a long period to understand the difference.
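The self-correction described above rests on write-ahead journaling: the filesystem logs its intent before modifying data, so after a crash it can replay or discard incomplete operations. A toy sketch of the principle (not how NTFS or ext4 actually lay out their logs, just the idea):

```python
# Toy write-ahead journal: log the intent, apply the write, mark it done.
# After a crash, entries still marked "pending" are replayed, so
# metadata never ends up half-written.

journal = []   # persistent intent log (on-disk in a real filesystem)
blocks = {}    # the "data area"

def write_block(addr, data):
    journal.append(("pending", addr, data))   # 1. record intent
    blocks[addr] = data                       # 2. apply to data area
    journal[-1] = ("done", addr, data)        # 3. commit

def recover():
    """Replay any write whose commit never made it to disk."""
    for state, addr, data in journal:
        if state == "pending":
            blocks[addr] = data               # redo the interrupted write

write_block(7, b"hello")
# Simulate a crash after step 1 of a second write:
journal.append(("pending", 8, b"world"))
recover()   # the interrupted write is completed on replay
```

The key property is that step 1 hits stable storage before step 2, so recovery always has enough information to finish or roll back.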
> Other links in the chain are much less protected. For example, server RAM has ECC, and that requires one extra chip on the module. Home users don't want to pay more for ECC and lose RAM capacity, so their memory isn't protected much.

Sorry to tell you, but we don't use ECC RAM that much anymore. It adds a premium to the cost of the server, ECC slows down RAM performance, it reduces your maximum RAM, and the protection it offers has been debated for years. There was a time when I had to have ECC RAM in a server just to install server applications (such as MS Exchange or SQL Server), but since Windows Server 2008 that hasn't been an issue. Bits flip all the time, and the more RAM you have and the faster the clock rate, the bigger the problem gets. If MS didn't handle this at the software level, most PCs would BSOD all the time. So ECC adds a level of protection at the hardware level, but this is also handled by the OS. These days I don't buy ECC RAM unless the vendor requires it, and that doesn't happen too much anymore.
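For what it's worth, the single-bit flips being discussed are exactly what ECC codes fix. Here is a toy Hamming(7,4) sketch of the mechanism; real ECC DIMMs use a wider SECDED code over 64 data bits, but the principle is the same:

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits.
# A single flipped bit can be located and corrected.

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(code):
    """Fix a single bit error in-place and return the 4 data bits."""
    s1 = code[0] ^ code[2] ^ code[4] ^ code[6]
    s2 = code[1] ^ code[2] ^ code[5] ^ code[6]
    s3 = code[3] ^ code[4] ^ code[5] ^ code[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4   # 1-based index of the bad bit
    if syndrome:
        code[syndrome - 1] ^= 1           # flip it back
    return [code[2], code[4], code[5], code[6]]

word = encode([1, 0, 1, 1])
word[4] ^= 1                   # a cosmic ray flips one bit
print(correct(word))           # data recovered: [1, 0, 1, 1]
```

Two simultaneous flips in the same word defeat the correction, which is part of why the real-world value of ECC is still debated, as noted above.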
> You know something, and I know something; maybe you know some things better. But I know that no matter how good you are, there is always something that you don't know.

You and I are not in competition with one another. You are inexperienced, and I'm taking the time to explain where you are making your mistakes because in the past you were kind enough to do the same for me. You don't have to listen to me, but I'd suggest spending some time on a professional forum like TechNet.
> You can keep telling me how safe the solution is, but I will always say: "There is a way for it to go wrong, and it will!"

The problem is you don't understand how to do a proper risk assessment, and it's caused you to overestimate how dangerous these issues really are. If your concern is explaining to people that their data is at risk, that's fine; just don't exaggerate the issue. Do you experience read issues? No, you don't, so why are you trying to worry someone about something that COULD be an issue? An electrical spike COULD damage your computer; is the solution to tell people not to turn on their computers? Of course not. Yes, each app is different and they CAN have this issue, but it is not a problem with Fancycache at the moment. So instead of worrying people about some hypothetical possibility, why not actually help them by taking the time to explain how they can protect their data using data-parity strategies (RAID, Storage Spaces, backups, etc.)?
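A proper risk assessment boils down to ranking threats by expected loss (probability times impact) rather than by how alarming they sound. A quick sketch with entirely made-up numbers:

```python
# Rank threats by expected annual loss = probability * impact.
# All figures below are invented purely for illustration.

threats = {
    "power spike, no surge protector":      (0.001, 1500),  # (p/yr, $ loss)
    "drive failure, no backup":             (0.050, 2000),
    "cache-related corruption, backed up":  (0.010, 50),
}

def expected_loss(p, impact):
    return p * impact

ranked = sorted(threats.items(),
                key=lambda kv: expected_loss(*kv[1]),
                reverse=True)
for name, (p, impact) in ranked:
    print(f"{name}: ${expected_loss(p, impact):.2f}/yr")
```

On these numbers, the unbacked-up drive dominates everything else, which is why "back up first" is better advice than "don't use the cache."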
> If someone is smart enough to handle caching and all the possible issues, then he would not post such questions; and if he does, it's probably not a good idea to advise him to use caching at all. The general rule is "Afraid? Don't use it!", especially when it's in BETA.

This product is drawing a lot of enthusiasts, and it's clear they want to learn. Who are you to judge whether someone you know nothing about is ready for that or not? You think you are helping people by telling them to go away because they don't understand how things work; you don't either. Should I tell you to go away and not mess with this beta? You aren't helping someone learn by scaring them away from experimenting. The only thing a user really needs to be told is that they must have their data backed up before they start testing this product. Anyone who seeks out a product like this has broken, or will break, their machine at one time or another, and in the process of figuring out how to fix it, they'll learn how computers work. That's the very soul of being a computer hobbyist: build it, tweak it, break it, fix it, and repeat.