Re: What the people are asking for + a few others.
Posted: Wed May 09, 2012 12:38 pm
Two new items added: Defer write times and GUI interface request.
support wrote: Thank you, Mradr. I have set this topic sticky at top of lists.

Np. Let me know when you get them finished via the PM system, if you like. That way people will know what will be coming in the next release and which issues are already fixed for it.
PtDragon wrote: Dynamic pool sounds good (hope there will be an option to configure limits, not only priority).

Yes. I think I need to do a better example of what was being asked for.
Mradr wrote: 1) Persistent L2.

Complete this feature alone and you've got my money. Obviously it's not so easy, since you're a layer on top of the filesystem, which can be changed "offline" without FC's knowledge. I'm crossing my fingers.
Mradr wrote: 3) Pool Dynamic Priority Percent Base L1.

Using L1 as a write cache is simply too risky for me personally. As a read cache, I just don't see the point, especially since RAM is volatile and not persistent between boots. Windows + SuperFetch are going to do a better job for most workloads. Also, Windows will automatically give you the pooling behavior: it uses all RAM based on the priority of the files accessed. If FancyCache is caching something in L1 and Windows already has the file in memory (on the Standby list, for instance), then you've wasted 2x the RAM for nothing. For those interested, this waste of memory can be demonstrated with RAMMap (Sysinternals).
4) Set L2 to Read, Write, or R/W, and give the option to disable L1 for L2 (Mradr, laferrierejc)
Mradr wrote: 6) Keep-Alive Performance Monitor.

Please consider using the standard Windows performance counters to track FC performance (perfmon, logman). That is, make FC a provider of counter data. Then users can track usage in logs and schedule when to capture data and for how long. Why reinvent the wheel here? There is already a facility for tracking and graphing perf data. With this approach, you could even graph FC hits against actual disk IOPS, memory usage, etc. (thus better showing the incredible benefits of FC). Users could also capture perf logs for scenarios like boot performance (post-login), long before the FC management UI could ever come up.
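For illustration, here is roughly what capturing such counters with the stock tooling could look like once FC exposed them. This is only a sketch: typeperf and the PhysicalDisk counter are real, but the "FancyCache" counter set is hypothetical and would only exist if FC registered it.

[code]
# Sketch: log a real disk counter alongside a hypothetical FancyCache counter
# using typeperf (ships with Windows), producing a CSV for graphing in perfmon.
import subprocess

counters = [
    r"\PhysicalDisk(_Total)\Disk Transfers/sec",  # real, built-in counter
    r"\FancyCache(C:)\L1 Hit Rate",               # hypothetical; FC would have to provide it
]

# One sample per second, 60 samples, CSV output.
subprocess.run(
    ["typeperf", *counters, "-si", "1", "-sc", "60", "-f", "CSV", "-o", "fc_perf.csv"],
    check=True,
)
[/code]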
kalua wrote: Pooling memory with a min/max would be good, so all drives can share from the same pool. As it is now, I give about 16GB of a 24GB system to FC, but that is 8GB to one drive, 7GB to another, and 1GB to the system drive. When I launch something big like a VM, I have to reconfigure FC to use less. A better way would be to tell FC to use a min of 2GB, a max of 12GB, and have it dynamically allocate up to that 12GB only when needed.

There are two levels of dynamic caching going on: one at the pool level and one at the drive level. The pool level has very little user control, as the system/FC handles it; you just set its max size, and the system takes care of how it grows or shrinks as needed. At the drive level, you'll be able to set the dynamic min and max use of the pool. This would let the VM take on more RAM when it needs it, while still making sure the drive keeps its minimum 4GB of the pooled cache RAM. I'll update with a better example later to clear up this misunderstanding; a rough sketch of the idea follows.
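To make the min/max idea concrete, here is a minimal toy model (my own illustration, not FC's actual algorithm, and all the numbers are made up): each drive always keeps its guaranteed minimum, and anything above that is borrowed from, and returned to, the shared pool on demand.

[code]
# Toy model of a shared cache pool with per-drive min/max (illustrative only).

POOL_MAX = 12 * 1024  # MB the user allows FC to take overall

class DriveCache:
    def __init__(self, name, min_mb, max_mb):
        self.name, self.min_mb, self.max_mb = name, min_mb, max_mb
        self.used = min_mb  # each drive always keeps its guaranteed minimum

drives = [DriveCache("C:", 1024, 4096), DriveCache("D:", 2048, 8192)]

def pool_free():
    return POOL_MAX - sum(d.used for d in drives)

def grow(drive, want_mb):
    """Grant extra cache to a drive only while the shared pool has room."""
    grant = max(0, min(want_mb, drive.max_mb - drive.used, pool_free()))
    drive.used += grant
    return grant

def release_for_system(needed_mb):
    """When a VM (or anything big) needs RAM, shrink drives back toward their minimums."""
    for d in drives:
        if needed_mb <= 0:
            break
        give = min(d.used - d.min_mb, needed_mb)
        d.used -= give
        needed_mb -= give
[/code]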
kalua wrote: L2 writable would also be helpful in a way you might not have thought of. In particular, I would love to use less-expensive, lower-power spinning hard drives of large capacity. I can't, because they are slower. A large FC read cache makes them very usable, but the slow write speeds mean the write cache, limited to RAM, fills quickly. I realize that with WD Black and other 7200 RPM drives, using a conventional SSD that is only two times faster than the spinning drive for L2 cache may not give you much improvement if L2 were writable. But when the spinning media is a 5400 RPM low-power drive and your L2 cache is on a high-end SSD (like a Revo X2 or a RAID-0), I believe the huge difference in write speeds (my L2 cache disk writes over 8 times faster than my other disks) would make writable L2 a big plus.

Yup, the same idea, I guess: a sort of software RAID that takes on less risk, since the data is saved in a 1+0-style configuration (note: don't confuse this with RAID 0 or RAID 0+1). All the speed, with lower risk of data loss. Mainly, instead of the main disk getting hit with all the write requests, the other drive can take on "half" of them. In the case of an SSD, the SSD can write faster than a HDD and provide almost 3-4x the performance, since it can absorb more write requests than the HDD can. Even if the SSD were just another HDD, the gain would still be there, because the main drive could finish its write requests and get to the read requests that much faster.
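Something like this toy model is the shape of the idea (not FC's code, just my own sketch): writes are acknowledged as soon as they land in the fast queue, which stands in for an SSD-backed L2, while a background thread trickles them to the slow disk, leaving it free to serve reads in the meantime.

[code]
# Toy write-back model: ack writes on the fast device, drain to the slow one later.
import queue, threading, time

write_log = queue.Queue()

def write(block, data):
    # Fast path: queue the write (stand-in for persisting to an SSD L2) and return.
    write_log.put((block, data))

def flusher(stop):
    # Background path: trickle queued writes to the slow backing disk.
    while not stop.is_set() or not write_log.empty():
        try:
            block, data = write_log.get(timeout=0.1)
        except queue.Empty:
            continue
        time.sleep(0.01)  # stand-in for a slow HDD write
        write_log.task_done()

stop = threading.Event()
threading.Thread(target=flusher, args=(stop,), daemon=True).start()
[/code]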
"offline mode" likely just refers to any time a disk is mounted when FC isn't running. Such as booting into a different OS partition, or running chkdsk on boot. Alternatively, if your cache was on a USB device and was removed/used elsewhere. The only way around it is likely to be hackery, or a best-effort (not 100% stable) solution. I guess you could write your own filesystem as well.The only danger to this is that if data was changed in "off line mode" then FC might return the wrong data. I question what they mean by that along side that fact when and why data would change in "off line mode,"
mabellon wrote: "offline mode" likely just refers to any time a disk is mounted when FC isn't running. [...]

Well yea, editing data when FC isn't around would refer to "offline mode," but:
More or less. You would just have to force the file system to always be NTFS, even on flash drives, or, as you said, create your own file system. Good luck forcing people to use a whole new file system, lol. NTFS, on the other hand, would be OK, as 90% of drives already use it, I guess.

As for the "persistent cache" part of it:
Even if the data changes, why not check the data's MD5 to make sure it is current? This would increase the program's startup time, so it wouldn't be very useful at startup (i.e., the first run of the program), but it would make "safer" offline changes possible.
Then again, what data is being changed offline? Most users won't be booting into Linux and changing data on a Windows partition for fun. Even if they are, they already know the risk of doing so; if they don't, that is a user failure in how they use their system. Still, the file system should also be updating the Modified or Accessed timestamp on the file, which gives yet another option for a "safer" offline experience, would it not? Roughly, the two checks together could look like the sketch below.
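Here is my own sketch of the two validation ideas combined (not anything FC actually does): the cheap modification-time check runs first, and the slow MD5 pass is optional, since it is too expensive for startup but fine for a background verification scan.

[code]
# Sketch: validate a persistent-cache entry against offline changes using a
# cheap mtime check first, with an optional (slow) MD5 pass for a deep scan.
import hashlib, os

def file_digest(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def cache_entry_valid(path, cached_mtime, cached_md5, thorough=False):
    # Cheap check: if the file's modification time moved, it changed offline.
    if os.stat(path).st_mtime != cached_mtime:
        return False
    # Thorough check: too slow for startup, fine for a background pass.
    return file_digest(path) == cached_md5 if thorough else True
[/code]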