I can only imagine that this hurts. My caches are fairly large, so I usually don't end up in such a scenario. I currently employ a 64 GByte L1 and a ~2 TByte L2 cache.

RobF99 wrote:
Here is a problem that has always existed and is further accentuated now with the L2 write cache.
I have a 30 GB write cache for L2. My L1 is 1 GB. Delayed write is set to 120 seconds, but the problem happens with smaller values and even larger ones, e.g. 600 seconds.
I process a lot of images via batching in Photoshop, e.g. 1000 x 1 GB TIF files. When I save files at the beginning, they save fast because they are writing to L2. But when it comes time for PrimoCache to write the data to the hard drive - whether after 60, 120 or 600 seconds - all other operations on the hard drive stop while it waits to write the data out (sometimes as much as 20 GB). So my batching pauses for up to 20 minutes while it writes the data to the hard drive. I can't even browse a folder or do anything while it writes out the L2 cache. In the end my overall time taken is the same as if I did not use PrimoCache, because no other operations seem to be able to happen while it flushes the write cache. I am using Windows 7 64-bit. The disk queue length is around 5 while it is writing. It looks like this write operation takes "exclusive" priority over the drive as it flushes, and it certainly takes priority over read operations on that same drive until ALL data is flushed.
Initially it will write 4 to 5 files per minute to the L2 cache, but when it flushes I can wait for up to 20 minutes.
Something needs to be done to lower the priority the flush takes over the hard drive. Can this be done as a background task? That might solve the problem.
Anyhow, it sounds a bit like you were writing data at a fairly high rate. When your cache is smaller than the total amount of data written and you are writing faster than the target drive can sustain, caching will never help.
However, you might be right that no write operations toward the L1 cache are accepted while the cache is being flushed to disk. If that is true, it should be changed - obviously. Any block written from L1 or L2 to disk can be freed up, so the flush should not exclusively block everything; it should free up block by block and keep accepting new write operations in the meantime. Those would enter the cache only at the speed of the flush itself, but that would still be better than not accepting data at all. Anyhow, this is really speculation, as I can't reproduce the situation on my system right now. After all, it sounds like your cache is way too small.
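To illustrate the block-by-block idea, here is a minimal toy sketch (not PrimoCache's actual implementation - all names are hypothetical): when the cache is full, it flushes and frees one block instead of stalling, so the incoming write is still accepted.

```python
from collections import OrderedDict

class WriteBackCache:
    """Toy write-back cache: flushing frees capacity block by block,
    so new writes are accepted even mid-flush. Hypothetical sketch only."""

    def __init__(self, capacity_blocks, backing_store):
        self.capacity = capacity_blocks
        self.dirty = OrderedDict()    # block_id -> data, oldest first
        self.backing = backing_store  # dict standing in for the slow disk

    def write(self, block_id, data):
        if block_id not in self.dirty and len(self.dirty) >= self.capacity:
            self.flush_one()          # make room instead of blocking everything
        self.dirty[block_id] = data

    def flush_one(self):
        # Write the oldest dirty block to "disk" and free it immediately.
        block_id, data = self.dirty.popitem(last=False)
        self.backing[block_id] = data

    def flush_all(self):
        while self.dirty:
            self.flush_one()

disk = {}
cache = WriteBackCache(capacity_blocks=2, backing_store=disk)
cache.write(1, b"a")
cache.write(2, b"b")
cache.write(3, b"c")   # cache full: block 1 is flushed, write 3 still accepted
cache.flush_all()      # disk now holds all three blocks
```

The point of the sketch is only the ordering: freeing one block per disk write means incoming writes degrade to disk speed instead of stopping entirely until the whole 20 GB is out.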
Do the math:
How many files are you writing in total?
How big are they?
How long does it take?
What is the maximum speed your (uncached) drive can sustain?
Use e.g. ATTO Disk Benchmark to measure this.
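With your own numbers plugged in, the math above can be done in a few lines. This is just a back-of-the-envelope sketch; the 150 MB/s sustained write speed is an assumed figure - substitute whatever your benchmark reports.

```python
# Back-of-the-envelope math for the batch workload described above.
# 150 MB/s sustained write speed is an ASSUMED figure; replace it with
# what ATTO Disk Benchmark reports for your drive.

files = 1000            # number of files in the batch
file_size_gb = 1.0      # size of each file
drive_mb_per_s = 150.0  # assumed sustained write speed of the target drive
cache_gb = 30.0         # L2 write cache size

total_gb = files * file_size_gb
flush_minutes = total_gb * 1024 / drive_mb_per_s / 60
coverage = cache_gb / total_gb

print(f"Total data written: {total_gb:.0f} GB")
print(f"Time just to write it at {drive_mb_per_s:.0f} MB/s: {flush_minutes:.0f} minutes")
print(f"Cache covers only {coverage:.0%} of the workload")
```

With these assumed numbers, the drive needs nearly two hours of pure writing no matter what the cache does, and a 30 GB cache covers about 3% of a 1000 GB batch - which is why the cache can only delay the stall, not remove it.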
If - after all - the amount of data is so high that it more or less saturates the drive with I/O during the whole batch run, a cache won't really accelerate it, unless the cache is larger than the total size of the files. You may see slightly better-ordered read vs. write operations, but depending on your drive's fragmentation, that would not really help. I strongly recommend defragmenting the drive regularly (every night, for instance); that greatly improves I/O too.