What does this info mean?

FAQ, getting help, user experience about PrimoCache

Re: What does this info mean?

Post by RAMbo »

All tests were done with 60sec deferred write, 32GB cache with 4k blocks.
Roughly the downloading works like this:

A] Several .RAR files are downloaded.
B] Those files get unpacked. -> Writes double
C] .RAR files get deleted.

My aim is to halve the writes.
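
For a rough sense of what halving the writes would mean, here is a back-of-the-envelope sketch in Python. The 9 GB payload is an assumption (it roughly matches the test later in this thread), and it assumes the unpacked data is about the same size as the RAR set:

Code:

# Back-of-the-envelope write arithmetic for the download -> unpack -> delete
# workflow. Assumes (hypothetically) a 9 GB download whose unpacked result is
# about the same size as the RAR set.
payload_gb = 9.0

writes_today = payload_gb + payload_gb   # RARs land on disk, then the unpacked copy does too
writes_goal  = payload_gb                # only the unpacked copy lands if the RARs are absorbed

print(f"Writes without help: {writes_today:.1f} GB")
print(f"Writes if the RARs never hit disk: {writes_goal:.1f} GB")
print(f"Reduction: {1 - writes_goal / writes_today:.0%}")   # -> 50%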

The downloads run on what's also my system disk, so I don't want deferred writes to be very long.
I've been thinking about creating a small partition just for the RAR files, then setting its deferred write as high as possible.

I'm wondering if what I want is even possible.
Write and read speed for that task isn't important because it's just a background job; often I check the results hours after completion.
I download at only 5 MB/s, so that's a speed that can be handled with ease anyway.

If PrimoCache can't handle this situation, I'm considering putting the downloads on an entirely different disk: an HDD, or a cheap SSD that I assume will be burned out in a short* while.

*= That's another question for me. If not caching writes reduces the lifespan to 3 months, I certainly want to avoid that. But if the SSD will still last 3 years, well, who cares? The SSD will be replaced by then anyway...

Re: What does this info mean?

Post by RAMbo »

Jaga wrote: Wed Jan 02, 2019 9:57 am Plus - you really should have a UPS on the machine if you're going to use deferred writes. You can get away with not having one and using a short delay (like 5s), but there's still a small amount of risk if the machine BSODs.
You are right in the case where the SSD is also a system disk (it is).
That's why, once testing is completed, I will do one of the following:

a] Buy an SSD for temp files only.
b] Buy an HDD for temp files only.
c] Partition the current SSD and use one partition for temp files only.

Power failure is extremely rare. BSODs are less rare, but much better on Win 10 than ages ago on Win ME :-)
In case of a crash I just redownload, so I have no worries about that at all.

Re: What does this info mean?

Post by Jaga »

RAMbo wrote: Wed Jan 02, 2019 10:18 am All tests were done with 60sec deferred write, 32GB cache with 4k blocks.
Roughly the downloading works like this:

A] Several .RAR files are downloaded.
B] Those files get unpacked. -> Writes double
C] .RAR files get deleted.

My aim is to halve the writes.

The downloads run on what's also my system disk, so I don't want deferred writes to be very long.
I've been thinking about creating a small partition just for the RAR files, then setting its deferred write as high as possible.

I'm wondering if what I want is even possible.
Write and read speed for that task isn't important because it's just a background job; often I check the results hours after completion.
I download at only 5 MB/s, so that's a speed that can be handled with ease anyway.

If PrimoCache can't handle this situation, I'm considering putting the downloads on an entirely different disk: an HDD, or a cheap SSD that I assume will be burned out in a short* while.

*= That's another question for me. If not caching writes reduces the lifespan to 3 months, I certainly want to avoid that. But if the SSD will still last 3 years, well, who cares? The SSD will be replaced by then anyway...
What you want sounds possible, yes; in fact it's very similar to what I do with multi-part FTP downloads. A single FTP download is broken up into ~50 parts, which all start downloading into a temp directory that has deferred writes enabled on it. When they are complete, the FTP software assembles them onto the long-term storage volume and removes them from the temp drive. Due to the long deferred-write time, they usually never hit the temp drive. I use either a 300 or 600 s deferred-write time to accomplish this, and I usually see massive Trimmed Blocks numbers because of it.

Note: with my configuration on that machine, I have 32GB of RAM and a ~20GB L1 cache task set (the L2 didn't do deferred writes in the past). The files I download usually fit into the L1, though sometimes I go over and see actual writes.

If your .RAR file download was onto the temp drive, and the unpack was also there, and your deferred write time was long enough to cover the entire set of 3 operations, then you'd never see writes to the actual temp drive. Your cache size would, of course, have to be large enough to hold all the RAR data. Since it's on a boot drive, you might see flushing of other files from the cache, even boot files.
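
To illustrate the timing argument, here is a minimal Python sketch of a deferred-write buffer. It is not PrimoCache's actual algorithm, just the principle: dirty data is held for a fixed window, and anything deleted inside that window never reaches the disk (it shows up as trimmed blocks instead). The file names and sizes are made up.

Code:

# Minimal model of a deferred-write buffer (illustrative only).
class DeferredWriteBuffer:
    def __init__(self, defer_s):
        self.defer_s = defer_s
        self.pending = {}          # file name -> (size in GB, time written)
        self.disk_writes_gb = 0.0
        self.trimmed_gb = 0.0

    def write(self, name, size_gb, t):
        self.pending[name] = (size_gb, t)

    def delete(self, name):
        size_gb, _ = self.pending.pop(name)
        self.trimmed_gb += size_gb         # dropped before it ever hit the disk

    def flush_due(self, now):
        for name, (size_gb, t_written) in list(self.pending.items()):
            if now - t_written >= self.defer_s:
                self.disk_writes_gb += size_gb
                del self.pending[name]

buf = DeferredWriteBuffer(defer_s=600)
buf.write("parts.rar", 9.0, t=0)      # download completes
buf.write("result.mkv", 9.0, t=120)   # unpack completes two minutes later
buf.delete("parts.rar")               # RARs removed right after unpacking
buf.flush_due(now=900)                # well past the 600 s window
print(buf.disk_writes_gb, "GB written to disk,", buf.trimmed_gb, "GB trimmed")
# -> 9.0 GB written to disk, 9.0 GB trimmed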

You could use either an L1 to do it, or an L2. SSDs are cheap enough that burning one out as an L2 drive in ~3 years isn't that big of a cost. I think I purchased a 512GB Samsung SSD with decent speeds for a little over $100 a year ago, and it's still going strong as the L2 for the entire server. Now that PrimoCache supports TRIM, it should be even less of a problem, but you'll still want to over-provision the SSD by leaving ~10% unprovisioned space at the end of the drive.

Since you have no UPS, just make sure you have a good system recovery DVD for whatever backup software you're using. It's possible to corrupt any part of the drive if Windows/Primocache has a problem, including the MFT/etc. My server doesn't crash either (even being W7), but I still get good backups every other day on it.

Will be interesting to hear your test results.

Re: What does this info mean?

Post by Jaga »

RAMbo wrote: Wed Jan 02, 2019 10:26 am That's why, once testing is completed, I will do one of the following:

a] Buy an SSD for temp files only.
b] Buy an HDD for temp files only.
c] Partition the current SSD and use one partition for temp files only.
I went with a fully dedicated SSD for the L2 volume. If you burn out your boot SSD by heavily using a separate volume on it for L2, then you'll ultimately need to rebuild/restore the system drive.

Re: What does this info mean?

Post by RAMbo »

I did 2 more tests.
I downloaded the same 9.09 GB from Usenet twice.
Looks like I'll have to look for another setup, as this does no good for Usenet.


CACHE OFF
Volume (C:)
2019-01-02 21:53:08
-------------------
Total Read 2.07GB
Cached Read 0 (0.0%)
L2Storage Read 0 (0.0%)
L2Storage Write 0
Total Write (Req) 15.73GB
Total Write (L1/L2) 164.31MB / 0
Total Write (Disk) 15.73GB (100.0%)
Urgent/Normal 0 / 0
Deferred Blocks 0 (0.0%)
Trimmed Blocks 0
Prefetch Inactive
Free Cache (L1) 24.74GB
Free Cache (L2) 0
Cache Hit Rate 0.00%



CACHE ON
Volume (C:)
2019-01-02 20:48:28
-------------------
Total Read 507.33MB
Cached Read 491.46MB (96.9%)
L2Storage Read 0 (0.0%)
L2Storage Write 0
Total Write (Req) 15.94GB
Total Write (L1/L2) 15.94GB / 0
Total Write (Disk) 13.23GB (83.0%)
Urgent/Normal 0 / 13.23GB
Deferred Blocks 697826 (8.3%)
Trimmed Blocks 1417
Prefetch Inactive
Free Cache (L1) 22.17GB
Free Cache (L2) 0
Cache Hit Rate 96.87%
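
Reading those two dumps side by side, the cache only absorbed a modest share of the writes. A quick Python check of the numbers quoted above:

Code:

# Numbers copied from the "CACHE ON" dump above.
req_writes_gb  = 15.94    # Total Write (Req)
disk_writes_gb = 13.23    # Total Write (Disk)

saved_gb = req_writes_gb - disk_writes_gb
print(f"Absorbed by the cache: {saved_gb:.2f} GB ({saved_gb / req_writes_gb:.0%} of requested writes)")
# -> about 2.71 GB, roughly 17%; far short of the hoped-for 50%.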

Re: What does this info mean?

Post by RAMbo »

Jaga wrote: Wed Jan 02, 2019 8:20 pm If your .RAR file download was onto the temp drive, and the unpack was also there, and your deferred write time was long enough to cover the entire set of 3 operations, then you'd never see writes to the actual temp drive. Your cache size would, of course, have to be large enough to hold all the RAR data. Since it's on a boot drive, you might see flushing of other files from the cache, even boot files.
Generally speaking I'm no fan of having many drives (but I still do...). It's not an efficient use of space: not of storage space and not of RAM space. I mean, if I have just one drive, the only thing I have to do is decide how much RAM PrimoCache gets.
If I have 2 drives, I have to decide how much RAM PrimoCache gets per disk.
Ideally PrimoCache would automatically manage space per drive and I would only decide how much PrimoCache gets in total. But that's not how it works.


Right now I have 32GB assigned to PrimoCache. It doesn't help at all to avoid writes. But at least it's on the system disk, so the cache benefits all software on the system disk and not just the Usenet software. OK, results won't be so good while downloading, because that wrecks cache performance. But when the downloads are not running, all of that 32GB is dedicated to whatever task I'm running on my C: drive.
If I cache 2 disks and simply dedicate 16GB to each, the situation is like this:
- If I only download 30% of the day, then the 16GB is wasted 70% of the day.
- If other tasks on my system disk are light, they don't need 16GB. The excess would be better used for downloads.

To make my situation easier to understand:
I have 3 disks needing PrimoCache.
- Downloads run, say, 4 hours a day and require truckloads of cache. Seldom below 2GB, an everyday max of 10GB, peaks up to 50GB.
- The system disk runs 24/7. Load varies greatly, and with it the cache demand too, I guess. Not sure when cache becomes useless overkill.
- A disk that's mostly cold storage + some light tasks.

So besides the question of a temp disk, L2 SSD, etc., I can't even figure out how much cache each disk should get.

To make things even worse, that 'cold storage' disk is actually a StableBit DrivePool of 4 disks (duplicated). For me, as a user, that's just one disk. For PrimoCache that's 4 disks. That means I have not 3 but 6 disks to spread my 32GB cache over.
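
For what it's worth, the split I keep trying to do by hand looks something like this. A hypothetical Python sketch, not a PrimoCache feature; the per-disk demand numbers are rough guesses from the description above:

Code:

# Hypothetical helper: split a fixed RAM budget across volumes in proportion
# to a rough estimate of each volume's typical working set (guessed numbers).
def split_budget(budget_gb, demand_gb):
    total = sum(demand_gb.values())
    return {vol: round(budget_gb * d / total, 1) for vol, d in demand_gb.items()}

print(split_budget(32, {"system": 12, "downloads": 10, "pool": 4}))
# -> {'system': 14.8, 'downloads': 12.3, 'pool': 4.9}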



Are you reading, Mr. PrimoCache? :-)
I would pay for a PrimoCache Pro or a separate app that gives me all sorts of stats:
- Reads and writes per disk.
- Cache usage => how often it was full, how often only 50% was in use, etc.
- Performance gain.
- Advice on the best way to spread the 32GB cache over all monitored disks.
- Advice on whatever else you think is good to consider.

Assuming I'm not the only user that's too stupid to use PrimoCache, I think that would be a very worthwhile program to have.

You could use either an L1 to do it, or an L2. SSDs are cheap enough that burning one out as an L2 drive in ~3 years isn't that big of a cost. I think I purchased a 512GB Samsung SSD with decent speeds for a little over $100 a year ago, and it's still going strong as the L2 for the entire server. Now that PrimoCache supports TRIM, it should be even less of a problem, but you'll still want to over-provision the SSD by leaving ~10% unprovisioned space at the end of the drive.
Samsung bundles its Magician software with (some of) its SSDs. When I look in that app it also shows TB written and a total number of days.
Do you happen to know whether that's the number of days it was actually powered on?
It also states how much lifespan is used up. When I extrapolate those numbers, the theoretical lifespan is 1475 TBW.
The Samsung website states the 860 EVO 1TB is rated at 600 TBW and the 2TB version at 1200 TBW. That matches my calculation, because the used-up lifespan is shown without decimals, so it's rounded.
I'm wondering what's right. Are there two generations? It's quite important for me, because if the lifespan is good I might use my system disk as a burn disk. I reinstall Windows (regular install or disk image) at least once a year anyway; if my SSD nears EOL, I'll swap it out before reinstalling Windows.
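
The arithmetic behind that extrapolation, as a Python sketch. The 4% reading is an assumption chosen to land near the 1475 TBW quoted above, since Magician only shows whole percentages; the rounding alone leaves a wide range.

Code:

# Endurance extrapolation from Magician's rounded "lifespan used" figure.
written_tb = 58          # from Magician
days_on    = 341         # from Magician
shown_pct  = 4           # assumed reading; Magician rounds to a whole percent

est_tbw = written_tb / (shown_pct / 100)
low     = written_tb / ((shown_pct + 0.5) / 100)
high    = written_tb / ((shown_pct - 0.5) / 100)
print(f"Extrapolated endurance: ~{est_tbw:.0f} TBW (rounding alone allows {low:.0f}-{high:.0f} TBW)")

# How long the rated 600 TBW would last at the current write rate:
gb_per_day = written_tb * 1000 / days_on
print(f"Write rate: ~{gb_per_day:.0f} GB/day; 600 TBW lasts ~{600_000 / gb_per_day / 365:.1f} years")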


Since you have no UPS, just make sure you have a good system recovery DVD for whatever backup software you're using. It's possible to corrupt any part of the drive if Windows/Primocache has a problem, including the MFT/etc. My server doesn't crash either (even being W7), but I still get good backups every other day on it.
No worries. Everything is backed up daily to two different disks. Data I'm working on and/or that is vital is backed up every hour, keeping 80 generations.

Re: What does this info mean?

Post by Jaga »

RAMbo wrote: Thu Jan 03, 2019 7:30 am Generally speaking I'm no fan of having many drives (but I still do...). It's not an efficient use of space: not of storage space and not of RAM space. I mean, if I have just one drive, the only thing I have to do is decide how much RAM PrimoCache gets.
If I have 2 drives, I have to decide how much RAM PrimoCache gets per disk.
Ideally PrimoCache would automatically manage space per drive and I would only decide how much PrimoCache gets in total. But that's not how it works.
Not sure if you knew (or if I'm understanding you correctly), but PrimoCache can manage multiple drives in the same Cache Task. There's no need to create separate ones - just add multiple volumes to the same task. It's the second button in the toolbar, the blue flower-thingy. PrimoCache will auto-manage just one chunk of RAM for all volumes added to that task.

RAMbo wrote: Thu Jan 03, 2019 7:30 am To make things even worse, that 'cold storage' disk is actually a StableBit DrivePool of 4 disks (duplicated). For me, as a user, that's just one disk. For PrimoCache that's 4 disks. That means I have not 3 but 6 disks to spread my 32GB cache over.
Yeah, I also use DrivePool, on a set of 9x8TB drives. I never cache my data drives (the pool) since the hit ratio on reads for such a large amount of data is minuscule. I suppose I could cache writes, but I'd rather keep my L1 for more important C: data. The FTP trimming I see is just a bonus since that traffic mostly fits into the cache. Now that PrimoCache can do deferred writes on the L2, I may set up an SSD for dedicated Pool write caching - I just hadn't done that in the past.

If I were you, I'd cache everything *except* your Pool drives in your L1, then buy a dedicated SSD for L2 caching of both your Pool and other stuff (you can create multiple L2 volumes on it). That'll get you write caching on the L2 for the Pool, and additional L2 caching on your boot drive (and anything else).

RAMbo wrote: Thu Jan 03, 2019 7:30 am Samsung bundles its Magician software with (some of) its SSDs. When I look in that app it also shows TB written and a total number of days.
Do you happen to know whether that's the number of days it was actually powered on?
It also states how much lifespan is used up. When I extrapolate those numbers, the theoretical lifespan is 1475 TBW.
The Samsung website states the 860 EVO 1TB is rated at 600 TBW and the 2TB version at 1200 TBW. That matches my calculation, because the used-up lifespan is shown without decimals, so it's rounded.
I'm wondering what's right. Are there two generations? It's quite important for me, because if the lifespan is good I might use my system disk as a burn disk. I reinstall Windows (regular install or disk image) at least once a year anyway; if my SSD nears EOL, I'll swap it out before reinstalling Windows.
The reason I don't advocate a separate volume on a boot drive as an L2 zone is that it amplifies writes to a small section of the drive, instead of the whole drive. The lifespan estimates from Samsung are for writes that occur to the entire drive as a whole - if you segment off a small volume and hammer that with writes, it's like using a smaller SSD. And lifespan estimates go up for larger drives, down for smaller ones. You're going to artificially shorten the lifespan of an SSD by limiting heavy write activity to a small section of it. Hope that makes sense.

The SSD I bought for dedicated L2 caching on my server is a Samsung 860 EVO (500GB). The server has been on every day since purchased (I think it was back in ~July, so not quite a year, more like 6 months). It shows Drive Condition as "Good", and Total Bytes Written at 2.8TB. But I'm using half of it (or more) for the L2, so I'm not limiting the writes to such a small space that it would abnormally shorten the lifespan (see my comment above about that). Plus I have it over-provisioned heavily (I think 30% or more is unused at the end), which helps greatly with garbage collection and lifespan. And it hasn't previously been used as an L2 cache against the Pool.

Given that PrimoCache can do deferred writes using the L2 now, I'd expect FAR greater Total Bytes Written if I did use it as a Pool write buffer. For your scenario, if you use just a small portion of your boot SSD for L2 write caching on the pool, it would probably hammer that disk into oblivion very quickly. Definitely get a dedicated SSD for that kind of scenario. ;)

Re: What does this info mean?

Post by RAMbo »

Not sure if you knew (or if I'm understanding you correctly), but PrimoCache can manage multiple drives in the same Cache Task
Must admit I forgot about that, but...
Yeah, I also use DrivePool, on a set of 9x8TB drives. I never cache my data drives (the pool) since the hit ratio on reads for such a large amount of data is minuscule
Because I want write caching only on that pool, I'm forced to use two caches.
But if I access something from that drive, it's usually just a few MBs of the same file over and over again. Doesn't PrimoCache only cache those few blocks?
The reason I don't advocate a separate volume on a boot drive as an L2 zone is that it amplifies writes to a small section of the drive, instead of the whole drive.
I fully understand what you mean, but I don't think SSDs work that way. Aren't they constantly shuffling data around (wear balancing)? A nearly full single-partition SSD would suffer the same fate.

The server has been on every day since purchased (I think it was back in ~July, so not quite a year, more like 6 months). It shows Drive Condition as "Good", and Total Bytes Written at 2.8TB.
Mine has been on for 341 days and is at 58TB.
Mmmm, I just looked at OP (over-provisioning in Magician); it's at 2% and I can only lower it...? It suggests expanding my partition, but I only have one.
Well, strictly speaking 3, but that's how Windows works: 500MB System Reserved + 930GB main + 470MB recovery.
I don't think I've ever seen that last one on HDDs.
I don't understand how any of it relates to over-provisioning, because that's 18.63GB.
For your scenario, if you use just a small portion of your boot SSD for L2 write caching on the pool, it would probably hammer that disk into oblivion very quickly.
I don't see how I would benefit from L2. Doesn't PrimoCache flush the L2 to my main disk every few seconds?
Yeah, I could use a deferred write of a few hours, but that would also affect my system files.

Re: What does this info mean?

Post by Jaga »

RAMbo wrote: Fri Jan 04, 2019 5:10 am
Yeah, I also use DrivePool, on a set of 9x8TB drives. I never cache my data drives (the pool) since the hit ratio on reads for such a large amount of data is minuscule
Because I want write caching only on that pool, I'm forced to use two caches.
But if I access something from that drive, it's usually just a few MBs of the same file over and over again. Doesn't PrimoCache only cache those few blocks?
That's the idea for the read cache, yes. It keeps the most used data available. If you aren't reading a lot of different files on those disks/volumes, it should keep what little you read in the cache.

Also, if you look at the Volume Specifications in the Cache Task, you can turn L1/L2/Defer-Write/Prefetch on or off per volume in the task. That means you could add the Pool drives to the task and specify that they only use the write cache. So your C: drive would be read/write for L1 (and no L2 for C: if that same drive hosts the L2 volume), and the Pool drives (in the same task) could be write-cache only. It's much more configurable than it used to be.

RAMbo wrote: Fri Jan 04, 2019 5:10 am
The reason I don't advocate a separate volume on a boot drive as an L2 zone is that it amplifies writes to a small section of the drive, instead of the whole drive.
I fully understand what you mean, but I don't think SSDs work that way. Aren't they constantly shuffling data around (wear balancing)? A nearly full single-partition SSD would suffer the same fate.
Yes, they do from what I know. However in a much smaller volume (like 30GB out of 500GB) you have much less space to shuffle it around, and stand the chance of running out of free clusters before the next cleanup (TRIM/GC) happens. It depends on how heavily you write to the smaller volume I think. Keeping the drive over-provisioned with a good amount of space helps all volumes significantly, as would making the smaller L2 volume even larger.
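
Rough numbers for that "running out of free clusters" point, as an illustration only; the write rate and free-space figures below are made up, not measured:

Code:

# How long sustained writes take to consume free space before the next
# TRIM/garbage-collection pass (illustrative figures only).
def minutes_to_fill(free_gb, write_mb_s):
    return free_gb * 1024 / write_mb_s / 60

for label, free_gb in [("30 GB L2 volume with 20 GB free", 20),
                       ("500 GB drive with 300 GB free",   300)]:
    print(f"{label}: ~{minutes_to_fill(free_gb, 100):.0f} min at 100 MB/s")
# -> ~3 min vs ~51 min of headroom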


RAMbo wrote: Fri Jan 04, 2019 5:10 am Mine has been on for 341 days and is at 58TB.
Mmmm, I just looked at OP (over-provisioning in Magician); it's at 2% and I can only lower it...? It suggests expanding my partition, but I only have one.
Well, strictly speaking 3, but that's how Windows works: 500MB System Reserved + 930GB main + 470MB recovery.
I don't think I've ever seen that last one on HDDs.
I don't understand how any of it relates to over-provisioning, because that's 18.63GB.
The recovery volume exists on most boot drives with Windows; I forget when it was introduced (W8 perhaps). On any Basic Disk you can have either four (4) primary partitions, or three primaries and one Extended (that can contain logicals). Since you already have 3 on your C: drive, you could only make one more primary for the L2, since I don't think we can use Extended logical volumes for L2 targets.

If your recovery is listed last (right-most), then you won't be able to over-provision since you couldn't shrink the volumes to make free space. If however the boot volume (C:) is the right-most, you could shrink it and leave the unused space as the "over-provisioned area". That's really all it is, space that isn't taken up by a volume. The SSD realizes this and uses it for garbage collection, extending the lifespan of the drive.
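
Sizing that unpartitioned gap is simple arithmetic; a small sketch using the drive sizes mentioned in this thread, with the ~10% target quoted earlier as the rule of thumb:

Code:

# Gigabytes to leave unpartitioned for a given over-provisioning target.
def overprovision_gb(drive_gb, target=0.10):
    return drive_gb * target

for drive_gb in (500, 1000):
    print(f"{drive_gb} GB drive: shrink the right-most volume by ~{overprovision_gb(drive_gb):.0f} GB")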


RAMbo wrote: Fri Jan 04, 2019 5:10 am
For your scenario, if you use just a small portion of your boot SSD for L2 write caching on the pool, it would probably hammer that disk into oblivion very quickly.
I don't see how I would benefit from L2. Doesn't PrimoCache flush the L2 to my main disk every few seconds?
Yeah, I could use a deferred write of a few hours, but that would also affect my system files.
I'm not 100% certain how the new deferred-write L2 algorithms work, but I assume they work on the same principle as the L1. So if you set a deferred-write time to say 600 seconds, no writes made to the L2 (if they all fit) would hit the target drive for 10 minutes. When they did get flushed to the target drive, they'd be neatly ordered, and you'd get the benefit of trimmed blocks.

If you're writing out a lot of data to the pool, your L1 would probably suffer since it's not huge to begin with, and you want to keep important read data in there. So from that perspective, you'd want to reserve the L1 only for your boot volume. But using the L2 as a Pool write cache with a long delay would benefit you.

That strategy would end up using two cache tasks: one read/write L1 task for your C: volume, and a separate write-only L2 task for your Pool volumes. The deferred-write time could be different for each.

But that doesn't address your goals with the RARs. Your L1 would have to be large enough to hold all of the downloaded RAR files, and the deferred-write time long enough so that they never hit the C: drive. Does that sound possible with your 32GB of RAM? You could make the PrimoCache block size larger than 4k to save on memory overhead.
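
On the block-size point: the saving comes from having fewer blocks to track. A sketch of the block counts for a 32GB cache; the per-block bookkeeping cost used here is a made-up figure, since PrimoCache's real overhead isn't documented in this thread.

Code:

# Block counts for a 32 GB cache at different block sizes, with a
# hypothetical bookkeeping cost per block.
cache_gb = 32
assumed_bytes_per_block = 32   # made-up overhead figure

for block_kb in (4, 16, 64):
    blocks = cache_gb * 1024 * 1024 // block_kb
    overhead_mb = blocks * assumed_bytes_per_block / 1024 / 1024
    print(f"{block_kb:>2} KB blocks: {blocks:>9,} blocks, ~{overhead_mb:.0f} MB of bookkeeping")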

From what I understand, you're really trying to tackle two goals at once: caching the RARs so they never hit the C: drive, and write-caching the Pool volumes. The question is - how large are the RARs, and will they fit into your L1 and leave room for Windows files to be cached, or do you need to use an SSD to hold them while they're unpacked?

Re: What does this info mean?

Post by RAMbo »

Yes, they do from what I know. However in a much smaller volume (like 30GB out of 500GB) you have much less space to shuffle it around
So wear balancing is per partition? Not disk wide?
The recovery volume exists on most boot drives with Windows; I forget when it was introduced (W8 perhaps)
Could be. I went from Win 7 to Win 10. Even now most of my disks don't have a recovery partition. Perhaps most of them were formatted under Win 7?
But that doesn't address your goals with the RARs. Your L1 would have to be large enough to hold all of the downloaded RAR files, and the deferred-write time long enough so that they never hit the C: drive.
Most downloads are under 32GB, so from that perspective the answer is yes.
I'm more and more convinced this problem can't be tackled.
Let's assume I buy an SSD or HDD just for Usenet. Both my downloaded and unpacked files are on that drive. Sounds like a solution that takes my system disk totally out of the equation, right? It's not, and this is why: when WinRAR unpacks, it uses a temp file on C:. When done, it moves the file to the target. So it is still using my system disk. But yes, writes are halved.

If I store the unpacked files on my system disk, WinRAR still unpacks to a temp file on C: but then simply moves the result to the target directory. So that doesn't take any extra writes (besides updating the FAT).


While this whole discussion is very interesting and I learn from every post, I'm also asking myself: why bother, why even discuss this? You always wanted an SSD for raw speed, and now you spend days discussing ways to avoid using it... :-)