Max size for L2 Cache [Solved]


Max size for L2 Cache

Post by talaverde »

I have eight 1TB SSDs (Samsung 850 Pro) in a RAID 0 on an LSI RAID controller. I tried to set up a 5TB L2 cache (with the other 3TB as over-provisioning). The software would only let me set up about 970GB. Is there a limit to how big the L2 cache can be?

Re: Max size for L2 Cache

Post by talaverde »

FYI, I'm using this 5TB L2 cache on a 5x4TB RAID 5 (18TB) array, with a 32GB L1 cache.

Re: Max size for L2 Cache

Post by Support »

Max L2 Cache size is 2044GB.

Re: Max size for L2 Cache

Post by Axel Mertes »

Hi Talaverde,

Can you explain a bit about what kind of data you are processing and storing on your system?

I am running two servers with PrimoCache Server.

The first server uses PrimoCache 2.4 for read-caching 6 individual RAID5 and RAID6 system partitions from a Fibre Channel SAN, about 64 TByte in total, and also for read/write-caching a single 64 TByte partition which is used as nearline backup, with Syncovery software handling mirroring/versioning. It uses 32 GB RAM L1 and 1 TB L2 SSD (a RAID0 of four Samsung 850 EVOs) for read-caching the 6 smaller RAIDs. Then I use another 32 GB RAM L1 and 1 TB L2 SSD (a RAID0 of four Samsung 850 EVOs) for read/write-caching the single 64 TB RAID6. The overhead for each cache is about 4.5 GB alone, so ~9 GB of RAM is used just for mapping the caches themselves.

Important info:
I am using NTFS volumes formatted with 64KB clusters to minimize the size of the overhead!

The second server has a single 64 TByte RAID6 partition (local SAS RAID) with 96 GB RAM L1 and a 480 GB Optane P900 SSD L2 as a read/write cache. It will soon replace the years-old SAN Fibre Channel infrastructure. The overhead is about 3.5 GB.

The second server also uses NTFS volumes formatted with 64KB clusters to minimize the size of the overhead.
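For anyone wanting to sanity-check the block-size effect, here is a rough back-of-envelope in Python. The bytes-per-block index cost is an assumed illustrative figure (PrimoCache does not publish the exact value, and it varies by version), so treat the absolute numbers loosely; the point is that the overhead scales inversely with the cache block size.

Code: Select all

# Rough estimate of the RAM needed to map an L2 cache, as a function of
# cache block size. BYTES_PER_BLOCK is an assumption for illustration,
# not a published PrimoCache constant.
BYTES_PER_BLOCK = 32

def index_overhead_gb(cache_gb, block_kb, per_block=BYTES_PER_BLOCK):
    """Approximate index overhead in GB for a cache of cache_gb gigabytes."""
    blocks = cache_gb * 1024 * 1024 / block_kb   # number of cache blocks
    return blocks * per_block / 1024 ** 3

for block_kb in (4, 16, 64, 512):
    print(f"1 TB L2 @ {block_kb:>3} KB blocks -> "
          f"~{index_overhead_gb(1024, block_kb):.2f} GB RAM")

With these assumed numbers, going from 4 KB to 64 KB blocks cuts the index from roughly 8 GB to roughly 0.5 GB; the absolute figures won't match my 4.5 GB exactly, but the direction of the saving is the same.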

Given the size relation between your L1 and L2 caches and your RAID, I think you might be better off using a tiered ReFS Storage Spaces volume that combines the SSD RAID and the HDDs into one single volume. However, that method eats a lot of disk space, depending on your redundancy level. I'd also consider using a redundant RAID level with the SSDs in such a scenario.

So to better understand what you are aiming for, it would be interesting to learn what kind of data you are dealing with and how much data is accessed in a typical day.

We found that we touch roughly 1-2 TB of data per day at most, often less. We repeatedly overwrite image sequences using a render farm on a 10 Gbit Ethernet network and replay them from the server, usually immediately after rendering, repeatedly. So we decided to put a cache in place that roughly covers this amount of data, to make the system feel like it runs from SSD alone, with only rare reads from the source RAIDs.

In your case, the 8 TB RAID0 is more than what PrimoCache can currently utilize. You could create two partitions on that RAID: one 2 TByte partition for the SSD cache and the rest as a true SSD storage partition. However, the latter would be insecure without redundancy. So using the 8 SSDs to create e.g. a RAID5 or RAID6 volume and then creating two partitions (2 TB cache + 6 TB storage volume) might be a good idea; see the capacity sketch below. I would also recommend increasing the L1 cache size or system memory if possible. Which block size are you using for formatting the volume, and how big is the overhead?
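As a quick illustration of that partitioning arithmetic, here is a small sketch (simple parity model only; it ignores controller metadata and GB/GiB rounding) of what the 8 x 1 TB SSDs yield under each RAID level once a ~2 TB cache partition is carved out:

Code: Select all

# Usable capacity of 8 x 1 TB SSDs under common RAID levels, and the
# storage left after carving out a ~2 TB PrimoCache L2 partition.
# Simple parity arithmetic only; real arrays lose a bit more.
DRIVES, DRIVE_TB, CACHE_TB = 8, 1, 2

def usable_tb(level):
    if level == "RAID10":
        return DRIVES * DRIVE_TB / 2          # mirrored pairs
    parity = {"RAID0": 0, "RAID5": 1, "RAID6": 2}[level]
    return (DRIVES - parity) * DRIVE_TB       # capacity lost to parity

for level in ("RAID0", "RAID10", "RAID5", "RAID6"):
    total = usable_tb(level)
    print(f"{level:>6}: {total:4.0f} TB usable -> "
          f"{CACHE_TB} TB cache + {total - CACHE_TB:.0f} TB storage")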

What do you think?

Cheers
Axel

Re: Max size for L2 Cache

Post by talaverde »

Thanks for the feedback. I'm purely using these arrays for Hyper-V (fixed-size) VHDs. I'm a software consultant, so I have a lot of test environments. I also have all my domain VMs there too (DC, CA, etc.). I also have some 2TB VHDs for local copies of my Dropbox, OneDrive and Google Drive, so basically file storage in VHDs. I originally had the SSDs in a RAID 10, using Windows Server's version of ReFS. The SSD write-performance degradation was horrendous, as TRIM isn't passed through an LSI RAID controller. I decided a volatile cache on RAID 0 would give better performance, and I could create more over-provisioning for Samsung's built-in garbage collection.

The 18TB RAID5 uses WD Black 7200RPM drives. I have another 16TB RAID6 with 3TB WD Reds. I also have two 10TB WD Reds. If I need to expand my storage space, I'll likely create a RAID 5 by adding some more 10TB WD Reds.

I have a total of 128GB RAM. I have room to add another 256GB RAM, but not the budget currently. I suppose I could break up the SSDs into two 4x1TB RAID0s with 50% over-provisioning, then focus more on adding RAM to increase the L1 cache.

I'm currently testing the MaxVeloSSD cache (trial period), using a 32GB RAM cache and a 5TB SSD cache. It's definitely much faster than it was without the caching software. The problem with this one is that it's going to cost over $300 if I end up buying it. I might be better off using PrimoCache and spending that extra $$ on more RAM.

I wonder why PrimoCache would only let me do a 960GB RAM cache? Maybe because it was the trial version?

Re: Max size for L2 Cache

Post by Support »

talaverde wrote (Fri Nov 30, 2018 1:41 am): I wonder why PrimoCache would only let me do a 960GB RAM cache? Maybe because it was the trial version?
I think you are talking about the SSD cache, not the RAM cache, right?
The trial version has the same full functionality as the registered version, so it also supports an ~2TB SSD cache.
PrimoCache needs a dedicated partition for the SSD cache, so first make sure that this partition's capacity is about 2TB.
A 2TB SSD cache needs a certain amount of memory overhead; to avoid using too much memory, make sure that the cache block size is not too small.
If you still have the problem, please upload a screenshot of your cache configuration dialog for reference.

Re: Max size for L2 Cache

Post by zeroibis »

Just want to point out that to avoid the RAID TRIM issues, you should be looking at software-based and not hardware-based RAID. TRIM should pass through to your drives as long as they are connected via an HBA. You should be able to set your RAID card to HBA mode; if not, I recommend buying an HBA card.

I would highly recommend exploring software RAID solutions; the hardware RAID controller is going the way of the dinosaurs. Also, good luck trying to recover a RAID 5 array with drives larger than 4TB: the odds are not in your favor, nor is the time, for that matter. Even with RAID 6 on 10TB drives, you might as well give up on trying to recover, given the rebuild time. Some people use things like SnapRAID to get around this, but if you need real-time redundancy, mirroring is the only salvation when the drives get very large.
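To put rough numbers behind that warning, here is the classic back-of-envelope rebuild-risk estimate. The unrecoverable-read-error (URE) rate of 1 in 10^14 bits is the commonly quoted consumer-drive spec and is an assumption here; enterprise drives are often rated 10x better, which changes the odds dramatically.

Code: Select all

import math

# P(at least one unrecoverable read error) during a rebuild that must
# read data_tb terabytes. Assumes independent errors at a flat URE
# rate, which real drives only approximate.
URE_RATE = 1e-14   # assumed: 1 error per 1e14 bits (consumer spec)

def rebuild_failure_prob(data_tb, ure=URE_RATE):
    bits = data_tb * 1e12 * 8
    return 1 - math.exp(-bits * ure)   # Poisson approximation

# RAID5 of 5 x 4 TB: rebuilding reads the 4 surviving drives (16 TB).
print(f"5 x 4 TB RAID5 rebuild: ~{rebuild_failure_prob(16):.0%} chance of a URE")
# Larger drives make it worse: a 7 x 10 TB array reads 60 TB to rebuild.
print(f"7 x 10 TB RAID5 rebuild: ~{rebuild_failure_prob(60):.0%} chance of a URE")

By that estimate, a single-parity rebuild on multi-terabyte drives is more likely than not to hit a URE, which is the arithmetic behind the "mirroring is the only salvation" advice.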