PrimoCache Server in combination with an Intel 900P Optane card

FAQ, getting help, user experience about PrimoCache
Jaga
Contributor
Posts: 692
Joined: Sat Jan 25, 2014 1:11 am

Re: PrimoCache Server in combination with an Intel 900P Optane card

Post by Jaga »

Axel Mertes wrote: Sat Oct 27, 2018 9:03 am We will get our new server next week with a single Optane 900P 480 GB card caching a 64 TB RAID-6 volume. We will give it a thorough test drive with the latest server beta, including write caching. Can't wait to see how far I can push network transfer peaks.

A second Optane is planned, depending on the results.
What kind of data population do you have on the RAID-6 volume? The reason I ask is that from a read-caching perspective, I typically never go below a 5% cache-to-data size ratio. Ideally it's more like 10%, which means a 480 GB cache would only cover (for read purposes) roughly a 5 TB data store.
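(As a rough back-of-the-envelope check of that rule of thumb, here is a small Python sketch. The 480 GB cache and 64 TB volume are the figures from the posts above, the 5%/10% ratios are Jaga's, and the decimal GB/TB conversion is a simplification.)

```python
# Rough sizing check for the 5%-10% read-cache rule of thumb quoted above.
cache_gb = 480      # Optane 900P capacity from the thread
volume_tb = 64      # RAID-6 data volume from the thread

for ratio in (0.05, 0.10):
    covered_tb = cache_gb / 1000 / ratio   # data size a cache of this ratio can reasonably serve
    needed_gb = volume_tb * 1000 * ratio   # cache size needed to hit this ratio on the full volume
    print(f"at {ratio:.0%}: {cache_gb} GB covers ~{covered_tb:.1f} TB of data; "
          f"{volume_tb} TB of data would want ~{needed_gb:.0f} GB of cache")
```

So 480 GB lands at roughly 0.75% of a 64 TB store, well under the 5% mark, which is exactly the concern raised above.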

If just using the card for deferred writes, then it will be interesting to see what you get.
Axel Mertes
Level 9
Posts: 180
Joined: Thu Feb 03, 2011 3:22 pm

Re: PrimoCache Server in combination with an Intel 900P Optane card

Post by Axel Mertes »

We do film post-production, so it's a lot of streaming data and even more image sequences in HD, 4K and higher.

We use a small render farm, so there is regularly heavy I/O stress on the system, with hundreds of cores requesting identical source data while simultaneously writing very different target files. Usually the rendered sequences are played back repeatedly shortly after rendering them. This is where a cache, and especially the Optane 900P, will do well. As far as I know, Beta 3 brings the ability to keep just-written data in the read cache; I requested this often, as it's a dramatic improvement over v2.4. The size is currently limited to 480 GB; before, we had 2 TB using four SATA SSDs in a RAID-0 configuration. The idea is to add another Optane in the short term, or to replace it with a future bigger/faster model. Performance should go up with the Optane anyway.

The only question is whether the size is sufficient for daily operation without reloading from the source disk too often.

We use NTFS and defragment the volume daily. Undelete Server helps keep it smooth and protected. HDD performance is maximized this way.
Axel Mertes
Level 9
Posts: 180
Joined: Thu Feb 03, 2011 3:22 pm

Re: PrimoCache Server in combination with an Intel 900P Optane card

Post by Axel Mertes »

So, I have the new server set up and the Optane 900P is running.

I am wondering about the performance values that I am seeing.

Can it be that data that has just been saved to disk with write caching enabled is not immediately available for reading from the cache?
I thought that had been changed in Beta 3.x compared to 2.4?

Ideally, I would love to have one large cache that serves as read and write cache at the same time, handled with e.g. a least-used, first-out strategy. Then it doesn't matter whether a block was cached for reading or writing; it's just cached.
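(For illustration only, here is a minimal Python sketch of that kind of unified cache: a single block pool with least-recently-used eviction, where it makes no difference whether a block entered via a read or a write. The class and the read_from_disk callback are invented for the example; this is not how PrimoCache is implemented.)

```python
from collections import OrderedDict

class UnifiedBlockCache:
    """One block pool shared by reads and writes, with LRU eviction.
    Purely illustrative -- not PrimoCache's internal design."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()                  # block_number -> (data, dirty_flag)

    def write(self, block_no, data):
        self._insert(block_no, data, dirty=True)     # deferred write lands in the same pool

    def read(self, block_no, read_from_disk):
        if block_no in self.blocks:
            self.blocks.move_to_end(block_no)        # mark as recently used
            return self.blocks[block_no][0]          # hit, even if the block arrived via a write
        data = read_from_disk(block_no)              # miss: fetch from the backing volume
        self._insert(block_no, data, dirty=False)
        return data

    def _insert(self, block_no, data, dirty):
        self.blocks[block_no] = (data, dirty)
        self.blocks.move_to_end(block_no)
        while len(self.blocks) > self.capacity:
            victim_no, (victim_data, victim_dirty) = self.blocks.popitem(last=False)
            if victim_dirty:
                pass  # a real cache would flush victim_data to disk before dropping it
```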

Right now it feels like it still re-reads the data from the original source disk.

@support:
Is that true?
Support
Support Team
Posts: 3623
Joined: Sun Dec 21, 2008 2:42 am

Re: PrimoCache Server in combination with an Intel 900P Optane card

Post by Support »

Axel Mertes wrote: Thu Nov 15, 2018 3:55 pm Can it be that data that has just been saved to disk with write caching enabled is not immediately available for reading from the cache?
Make sure that the option "Free Cache on Written" in Defer-Write advanced settings is not checked.
Axel Mertes wrote: Thu Nov 15, 2018 3:55 pm Ideally, I would love to have one large cache that serves as read and write cache at the same time, handled with e.g. a least-used, first-out strategy. Then it doesn't matter whether a block was cached for reading or writing; it's just cached.
If data is in the cache, then when Windows or an application requests to read it, PrimoCache always gets it from the cache, even if it is in the write-cache space.
Individual read/write cache space is just there to prevent reads and writes from disturbing each other. If you want one large cache, you can simply uncheck the option "Individual Read/Write Cache Space" in the L1 and L2 advanced settings.
Axel Mertes
Level 9
Posts: 180
Joined: Thu Feb 03, 2011 3:22 pm

Re: PrimoCache Server in combination with an Intel 900P Optane card

Post by Axel Mertes »

@Support

I have "Free Cache on Written" not checked.
I have "Individual Read/Write Cache Space" not checked.
I have a block size of 64 KB - the volumes are formatted using that block size too.
I have "Enable Defer-Write" enabled with a latency setting of 60 seconds and write mode "Native".
Would you recommend using a different mode than "Native" for my scenario of mostly image sequences and streaming video files of large size (4K content)?
  • Advanced Defer-Write Options:

    Write Mode: Specifies the behavior of writing deferred write-data to the underlying disk.

    • Native: Starts flushing all deferred data to the disk each time a time interval specified by Latency expires.
    • Intelligent: Native mode plus writing 10%~20% of deferred data into disk on Windows idle when deferred data amount reaches 90% of cache size.
    • Idle-Flush: Native mode plus writing all deferred data into disk when Windows is idle.
    • Buffer: Native mode plus writing deferred data into disk on Windows idle to keep 80% of cache free for new data when deferred data amount reaches 40% of cache size.
    • Average: Averaging the amount of deferred data over a period of time and smoothly writing data into disk to avoid sudden heavy disk activities.
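(To make the differences between these modes concrete, here is a rough, illustrative Python model of the decision logic as described in the excerpt above. The thresholds come from that text; the function name, the parameters, and the simplifications, such as treating "Intelligent" as a flat 15% flush and "Average" as a steady trickle, are invented for the sketch.)

```python
def deferred_flush_fraction(mode, deferred_bytes, cache_size_bytes, latency_expired, windows_idle):
    """Illustrative-only model of the defer-write modes quoted above.
    Returns roughly what fraction of the currently deferred data would be flushed now."""
    fill = deferred_bytes / cache_size_bytes     # how full the write cache is

    if latency_expired:                          # every mode starts from Native behaviour:
        return 1.0                               # flush everything when the Latency timer fires

    if mode == "Native":
        return 0.0                               # Native flushes only on the latency timer
    if mode == "Intelligent":
        # flush a ~10-20% slice on Windows idle once deferred data reaches 90% of the cache
        return 0.15 if (windows_idle and fill >= 0.90) else 0.0
    if mode == "Idle-Flush":
        return 1.0 if windows_idle else 0.0      # flush all deferred data whenever Windows is idle
    if mode == "Buffer":
        # on idle, write back enough to keep ~80% of the cache free once deferred data hits 40%
        return (fill - 0.20) / fill if (windows_idle and fill >= 0.40) else 0.0
    if mode == "Average":
        return 0.10                              # modelled as a small steady trickle to smooth writes
    raise ValueError(f"unknown write mode: {mode}")
```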
When outstanding write cache data is flushed to disk, will PrimoCache then automatically free up that cache block immediately, or will it still be present until another read request pulls it or new write cache data needs to overwrite it, on a least used, first overwritten policy?
I think it would make a lot of sense to keep write cache data even AFTER the flush-to-disk operation for as long as possible, for potential read requests...

Which of the modes would you recommend in my scenario with HD/UHD image sequences and streaming video files?

I have switched to "Average" for now to see what changes.
Support
Support Team
Posts: 3623
Joined: Sun Dec 21, 2008 2:42 am

Re: PrimoCache Server in combination with an Intel 900P Optane card

Post by Support »

For your scenario, I would prefer individual read/write cache space, so that identical source data does not easily get pushed out of the cache by the large amount of write streams.
Usually 60 s is too long for a heavy write load. You may check the cache statistics to see if there were "Urgent" writes. If you see urgent writes, it means the write buffer is too small for the latency period. You may either reduce the latency or increase the write-cache buffer.
An L1 write cache is also recommended for best write performance, as RAM write speed is much faster than SSD.
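(The relationship described here can be sanity-checked with simple arithmetic. In the Python sketch below, the 800 MB/s write rate and the 32 GB write-cache share are invented example values; only the 60 s and 10 s latencies and the notion of "urgent" writes come from the thread.)

```python
# Rough check of whether a defer-write buffer can absorb a sustained write load
# for a full latency period.  If the data accumulated within one latency window
# exceeds the write-cache size, urgent (forced) writes become likely.
write_rate_mb_s = 800     # hypothetical sustained render-farm write throughput
write_cache_gb = 32       # hypothetical write-cache share of the L2 cache

for latency_s in (60, 10):
    accumulated_gb = write_rate_mb_s * latency_s / 1000
    fits = accumulated_gb <= write_cache_gb
    print(f"latency {latency_s:>2} s: ~{accumulated_gb:.0f} GB accumulates before the flush -> "
          f"{'fits in the write cache' if fits else 'overflows it, urgent writes likely'}")
```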
Axel Mertes
Level 9
Posts: 180
Joined: Thu Feb 03, 2011 3:22 pm

Re: PrimoCache Server in combination with an Intel 900P Optane card

Post by Axel Mertes »

I can see large amounts of urgent writes; see here:

https://www.dropbox.com/s/a68a307eu12tq ... 7.png?dl=0

I changed the defer-write latency setting to 10 s to see if that's better.
I will consider using individual cache space. If using individual cache space, will blocks that have just been written and still exist in the write cache space be available for cached reading if requested shortly after?
Consider rendering an image sequence and reviewing it immediately after rendering. That's 90% of the workload here.

You did not answer this question yet:

When outstanding write cache data is flushed to disk, will PrimoCache then automatically free up that cache block immediately, or will it still be present until another read request pulls it or new write cache data needs to overwrite it, on a least used, first overwritten policy?
I think it would make a lot of sense to keep write cache data even AFTER the flush-to-disk operation for as long as possible, for potential read requests...


You can also see that I have L1 enabled and that it helps mostly with deferred writes. This volume is not yet in use for image sequence rendering; it is currently being used as a backup mirror. I will switch it over to rendering use early next week to get a better idea of the daily-use scenario.

Here are some numbers from real workload drives, both on v2.4 server:

https://www.dropbox.com/s/d3te4606aqdrr ... 7.png?dl=0

https://www.dropbox.com/s/silra59uwepi7 ... 7.png?dl=0
>11 million trimmed blocks...

and the new mirror drive with 3.0.2 beta server:

https://www.dropbox.com/s/a68a307eu12tq ... 7.png?dl=0


Thanks for your support.
minhgi
Level 10
Posts: 255
Joined: Tue May 17, 2011 3:52 pm

Re: PrimoCache Server in combination with an Intel 900P Optane card

Post by minhgi »

Hi Axel,

That cache behavior occurs when you have no L1 write cache enabled. Every deferred write will hit the L2 cache before getting flushed to the hard drive, but it is still kept in the cache space and usable immediately.

Ideally, L1 should get flushed to L2 (using it as a write buffer) and then flushed again to the hard drive. All written data at the L2 level should be readily available for use as read cache unless the data is old and gets replaced with newer data.
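(A much simplified Python sketch of the tiered write path described here: an L1 RAM buffer in front of the L2 SSD cache in front of the disk, with flushed data remaining readable from L2. It illustrates the idea only and is not PrimoCache's actual implementation; all names are invented.)

```python
class TieredWriteBack:
    """Illustration of the L1 -> L2 -> disk write path described above."""

    def __init__(self):
        self.l1 = {}      # small, fast RAM write buffer: block -> data
        self.l2 = {}      # larger SSD (Optane) cache: block -> data
        self.disk = {}    # backing RAID volume

    def write(self, block, data):
        self.l1[block] = data              # deferred writes land in RAM first

    def flush_l1_to_l2(self):
        self.l2.update(self.l1)            # L1 acts as a write buffer for L2
        self.l1.clear()

    def flush_l2_to_disk(self):
        self.disk.update(self.l2)          # data stays in L2 after the flush,
                                           # so it can still serve later reads

    def read(self, block):
        for tier in (self.l1, self.l2, self.disk):   # recently written blocks are
            if block in tier:                        # served without touching the disk
                return tier[block]
        return None
```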
Support
Support Team
Posts: 3623
Joined: Sun Dec 21, 2008 2:42 am

Re: PrimoCache Server in combination with an Intel 900P Optane card

Post by Support »

Axel Mertes wrote: Sat Nov 17, 2018 4:14 am If using individual cache space, will blocks that have just been written and still exist in the write cache space be available for cached reading if requested shortly after?
Yes. The data will always be there, ready for future reading, as long as those blocks are not reused for caching new data.
Axel Mertes wrote: Sat Nov 17, 2018 4:14 am When outstanding write cache data is flushed to disk, will PrimoCache then automatically free up that cache block immediately, or will it still be present until another read request pulls it or new write cache data needs to overwrite it
It will still be there. We do not discard cached data unless those blocks are needed for other data.