
[2016-12-07] PrimoCache 2.7.0 released!

Posted: Wed Dec 07, 2016 4:27 am
by Support
We're glad to announce that PrimoCache Desktop Edition 2.7.0 has been released!

You may download it from the following page.
https://www.romexsoftware.com/en-us/pri ... nload.html

For the change log, please see
https://www.romexsoftware.com/en-us/pri ... gelog.html

Thank you for all your kind support!

Re: [2016-12-07] PrimoCache 2.7.0 released!

Posted: Wed Dec 07, 2016 10:21 am
by manus
When do you think we will get an update for the server edition?

Re: [2016-12-07] PrimoCache 2.7.0 released!

Posted: Fri Dec 09, 2016 4:10 pm
by Support
Currently all updates since v2.4.0 target the Windows 10 series only. For Windows server systems prior to Server 2016, you may simply use v2.4.0.
For Windows Server 2016, we have so far not seen the problems that occurred in Windows 10, so you can still use v2.4.0 if it has been working well.
We have also built the corresponding versions for the server edition and will provide the links later if you're interested in them.
Thanks.

Re: [2016-12-07] PrimoCache 2.7.0 released!

Posted: Tue Dec 13, 2016 6:58 am
by rutra80
L2 WO behaviour seems to be working well, thanks!

Re: [2016-12-07] PrimoCache 2.7.0 released!

Posted: Fri Jan 06, 2017 8:08 am
by points
support wrote:Currently all updates since v2.4.0 target the Windows 10 series only. For Windows server systems prior to Server 2016, you may simply use v2.4.0.
For Windows Server 2016, we have so far not seen the problems that occurred in Windows 10, so you can still use v2.4.0 if it has been working well.
We have also built the corresponding versions for the server edition and will provide the links later if you're interested in them.
Thanks.
Actually there is a major issue: on a workstation with 2.7, a 2048 MB L1 cache with a 4 KB cluster size shows a memory overhead of 452 MB.
On the server version with the same settings I see an overhead requirement of 4.73 GB!
That is more than tenfold!

I believe one of the major improvements in the versions since 2.4 was that this overhead has been reduced.
This is urgently needed for the server version as well.

Re: [2016-12-07] PrimoCache 2.7.0 released!

Posted: Sat Jan 07, 2017 2:12 am
by Support
@points, the overhead amount is related not only to the L1/L2 cache size but also to the target volume's capacity. We can confirm that there are no changes to the overhead between 2.4 and 2.7.

Re: [2016-12-07] PrimoCache 2.7.0 released!

Posted: Sat Jan 07, 2017 3:05 pm
by points
@support: Thank you for the information.
Volume size, workstation: 400 GB -> overhead: 452 MB
Volume size, server: 1 TB -> overhead: 4.73 GB

That's still a disproportionate increase in overhead (a 2.5x larger volume with roughly 10x the overhead) and should be addressed.
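As a rough back-of-the-envelope check (the per-block figures below are inferred from the numbers above, not documented values), the implied metadata cost per 4 KB block differs by roughly 4x between the two editions:

```python
# Infer the per-block metadata cost from the reported overhead figures.
# This assumes the overhead scales with the number of cache blocks on the
# target volume (volume size / block size); PrimoCache's actual internal
# layout is undocumented. GB/TB are treated as binary units here.

GiB = 1024 ** 3
MiB = 1024 ** 2
KiB = 1024

def bytes_per_block(volume_bytes, block_bytes, overhead_bytes):
    """Implied metadata bytes per cache block."""
    blocks = volume_bytes // block_bytes
    return overhead_bytes / blocks

# Workstation (2.7): 400 GB volume, 4 KB blocks, 452 MB overhead
ws = bytes_per_block(400 * GiB, 4 * KiB, 452 * MiB)

# Server (2.4): 1 TB volume, 4 KB blocks, 4.73 GB overhead
srv = bytes_per_block(1024 * GiB, 4 * KiB, 4.73 * GiB)

print(f"workstation: ~{ws:.1f} bytes/block")   # ~4.5 bytes/block
print(f"server:      ~{srv:.1f} bytes/block")  # ~18.9 bytes/block
```

A 2.5x larger volume alone would only explain about 2.5x more overhead, so on these figures the server build also appears to spend roughly 4x more metadata per block.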

Re: [2016-12-07] PrimoCache 2.7.0 released!

Posted: Sun Jan 08, 2017 2:12 am
by Support
@points, can you upload screenshots of the PrimoCache main dialog showing the target volumes and cache settings for your workstation and server? Thanks.

Re: [2016-12-07] PrimoCache 2.7.0 released!

Posted: Sun Jan 08, 2017 6:23 am
by points
The server and workstation are running different PrimoCache configurations, which makes them harder to compare.
Can you try to reproduce the issue on your own machine with a 2.4 server version and a 2.7 workstation version using identical settings? All you have to do is open the settings dialog, enter identical data and observe the stated memory overhead.
If both show identical memory overhead, then the issue is that too much overhead is required as the disks grow larger.
4 GB of overhead for a 2 GB cache seems somewhat excessive.

Re: [2016-12-07] PrimoCache 2.7.0 released!

Posted: Mon Jan 09, 2017 3:57 am
by Support
A larger target disk requires more overhead memory. Options such as defer-write and prefetch also increase the overhead. We can confirm that both 2.4 and 2.7 use the same code to handle the overhead.

BTW, we have considered many designs to reduce the overhead, but these methods bring a performance penalty. In version 3.x we'll strike a balance between performance and overhead. For now, a bigger block size is still an effective way to greatly reduce the overhead.
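As a minimal sketch of that trade-off (the 5-byte per-block metadata cost is a hypothetical placeholder, not PrimoCache's actual figure), doubling the block size halves the number of blocks to track and therefore roughly halves the overhead:

```python
# Estimate cache metadata overhead for different block sizes.
# PER_BLOCK_BYTES is an assumed figure for illustration only.

GiB = 1024 ** 3
MiB = 1024 ** 2
KiB = 1024

PER_BLOCK_BYTES = 5  # hypothetical metadata cost per cache block

def estimated_overhead(volume_bytes, block_bytes):
    """Overhead grows with the block count, i.e. volume size / block size."""
    return (volume_bytes // block_bytes) * PER_BLOCK_BYTES

volume = 1024 * GiB  # 1 TB target volume
for block_kib in (4, 16, 64, 256):
    ov = estimated_overhead(volume, block_kib * KiB)
    print(f"{block_kib:>3} KB blocks -> ~{ov / MiB:,.0f} MB overhead")
```

Under this assumption, going from 4 KB to 16 KB blocks cuts the estimated overhead to a quarter, which is why a bigger block size helps so much on large volumes.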