The paging file is about speed: RAM is still faster than my 120GB SSD, and it also doesn't eat up to 4GB of my 120GB SSD.
The pagefile is designed to work at HDD speed; it doesn't do big, fast reads and writes. You'll see the same R/W speeds that you saw on your SSD, which is not very much (mine is around 5 MB/s right now). One of the things the pagefile does is protect the OS against the volatility of data held in RAM. When you put your pagefile in RAM you are removing a layer of protection that helps ensure system stability. The space you are saving on your SSD is less valuable to your system than the stability the pagefile provides over time.

Think on this: if this were a good option, don't you think the people at Microsoft, who are experts on this, would have moved it to RAM already? The pagefile isn't constantly being hammered with writes; it's more of a trickle. You can test this by putting the pagefile on a separate disk and running perfmon to log the transfer rates.
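If you'd rather script that check than set up perfmon, here is a rough Python stand-in (not perfmon itself; it assumes the third-party psutil package is installed) that samples the same per-disk write counters once a second:

```python
# Rough stand-in for a perfmon transfer-rate log: samples the OS disk
# I/O counters once per second and prints the write rate per disk.
# Requires the third-party psutil package (pip install psutil).
import time
import psutil

INTERVAL = 1.0  # seconds between samples

prev = psutil.disk_io_counters(perdisk=True)
while True:
    time.sleep(INTERVAL)
    curr = psutil.disk_io_counters(perdisk=True)
    for disk, now in curr.items():
        if disk not in prev:
            continue
        rate = (now.write_bytes - prev[disk].write_bytes) / INTERVAL
        print(f"{disk}: {rate / 1e6:.2f} MB/s written")
    prev = curr
```

Leave it running while you use the machine normally and you'll see the trickle for yourself.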
http://en.wikipedia.org/wiki/Paging
I was planning to try it for DLNA playback, but it seems to slow streaming down, so that was the end of my test of using it for DLNA streaming.
Those Western Digital Green drives do have aggressive power saving, so that could be your problem. However, they should handle video streaming no problem. You might experience a lag if the drives are spun down, but FancyCache probably won't help you much with that; it wouldn't come into effect until you tried to watch the movie a second time. In my experience, 99% of the time it's the network that is the culprit. If you're using Wi-Fi, that is probably where your problems are coming from. Wi-Fi is slow and prone to interference from other networks, microwaves, walls, etc., and DLNA adds another layer of latency and network overhead.
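A quick way to rule the network in or out is to read a big file from the server over the same path the player uses and see what throughput you sustain. Here's a rough Python sketch; the share path below is made up, so point it at a real file on your server:

```python
# Quick check of sustained read throughput over the network path,
# to compare against the stream's bitrate.
import time

PATH = r"\\mediaserver\videos\sample.mkv"  # hypothetical share path
CHUNK = 1024 * 1024  # read 1 MB at a time

start = time.perf_counter()
total = 0
with open(PATH, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.perf_counter() - start
print(f"Read {total / 1e6:.0f} MB in {elapsed:.1f} s "
      f"({total * 8 / elapsed / 1e6:.1f} Mbit/s sustained)")
```

If the sustained rate is below the video's bitrate, the network is your bottleneck, not the drives.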
Since I know the smaller sectors have better performance but eat up more CPU resources.
It's actually the opposite. Small sectors waste less space when saving data to a disk, but the smaller they are, the slower they get, and they use more CPU (though that again is a non-issue because CPUs are so powerful). Think about it like making a phone call: you take your phone out of your pocket, enter your password to unlock it, open the phone app, then the contact list, scroll to the contact, hit call, talk to the other person, and then go through the process of hanging up and putting your phone away. So 512KB is when you make one call and talk for 10 minutes, going over everything before you hang up; with 4KB you are making a separate call for each sentence. All the tasks that take place to make a data transfer possible are called the overhead. The more calls you make, the more overhead you have to deal with.
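To put rough numbers on the phone-call analogy, here's a quick Python sketch counting how many separate "calls" (I/O operations) it takes to move 100 MB at each block size. The fixed per-call cost is an assumption purely to illustrate scale, not a measured value:

```python
# How many I/O "calls" does it take to move 100 MB at each block size?
# PER_CALL_OVERHEAD_US is an illustrative assumption, not a measurement.
TRANSFER = 100 * 1024 * 1024     # 100 MB to move
PER_CALL_OVERHEAD_US = 10        # assumed fixed cost per I/O, in microseconds

for block in (4 * 1024, 256 * 1024, 512 * 1024):
    calls = TRANSFER // block
    overhead_ms = calls * PER_CALL_OVERHEAD_US / 1000
    print(f"{block // 1024:>4} KB blocks: {calls:>6} calls, "
          f"~{overhead_ms:.1f} ms of pure overhead")
```

At 4KB that's over 25,000 calls for the same 100 MB that 512KB moves in 200, which is where the extra CPU time goes.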
My question on block size was why FancyCache defaults the 2TB to 512KB sectors and the 1TB to 256KB.
You really have to read up on how drives work in order to understand this. The super-duper simplified answer is: block size multiplied by the number of blocks equals the size of the disk. So a 2TB drive needs larger block sizes to keep the number of blocks addressable; in this case around 4 million. Each device involved in your storage system could be optimized for a different block size. Your storage controller might prefer 256KB, your drives 512KB, and you might find that you get the best performance at 128KB for some reason.
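If you run that arithmetic on the two defaults from your question, both work out to the same block count, which may be why FancyCache scales block size with capacity, though that's a guess on my part, not something documented. A quick Python sketch:

```python
# Block size times block count equals disk size, so the block count
# behind each FancyCache default falls out directly.
TB = 1024 ** 4
KB = 1024

for disk_size, block_size in ((2 * TB, 512 * KB), (1 * TB, 256 * KB)):
    blocks = disk_size // block_size
    print(f"{disk_size // TB} TB at {block_size // KB} KB blocks "
          f"-> {blocks:,} blocks")
```

Both lines print 4,194,304 blocks, i.e. the same amount of bookkeeping per cached disk.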
So how and why FancyCache selects different block values for the various drives, and whether those are good default or starting values, really was not answered.
This product needs to be manually configured. Picking the wrong block size can seriously degrade performance on certain transfer sizes, so you really do have to run a battery of disk benchmarks in order to find out what is going on with your machine. Your other choice is to take a guess and see if it feels faster; if it does and you're happy, mission accomplished!
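If you'd rather benchmark than guess, here's a crude Python sketch that writes a test file at several block sizes and reports throughput. It's only illustrative: OS caching will skew the numbers and a dedicated benchmark tool will do a better job. The file name and sizes are arbitrary; run it on the drive you plan to cache:

```python
# Crude block-size benchmark: write a test file at several block sizes
# and report throughput. Illustrative only; OS caching skews results.
import os
import time

TEST_FILE = "bench.tmp"
FILE_SIZE = 256 * 1024 * 1024  # 256 MB test file

for block_kb in (4, 64, 128, 256, 512):
    block = block_kb * 1024
    buf = os.urandom(block)
    start = time.perf_counter()
    with open(TEST_FILE, "wb") as f:
        for _ in range(FILE_SIZE // block):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force the data out of the OS cache
    elapsed = time.perf_counter() - start
    print(f"{block_kb:>3} KB blocks: {FILE_SIZE / elapsed / 1e6:.0f} MB/s write")

os.remove(TEST_FILE)
```

Whichever block size wins on your hardware is your starting point, not whatever the default happens to be.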
You say not to use defer write, but all the guides on all the sites I have read say to set a 10-second write defer.
Right, and they are giving you bad advice. Write defer is only useful in certain specific situations, and you need the experience to understand what those are.
Think about it this way: let's say a competitive shooter removes the safety so they can get a faster firing rate. They can do this fairly safely because they have an intimate knowledge of gun safety. But when you do it, you increase your chances of an accidental misfire, and you're still not able to shoot much faster because you don't know the techniques it takes to take advantage of the faster firing rate.
In the end it will take the same amount of time to write the data to the disk, but you'll be putting it more at risk. You see the transfer meter disappear quicker, but all you've done is move the data from one part of memory to another and hide the transfer. It's a zero-sum game; you don't gain anything.
Most applications are not bottlenecked when writing to the disk, and they don't really transfer that much data. Perfmon is the tool you'll want to use to witness this: set up a few counters and then run your app.
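If you'd rather script it than click through perfmon, here's a rough Python equivalent that watches how much a single app actually writes per second. It assumes psutil is installed, and the process name is just an example:

```python
# Watch a single application's write rate, in the spirit of the perfmon
# counters mentioned above. Requires psutil (pip install psutil).
import time
import psutil

TARGET = "notepad.exe"  # example process name; change to your app

procs = [p for p in psutil.process_iter(["name"])
         if p.info["name"] == TARGET]
if not procs:
    raise SystemExit(f"{TARGET} is not running")

proc = procs[0]
prev = proc.io_counters()
while True:
    time.sleep(1.0)
    curr = proc.io_counters()
    print(f"{TARGET}: {(curr.write_bytes - prev.write_bytes) / 1e6:.2f} MB/s")
    prev = curr
```

Run your app through its normal workload and you'll usually see the write rate sit near zero, which is why deferring those writes buys you nothing.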
http://portal.smartertools.com/KB/a66/h ... ation.aspx
Now, I do use write defer myself, but I am using it to address a major write-performance issue with triple-redundant Storage Spaces (similar to RAID) on eight 1TB HDDs, and that is a very specific situation. I wouldn't use it for general computing.