[Logo] Terracotta Discussion Forums (LEGACY READ-ONLY ARCHIVE)
Messages posted by: Chris3x
Profile for Chris3x -> Messages posted by Chris3x [12]
I really need to solve this problem. Ideas?
So I found the offline compactor; when I run it over my cache data it does significantly reduce its size. So I think we can assume that the compactor threads are never running for whatever reason. Is there any way to programmatically trigger the compaction? Or do I need to roll my own?
I worked out how to change the compaction policy and tried SizeBasedCompactionPolicy, but it just chewed up all the disk quickly.
So my optimisation has drastically reduced the rate of growth of my diskstore, but it is still growing.

I've also confirmed I'm not mistakenly creating more keys; all I'm doing is updating existing elements.

Is there a way to determine what policy the compactor is running with?
There are multiple caches.

I was running refreshes roughly every 5 minutes. I did have a look through the code and found that it was doing a lot more cache writes than I expected when doing updates. I've optimised that (don't delete/write if I don't have to, etc.), so I will let you know how that goes after testing.

The SelfPopulatingCaches don't actually have much in them, but their refreshes update some elements in other caches (pretty convoluted, I know; it just evolved that way when I moved to BigMemory).
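The "don't write if I don't have to" optimisation can be sketched as a compare-before-put guard. This is a minimal, cache-agnostic sketch: a plain ConcurrentHashMap stands in for the cache, and the `putIfChanged` helper is illustrative, not Ehcache API — the point is only that an unchanged value produces no write (and so no disk-store journal append):

```java
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

public class CompareBeforePut {
    // Stand-in for the cache; every put here would otherwise be a journal append.
    static final Map<String, String> cache = new ConcurrentHashMap<>();

    // Only write when the value actually changed, so an unchanged refresh
    // costs no disk write at all. Returns true if a write happened.
    static boolean putIfChanged(String key, String newValue) {
        String old = cache.get(key);
        if (Objects.equals(old, newValue)) {
            return false; // identical value: skip the write entirely
        }
        cache.put(key, newValue);
        return true;
    }

    public static void main(String[] args) {
        System.out.println(putIfChanged("k1", "v1")); // first write: true
        System.out.println(putIfChanged("k1", "v1")); // unchanged: false, no write
        System.out.println(putIfChanged("k1", "v2")); // changed: true
    }
}
```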
My test isn't writing keys in a tight loop; I'm running the refresh method on the SelfPopulatingCache fairly often.
The thread dump says it's in the run method of class CompactorImpl at line 125.
Checking out my test rig via JMX reports 2 compactor threads; they spend 99.9%+ of their time in a wait state. This does look like a good lead.
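This kind of thread-state check can also be done in-process with the stdlib ThreadMXBean instead of an external JMX client. A minimal sketch — the "Compactor" name fragment is an assumption about how the compactor threads are named; adjust it to whatever the JMX console actually shows:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class CompactorThreadCheck {
    // Dump every live thread, print any whose name contains the given fragment
    // along with its state (WAITING, TIMED_WAITING, RUNNABLE, ...), and
    // return how many matched.
    static int countMatching(String nameFragment) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        int matches = 0;
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            if (info.getThreadName().contains(nameFragment)) {
                System.out.println(info.getThreadName() + " -> " + info.getThreadState());
                matches++;
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        // "Compactor" is an assumed thread-name fragment, not a documented name.
        countMatching("Compactor");
    }
}
```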
Thanks for the feedback; I'll try to attach with JMX and see what the compactor threads are doing.


The access pattern is just updating a small subset of random keys and occasionally adding new ones. My test system is just updating the same keys over and over.

So you're saying that any update should grow the diskstore, but the compactor should come along and reduce it again? I'm seeing rapid growth (once every cache is initialised I'm using about 8 GB of disk, but it doesn't take many updates to take that out to 20 GB+, with only ~1% of keys updated and no significant growth in value sizes AFAIK).
So, still struggling with this; I thought I would add some config details:

updateCheck="false"
maxBytesLocalOffHeap="10g"
overflowToDisk="false"
overflowToOffHeap="true"

then the caches have eternal="true" and maxEntriesLocalHeap set, and synchronousWrites="false"
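Assembled into a single ehcache.xml, the settings above would look roughly like this. This is a sketch only: the cache name, heap size, and diskstore path are placeholders, and I'm assuming the BigMemory 4.x `<persistence strategy="localRestartable">` form (which supersedes the legacy `overflowToDisk` attribute) for the restartable disk store:

```xml
<ehcache updateCheck="false" maxBytesLocalOffHeap="10g">
  <!-- Restartable (disk-backed) store location; path is a placeholder -->
  <diskStore path="/var/data/bigmemory"/>

  <cache name="exampleCache"
         eternal="true"
         maxEntriesLocalHeap="10000"
         overflowToOffHeap="true">
    <!-- synchronousWrites="false": updates are journalled asynchronously -->
    <persistence strategy="localRestartable" synchronousWrites="false"/>
  </cache>
</ehcache>
```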


Could eternal="true" be causing the diskstore to continue to grow? My in-memory side isn't noticeably increasing.
I'm using BigMemory 4.04.

I tried switching to synchronous writes, but it still keeps growing with updates.

Any ideas how I can debug the issue?
I've run into a similar issue (cache size was exploding as I was building up the caches), and came to the same conclusion that it must be some sort of journalling issue. I rewrote the initialization to avoid lots of small changes to existing cache elements (instead inserting each element only once it was fully formed), which seemed to fix that. The issue I have now is that as updates trickle in, the diskstore slowly grows (disproportionately more than I'm inserting) until it runs the machine out of disk space.


The obvious thing that occurs to me is to turn off the async writes for the diskstore (assuming our guess about what's going wrong is correct, i.e. that multiple changes to an element before the first change can be flushed to disk are causing this behaviour).

Any advice would be welcome. I would also like this mechanism explained some more, because I assume you must have some sort of compaction scheme to deal with this, and I'm wondering why it isn't working for me (since I don't deal with a lot of updates once initialised).
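The initialization rewrite described above (batch up all the small changes locally, then insert once fully formed) can be sketched cache-agnostically. Here a plain Map stands in for the cache and a write counter shows how many journal appends each approach would cost; the class and method names are illustrative, not Ehcache API:

```java
import java.util.HashMap;
import java.util.Map;

public class InsertOnceFullyFormed {
    static final Map<String, Map<String, Integer>> cache = new HashMap<>();
    static int writes = 0; // each put would be one append to the disk-store journal

    static void put(String key, Map<String, Integer> value) {
        cache.put(key, new HashMap<>(value));
        writes++;
    }

    // Naive: mutate the cached element field by field -> one journal append per field.
    static void naiveInit(String key, int fields) {
        put(key, new HashMap<>());
        for (int i = 0; i < fields; i++) {
            Map<String, Integer> v = new HashMap<>(cache.get(key));
            v.put("field" + i, i);
            put(key, v); // re-put after every small change
        }
    }

    // Batched: build the element fully off to the side, then insert exactly once.
    static void batchedInit(String key, int fields) {
        Map<String, Integer> v = new HashMap<>();
        for (int i = 0; i < fields; i++) {
            v.put("field" + i, i);
        }
        put(key, v); // single write, single journal append
    }

    public static void main(String[] args) {
        naiveInit("a", 10);
        int naiveWrites = writes;
        writes = 0;
        batchedInit("b", 10);
        System.out.println("naive=" + naiveWrites + " batched=" + writes);
    }
}
```

With an append-only journal, the naive path writes a fresh copy of the element on every small change, which matches the "disproportionate" growth described above; the batched path costs exactly one append per element.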
 
Powered by JForum 2.1.7 © JForum Team