[Logo] Terracotta Discussion Forums (LEGACY READ-ONLY ARCHIVE)
Keys in Terracotta Disk Store remain alive forever!
Forum Index -> Ehcache
Author Message
iacolucc

neo

Joined: 04/14/2011 09:39:14
Messages: 5
Offline

Hi all, I'm new to distributed Ehcache with a Terracotta server. I ran a simple test using Ehcache and a Terracotta server.
I configured Ehcache with this XML configuration snippet:

<cache name="NPAT_CACHE"
       maxElementsInMemory="10"
       maxElementsOnDisk="20"
       eternal="false"
       timeToIdleSeconds="30"
       timeToLiveSeconds="0"
       memoryStoreEvictionPolicy="LFU" >

<terracotta consistency="eventual" synchronousWrites="false" />

</cache>

I inserted 20 objects into the cache and saw 20 objects in the On-disk Size column of the Terracotta developer console.

Then I inserted another 20 objects into the cache and saw 40 objects in the On-disk Size column.

I waited several minutes and the disk store size stayed the same!

Then, in Java code, I fetched some keys from the cache using getKeys() from the Ehcache API.

Why do the object keys still remain in the disk store even though the objects have expired?

Please help me.

Thank you

Marco Iacolucci
gkeim

ophanim

Joined: 12/05/2006 10:22:37
Messages: 685
Location: Terracotta, Inc.
Offline

The default storageStrategy is now "DCV2" but if you change it to "classic" you'll see the expected behavior. DCV2 is relatively new and we're still having conversations about how its time-based and size-based management should work. DCV2 is meant for very large caches and some of these management strategies can have significant performance impact.

For now, if you really need to strictly limit the number of elements on the Terracotta Server Array, you must use the classic storageStrategy.

<terracotta storageStrategy="classic"/>


Gary Keim (Terracotta developer)
ssubbiah

jedi

Joined: 05/24/2006 14:25:22
Messages: 117
Location: Saravanan Subbiah
Offline

Let me try to explain how things work in DCV2. As Gary said, it is highly tuned for significantly large caches.

By default DCV2 has 2048 segments. Each segment acts as an individual unit, and the cache's maxElementsOnDisk setting is divided down to the segment level. Setting a number as low as 20 for maxElementsOnDisk makes each segment's limit 1, since it can't go any lower than that. So even when you add 40 elements, they are probably all hashed to different segments and never trigger eviction. You can work around this by reducing the number of segments, i.e. lowering the concurrency in your config to, say, 1 or 2.
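The per-segment arithmetic above can be sketched in a few lines. This is a minimal illustration, assuming (as described in the post) that each segment's capacity is maxElementsOnDisk divided by the segment count, floored at 1; it is not the actual Terracotta implementation.

```java
// Sketch of DCV2's per-segment capacity (assumption: cap = max(1, maxElementsOnDisk / segments)).
public class SegmentCapacity {
    static int perSegmentCap(int maxElementsOnDisk, int segments) {
        return Math.max(1, maxElementsOnDisk / segments);
    }

    public static void main(String[] args) {
        // With the default 2048 segments, a cap of 20 floors to 1 per segment,
        // so 40 entries hashed to distinct segments never trigger eviction.
        System.out.println(perSegmentCap(20, 2048)); // prints 1
        // With concurrency="1" the single segment carries the whole cap.
        System.out.println(perSegmentCap(20, 1));    // prints 20
    }
}
```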

Also, DCV2 expires elements lazily based on tti/ttl. This means we don't unnecessarily scan for expired elements: an element is only checked when it is accessed, or when the number of elements overshoots far beyond maxElementsOnDisk. This too is done for performance reasons.
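The lazy tti/ttl check described above amounts to a simple predicate evaluated at access time. This is a sketch of the standard Ehcache expiry semantics (a value of 0 means "no limit"), not the internal DCV2 code.

```java
// Sketch of lazy TTI/TTL expiry: the check runs only when an element is touched.
public class LazyExpiry {
    static boolean isExpired(long nowMs, long createdMs, long lastAccessMs,
                             long ttiSeconds, long ttlSeconds) {
        // In Ehcache configuration, tti/ttl of 0 means "never expire on that clock".
        boolean idleExpired = ttiSeconds > 0 && nowMs - lastAccessMs > ttiSeconds * 1000;
        boolean liveExpired = ttlSeconds > 0 && nowMs - createdMs > ttlSeconds * 1000;
        return idleExpired || liveExpired;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // The poster's config: timeToIdleSeconds=30, timeToLiveSeconds=0.
        System.out.println(isExpired(now, now - 60_000, now - 40_000, 30, 0)); // true: idle 40s > 30s
        System.out.println(isExpired(now, now - 60_000, now - 10_000, 30, 0)); // false: accessed 10s ago
    }
}
```

Until something accesses the key (or the store overflows well past maxElementsOnDisk), nothing ever evaluates this predicate, which is why the expired entries stay visible in the On-disk Size column.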

Even though DCV2 is tuned to perform well for large caches out of the box, it's easy to make it behave the way you want with smaller caches. So I would not suggest moving to "classic" mode unless you have a strong reason to do so. Going forward, all new features and improvements will only be supported for DCV2.

Saravanan Subbiah
Terracotta Engineer
iacolucc

neo

Joined: 04/14/2011 09:39:14
Messages: 5
Offline

Thank you all for the prompt responses!

Just to clarify: I'm trying to learn the Terracotta products so I can use them in production for a distributed system that provides phone-traffic services to several telecommunications companies.

The needs (and use cases) are:

- users of the application request phone traffic from our system via a query on an Oracle DB that may return, say, 1000 records; they then typically paginate over that result set 10 records at a time

- each pagination request may land on a different application server instance (so we need cache clustering)

- the result set should remain in the distributed cache for about 30 seconds, then be evicted completely
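One common way to cache a paginated result set like this is to key each page by a hash of the query plus the page number, so a 30-second timeToIdle lets all pages of an abandoned result set fade out together. The `PageKey` helper below is hypothetical (it is not part of the Ehcache API or of the poster's code), just a sketch of the key scheme under that assumption.

```java
import java.util.Arrays;
import java.util.Objects;

// Hypothetical cache-key helper: one cache entry per page of a query's result set.
public class PageKey {
    static String key(String sql, Object[] params, int page) {
        // Same query + same bind parameters -> same prefix; page number distinguishes entries.
        int queryHash = Objects.hash(sql, Arrays.hashCode(params));
        return queryHash + ":" + page;
    }

    public static void main(String[] args) {
        Object[] params = {"2011-04", "39"};
        String k1 = key("SELECT * FROM traffic WHERE month=? AND area=?", params, 1);
        String k2 = key("SELECT * FROM traffic WHERE month=? AND area=?", params, 2);
        System.out.println(k1);
        System.out.println(k2);
    }
}
```

With keys like these, each pagination request (from any app-server instance) does a `cache.get(key)` and falls back to the Oracle query on a miss; the per-element tti resets on each page view, which matches the "evict ~30 seconds after the user stops paging" requirement.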

Given this scenario I built a JMeter test simulating users running queries. Some of these users fetch results and paginate three times, with pauses of a few seconds between pages. Other users run a query and view only the first page.

After two hours or less I observe severe performance degradation and an excessive on-disk size in the developer console. I also see client heap memory growing!

That is why I tried to simplify the configuration and ran manual tests, to understand Terracotta Ehcache behaviour.

So I ask: what is your advice for a use case like this, keeping in mind that the objects to be cached are the result of some complex processing, not objects fetched directly from Hibernate?

I look forward to your response.

Thank you in advance.

Marco
iacolucc

neo

Joined: 04/14/2011 09:39:14
Messages: 5
Offline

Excuse me, I forgot to mention that in my JMeter test I configured Ehcache with high values like:

maxElementsInMemory="1000"
maxElementsOnDisk="100000"

or similar values.

The result was that after some time, performance degraded and heap memory consumption was too high.
iacolucc

neo

Joined: 04/14/2011 09:39:14
Messages: 5
Offline

Hi all,

the problem seems to be solved by using a better configuration.

I ran a similar test with this configuration in the webapp:

<cache name="NPAT_CACHE"
       maxElementsInMemory="50000"
       maxElementsOnDisk="10000"
       eternal="false"
       timeToIdleSeconds="30"
       timeToLiveSeconds="0"
       memoryStoreEvictionPolicy="LFU" >

<terracotta consistency="strong"
synchronousWrites="false" storageStrategy="DCV2" concurrency="1" />

</cache>

Now I see correct behaviour in the developer console: memory is handled stably and the on-disk size stays constant.
On the client side the behaviour is also perfect!

What I would like to understand now is the meaning of the various console indicators, like "Impeding factors", hit ratio, and so on. I don't know the exact meaning of those values.

Thank you for the support.

Marco Iacolucci

ilevy

consul

Joined: 04/16/2008 10:26:42
Messages: 357
Offline

This section has some info on what you see in the dev console:

http://www.terracotta.org/documentation/dev-console.html
iacolucc

neo

Joined: 04/14/2011 09:39:14
Messages: 5
Offline

Hi all,

after a batch of load tests I think the better configuration for my use case (explained above) is the classic storageStrategy, since it seems to give better performance and stability. With the DCV2 strategy my tests always ended with on-disk size saturation and the Terracotta server crashing!

I need more explanation of the Terracotta server's behaviour under the DCV2 strategy.

Here are my configurations (tc-config.xml):

<?xml version="1.0" encoding="UTF-8" ?>
<tc:tc-config xmlns:tc="http://www.terracotta.org/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.terracotta.org/config http://www.terracotta.org/schema/terracotta-6.xsd">

<tc-properties>
<property name="l2.nha.dirtydb.autoDelete" value="true"/>
<property name="l1.cachemanager.enabled" value="true"/>
<property name="logging.maxLogFileSize" value="1024"/>
<property name="ehcache.storageStrategy.dcv2.perElementTTITTL.enabled" value="true" />
</tc-properties>

<system>
<configuration-model>development</configuration-model>
</system>

<servers>

<server host="itnag050" name="itnag050" bind="itnag050">
<data>/appl/presswls/t/data/server-data</data>
<logs>/appl/presswls/t/logs/server-logs</logs>
<index>/appl/presswls/t/logs/server-index</index>
<statistics>/appl/presswls/t/logs/server-statistics</statistics>

<dso>

<garbage-collection>
<enabled>true</enabled>
<verbose>true</verbose>
<interval>120</interval>
</garbage-collection>

</dso>
</server>

<server host="itnag051" name="itnag051" bind="itnag051">

<data>/appl/presswls/t/data/server-data</data>
<logs>/appl/presswls/t/logs/server-logs</logs>
<index>/appl/presswls/t/logs/server-index</index>
<statistics>/appl/presswls/t/logs/server-statistics</statistics>

<dso>

<garbage-collection>
<enabled>true</enabled>
<verbose>true</verbose>
<interval>120</interval>
</garbage-collection>

</dso>
</server>

</servers>

<clients>
<logs>logs-%i</logs>
</clients>

</tc:tc-config>


and ehcache.xml:

<?xml version="1.0" encoding="UTF-8"?>

<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="ehcache.xsd">

<cache name="NPAT_CACHE"
       maxElementsInMemory="50000"
       maxElementsOnDisk="100000"
       eternal="false"
       timeToIdleSeconds="60"
       timeToLiveSeconds="0"
       memoryStoreEvictionPolicy="LFU">

<terracotta consistency="strong" synchronousWrites="false"
storageStrategy="classic" concurrency="1">
</terracotta>
</cache>

<terracottaConfig url="itnag050:9510, itnag051:9510" />

</ehcache>


Any further explanation is welcome!

Especially since in the future I will need configurations for other use cases with much larger caches, and I want to be sure the Terracotta servers don't crash.

Best regards

Marco Iacolucci
rajoshi

seraphim

Joined: 07/04/2011 04:36:10
Messages: 1491
Offline

The issue seems to be resolved. Please let us know if more information is required.

Rakesh Joshi
Senior Consultant
Terracotta.