Terracotta Discussion Forums (LEGACY READ-ONLY ARCHIVE)
Insufficient memory when using localCacheEnabled
Forum Index -> BigMemory
rvansa

journeyman

Joined: 04/05/2013 02:25:59
Messages: 14
Offline

Hi, I've tried to run the TSA with localCacheEnabled, but the performance is very poor - loading ~250 MB of data from 3 client machines into a 4-machine server array (each machine actually runs two Terracotta instances, one primary and one mirror of a server from another machine) takes about 15 minutes. That is way, way slower than what I had with the local cache disabled.

I am getting tons of "Insufficient local cache memory to store the value for key ..." warnings in the client log, but I am not sure why the client doesn't just wait until it has flushed data to the server (clearing space for new data) instead of complaining.

See the configuration:
Code:
 <ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd"
          name="testCacheConfig" updateCheck="false" monitoring="on" maxBytesLocalHeap="64m" maxBytesLocalOffHeap="1m">
    <cache name="testCache"
           eternal="true"
           timeToIdleSeconds="0"
           timeToLiveSeconds="0"
           overflowToDisk="false"
           overflowToOffHeap="true"
           maxBytesLocalHeap="64M">
       <pinning store="localMemory"/>
       <terracotta consistency="strong" localCacheEnabled="true" synchronousWrites="true">
          <nonstop enabled="false"/>
       </terracotta>
       <persistence strategy="distributed"/>
    </cache>
 </ehcache>
 


Code:
 <tc:tc-config xmlns:tc="http://www.terracotta.org/config">
     <servers>
         <mirror-group group-name="group3">
             <!--Host: edg-perf12-->
             <server host="edg-perf12" name="node03">
                 <tsa-port>9510</tsa-port>
                 <tsa-group-port>9530</tsa-group-port>
                 <jmx-port>9520</jmx-port>
                 <data>/tmp/hudson/jdg/data/node03</data>
                 <logs>/tmp/hudson/jdg/logs/node03</logs>
                 <offheap>
                     <enabled>true</enabled>
                     <maxDataSize>1792m</maxDataSize>
                 </offheap>
             </server>
             <!--Host: edg-perf13-->
             <server host="edg-perf13" name="node03_mirror">
                 <tsa-port>9511</tsa-port>
                 <tsa-group-port>9531</tsa-group-port>
                 <jmx-port>9521</jmx-port>
                 <data>/tmp/hudson/jdg/data/node03_mirror</data>
                 <logs>/tmp/hudson/jdg/logs/node03_mirror</logs>
                 <offheap>
                     <enabled>true</enabled>
                     <maxDataSize>1792m</maxDataSize>
                 </offheap>
             </server>
         </mirror-group>
         <mirror-group group-name="group4">
             <!--Host: edg-perf13-->
             <server host="edg-perf13" name="node04">
                 <tsa-port>9510</tsa-port>
                 <tsa-group-port>9530</tsa-group-port>
                 <jmx-port>9520</jmx-port>
                 <data>/tmp/hudson/jdg/data/node04</data>
                 <logs>/tmp/hudson/jdg/logs/node04</logs>
                 <offheap>
                     <enabled>true</enabled>
                     <maxDataSize>1792m</maxDataSize>
                 </offheap>
             </server>
             <!--Host: edg-perf10-->
             <server host="edg-perf10" name="node04_mirror">
                 <tsa-port>9511</tsa-port>
                 <tsa-group-port>9531</tsa-group-port>
                 <jmx-port>9521</jmx-port>
                 <data>/tmp/hudson/jdg/data/node04_mirror</data>
                 <logs>/tmp/hudson/jdg/logs/node04_mirror</logs>
                 <offheap>
                     <enabled>true</enabled>
                     <maxDataSize>1792m</maxDataSize>
                 </offheap>
             </server>
         </mirror-group>
         <mirror-group group-name="group1">
             <!--Host: edg-perf10-->
             <server host="edg-perf10" name="node01">
                 <tsa-port>9510</tsa-port>
                 <tsa-group-port>9530</tsa-group-port>
                 <jmx-port>9520</jmx-port>
                 <data>/tmp/hudson/jdg/data/node01</data>
                 <logs>/tmp/hudson/jdg/logs/node01</logs>
                 <offheap>
                     <enabled>true</enabled>
                     <maxDataSize>1792m</maxDataSize>
                 </offheap>
             </server>
             <!--Host: edg-perf11-->
             <server host="edg-perf11" name="node01_mirror">
                 <tsa-port>9511</tsa-port>
                 <tsa-group-port>9531</tsa-group-port>
                 <jmx-port>9521</jmx-port>
                 <data>/tmp/hudson/jdg/data/node01_mirror</data>
                 <logs>/tmp/hudson/jdg/logs/node01_mirror</logs>
                 <offheap>
                     <enabled>true</enabled>
                     <maxDataSize>1792m</maxDataSize>
                 </offheap>
             </server>
         </mirror-group>
         <mirror-group group-name="group2">
             <!--Host: edg-perf11-->
             <server host="edg-perf11" name="node02">
                 <tsa-port>9510</tsa-port>
                 <tsa-group-port>9530</tsa-group-port>
                 <jmx-port>9520</jmx-port>
                 <data>/tmp/hudson/jdg/data/node02</data>
                 <logs>/tmp/hudson/jdg/logs/node02</logs>
                 <offheap>
                     <enabled>true</enabled>
                     <maxDataSize>1792m</maxDataSize>
                 </offheap>
             </server>
             <!--Host: edg-perf12-->
             <server host="edg-perf12" name="node02_mirror">
                 <tsa-port>9511</tsa-port>
                 <tsa-group-port>9531</tsa-group-port>
                 <jmx-port>9521</jmx-port>
                 <data>/tmp/hudson/jdg/data/node02_mirror</data>
                 <logs>/tmp/hudson/jdg/logs/node02_mirror</logs>
                 <offheap>
                     <enabled>true</enabled>
                     <maxDataSize>1792m</maxDataSize>
                 </offheap>
             </server>
         </mirror-group>
     </servers>
     <clients>
         <logs>/tmp/hudson/jdg/clientlogs/%h</logs>
     </clients>
 </tc:tc-config>
 
Jaza

journeyman

Joined: 04/15/2012 07:17:33
Messages: 28
Offline

Hi, I see two things that might help:

1) Your cache sizing looks wrong. You give each L1 only 65 MB of memory (64 MB heap + 1 MB off-heap) while trying to load 250 MB of total data, and since you pin the cache to local memory it cannot free up local resources.
2) Your consistency settings (strong consistency, synchronous writes) are very strict, especially when you want to pump a lot of data into the cache. You should enable bulk-load mode for that (see the sketch below).
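For the load phase, something roughly like this might behave better - just an untested sketch with illustrative sizes (the CacheManager-level maxBytesLocalHeap/maxBytesLocalOffHeap pools would need raising to match, and, if I remember right, bulk-load mode can also be toggled at runtime via the Ehcache API, setNodeBulkLoadEnabled):
Code:
 <cache name="testCache"
        eternal="true"
        overflowToOffHeap="true"
        maxBytesLocalHeap="128M"
        maxBytesLocalOffHeap="256M">
    <!-- no <pinning>, so the L1 can evict entries and fault them back in from the TSA -->
    <!-- eventual consistency avoids a synchronous server round-trip on every put -->
    <terracotta consistency="eventual" localCacheEnabled="true"/>
 </cache>
 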

Regards,
Jan
rvansa

journeyman

Joined: 04/05/2013 02:25:59
Messages: 14
Offline

Oh, you're right - I left the pinning there from another configuration (without the L1), where I was trying to pin the entries on the server.

Regarding the consistency - I want to use the cache as a big distributed map, not just as some "cache" where entries may get lost or go stale and nothing serious happens.
gyan10

ophanim

Joined: 06/28/2011 23:15:25
Messages: 701
Offline

That means you are not pinning data in this particular config.
You have also set "timeToIdleSeconds" and "timeToLiveSeconds" to 0, which means an element can idle and live forever. I would suggest setting some appropriate times and increasing the local heap size, then checking the results (for example, see the snippet below).
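For example (illustrative values only - note that eternal="true" overrides TTI/TTL, so expiry only takes effect with eternal="false"):
Code:
 <cache name="testCache"
        eternal="false"
        timeToIdleSeconds="300"
        timeToLiveSeconds="3600"
        overflowToOffHeap="true"
        maxBytesLocalHeap="256M">
    <terracotta consistency="strong" localCacheEnabled="true"/>
 </cache>
 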

Gyan Awasthi
Terracotta -A Software AG company
rvansa

journeyman

Joined: 04/05/2013 02:25:59
Messages: 14
Offline

gyan10:

I've set these properties because I really don't want to lose the data, even if it is idle. Therefore, the appropriate time really is infinity. When the data is no longer required it can eventually be moved to other persistent storage, but I want that to be handled by the application, not the server. I am aware of the resources (memory) assigned to the TSA and will never exceed them.

Why should I increase the local heap size? I assume that having the L1 should improve reads compared to having the L1 disabled, but the results are much worse. 64 MB is a wild guess, but it should work (and taking 15 minutes to load 250 MB does not exactly look like "working"); tweaking the optimal value is another matter.

Btw., after I re-ran it without the pinning, there was still no improvement.
gyan10

ophanim

Joined: 06/28/2011 23:15:25
Messages: 701
Offline

rvansa:
With localCacheEnabled set to false, all the data goes to the TSA and nothing stays in the L1.
With localCacheEnabled set to true, your data stays in the L1 heap and off-heap. From your post, what I understand is that you are trying to put 250 MB of data into 64 MB of heap and 1 MB of off-heap, so this is a sizing issue. You need to provide sufficient memory, or some eviction policy, to store that much data. You have also set "timeToIdleSeconds" and "timeToLiveSeconds" to 0, so nothing will be evicted from the L1. That's why I suggested increasing the local heap and off-heap to hold the 250 MB of data. I suggest setting the config below:
overflowToDisk="true"
overflowToOffHeap="true"
maxBytesLocalHeap="150M"
maxBytesLocalOffHeap="200M">
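Applied to the cache element from your first post, that would look roughly like this (other attributes left as you had them; the CacheManager-level maxBytesLocalOffHeap="1m" would also need to be raised to at least 200M, and whether overflowToDisk can be combined with the distributed persistence strategy is worth double-checking in the docs):
Code:
 <cache name="testCache"
        eternal="true"
        timeToIdleSeconds="0"
        timeToLiveSeconds="0"
        overflowToDisk="true"
        overflowToOffHeap="true"
        maxBytesLocalHeap="150M"
        maxBytesLocalOffHeap="200M">
    <terracotta consistency="strong" localCacheEnabled="true" synchronousWrites="true">
       <nonstop enabled="false"/>
    </terracotta>
    <persistence strategy="distributed"/>
 </cache>
 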

Gyan Awasthi
Terracotta -A Software AG company
rvansa

journeyman

Joined: 04/05/2013 02:25:59
Messages: 14
Offline

gyan10:

OK, I probably haven't understood correctly how this works. I thought the L1 works as a local cache that we try to read data from, and if the data is not there (due to space constraints), we fetch it from the TSA instead.
According to the Ehcache docs:
With TSA the data is split between an Ehcache node, which is the L1 Cache and the TSA, which is the L2 Cache. As with the other replication mechanisms the L1 can hold as much data as is comfortable. All the rest lies in the L2. 


From what you say, I understand that all L1 nodes (clients) hold all the data. That does not fit the "scalability" model, does it? Or do I misunderstand the TTI and TTL options? According to the table at http://ehcache.org/documentation/user-guide/configuration#dynamically-changing-cache-configuration, these options apply BOTH to the client and the TSA - after the timeouts expire, the elements could be removed from the TSA as well. I'd be fine with elements being removed from the L1, as long as they can be reloaded from the L2 on demand.

So to say: I want to keep 1 TB of data in the TSA, but on each client I can spare at most 64 MB for the local cache (regardless of heap/off-heap - I actually wouldn't use the off-heap store because of the serialization costs). Data never expires on the TSA. Can I configure that?
gyan10

ophanim

Joined: 06/28/2011 23:15:25
Messages: 701
Offline

No, your understanding is correct: the L1 works as a local cache that we try to read data from, and if the data is not there (due to space constraints), we fetch it from the TSA instead. However, localCacheEnabled="false" means you have disabled the L1 cache, so no data will be stored in the L1.

With localCacheEnabled="true":
once the local caches fill up, data flushes to the L2 (TSA).
Yes, you can keep TBs of data on the L2. You can also pin the cache:

•inCache – Cache data is pinned in the cache, which can be in any tier in which cache data is stored. The tier is chosen based on performance-enhancing efficiency algorithms. This option cannot be used with distributed caches that have a nonzero maxElementsOnDisk setting.
Please read the Terracotta online documentation carefully to understand the configuration details.
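For the scenario you describe (a small L1 per client, everything kept eternally on the TSA), a minimal sketch might look like the config below. The total L2 capacity then comes from the servers' <offheap>/maxDataSize in tc-config rather than from the cache element, and you could additionally add the inCache pinning mentioned above if you want a hard guarantee that nothing is ever evicted from the cluster. Untested, so please verify the attributes against the docs for your version:
Code:
 <cache name="testCache"
        eternal="true"
        maxBytesLocalHeap="64M">
    <!-- no local pinning: the L1 evicts as needed and faults entries back in from the L2 -->
    <terracotta consistency="strong" localCacheEnabled="true" synchronousWrites="true">
       <nonstop enabled="false"/>
    </terracotta>
 </cache>
 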

Cheers!!

Gyan Awasthi
Terracotta -A Software AG company
 