[Logo] Terracotta Discussion Forums (LEGACY READ-ONLY ARCHIVE)
OutOfMemoryError
Forum Index -> Ehcache
Author Message
jandam

neo

Joined: 07/25/2010 05:36:55
Messages: 3
Offline

I'm new to Ehcache. I made a simple test that produces an OOME on the line
Code:
float[] values = new float[1000000];

VM args: -ea -Xmx128M

It looks like the DiskStore isn't fast enough to write the values evicted from the MemoryStore. The DiskStore uses a queue to write data to disk, and the values waiting to be written appear to fill the available memory. Is there any possibility to make the DiskStore block when it reaches some limit of queued values?

Without 'overflowToDisk' everything works perfectly - the cache returns null for evicted items.

Can you please help me with the cache configuration / finding a solution for this problem?

Thank you very much

Martin

Code:
         CacheManager manager = CacheManager.create();

         CacheConfiguration configuration = new CacheConfiguration("test", 5)
                 .memoryStoreEvictionPolicy(MemoryStoreEvictionPolicy.LFU)
                 .overflowToDisk(true)
                 .eternal(true);
         Cache cache = new Cache(configuration);

         manager.addCache(cache);

         // keep the generated keys so the second loop can look them up
         List<UUID> idList = new ArrayList<UUID>();

         for (int index = 0; index < 500; index++) {
             UUID uuid = UUID.randomUUID();
             float[] values = new float[1000000];    // ~4 MB per element
             Arrays.fill(values, index);
             Data data = new Data(uuid, values);     // Data is a simple holder class
             idList.add(uuid);
             cache.put(new Element(uuid, data.getData()));
         }

         for (int index = 0; index < 10; index++) {
             UUID uuid = idList.get(100 + index);
             Element element = cache.get(uuid);
             if (element == null) {
                 System.out.println("Element is null: " + uuid);
             } else {
                 float[] data = (float[]) element.getValue();
                 System.out.println("Data: " + data.length + " floats");
             }
         }

         manager.shutdown();
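For scale, the arithmetic behind the OOME (my numbers, derived from the test above): each element holds 1,000,000 floats at 4 bytes apiece, so the loop produces far more data than a 128 MB heap can hold at once:

```java
public class SpoolMath {
    public static void main(String[] args) {
        long bytesPerElement = 1_000_000L * Float.BYTES; // 4,000,000 bytes, ~3.8 MB
        long totalBytes = 500 * bytesPerElement;         // ~1.9 GB of cache data
        long heapBytes = 128L * 1024 * 1024;             // -Xmx128M

        System.out.println("per element: " + bytesPerElement);
        System.out.println("total:       " + totalBytes);
        System.out.println("heap:        " + heapBytes);
        // Only ~33 elements fit in the heap at once, so nearly everything must
        // be spooled to disk at least as fast as the loop produces it.
        System.out.println("elements that fit in heap: " + heapBytes / bytesPerElement);
    }
}
```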
 
zeeiyer

consul

Joined: 05/24/2006 14:28:28
Messages: 493
Offline

Have you looked at tuning diskSpoolBufferSizeMB?

diskSpoolBufferSizeMB: This is the size to allocate the DiskStore for a spool buffer. Writes are made to this area and then asynchronously written to disk. The default size is 30MB. Each spool buffer is used only by its cache. If you get OutOfMemory errors consider lowering this value. To improve DiskStore performance consider increasing it. Trace level logging in the DiskStore will show if put back ups are occurring.
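For reference, the attribute described above can be set per cache in ehcache.xml; the values below simply mirror the test cache from the first post, with 10 MB picked as an example spool size:

```xml
<cache name="test"
       maxElementsInMemory="5"
       memoryStoreEvictionPolicy="LFU"
       eternal="true"
       overflowToDisk="true"
       diskSpoolBufferSizeMB="10"/>
```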
 

Sreeni Iyer, Terracotta.
jandam

neo

Joined: 07/25/2010 05:36:55
Messages: 3
Offline

Thank you for the reply, but it doesn't help; I tried setting a lower size. The main problem is that a cache element evicted from the MemoryStore is added to the DiskStore queue to be written asynchronously, so the element stays in memory until it has been successfully written to disk. What I think is happening is that adding new elements to the MemoryStore isn't blocked when the diskSpoolBufferSizeMB limit is reached.
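The blocking behavior asked for here - a producer that stalls when the write-behind queue is full, instead of letting queued values pile up in memory - can be sketched with a bounded queue. This is a standalone illustration of the idea, not Ehcache's actual spool implementation:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedSpool {
    public static void main(String[] args) throws InterruptedException {
        // Capacity bounds how many evicted values can wait in memory.
        BlockingQueue<float[]> spool = new ArrayBlockingQueue<>(4);

        // Consumer: simulates the (slow) disk writer thread.
        Thread writer = new Thread(() -> {
            try {
                for (int i = 0; i < 16; i++) {
                    spool.take();      // blocks until data arrives
                    Thread.sleep(10);  // simulate slow disk I/O
                }
            } catch (InterruptedException ignored) {
            }
        });
        writer.start();

        // Producer: put() blocks once 4 arrays are queued, so memory use
        // stays bounded no matter how fast elements are evicted.
        for (int i = 0; i < 16; i++) {
            spool.put(new float[1_000]);
        }
        writer.join();
        System.out.println("all writes drained, queue size = " + spool.size());
    }
}
```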
rajoshi

seraphim

Joined: 07/04/2011 04:36:10
Messages: 1491
Offline

The issue seems to be resolved. Please let us know if more information is required.

Rakesh Joshi
Senior Consultant
Terracotta.
jandam

neo

Joined: 07/25/2010 05:36:55
Messages: 3
Offline

The OOME is still there, without any change. Tested on 2.4.3 and 2.5.0b1.

The problem is that the Cache.put*Internal methods wait only once, for BACK_OFF_TIME_MILLIS = 50 ms, when the disk spool is full.

When writing a huge object to disk takes longer than BACK_OFF_TIME_MILLIS, the new element is put into the cache anyway, while the evicted item is still being written to disk storage.


2.5.0b1, line 1541, Cache.backOffIfDiskSpoolFull():

Code:
     /**
      * wait outside of synchronized block so as not to block readers
      * If the disk store spool is full wait a short time to give it a chance to
      * catch up.
      * todo maybe provide a warning if this is continually happening or monitor via JMX
      */
     private void backOffIfDiskSpoolFull() {
 
         if (compoundStore.bufferFull()) {
             // back off to avoid OutOfMemoryError
             try {
                 Thread.sleep(BACK_OFF_TIME_MILLIS);
             } catch (InterruptedException e) {
                 // do not care if this happens
             }
         }
     }
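If the single 50 ms sleep is indeed the gap, the natural fix is to keep backing off while the buffer stays full. The following is my own standalone sketch of that idea with a simulated spool - the loop is my suggestion, not actual Ehcache code:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class BackOffDemo {
    static final long BACK_OFF_TIME_MILLIS = 50;
    static final int SPOOL_LIMIT = 3;
    // Simulated spool occupancy; a real DiskStore tracks queued bytes instead.
    static final AtomicInteger queued = new AtomicInteger(5);

    static boolean bufferFull() {
        return queued.get() >= SPOOL_LIMIT;
    }

    // Looping variant: unlike a single sleep, this does not return until the
    // writer has drained the spool below its limit.
    static void backOffWhileDiskSpoolFull() throws InterruptedException {
        while (bufferFull()) {
            Thread.sleep(BACK_OFF_TIME_MILLIS);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread diskWriter = new Thread(() -> {
            try {
                while (queued.get() > 0) {
                    Thread.sleep(20);          // simulate one slow disk write
                    queued.decrementAndGet();
                }
            } catch (InterruptedException ignored) {
            }
        });
        diskWriter.start();
        backOffWhileDiskSpoolFull();           // blocks until below SPOOL_LIMIT
        System.out.println("spool below limit, queued = " + queued.get());
        diskWriter.join();
    }
}
```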
 
 


Sincerely
Martin
rajoshi

seraphim

Joined: 07/04/2011 04:36:10
Messages: 1491
Offline

Hi, we will look into this.

Rakesh Joshi
Senior Consultant
Terracotta.
 