Terracotta Discussion Forums (LEGACY READ-ONLY ARCHIVE)
The configured limit of 1,000 object references was reached while attempting to calculate the size of the object graph
Forum Index -> BigMemory
deviskec
neo
Joined: 10/16/2014 09:07:50
Messages: 1

Hello, we are getting a huge number of warnings (multiple per second) spamming our log:

[WARN] net.sf.ehcache.pool.sizeof.ObjectGraphWalker.checkMaxDepth(ObjectGraphWalker.java:209) - The configured limit of 1,000 object references was reached while attempting to calculate the size of the object graph. Severe performance degradation could occur if the sizing operation continues. This can be avoided by setting the CacheManger or Cache <sizeOfPolicy> elements maxDepthExceededBehavior to "abort" or adding stop points with @IgnoreSizeOf annotations. If performance degradation is NOT an issue at the configured limit, raise the limit value using the CacheManager or Cache <sizeOfPolicy> elements maxDepth attribute. For more information, see the Ehcache configuration documentation.
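For reference, the settings this warning points at can be expressed either as a <sizeOfPolicy> element in ehcache.xml or programmatically. The sketch below is a minimal programmatic example assuming Ehcache 2.x and its SizeOfPolicyConfiguration class (cache name and limits are placeholders; fluent setter names may differ slightly by version):

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import net.sf.ehcache.config.CacheConfiguration;
import net.sf.ehcache.config.Configuration;
import net.sf.ehcache.config.SizeOfPolicyConfiguration;
import net.sf.ehcache.config.SizeOfPolicyConfiguration.MaxDepthExceededBehavior;

public class SizeOfPolicyExample {

    public static void main(String[] args) {
        // CacheManager-wide sizing policy: visit at most 20,000 object
        // references per sizing operation and stop the walk ("abort")
        // instead of continuing when that limit is exceeded. Roughly the
        // programmatic equivalent of
        // <sizeOfPolicy maxDepth="20000" maxDepthExceededBehavior="abort"/>.
        Configuration managerConfig = new Configuration()
                .name("sizeof-example")
                .sizeOfPolicy(new SizeOfPolicyConfiguration()
                        .maxDepth(20000)
                        .maxDepthExceededBehavior(MaxDepthExceededBehavior.ABORT))
                .cache(new CacheConfiguration("bigCache", 10000));

        CacheManager cacheManager = new CacheManager(managerConfig);
        Cache cache = cacheManager.getCache("bigCache");
        cache.put(new Element("someKey", "someValue"));
        cacheManager.shutdown();
    }
}

A per-cache <sizeOfPolicy> (or CacheConfiguration.sizeOfPolicy(...)) overrides the CacheManager-wide one, which matters when only one or two caches hold deep object graphs.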

If I set it to 'abort', it seems to dump a huge amount of cache contents to the logs, which is pretty useless and kills performance. If I increase the value (to something like 20,000), we still get this message. Can anyone explain why this message appears so often? I'm pretty sure we don't have cached values whose object trees go 1,000+ levels deep.
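A detail that often explains this: despite the attribute name, maxDepth limits the total number of object references the SizeOf engine visits while walking one value's graph, not the nesting depth, so a value that holds a reference into a large shared structure (a service, parent/context object, big collection, etc.) can exceed 1,000 references even when it looks shallow. One way to cut the walk short is a stop point with @IgnoreSizeOf; the class and field below are made up purely for illustration:

import net.sf.ehcache.pool.sizeof.annotations.IgnoreSizeOf;

// Hypothetical cached value type. The @IgnoreSizeOf field is a stop point:
// the SizeOf engine does not follow that reference, so the shared graph
// behind it no longer counts toward the reference limit or the reported
// entry size.
public class CachedReport {

    private final String id;
    private final byte[] payload;

    @IgnoreSizeOf
    private final Object sharedContext; // back-reference into a large shared graph

    public CachedReport(String id, byte[] payload, Object sharedContext) {
        this.id = id;
        this.payload = payload;
        this.sharedContext = sharedContext;
    }
}

The annotation can also be placed on a class, or on a whole package via package-info, when everything behind it should be skipped.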

Also, we seem to get much worse results using off-heap compared to on-heap memory (off-heap is roughly 10x slower). Is this the usual penalty for serialization/deserialization, or is there something else going on?
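For context on the second question: the off-heap (BigMemory) tier keeps entries in serialized form, so every hit against that tier pays a serialize/deserialize round trip plus a copy, whereas on-heap hits return object references directly. Keeping a hot on-heap tier in front of the off-heap store usually narrows the gap. A sketch assuming the Ehcache 2.x fluent configuration API (names and sizes are placeholders; off-heap storage also requires the JVM to be started with a sufficient -XX:MaxDirectMemorySize):

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import net.sf.ehcache.config.CacheConfiguration;
import net.sf.ehcache.config.Configuration;
import net.sf.ehcache.config.MemoryUnit;

public class OffHeapTierExample {

    public static void main(String[] args) {
        CacheConfiguration cacheConfig = new CacheConfiguration()
                .name("offHeapCache")
                // Hot tier: entries live as plain objects, no serialization cost.
                .maxBytesLocalHeap(128, MemoryUnit.MEGABYTES)
                // Cold tier: entries are stored serialized outside the Java heap.
                .overflowToOffHeap(true)
                .maxBytesLocalOffHeap(2, MemoryUnit.GIGABYTES);

        CacheManager cacheManager = new CacheManager(
                new Configuration().name("offheap-example").cache(cacheConfig));
        Cache cache = cacheManager.getCache("offHeapCache");

        // Keys and values that may land off-heap must be Serializable.
        cache.put(new Element("someKey", "someValue"));
        cacheManager.shutdown();
    }
}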
 