Terracotta Discussion Forums (LEGACY READ-ONLY ARCHIVE)
Messages posted by: cdennis
When you say "terminate the process", what exactly do you mean? Depending on how you terminate it, the JVM may not run its shutdown hooks: http://docs.oracle.com/javase/6/docs/api/java/lang/Runtime.html#addShutdownHook(java.lang.Thread)

The disk persistence provided by Ehcache is not designed to be fault-tolerant. It relies on the CacheManager having been shut down cleanly (in your case via a shutdown hook) in order to work. The warning you are seeing on startup indicates that an unclean shutdown was detected, and therefore the on-disk data is being deleted.

In short, the only way you can "guarantee" persistence is by ensuring that you always shut the CacheManager down cleanly.
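For illustration, here's a minimal sketch of registering such a hook yourself (the class name is mine; CacheManager.create() and shutdown() are the real Ehcache calls):

Code:
import net.sf.ehcache.CacheManager;

public class CleanShutdownExample {
    public static void main(String[] args) {
        final CacheManager manager = CacheManager.create();

        // A normal JVM exit (System.exit, SIGTERM, main returning) runs this hook;
        // kill -9 or a hard crash does not, and the disk store is then treated as dirty.
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                manager.shutdown(); // cleanly closes the disk stores so they survive restart
            }
        });

        // ... use caches as normal ...
    }
}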

Hope this clarifies things for you,

Chris
You're entirely right: this is/was a bug. I've filed a JIRA for it (https://jira.terracotta.org/jira/browse/EHC-933) and checked in a fix (rev 5352 in trunk). The fix has been merged to the 2.5.x branch and so will be part of the upcoming 2.5.2 release.

Thanks for the pointer,

Chris
If you could post the output of 'java -version' for the JVM you are trying to use, along with the command-line options used inside the shell script, and the shell script and config file themselves (if you've modified them at all), then we'll see if we can figure out why it's not working for you. The full output from running the shell script would be good as well.

Thanks,

Chris
I've posted a reply on your other issue that covers the behavior here: http://forums.terracotta.org/forums/posts/list/6371.page#32454
I had a look at the heap dump for test case 5 (TC disabled, BigMemory enabled) and I'm seeing a ColdFusion bug: they are performing unsynchronized access to a WeakHashMap from multiple threads. That access has corrupted one of the entry chains, and the two accessing threads are now spinning inside the map (the chains have become loops). I've reported this as an issue with Adobe (http://cfbugs.adobe.com/cfbugreport/flexbugui/cfbugtracker/main.html#bugId=87240), and I'll also try to chase it up with Adobe via other channels.

This is the same thing (the looping) that is happening in your other forum post. I'm not sure yet whether the heap usage issues are caused by this or not, but as I point out in the CF issue, that's pretty much academic until Adobe fixes the current bug.
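For reference, the general fix for this class of bug is external synchronization, since WeakHashMap is not thread-safe. A minimal sketch (the holder class is made up):

Code:
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

public class SafeWeakMapHolder {
    // WeakHashMap must be wrapped for multi-threaded use, otherwise a racing
    // modification can corrupt an entry chain into a loop, as seen here.
    static final Map<Object, Object> SHARED =
            Collections.synchronizedMap(new WeakHashMap<Object, Object>());
}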

Sorry for the bad news,

Chris
If you could provide the source code for your test so that we can attempt to reproduce this in-house, that would greatly increase our chances of figuring out what's going wrong here.
The SizeOfFilter interface actually defines two methods:

Code:
import java.lang.reflect.Field;
import java.util.Collection;

public interface SizeOfFilter {

    /**
     * Returns the fields to walk and measure for a type
     * @param klazz the type
     * @param fields the fields already "qualified"
     * @return the filtered Set
     */
    Collection<Field> filterFields(Class<?> klazz, Collection<Field> fields);

    /**
     * Checks whether the type needs to be filtered
     * @param klazz the type
     * @return true, if to be filtered out
     */
    boolean filterClass(Class<?> klazz);
}


In the resource filtering implementation, this means the filterFields method simply removes the matching fields from that collection; the filterClass method is the one that looks at the class names.
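As a toy illustration of that split (this is not the shipped resource filter, the ignored names are made up, and I'm assuming the net.sf.ehcache.pool.sizeof.filter package for the import):

Code:
import java.lang.reflect.Field;
import java.util.Collection;
import java.util.Iterator;

import net.sf.ehcache.pool.sizeof.filter.SizeOfFilter;

public class ToySizeOfFilter implements SizeOfFilter {

    public Collection<Field> filterFields(Class<?> klazz, Collection<Field> fields) {
        // Remove matching fields from the supplied collection, as the resource filter does
        for (Iterator<Field> it = fields.iterator(); it.hasNext();) {
            if ("cachedHashCode".equals(it.next().getName())) {
                it.remove();
            }
        }
        return fields;
    }

    public boolean filterClass(Class<?> klazz) {
        // This is the method that looks at class names (per the javadoc above,
        // returning true filters the type out)
        return klazz.getName().startsWith("com.example.ignored.");
    }
}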

That said, there was a discrepancy between the documentation and the code with regard to package-based filtering via a resource. I've fixed this in trunk, so it is now handled correctly. I also made the resource parsing more robust by trimming leading and trailing whitespace from each line. These changes should go out with the next 2.5.x line release (2.5.1).
I've created:
http://jira.terracotta.org/jira/browse/EHC-909
http://jira.terracotta.org/jira/browse/EHCWEB-3

in response to this.
I'll try to answer your questions in order:

1. No, not that I'm aware of. XML configuration is not part of JSR-107.

2. I don't entirely follow what you are saying here. Although there is some confusing documentation in Ehcache that references JCache, there is no link between the two loader implementations; CacheLoaderFactory hasn't had a method returning a JCache loader for a long time (since Ehcache 1.5). In that sense the 2.5 documentation is correct.

3. The only way I can think of to solve your problem would be to create an Ehcache CacheLoaderFactory that takes the name of a JSR-107 cache loader as a property in its XML config. The factory can then instantiate that JSR-107 loader, wrap it in the ehcache-jcache wrapper class (JCacheCacheLoaderAdapter), and return that as your Ehcache CacheLoader instance.
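A rough sketch of such a factory (the property key is made up, and I'm assuming JCacheCacheLoaderAdapter takes the 107 loader as a constructor argument; check the ehcache-jcache source for the real package and signature):

Code:
import java.util.Properties;

import net.sf.ehcache.Ehcache;
import net.sf.ehcache.loader.CacheLoader;
import net.sf.ehcache.loader.CacheLoaderFactory;
// Package path assumed; check the ehcache-jcache jar for the actual location
import net.sf.ehcache.jcache.JCacheCacheLoaderAdapter;

public class DelegatingCacheLoaderFactory extends CacheLoaderFactory {

    public CacheLoader createCacheLoader(Ehcache cache, Properties properties) {
        // "jsr107LoaderClass" is a hypothetical property key; use whatever you configure
        String className = properties.getProperty("jsr107LoaderClass");
        try {
            javax.cache.CacheLoader delegate =
                    (javax.cache.CacheLoader) Class.forName(className).newInstance();
            // Assumed constructor: wrap the 107 loader as an Ehcache CacheLoader
            return new JCacheCacheLoaderAdapter(delegate);
        } catch (Exception e) {
            throw new RuntimeException("Failed to create JSR-107 loader: " + className, e);
        }
    }
}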

IMHO, working with the Ehcache JSR-107 implementation right now is living right on the bleeding edge (although things should be improving and stabilizing in the not-too-distant future).

Hope this helps,

Chris
This is being tracked in JIRA: https://jira.terracotta.org/jira/browse/EHC-898
The exceptions you're seeing occur because the in-heap structure within Ehcache claims a valid serialized value is present at a specific file offset, yet none is found there. From the information I have so far it's hard to say what might be going wrong. If you could supply answers to the following questions, that would be helpful.

1. Are the disk store files being written to on a local or a remote (network) disk?
2. Does this only happen when using a previously persisted cache (i.e. not on first run)?
3. What JVM are you running?
4. How do you terminate the JVM?
5. Do you move the Ehcache disk store files around at all?

Hopefully this information will help to narrow down the circumstances that trigger the issue, and allow us to work towards getting a reproducible test case.

Thanks,

Chris
Ehcache 2.0.1 does indeed make little effort to 'defragment' the disk space it uses. In releases 2.1.0 and later we moved to a binary-tree-based region set for allocating and freeing on-disk space. Although I wouldn't go so far as to call this true 'defragmentation' (since once allocated, nothing gets moved around), it is designed to be more intelligent about how disk space is initially allocated, and therefore to prevent fragmentation from occurring in the first place.
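To illustrate the general idea, here's a toy first-fit allocator over free regions kept sorted by offset (this is not Ehcache's actual region set, just a sketch of the approach):

Code:
import java.util.Map;
import java.util.TreeMap;

public class ToyRegionSet {

    // Maps offset -> length for each free region, kept sorted by offset
    private final TreeMap<Long, Long> free = new TreeMap<Long, Long>();

    public ToyRegionSet(long size) {
        free.put(0L, size);
    }

    /** First-fit: returns the offset of an allocated region, or -1 if none fits. */
    public long allocate(long length) {
        for (Map.Entry<Long, Long> e : free.entrySet()) {
            if (e.getValue() >= length) {
                long offset = e.getKey();
                long remainder = e.getValue() - length;
                free.remove(offset);
                if (remainder > 0) {
                    free.put(offset + length, remainder); // keep the unused tail free
                }
                return offset;
            }
        }
        return -1;
    }

    /** Returns a region to the set, coalescing with adjacent free regions. */
    public void release(long offset, long length) {
        Map.Entry<Long, Long> prev = free.floorEntry(offset);
        if (prev != null && prev.getKey() + prev.getValue() == offset) {
            offset = prev.getKey();          // merge with the region ending at our start
            length += prev.getValue();
            free.remove(prev.getKey());
        }
        Map.Entry<Long, Long> next = free.ceilingEntry(offset + length);
        if (next != null && next.getKey() == offset + length) {
            length += next.getValue();       // merge with the region starting at our end
            free.remove(next.getKey());
        }
        free.put(offset, length);
    }
}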

Hope this explains everything,

Chris
The answer below is based on the following assumptions:
1. The cache you're trying to search is a Hibernate second-level cache (which it looks like it is).
2. You are using release 3.5.0 or earlier of Terracotta (and its associated packages).

If this is the case then I'm pretty sure you're seeing an instance of a known bug. Basically, we encode Hibernate key types in a different way from regular keys within the clustered cache implementation, but we neglected to account for this in the key de-serialization logic. This means the code attempts to decode the key assuming the wrong data format (hence the OptionalDataException).

In 3.5.1 this is "fixed" in the sense that you shouldn't see the exception, but that won't help you much, since the key type you get back will not be the same user type that you passed in (it will be an encoded string). In the upcoming 3.6.0 release, however, this is fixed correctly, and you should be able to retrieve an instance of CacheKey instead. You might also be able to work around this by not requesting the keys when you perform your queries.
Since you have debug logging turned on, you are likely to see a large number of log messages that are not important to your use of the cache. I would recommend not turning logging up so high for the net.sf.ehcache hierarchy of loggers unless you have a genuine need to.
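If java.util.logging happens to be your SLF4J backend, for instance, one way to do that programmatically is the sketch below (with log4j or logback you'd set the same logger name in their config files instead):

Code:
import java.util.logging.Level;
import java.util.logging.Logger;

public class QuietEhcacheLogging {
    // Hold a static reference: java.util.logging may only keep loggers weakly
    private static final Logger EHCACHE = Logger.getLogger("net.sf.ehcache");

    public static void main(String[] args) {
        EHCACHE.setLevel(Level.WARNING); // quietens the whole net.sf.ehcache hierarchy
    }
}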

In this case, the remove that is occurring (and failing) is "invalidating" any pre-existing mapping before the new value is put. Here there is no pre-existing mapping, so the failed-remove debug message is triggered.

Chris
Neither of the two dumps above looks Sigar-related to me; there seem to be no Sigar-related methods in the stack traces. I assume that the reply to your post at http://forums.terracotta.org/forums/posts/list/5711.page answers your other question.

Regards,

Chris Dennis
 