Terracotta Discussion Forums (LEGACY READ-ONLY ARCHIVE)
Messages posted by: Hamlet
I found the answer on StackOverflow:

You need a bootstrapCacheLoaderFactory in your ehcache_listener.xml. For example:

Code:
<cache name="myCache" ...>
    ...
    <bootstrapCacheLoaderFactory
             class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"
             properties="bootstrapAsynchronously=true"
             propertySeparator="," />    
 </cache>



This is a cross-post from StackOverflow... but I'm not getting answers there: http://stackoverflow.com/questions/16844455/ehcache-replicated-cache-not-synchronizing-at-startup

I have an ehcache Cache replicated across two machines. The peers correctly find each other and replicate once both peers are started. However, if the first peer starts, receives several elements, and the second peer only starts afterwards... the second peer never sees the elements that were added while it was not yet alive.

Here is exactly the order:
1. Cache A is started
2. Add "1234" to Cache A
3. Cache B is started
4. get "1234" from Cache B -> NOT FOUND

My expectation: if two caches are replicated, then getting an existing element returns the same value from either cache.

My cache Elements are just plain String/Integer values.
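In code, the sequence above is roughly this (heavily simplified; the class, cache name, and config file names are just placeholders for what the POC actually uses, and the two halves of course run in separate JVMs):

Code:
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class ReplicationOrderSketch {
    public static void main(String[] args) {
        // JVM A, started first (in reality a separate process/machine)
        CacheManager managerA = CacheManager.create("ehcache_writer.xml");
        Cache cacheA = managerA.getCache("myCache");
        cacheA.put(new Element("1234", "some value"));   // step 2

        // JVM B, started later
        CacheManager managerB = CacheManager.create("ehcache_listener.xml");
        Cache cacheB = managerB.getCache("myCache");
        Element found = cacheB.get("1234");              // step 4 -> returns null
        System.out.println("found = " + found);
    }
}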

An example is in GitHub here: https://github.com/HamletDRC/EhcachePOC

Ehcache configurations are here: https://github.com/HamletDRC/EhcachePOC/tree/master/src/main/resources

In the sample project, log4j logging is enabled for the ehcache classes, so you can see that the peers do find each other and do replicate, but only elements added after both peers were up, not elements that existed previously.

You only need a JDK and Maven installed to build it.

To reproduce:
1. Run ReplicatedCacheWriter
2. Wait 6 seconds for the writer to create elements [1, 2, 3, 4, 5, 6]
3. Run ReplicatedCacheListener
4. The listener finds all elements that were "put" after it came alive, but no elements "put" before it came alive.

Here is the ehcache.xml
Code:
 <cacheManagerPeerProviderFactory
         class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
         properties="peerDiscovery=automatic, multicastGroupAddress=231.0.0.1,
                   multicastGroupPort=4446, timeToLive=32"/>
 
 <cacheManagerPeerListenerFactory
         class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
         properties="port=40002, socketTimeoutMillis=2000"/>

...
Code:
<cacheEventListenerFactory
             class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
             properties="replicateAsynchronously=true, replicatePuts=true, replicateUpdates=true,
                     replicateUpdatesViaCopy=false, replicateRemovals=true "/>


(Obviously, the listener port is different between the two nodes.)

How can I get the caches to synchronize at startup?
I am using ehcache in a cluster, and I want both cluster nodes to always have the same cache entries.

I am using the RMICacheManagerPeerProviderFactory and multicast to find other peers. Peers are found, and the logs show that things are working. On the cache, replicateAsynchronously is set to true.

This does what I want most of the time, but we sometimes (once a day?) see consistency issues. I tried to read the documentation about replication consistency, but I did not understand some of the sentences. The doc is here:
* http://ehcache.org/documentation/user-guide/cache-topologies#potential-issues-with-replicated-caching

The document says this:
For those times when it is important, Ehcache provides for synchronous delivery of puts and updates via invalidation. 

Does this mean that synchronous delivery guarantees consistency? What happens when delivery fails? Does the original cache.put(e: Element) fail as well?

The document also says this:
If the replicateUpdatesViaCopy property is set to false in the replication configuration, updates are made by removing the element in any other cache peers. This forces the applications using the cache peers to return to a canonical source for the data.  

This is not the behavior I am seeing. I have two caches that are out of sync (cache A contains key 1234 and cache B does not). When cache B is asked for key 1234, null is returned. Please explain the meaning of this sentence in more depth.
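For what it's worth, my current reading of "return to a canonical source" is a read-through pattern roughly like this on the peer that received the invalidation (loadFromDatabase is just a placeholder for whatever the system of record is); is that what the sentence means?

Code:
// After the invalidation-style update has removed the element on this peer:
Element element = cacheB.get("1234");
if (element == null) {
    // "return to a canonical source": reload from the system of record
    Object value = loadFromDatabase("1234");   // placeholder for the canonical source
    element = new Element("1234", value);
    cacheB.put(element);
}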
I posted a message to the AspectWerkz forum titled "Testing that a pointcut expression is contained within a set of classes/jar". We'll see if this is possible. Thanks guys.
I wanted to write a quick JUnit test to make some basic assertions about my project's terracotta.xml file. I just want a test to fail if someone changes the location of a class or renames a referenced field without updating the terracotta.xml.

It's simple to make assertions about the <class-expression>, <method-expression>, and <field-name> nodes that do not contain regex-style wildcards. But some fields do have those wildcards, for instance:

<method-expression>* com.example.MyInstance.*(..)</method-expression>

Is there a standard library that can be used to quickly resolve these into a set of references from my project?
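For the non-wildcard <field-name> entries, the kind of test I have in mind looks roughly like this (the config path reflects my own project layout, and the test class name is made up):

Code:
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.junit.Test;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class TerracottaConfigTest {

    @Test
    public void fieldNamesResolveToRealFields() throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("src/main/resources/terracotta.xml"));
        NodeList fieldNames = doc.getElementsByTagName("field-name");
        for (int i = 0; i < fieldNames.getLength(); i++) {
            String fqn = fieldNames.item(i).getTextContent().trim();
            if (fqn.contains("*")) {
                continue; // the wildcard entries are exactly what I don't know how to resolve
            }
            // e.g. "com.example.MyInstance.myField" -> class name + field name
            int lastDot = fqn.lastIndexOf('.');
            Class<?> clazz = Class.forName(fqn.substring(0, lastDot));
            clazz.getDeclaredField(fqn.substring(lastDot + 1)); // throws if the field was renamed
        }
    }
}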
Page 247, "Using the java.util.concurrent Abstractions", contains a few paragraphs on why not to use them. 1) It does not separate the master from the worker. Clustering the ExecutorService "will not allow us to scale out the master independently of the workers but force us to run them all on every JVM". 2) The ExecutorService lacks a layer of reliability and control because of its black-box nature. Failing over tasks to other nodes becomes impossible.

Also, the ForkJoinExecutor does not implement java.util.concurrent.ExecutorService; it just has matching type signatures. I was mistaken, but it doesn't really matter.

I'm a beginner at Terracotta, so bear with me... from my perspective, there doesn't seem to be anything that I can reliably share/cluster within the ExecutorService. I'd like to share the ForkJoinPool/ThreadPool's task queue, but I can't reliably reach into the compiled class and configure which data structure is shared. (But maybe I should look at the source code; hopefully the synchronization policy is amenable to this.)

This is all just for fun, not production code, so I'm not going to email Brian. But any advice on how you'd do this is greatly appreciated.
I'm interested in using both Terracotta and Doug Lea's Fork/Join framework, JSR 166 (http://jcp.org/en/jsr/detail?id=166).

To my sadness, I see that the Definitive Guide to Terracotta warns against using the Java 5 executor services, which the fork/join framework is written against.

I haven't looked much at the CommonJ Work Manager library, but don't relish the idea of rewriting a work-stealing and forking framework on top of it (by "don't relish" I of course mean that I'm incapable).

Does anyone have any suggestions on how to get a Fork/Join proof of concept running on top of Terracotta? My current direction is to have each JVM start up its own ExecutorService and then share the work queue somehow. Thought I'd post here before investing too much time.
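To make the "share the work queue" idea concrete, here is roughly what I am picturing (the queue would have to be declared as a shared root in the Terracotta config, which I've left out, and the class/field names are just placeholders):

Code:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class SharedQueueWorker {

    // Intended to be declared as a Terracotta root so every JVM sees the same queue
    private static final BlockingQueue<Runnable> WORK_QUEUE = new LinkedBlockingQueue<Runnable>();

    public static void main(String[] args) throws InterruptedException {
        // Each JVM starts its own local ExecutorService...
        ExecutorService localPool = Executors.newFixedThreadPool(4);
        // ...and drains tasks from the clustered queue into it
        while (true) {
            Runnable task = WORK_QUEUE.take();
            localPool.execute(task);
        }
    }
}

I realize the Runnable tasks themselves would also need to be portable/clusterable, and there is no shutdown handling here; it's just meant to show the shape of the idea.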

Thanks in advance.