Terracotta Discussion Forums (legacy read-only archive)

Messages posted by: halbert
My Terracotta server array recently quit unexpectedly.

Looking at the logs, I see this:

Code:
java.lang.AssertionError: Lookup for non-exisistent Objects : BitSetObjectIDSet [ ObjectID=[105361150] ] lookup context is : LookupContext [ txnID = ServerTransactionID{ClientID[15],TransactionID=[10383859]}, oids = BitSetObjectIDSet [ ObjectID=[105361150] ], seqID = SequenceID=[10383859], clientTxnID = TransactionID=[10383859], numTxn = 1] = { pending = true, lookedupObjects.size() = 0}
         at com.tc.objectserver.tx.TransactionalObjectManagerImpl$LookupContext.assertNoMissingObjects(TransactionalObjectManagerImpl.java:491)
         at com.tc.objectserver.tx.TransactionalObjectManagerImpl$LookupContext.setResults(TransactionalObjectManagerImpl.java:463)
         at com.tc.objectserver.impl.ObjectManagerImpl$ObjectManagerLookupContext.setResults(ObjectManagerImpl.java:1137)
         at com.tc.objectserver.impl.ObjectManagerImpl.basicInternalLookupObjectsFor(ObjectManagerImpl.java:560)
         at com.tc.objectserver.impl.ObjectManagerImpl.basicLookupObjectsFor(ObjectManagerImpl.java:512)
         at com.tc.objectserver.impl.ObjectManagerImpl.processPendingLookups(ObjectManagerImpl.java:987)
         at com.tc.objectserver.impl.ObjectManagerImpl.postRelease(ObjectManagerImpl.java:792)
         at com.tc.objectserver.impl.ObjectManagerImpl.addFaultedObject(ObjectManagerImpl.java:385)
         at com.tc.objectserver.handler.ManagedObjectFaultHandler.handleEvent(ManagedObjectFaultHandler.java:57)
         at com.tc.async.impl.StageImpl$WorkerThread.run(StageImpl.java:145)
 
This is followed by a thread dump, and then the server dies. It happens on all the servers in the array at the same time; my array consists of 2 server instances.

It does not recover after this.

This outage caused a production issue, so it is a big deal (for me).

I'm using the open-source edition of Terracotta 3.7.0.

I noticed this same issue has already been logged:

http://jira.terracotta.org/jira/browse/EHCTERR-19
http://jira.terracotta.org/jira/browse/EHCTERR-23

One of the bug reports says it was fixed, while the other says unresolved.

So I'm not sure what the issue is or how to prevent it from happening again.

Thank you for any assistance.
Very quiet forum here.

I'm surprised nobody has answered my question yet. I did find the answer to my original question on my own, but I'm a little disappointed in the lack of response here.
I called the setTimeToLive(x) method on an Element instance obtained from a clustered cache.

I passed the value 5 (seconds) for parameter x; however, even after waiting several minutes, the Element can still be fetched from the cache.

This is on a clustered cache using Terracotta 3.7.0
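
For reference, here is roughly what I'm doing (a minimal sketch; the cache name and key are placeholders):

Code:
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class TtlCheck {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.getInstance();
        // "myClusteredCache" is a placeholder for my real cache name
        Cache cache = manager.getCache("myClusteredCache");

        // Obtain an existing Element from the clustered cache
        // and shorten its time-to-live
        Element element = cache.get("someKey");
        element.setTimeToLive(5); // seconds; I expected expiry after ~5s

        // ...several minutes later, this still returns the element:
        System.out.println(cache.get("someKey"));
    }
}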

Does the Element class not support changing the timeToLive or timeToIdle from the values set up in the XML configuration file?

Am I doing something wrong?

How do you, in general, decide on the maximum memory configuration for the Terracotta servers?

Is it just an estimate of the maximum amount of data that'll be cached?
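
(For concreteness: if I expected, say, 10 million entries averaging 1 KB each, that would be roughly 10 GB of cache data before any overhead. Is that the kind of estimate to size from?)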

I assume that if the cached data can't fit in memory, it'll be saved to the file system.
What's the best way to monitor Terracotta's memory usage and GC performance?

Is the Dev Console the best way to do that?

Or should I be using a JMX console, or should I simply monitor the log files (with verbose GC and print options turned on)?
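
To be concrete, by a JMX console I mean something along these lines (a sketch only; the host and port are placeholders, and it assumes the server JVM exposes a standard remote JMX connector):

Code:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class HeapPoll {
    public static void main(String[] args) throws Exception {
        // Placeholder address; assumes the server was started with the
        // standard com.sun.management.jmxremote options
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://tc-server-host:9520/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Proxy the remote JVM's MemoryMXBean and read its heap usage
            MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                    connection, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
            System.out.println("Heap usage: " + memory.getHeapMemoryUsage());
        } finally {
            connector.close();
        }
    }
}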
Ok, looks like consistency must be "strong" when using local transactions.

I think local transactionalMode is what I need, but do I need to have consistency=strong as well?

I'm using Terracotta in a clustered environment.

I have 2 distributed caches; call them cache1 and cache2.

Say the initial state of the caches is as follows:

cache1: { key="hello", value=1 }
cache2: { key="hello", value="A" }

What's the best way to ensure that changes to the 2 caches are atomic?

So if something changes the state of the caches to this:

cache1.put("hello", 2)
cache2.put("hello", "B")

I don't want this to happen when getting the values from the caches:
cache1.get("hello") == 2
cache2.get("hello") == "A"

I'm OK with reading the initial state after the caches have been changed (I don't need strong consistency), but having one new value and the other old is unacceptable to me.
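
For reference, the kind of atomic update I have in mind (a sketch; it assumes both caches are declared with transactionalMode="local" in ehcache.xml):

Code:
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import net.sf.ehcache.TransactionController;

public class AtomicPuts {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.getInstance();
        Cache cache1 = manager.getCache("cache1");
        Cache cache2 = manager.getCache("cache2");

        // Begin a local transaction spanning both caches
        TransactionController txn = manager.getTransactionController();
        txn.begin();
        try {
            cache1.put(new Element("hello", 2));
            cache2.put(new Element("hello", "B"));
            txn.commit(); // both puts become visible together
        } catch (RuntimeException e) {
            txn.rollback(); // neither put is applied
            throw e;
        }
    }
}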



I'm getting the same problem, or at least the same symptoms.

Basically, the client is rejected from the cache cluster for some reason.

I'm not sure if my clients are being rejected because of GC or for some other reason, but what I do know for sure is that once the affected clients are in this state, they will NEVER reconnect on their own!

Am I naive in thinking that clients should try to reconnect (automatically) to the Terracotta cache server?
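
For what it's worth, the behavior I was expecting is what the Ehcache documentation calls rejoin, which also requires nonstop to be enabled on the cache. Something like this in ehcache.xml is what I had in mind (the values are illustrative, and I'm not sure whether this applies to my situation):

Code:
<terracottaConfig url="tc-server-host:9510" rejoin="true"/>

<cache name="myCache" maxEntriesLocalHeap="10000">
    <terracotta clustered="true">
        <nonstop immediateTimeout="false" timeoutMillis="30000">
            <timeoutBehavior type="exception"/>
        </nonstop>
    </terracotta>
</cache>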

 