Terracotta Discussion Forums (LEGACY READ-ONLY ARCHIVE)
Messages posted by: august
bummer:

http://docs.terracotta.org/confluence/display/release/Terracotta+3.5+Release+Notes#Terracotta3.5ReleaseNotes-ResolvedIssues

"6374 - Support dynamic changes to maxElementsOnDisk for clustered caches".

But still, any reason to nuke the cache?
Hi all,

Just a quick one ... we are using Terracotta 3.5.2 and need to raise the maxElementsOnDisk value. Is there any reason we would need to delete the cache? I'm thinking we would simply make the change via the dev console and update the XML so it's retained on the next restart.

Any reason Terracotta 3.5.2 can't handle this?
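
For reference, here's roughly what I'd expect the dynamic change to look like via the Ehcache 2.x API (the cache name is just a placeholder, and I'm assuming the dev console does something equivalent under the hood):

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;

    public class RaiseDiskLimit {
        public static void main(String[] args) {
            // the CacheManager already configured from ehcache.xml
            CacheManager manager = CacheManager.getInstance();
            Cache cache = manager.getCache("ourClusteredCache"); // placeholder name

            // dynamically raise the disk limit on the live cache;
            // this doesn't touch the cache contents, so no deletion needed
            cache.getCacheConfiguration().setMaxElementsOnDisk(500000);
        }
    }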

Thanks,

August
When we use -Dcom.tc.session.serverid it seems to override the jvmRoute value. This is interesting.

So when we don't set it we get SESSION.worker; when we DO set it we get SESSION.valuewesetwithjavaproperty.

Do we maybe need to do this to make Terracotta aware of the route/worker?

And will this affect the runaway object count?
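
To be concrete, this is roughly the combination I mean (worker01 is a placeholder for our actual jvmRoute/worker name):

    # in setenv.sh / catalina.sh - the Terracotta session server id
    JAVA_OPTS="$JAVA_OPTS -Dcom.tc.session.serverid=worker01"

    <!-- in Tomcat's server.xml - the jvmRoute mod_jk uses for stickiness -->
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="worker01">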


A couple of things I'm thinking - mostly around config/tuning:

1. I've found no <distributable /> tag in the app, and I (the admin, not the dev) have not set it in Tomcat. From what I'm reading this is needed for Tomcat clustering and sessions (sketch of what I mean below the list; from: http://wiki.metawerx.net/wiki/Web.xml.Distributable):

"If an application is run in a cluster without being marked as distributable, session changes will only occur on a single JVM. Therefore, when the user connects to one of the other JVM's, their session will not be recognised, and a new session will be created. This may force them to log in again, establishing a 2nd session on the other JVM. As they switch between the two servers, various other problems may arise. " 


Does Terracotta require this? Seems like it would not, but then again, doesn't it just hook into the usual Tomcat session machinery?

2. Other forum posts have mentioned setting session.serverid to the jvmRoute (worker) name on the L1s at JVM startup. I'd expect these to then appear in the session part of the console (i.e. .worker gets appended to the JSESSIONID).

I'm thinking that maybe sessions are getting duplicated and bounced around the L1s, since all the objects are in the L1 heaps.

I'll test these theories out over the next few days - curious if anyone else thinks I'm nuts or has any thoughts.
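
On point 1, this is the sketch I mean - the standard servlet element the dev would add to web.xml (the namespace/version are my assumption for a servlet 2.4 app on Tomcat 5.5):

    <web-app xmlns="http://java.sun.com/xml/ns/j2ee" version="2.4">
        <distributable/>
        <!-- rest of the app's web.xml unchanged -->
    </web-app>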

August
Hi all,

I have a basic Apache 2.2 / Tomcat 5.5 / Terracotta 3.4.0 sessions setup. For some reason we are seeing increasing live object counts with fairly small DGCs. Client heaps and GCs are good (< 2 secs, heap usage is tiny, with full GCs every 45 mins or so). The live object count increase only started when we added the second node (see attached).

To handle balancing we use mod_jk with jvmRoute turned on to allow stickiness (it was there with only one client, but not sticky, i.e. no jvmRoute). I can see in the headers and logs the session ID getting the jvmRoute/worker name appended (e.g. Gdls9834928djls.worker01).
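
For context, the mod_jk side looks roughly like this (worker/host names are placeholders, not our real config):

    # workers.properties
    worker.list=loadbalancer
    worker.worker01.type=ajp13
    worker.worker01.host=app01
    worker.worker01.port=8009
    worker.worker02.type=ajp13
    worker.worker02.host=app02
    worker.worker02.port=8009
    worker.loadbalancer.type=lb
    worker.loadbalancer.balance_workers=worker01,worker02
    # stickiness: route requests back to the node named by the .workerNN suffix
    worker.loadbalancer.sticky_session=true

Note that the worker names have to match each Tomcat's jvmRoute for the stickiness to work.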

However, when I browse the objects in Terracotta I don't see the worker values getting added (see attached). Should I?

Would this affect the object count? Any thoughts on why the object count might just keep climbing once we added the second node and mod_jk stickiness?

Thanks,

August
Just as a follow-up and thank you ... 3.5.2 has been a wonderful success! Thanks again for the help, and to anyone else using old stuff ... don't forget the upgrade path!
3.5.2 for sure ... it's just got so many tasty additions!

Thanks again!

August
Some more interesting graphs ... the system is in the dying state now ... heaps are pretty high on all nodes. In the operator events you can see the passives going through GC hell (01/04 = passive, 02/03 = active). I've still not been able to find any nodes getting zapped.

Something's gotta stand out ... I just need to look deeper, or from a different angle ...
Thanks Ari ... I'll read and re-read those sections.

As much as I look in the logs, I don't see any indication of failover, though.

It also seems that when the "request for non-existent object" warnings begin, our app hits some kind of threshold limit (see attached graph; it shows logins of our app, which place session objects into Terracotta). We flatten out at around 15:30 on the graph, and the object warnings start around the same time (the first one is at 2011-07-30 15:37:20,585).

W.r.t. the possible passive/active failures, I should see those in the logs, right? But it's all clear there.

I'll continue to see what kind of tuning options are out there.

I should say this is Terracotta 3.3.0 ... perhaps an upgrade is in order?

As always, the help is much appreciated!

August
Found a post here: http://forums.terracotta.org/forums/posts/list/1803.page where I read:

Generally, you shouldn't ever see TCObjectNotFoundException. It basically means that the client node is asking for an object that doesn't exist in the server (because it's been dgc'ed). And dgc shouldn't happen unless the object is no longer referenced anywhere.

One possible cause for this is if you are doing dirty reads - reading clustered state outside of a clustered lock. 


Now, to work out how to see if that's happening ... (I'm a lowly sysadmin) - any tips?
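
For anyone who can sanity-check my understanding: as I read the DSO docs, a "dirty read" would be something like the first method below - reading a shared root outside the synchronized block that tc-config.xml autolocks on (the class/field names are made up for illustration):

    import java.util.HashMap;
    import java.util.Map;

    public class SessionHolder {
        // assume this map is declared as a shared root in tc-config.xml
        private static final Map<String, Object> sessions = new HashMap<String, Object>();

        // dirty read: no clustered lock, so the client can hold a reference
        // to an object the server has already DGC'ed
        public static Object dirtyRead(String id) {
            return sessions.get(id);
        }

        // clustered read: synchronized on the shared object, which an
        // autolock in tc-config.xml promotes to a clustered lock
        public static Object lockedRead(String id) {
            synchronized (sessions) {
                return sessions.get(id);
            }
        }
    }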
Hi all,

In recent testing of our Terracotta 3.3.0 cluster (2 stripes) we see the following for our clustered Ehcache app:


2011-07-25 21:03:22,993 [WorkerThread(managed_object_fault_stage, 0, 0)] WARN com.tc.objectserver.api.ObjectManager - Request for non-existent object : ObjectID=[1:4788715] context = ObjectManagerLookupContext@1067751743 : [ processed count = 1, responseContext = WaitForLookupContext [ ObjectID=[1:4788715], missingOK = true], missing = BitSetObjectIDSet [ ] ] 


This only happens on the passive nodes, and they eventually start reporting long GCs and memory cleaned after GC of 99%, then 100%, until eventually they die.

I'm not sure how to troubleshoot this one ... why are only the passive nodes suffering? Is this warning telling me that an object has been DGC'ed and can't be found? Should that happen? We get hundreds of these, and plenty before the GC warnings on the nodes. It seems like the nodes can't find the objects and are therefore unable to do effective GCs ... even though the objects must still be cluttering the heap (?).

Any thoughts appreciated ...

August
In this case, Tomcat ...
Is there some kind of forum cleanup happening? I have gotten this response to a bunch of my posts ...
Yeah, it's all good.
No, not old ... I posted this last week ...

???
 