Terracotta Discussion Forums (LEGACY READ-ONLY ARCHIVE)
Messages posted by: rajoshi
Profile for rajoshi -> Messages posted by rajoshi [1464]
Hi,
You are using "localTempSwap", so data will overflow to disk but will not be persisted across restarts.

localTempSwap enables temporary local disk usage. This option provides an extra tier for data storage during operation, but this store is not persisted. After a restart, the disk is cleared of any BigMemory data.
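A minimal sketch of what such a cache looks like in ehcache.xml (the cache name "sampleCache" and the size values are illustrative, not from your config):

```xml
<!-- Sketch: a cache using the localTempSwap persistence strategy.
     Cache name and sizes are illustrative. -->
<ehcache>
  <cache name="sampleCache"
         maxBytesLocalHeap="256M"
         maxBytesLocalDisk="2G">
    <!-- Data can overflow to local disk during operation, but the
         disk tier is cleared on restart: nothing is persisted. -->
    <persistence strategy="localTempSwap"/>
  </cache>
</ehcache>
```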

Please refer to the link below for more information on this:

Can you please raise a community JIRA for this here:

We don't support DSO anymore; please consider express mode for your use case.
TIMs are designed for DSO use cases, and TC 3.7.5 does not support DSO. That's why there is no TIM for Tomcat or Jetty. Please use express mode.
Which version of Ehcache are you using? We would recommend distributed caching rather than replication. You can read about it here:
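For reference, moving from replication to distributed caching is mostly a config change; a hedged sketch (cache name and server address are illustrative):

```xml
<!-- Sketch: clustering a cache through the Terracotta Server Array
     instead of RMI/JGroups replication. Name and URL are illustrative. -->
<ehcache>
  <!-- Points the client at the Terracotta server (host:dso-port). -->
  <terracottaConfig url="localhost:9510"/>
  <cache name="sampleCache" maxBytesLocalHeap="128M">
    <!-- Replaces the replication configuration: the cache data is
         held in the server array and shared by all clients. -->
    <terracotta/>
  </cache>
</ehcache>
```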

OK, good to know it's resolved.
Looks like a network issue: the active and passive servers are not able to communicate, and the passive tries to become active after waiting for the time set by "l2.nha.tcgroupcomm.reconnect.timeout". This finally leads to a split-brain scenario with two actives in the system, as shown by the log snippet below:

2013-06-25 09:30:39,246 [WorkerThread(receive_group_message_stage, 0)] FATAL tc.operator.event - NODE : fedora2 Subsystem: CLUSTER_TOPOLOGY Message: SPLIT BRAIN, fedora2 and fedora1 are ACTIVE
2013-06-25 09:30:39,246 [WorkerThread(receive_group_message_stage, 0)] WARN com.tc.l2.ha.L2HAZapNodeRequestProcessor - Not quiting since the other servers weight = 0,-9223372036854775808,0,211827818898726,8429431442619700378 is not greater than my weight = 0,-9223372036854775808,0,211827818964262,5327811563257844455

So you've got to adjust that value up -- and also look at the HealthChecker values for server-server (see the documentation) -- until split-brains no longer occur. But note that your cluster will pause for that time, and you'll also have a longer time-to-failover should that be needed. It's a balancing act (assuming that fixing the network isn't an option).
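That timeout is set via a tc-property; a sketch of how it might look in tc-config.xml (the 15000 ms value is illustrative, tune it for your network latency):

```xml
<!-- Sketch: raising the server-server reconnect timeout inside
     tc-config.xml. The value shown is illustrative only. -->
<tc-properties>
  <!-- How long (ms) a server waits for its peer to reconnect before
       trying to take over as active. Too low on a flaky network
       risks split-brain; too high lengthens failover time. -->
  <property name="l2.nha.tcgroupcomm.reconnect.timeout" value="15000"/>
</tc-properties>
```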
How are you dedicating 650 MB to two caches? If you set maxBytesLocalHeap = 600 MB, then the cache using this config will hold elements whose sizes sum to <= 600 MB, and then start flushing elements to disk, since you are using tempswap. Also, I can see three different configurations here; can you please post how you are using these with the cache?
Yes, upgrading to TC 3.5.1 should fix this. It has a new rejoin feature that lets the client connect back to the server.
You cannot query the data itself; you can get information about the cache data, such as TPS, heap and off-heap usage, the configuration being used, etc.
Can you raise a community JIRA for this here:

Great to know the issue is resolved; thanks for sharing with others.
Please find answers below inline:

1)Can the active server and mirror reside in two different Linux boxes ?

- Yes, you can run them on two different physical boxes.

2) Where do I need to give the IP address of the boxes so that mirror and active can connect to each other?

- It is defined in the tc-config.xml configuration file.

3) I referred to the documentation. It seems we need to give the entire entry in one tc-config.xml on the active server. Am I correct?

- Yes, in tc-config.xml we define the complete configuration details for the Terracotta setup. Define both the active and passive server configurations in it.

4) I believe I need to copy the server files to the other Linux box as well. But if both active and mirror are on the same machine, do we need to do the same?

- No, you don't need to copy anything if they are on the same box.

5) Sample tc-config.xml for both active and mirror.

- You can get an example of that on our website:
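In the meantime, a minimal sketch of a tc-config.xml for an active/mirror pair on two boxes (host names and port are illustrative; in TC 3.x the second server to start up automatically becomes the mirror):

```xml
<!-- Sketch: one tc-config.xml shared by both servers. Hosts and
     ports are illustrative; adjust to your environment. -->
<tc:tc-config xmlns:tc="http://www.terracotta.org/config">
  <servers>
    <!-- First server, e.g. on Linux box 1. -->
    <server host="box1.example.com" name="server1">
      <dso-port>9510</dso-port>
    </server>
    <!-- Second server, e.g. on Linux box 2. Whichever starts
         second runs as the mirror (passive). -->
    <server host="box2.example.com" name="server2">
      <dso-port>9510</dso-port>
    </server>
    <ha>
      <mode>networked-active-passive</mode>
    </ha>
  </servers>
</tc:tc-config>
```

Both boxes use this same file, started with `start-tc-server.sh -f tc-config.xml -n server1` (or `-n server2` on the other box).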

How are you trying to search on the maps? Can you please elaborate on how you are comparing them?
Can you provide your ehcache.xml? This definitely points to some issue in the source or config.
Powered by JForum 2.1.7 © JForum Team