Terracotta Discussion Forums (LEGACY READ-ONLY ARCHIVE)
Messages posted by: rmahajan
During a reconnect, the client is still part of the cluster and still holds its references, locks, etc. Once the health checker ejects it from the cluster after a certain number of reconnect attempts, it can no longer come back and claim to be the existing client.

You then either have to restart your client JVM to rejoin the cluster or, if the rejoin functionality is enabled, the client can automatically rejoin as a fresh client without a restart. You should also look at NonStop caches, which are generally used along with rejoin.
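For reference, a minimal ehcache.xml sketch of rejoin plus NonStop (the server URL, cache name and timeout are just placeholders; note that rejoin requires the cache to be NonStop-enabled):

  <terracottaConfig url="localhost:9510" rejoin="true"/>
  <cache name="myCache" maxEntriesLocalHeap="10000">
    <terracotta>
      <nonstop timeoutMillis="30000">
        <timeoutBehavior type="exception"/>
      </nonstop>
    </terracotta>
  </cache>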
Hi,

Persistence is not necessary for the rejoin functionality.

You only need persistence if you want clients to rejoin even after a complete TSA restart, whereas your problem is clients getting disconnected after network disruptions.

To address your concerns about persistence: it adds a little to put latency but doesn't necessarily impact reads; the effect mainly depends on the drive you have. You can use SSDs or flash drives as your persistent-store disks to overcome any latency issues.

The best way to solve this problem is to fix the network disruptions, if any. Yes, you can also play with the health checker settings to increase or decrease the tolerance, but make sure that doesn't degrade general health checking by a large amount. If the requirement is rejoining of clients after any disruption, then I recommend using the rejoin functionality.

Cheers,
Ridhav

This should help:
http://terracotta.org/documentation/4.0/terracotta-server-array/high-availability#71266

Cheers,
Ridhav
You can customize logging by creating a file named ".tc.custom.log4j.properties" in either your home directory ("user.home") or the working directory of your Java process ("user.dir"). In there you can do the usual log4j configuration. This is supported for Terracotta versions 3.6.x and above.

Other than that, you can use the following properties (applicable to both client and server):
logging.maxBackups = 20 [the maximum number of backup log files to keep]
logging.maxLogFileSize = 512 [in MB by default, no need to append "M"; once the file reaches this size, it is rolled over]
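For example, a minimal .tc.custom.log4j.properties might look like this (the logger name and levels are just illustrative):

  log4j.rootLogger=INFO, stdout
  log4j.logger.com.tc=DEBUG
  log4j.appender.stdout=org.apache.log4j.ConsoleAppender
  log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
  log4j.appender.stdout.layout.ConversionPattern=%d %p [%t] %c - %m%n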

Cheers,
Ridhav
There are some scenarios where the query cache will not return anything.

This happens mainly when the results are constantly being invalidated by table modifications. Any modification to a table causes the timestamp cache to be updated, so when you do a lookup through the query cache, the cached result may already have been invalidated by an insert or update totally unrelated to the entity you are looking up.
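A quick illustration (a hedged sketch against the standard Hibernate API; the entity and queries are made up):

  // Cached lookup through the query cache
  List results = session.createQuery("from Product where category = :c")
                        .setParameter("c", "books")
                        .setCacheable(true)   // result goes into the query cache
                        .list();

  // A write to the same table, even to a completely unrelated row...
  session.createQuery("update Product set price = price * 1.1 where id = 42")
         .executeUpdate();

  // ...bumps Product's entry in the update-timestamps cache, so the cached
  // result above is stale and the next lookup goes back to the database.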

Cheers,
Ridhav
Check here:
http://terracotta.org/documentation/3.7.4/enterprise-ehcache/reference-guide#71266

Cheers,
Ridhav
Not sure if it's helpful, but the way forward is to use a Terracotta server for Ehcache clustering. You would only need a couple of config-line changes to enable clustering with Terracotta. You can then use features like NonStop to handle such situations.
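The change is roughly this in ehcache.xml (a sketch; the server address and cache name are placeholders):

  <terracottaConfig url="localhost:9510"/>
  <cache name="myCache" maxEntriesLocalHeap="10000">
    <terracotta/>
  </cache>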

Cheers,
Ridhav
Can you provide some more details on the use case, i.e. what you are trying to achieve by using a blocking cache? Are you mixing it with NonStop?
No Worries :-)
Can you please try the latest version of Ehcache to see if the problem still exists? This was reported against previous versions but no longer occurs in the new builds.

Cheers,
Ridhav
No intention to spam your thread, but RMI replication is no longer the way forward. Your requirements can easily be addressed by a couple of configs [either strong or eventual consistency] if you use Terracotta for clustering the Ehcache instances.
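Roughly, per cache in ehcache.xml (a sketch; the name and size are placeholders):

  <cache name="myCache" maxEntriesLocalHeap="10000">
    <!-- consistency="strong" for coherent reads/writes, "eventual" for throughput -->
    <terracotta consistency="strong"/>
  </cache>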

Cheers,
Ridhav
There are different factors you need to take into account to determine that. I recommend reading the documentation here and coming back with specific questions:
http://ehcache.org/documentation/configuration/cache-size
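To give a flavour of the knobs involved (a sketch; the numbers are arbitrary):

  <!-- Entry-based sizing -->
  <cache name="entrySized" maxEntriesLocalHeap="10000"/>

  <!-- Byte-based sizing across tiers (off-heap needs BigMemory) -->
  <cache name="byteSized"
         maxBytesLocalHeap="256M"
         maxBytesLocalOffHeap="2G"/>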

Cheers,
Ridhav
The key and value should be Serializable. In the case of ProtoBuf, you will either have to use the message's toByteArray() to store the byte array directly in the cache, or you can create a wrapper class which implements Serializable and has a byte[] data field.
I would recommend the second approach, as it gives you the flexibility to evolve in the future.
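A minimal sketch of the wrapper approach (the class and message names here are just illustrative):

  import java.io.Serializable;

  public class ProtoWrapper implements Serializable {
      private static final long serialVersionUID = 1L;

      private final byte[] data; // output of message.toByteArray()

      public ProtoWrapper(byte[] data) {
          this.data = data;
      }

      public byte[] getData() {
          return data;
      }
  }

  // In:  cache.put(new Element(key, new ProtoWrapper(message.toByteArray())));
  // Out: MyMessage msg = MyMessage.parseFrom(wrapper.getData());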
Please pm me if you need more details. Thanks.

-Ridhav
Can you please share your test case, or your event listener code if you have configured one.
Can you please elaborate on what kind of issues you had while the node was being vMotioned?

Personally, I have seen a scenario where, under high load, the JVMs on the VM were starved of CPU and the node was then vMotioned.

As the particular Terracotta node (it could be an L1 or an L2) was not responding because of the resource starvation, the health checks kicked in and ejected the unresponsive node from the cluster.

The best way to solve this problem is to anticipate the load and provide enough resources to the VM in the first place.

Alternatively, you can tune the health checking to cater for the pauses that resource starvation or vMotion might bring into the picture.
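As a starting point, these are the kinds of tc.properties knobs involved on the server side (property names as in the Terracotta HA docs; the values are illustrative, so check the defaults for your release):

  # Time (ms) a connection may stay silent before probing begins
  l2.healthcheck.l1.ping.idletime = 5000
  # Interval (ms) between ping probes
  l2.healthcheck.l1.ping.interval = 1000
  # Unanswered probes tolerated before a socket-connect check
  l2.healthcheck.l1.ping.probes = 3
  # Extra grace via repeated socket connects, for long GC/vMotion pauses
  l2.healthcheck.l1.socketConnectCount = 10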

Cheers,
Ridhav
 