[Logo] Terracotta Discussion Forums (LEGACY READ-ONLY ARCHIVE)
Messages posted by: alexsnaps  XML
Profile for alexsnaps -> Messages posted by alexsnaps [476] Go to Page: 1, 2, 3  ...  30, 31, 32 Next 
Right, somehow Searchable.allowDynamicIndexing(boolean) doesn't implement the builder pattern: its return type is void.

Just do:
Code:
 Searchable searchable = new Searchable();
 searchable.allowDynamicIndexing(true);
 
Right, anything you share across threads by making it available through the cache needs to be thread-safe one way or another... In this case, it just happens that a thread internal to Ehcache (the one that spills to disk) is accessing the entry, but it could be any other thread in your application.

Or you make sure every entry is copied on read, on write, or both. That comes at a higher cost, but guarantees no single instance is ever accessed from different threads.
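For reference, here's what that copy-on-read/copy-on-write setup could look like in ehcache.xml — a minimal sketch, assuming an Ehcache 2.x configuration (cache name and sizing are made up for the example):

```xml
<ehcache>
  <!-- copyOnRead/copyOnWrite make the cache hand out/store copies instead of
       the shared instance, so no two threads ever touch the same object -->
  <cache name="exampleCache"
         maxEntriesLocalHeap="1000"
         copyOnRead="true"
         copyOnWrite="true"/>
</ehcache>
```

Keep in mind the copies are made via serialization by default, so values need to be Serializable and you pay that cost on every access.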
Same questions as above. How do you shut your JVM process down?
Are new files being created with each redeploy of the app ?
Possibly Ehcache's non-stop feature is what you are looking for ?
http://ehcache.org/documentation/configuration/non-stop-cache
Created https://hibernate.atlassian.net/browse/HHH-8732
Thanks a lot for the reproducible testcase!
That's been very helpful. Indeed, the SlewClock wouldn't deal well with the wall clock stepping back even further in time while it was already trying to catch up... I've changed this at the head of trunk (r8300) and merged it back to the 2.7 and 2.6 branches. You should be able to test it out for yourself.
I've also added a test that validates the SlewClock's output, as well as the latency of every method call. As of now, you should see a latency of at most 100 msec _once_ per thread when the clock steps further back than it already had.

btw, can you elaborate on what kind of environment you're deploying to? I'm _very_ surprised by these systems' behavior wrt time!
I'll start with the obvious: DST is _not_ turning the clock back! DST is merely an adjustment to the TZ. UTC, which is what all internal libraries that use time rely on, is _not_ affected by DST. So basically you should not adjust the time itself, but the time-zone settings of your OS, to account for DST.
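You can convince yourself of this with a tiny program — a sketch where I pick an arbitrary far-offset zone (Pacific/Kiritimati, UTC+14) just for illustration: changing the default time zone never makes the UTC epoch step back.

```java
import java.util.TimeZone;

public class TzVsEpoch {
    public static void main(String[] args) throws InterruptedException {
        long before = System.currentTimeMillis();
        // Changing the default time zone (which is all a DST adjustment is)
        // only changes how an instant is *rendered*...
        TimeZone.setDefault(TimeZone.getTimeZone("Pacific/Kiritimati"));
        Thread.sleep(5);
        long after = System.currentTimeMillis();
        // ...the UTC epoch that System.currentTimeMillis() reports keeps
        // moving forward regardless of the zone.
        System.out.println("epoch stepped back: " + (after < before));
    }
}
```

Run it and the epoch never goes backwards, no matter which zone you switch to.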

On to the problem you are seeing: certainly, given the stacktrace of that last thread above, you are experiencing issues wrt setting the time back. You can read up on it in this thread:
http://cs.oswego.edu/pipermail/concurrency-interest/2013-September/011759.html

You could test how your system reacts to time going back during a thread sleep like so:
Code:
 import java.util.Date;
 import java.util.concurrent.TimeUnit;

 public class Main {
   public static void main(String[] args) throws InterruptedException {
     final long sleep = TimeUnit.SECONDS.toMillis(10);
     final Date now = new Date();
     System.out.println("It's " + now);
     System.out.println("I'll see you again at " + new Date(now.getTime() + sleep) + " (in 10 seconds) ...");
     Thread.sleep(sleep);
     System.out.println("Back at " + new Date());
   }
 }
 

Turn the clock back during the 10-second sleep, say by 10 seconds, and observe the outcome. It looks to me like you'd end up effectively sleeping for 20 seconds...

You could also try running the test for SlewClock at :
https://svn.terracotta.org/repo/ehcache/tags/ehcache-core-2.4.3/src/test/java/net/sf/ehcache/util/SlewClockTest.java

Modify it slightly to log how long it sleeps at most when the clock is turned back (you'll notice that this "only" tests the SlewClock itself; the time source is "replaced", so an ongoing Thread.sleep wouldn't suffer from the system clock being turned back...). Basically, modify SlewClockVerifierThread.run() like so:
Code:
         public void run() {
             long overallMax = 0;
             long run = 0;
             while (!stopped.get()) {
                 final long start = System.nanoTime();
                 long timeMillis = SlewClock.timeMillis();
                 final long latency = System.nanoTime() - start;
                 if (latency > overallMax) {
                     System.out.println("Max " + TimeUnit.NANOSECONDS.toMillis(latency));
                     overallMax = latency;
                 }
               // snip
 

And you'll notice that no thread ever sleeps for longer than 50 millis.
While I can't rule out a bug in our code, it still feels to me like what you are experiencing is not an issue in the SlewClock (or Timestamper), but rather odd behaviour when turning the clock back by "a lot", which I'd advise you not to do (as there is no reason to ever do so either).
Hope this helps in understanding what might be going on here...
Not quite sure what you are experiencing here, but I can think of a couple of explanations. All of them, though, lead to the same conclusion: don't set the clock back, and certainly not by a year... Let me first explain what you might be seeing, and then why you should not do this on a running system:

The SlewClock's role is to make sure the Timestamper _never_ sees the clock go backwards. What it does is "slow time down" until it has caught up. The SlewClock should never let you see a latency higher than 50 ms (that's a default, overridable with a system property; I'll let you look up the source if you want to explore that route). The way it does this is by remembering the latest time it gave out and, if required, slowing down the threads querying it. But this code is meant to deal with "small" step-backs (i.e. seconds...). Currently it catches up (when all threads just pound on it) in about 1.5x the time the wall clock stepped back (i.e. in your case it would take at worst 18 months to catch up this year... at best... a year).
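To illustrate the core invariant (this is a hypothetical sketch, not Ehcache's actual SlewClock source — in particular it only clamps, it doesn't also slow callers down while catching up):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: a clock that never returns a value lower than the last one it
// handed out. When the wall clock steps back, callers keep seeing the
// remembered value until real time catches up again.
public class MonotonicClock {
    private final AtomicLong lastReturned = new AtomicLong(Long.MIN_VALUE);

    public long timeMillis(long wallClockMillis) {
        while (true) {
            long last = lastReturned.get();
            long candidate = Math.max(wallClockMillis, last);
            if (lastReturned.compareAndSet(last, candidate)) {
                return candidate; // never below what a previous caller saw
            }
        }
    }

    public static void main(String[] args) {
        MonotonicClock clock = new MonotonicClock();
        long t1 = clock.timeMillis(1_000);
        long t2 = clock.timeMillis(400);   // wall clock stepped back 600 ms
        long t3 = clock.timeMillis(1_200); // wall clock caught up again
        System.out.println(t1 + " " + t2 + " " + t3); // prints "1000 1000 1200"
    }
}
```

The real SlewClock additionally throttles the querying threads, which is where the latency (and the 1.5x catch-up time) comes from.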
The Timestamper "only" tries to timestamp uniquely per milli. Potentially, if you ask for more than it can allocate per milli (the default is 1 << 12 unique timestamps per milli), it could in turn spin and ask the SlewClock more than once for the time. Should it ask more than twice (basically introducing another 100 millis of latency max, plus the time it took for your CPU to lose all CASes to the memory location), the Timestamper will log (at info level though) something along the lines of "Thread spin-waits on time to pass. Looped X times ..."... Do you see any of that?
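The per-milli uniqueness idea can be sketched like so (a simplification of the real Timestamper, which spins when a milli's 1 << 12 slots run out instead of bleeding into the next milli as this sketch does):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: pack the millisecond into the high bits and a per-milli counter
// into the low 12 bits, yielding up to 1 << 12 unique, strictly increasing
// timestamps per millisecond.
public class UniqueTimestamper {
    private static final int BITS = 12;
    private final AtomicLong last = new AtomicLong();

    public long next(long millis) {
        while (true) {
            long prev = last.get();
            // either the start of this milli's range, or one past the last
            // value handed out, whichever is larger
            long candidate = Math.max(millis << BITS, prev + 1);
            if (last.compareAndSet(prev, candidate)) {
                return candidate;
            }
        }
    }

    public static void main(String[] args) {
        UniqueTimestamper ts = new UniqueTimestamper();
        long a = ts.next(5);
        long b = ts.next(5); // same milli, still unique and increasing
        System.out.println(a < b); // prints "true"
    }
}
```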
Anyway, if you have a very concurrent system you could see a couple hundred millis of added latency when creating a Hibernate session (as they all get timestamped; more on that later though). Is that the stalling you're talking about? Or do you actually see threads being stuck for much longer? If so, I'd send you back to your JVM (and OS?) to see how it honours an "on-going" Thread.sleep if the clock steps back (i.e. whether it honours relative vs. absolute time).

Now on to why you should _never_ step the clock back by so much! From an Ehcache perspective: none of your TTI/TTL settings will be honoured! From a Hibernate perspective, the way transaction isolation works is by using this Timestamper stuff to figure out how to handle "in-flight changes" (SoftLock instances put in read-write caches), comparing their timestamps with those of the sessions accessing these cache entries. Basically, you most probably want time to be a concept your system can rely on...

All in all, a very long and painful explanation. But I hope it clarifies things a bit.
Alex
You can't really use write-behind in such a setup. Or at least not without changing your current architecture a lot.
Basically, a writer (used by the write-behind cache) could use an entityManager (and entities) to have things persisted to the database. But you can't store entities (managed ones at least) in your cache directly. Also, using write-behind requires you to not use the cache-aside pattern, and since you are accessing the cache through Hibernate, you're effectively using that pattern today.
So, long story short, you'd need to overhaul your current data access patterns quite a bit in order to use write-behind.
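For what it's worth, the write-behind idea itself boils down to this — a generic sketch (deliberately not using Ehcache's actual writer API; the queue, the single drain call, and the in-memory "database" list are all stand-ins):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch: cache writes are queued and acknowledged immediately; a background
// writer drains the queue and persists entries, decoupling write latency
// from the backing store.
public class WriteBehindSketch {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final List<String> persisted = new ArrayList<>(); // stand-in for the DB

    void put(String key) {
        queue.offer(key); // returns immediately; persistence happens later
    }

    void drainOnce() throws InterruptedException {
        // In a real setup this loop runs on a dedicated writer thread.
        String key;
        while ((key = queue.poll(10, TimeUnit.MILLISECONDS)) != null) {
            persisted.add(key); // an entityManager/DAO call would go here
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WriteBehindSketch cache = new WriteBehindSketch();
        cache.put("k1");
        cache.put("k2");
        cache.drainOnce();
        System.out.println(cache.persisted); // prints "[k1, k2]"
    }
}
```

Note how the writer is the only component touching the database: that's exactly why it clashes with cache-aside, where your application code also writes to the database directly.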
Hope this makes some sense, and helps you decide where to take this from here.
Alex
Still can't help with this... Also, you see threads waiting to acquire locks, but do you see threads holding locks? Or do these threads hang forever?
I really don't know why you think you face a similar issue.
Correct me if I'm wrong, but you don't use BlockingCache there, do you?! This is all using Ehcache's Hibernate 2LC, isn't it?
All I can see from the thread dump is contention... I must be missing something. I'd start a new thread and share many more details. Thanks.
Not quite sure what the problem is, but there is a bigger one lurking.
You can't use a read-write caching strategy with an asynchronously replicated cache. Basically, when using that strategy, Hibernate will try to "lock" entries in your cache during updates, and this won't work out well in such a replicated environment.
Can you provide a small reproducible test case?
I can't think of anything being wrong right now, but with exact versioning info, config & some example code, we should be able to figure it out...
Thanks!
It's a known issue. All my fault, actually! A refactoring I did in the 2.7 line introduced this. But I fixed it, and it'll be part of the coming 2.7.2 release.
Sorry about the inconvenience!
As I explained above... this isn't about using Ehcache with OpenJDK, but about using the sizing feature of Ehcache with OpenJDK.
We believe the fallbacks shouldn't be too far off though. And in this particular case above, given we managed to get hold of theUnsafe, you're all good...
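For context, "getting hold of theUnsafe" refers to the classic reflection trick — sketched below. Whether it succeeds depends on the JDK vendor and version (which is exactly why the sizing code needs fallbacks at all):

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

// Sketch: reflectively read the private static "theUnsafe" singleton, the
// usual way sizing libraries gain access to memory-layout primitives.
public class UnsafeLookup {
    public static void main(String[] args) throws Exception {
        Field theUnsafe = Unsafe.class.getDeclaredField("theUnsafe");
        theUnsafe.setAccessible(true);
        Unsafe unsafe = (Unsafe) theUnsafe.get(null);
        System.out.println("got Unsafe: " + (unsafe != null));
        // e.g. the header offset before byte[] elements, useful for sizing
        System.out.println("array base offset (byte[]): " + unsafe.arrayBaseOffset(byte[].class));
    }
}
```

On JVMs where this field isn't accessible, sizing falls back to estimates, which is the "shouldn't be too far off" case above.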
Powered by JForum 2.1.7 © JForum Team