Terracotta Discussion Forums (LEGACY READ-ONLY ARCHIVE)
Messages posted by: lijie

cdennis wrote:
I've looked over the thread dumps here and I see no evidence of any kind of deadlock occurring in the product. I do see that you are using explicit locking on the cache - one thing that could cause this issue would be if there were unbalanced lock/unlock calls through the explicit locking API. It would likely be worth your while to double check all of your code that interacts with the explicit locking API to confirm that there are no errors there.

Also, if you are a paying customer and have a support contract with us, it would definitely be best to pursue this through the regular support channels.

Chris 


Thanks for your reply.
Some time ago we were not using explicit locks, and we had this problem.
Then we added explicit locking. Code:
// put data -- acquire the lock BEFORE the try block; otherwise a failed
// acquire still triggers the release in finally (an unbalanced unlock)
cache.acquireWriteLockOnKey(ele.getObjectKey());
try {
    cache.put(ele);
} finally {
    cache.releaseWriteLockOnKey(ele.getObjectKey());
}
// ---------------------------------------------
// get data
cache.acquireReadLockOnKey(key);
try {
    ele = cache.get(key);
} finally {
    cache.releaseReadLockOnKey(key);
}
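For reference, a minimal helper that keeps every acquire/release balanced at a single place; a sketch only, using the same public explicit-locking API (the LockedCacheOps/lockedPut/lockedGet names are hypothetical):
Code:
import net.sf.ehcache.Cache;
import net.sf.ehcache.Element;

// Sketch: centralize the lock discipline so no call site can unbalance it.
public final class LockedCacheOps {

    // Acquire before try so a failed acquire never reaches the finally block.
    public static void lockedPut(Cache cache, Element ele) {
        Object key = ele.getObjectKey();
        cache.acquireWriteLockOnKey(key);
        try {
            cache.put(ele);
        } finally {
            cache.releaseWriteLockOnKey(key);
        }
    }

    public static Element lockedGet(Cache cache, Object key) {
        cache.acquireReadLockOnKey(key);
        try {
            return cache.get(key);
        } finally {
            cache.releaseReadLockOnKey(key);
        }
    }
}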
 


I am also confused about why the cache put/get calls block inside AbstractLockedOffHeapHashMap:
at com.terracottatech.offheapstore.AbstractLockedOffHeapHashMap.put(AbstractLockedOffHeapHashMap.java:130)
at com.terracottatech.offheapstore.AbstractLockedOffHeapHashMap.getAndSetMetadata(AbstractLockedOffHeapHashMap.java:280)

Who holds the lock in AbstractLockedOffHeapHashMap without releasing it?
We create an off-heap cache (seen with both ehcache-ee-2.7.0.jar and ehcache-ee-2.8.2.jar), then get and put data concurrently (some keys are duplicated). Sometimes our system blocks and can no longer get or put data from the cache.

The thread dump is attached.

The stack traces look like:

at com.terracottatech.offheapstore.AbstractLockedOffHeapHashMap.getAndSetMetadata(AbstractLockedOffHeapHashMap.java:280)
at com.terracottatech.offheapstore.AbstractLockedOffHeapHashMap.put(AbstractLockedOffHeapHashMap.java:130)
at com.terracottatech.offheapstore.AbstractLockedOffHeapHashMap.put(AbstractLockedOffHeapHashMap.java:130)
at com.terracottatech.offheapstore.AbstractLockedOffHeapHashMap.put(AbstractLockedOffHeapHashMap.java:130)
at com.terracottatech.offheapstore.AbstractLockedOffHeapHashMap.put(AbstractLockedOffHeapHashMap.java:130)
at com.terracottatech.offheapstore.AbstractLockedOffHeapHashMap.put(AbstractLockedOffHeapHashMap.java:130)
 


Any ideas? Thanks.
Hi, we have hit the same problem. How can we avoid generating the jar in the temp directory?
Thanks.
We use bigmemory-3.7.2.jar
with the following configuration:
<persistence strategy="localRestartable" synchronousWrites="false"/>

The /persistence/cachedata directory grows very large, for example:
513M seg000000005.frs
513M seg000000006.frs
513M seg000000007.frs
513M seg000000008.frs
513M seg000000009.frs
513M seg000000010.frs
513M seg000000011.frs
513M seg000000012.frs
513M seg000000013.frs
513M seg000000014.frs
513M seg000000015.frs
513M seg000000016.frs
513M seg000000017.frs
...

How can we control the number and the size of these files?
Thanks.
Hi,
We have the same problem.
Have you resolved it?
How can we control the size of /persistence/cachedata?
Thanks.
Hi,
I see that the issue (https://jira.terracotta.org/jira/browse/EHC-978) has been fixed.
When will BigMemory Go 4.0 be released?
Thanks.
I create an off-heap cache and set MaxMemoryOffHeap = 10M.
Then I put data into the cache and print the cache size; the output looks like:
cache.size = 100
cache.size = 200
cache.size = 300
cache.size = 400
cache.size = 500
cache.size = 600
cache.size = 700
cache.size = 800
...
cache.size = 6800
cache.size = 6900
cache.size = 7000
cache.size = 7100
cache.size = 7200
cache.size = 7300
cache.size = 7400
cache.size = 7500
cache.size = 7600
cache.size = 7700
cache.size = 7800
cache.size = 7890
cache.size = 7874
cache.size = 7860
cache.size = 7775
cache.size = 7717
cache.size = 7580
cache.size = 7477
cache.size = 7401
cache.size = 7359
cache.size = 7300
cache.size = 7265
cache.size = 7232
...

Maybe the off-heap size is not large enough, so the cache size reaches 7890 and then decreases.
My questions are:
Why does the cache size not stay at 7890 but keep decreasing?
Which data is removed when the cache reaches its maximum size?
And how can I control this?
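To see exactly which elements are being removed, one option is to register a cache event listener. A minimal sketch against the Ehcache 2.x net.sf.ehcache.event.CacheEventListener API (the EvictionLogger name is made up for illustration, and I am not certain every storage tier fires eviction events in every version):
Code:
import net.sf.ehcache.CacheException;
import net.sf.ehcache.Ehcache;
import net.sf.ehcache.Element;
import net.sf.ehcache.event.CacheEventListener;

// Logs every capacity eviction so you can see what the store removes.
public class EvictionLogger implements CacheEventListener {
    public void notifyElementEvicted(Ehcache cache, Element element) {
        System.out.println("evicted: " + element.getObjectKey());
    }
    public void notifyElementExpired(Ehcache cache, Element element) { }
    public void notifyElementPut(Ehcache cache, Element element) throws CacheException { }
    public void notifyElementUpdated(Ehcache cache, Element element) throws CacheException { }
    public void notifyElementRemoved(Ehcache cache, Element element) throws CacheException { }
    public void notifyRemoveAll(Ehcache cache) { }
    public void dispose() { }
    public Object clone() throws CloneNotSupportedException { return super.clone(); }
}

It would be registered with cache.getCacheEventNotificationService().registerListener(new EvictionLogger());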
Thanks.
Hi Steve,

I have filed it at https://jira.terracotta.org/jira/browse/EHC-978

Thanks.
I loaded 2800000 objects; the load took 1780250 ms.

Then I shut down and restarted my app. The recovery logs:

INFO: Updating stored schema
2012-12-12 21:06:43 net.sf.ehcache.store.offheap.search.Slf4jLoggerFactory$Logger info
INFO: Updating stored schema
2012-12-12 21:07:02 net.sf.ehcache.store.offheap.search.Slf4jLoggerFactory$Logger info
INFO: Updating stored schema
2012-12-12 21:07:02 net.sf.ehcache.store.offheap.search.Slf4jLoggerFactory$Logger info
INFO: Updating stored schema
2012-12-12 21:08:05 com.terracottatech.frs.recovery.RecoveryManagerImpl$ProgressLoggingFilter filter
INFO: Recovery progress 10%
2012-12-12 21:08:30 net.sf.ehcache.store.offheap.search.Slf4jLoggerFactory$Logger info
INFO: Updating stored schema
2012-12-12 21:09:44 com.terracottatech.frs.recovery.RecoveryManagerImpl$ProgressLoggingFilter filter
INFO: Recovery progress 20%
2012-12-12 21:11:45 com.terracottatech.frs.recovery.RecoveryManagerImpl$ProgressLoggingFilter filter
INFO: Recovery progress 30%
2012-12-12 21:11:55 net.sf.ehcache.store.offheap.search.Slf4jLoggerFactory$Logger info
INFO: Updating stored schema
2012-12-12 21:13:26 com.terracottatech.frs.recovery.RecoveryManagerImpl$ProgressLoggingFilter filter
INFO: Recovery progress 40%
2012-12-12 21:15:34 com.terracottatech.frs.recovery.RecoveryManagerImpl$ProgressLoggingFilter filter
INFO: Recovery progress 50%
2012-12-12 21:17:30 com.terracottatech.frs.recovery.RecoveryManagerImpl$ProgressLoggingFilter filter
INFO: Recovery progress 60%
2012-12-12 21:19:09 com.terracottatech.frs.recovery.RecoveryManagerImpl$ProgressLoggingFilter filter
INFO: Recovery progress 70%
2012-12-12 21:21:05 com.terracottatech.frs.recovery.RecoveryManagerImpl$ProgressLoggingFilter filter
INFO: Recovery progress 80%
2012-12-12 21:23:21 com.terracottatech.frs.recovery.RecoveryManagerImpl$ProgressLoggingFilter filter
INFO: Recovery progress 90%
2012-12-12 21:24:08 net.sf.ehcache.store.offheap.search.Slf4jLoggerFactory$Logger info
INFO: Updating stored schema
2012-12-12 21:25:20 com.terracottatech.frs.recovery.RecoveryManagerImpl$ProgressLoggingFilter filter
INFO: Recovery progress 100%
recover cost = 1161304, cache.size = 2800000 


My program code snippet:
Code:
cm = CacheManager.create();
Cache cache2 = cm.getCache("test");
if (cache2 != null && cache2.getSize() > 0) {
    return cache2;
}
ccf = new CacheConfiguration();
ccf.persistence(new PersistenceConfiguration().strategy(Strategy.LOCALRESTARTABLE).synchronousWrites(false));
ccf.setName("test");
ccf.setMaxEntriesLocalHeap(5000);
ccf.setOverflowToOffHeap(Boolean.TRUE);
ccf.setMaxMemoryOffHeap("20000m");
Searchable searchable = new Searchable();
// fieldNames.length > 100
for (String fieldName : fieldNames) {
    searchable.addSearchAttribute(new SearchAttribute().name(fieldName));
}
ccf.addSearchable(searchable);
cache = new Cache(ccf);
cm.addCache(cache);
return cache;


Why does recovery take so long?

Our test environment:
CPU: 64 cores @ 2.00GHz
Memory: 128G
OS: CentOS release 6.2 (Final)
JDK 1.6 (-XX:MaxDirectMemorySize=20480m)

Thanks.
OK, I see.

Thank you very much again for your quick reply.
Hi Joshi, thanks for your quick reply.

Maybe I did not state the question clearly.

I want to add capacity to an already initialized cache.

For example, I have a cache initialized with 100M:

Code:
CacheManager cm = CacheManager.create();
CacheConfiguration ccf = new CacheConfiguration();
ccf.setName("test");
ccf.setMaxEntriesLocalHeap(5000);
ccf.setOverflowToOffHeap(Boolean.TRUE);
ccf.setMaxBytesLocalOffHeap("100M");
Cache cache = new Cache(ccf);
cm.addCache(cache);
return cache;

Then I put data into the cache. When I find the cache is nearly full, I want to add 50M to it dynamically (so the cache grows to 150M and the existing data stays usable). How can I add 50M to the cache dynamically?
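For comparison, some heap-tier limits can be changed on a running cache through its live CacheConfiguration. A sketch of that call, assuming the Ehcache 2.x dynamic-configuration behavior; whether the off-heap tier accepts a similar runtime change is exactly what I am asking:
Code:
// A sketch: some CacheConfiguration setters apply to a running cache in
// Ehcache 2.x. I have only seen this documented for the heap/disk tiers,
// not for the off-heap tier.
Cache cache = cm.getCache("test");
cache.getCacheConfiguration().setMaxEntriesLocalHeap(10000); // heap entry limit, applied live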

Thanks.
I got a cache like this:

Code:
CacheManager cm = CacheManager.create();
CacheConfiguration ccf = new CacheConfiguration();
ccf.setName("test");
ccf.setMaxEntriesLocalHeap(5000);
ccf.setOverflowToOffHeap(Boolean.TRUE);
ccf.setMaxMemoryOffHeap("100M");
Cache cache = new Cache(ccf);
cm.addCache(cache);

Can BigMemory Go support dynamically extending the cache size?
How can I extend the off-heap size at runtime?

Thanks.

klalithr wrote:
There are two parts to your question
a) Creating a searchable cache - http://ehcache.org/documentation/apis/search

b) How do you keep a cache in sync with the DB.
http://scaleupandout.blogspot.com/2012/02/how-do-i-keep-cache-and-db-in-sync.html

HTH! 


I can't open "http://scaleupandout.blogspot.com/2012/02/how-do-i-keep-cache-and-db-in-sync.html".
And check this link for Allocating Direct Memory in the JVM: http://ehcache.org/documentation/configuration/bigmemory#configuration

NOTE: Direct Memory and Off-heap Memory Allocations

To accommodate server communications layer requirements, the value of maxDirectMemorySize must be greater than the value of maxBytesLocalOffHeap. The exact amount greater depends upon the size of maxBytesLocalOffHeap. The minimum is 256MB, but if you allocate 1GB more to the maxDirectMemorySize, it will certainly be sufficient. The server will only use what it needs and the rest will remain available.
 

The code snippet:
Code:
public void testTotalCount(Cache cache) throws ParseException {
    List<Person> personList = new ArrayList<Person>();
    DateFormat df = new SimpleDateFormat("yyyy-MM-dd");
    Attribute<Date> create_time = cache.getSearchAttribute("create_time");

    // get the first 1000 results
    Query q = cache.createQuery();
    q.maxResults(1000);
    q.includeKeys();
    q.includeValues();
    // note: September has 30 days; "2012-09-31" would be rolled over by a lenient parser
    Criteria c1 = create_time.between(df.parse("2012-09-01"), df.parse("2012-09-30"));
    q.addCriteria(c1).addOrderBy(create_time, Direction.ASCENDING);
    Results rs = q.execute();
    for (Result r : rs.all()) {
        personList.add((Person) r.getValue());
    }

    // get the total count
    Query qC = cache.createQuery();
    qC.addCriteria(c1);
    qC.includeAggregator(Aggregators.count());
    Results cRs = qC.execute();
    int count = (Integer) cRs.all().get(0).getAggregatorResults().get(0);
}


So I execute the query twice: once for the first 1000 results and once for the total count. I am not sure whether this is the right approach?
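One alternative, a sketch only and assuming Results.size() and Results.range(start, count) from the Ehcache search API behave as documented: run a single unbounded query, take the total from size(), and page with range():
Code:
// Single query: total count from Results.size(), first page via range().
// Sketch only -- an unbounded query materializes every matching result,
// which may cost more memory than the two-query version above.
Query q = cache.createQuery();
q.includeKeys();
q.includeValues();
q.addCriteria(c1).addOrderBy(create_time, Direction.ASCENDING);
Results rs = q.execute();
int total = rs.size();                  // total match count
for (Result r : rs.range(0, 1000)) {    // first 1000 results
    personList.add((Person) r.getValue());
}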
 