[Logo] Terracotta Discussion Forums (LEGACY READ-ONLY ARCHIVE)
Messages posted by: steve
I could be wrong, but last I checked "BigMemory Go" is free but not open source. It does support the regular Ehcache API/library, which is open.
There are no known issues mixing and matching 64-bit and 32-bit JVMs. The only things that change are how much space an OID requires (which can be mitigated with compressed oops) and how large a heap/offheap can be used.
Still works with JDK 1.6
Unfortunately it's unlikely DSO will be on 1.7 anytime soon. If possible I would leverage Ehcache and/or the Toolkit if you're on 1.7.

Anything with "ee" in it would only be in the enterprise version of the product. Look for a jar of the same name without the "ee" in the open source kit. Another option is to use the trial version of the enterprise product, which will have the ee jar.

Pretty much. Also adds fault tolerance and an admin console.

Yeah, licensing BigMemory Max will take care of all your clustering needs.
Sorry for the confusion. The features you described are configured and work the same whether using BigMemory or Ehcache.

Does that help (Making sure I understand the question)?

Thanks! We are taking a look ASAP.

Thanks for posting. Can you file a JIRA? We'll take a look at it.


Lots of things impact recovery performance, including:
Disk Speed
Machine speed and load
Tuning parameters
Index count

We'll reproduce and set expectations.
I would at least upgrade to 3.6.5. Lots of useful bug fixes on the 3.6 line in there.
I would say start small, maybe 10k-100k, and then keep upping it until you're no longer seeing the performance/latencies you want.
Off the top of my head, I might create a class called BigFileStorageManager that is created with a reference to a cache as its parameter. The class would have two methods:

store(Key, BigFileStream)
BigFileStream retrieve(Key)

What it would do is break your multi-gig file into smaller chunks and give them specialized key names derived from the original (i.e. myKey-1of100, myKey-2of100, etc.). The stream would be an abstraction that, as bytes are requested from it, pulls the needed section of the file from the cache and returns the proper portion without the user of the stream knowing it's happening (basically making the stored entry look like one big stream).
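A minimal sketch of that chunking idea. BigFileStorageManager and the key scheme are just the illustrative names from above, not part of any Terracotta/Ehcache API, and a plain Map stands in for the cache so the example is self-contained; the eager byte[] retrieve here is a simplification of the lazy BigFileStream described:

```java
import java.io.ByteArrayOutputStream;
import java.util.HashMap;
import java.util.Map;

public class BigFileStorageManager {
    private static final int CHUNK_SIZE = 4; // tiny for the demo; think ~1 MB in practice

    private final Map<String, byte[]> cache; // stand-in for a real cache instance

    public BigFileStorageManager(Map<String, byte[]> cache) {
        this.cache = cache;
    }

    // Break the payload into chunks stored under derived keys: myKey-1of3, myKey-2of3, ...
    public void store(String key, byte[] data) {
        int total = (data.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
        cache.put(key + "-count", String.valueOf(total).getBytes()); // remember chunk count
        for (int i = 0; i < total; i++) {
            int from = i * CHUNK_SIZE;
            int to = Math.min(from + CHUNK_SIZE, data.length);
            byte[] chunk = new byte[to - from];
            System.arraycopy(data, from, chunk, 0, chunk.length);
            cache.put(key + "-" + (i + 1) + "of" + total, chunk);
        }
    }

    // Pull the chunks back in order and stitch the original payload together.
    // (A real BigFileStream would do this lazily, as bytes are requested.)
    public byte[] retrieve(String key) {
        int total = Integer.parseInt(new String(cache.get(key + "-count")));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int i = 1; i <= total; i++) {
            byte[] chunk = cache.get(key + "-" + i + "of" + total);
            out.write(chunk, 0, chunk.length);
        }
        return out.toByteArray();
    }
}
```

The point of the derived-key scheme is that each chunk stays small enough for the cache to handle comfortably, while the manager keeps the illusion of one big entry.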

Does that help?
We don't really test with files that big, so I don't know for sure, but... I've heard of a number of people fragmenting large entries into smaller entries and wrapping them in a stream-like class for easily getting them in and out of the cache. Does that make sense?
I could give more detail if I'm being too abstract.
It's also fully restartable and fault tolerant.
Powered by JForum 2.1.7 © JForum Team