[Logo] Terracotta Discussion Forums (LEGACY READ-ONLY ARCHIVE)
Messages posted by: steve
Profile for steve -> Messages posted by steve [585]
Author Message
Works for me. Maybe try again?
Depending on what you're trying to do, some people have had good success using the search API rather than iterating. Would that work for you?
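In case it helps, with Ehcache's Search API you declare which attributes are searchable in the cache configuration and then query against them instead of iterating keys. A minimal sketch (cache and attribute names are illustrative, assuming Ehcache 2.4+ where search is available):

```xml
<cache name="people" maxEntriesLocalHeap="10000">
  <searchable>
    <!-- extracted from cached values via a getter of the same name -->
    <searchAttribute name="age"/>
  </searchable>
</cache>
```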

Usually this sort of thing happens when you have a non-transient reference to a JDBC connection in some object you are storing in your session. Is that possible here?

Would love it if someone built one though. I wonder if one could adapt the OSCache one to plug in on top of Ehcache?
I have checked out the tech preview of 2.6. It has a new fault-tolerant restartable store that is fully memory resident once restarted.

If you need to stick with 2.5, there is also the ability to persist and restart, but it isn't fault tolerant (i.e. it must have a clean shutdown).
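For the 2.5-style persistence, the classic disk-persistent configuration looks along these lines (cache name and path are illustrative; a restart is only safe after a clean shutdown):

```xml
<ehcache>
  <diskStore path="java.io.tmpdir"/>
  <cache name="restartableData"
         maxEntriesLocalHeap="10000"
         overflowToDisk="true"
         diskPersistent="true"/>
</ehcache>
```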

Certainly reasonable to put in some protection for this sort of thing. File a request: http://jira.terracotta.org
Sounds odd. File an issue: http://jira.terracotta.org
I'll let someone who knows Spring annotations help you get through the issues of configuring it, but I thought I would point out that instead of using count-based tuning (save the last 10 or 100 method invocations), you'd probably get better performance if you used byte-based tuning (use 100m of heap for caching).
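For reference, byte-based tuning might look something like this in ehcache.xml (cache name and sizes here are illustrative, assuming Ehcache 2.5+ where `maxBytesLocalHeap` is available):

```xml
<ehcache>
  <!-- budget 100 MB of heap for this cache instead of counting entries -->
  <cache name="methodResultCache"
         maxBytesLocalHeap="100M"
         timeToLiveSeconds="300"/>
</ehcache>
```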


There are two reasons that someone might get that warning.

1) An object being put in the cache has a reference to a graph of objects that is larger than expected and/or refers to something unintentionally.

2) You may just have large object graphs.

The warning is intended for people in situation 1. If you are in situation 2, just raise the limit and the warning will go away.

It can be configured with this:

<sizeOfPolicy maxDepth="100" maxDepthExceededBehavior="abort"/>

more details in this doc:

Good questions:
- If everything stays on heap then you're right: everyone will get the same instance as long as copy on read/write is false.

- In an overflow and/or clustered world you can NOT rely on object identity across gets/puts. At any time, in the background, objects may stop being identical.

Does that help?
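The copy-on-read/write behaviour mentioned above is a per-cache setting; as a sketch (cache name illustrative):

```xml
<cache name="identitySensitiveCache"
       maxEntriesLocalHeap="1000"
       copyOnRead="false"
       copyOnWrite="false"/>
```

With both set to false and everything on heap, gets return the same instance that was put; turning either on hands out copies.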
In clustered it does now. In standalone it will in the summer release. Just curious: does this need to be monotonic, or do you just need guaranteed no dups? You may want to batch for efficiency.
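To illustrate the batching idea: a hypothetical client-side allocator can reserve a block of IDs from the shared counter in one round trip, which guarantees no duplicates but gives up strict monotonicity across clients. This is just a sketch (the `AtomicLong` stands in for whatever shared counter you'd actually use):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: reserve IDs in batches from a shared counter.
// Unique across all allocators, but not globally monotonic.
class BatchIdAllocator {
    private static final int BATCH_SIZE = 100;
    private final AtomicLong shared;  // stand-in for the clustered counter
    private long next = 0;
    private long limit = 0;

    BatchIdAllocator(AtomicLong shared) {
        this.shared = shared;
    }

    synchronized long nextId() {
        if (next == limit) {
            // one "remote" operation reserves BATCH_SIZE ids at once
            next = shared.getAndAdd(BATCH_SIZE);
            limit = next + BATCH_SIZE;
        }
        return next++;
    }
}
```

The trade-off is the usual one: larger batches mean fewer round trips but bigger gaps in the sequence if a client dies holding unused IDs.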

The best way to get a suggestion looked at is to put it in a JIRA.

Are you seeing this after a restart or during normal usage?
I would file a JIRA on this to get the fastest response.
No worries, thanks for letting us know!
Powered by JForum 2.1.7 © JForum Team