Terracotta Discussion Forums (LEGACY READ-ONLY ARCHIVE)
Messages posted by: danap
Thanks Tim.

I used java.ext.dirs because it was a quick way to move the classes up. The problem is that Resin does not give us easy access to the -classpath of the actual Resin server JVM, since that JVM is started by a watchdog application. I have done some more work on the Resin TIM to name the root and server class loaders, and I am attempting to test this now. If that works, I can move the jars into the server ClassLoader. I spoke with Santi and he indicated that this should work better, since classes in the ext dirs do not have access to classes in the webapp.

As for isolating the web applications' caches by cache name, this might be problematic. We use ehcache.xml to define parameters for our caches' behavior ( max items, max idle time, etc. ). I could use a different configuration mechanism that takes this into account, but that is extra work.
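For illustration, the kind of per-cache settings we keep in ehcache.xml looks roughly like this ( the cache name and values here are made up, not our real configuration ):
Code:
 <ehcache>
   <defaultCache maxElementsInMemory="1000" eternal="false"
                 timeToIdleSeconds="300" timeToLiveSeconds="600"
                 overflowToDisk="false"/>
   <cache name="backendResultsCache" maxElementsInMemory="5000" eternal="false"
          timeToIdleSeconds="600" timeToLiveSeconds="1800"
          overflowToDisk="false"/>
 </ehcache>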

Thanks
OK, I have made some progress.

I have moved the jars needed by EHCache into a directory outside of my web applications and put that directory in a -Djava.ext.dirs=... parameter to the JVM running the container.

This has corrected the problem with EHCache, but when I attempt to use the application I get exceptions from Hibernate: Hibernate objects are getting ClassCastExceptions when data is read out of the cache.

I am confused about how ClassLoaders and classes work together in Terracotta.

In Java, classes are the same IF they have the same name AND the same class loader. So Webapp A and Webapp B can both have the Foo class but instances of Foo, one from A and one from B, would not be considered to be the same class. As such, any static variables created by Foo would have separate instances between Webapp A and Webapp B.

Say:
Code:
 import java.util.Map;
 import java.util.HashMap;
 public class Foo {
   public static Map bar = new HashMap();
 
   ...
 }
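
To illustrate with plain Java ( outside Terracotta ), something like the following would print false twice; the classpath URL and the unqualified class name are just placeholders:
Code:
 import java.lang.reflect.Field;
 import java.net.URL;
 import java.net.URLClassLoader;
 
 // Sketch: load Foo through two sibling class loaders and compare the classes
 // and their static fields. The URL and class name are placeholders.
 public class ClassLoaderDemo {
   public static void main(String[] args) throws Exception {
     URL[] path = { new URL("file:/tmp/classes/") };
     ClassLoader a = new URLClassLoader(path, null); // null parent: each loader loads Foo itself
     ClassLoader b = new URLClassLoader(path, null);
 
     Class<?> fooA = a.loadClass("Foo");
     Class<?> fooB = b.loadClass("Foo");
     System.out.println(fooA == fooB);                      // false: same name, different loaders
 
     Field barA = fooA.getField("bar");
     Field barB = fooB.getField("bar");
     System.out.println(barA.get(null) == barB.get(null));  // false: separate static maps
   }
 }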
 


In Terracotta, the backend distinguishes classes by class loader name and class name. Therefore, Foo.bar in the ClassLoader named RESIN_WEBAPP::/A would be different from Foo.bar in the ClassLoader named RESIN_WEBAPP::/B. Right?

If so, then if I declare Foo.bar as a root there should be a separate root for each ClassLoader that loads the Foo class.
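That is, something like this in the application/dso section of the tc-config ( assuming Foo were fully qualified; the com.example package here is just an example ):
Code:
 <roots>
   <root>
     <field-name>com.example.Foo.bar</field-name>
   </root>
 </roots>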

But this is not the behavior I am seeing. It appears as though, at least for static roots, there is only one cluster-wide instance. For example, net.sf.ehcache.CacheManager.ALL_CACHE_MANAGERS is behaving as though there is only one.
I have two web applications deployed into a Resin 3.1.6 container, both of which use a framework that we have developed. As such, they have pretty much the same structure from the service tier down. In this framework we define four Hibernate session factories to access four different databases, using EHCache as the second-level cache provider. We also have another EHCache CacheManager defined for each application that is used to cache objects constructed from the results of calls to a back-end EJB tier.

The first application starts fine and is functional, but when I add the second application I get ClassCastExceptions from EHCache. See below.

I think what is happening here is that the first application starts and populates ALL_CACHE_MANAGERS with CacheManagers loaded by the first application's class loader. Then, when the first CacheManager for the second application is created, it attempts to iterate through this list, which gets faulted in from the TC server, and gets the ClassCastException: the CacheManagers in the faulted-in list were loaded with the other webapp's class loader, but the class that the reference is being cast to was loaded with the second webapp's class loader.

Does that sound right?

If so, this means that the EHCache library needs to be loaded in a shared classloader. This makes my deployment more complicated and it is something I would like to avoid.

Code:
 [2008-08-12 21:41:46.505] Chained Exception Object: java.lang.ClassCastException
 [2008-08-12 21:41:46.505] 
 [2008-08-12 21:41:46.505] Description: net.sf.ehcache.CacheManager
 [2008-08-12 21:41:46.505] 
 [2008-08-12 21:41:46.505] Stack trace:
 [2008-08-12 21:41:46.505]       net.sf.ehcache.CacheManager.detectAndFixDiskStorePathConflict(CacheManager.java:299)
 [2008-08-12 21:41:46.505]       net.sf.ehcache.CacheManager.configure(CacheManager.java:275)
 [2008-08-12 21:41:46.505]       net.sf.ehcache.CacheManager.__tc_wrapped_init(CacheManager.java:218)
 [2008-08-12 21:41:46.505]       net.sf.ehcache.CacheManager.init(CacheManager.java)
 [2008-08-12 21:41:46.505]       net.sf.ehcache.CacheManager.<init>(CacheManager.java:205)
 [2008-08-12 21:41:46.505]       sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 [2008-08-12 21:41:46.505]       sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 [2008-08-12 21:41:46.505]       sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 [2008-08-12 21:41:46.505]       java.lang.reflect.Constructor.newInstance(Constructor.java:494)
 [2008-08-12 21:41:46.505]       org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:96)
 
Actually, I am not sure but I may have found a different solution.

I analyzed the Joda Time objects that were likely to be put in the session (my main concern is session support): LocalDate, LocalTime, DateTime, etc. They all follow a basic pattern in that they contain state ( like a long for milliseconds since the beginning of the epoch ) and a reference to a Chronology. A Chronology is where all the action happens. Calculations are performed by passing the internal state, or a reference to this, into a Chronology's methods and getting back the result. The Chronology either uses the passed internal state or calls methods on interfaces that each of the stateful classes implements to do its calculations, but the Chronology itself does not contain state. In fact, in some cases the Chronology is a singleton itself ( for example, ISOChronology has a static member that holds the UTC Chronology ).

Given all this, it seems to me that the Chronology is not really something that needs to be clustered. So I have written a TIM that contains some classes that wrap the chronology in a decorator and hold a ChronologyMemento that knows how to acquire a reference to the Chronology when it gets to another TC client JVM. So far I have implemented a ChronologyMemento for the ISOChronology and the simple descendants of AssembledChronology. Some descendants of AssembledChronology are more complex and will need specialized ChronologyMementos created.

Code:
 public interface ChronologyMemento extends Serializable {
   Chronology resolve();
 }
 
 public class ChronologyWrapper extends Chronology {
   private ChronologyMemento memento;
   private transient Chronology decorated = null;
 
   public static Chronology createChronologyWrapper(Chronology c) {
     ... // Create the wrapper
   }
 
   private ChronologyWrapper(Chronology chronology) {
     decorated = chronology;
     ... // Create the appropriate ChronologyMemento for the chronology passed in.
   }
 
   private void checkChronology() {
     // Re-acquire the local Chronology after the wrapper is faulted into another client JVM.
     if ( decorated == null ) {
       decorated = memento.resolve();
     }
   }
 
   // Delegate every Chronology method, calling checkChronology() before forwarding
   // to the decorated instance:
   public void foo() {
     checkChronology();
     decorated.foo();
   }
 }
 


The TIM exports these classes and the ChronologyMemento implementation classes, and then instruments all the Chronology-referencing classes so that the Chronologies they create or are passed get wrapped in ChronologyWrappers via ChronologyWrapper.createChronologyWrapper(...).
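
For example, the memento for ISOChronology can be as simple as remembering the zone ID and re-acquiring the local singleton ( just a sketch, not the final code ):
Code:
 import java.io.Serializable;
 import org.joda.time.Chronology;
 import org.joda.time.DateTimeZone;
 import org.joda.time.chrono.ISOChronology;
 
 // Sketch only: store nothing but the zone ID so no Chronology state travels.
 public class ISOChronologyMemento implements ChronologyMemento {
   private final String zoneId;
 
   public ISOChronologyMemento(ISOChronology chronology) {
     this.zoneId = chronology.getZone().getID();
   }
 
   public Chronology resolve() {
     // Re-acquire the local ISOChronology singleton for this zone on the target JVM.
     return ISOChronology.getInstance(DateTimeZone.forID(zoneId));
   }
 }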

Does this sound like an appropriate approach? Is there anything that does not make sense here?

PS. I plan to post the code for this TIM when I get it working and get the legal issues with doing so ironed out with my client ( I am a consultant doing work for hire ).
I am running into this same issue and the locks config will not work because of performance concerns.

It seems to me that the problem is with the iChronology field, in each of the concrete time/date implementations ( e.g. LocalDate, DateTime, etc... ).

The Chronology implementations all implement readResolve so that, when serialized, they get hooked up to the local instances of the fields. Personally, I do not understand why they don't all also implement writeReplace but it seems that only ISOChronology does so.

In any case, I think that a call to readResolve for most of these and some special work for ISOChronology could fix this.
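
For context, the idiom these classes use is roughly the following ( a generic sketch, not Joda Time's actual code; writeReplace is the mirror-image hook on the write side ):
Code:
 import java.io.ObjectStreamException;
 import java.io.Serializable;
 
 // Generic sketch of the readResolve idiom: after deserialization the copy read
 // from the stream is swapped for the local singleton, so any transient machinery
 // belongs to the local JVM.
 public class LocalSingleton implements Serializable {
   private static final LocalSingleton INSTANCE = new LocalSingleton();
 
   public static LocalSingleton getInstance() {
     return INSTANCE;
   }
 
   private LocalSingleton() {
   }
 
   private Object readResolve() throws ObjectStreamException {
     return INSTANCE;
   }
 }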

My question is, can a BeanShell on-load script reassign 'self'?

Alternatively, is there a way for us to tell Terracotta to honor the serialization contract, in other words, to call writeReplace and readResolve if present?

Alternatively, is it possible to create on-save code that would be executed before an object is clustered?
I have run into a similar situation. I have an application that spends a lot of time waiting for back-end services to respond. In the interest of getting more done at once, our engineers have created a few different scenarios where they execute multiple "includes", which in turn call these back-end services, concurrently by submitting them to a thread execution service.
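Roughly this pattern ( the class and method names here are illustrative stand-ins, not our framework's actual code ):
Code:
 import java.util.List;
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 
 // Sketch: run several back-end calls at once instead of serially.
 public class ParallelIncludes {
   private final ExecutorService pool = Executors.newFixedThreadPool(4);
 
   // Each Callable wraps one back-end service call; invokeAll blocks until all
   // of them complete, so the request thread waits once instead of N times.
   public List<Future<String>> callAll(List<Callable<String>> backendCalls)
       throws InterruptedException {
     return pool.invokeAll(backendCalls);
   }
 }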

I am wondering how one might go about this kind of multiprocessing. Is there a way for a thread to suspend its lock on the session and allow another thread or threads to access the session?

Also, if the session is locked by each request that accesses it and only unlocked after the request completes, how can the system handle servicing multiple requests at once? Our application uses AJAX techniques heavily and its performance depends on our ability to have multiple requests running for the same session simultaneously.

hhuynh wrote:
There might be more logging in DSO client log. Could you check there?

 


I did check; this process did not write anything to the client logs. Thanks.
I commented out this line and the error went away. Since this TIM does not define any additional boot jar classes, the boot jar should still be usable. The TIM loads fine when used in the client.

Code:
        <module group-id="com.ihg.dec.framework" name="tim-endeavor" version="4.5.0-SNAPSHOT"/>
 
I am getting an error while creating the boot jar. Without the -v option it just gives the last line. Is there any way to get more diagnostic information out of this process?

Even if there isn't, what are the likely causes for this?

Code:
 PSimerD@PSIMERD-WXP /cygdrive/c/resin/endeavor/cro
 $ /cygdrive/c/terracotta/terracotta-2.6-stable4/bin/make-boot-jar.sh -o lib-tc/tc-boot.jar -f tc-conf.xml -v -v
 2008-05-23 13:58:51,657 INFO - Terracotta 2.6-stable4, as of 20080430-210402 (Revision 8422 by cruise@WXPMO0 from 2.6)
 2008-05-23 13:58:52,029 INFO - Attempting to load configuration from the file at 'c:\resin\endeavor\cro\tc-conf.xml'...
 2008-05-23 13:58:52,044 INFO - Successfully loaded configuration from the file at 'c:\resin\endeavor\cro\tc-conf.xml'. Config is:
 
 <?xml version="1.0" encoding="UTF-8"?>
 <tc:tc-config xsi:schemaLocation="http://www.terracotta.org/schema/terracotta-4.xsd" xmlns:tc="http://www.terracotta.org/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
   <!--Tell DSO where the Terracotta server can be found; 
        See the Terracotta DSO Guide for additional information.-->
   <servers>
     <server host="%i" name="tc-server">
       <data>data/server-data</data>
       <logs>logs/server-logs</logs>
     </server>
     <update-check>
       <enabled>false</enabled>
     </update-check>
   </servers>
   <!--Tell DSO where to put the generated client logs
        See the Terracotta DSO Guide for additional information.-->
   <clients>
     <logs>logs/%(tc.client.log.name)</logs>
     <modules>
       <repository>d:/Documents and Settings/psimerd/.m2/repository</repository>
       <module name="clustered-resin-3.1.2" version="1.0.1-SNAPSHOT"/>
       <module name="clustered-cglib-2.1.3" version="2.6.0-SNAPSHOT" />
       <module name="clustered-hibernate-3.2.5" version="2.6.0-SNAPSHOT" />
       <module name="clustered-commons-collections-3.1" version="2.6.0-SNAPSHOT" />
 
       <module group-id="com.ihg.dec.framework.tim" name="tim-jsfri-1.2_06" version="1.0.0-SNAPSHOT" />
       <module group-id="com.ihg.dec.framework" name="tim-endeavor" version="4.5.0-SNAPSHOT"/>
     </modules>
     <dso>
       <debugging>
         <instrumentation-logging>
           <class>true</class>
         </instrumentation-logging>
       </debugging>
     </dso>
   </clients>
   <application>
     <dso> 
        <additional-boot-jar-classes>
         <include>java.util.Locale</include>
         <include>java.util.TimeZone</include>
         <include>sun.util.calendar.ZoneInfo</include>
        </additional-boot-jar-classes>
       
       <instrumented-classes>
         <!--Include all classes for DSO instrumentation-->
         <include>
           <class-expression>org.apache.commons.collections..*</class-expression>
           <honor-transient>true</honor-transient>
         </include>
         <include>
           <class-expression>com.ihg.dec.apps.contactcenter.model..*</class-expression>
           <honor-transient>true</honor-transient>
         </include>
         <include>
           <class-expression>com.ihg.dec.apps.contactcenter.config.ContactCenterConfig</class-expression>
           <honor-transient>true</honor-transient>
         </include>
         <include>
           <class-expression>com.ihg.dec.apps.contactcenter.dto..*</class-expression>
           <honor-transient>true</honor-transient>
         </include>
         <include>
           <class-expression>com.ihg.dec.apps.contactcenter..*Exception</class-expression>
           <honor-transient>true</honor-transient>
         </include>
         <include>
           <class-expression>com.ihg.dec.apps.contactcenter.framework.dynabean..*</class-expression>
           <honor-transient>true</honor-transient>
         </include>
         <include>
           <class-expression>com.ihg.dec.apps.contactcenter.web.jsf..*</class-expression>
         </include>
         <include>
           <class-expression>org.apache.commons.beanutils.BasicDynaClass</class-expression>
         </include>
         <include>
           <class-expression>org.apache.commons.beanutils.DynaProperty</class-expression>
         </include>
       </instrumented-classes>
 
       <!--Tell DSO which applications in your web container is using DSO-->
       <web-applications>
         <web-application>favoriteHotels</web-application>
         <web-application>contactcenter</web-application>
       </web-applications>
     </dso>
   </application>
 </tc:tc-config>
 
 
 2008-05-23 13:58:52,401 INFO - Configuration loaded from the file at 'c:\resin\endeavor\cro\tc-conf.xml'.
 2008-05-23 13:58:54,153 FATAL - BundleActivator start failed
 

gbevin wrote:
Code:
Bundle bundle = getExportedBundle(context, "org.foo.tim-foo");
 


In the call to getExportedBundle(), what is "org.foo.tim-foo"?
OK, I have looked into this a little further and I think I have an idea. The LRUMap simply overrides the removeEldestEntry method, which, BTW, is intended to be overridden, so I think some effort toward making this class extendable despite being logically managed would be a good idea.
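
For reference, the idiom LRUMap follows is basically this ( a generic sketch, not the actual com.sun.faces.util.LRUMap source ):
Code:
 import java.util.LinkedHashMap;
 import java.util.Map;
 
 // Sketch of the LinkedHashMap-based LRU idiom: access order plus an eviction
 // rule expressed by overriding removeEldestEntry.
 public class SimpleLruMap<K, V> extends LinkedHashMap<K, V> {
   private final int maxEntries;
 
   public SimpleLruMap(int maxEntries) {
     super(16, 0.75f, true); // access-order, so the eldest entry is the least recently used
     this.maxEntries = maxEntries;
   }
 
   protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
     return size() > maxEntries; // evict once the map grows past its cap
   }
 }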

Is it possible to replace the implementation of this class with a class in a TIM? If the container has child-first ClassLoader implementations, then it may not be possible to do it just by putting the replacement class first on the class path. Can a Class Adapter completely replace a class implementation? Then I could just write a class that does not extend LinkedHashMap, just implements Map, and exhibits the same behavior.
Yes, I have seen that. My problem is that this is being put into the session directly as a session attribute. As such, there is no field to mark transient. Also, I found some information that support for extending logically managed classes was added in 2.2; since I am using 2.6, why would this not work? As this is an implementation of Map, can it too be logically managed? Is there any way for me to take control of this in a TIM?
Has anyone gotten Sun's JSF implementation to work with Terracotta?

I have run into a problem where I get a TCNonPortableObjectError with the following details:

Thread : hmux-127.0.0.1:6802-0
JVM ID : VM(11)
Logically-managed class name : java.util.HashMap
Logical method name : put(Object,Object)
Unshareable class : com.sun.faces.util.LRUMap
Logically-managed superclass names: java.util.LinkedHashMap


It looks like it is complaining when Terracotta sessions tries to put an attribute that the JSF impl is putting in the session into its session-backing hash map.

So I tried removing the class from being clustered and got a TCNonPortableObjectError with this:


Thread : hmux-127.0.0.1:6802-0
JVM ID : VM(0)
Logically-managed class name: java.util.HashMap
Logical method name : put(Object,Object)
Non-included class : com.sun.faces.util.LRUMap

Can I make LRUMap logically managed?
I am writing a TIM for a framework that we have developed in house. It uses some SoftReferences and creates proxies using ASM that need to be clustered, so I am adding names to the ClassLoaders and replacing the SoftReferences with an alternate implementation.

However, I think there is an issue with the logic that picks where to apply the code changes in my ClassAdapters. I tried using commons-logging ( which appears to be used by Terracotta ) to output some diagnostic information, but I got nothing. Is there a commons-logging config that I need to replace?

Any pointers on how I can get debug messages into the Terracotta client logs would be great.

Thanks
OK, I got Terracotta tied into Resin. There are some interesting Resin configuration mechanisms I was able to use, so I did not have to add the filter to my webapp's web.xml. I will write up a more detailed description in a separate post.

The problem I am having now is that, as teck predicted, I got an error about a class loader name. I have the Resin source, so I looked at the class loader. So the question is, how is a class loader given a name? java.lang.ClassLoader does not have any name attribute or any sort of identifying attribute that I can discern.

Thanks for any help y'all can give.
 