Thanks! That problem is solved. Thank you again!
|
|
|
Hi,
I am facing a weird problem.
Please tell me where to place ehcache.xml so that the web application can find it.
I always get this error when I try to access ehcache.xml:
java.lang.NullPointerException
at com.accenture.cloud.client.NRTSearching.searchDocEcache(NRTSearching.java:184)
at com.accenture.test.MyPortlet.serveResource(MyPortlet.java:75)
at com.liferay.portlet.FilterChainImpl.doFilter(FilterChainImpl.java:119)
at com.liferay.portal.kernel.portlet.PortletFilterUtil.doFilter(PortletFilterUtil.java:71)
at com.liferay.portal.kernel.servlet.PortletServlet.service(PortletServlet.java:92)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:646)
at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:436)
at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:374)
at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:302)
at com.liferay.portlet.InvokerPortletImpl.invoke(InvokerPortletImpl.java:635)
at com.liferay.portlet.InvokerPortletImpl.invokeResource(InvokerPortletImpl.java:747)
at com.liferay.portlet.InvokerPortletImpl.serveResource(InvokerPortletImpl.java:504)
at com.liferay.portal.action.LayoutAction.processPortletRequest(LayoutAction.java:871)
at com.liferay.portal.action.LayoutAction.processLayout(LayoutAction.java:613)
at com.liferay.portal.action.LayoutAction.execute(LayoutAction.java:232)
at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431)
at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236)
at com.liferay.portal.struts.PortalRequestProcessor.process(PortalRequestProcessor.java:153)
at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1196)
at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:432)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
at com.liferay.portal.servlet.MainServlet.callParentService(MainServlet.java:508)
at com.liferay.portal.servlet.MainServlet.service(MainServlet.java:485)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.servlet.filters.strip.StripFilter.processFilter(StripFilter.java:309)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.servlet.filters.gzip.GZipFilter.processFilter(GZipFilter.java:121)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.servlet.filters.secure.SecureFilter.processFilter(SecureFilter.java:182)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.servlet.filters.autologin.AutoLoginFilter.processFilter(AutoLoginFilter.java:254)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:646)
at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:436)
at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:374)
at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:302)
at com.liferay.portal.servlet.FriendlyURLServlet.service(FriendlyURLServlet.java:134)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.servlet.filters.strip.StripFilter.processFilter(StripFilter.java:309)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.servlet.filters.gzip.GZipFilter.processFilter(GZipFilter.java:110)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.servlet.filters.secure.SecureFilter.processFilter(SecureFilter.java:182)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.servlet.filters.i18n.I18nFilter.processFilter(I18nFilter.java:222)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.servlet.filters.cache.CacheFilter.processFilter(CacheFilter.java:442)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.servlet.filters.etag.ETagFilter.processFilter(ETagFilter.java:45)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.servlet.filters.autologin.AutoLoginFilter.processFilter(AutoLoginFilter.java:254)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.servlet.filters.sso.ntlm.NtlmPostFilter.processFilter(NtlmPostFilter.java:81)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.sharepoint.SharepointFilter.processFilter(SharepointFilter.java:179)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
at com.liferay.portal.servlet.filters.virtualhost.VirtualHostFilter.processFilter(VirtualHostFilter.java:240)
at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196)
Please help!
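For reference, here is roughly how the configuration is loaded (a minimal sketch, not my exact code; I assume ehcache.xml should sit at the classpath root, which in a WAR is WEB-INF/classes, and the class and cache names below are placeholders):
---------------------------------------------------------------
import java.net.URL;

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;

public class EhcacheConfigCheck {

    public static CacheManager loadManager() {
        // Resolve ehcache.xml from the classpath root (WEB-INF/classes in a WAR,
        // or inside a JAR under WEB-INF/lib).
        URL configUrl = EhcacheConfigCheck.class.getResource("/ehcache.xml");
        if (configUrl == null) {
            // Failing fast here is clearer than the later NullPointerException.
            throw new IllegalStateException("ehcache.xml not found on the classpath");
        }
        return CacheManager.create(configUrl);
    }

    public static void main(String[] args) {
        CacheManager manager = loadManager();
        Cache cache = manager.getCache("RTLogCache"); // cache name is a placeholder
        System.out.println("Loaded cache: " + cache);
        manager.shutdown();
    }
}
---------------------------------------------------------------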
|
|
|
I wish to set up Ehcache and the Terracotta server to work on one machine (a single instance) for the time being.
My tc-config.xml looks like this:
---------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8"?>
<!--
All content copyright Terracotta, Inc., unless otherwise indicated. All rights reserved.
-->
<tc:tc-config xmlns:tc="http://www.terracotta.org/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.terracotta.org/schema/terracotta-6.xsd">
<!-- Tell DSO where the Terracotta server can be found;
See
- Terracotta Configuration Guide and Reference
- About Terracotta Configuration Files
for additional information. -->
<tc-properties>
<property name="productkey.path" value="/mnt/terracotta-ee-3.6.1/terracotta-license.key" />
</tc-properties>
<servers>
<server host="ip-10-40-222-77.ec2.internal" name="Server1">
<data>/mnt/terracotta/server-data</data>
<!-- <l2-group-port>9530</l2-group-port>-->
<!--<dso>
<persistence>
<mode>permanent-store</mode>
</persistence>
</dso>
-->
</server>
<!-- <server host="ip-10-40-222-77.ec2.internal" name="Server2">
<data>/mnt/terracotta/server-data</data>
<l2-group-port>9530</l2-group-port>
<dso>
<persistence>
<mode>permanent-store</mode>
</persistence>
</dso>
</server>
<server host="ip-10-40-222-77.ec2.internal" name="Server3">
<data>/mnt/terracotta/server-data</data>
<l2-group-port>9530</l2-group-port>
<dso>
<persistence>
<mode>permanent-store</mode>
</persistence>
</dso>
</server>
<server host="ip-10-40-222-77.ec2.internal" name="Server4">
<data>/mnt/terracotta/server-data</data>
<l2-group-port>9530</l2-group-port>
<dso>
<persistence>
<mode>permanent-store</mode>
</persistence>
</dso>
</server>
-->
<!-- <mirror-groups>
<mirror-group group-name="groupA">
<members>
<member>Server1</member>
<member>Server3</member>
</members>
</mirror-group>
<mirror-group group-name="groupB">
<members>
<member>Server2</member>
<member>Server4</member>
</members>
</mirror-group>
</mirror-groups>
-->
<!--<ha>
<mode>networked-active-passive</mode>
<networked-active-passive>
<election-time>5</election-time>
</networked-active-passive>
</ha>
-->
<!-- <server host="%i" name="sample">
<dso-port>9510</dso-port>
<jmx-port>9520</jmx-port>
<data>terracotta/demo-server/server-data</data>
<logs>terracotta/demo-server/server-logs</logs>
<statistics>terracotta/demo-server/server-statistics</statistics>
</server> -->
</servers>
</tc:tc-config>
---------------------------------
And the ehcache.xml on the Flume classpath has the following configuration:
--------------------------------------------
<?xml version="1.0" encoding="UTF-8"?>
<ehcache name="RTLogCacheManager">
<defaultCache
maxElementsInMemory="10000"
eternal="false"
timeToIdleSeconds="500"
timeToLiveSeconds="300000"
overflowToDisk="true"
diskSpoolBufferSizeMB="30"
maxElementsOnDisk="10000000"
diskPersistent="false"
diskExpiryThreadIntervalSeconds="500"
memoryStoreEvictionPolicy="LRU"/>
<cache name="RTLogCache"
maxElementsInMemory="100"
eternal="false"
timeToIdleSeconds="600"
timeToLiveSeconds="600"
overflowToDisk="false"
diskPersistent="false" maxElementsOnDisk="10000000">
<cacheWriter writeMode="write-behind"
minWriteDelay="1"
maxWriteDelay="1"
rateLimitPerSecond="5"
writeCoalescing="true"
writeBatching="true"
writeBatchSize="1"
retryAttempts="2"
retryAttemptDelaySeconds="1">
<cacheWriterFactory class="com.accenture.logpro.HBaseWriterLoaderFactory.HbaseWriterFactory"/>
</cacheWriter>
<terracotta/>
</cache>
<!-- distributed ehcache using terracotta -->
<!-- <terracottaConfig url="ip-10-40-222-77.ec2.internal:9510"/> -->
<terracottaConfig url="localhost:9510"/>
-----------------------------------------
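For reference, since the cache above is configured with a write-behind cacheWriter, I feed it through the writer API; a minimal sketch (the key and value here are placeholders):
--------------------------------------------
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class WriteBehindFeed {
    public static void main(String[] args) {
        // Picks up the ehcache.xml shown above from the classpath.
        CacheManager manager = CacheManager.create();
        Cache cache = manager.getCache("RTLogCache");

        // putWithWriter() queues the element for the configured write-behind
        // CacheWriter (here, the HBase writer factory); a plain put() would
        // store the element without triggering the writer.
        cache.putWithWriter(new Element("row-1", "sample payload"));

        manager.shutdown();
    }
}
--------------------------------------------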
When I start Terracotta, I first get a success message:
-------------------------------------------------
[root@ip-10-40-222-77 bin]# ./start-tc-server.sh
2012-12-15 22:37:39,316 INFO - Terracotta Enterprise 3.6.1, as of 20120125-122228 (Revision 15328-19619 by cruise@su10vmo104 from 3.6.1)
2012-12-15 22:37:39,923 INFO - Successfully loaded base configuration from file at '/mnt/terracotta-ee-3.6.1/bin/tc-config.xml'.
2012-12-15 22:37:39,971 INFO - Log file: '/mnt/terracotta-ee-3.6.1/bin/logs/terracotta-server.log'.
2012-12-15 22:37:40,024 INFO - Terracotta license loaded from /mnt/terracotta-ee-3.6.1/terracotta-license.key
Capabilities: DCV2, TMC, authentication, ehcache, ehcache offheap, operator console, quartz, quartz manager, quartz where, roots, search, security, server array offheap, server striping, sessions
Date of Issue: 2012-12-07
Edition: FX
Expiration Date: 2013-02-28
License Number: Download Trail Licence_Accept
License Type: Trial
Licensee: Download Trail Licence_Accept
Max Client Count: 50
Product: Enterprise Suite
ehcache.maxOffHeap: 250G
terracotta.serverArray.maxOffHeap: 250G
2012-12-15 22:37:42,539 INFO - Available Max Runtime Memory: 490MB
2012-12-15 22:37:44,924 INFO - JMX Server started. Available at URL[service:jmx:jmxmp://0.0.0.0:9520]
2012-12-15 22:37:50,998 INFO - Becoming State[ ACTIVE-COORDINATOR ]
2012-12-15 22:37:51,030 INFO - Terracotta Server instance has started up as ACTIVE node on 0:0:0:0:0:0:0:0:9510 successfully, and is now ready for work.
------------------------------------------------
Looking at terracotta-client.log, I see only this INFO entry:
-----------------------------------------------------------------
2012-12-15 12:53:44,797 [ClientLockManager LockGC] INFO com.tc.object.locks.ClientLockManager - ClientID[0]: Lock GC collected 135 garbage locks
-----------------------------------------------------------
Please help. Do any of these logs indicate an error? I could not find any error entries as such.
Do I need to check any other log locations (that I am not aware of)?
Please help!
|
|
|
No, sir. I am using Enterprise Ehcache as the L1 cache for Terracotta, on the 30-day trial license. Please help.
|
|
|
Hi,
I have 4 servers: two active and two passive. Do I have to set up password-less authentication on all the servers in order to have a distributed Ehcache setup?
When I check the logs for Flume, for which Terracotta is a client, I see the following INFO entries at the end (shown below). Can you help?
[root@ip-10-40-222-77 ~]# tail -f /usr/lib/flume/logs-10.40.222.77/terracotta-client.log
2012-12-15 12:50:45,039 [WorkerThread(client_coordination_stage, 0)] INFO com.tc.management.remote.protocol.terracotta.TunnelingEventHandler - Client JMX server ready; sending notification to L2 server
2012-12-15 12:50:45,303 [logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-19] INFO org.terracotta.api.org.terracotta.modules.ehcache.store.TerracottaClusteredInstanceFactory - Ehcache Core version 2.5.1 was built on 20120125-1123, at revision 5203, with jdk 1.6.0_30 by cruise@su10mo129
2012-12-15 12:50:48,413 [logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-19] INFO org.terracotta.cache.logging.TCLoggerConfigChangeListener - Changed cache [Distributed Cache] maxTTLSeconds from 0 to 600
2012-12-15 12:50:48,414 [logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-19] INFO org.terracotta.cache.logging.TCLoggerConfigChangeListener - Changed cache [Distributed Cache] maxTTISeconds from 0 to 600
2012-12-15 12:50:48,414 [logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-19] INFO org.terracotta.cache.logging.TCLoggerConfigChangeListener - Changed cache [RTLogCacheManager_RTLogCache_0] name from Distributed Cache to RTLogCacheManager_RTLogCache_0
2012-12-15 12:50:48,421 [logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-19] INFO org.terracotta.cache.logging.TCLoggerConfigChangeListener - Changed cache [RTLogCacheManager_RTLogCache_0] capacityEvictionPolicyDataFactory from org.terracotta.cache.evictor.LFUCapacityEvictionPolicyData$Factory@3ffaab5 to org.terracotta.cache.evictor.LRUCapacityEvictionPolicyData$Factory@6205320
2012-12-15 12:50:48,421 [logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-19] INFO org.terracotta.cache.logging.TCLoggerConfigChangeListener - Changed cache [RTLogCacheManager_RTLogCache_0] targetMaxInMemoryCount from 0 to 100
2012-12-15 12:50:48,427 [logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-19] INFO org.terracotta.cache.logging.TCLoggerConfigChangeListener - Changed cache [RTLogCacheManager_RTLogCache_0] targetMaxTotalCount from 0 to 10000000
2012-12-15 12:50:49,293 [logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-19] INFO org.terracotta.api.org.terracotta.modules.ehcache.store.ClusteredStore - Clustered Store [cache=RTLogCache] with checkContainsKeyOnPut: false
2012-12-15 12:50:49,293 [logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-19] INFO org.terracotta.api.org.terracotta.modules.ehcache.store.ClusteredStore - Clustered Store [cache=RTLogCache] with storageStrategy: DCV2
2012-12-15 12:53:44,797 [ClientLockManager LockGC] INFO com.tc.object.locks.ClientLockManager - ClientID[0]: Lock GC collected 135 garbage locks
|
|
|
======== This is a new post ========
In Ehcache, I am getting a weird error:
"java.lang.RuntimeException: java.lang.AssertionError: Looking up in the wrong Remote Manager :".
I am not sure why this error occurs. Flume is using Ehcache as a "sink" to feed data, with Ehcache acting as the Terracotta L1 cache.
Where do I need to check? Please help.
Shouvanik
|
|
|
The error "2012-12-12 03:33:00,465 WARN net.sf.ehcache.Cache: Performance may degrade and server disks could run out of space!
The distributed cache RTLogCache does not have maxElementsOnDisk set. Failing to set maxElementsOnDisk could mean no eviction of its elements from the Terracotta Server Array disk store. To avoid this, set maxElementsOnDisk to a non-zero value. " is resolved.
Now, "java.lang.RuntimeException: java.lang.AssertionError: Looking up in the wrong Remote Manager ". I don't know what is the problem.
|
|
|
Now I am getting the following messages in the Flume node log:
2012-12-12 03:33:00,465 WARN net.sf.ehcache.Cache: Performance may degrade and server disks could run out of space!
The distributed cache RTLogCache does not have maxElementsOnDisk set. Failing to set maxElementsOnDisk could mean no eviction of its elements from the Terracotta Server Array disk store. To avoid this, set maxElementsOnDisk to a non-zero value.
2012-12-12 03:33:00,623 ERROR com.cloudera.flume.core.connector.DirectDriver: Closing down due to exception on open calls
2012-12-12 03:33:00,629 INFO com.cloudera.flume.core.connector.DirectDriver: Connector logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-19 exited with error: java.lang.RuntimeException: java.lang.AssertionError: Looking up in the wrong Remote Manager : GroupID[0] id : ObjectID=[1:171000] depth : 500 parent : ObjectID=[-1]
net.sf.ehcache.CacheException: java.lang.RuntimeException: java.lang.AssertionError: Looking up in the wrong Remote Manager : GroupID[0] id : ObjectID=[1:171000] depth : 500 parent : ObjectID=[-1]
at net.sf.ehcache.CacheManager.init(CacheManager.java:366)
at net.sf.ehcache.CacheManager.<init>(CacheManager.java:242)
at net.sf.ehcache.CacheManager.create(CacheManager.java:853)
at net.sf.ehcache.CacheManager.create(CacheManager.java:797)
at ehcache.RTLogCache.<init>(Unknown Source)
at ehcache.RTLogCache.getInstance(Unknown Source)
at com.EHCacheSink.open(Unknown Source)
at com.cloudera.flume.core.connector.DirectDriver$PumperThread.run(DirectDriver.java:88)
Caused by: java.lang.RuntimeException: java.lang.AssertionError: Looking up in the wrong Remote Manager : GroupID[0] id : ObjectID=[1:171000] depth : 500 parent : ObjectID=[-1]
at com.tc.object.ClientObjectManagerImpl.lookup(ClientObjectManagerImpl.java:594)
at com.tc.object.ClientObjectManagerImpl.lookupObject(ClientObjectManagerImpl.java:463)
at com.tc.object.ClientObjectManagerImpl.lookupRootOptionallyCreateOrReplace(ClientObjectManagerImpl.java:946)
at com.tc.object.ClientObjectManagerImpl.lookupRoot(ClientObjectManagerImpl.java:656)
at com.tc.object.bytecode.ManagerImpl.lookupRoot(ManagerImpl.java:571)
at com.tc.object.bytecode.ManagerUtil.lookupRoot(ManagerUtil.java:377)
at org.terracotta.api.Terracotta.lookupOrCreateRoot(Terracotta.java:35)
at org.terracotta.modules.ehcache.store.TerracottaClusteredInstanceFactory.getOrCreateStoreInternal(TerracottaClusteredInstanceFactory.java:245)
at org.terracotta.modules.ehcache.store.TerracottaClusteredInstanceFactory.getOrCreateStore(TerracottaClusteredInstanceFactory.java:229)
at org.terracotta.modules.ehcache.store.TerracottaClusteredInstanceFactory.createStore(TerracottaClusteredInstanceFactory.java:123)
at net.sf.ehcache.terracotta.StandaloneTerracottaClusteredInstanceFactory.createStore(StandaloneTerracottaClusteredInstanceFactory.java:67)
at net.sf.ehcache.terracotta.ClusteredInstanceFactoryWrapper.createStore(ClusteredInstanceFactoryWrapper.java:93)
at net.sf.ehcache.CacheManager.createTerracottaStore(CacheManager.java:506)
at net.sf.ehcache.Cache.initialise(Cache.java:1068)
at net.sf.ehcache.CacheManager.initializeEhcache(CacheManager.java:1125)
at net.sf.ehcache.CacheManager.addCacheNoCheck(CacheManager.java:1156)
at net.sf.ehcache.CacheManager.addConfiguredCaches(CacheManager.java:705)
at net.sf.ehcache.CacheManager.doInit(CacheManager.java:423)
at net.sf.ehcache.CacheManager.init(CacheManager.java:357)
... 7 more
Caused by: java.lang.AssertionError: Looking up in the wrong Remote Manager : GroupID[0] id : ObjectID=[1:171000] depth : 500 parent : ObjectID=[-1]
at com.tc.object.RemoteObjectManagerImpl.basicRetrieve(RemoteObjectManagerImpl.java:220)
at com.tc.object.RemoteObjectManagerImpl.retrieve(RemoteObjectManagerImpl.java:207)
at com.tc.object.ClientObjectManagerImpl.lookup(ClientObjectManagerImpl.java:561)
... 25 more
2012-12-12 03:33:00,630 INFO com.cloudera.flume.collector.CollectorSource: closed
2012-12-12 03:33:00,630 INFO com.cloudera.flume.handlers.thrift.ThriftEventSource: Closed server on port 35853...
2012-12-12 03:33:00,632 INFO com.cloudera.flume.handlers.thrift.ThriftEventSource: Queue still has 0 elements ...
2012-12-12 03:33:00,632 ERROR com.cloudera.flume.core.connector.DirectDriver: Exiting driver logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-19 in error state CollectorSource | EHCacheSink because java.lang.RuntimeException: java.lang.AssertionError: Looking up in the wrong Remote Manager : GroupID[0] id : ObjectID=[1:171000] depth : 500 parent : ObjectID=[-1]
Can you please help?
|
|
|
I was wondering: what exactly is the Terracotta client here?
I have Flume, which takes a text file as its source and sends the data to Ehcache.
So, is the TC client the Ehcache instance that is co-located with Flume on a single node?
Then what is the classpath for the TC client? I had included the license key on the Flume classpath via flume-env.sh, but I am still getting the same error!
What is the classpath that you mentioned supposed to be? Please help.
|
|
|
Hi,
I am having a weird problem with Terracotta. The server startup log says:
"2012-12-11 08:11:05,435 INFO - Terracotta Enterprise 3.6.1, as of 20120125-122228 (Revision 15328-19619 by cruise@su10vmo104 from 3.6.1)
2012-12-11 08:11:06,040 INFO - Successfully loaded base configuration from file at '/mnt/terracotta-ee-3.6.1/bin/tc-config.xml'.
2012-12-11 08:11:06,092 INFO - Log file: '/mnt/terracotta-ee-3.6.1/bin/logs/terracotta-server.log'.
2012-12-11 08:11:06,145 INFO - Terracotta license loaded from /mnt/terracotta-ee-3.6.1/terracotta-license.key
Capabilities: DCV2, TMC, authentication, ehcache, ehcache offheap, operator console, quartz, quartz manager, quartz where, roots, search, security, server array offheap, server striping, sessions
Date of Issue: 2012-12-07
Edition: FX
Expiration Date: 2013-02-28
License Number: Download Trail Licence_Accept
License Type: Trial
Licensee: Download Trail Licence_Accept
Max Client Count: 50
Product: Enterprise Suite
ehcache.maxOffHeap: 250G
terracotta.serverArray.maxOffHeap: 250G
2012-12-11 08:11:08,572 INFO - Available Max Runtime Memory: 490MB
2012-12-11 08:11:11,032 INFO - JMX Server started. Available at URL[service:jmx:jmxmp://0.0.0.0:9520]
2012-12-11 08:11:18,073 INFO - Becoming State[ ACTIVE-COORDINATOR ]
2012-12-11 08:11:18,100 INFO - Terracotta Server instance has started up as ACTIVE node on 0:0:0:0:0:0:0:0:9510 successfully, and is now ready for work."
But when I look at the logs for Flume (which is configured to use Terracotta-backed Ehcache) at "/usr/lib/flume/logs-10.40.222.77/terracotta-client.log",
I get this error:
"2012-12-11 07:31:13,909 [logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-19] ERROR com.terracottatech.console - Terracotta license key is required for Enterprise capabilities. Please place terracotta-license.key in Terracotta installed directory or in resource path. You could also specify it through system property -Dcom.tc.productkey.path=/path/to/key
2012-12-11 07:31:13,914 [logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-19] ERROR com.tc.license.LicenseManager - License key not found org.terracotta.license.LicenseException"
And when I see the logs at "/var/log/flume/flume-flume-node-ip-10-40-222-77.out", I get the error:
"2012-12-11 07:31:13,909 ERROR - Terracotta license key is required for Enterprise capabilities. Please place terracotta-license.key in Terracotta installed directory or in resource path. You could also specify it through system property -Dcom.tc.productkey.path=/path/to/key".
I downloaded the correct key and placed it in the /mnt/terracotta-ee-xxx/ directory, as instructed by the Terracotta site.
Please help. I am not able to find a solution!
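For reference, the error message itself names the system property com.tc.productkey.path, so the Flume JVM (the Terracotta client) can also be pointed at the key explicitly. A minimal sketch of what I am trying; the class name and the call placement (before the first CacheManager is created) are my own choice:
--------------------------------------------
public final class TerracottaLicenseBootstrap {

    private TerracottaLicenseBootstrap() {}

    // Must run in the client (Flume) JVM before any Ehcache/Terracotta
    // class bootstraps, e.g. at the top of the sink's open() method.
    public static void apply() {
        System.setProperty("com.tc.productkey.path",
                "/mnt/terracotta-ee-3.6.1/terracotta-license.key");
    }
}
--------------------------------------------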
|
|
|
Is there some server that I need to start? I don't get the idea of how to configure Ehcache. I have two Unix boxes, and I have to configure Ehcache to be available on these two machines in a cluster (that is, configure Ehcache in distributed mode).
Also, keep in mind that I am not using the Terracotta server as the L2 cache; I am replacing it with HBase. How do I do that?
|
|
|
I am a newbie to Ehcache. I want to read a file and insert its data into Ehcache. How do I do that?
Shouvanik
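A minimal sketch of what I have in mind (the file path and cache name are placeholders):
--------------------------------------------
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class FileToEhcache {
    public static void main(String[] args) throws IOException {
        CacheManager manager = CacheManager.create(); // reads ehcache.xml from the classpath
        Cache cache = manager.getCache("RTLogCache");

        BufferedReader reader = new BufferedReader(new FileReader("input.txt"));
        try {
            String line;
            long lineNumber = 0;
            while ((line = reader.readLine()) != null) {
                // Use the line number as a unique key; any unique key works.
                cache.put(new Element(lineNumber++, line));
            }
        } finally {
            reader.close();
        }

        System.out.println("Cached " + cache.getSize() + " entries");
        manager.shutdown();
    }
}
--------------------------------------------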