|
How do I get the time before this time?
|
|
|
A little slice of code is supposed to return the previous time a trigger ran, but it doesn't: no matter how the job is triggered, it always returns the same date as the fire time. Here is the code:
Code:
Date startDate = context.getTrigger().getPreviousFireTime();
Date endDate = context.getFireTime();
log("startDate=${DATE_PARAM_SDF.format(startDate)} ");
log("endDate=${DATE_PARAM_SDF.format(endDate)}");
Here is the output:
Code:
[DEBUG] (StandardJob.java:253 - 2012-03-14 15:42:00.172) - startDate=2012-03-14 15:42:00
[DEBUG] (StandardJob.java:253 - 2012-03-14 15:42:00.178) - endDate=2012-03-14 15:42:00
In case you are wondering about the dollar/curly bracket syntax, this job is written in Groovy, which allows for tokenized strings.
Additional info
----------------
Quartz version: 1.8.4
Job store: jdbc (oracle delegate)
Here are the Quartz properties:
Code:
org.terracotta.quartz.skipUpdateCheck=true
org.quartz.scheduler.instanceId=ClusterName.instancename
org.quartz.scheduler.instanceIdGenerator.class=org.quartz.simpl.SimpleInstanceIdGenerator
org.quartz.scheduler.instanceName=ClusterName.instancename
org.quartz.scheduler.rmi.export = false
org.quartz.scheduler.rmi.proxy = false
org.quartz.scheduler.wrapJobExecutionInUserTransaction = false
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 10
org.quartz.threadPool.threadPriority = 5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread = true
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.dataSource = myDS
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.dataSource.myDS.driver=oracle.jdbc.driver.OracleDriver
org.quartz.dataSource.myDS.user = test
org.quartz.dataSource.myDS.password = test
org.quartz.dataSource.myDS.maxConnections = 20
org.quartz.dataSource.myDS.testOnBorrow = false
org.quartz.dataSource.myDS.validationQuery = select 0 from dual
org.quartz.dataSource.myDS.minIdleConnections = 5
org.quartz.dataSource.myDS.validationQueryTimeout = 60000
org.quartz.dataSource.myDS.testWhileIdle = true
org.quartz.dataSource.myDS.timeBetweenEvictionRunsMillis = 60000
org.quartz.dataSource.myDS.numTestsPerEvictionRun = 20
database.minIdleConnections=5
org.quartz.dataSource.myDS.URL=jdbc:oracle:thin:@server:1521:db
org.quartz.dataSource.myDS.server=server
org.quartz.dataSource.myDS.port=1521
org.quartz.dataSource.myDS.sid=db
org.quartz.jobListener.StandardJobListener.class = com.everbank.datamover2.core.jobs.StandardJobExecutionListener
I need help quickly.
|
|
|
I have configured a Quartz instance with a JDBC job store to facilitate clustering. It works pretty well, but after a few weeks of steadily increasing the number of jobs in the cluster, I got a call from a DBA complaining that my app is opening and closing ~10 connections per second. He told me this is the statement being run several times per second:
Code:
SELECT *
FROM QRTZ_LOCKS
WHERE LOCK_NAME = :1 FOR UPDATE
He asked me why we aren't pooling the connections, and I told him that I thought we were. When I looked at the Quartz config, we are in fact configuring the JDBC datasource as follows (which looks to me like a pool):
Code:
org.terracotta.quartz.skipUpdateCheck=true
# Generates a unique instance ID
org.quartz.scheduler.instanceId=cluster1.node1
#Naming convention: ClusterName.SchedulerName
org.quartz.scheduler.instanceName=cluster1.node1
org.quartz.scheduler.rmi.export = false
org.quartz.scheduler.rmi.proxy = false
org.quartz.scheduler.wrapJobExecutionInUserTransaction = false
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 10
org.quartz.threadPool.threadPriority = 5
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.dataSource = myDS
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
#datasource stuff
org.quartz.dataSource.myDS.driver=oracle.jdbc.driver.OracleDriver
org.quartz.dataSource.myDS.user = myapp
org.quartz.dataSource.myDS.password = password
org.quartz.dataSource.myDS.maxConnections = 5
org.quartz.dataSource.myDS.URL=jdbc:oracle:thin:@192.168.1.1:1521:qrtz
org.quartz.dataSource.myDS.server=192.168.1.1
org.quartz.dataSource.myDS.port=1521
org.quartz.dataSource.myDS.sid=qrtz
org.quartz.jobListener.StandardJobListener.class = myapp.core.jobs.StandardJobExecutionListener
#Plugins ...
#org.quartz.plugin.triggHistory.class = org.quartz.plugins.history.LoggingJobHistoryPlugin
#org.quartz.plugin.jobInitializer.class = org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin
#org.quartz.plugin.jobInitializer.fileNames = quartz_data.xml
#org.quartz.plugin.jobInitializer.failOnFileNotFound = true
#org.quartz.plugin.jobInitializer.scanInterval = 120
#org.quartz.plugin.jobInitializer.wrapInUserTransaction = false
So here are two half-million-dollar questions:
1) Is there anything special I have to do to force Quartz to pool connections for the JDBC job store?
2) What is the point of org.quartz.dataSource.myDS.maxConnections if Quartz does not pool connections by default?
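If it turns out Quartz is not pooling on its own, one workaround I have been considering (an untested sketch; the JNDI name is made up, so substitute whatever pooled DataSource your container exposes) is to hand connection management to a container-managed pool via the jndiURL datasource property instead of configuring the driver directly:
Code:
# Hypothetical sketch: delegate pooling to the container via JNDI
org.quartz.jobStore.dataSource = myDS
org.quartz.dataSource.myDS.jndiURL = java:comp/env/jdbc/myPooledDS
That way, the open/close traffic the DBA sees would be absorbed by the container's pool.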
|
|
|
I have a cron trigger that is set to rerun a job using the following (equivalent) expressions:
0 0,5,10,15,20,25,30,35,40,45,50,55 * * * ?
0 0/5 * * * ?
It seems to work the first time, and in fact the scheduler reports that the job has run and continues to advance the next fire time, but it never gets into the first line of the job's execute() method.
How do you debug a case where the trigger is updating but the job never runs?
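What I would try first (an untested sketch against the Quartz 1.x API; the class name and listener name are mine): register a global TriggerListener and log the veto path, because if any listener's vetoJobExecution() returns true, the trigger keeps advancing but execute() is never entered.
Code:
import org.quartz.JobExecutionContext;
import org.quartz.Trigger;
import org.quartz.listeners.TriggerListenerSupport;

// Hypothetical debugging listener -- register it globally so it sees
// every trigger: scheduler.addGlobalTriggerListener(new FireDebugListener());
public class FireDebugListener extends TriggerListenerSupport
{
    public String getName() { return "fireDebug"; }

    public void triggerFired( Trigger trigger, JobExecutionContext context )
    {
        getLog().debug("fired: " + trigger.getFullName());
    }

    public boolean vetoJobExecution( Trigger trigger, JobExecutionContext context )
    {
        // Never veto; just record that the veto check was reached.
        getLog().debug("veto check: " + trigger.getFullName());
        return false;
    }
}
If "fired" is logged but execute() still never runs, it may also be worth checking that the job class has a public no-arg constructor, since an instantiation failure can look like this too.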
|
|
|
I have a job execution listener, and I need to add a unique ID to every job that runs. I have an ID generated via a database sequence. Can I set that on the JobDataMap from inside the listener's jobToBeExecuted() method? If I set it there, will it be accessible from the Job's execute() method regardless of the type of job?
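To make the question concrete, this is roughly what I have in mind (untested; the sequence helper is hypothetical). I used JobExecutionContext.put()/get() rather than the JobDataMap, since the context travels with that one execution:
Code:
// In the listener:
public void jobToBeExecuted( JobExecutionContext context )
{
    long id = nextSequenceValue();   // hypothetical helper that hits the DB sequence
    context.put("executionId", Long.valueOf(id));
}

// Later, inside any job's execute(JobExecutionContext jobExecContext):
Long executionId = (Long) jobExecContext.get("executionId");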
|
|
|
I need a globally unique identifier for each job. I assume this is already a feature, but I need someone to tell me how to get this info. In any case, the string would have to be unique not only per job and trigger but also for each triggered run. Here is an example.
I have 2 jobs (MySimpleJob and MySimpleJob2). Each is scheduled by 2 triggers (if you are up on your 2nd grade math, that means 4 triggers total).
If each trigger is scheduled to run once per hour, then we should have 96 *UNIQUE* id strings after 24 hours of running.
Now that I have explained what I need, what API call gives me the UID?
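In case no such API call exists, here is the fallback I would use (plain Java, no Quartz types; the class and method names are my own invention): compose the id from values the JobExecutionContext already exposes, since a given trigger fires a given job at most once per scheduled instant.

```java
// Hypothetical helper -- NOT a Quartz API. Builds a per-run id from the
// job name, trigger name, and scheduled fire time in milliseconds.
public class RunId
{
    public static String of( String jobName, String triggerName, long scheduledFireMillis )
    {
        // The same trigger never fires the same job twice at one
        // scheduled instant, so this tuple is unique per run.
        return jobName + "|" + triggerName + "|" + scheduledFireMillis;
    }
}
```

With my example above (4 triggers, each firing hourly), 24 hours would yield 96 distinct strings.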
|
|
|
In a perfect world (at least for my needs), a non-clustered job could be scheduled from any node for all other nodes. Additionally, though not quite as important, it would be great if any node could schedule a job and designate one or more different nodes to execute it.
I am intrigued by the idea of having a clustered and a non-clustered scheduler in the same VM. Do you have any documentation or examples of what that config might look like? I have only ever had one scheduler per VM; I never even noticed the config that allows more than one.
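For what it's worth, here is roughly what I imagine that config would look like (untested sketch; the file names and instance names are made up): two properties files, one per scheduler, each loaded through its own StdSchedulerFactory.
Code:
# quartz-clustered.properties (sketch)
org.quartz.scheduler.instanceName = ClusteredScheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.isClustered = true

# quartz-local.properties (sketch)
org.quartz.scheduler.instanceName = LocalScheduler
org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore
Each file would be fed to a separate factory via StdSchedulerFactory.initialize(fileName) before calling getScheduler(), so the two schedulers coexist in one VM under different instance names.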
|
|
|
I have an application that runs on multiple Tomcat instances. The Quartz schedulers inside each Tomcat instance are clustered, which means a given scheduled job only runs in one Tomcat (which is what is wanted in most cases). I would like to know if there is a way to mark certain jobs as non-clustered within a clustered Quartz instance. I know it sounds crazy, but for business processes it makes sense that they run only once, regardless of which server runs them. However, you may want other, infrastructure-related jobs to run on all Tomcat instances. How would that be done, given that Quartz is clustered?
|
|
|
I am trying to set up an interface to schedule jobs for Quartz and I have run into a snag: I need getter methods for all the boolean properties, and there are none. I thought I could just create a wrapper object that extends JobDetail and implements the boolean getters by having the body of each getter return isXXXXXX(). However, there does not appear to be an isRequestRecovery() method on JobDetail. What method of JobDetail returns the value set by setRequestsRecovery()?
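If memory serves (worth double-checking against the 1.x javadoc), the getter just doesn't follow bean naming: JobDetail exposes the flag as requestsRecovery(). If that is right, the wrapper body is trivial:
Code:
// Untested sketch: a bean-style getter delegating to the non-bean-style one.
public class JobDetailWrapper extends JobDetail
{
    public boolean isRequestsRecovery()
    {
        return requestsRecovery();
    }
}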
|
|
|
1.7.x still has not been pushed to Maven Central. What is the holdup? Do we have an ETA on when it will be done?
|
|
|
This release was supposed to be published to the Maven Central repo with the group id org.quartz-scheduler. I was told this on the mailing list on Jan 12th. I have been patiently watching repo1 and still see no new Quartz artifact. What is the current status? Is there a problem with the central repo? What is the expected date for the 1.7 release to hit Maven Central?
|
|
|
I have never done this, but I believe the solution is as simple as creating 2 jobs. Your first job would be the processing job. Pardon my pseudocode, but it might look something like this:
Code:
import org.quartz.*;

public class MyProcessingJob implements InterruptableJob
{
    // "continue" is a reserved word in Java, so the flag needs another name.
    // volatile so the interrupt() call (made from another thread) is visible here.
    private volatile boolean keepRunning;

    public void execute( JobExecutionContext jobExecContext ) throws JobExecutionException
    {
        this.keepRunning = true;
        while (this.keepRunning)
        {
            // --- do processing
        }
    }

    public void interrupt() throws UnableToInterruptJobException
    {
        this.keepRunning = false;
    }
}
Then you would simply write another job that you run every day at 6 AM; it would look something like this:
Code:
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class MyManagerJob implements Job
{
    public void execute( JobExecutionContext jobExecContext ) throws JobExecutionException
    {
        try
        {
            Scheduler sched = StdSchedulerFactory.getDefaultScheduler();
            // The name and group you originally used to schedule the processing job:
            sched.interrupt("processingJobName", "processingJobGroup");
        }
        catch( SchedulerException e )
        {
            throw new JobExecutionException(e);
        }
    }
}
Hope it helps
|
|
|
If you have control over your jobs' source files, why not just create an abstract parent class that wraps the execute method on all your jobs and catches the exceptions? Something like:
Code:
import org.quartz.*;

public abstract class ManagedJob implements Job
{
    public void execute( JobExecutionContext jobExecContext ) throws JobExecutionException
    {
        try
        {
            executeRealWork( jobExecContext );
        }
        catch( Exception e )
        {
            // Notify an admin however your app does it:
            ManagementBusiness.emailAdmin( jobExecContext );
        }
    }

    // Subclasses put their real work here.
    public abstract Boolean executeRealWork( JobExecutionContext jobExecContext );
}
And then you just have to change all your jobs to extend this one and rename execute() to executeRealWork() in your individual job classes.
Another alternative is to use an AOP crosscut against all Jobs. That might be a more declarative way of handling it and would likely mean fewer code changes, but I have never done it, so I can't really comment on ease or difficulty.
|
|
|
Well, at least rate my response.
|
|
|
I have found the Scheduler interface available from Quartz to be extremely thorough; I have yet to need info about a job that the scheduler does not have access to. For example, if you want to print the name, run time, and next fire time of your currently executing jobs, you can do the following:
Code:
List<JobExecutionContext> jobs = sched.getCurrentlyExecutingJobs();
for (JobExecutionContext jec : jobs)
{
    LOGGER.debug("Job Name:" + jec.getJobDetail().getName());
    LOGGER.debug("Job Run Time:" + jec.getJobRunTime());
    LOGGER.debug("Next Fire Time:" + jec.getNextFireTime());
}
|
|
|