Hi,
I start a persistent clustered scheduler on 3 different nodes. I have clustered jobs implementing InterruptableJob that then run on whichever node's scheduler instance picks up the trigger. So far so good.
I also have code to stop a job that first calls scheduler.interrupt() and then scheduler.deleteJob(). This works as expected if I only have 1 node, or if I execute this code on the node where the job happens to be running.
If the job is running on a different node from the one where I execute this code, the job will be deleted, but obviously interrupt() will never reach it.
What is the best way to handle this?
1) Should I broadcast an interrupt to all nodes, wait a second and then do the delete?
2) Query each node to see which one is actually executing the job (is there even a way to do this? I cannot find one), and then have that node do the interrupt and delete?
3) Is there something out of the box that Quartz can do?
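For context on why the interrupt has to reach the executing node at all: Quartz interruption is cooperative. An InterruptableJob keeps a flag that interrupt() sets and that the job's own work loop must check, so only the JVM actually running the job can honor it. Here is a minimal standalone sketch of that pattern (plain Java, no Quartz dependency; the class and method names are illustrative, not Quartz API):

```java
// Sketch of the cooperative-interrupt pattern behind Quartz's InterruptableJob.
// A real job would implement org.quartz.InterruptableJob instead of Runnable.
public class InterruptDemo {
    static class CancellableJob implements Runnable {
        private volatile boolean interrupted = false; // set by interrupt(), read by the work loop
        private int unitsDone = 0;

        public void interrupt() { interrupted = true; }

        @Override
        public void run() {
            // Do the work in small units, checking the flag between units.
            while (!interrupted && unitsDone < 1000) {
                unitsDone++;
            }
        }

        public int unitsDone() { return unitsDone; }
    }

    public static void main(String[] args) throws Exception {
        CancellableJob job = new CancellableJob();
        // Interrupt before starting so the outcome is deterministic for the demo:
        // the loop sees the flag on its first check and exits immediately.
        job.interrupt();
        Thread t = new Thread(job);
        t.start();
        t.join();
        System.out.println("done after " + job.unitsDone() + " units");
    }
}
```

Because the flag lives in the job instance on one node, option 1 (broadcast the interrupt to every node, then delete) is the approach that fits this model: every node calls interrupt() locally, the node that actually holds the running instance is the one where it takes effect, and the others are harmless no-ops.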
Thanks,
Michael
Is there a scheduler event that tells me when my scheduler finds a new job for it in the database?
I have 2 nodes, one with scheduler A started and one with scheduler B started. On node A I also create a second scheduler B instance that I never start; I use it only to schedule jobs so that they get placed in the database for the running scheduler B to pick up and run. Call it poor man's clustering.
What I would like to do is add a listener to this job on scheduler B. Is there a scheduler event or hook that I can use to tell me when the scheduler finds a new job in the database so that I can add the listener?
If listeners were persisted I could just do all this from node A, but they are not.
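One way around the per-job listener not being persisted is to register a global listener with the running scheduler B itself, so it is notified for every job that scheduler picks up, including jobs scheduled from node A. Here is a standalone sketch of that idea (the MiniScheduler and listener types below are illustrative stand-ins, not Quartz classes; in Quartz you would register a global JobListener with the scheduler and react in its jobToBeExecuted callback):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the global-listener idea: instead of attaching a listener to one
// job (which is not persisted across nodes), register a listener with the
// scheduler so it fires for every job the scheduler picks up.
public class GlobalListenerDemo {
    interface JobListener { void jobToBeExecuted(String jobName); }

    static class MiniScheduler {
        private final List<JobListener> globalListeners = new ArrayList<>();

        void addGlobalJobListener(JobListener l) { globalListeners.add(l); }

        // Called whenever this scheduler picks up a job,
        // no matter which node originally scheduled it.
        void execute(String jobName) {
            for (JobListener l : globalListeners) l.jobToBeExecuted(jobName);
            // ... actually run the job here ...
        }
    }

    public static void main(String[] args) {
        MiniScheduler schedB = new MiniScheduler();
        List<String> seen = new ArrayList<>();
        schedB.addGlobalJobListener(seen::add);
        schedB.execute("jobFromNodeA"); // job found in the shared database
        System.out.println(seen);       // prints [jobFromNodeA]
    }
}
```

A global listener sees every job, so the callback would need to filter by job name or group to react only to the jobs you care about.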
Thanks,
Michael