Author |
Message |
10/19/2011 02:23:43
|
gumis
neo
Joined: 10/19/2011 02:06:58
Messages: 6
Offline
|
Hello,
The scenario is as follows:
- There is one long transaction, and within its scope a job is scheduled.
- There are also many rather short transactions inserting jobs.
Problem: the long-running transaction blocks the short transactions from inserting jobs.
The statement on which we get the timeout comes from OracleDelegate.insertJobDetail(OracleDelegate.java:190):
INSERT INTO QRTZ_JOB_DETAILS (JOB_NAME, JOB_GROUP, DESCRIPTION, JOB_CLASS_NAME, IS_DURABLE, IS_VOLATILE, IS_STATEFUL, REQUESTS_RECOVERY,
JOB_DATA) VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?)
The lockOnInsert property is set to false.
Do you have any idea why Oracle blocks this insert statement? I would not expect such behaviour even if another running transaction has already inserted another job.
Thanks for help,
Gumis
|
|
|
10/19/2011 05:33:31
|
jhouse
seraphim
Joined: 11/06/2009 15:29:56
Messages: 1703
Offline
|
Because all of the scheduling data (tables) is effectively/logically one large collection (e.g. a Map), all access to it is locked to prevent concurrency issues (just like Hashtable or ConcurrentHashMap).
Hence, long-running transactions should avoid performing scheduling operations until near the end of their work.
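The Hashtable analogy can be sketched in plain Java. This is only a toy model of the behavior described above, not Quartz code: one lock stands in for the store-level lock (or the row locks Oracle takes when lockOnInsert=false), the class and method names are made up for illustration, and the timings are arbitrary.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Toy model: one coarse lock guards all scheduling data. A long transaction
 * that schedules a job early keeps the lock until commit, so every short
 * transaction that also wants to insert a job waits and can time out,
 * much like the ORA-02049 seen in this thread.
 */
public class CoarseLockDemo {
    private static final ReentrantLock STORE_LOCK = new ReentrantLock();

    /** Simulates the long transaction: grabs the lock, then keeps working. */
    public static Thread startLongTransaction(long workMillis) throws InterruptedException {
        CountDownLatch lockHeld = new CountDownLatch(1);
        Thread t = new Thread(() -> {
            STORE_LOCK.lock();            // scheduleJob() acquires the lock...
            try {
                lockHeld.countDown();
                Thread.sleep(workMillis); // ...rest of the long transaction
            } catch (InterruptedException ignored) {
            } finally {
                STORE_LOCK.unlock();      // released only at commit
            }
        });
        t.start();
        lockHeld.await();                 // return once the lock is really held
        return t;
    }

    /** Simulates a short transaction's insert with a lock-wait timeout. */
    public static boolean shortInsert(long timeoutMillis) throws InterruptedException {
        boolean ok = STORE_LOCK.tryLock(timeoutMillis, TimeUnit.MILLISECONDS);
        if (ok) STORE_LOCK.unlock();
        return ok;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread longTx = startLongTransaction(500);
        System.out.println("insert during long tx: " + shortInsert(100)); // times out
        longTx.join();
        System.out.println("insert after commit:   " + shortInsert(100)); // succeeds
    }
}
```

This is why the advice above is to defer scheduling calls until just before the long transaction commits: the lock is then held only briefly.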
|
|
|
10/19/2011 05:54:44
|
gumis
neo
Joined: 10/19/2011 02:06:58
Messages: 6
Offline
|
Thanks, but I would like to understand this a bit better.
How is it locked? I thought that when you set lockOnInsert = false, Quartz does not perform any locking beyond the standard locking applied by the database when you insert/update rows.
|
|
|
10/19/2011 06:54:56
|
jhouse
seraphim
Joined: 11/06/2009 15:29:56
Messages: 1703
Offline
|
Which scheduler API functions are you calling?
|
|
|
10/19/2011 07:50:47
|
gumis
neo
Joined: 10/19/2011 02:06:58
Messages: 6
Offline
|
Scheduler.scheduleJob(jobDetail, trigger) from Quartz 1.6.6
|
|
|
10/19/2011 08:34:02
|
jhouse
seraphim
Joined: 11/06/2009 15:29:56
Messages: 1703
Offline
|
We would need to see all of the other SQL being performed by concurrent transactions at the same time to know why this deadlocked.
Access to the Quartz tables is highly concurrent by its very nature, so the lockOnInsert property defaults to true, preventing deadlocks via the explicit high-level locking I described in my previous comment.
I agree that it's somewhat of a mystery why Oracle would need to block on that insert statement, unless another thread is working with a row with the very same key as the one being inserted (e.g. it has deleted a row with the same job name and group but has not yet committed).
What TX isolation level are you using?
|
|
|
10/19/2011 12:08:57
|
gumis
neo
Joined: 10/19/2011 02:06:58
Messages: 6
Offline
|
TX isolation level is SERIALIZABLE.
|
|
|
10/19/2011 12:09:27
|
gumis
neo
Joined: 10/19/2011 02:06:58
Messages: 6
Offline
|
I don't think it is a deadlock; that is, it is not a situation where the first transaction is waiting for a resource acquired by the second and the second for a resource acquired by the first.
The transaction timeout is much longer than distributed_lock_timeout, and the second transaction gets ORA-02049 while waiting for a lock held by the first transaction.
I rather suspect something like what is described here:
http://ora-02049.ora-code.com/msg/26060.html
"TX-4 is a complete different matter. I faced TX-4 problems twice. The first one was related to the lack of free ITL slots in a table (or index), I don't remember."
|
|
|
10/20/2011 06:25:56
|
jhouse
seraphim
Joined: 11/06/2009 15:29:56
Messages: 1703
Offline
|
Is your Oracle server running in dedicated or shared mode?
The serializable level is very likely the cause: all transactions stack up behind one another, so they have time to time out, just as if they were waiting for a lock. No?
|
|
|
10/20/2011 07:34:13
|
gumis
neo
Joined: 10/19/2011 02:06:58
Messages: 6
Offline
|
Sorry, I made a mistake. Of course we are using the default isolation level, which is READ_COMMITTED.
|
|
|
07/20/2016 09:31:47
|
frblanco
neo
Joined: 07/20/2016 08:14:09
Messages: 1
Offline
|
If you have a unique key and two sessions perform inserts that conflict on that unique key, the second one will be blocked, even if you are in READ COMMITTED mode and the first transaction has not yet committed.
I think other types of constraints will do this too.
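That blocking behavior can be modeled in plain Java: think of the unique key's index entry as a lock that the first uncommitted INSERT holds until commit or rollback. This is a sketch of the observable behavior, not Oracle internals; the class, methods, and the job-key string are all illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Toy model of unique-key blocking: the first uncommitted INSERT of a key
 * holds that key's lock; a second session inserting the same key must wait
 * for commit/rollback, even under READ COMMITTED.
 */
public class UniqueKeyBlockDemo {
    private static final Map<String, ReentrantLock> KEY_LOCKS = new ConcurrentHashMap<>();

    /** Session issues INSERT: takes the key's lock, held until commit(). */
    public static void insert(String jobKey) {
        KEY_LOCKS.computeIfAbsent(jobKey, k -> new ReentrantLock()).lock();
    }

    /** Commit (or rollback) releases the key's lock. */
    public static void commit(String jobKey) {
        KEY_LOCKS.get(jobKey).unlock();
    }

    /** A second session tries the same INSERT from another thread. */
    public static boolean insertFromOtherSession(String jobKey, long timeoutMillis)
            throws InterruptedException {
        final boolean[] got = {false};
        Thread session = new Thread(() -> {
            ReentrantLock lock = KEY_LOCKS.computeIfAbsent(jobKey, k -> new ReentrantLock());
            try {
                got[0] = lock.tryLock(timeoutMillis, TimeUnit.MILLISECONDS);
                if (got[0]) lock.unlock();
            } catch (InterruptedException ignored) {
            }
        });
        session.start();
        session.join();
        return got[0];
    }

    public static void main(String[] args) throws InterruptedException {
        insert("JOB_NAME=reportJob");  // session 1 inserts, does not commit yet
        System.out.println("session 2 insert: "
                + insertFromOtherSession("JOB_NAME=reportJob", 100)); // blocked
        commit("JOB_NAME=reportJob");
        System.out.println("after commit:     "
                + insertFromOtherSession("JOB_NAME=reportJob", 100)); // succeeds
    }
}
```

This matches the original report: QRTZ_JOB_DETAILS has a primary key on the job name and group, so a long transaction that inserted (or deleted) a job with a given key blocks any other session inserting the same key until it commits.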
|
|
|
|