I have an AFTER INSERT trigger for table A that updates multiple rows in table B (incrementing a column in table B). Normally, the web server inserts multiple rows into table A using a single INSERT statement at the READ COMMITTED isolation level, which causes the trigger to fire for each row inserted. As there are multiple web servers running at once, there are always multiple concurrent INSERT statements/transactions.

Initially, this led to Postgres reporting deadlocks while the trigger was updating the rows in table B. To address this, I changed the trigger function to update the rows in table B in the order of their primary key:

UPDATE table_b
WHERE name = 'some name' AND store = 'store id'

The deadlocks were still happening after this change. I thought the above statement didn't work, so I changed it to use a PL/pgSQL FOR loop: FOR row_b IN (

But this also didn't work, and deadlocks were still happening. The source of the deadlock was reported to be at the UPDATE statement on table B (when incrementing the counter of the row). This was the exact error reported by Postgres:

Process 426 waits for ShareLock on transaction 850467903; blocked by process 316.
Process 316 waits for ShareLock on transaction 850467907; blocked by process 426.

I'm confused about how a deadlock could still happen in this situation.

Answer:

You were on the right track with trying to acquire locks in a consistent sort order. The best defense against deadlocks is generally to avoid them by being certain that all applications using a database acquire locks on multiple objects in a consistent order. However, your implementation does not actually achieve that.
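One reason an ordered UPDATE does not help is that UPDATE in PostgreSQL accepts no ORDER BY clause, so the order in which it locks rows is not guaranteed. A common workaround is to lock the target rows explicitly with SELECT … ORDER BY … FOR UPDATE before updating them. A minimal sketch of such a trigger function follows; the table and column names (table_b, counter, name, store, an assumed primary key id) are taken or extrapolated from the fragments above, not confirmed by the original post:

```sql
-- Sketch only: names (table_b, counter, name, store, id) are assumptions.
CREATE OR REPLACE FUNCTION increment_b_counters() RETURNS trigger AS $$
BEGIN
    -- Lock the matching rows in primary-key order first.
    -- UPDATE has no ORDER BY, so it alone cannot guarantee lock order.
    PERFORM 1
    FROM table_b
    WHERE name = NEW.name AND store = NEW.store
    ORDER BY id
    FOR UPDATE;

    -- The rows are now locked; the update itself can run in any order.
    UPDATE table_b
    SET counter = counter + 1
    WHERE name = NEW.name AND store = NEW.store;

    RETURN NULL;  -- return value of an AFTER ROW trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table_a_after_insert
AFTER INSERT ON table_a
FOR EACH ROW EXECUTE FUNCTION increment_b_counters();
```

Note one caveat: because the trigger fires once per inserted row, two concurrent multi-row INSERTs can still acquire their locks in different overall orders across trigger invocations, even if each single invocation locks its own rows in PK order. Collecting all affected rows and locking them in one ordered pass (for example, via a statement-level trigger with a transition table) avoids that, but whether it fits here depends on details the post does not give.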