Created by: bradvogel
…Bits/bull/issues/371#issuecomment-260158407.
Double-processing happens when two workers learn about the same job at the same time via getNextJob. One worker takes the lock, processes the job, and moves it to completed before the second worker even attempts to get the lock. By the time the second worker tries, the job is already in the completed state, but it processes it anyway since it got the lock.
So the fix here is for the takeLock script to ensure the job is in the active list before taking the lock. That guarantees that jobs in wait or completed, or jobs removed from the queue altogether, don't get double-processed. Per the discussion in #370, though, takeLock is parameterized to require the job be in active only when the lock is being taken to process the job. In other cases, such as job.remove(), the job might be in a different state, but we still want to be able to lock it.
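A minimal sketch of the idea, using an in-memory stand-in for the Redis state rather than the actual Lua script (the names `take_lock` and `ensure_active` are illustrative here, not bull's real API):

```python
import threading

class Queue:
    """In-memory stand-in for the Redis-backed queue state."""
    def __init__(self):
        self._mutex = threading.Lock()   # plays the role of Redis script atomicity
        self.active = set()              # job ids currently in the active list
        self.locks = {}                  # job id -> lock owner (worker id)

    def take_lock(self, job_id, worker_id, ensure_active=True):
        """Atomically acquire the job lock.

        When ensure_active is True, the lock is only granted if the job is
        still in the active list -- so a job already moved to completed (or
        removed entirely) cannot be locked and re-processed. Callers like
        job.remove() pass ensure_active=False, since they legitimately need
        to lock jobs in other states.
        """
        with self._mutex:
            if ensure_active and job_id not in self.active:
                return False
            if self.locks.get(job_id) not in (None, worker_id):
                return False  # another worker already holds the lock
            self.locks[job_id] = worker_id
            return True

    def move_to_completed(self, job_id):
        with self._mutex:
            self.active.discard(job_id)
            self.locks.pop(job_id, None)

q = Queue()
q.active.add("job-1")

assert q.take_lock("job-1", "worker-A")          # worker A wins the race
q.move_to_completed("job-1")                      # A finishes the job
assert not q.take_lock("job-1", "worker-B")       # B arrives late: no lock, no double-processing
assert q.take_lock("job-1", "worker-B", ensure_active=False)  # but remove() can still lock it
```

The key point is that the membership check and the lock acquisition happen under the same atomicity guarantee, which in bull is provided by Redis executing the whole Lua script atomically.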
This fixes the existing broken unit test "should process each job once".
This also prevents hazard https://github.com/OptimalBits/bull/issues/370.