S00002

Error Code

S00002

Error Message

The openChunks operation failed because the chunk <xxx> is currently locked and in use by transaction <tid>. RefId:S00002

Probable Causes

This error occurs when a transaction attempts to lock a chunk that is already locked by another transaction. The locked chunk and the transaction locking it are indicated in the error message.

Solutions

DolphinDB supports transactions and offers ACID (atomicity, consistency, isolation, durability) guarantees. For each insert, delete or update operation, DolphinDB creates a transaction to lock the corresponding chunks. For concurrent writes to a database, it is recommended that these writers do not perform insert, delete or update operations on the same chunk at the same time.

For example, when two concurrent jobs attempt to write data to the same set of chunks, i.e., chunks 1, 2, and 3, the aforementioned error occurs: <ChunkInTransaction>The openChunks operation failed because the chunk '/testDB/1/2' is currently locked and in use by transaction 8. RefId:S00002.
login(`admin, `123456)
dbpath="dfs://testDB"
tbname="tb"
if(existsDatabase(dbpath))
	dropDatabase(dbpath)
db=database(dbpath, VALUE, 1..3)
dummyTable=table(1..3 as id, 1..3 as val)
tb=createPartitionedTable(db, dummyTable, "tb", "id")

def Job(dbPath, tbName) {
	tb = loadTable(dbPath, tbName)
	for(i in 1..100) {
		t = table(1..3 as id, 1..3 as val)
		tb.append!(t)
	}
}

submitJob("job1", "", Job, dbpath, tbname)
submitJob("job2", "", Job, dbpath, tbname)

getRecentJobs()  // check the status and error messages of the submitted jobs

To solve this issue, you can:

  1. Modify jobs to avoid concurrent operations on the same chunk.

  2. Set the atomicity level of the database to 'CHUNK' using the setAtomicLevel function. With chunk-level atomicity, when a write-write conflict occurs, the transaction still writes to the chunks that are not locked and keeps retrying the conflicting chunk; the write to that chunk fails only if the chunk remains locked after a few minutes. Note that this may split a single write into multiple transactions, so the write is no longer atomic across chunks.
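For the first solution, one approach is to give each job a disjoint set of partition values so that no two jobs ever touch the same chunk. A minimal sketch, assuming the testDB database from the example above (the writeRange function and the id ranges are illustrative):

```
// Each job writes only its own partitions, so no write-write conflict occurs.
def writeRange(dbPath, tbName, ids) {
	tb = loadTable(dbPath, tbName)
	for(i in 1..100)
		tb.append!(table(ids as id, ids as val))
}
submitJob("job1", "", writeRange, "dfs://testDB", "tb", [1])     // chunk 1 only
submitJob("job2", "", writeRange, "dfs://testDB", "tb", [2, 3])  // chunks 2 and 3
```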
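For the second solution, the atomicity level can be switched before submitting the concurrent jobs. A minimal sketch, assuming the testDB database from the example above:

```
// Switch the database to chunk-level atomicity so concurrent writers
// can proceed on non-conflicting chunks instead of failing immediately.
db = database("dfs://testDB")
setAtomicLevel(db, "CHUNK")
```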