Hi,

I have deployed a 5-node Ignite cluster (version 2.9.0 on Java 11) on Kubernetes with persistence enabled.
I started ingesting data into 3 tables using JDBC batch inserts (around 20 million records per table, with backups set to 1). After ingesting a large amount of data, I connected to the Visor shell from a pod deployed solely for that purpose, using the same Ignite config file as the servers. After the Visor shell connected to the cluster, the cleanup of unneeded WAL records (which should run after each checkpoint) stopped, and the WAL started growing linearly because data ingestion is continuous. This makes the WAL disk run out of space, and the pods crash.
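For reference, the config file is a fairly standard persistence-enabled setup; a minimal sketch of the relevant part (the values and omitted properties are illustrative, not our exact configuration):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
      <property name="defaultDataRegionConfiguration">
        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
          <!-- Native persistence on: every update goes through the WAL,
               and checkpoints should allow old WAL segments to be cleared. -->
          <property name="persistenceEnabled" value="true"/>
        </bean>
      </property>
    </bean>
  </property>
</bean>
```

Both the server nodes and the Visor pod point at this same file.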

When I look at the logs, there are continuous warning messages saying:
Could not clear historyMap due to WAL reservation on cp: CheckpointEntry
[id=e8bb9c22-0709-416f-88d6-16c5ca534024, timestamp=1606979158669,
ptr=FileWALPointer [idx=1468, fileOff=45255321, len=9857]], history map size
is 4


And the checkpoint finish messages look like this:
Checkpoint finished [cpId=ca254956-5550-45d6-87c5-892b7e07b13b,
pages=494933, markPos=FileWALPointer [idx=1472, fileOff=72673736, len=9857],
walSegmentsCleared=0, walSegmentsCovered=[1470 - 1471], markDuration=464ms,
pagesWrite=8558ms, fsync=3597ms, total=14027ms]

Here you can see walSegmentsCleared=0, meaning no WAL segments were cleared even after the checkpoint completed. I am not sure what is causing this behaviour.
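For what it's worth, the settings I am aware of that bound WAL/archive growth are sketched below (the values are illustrative; the property names are the standard Ignite 2.9 DataStorageConfiguration ones). We have not tuned these, so the defaults are in effect:

```xml
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
  <!-- Cap on the WAL archive size; checkpoint history beyond this
       should be released so old segments can be cleared. -->
  <property name="maxWalArchiveSize" value="#{4L * 1024 * 1024 * 1024}"/> <!-- 4 GB -->
  <!-- Checkpoint interval; only segments older than the retained
       checkpoint history are eligible for cleanup. -->
  <property name="checkpointFrequency" value="180000"/> <!-- 3 minutes -->
</bean>
```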

We are ingesting data at a very high rate (~25 MB/s).
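To quantify the pressure, here is a quick back-of-the-envelope sketch of how fast the WAL volume fills when no segments are cleared (the 200 GB disk size is a made-up example, not our actual volume):

```java
// Rough estimate of time until the WAL volume fills at a sustained write rate.
public class WalGrowth {
    /** Seconds until a disk of the given size fills at the given write rate. */
    static long secondsToFill(long diskBytes, long bytesPerSec) {
        return diskBytes / bytesPerSec;
    }

    public static void main(String[] args) {
        long disk = 200L * 1024 * 1024 * 1024; // hypothetical 200 GB WAL volume
        long rate = 25L * 1024 * 1024;         // ~25 MB/s ingestion, as above
        long secs = secondsToFill(disk, rate);
        System.out.println("WAL volume full in ~" + (secs / 60) + " minutes");
    }
}
```

So even a generously sized WAL disk fills within a couple of hours at this rate once cleanup stops.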
Could someone please help with this issue?


Regards,
Shiva

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
