[jira] [Updated] (NIFI-2395) PersistentProvenanceRepository Deadlocks caused by a blocked journal merge
[ https://issues.apache.org/jira/browse/NIFI-2395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Witt updated NIFI-2395:
------------------------------
    Priority: Blocker  (was: Critical)

> PersistentProvenanceRepository Deadlocks caused by a blocked journal merge
> ---------------------------------------------------------------------------
>
>                 Key: NIFI-2395
>                 URL: https://issues.apache.org/jira/browse/NIFI-2395
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Core Framework
>    Affects Versions: 0.6.0, 0.7.0
>            Reporter: Brian Davis
>            Assignee: Joseph Witt
>            Priority: Blocker
>
> I have a NiFi instance that I have been running for about a week, and it has
> deadlocked at least 3 times during that time. When I say deadlock, I mean the
> whole NiFi instance stops making any progress on FlowFiles. I looked at the
> stack trace and there are a lot of threads stuck doing tasks in the
> PersistentProvenanceRepository. Looking at the code, I think this is what is
> happening:
> There is a ReadWriteLock for which all the readers are waiting on a writer.
> The writer is stuck in this loop:
> {code}
> while (journalFileCount > journalCountThreshold || repoSize > sizeThreshold) {
>     // if a shutdown happens while we are in this loop, kill the rollover thread and break
>     if (this.closed.get()) {
>         if (future != null) {
>             future.cancel(true);
>         }
>         break;
>     }
>
>     if (repoSize > sizeThreshold) {
>         logger.debug("Provenance Repository has exceeded its size threshold; will trigger purging of oldest events");
>         purgeOldEvents();
>
>         journalFileCount = getJournalCount();
>         repoSize = getSize(getLogFiles(), 0L);
>         continue;
>     } else {
>         // if we are constrained by the number of journal files rather than the size of the repo,
>         // then we will just sleep a bit because another thread is already actively merging the journals,
>         // due to the runnable that we scheduled above
>         try {
>             Thread.sleep(100L);
>         } catch (final InterruptedException ie) {
>         }
>     }
>
>     logger.debug("Provenance Repository is still behind. Keeping flow slowed down "
>         + "to accommodate. Currently, there are {} journal files ({} bytes) and "
>         + "threshold for blocking is {} ({} bytes)",
>         journalFileCount, repoSize, journalCountThreshold, sizeThreshold);
>
>     journalFileCount = getJournalCount();
>     repoSize = getSize(getLogFiles(), 0L);
> }
>
> logger.info("Provenance Repository has now caught up with rolling over journal files. Current number of "
>     + "journal files to be rolled over is {}", journalFileCount);
> }
> {code}
> My NiFi is stuck at that sleep indefinitely. The reason it cannot move forward
> is that the thread doing the merge is stopped. The merge thread is at:
> {code}
> accepted = eventQueue.offer(new Tuple<>(record, blockIndex), 10, TimeUnit.MILLISECONDS);
> {code}
> so the queue is full.
> What I believe happened is that the callables created here:
> {code}
> final Callable<Object> callable = new Callable<Object>() {
>     @Override
>     public Object call() throws IOException {
>         while (!eventQueue.isEmpty() || !finishedAdding.get()) {
>             final Tuple tuple;
>             try {
>                 tuple = eventQueue.poll(10, TimeUnit.MILLISECONDS);
>             } catch (final InterruptedException ie) {
>                 continue;
>             }
>
>             if (tuple == null) {
>                 continue;
>             }
>
>             indexingAction.index(tuple.getKey(), indexWriter, tuple.getValue());
>         }
>
>         return null;
>     }
> {code}
> finish before the offer adds its first event, because I do not see any
> Index Provenance Events threads. My guess is the while loop condition is
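The failure mode described above is a bounded producer/consumer hand-off in which the consumers can terminate while the producer still has events to enqueue. The following standalone sketch (hypothetical class and event names, not NiFi code, and not a claim about the actual root cause) reproduces that shape: the single indexing-style consumer dies early, the bounded queue fills, and the merge-style producer can never complete its offer, which is the state the rollover loop then sleeps on while holding the write lock.

{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class MergeQueueStallDemo {

    public static void main(final String[] args) throws Exception {
        // Small bounded queue standing in for the rollover's eventQueue.
        final BlockingQueue<String> eventQueue = new ArrayBlockingQueue<>(4);
        final AtomicBoolean finishedAdding = new AtomicBoolean(false);
        final ExecutorService exec = Executors.newSingleThreadExecutor();

        // Consumer modeled on the indexing callable. If it terminates early
        // (simulated here with an exception on the first event), nothing ever
        // drains the queue again.
        exec.submit(() -> {
            while (!eventQueue.isEmpty() || !finishedAdding.get()) {
                final String event;
                try {
                    event = eventQueue.poll(10, TimeUnit.MILLISECONDS);
                } catch (final InterruptedException ie) {
                    continue;
                }
                if (event == null) {
                    continue;
                }
                if (event.startsWith("poison")) {
                    // Stand-in for an unexpected failure inside index();
                    // the worker dies and is never replaced.
                    throw new IllegalStateException("indexing worker died");
                }
            }
            return null;
        });

        // Producer modeled on the merge thread: keep offering until accepted.
        // Once the consumer is gone, the queue fills and the offer never
        // succeeds, mirroring the stalled merge reported above.
        for (int i = 0; i < 100; i++) {
            final String event = (i == 0 ? "poison-" : "event-") + i;
            int failedOffers = 0;
            boolean accepted = false;
            while (!accepted) {
                accepted = eventQueue.offer(event, 10, TimeUnit.MILLISECONDS);
                if (!accepted && ++failedOffers == 100) {
                    System.out.println("Stalled: queue is full and no consumer is draining it (event " + i + ")");
                    exec.shutdownNow();
                    return;
                }
            }
        }
        finishedAdding.set(true);
        exec.shutdown();
    }
}
{code}

Whatever causes the indexing workers to exit, the effect is the same: eventQueue.offer never succeeds, the journal merge never finishes, and every reader of the provenance ReadWriteLock stays blocked behind the rollover writer.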
[jira] [Updated] (NIFI-2395) PersistentProvenanceRepository Deadlocks caused by a blocked journal merge
[ https://issues.apache.org/jira/browse/NIFI-2395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Witt updated NIFI-2395:
------------------------------
    Affects Version/s: 0.7.0
[jira] [Updated] (NIFI-2395) PersistentProvenanceRepository Deadlocks caused by a blocked journal merge
[ https://issues.apache.org/jira/browse/NIFI-2395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Payne updated NIFI-2395:
-----------------------------
    Fix Version/s: 0.8.0
                   1.0.0
[jira] [Updated] (NIFI-2395) PersistentProvenanceRepository Deadlocks caused by a blocked journal merge
[ https://issues.apache.org/jira/browse/NIFI-2395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Witt updated NIFI-2395:
------------------------------
    Assignee: Mark Payne  (was: Joseph Witt)
[jira] [Updated] (NIFI-2395) PersistentProvenanceRepository Deadlocks caused by a blocked journal merge
[ https://issues.apache.org/jira/browse/NIFI-2395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Payne updated NIFI-2395:
-----------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (NIFI-2395) PersistentProvenanceRepository Deadlocks caused by a blocked journal merge
[ https://issues.apache.org/jira/browse/NIFI-2395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Witt updated NIFI-2395:
------------------------------
    Fix Version/s: (was: 0.8.0)
[jira] [Updated] (NIFI-2395) PersistentProvenanceRepository Deadlocks caused by a blocked journal merge
[ https://issues.apache.org/jira/browse/NIFI-2395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Witt updated NIFI-2395:
------------------------------
    Resolution: Fixed
    Status: Resolved  (was: Patch Available)
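The resolution above does not spell out the committed change in this thread. Purely as an illustration of one defensive pattern for the reported failure mode, and not a description of the actual NIFI-2395 patch, a producer can stop retrying a bounded offer once every consumer future has completed, so a dead indexing pool surfaces as an error instead of an indefinite stall under the rollover write lock. All names below are hypothetical.

{code}
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Hypothetical helper, not NiFi code: enqueue an item for a pool of consumer
// tasks, but give up if every consumer has already finished, so the caller
// never spins forever on a queue that nobody will drain.
public final class GuardedOffer {

    private GuardedOffer() {
    }

    public static <T> void offerOrAbort(final BlockingQueue<T> queue, final T item,
            final List<Future<?>> consumers) throws InterruptedException {
        boolean accepted = false;
        while (!accepted) {
            accepted = queue.offer(item, 10, TimeUnit.MILLISECONDS);
            if (!accepted && consumers.stream().allMatch(Future::isDone)) {
                // No live consumer remains; fail fast instead of blocking the
                // caller (in the NiFi case, the journal rollover that holds
                // the provenance write lock).
                throw new IllegalStateException("All consumers have exited; cannot enqueue " + item);
            }
        }
    }
}
{code}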