[ https://issues.apache.org/jira/browse/OAK-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15740002#comment-15740002 ]

Vikas Saurabh commented on OAK-3976:
------------------------------------

Also, I was a bit afraid of deadlocks arising here. So, I tried \[0]: 100 
committer threads, each adding a random node (sleeping 10ms in between), plus 
a thread running background ops every 1s (journal push threshold set to 10). 
Letting this party run for 10s didn't deadlock... so, there's a bit of relief 
:).

\[0]:

    @Test
    public void journalPushMustntDeadlock() throws Exception {
        int oldJournalPushThreshold = DocumentNodeStore.journalPushThreshold;
        DocumentNodeStore.journalPushThreshold = 10;
        try {
            final DocumentNodeStore ns = builderProvider.newBuilder().setAsyncDelay(0).getNodeStore();
            final AtomicBoolean stopTest = new AtomicBoolean();

            List<Thread> threads = new ArrayList<>();
            threads.add(new Thread(new Runnable() {
                @Override
                public void run() {
                    while (!stopTest.get()) {
                        ns.runBackgroundOperations();
                        try {
                            Thread.sleep(1000); //slow background thread
                        } catch (InterruptedException e) {
                            // ignore and continue;
                        }
                    }
                }
            }));
            for (int i = 0; i < 100; i++) {
                threads.add(new Thread(new Runnable() {
                    @Override
                    public void run() {
                        while (!stopTest.get()) {
                            NodeBuilder builder = ns.getRoot().builder();
                            builder.child("foo" + UUID.randomUUID());
                            try {
                                merge(ns, builder);
                            } catch (CommitFailedException e) {
                                e.printStackTrace();
                                //ignore errors and continue
                            }
                            try {
                                Thread.sleep(10);
                            } catch (InterruptedException e) {
                                // ignore and continue;
                            }
                        }
                    }
                }));
            }

            for (Thread t : threads) {
                t.start();
            }
            Thread.sleep(10000);//let them party for 10 seconds
            stopTest.set(true);
            for (Thread t : threads) {
                t.join();
            }
        } finally {
            DocumentNodeStore.journalPushThreshold = oldJournalPushThreshold;
        }
    }
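For context, the push mechanism the test above pokes at can be sketched roughly like this. This is an illustrative model only, not the actual Oak implementation: the class and method names (`JournalPushSketch`, `changed`, `backgroundWrite`) are made up for the example; only the idea of a `journalPushThreshold` corresponds to the real field. The point is that once pending changes exceed the threshold, a journal entry is pushed immediately inside the commit path rather than waiting for the next background write, which is why committer threads and the background thread can now contend on the journal.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch (hypothetical names) of threshold-based journal pushing.
public class JournalPushSketch {
    // Analogous in spirit to DocumentNodeStore.journalPushThreshold.
    static final int PUSH_THRESHOLD = 10;

    final List<String> pending = new ArrayList<>();        // changed paths not yet journaled
    final List<List<String>> journal = new ArrayList<>();  // pushed journal entries

    // Called from the commit path: record a changed path, and if the pending
    // set has grown past the threshold, push it as a journal entry right away
    // instead of letting it accumulate until the next background write.
    synchronized void changed(String path) {
        pending.add(path);
        if (pending.size() >= PUSH_THRESHOLD) {
            pushJournalEntry();
        }
    }

    // Called from the background write: flush whatever is still pending,
    // which is now bounded in size by the threshold.
    synchronized void backgroundWrite() {
        if (!pending.isEmpty()) {
            pushJournalEntry();
        }
    }

    private void pushJournalEntry() {
        journal.add(new ArrayList<>(pending));
        pending.clear();
    }
}
```

With this shape, no single journal entry carries more than `PUSH_THRESHOLD` changes, which is the "support larger entries by keeping each entry small" angle of this issue; the test then checks that the extra synchronization on the commit path does not deadlock against the background thread.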

> journal should support large(r) entries
> ---------------------------------------
>
>                 Key: OAK-3976
>                 URL: https://issues.apache.org/jira/browse/OAK-3976
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: documentmk
>    Affects Versions: 1.3.14
>            Reporter: Stefan Egli
>            Assignee: Vikas Saurabh
>             Fix For: 1.6, 1.5.16
>
>
> Journal entries are created in the background write. Normally this happens 
> every second. If for some reason there is a large delay between two 
> background writes, the number of pending changes can accumulate, which can 
> result in arbitrarily large single journal entries (i.e. with a large {{_c}} 
> property).
> This can cause multiple problems down the road:
> * journal gc at this point loads 450 entries at a time - and if some are 
> large, this can result in very large memory consumption during gc (which can 
> cause severe stability problems for the VM, if not an OOM etc). This should 
> be fixed with OAK-3001 (where we only fetch the id, thus do not care how big 
> {{_c}} is)
> * until OAK-3001 is done (which is currently scheduled after 1.4), what we 
> can do is reduce the delete batch size (OAK-3975)
> * background reads, however, also read the journal entries, and even once 
> OAK-3001/OAK-3975 are implemented, the background read can still cause large 
> memory consumption. So we need to improve this one way or another.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
