[jira] [Commented] (OAK-9041) Build Jackrabbit Oak #2732 failed

2020-05-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100952#comment-17100952
 ] 

Hudson commented on OAK-9041:
-

Build is still failing.
Failed run: [Jackrabbit Oak 
#2741|https://builds.apache.org/job/Jackrabbit%20Oak/2741/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/2741/console]

> Build Jackrabbit Oak #2732 failed
> -
>
> Key: OAK-9041
> URL: https://issues.apache.org/jira/browse/OAK-9041
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #2732 has failed.
> First failed run: [Jackrabbit Oak 
> #2732|https://builds.apache.org/job/Jackrabbit%20Oak/2732/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/2732/console]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9041) Build Jackrabbit Oak #2732 failed

2020-05-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100847#comment-17100847
 ] 

Hudson commented on OAK-9041:
-

Build is still failing.
Failed run: [Jackrabbit Oak 
#2740|https://builds.apache.org/job/Jackrabbit%20Oak/2740/] [console 
log|https://builds.apache.org/job/Jackrabbit%20Oak/2740/console]

> Build Jackrabbit Oak #2732 failed
> -
>
> Key: OAK-9041
> URL: https://issues.apache.org/jira/browse/OAK-9041
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: continuous integration
>Reporter: Hudson
>Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #2732 has failed.
> First failed run: [Jackrabbit Oak 
> #2732|https://builds.apache.org/job/Jackrabbit%20Oak/2732/] [console 
> log|https://builds.apache.org/job/Jackrabbit%20Oak/2732/console]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9052) Reindexing using --doc-traversal-mode may need a lot of memory

2020-05-06 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100829#comment-17100829
 ] 

Thomas Mueller commented on OAK-9052:
-

https://github.com/oak-indexing/jackrabbit-oak/pull/154

With the memory setting "0" (the default value), a temporary file is created 
for the linked list, so that heap memory usage stays constant (around 30 MB, I 
estimate). Internally, a persistent key-value store, the H2 MVStore, is used (the 
same one the MongoMK uses for the persistent cache). Every minute, the file is 
compacted (configurable via the "oak.indexer.linkedList.compactMillis" system 
property).

It's possible to use the old behavior by setting the system property 
"oak.indexer.memLimitInMB" to 100.

> Reindexing using --doc-traversal-mode may need a lot of memory
> --
>
> Key: OAK-9052
> URL: https://issues.apache.org/jira/browse/OAK-9052
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: indexing, mongomk
>Reporter: Thomas Mueller
>Assignee: Thomas Mueller
>Priority: Major
>
> Indexing using oak-run and --doc-traversal-mode uses the FlatFileStore. For 
> aggregation, there is a limit on memory usage, by default around 100 MB. 
> Depending on the content structure, this limit can be exceeded. 
> It would be good to find a way to avoid the memory limit, for example by using 
> temporary storage (a file, or a persistent key/value store).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9024) oak-solr-osgi imports org.slf4j.impl

2020-05-06 Thread Manfred Baedke (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100821#comment-17100821
 ] 

Manfred Baedke commented on OAK-9024:
-

[~jsedding],

bq. I don't think embedding a logging binding was ever the goal of 
oak-solr-osgi. If you are interested in the original reasoning why this bundle 
was created, the discussion https://markmail.org/thread/5xsyx5l4c6euqtt2 may be 
interesting.

Yes, we went with option 2 mentioned there, which only confirms that we need to 
embed all the transitive runtime dependencies.

> oak-solr-osgi imports org.slf4j.impl
> 
>
> Key: OAK-9024
> URL: https://issues.apache.org/jira/browse/OAK-9024
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: solr
>Reporter: Julian Reschke
>Assignee: Manfred Baedke
>Priority: Minor
> Fix For: 1.28.0
>
> Attachments: OAK-9024.patch
>
>
> From the manifest:
> {{org.slf4j.impl;version="[1.6,2)"}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (OAK-9024) oak-solr-osgi imports org.slf4j.impl

2020-05-06 Thread Manfred Baedke (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100821#comment-17100821
 ] 

Manfred Baedke edited comment on OAK-9024 at 5/6/20, 1:51 PM:
--

[~jsedding],

bq. I don't think embedding a logging binding was ever the goal of 
oak-solr-osgi. If you are interested in the original reasoning why this bundle 
was created, the discussion https://markmail.org/thread/5xsyx5l4c6euqtt2 may be 
interesting.

Yes, we went with option 2 mentioned there, which only confirms that we need to 
embed all the transitive dependencies.


was (Author: baedke):
[~jsedding],

bq. I don't think embedding a logging binding was ever the goal of 
oak-solr-osgi. If you are interested in the original reasoning why this bundle 
was created, the discussion https://markmail.org/thread/5xsyx5l4c6euqtt2 may be 
interesting.

Yes, we went with option 2 mentioned there, which only confirms that we need to 
embed all the transitive runtime dependencies.

> oak-solr-osgi imports org.slf4j.impl
> 
>
> Key: OAK-9024
> URL: https://issues.apache.org/jira/browse/OAK-9024
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: solr
>Reporter: Julian Reschke
>Assignee: Manfred Baedke
>Priority: Minor
> Fix For: 1.28.0
>
> Attachments: OAK-9024.patch
>
>
> From the manifest:
> {{org.slf4j.impl;version="[1.6,2)"}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9024) oak-solr-osgi imports org.slf4j.impl

2020-05-06 Thread Manfred Baedke (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100809#comment-17100809
 ] 

Manfred Baedke commented on OAK-9024:
-

[~jsedding], maybe I'm completely missing something, but if we embed zookeeper 
and do not embed org.slf4j.slf4j-log4j12, we'd have to make sure by code 
inspection that zookeeper will continue working. And every time we update the 
zookeeper dependency, we'd have to verify that again. That's maintenance hell.

> oak-solr-osgi imports org.slf4j.impl
> 
>
> Key: OAK-9024
> URL: https://issues.apache.org/jira/browse/OAK-9024
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: solr
>Reporter: Julian Reschke
>Assignee: Manfred Baedke
>Priority: Minor
> Fix For: 1.28.0
>
> Attachments: OAK-9024.patch
>
>
> From the manifest:
> {{org.slf4j.impl;version="[1.6,2)"}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8890) LDAP login may fail if a server or intermediate silently drops connections

2020-05-06 Thread Manfred Baedke (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100775#comment-17100775
 ] 

Manfred Baedke commented on OAK-8890:
-

Done: http://svn.apache.org/viewvc?view=revision&revision=1877435

> LDAP login may fail if a server or intermediate silently drops connections
> --
>
> Key: OAK-8890
> URL: https://issues.apache.org/jira/browse/OAK-8890
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-ldap
>Reporter: Manfred Baedke
>Assignee: Manfred Baedke
>Priority: Major
> Attachments: OAK-8890.patch
>
>
> This has been seen on production systems with Oak 1.10.2, where a firewall 
> was configured to drop idle connections after a timeout without sending an 
> RST (for security reasons). When this happens, the connection pool used by 
> the LdapPrincipalProvider will still consider these connections healthy. 
> Eventually such a connection will be used for an actual LDAP BIND/SEARCH, 
> which will simply time out.
> The connection pool is an instance of 
> org.apache.commons.pool.impl.GenericObjectPool, which has configuration 
> options to deal with this scenario (namely running an eviction task that 
> properly closes idle connections after a timeout shorter than the one used 
> by the firewall).
> The creation of the connection pool is hard-coded, and most of the 
> configuration options are not available.
> I propose to change that. I'll supply a patch soon.
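For illustration, a minimal sketch of the kind of eviction configuration the description refers to, assuming the commons-pool 1.x API; the factory parameter and the timeout values are hypothetical, not what the Oak patch actually configures:
{noformat}
import org.apache.commons.pool.PoolableObjectFactory;
import org.apache.commons.pool.impl.GenericObjectPool;

// connectionFactory would be whatever PoolableObjectFactory creates the LDAP
// connections; it is only a parameter here, not part of the actual Oak code.
static GenericObjectPool newEvictingPool(PoolableObjectFactory connectionFactory) {
    GenericObjectPool pool = new GenericObjectPool(connectionFactory);
    // Run the evictor every minute and close connections that have been idle
    // for more than 5 minutes, i.e. well below the firewall's idle timeout.
    pool.setTimeBetweenEvictionRunsMillis(60 * 1000L);
    pool.setMinEvictableIdleTimeMillis(5 * 60 * 1000L);
    pool.setTestWhileIdle(true);        // validate idle connections during eviction runs
    pool.setNumTestsPerEvictionRun(-1); // examine all idle connections on each run
    return pool;
}
{noformat}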



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (OAK-8890) LDAP login may fail if a server or intermediate silently drops connections

2020-05-06 Thread Manfred Baedke (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manfred Baedke resolved OAK-8890.
-
Fix Version/s: 1.28.0
   Resolution: Fixed

> LDAP login may fail if a server or intermediate silently drops connections
> --
>
> Key: OAK-8890
> URL: https://issues.apache.org/jira/browse/OAK-8890
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: auth-ldap
>Reporter: Manfred Baedke
>Assignee: Manfred Baedke
>Priority: Major
> Fix For: 1.28.0
>
> Attachments: OAK-8890.patch
>
>
> This has been seen on production systems with Oak 1.10.2, where a firewall 
> was configured to drop idle connections after a timeout without sending an 
> RST (for security reasons). When this happens, the connection pool used by 
> the LdapPrincipalProvider will still consider these connections healthy. 
> Eventually such a connection will be used for an actual LDAP BIND/SEARCH, 
> which will simply time out.
> The connection pool is an instance of 
> org.apache.commons.pool.impl.GenericObjectPool, which has configuration 
> options to deal with this scenario (namely running an eviction task that 
> properly closes idle connections after a timeout shorter than the one used 
> by the firewall).
> The creation of the connection pool is hard-coded, and most of the 
> configuration options are not available.
> I propose to change that. I'll supply a patch soon.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9042) Improve azure archive recovery during startup

2020-05-06 Thread Julian Reschke (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100756#comment-17100756
 ] 

Julian Reschke commented on OAK-9042:
-

[~smiroslav], [~adulceanu]: are you sure that you want to add 
org.apache.jackrabbit.oak.segment.spi.persistence.split to the exported API?

> Improve azure archive recovery during startup
> -
>
> Key: OAK-9042
> URL: https://issues.apache.org/jira/browse/OAK-9042
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-azure, segment-tar
>Affects Versions: 1.26.0
>Reporter: Miroslav Smiljanic
>Assignee: Andrei Dulceanu
>Priority: Major
>  Labels: Patch
> Fix For: 1.28.0
>
> Attachments: proposal.patch, test_tar_repo_recovery.patch
>
>
> During repository startup, if an archive directory was not closed properly, 
> recovery is performed. During that procedure, segments are copied to the 
> backup directory and deleted from the source directory, one by one.
> This can create problems and negatively impact other ongoing activities that 
> access the same archive, for example cloning the repository in order to 
> create a new environment. 
> With the proposed patch, after creating the backup, not all segments are 
> deleted from the archive, but only those that could not be recovered. 
> [^proposal.patch]
>  
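Roughly, the proposed behavior amounts to the following hedged sketch; the helper names below are hypothetical, for illustration only, and are not the actual Oak SegmentArchiveManager API:
{noformat}
// Hypothetical helpers, not real Oak methods.
Map<UUID, byte[]> recovered = readRecoverableSegments(archive); // read whatever can still be read
writeBackup(archive, recovered);                                // always create the full backup first

// Old behavior: delete every segment from the source archive.
// Proposed behavior: delete only the segments that could not be recovered,
// so other consumers of the same archive (e.g. a clone in progress) keep
// seeing the intact segments.
for (UUID segmentId : listSegments(archive)) {
    if (!recovered.containsKey(segmentId)) {
        deleteSegment(archive, segmentId);
    }
}
{noformat}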



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9042) Improve azure archive recovery during startup

2020-05-06 Thread Julian Reschke (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100755#comment-17100755
 ] 

Julian Reschke commented on OAK-9042:
-

trunk: [r1877431|http://svn.apache.org/r1877431] 
[r1877193|http://svn.apache.org/r1877193]

> Improve azure archive recovery during startup
> -
>
> Key: OAK-9042
> URL: https://issues.apache.org/jira/browse/OAK-9042
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: segment-azure, segment-tar
>Affects Versions: 1.26.0
>Reporter: Miroslav Smiljanic
>Assignee: Andrei Dulceanu
>Priority: Major
>  Labels: Patch
> Fix For: 1.28.0
>
> Attachments: proposal.patch, test_tar_repo_recovery.patch
>
>
> During repository startup, if an archive directory was not closed properly, 
> recovery is performed. During that procedure, segments are copied to the 
> backup directory and deleted from the source directory, one by one.
> This can create problems and negatively impact other ongoing activities that 
> access the same archive, for example cloning the repository in order to 
> create a new environment. 
> With the proposed patch, after creating the backup, not all segments are 
> deleted from the archive, but only those that could not be recovered. 
> [^proposal.patch]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (OAK-8912) Version garbage collector is not working if documents exceeded 100000

2020-05-06 Thread Julian Reschke (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100663#comment-17100663
 ] 

Julian Reschke edited comment on OAK-8912 at 5/6/20, 12:09 PM:
---

1) 1.10.2 is outdated; please use the latest release from the current 
maintenance branch 1.22 (1.22.3 right now)

2) If you can reproduce this, please attach sufficient code so that we can as 
well. Optimally, see the existing VersionGC tests and see why they pass (what's 
the difference compared to your setup?)

3) The exception indicates that a connection wasn't properly closed; setting 
the system property 
"org.apache.jackrabbit.oak.plugins.document.rdb.RDBConnectionHandler.CHECKCONNECTIONONCLOSE"
 to "true" might give you better diagnostics.


was (Author: reschke):
1) 1.10.2 is outdated; please use the latest release from the current 
maintenance branch 1.22 (1.22.3 right now)

2) If you can reproduce this, please attach sufficient code so that we can as 
well. Optimally, see the existing VersionGC tests and see why they pass (what's 
the difference to you setup?)

3) The exception indicates that a connection wasn't properly closed; setting 
the system property 
"org.apache.jackrabbit.oak.plugins.document.rdb.RDBConnectionHandler.CHECKCONNECTIONONCLOSE"
 to "true" might give you better diagnostics.

> Version garbage collector is not working if documents exceeded 100000
> -
>
> Key: OAK-8912
> URL: https://issues.apache.org/jira/browse/OAK-8912
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Ankush Nagapure
>Priority: Major
> Attachments: exception.txt
>
>
> Oak version - 1.10.2, PostgreSQL 10.7 (10.7), using driver: PostgreSQL JDBC 
> Driver 42.2.2 (42.2).
> *Actual :*
> After the below code runs, if the document collect limit (100000) is exceeded, 
> it throws the exception attached in exception.txt:
> {noformat}
> public static void runVersionGC() {
>     log.info("Running garbage collection for DocumentNodeStore");
>     try {
>         final VersionGCOptions versionGCOptions = new VersionGCOptions();
>         versionGCOptions.withCollectLimit(100);
>         documentNodeStore.getVersionGarbageCollector().setOptions(versionGCOptions);
>         log.info("versionGCOptions.collectLimit : " + versionGCOptions.collectLimit);
>         documentNodeStore.getVersionGarbageCollector().gc(0, TimeUnit.DAYS);
>     } catch (final DocumentStoreException e) {
>         //
>     }
> }
> {noformat}
> Below is the code to create the repository and get the documentNodeStore 
> object for version garbage collection:
> {noformat}
> private static Repository createRepo(final Map<String, String> dbDetails)
>         throws DataStoreException {
>     try {
>         final RDBOptions options = new RDBOptions()
>                 .tablePrefix(dbDetails.get(DB_TABLE_PREFIX))
>                 .dropTablesOnClose(false);
>         final DataSource ds = RDBDataSourceFactory.forJdbcUrl(
>                 dbDetails.get("dbURL"),
>                 dbDetails.get("dbUser"),
>                 dbDetails.get("dbPassword"));
>         final Properties properties = buildS3Properties(dbDetails);
>         final S3DataStore s3DataStore = buildS3DataStore(properties);
>         final DataStoreBlobStore dataStoreBlobStore = new DataStoreBlobStore(s3DataStore);
>         final Whiteboard wb = new DefaultWhiteboard();
>         bapRegistration = wb.register(BlobAccessProvider.class,
>                 (BlobAccessProvider) dataStoreBlobStore, properties);
>         documentNodeStore = new RDBDocumentNodeStoreBuilder()
>                 .setBlobStore(dataStoreBlobStore)
>                 .setBundlingDisabled(true)
>                 .setRDBConnection(ds, options)
>                 .build();
> {noformat}

[jira] [Commented] (OAK-8912) Version garbage collector is not working if documents exceeded 100000

2020-05-06 Thread Julian Reschke (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100663#comment-17100663
 ] 

Julian Reschke commented on OAK-8912:
-

1) 1.10.2 is outdated; please use the latest release from the current 
maintenance branch 1.22 (1.22.3 right now)

2) If you can reproduce this, please attach sufficient code so that we can as 
well. Optimally, see the existing VersionGC tests and see why they pass (what's 
the difference compared to your setup?)

3) The exception indicates that a connection wasn't properly closed; setting 
the system property 
"org.apache.jackrabbit.oak.plugins.document.rdb.RDBConnectionHandler.CHECKCONNECTIONONCLOSE"
 to "true" might give you better diagnostics.

> Version garbage collector is not working if documents exceeded 100000
> -
>
> Key: OAK-8912
> URL: https://issues.apache.org/jira/browse/OAK-8912
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Ankush Nagapure
>Priority: Major
> Attachments: exception.txt
>
>
> Oak version - 1.10.2, PostgreSQL 10.7 (10.7), using driver: PostgreSQL JDBC 
> Driver 42.2.2 (42.2).
> *Actual :*
> After the below code runs, if the document collect limit (100000) is exceeded, 
> it throws the exception attached in exception.txt:
> {noformat}
> public static void runVersionGC() {
>     log.info("Running garbage collection for DocumentNodeStore");
>     try {
>         final VersionGCOptions versionGCOptions = new VersionGCOptions();
>         versionGCOptions.withCollectLimit(100);
>         documentNodeStore.getVersionGarbageCollector().setOptions(versionGCOptions);
>         log.info("versionGCOptions.collectLimit : " + versionGCOptions.collectLimit);
>         documentNodeStore.getVersionGarbageCollector().gc(0, TimeUnit.DAYS);
>     } catch (final DocumentStoreException e) {
>         //
>     }
> }
> {noformat}
> Below is the code to create the repository and get the documentNodeStore 
> object for version garbage collection:
> {noformat}
> private static Repository createRepo(final Map<String, String> dbDetails)
>         throws DataStoreException {
>     try {
>         final RDBOptions options = new RDBOptions()
>                 .tablePrefix(dbDetails.get(DB_TABLE_PREFIX))
>                 .dropTablesOnClose(false);
>         final DataSource ds = RDBDataSourceFactory.forJdbcUrl(
>                 dbDetails.get("dbURL"),
>                 dbDetails.get("dbUser"),
>                 dbDetails.get("dbPassword"));
>         final Properties properties = buildS3Properties(dbDetails);
>         final S3DataStore s3DataStore = buildS3DataStore(properties);
>         final DataStoreBlobStore dataStoreBlobStore = new DataStoreBlobStore(s3DataStore);
>         final Whiteboard wb = new DefaultWhiteboard();
>         bapRegistration = wb.register(BlobAccessProvider.class,
>                 (BlobAccessProvider) dataStoreBlobStore, properties);
>         documentNodeStore = new RDBDocumentNodeStoreBuilder()
>                 .setBlobStore(dataStoreBlobStore)
>                 .setBundlingDisabled(true)
>                 .setRDBConnection(ds, options)
>                 .build();
>         repository = new Jcr(documentNodeStore).with(wb).createRepository();
>         return repository;
>     } catch (final DataStoreException e) {
>         log.error("S3 Connection could not be created." + e);
>         throw new DataStoreException("S3 Connection could not be created");
>     }
> }
> {noformat}
> Even after setting collectLimit in code, it is still taking 100000 as the limit.
> *Expected :*

[jira] [Updated] (OAK-8912) Version garbage collector is not working if documents exceeded 100000

2020-05-06 Thread Julian Reschke (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke updated OAK-8912:

Component/s: documentmk

> Version garbage collector is not working if documents exceeded 100000
> -
>
> Key: OAK-8912
> URL: https://issues.apache.org/jira/browse/OAK-8912
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: documentmk
>Reporter: Ankush Nagapure
>Priority: Major
> Attachments: exception.txt
>
>
> Oak version - 1.10.2, PostgreSQL 10.7 (10.7), using driver: PostgreSQL JDBC 
> Driver 42.2.2 (42.2).
> *Actual :*
> After the below code runs, if the document collect limit (100000) is exceeded, 
> it throws the exception attached in exception.txt:
> {noformat}
> public static void runVersionGC() {
>     log.info("Running garbage collection for DocumentNodeStore");
>     try {
>         final VersionGCOptions versionGCOptions = new VersionGCOptions();
>         versionGCOptions.withCollectLimit(100);
>         documentNodeStore.getVersionGarbageCollector().setOptions(versionGCOptions);
>         log.info("versionGCOptions.collectLimit : " + versionGCOptions.collectLimit);
>         documentNodeStore.getVersionGarbageCollector().gc(0, TimeUnit.DAYS);
>     } catch (final DocumentStoreException e) {
>         //
>     }
> }
> {noformat}
> Below is the code to create the repository and get the documentNodeStore 
> object for version garbage collection:
> {noformat}
> private static Repository createRepo(final Map<String, String> dbDetails)
>         throws DataStoreException {
>     try {
>         final RDBOptions options = new RDBOptions()
>                 .tablePrefix(dbDetails.get(DB_TABLE_PREFIX))
>                 .dropTablesOnClose(false);
>         final DataSource ds = RDBDataSourceFactory.forJdbcUrl(
>                 dbDetails.get("dbURL"),
>                 dbDetails.get("dbUser"),
>                 dbDetails.get("dbPassword"));
>         final Properties properties = buildS3Properties(dbDetails);
>         final S3DataStore s3DataStore = buildS3DataStore(properties);
>         final DataStoreBlobStore dataStoreBlobStore = new DataStoreBlobStore(s3DataStore);
>         final Whiteboard wb = new DefaultWhiteboard();
>         bapRegistration = wb.register(BlobAccessProvider.class,
>                 (BlobAccessProvider) dataStoreBlobStore, properties);
>         documentNodeStore = new RDBDocumentNodeStoreBuilder()
>                 .setBlobStore(dataStoreBlobStore)
>                 .setBundlingDisabled(true)
>                 .setRDBConnection(ds, options)
>                 .build();
>         repository = new Jcr(documentNodeStore).with(wb).createRepository();
>         return repository;
>     } catch (final DataStoreException e) {
>         log.error("S3 Connection could not be created." + e);
>         throw new DataStoreException("S3 Connection could not be created");
>     }
> }
> {noformat}
> Even after setting collectLimit in code, it is still taking 100000 as the limit.
> *Expected :*
> versionGCOptions.collectLimit should be settable to a custom value to avoid 
> DocumentStoreException, or there should be another way to avoid 
> DocumentStoreException when documents exceed 100000.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-8912) Version garbage collector is not working if documents exceeded 100000

2020-05-06 Thread Ankush Nagapure (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100624#comment-17100624
 ] 

Ankush Nagapure commented on OAK-8912:
--

Any updates on this bug?

> Version garbage collector is not working if documents exceeded 100000
> -
>
> Key: OAK-8912
> URL: https://issues.apache.org/jira/browse/OAK-8912
> Project: Jackrabbit Oak
>  Issue Type: Bug
>Reporter: Ankush Nagapure
>Priority: Major
> Attachments: exception.txt
>
>
> Oak version - 1.10.2, PostgreSQL 10.7 (10.7), using driver: PostgreSQL JDBC 
> Driver 42.2.2 (42.2).
> *Actual :*
> After the below code runs, if the document collect limit (100000) is exceeded, 
> it throws the exception attached in exception.txt:
> {noformat}
> public static void runVersionGC() {
>     log.info("Running garbage collection for DocumentNodeStore");
>     try {
>         final VersionGCOptions versionGCOptions = new VersionGCOptions();
>         versionGCOptions.withCollectLimit(100);
>         documentNodeStore.getVersionGarbageCollector().setOptions(versionGCOptions);
>         log.info("versionGCOptions.collectLimit : " + versionGCOptions.collectLimit);
>         documentNodeStore.getVersionGarbageCollector().gc(0, TimeUnit.DAYS);
>     } catch (final DocumentStoreException e) {
>         //
>     }
> }
> {noformat}
> Below is the code to create the repository and get the documentNodeStore 
> object for version garbage collection:
> {noformat}
> private static Repository createRepo(final Map<String, String> dbDetails)
>         throws DataStoreException {
>     try {
>         final RDBOptions options = new RDBOptions()
>                 .tablePrefix(dbDetails.get(DB_TABLE_PREFIX))
>                 .dropTablesOnClose(false);
>         final DataSource ds = RDBDataSourceFactory.forJdbcUrl(
>                 dbDetails.get("dbURL"),
>                 dbDetails.get("dbUser"),
>                 dbDetails.get("dbPassword"));
>         final Properties properties = buildS3Properties(dbDetails);
>         final S3DataStore s3DataStore = buildS3DataStore(properties);
>         final DataStoreBlobStore dataStoreBlobStore = new DataStoreBlobStore(s3DataStore);
>         final Whiteboard wb = new DefaultWhiteboard();
>         bapRegistration = wb.register(BlobAccessProvider.class,
>                 (BlobAccessProvider) dataStoreBlobStore, properties);
>         documentNodeStore = new RDBDocumentNodeStoreBuilder()
>                 .setBlobStore(dataStoreBlobStore)
>                 .setBundlingDisabled(true)
>                 .setRDBConnection(ds, options)
>                 .build();
>         repository = new Jcr(documentNodeStore).with(wb).createRepository();
>         return repository;
>     } catch (final DataStoreException e) {
>         log.error("S3 Connection could not be created." + e);
>         throw new DataStoreException("S3 Connection could not be created");
>     }
> }
> {noformat}
> Even after setting collectLimit in code, it is still taking 100000 as the limit.
> *Expected :*
> versionGCOptions.collectLimit should be settable to a custom value to avoid 
> DocumentStoreException, or there should be another way to avoid 
> DocumentStoreException when documents exceed 100000.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (OAK-8912) Version garbage collector is not working if documents exceeded 100000

2020-05-06 Thread Ankush Nagapure (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-8912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankush Nagapure updated OAK-8912:
-
Description: 
Oak version - 1.10.2, PostgreSQL 10.7 (10.7), using driver: PostgreSQL JDBC 
Driver 42.2.2 (42.2).

*Actual :*

After the below code runs, if the document collect limit (100000) is exceeded, it 
throws the exception attached in exception.txt:

{noformat}
public static void runVersionGC() {
    log.info("Running garbage collection for DocumentNodeStore");
    try {
        final VersionGCOptions versionGCOptions = new VersionGCOptions();
        versionGCOptions.withCollectLimit(100);
        documentNodeStore.getVersionGarbageCollector().setOptions(versionGCOptions);
        log.info("versionGCOptions.collectLimit : " + versionGCOptions.collectLimit);
        documentNodeStore.getVersionGarbageCollector().gc(0, TimeUnit.DAYS);
    } catch (final DocumentStoreException e) {
        //
    }
}
{noformat}

Below is the code to create the repository and get the documentNodeStore object 
for version garbage collection:

{noformat}
private static Repository createRepo(final Map<String, String> dbDetails)
        throws DataStoreException {
    try {
        final RDBOptions options = new RDBOptions()
                .tablePrefix(dbDetails.get(DB_TABLE_PREFIX))
                .dropTablesOnClose(false);
        final DataSource ds = RDBDataSourceFactory.forJdbcUrl(
                dbDetails.get("dbURL"),
                dbDetails.get("dbUser"),
                dbDetails.get("dbPassword"));
        final Properties properties = buildS3Properties(dbDetails);
        final S3DataStore s3DataStore = buildS3DataStore(properties);
        final DataStoreBlobStore dataStoreBlobStore = new DataStoreBlobStore(s3DataStore);
        final Whiteboard wb = new DefaultWhiteboard();
        bapRegistration = wb.register(BlobAccessProvider.class,
                (BlobAccessProvider) dataStoreBlobStore, properties);
        documentNodeStore = new RDBDocumentNodeStoreBuilder()
                .setBlobStore(dataStoreBlobStore)
                .setBundlingDisabled(true)
                .setRDBConnection(ds, options)
                .build();
        repository = new Jcr(documentNodeStore).with(wb).createRepository();
        return repository;
    } catch (final DataStoreException e) {
        log.error("S3 Connection could not be created." + e);
        throw new DataStoreException("S3 Connection could not be created");
    }
}
{noformat}

Even after setting collectLimit in code, it is still taking 100000 as the limit.

*Expected :*

versionGCOptions.collectLimit should be settable to a custom value to avoid 
DocumentStoreException, or there should be another way to avoid 
DocumentStoreException when documents exceed 100000.

  was:
Oak version - 1.10.2, PostgreSQL 10.7 (10.7), using driver: PostgreSQL JDBC 
Driver 42.2.2 (42.2).

*Actual :*

After the below code runs, if the document collect limit (100000) is exceeded, it 
throws the exception attached in exception.txt:

{noformat}
public static void runVersionGC() {
    log.info("Running garbage collection for DocumentNodeStore");
    try {
        final VersionGCOptions versionGCOptions = new VersionGCOptions();
        versionGCOptions.withCollectLimit(100);
        documentNodeStore.getVersionGarbageCollector().setOptions(versionGCOptions);
        log.info("versionGCOptions.collectLimit : " + versionGCOptions.collectLimit);
        documentNodeStore.getVersionGarbageCollector().gc(0, TimeUnit.DAYS);
    } catch (final Documen
{noformat}

[jira] [Updated] (OAK-9053) Reindexing Strategy for ES indexes

2020-05-06 Thread Amrit Verma (Jira)


 [ 
https://issues.apache.org/jira/browse/OAK-9053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Verma updated OAK-9053:
-
Description: 
There are two approaches for handling re-indexing of ES indexes.

The simpler strategy would be to:
 * create the new index
 * move writes and reads to the new index
 * delete old index

A more sophisticated strategy could:
 * create the new index
 * move writes to the new index
 * reads will continue to use the old index until the new one catches up
 * when the new one is in sync, move reads to the new index & delete the old one

Both strategies can be implemented using Aliases in Elasticsearch to avoid race 
conditions. To implement the second solution we need something that tells us 
when the new index has caught up with the initial load.
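For illustration, an alias switch of the kind both strategies rely on; this is a hedged sketch using the Elasticsearch high-level REST client, and the index and alias names are made up. Adding the new index to the alias and removing the old one in a single request makes the switch atomic, which is what avoids the race condition.
{noformat}
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

// client: an already configured RestHighLevelClient
void switchAlias(RestHighLevelClient client) throws java.io.IOException {
    IndicesAliasesRequest swap = new IndicesAliasesRequest();
    // Both actions are applied atomically by Elasticsearch.
    swap.addAliasAction(AliasActions.add().index("oak-index-v2").alias("oak-index"));
    swap.addAliasAction(AliasActions.remove().index("oak-index-v1").alias("oak-index"));
    client.indices().updateAliases(swap, RequestOptions.DEFAULT);
}
{noformat}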

  was:
Index names in elastic follow the pattern *-*

This has to be changed in order to support multi-tenancy.

We need to pass a parameter with the customer ID so we can create indexes like 
*.-*.  ES indexes cannot be longer than 255 
bytes and must comply with the following criteria 
[https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html#indices-create-api-path-params]

 

We also need to decide what to do on re-index. The simpler strategy would be to:
 * create the new index
 * move writes and reads to the new index
 * delete old index

A more sophisticated strategy could:
 * create the new index
 * move writes to the new index
 * reads will continue to use the old index until the new one catches up
 * when the new one is in sync, move reads to the new index & delete the old one

Both strategies can be implemented using Aliases in Elasticsearch to avoid race 
conditions. To implement the second solution we need something that tells us 
when the new index has caught up with the initial load.


> Reindexing Strategy for ES indexes
> --
>
> Key: OAK-9053
> URL: https://issues.apache.org/jira/browse/OAK-9053
> Project: Jackrabbit Oak
>  Issue Type: Task
>  Components: indexing
>Reporter: Amrit Verma
>Priority: Major
> Fix For: 1.28.0
>
>
> There are two approaches for handling re-indexing of ES indexes.
> The simpler strategy would be to:
>  * create the new index
>  * move writes and reads to the new index
>  * delete old index
> A more sophisticated strategy could:
>  * create the new index
>  * move writes to the new index
>  * reads will continue to use the old index until the new one catches up
>  * when the new one is in sync, move reads to the new index & delete the old 
> one
> Both strategies can be implemented using Aliases in Elasticsearch to avoid 
> race conditions. To implement the second solution we need something that 
> tells us when the new index has caught up with the initial load.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (OAK-9024) oak-solr-osgi imports org.slf4j.impl

2020-05-06 Thread Julian Sedding (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100554#comment-17100554
 ] 

Julian Sedding edited comment on OAK-9024 at 5/6/20, 7:55 AM:
--

[~baedke] I performed a simple test. In the pom of the oak-solr-osgi module I 
prevent the zookeeper dependency from being embedded by changing the dependency 
scope to "provided". Then I created a diff of the Import-Package statement of 
the oak-solr-osgi bundle before and after this change.
{noformat}
14d13
<   com.ibm.security.krb5.internal {resolution:=optional}
42,44d40
<   javax.security.auth
<   javax.security.auth.callback   
<   javax.security.auth.kerberos   
46d41
<   javax.security.auth.spi
48d42
<   javax.security.sasl
135d128
<   org.apache.log4j.jmx   {resolution:=optional, 
version=[1.2,2)}
156a150,154
>   org.apache.zookeeper   {version=[3.4,4)}
>   org.apache.zookeeper.data  {version=[3.4,4)}
>   org.apache.zookeeper.server{version=[3.4,4)}
>   org.apache.zookeeper.server.auth   {version=[3.4,4)}
>   org.apache.zookeeper.server.quorum {version=[3.4,4)}
166,170d163
<   org.jboss.netty.bootstrap  {resolution:=optional, 
version=[3.7,4)}
<   org.jboss.netty.buffer {resolution:=optional, 
version=[3.7,4)}
<   org.jboss.netty.channel{resolution:=optional, 
version=[3.7,4)}
<   org.jboss.netty.channel.group  {resolution:=optional, 
version=[3.7,4)}
<   org.jboss.netty.channel.socket.nio {resolution:=optional, 
version=[3.7,4)}
187d179
<   sun.security.krb5  {resolution:=optional}
{noformat}
As you can see the zookeeper packages are now imported (because they are not 
provided within the bundle), and some other packages that are only used by 
zookeeper are no longer imported. If zookeeper with its org.slf4j.slf4j-log4j12 
dependency was the cause for {{org.slf4j.impl}} to be imported, then we would 
expect it to disappear when zookeeper is no longer embedded.

This is not the case, ergo zookeeper is not the root cause you're looking for. 
As I outlined before, the root cause is Solr itself.

> Note that IIUC embedding dependencies of oak-solr-core is one of the points 
> of oak-solr-osgi.

I don't think embedding a logging binding was ever the goal of oak-solr-osgi. 
If you are interested in the original reasoning why this bundle was created, 
the discussion [https://markmail.org/thread/5xsyx5l4c6euqtt2] may be 
interesting.

 

EDIT: The list of imports for each of the two bundles was generated using {{bnd 
print -i <bundle>}} and the diff was generated by diffing the unedited outputs.


was (Author: jsedding):
[~baedke] I performed a simple test. In the pom of the oak-solr-osgi module I 
prevent the zookeeper dependency from being embedded by changing the dependency 
scope to "provided". Then I created a diff of the Import-Package statement of 
the oak-solr-osgi bundle before and after this change.

{noformat}
14d13
<   com.ibm.security.krb5.internal {resolution:=optional}
42,44d40
<   javax.security.auth
<   javax.security.auth.callback   
<   javax.security.auth.kerberos   
46d41
<   javax.security.auth.spi
48d42
<   javax.security.sasl
135d128
<   org.apache.log4j.jmx   {resolution:=optional, 
version=[1.2,2)}
156a150,154
>   org.apache.zookeeper   {version=[3.4,4)}
>   org.apache.zookeeper.data  {version=[3.4,4)}
>   org.apache.zookeeper.server{version=[3.4,4)}
>   org.apache.zookeeper.server.auth   {version=[3.4,4)}
>   org.apache.zookeeper.server.quorum {version=[3.4,4)}
166,170d163
<   org.jboss.netty.bootstrap  {resolution:=optional, 
version=[3.7,4)}
<   org.jboss.netty.buffer {resolution:=optional, 
version=[3.7,4)}
<   org.jboss.netty.channel{resolution:=optional, 
version=[3.7,4)}
<   org.jboss.netty.channel.group  {resolution:=optional, 
version=[3.7,4)}
<   org.jboss.netty.channel.socket.nio {resolution:=optional, 
version=[3.7,4)}
187d179
<   sun.security.krb5  {resolution:=optional}
{noformat}

As you can see the zookeeper packages are now imported (because they are not 
provided within the bundle), and some other packages that are only used by 
zookeeper are no longer imported. If zookeeper with its org.slf4j.slf4j-log4j12 
dependency was the cause for {{org.slf4j.impl}} to be imported, then we would 
expect it to disappear when zookeeper is no longer embedded.

This is not the case, ergo zookeeper is not the root cause you're looking for. 
As I outlined before, the root cause is Solr itself.

> Note that IIUC embedding dependencies of oak-solr-core is one o

[jira] [Created] (OAK-9053) Reindexing Strategy for ES indexes

2020-05-06 Thread Amrit Verma (Jira)
Amrit Verma created OAK-9053:


 Summary: Reindexing Strategy for ES indexes
 Key: OAK-9053
 URL: https://issues.apache.org/jira/browse/OAK-9053
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: indexing
Reporter: Amrit Verma
 Fix For: 1.28.0


Index names in elastic follow the pattern *-*

This has to be changed in order to support multi-tenancy.

We need to pass a parameter with the customer ID so we can create indexes like 
*.-*.  ES indexes cannot be longer than 255 
bytes and must comply with the following criteria 
[https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html#indices-create-api-path-params]

 

We also need to decide what to do on re-index. The simpler strategy would be to:
 * create the new index
 * move writes and reads to the new index
 * delete old index

A more sophisticated strategy could:
 * create the new index
 * move writes to the new index
 * reads will continue to use the old index until the new one catches up
 * when the new one is in sync, move reads to the new index & delete the old one

Both strategies can be implemented using Aliases in Elasticsearch to avoid race 
conditions. To implement the second solution we need something that tells us 
when the new index has caught up with the initial load.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (OAK-9024) oak-solr-osgi imports org.slf4j.impl

2020-05-06 Thread Julian Sedding (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100554#comment-17100554
 ] 

Julian Sedding commented on OAK-9024:
-

[~baedke] I performed a simple test. In the pom of the oak-solr-osgi module I 
prevent the zookeeper dependency from being embedded by changing the dependency 
scope to "provided". Then I created a diff of the Import-Package statement of 
the oak-solr-osgi bundle before and after this change.

{noformat}
14d13
<   com.ibm.security.krb5.internal {resolution:=optional}
42,44d40
<   javax.security.auth
<   javax.security.auth.callback   
<   javax.security.auth.kerberos   
46d41
<   javax.security.auth.spi
48d42
<   javax.security.sasl
135d128
<   org.apache.log4j.jmx   {resolution:=optional, 
version=[1.2,2)}
156a150,154
>   org.apache.zookeeper   {version=[3.4,4)}
>   org.apache.zookeeper.data  {version=[3.4,4)}
>   org.apache.zookeeper.server{version=[3.4,4)}
>   org.apache.zookeeper.server.auth   {version=[3.4,4)}
>   org.apache.zookeeper.server.quorum {version=[3.4,4)}
166,170d163
<   org.jboss.netty.bootstrap  {resolution:=optional, 
version=[3.7,4)}
<   org.jboss.netty.buffer {resolution:=optional, 
version=[3.7,4)}
<   org.jboss.netty.channel{resolution:=optional, 
version=[3.7,4)}
<   org.jboss.netty.channel.group  {resolution:=optional, 
version=[3.7,4)}
<   org.jboss.netty.channel.socket.nio {resolution:=optional, 
version=[3.7,4)}
187d179
<   sun.security.krb5  {resolution:=optional}
{noformat}

As you can see the zookeeper packages are now imported (because they are not 
provided within the bundle), and some other packages that are only used by 
zookeeper are no longer imported. If zookeeper with its org.slf4j.slf4j-log4j12 
dependency was the cause for {{org.slf4j.impl}} to be imported, then we would 
expect it to disappear when zookeeper is no longer embedded.

This is not the case, ergo zookeeper is not the root cause you're looking for. 
As I outlined before, the root cause is Solr itself.

> Note that IIUC embedding dependencies of oak-solr-core is one of the points 
> of oak-solr-osgi.

I don't think embedding a logging binding was ever the goal of oak-solr-osgi. 
If you are interested in the original reasoning why this bundle was created, 
the discussion https://markmail.org/thread/5xsyx5l4c6euqtt2 may be interesting.

> oak-solr-osgi imports org.slf4j.impl
> 
>
> Key: OAK-9024
> URL: https://issues.apache.org/jira/browse/OAK-9024
> Project: Jackrabbit Oak
>  Issue Type: Bug
>  Components: solr
>Reporter: Julian Reschke
>Assignee: Manfred Baedke
>Priority: Minor
> Fix For: 1.28.0
>
> Attachments: OAK-9024.patch
>
>
> From the manifest:
> {{org.slf4j.impl;version="[1.6,2)"}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)