[jira] [Commented] (OAK-7182) Make it possible to update Guava
[ https://issues.apache.org/jira/browse/OAK-7182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17146780#comment-17146780 ]

Marcel Wagner commented on OAK-7182:
------------------------------------

Any progress on this? Our application embeds the Oak repository, and other libraries we use also depend on Guava, but on newer versions. We now always get:

java.lang.NoSuchMethodError: com.google.common.util.concurrent.MoreExecutors.sameThreadExecutor()Lcom/google/common/util/concurrent/ListeningExecutorService;

This method was removed in Guava 21, released in 2017; Oak is still using Guava 15 from 2013. Any chance of getting the Guava dependency updated to a newer version?

> Make it possible to update Guava
> --------------------------------
>
>                 Key: OAK-7182
>                 URL: https://issues.apache.org/jira/browse/OAK-7182
>             Project: Jackrabbit Oak
>          Issue Type: Wish
>            Reporter: Julian Reschke
>            Priority: Minor
>         Attachments: GuavaTests.java, OAK-7182-guava-21-3.diff, OAK-7182-guava-21-4.diff, OAK-7182-guava-21.diff, OAK-7182-guava-23.6.1.diff, guava.diff
>
> We currently rely on Guava 15, and this affects all users of Oak because they essentially need to use the same version.
> This is an overall issue to investigate what would need to be done in Oak in order to make updates possible.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
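The migration behind the NoSuchMethodError above can be sketched as follows. This is a minimal illustration, assuming Guava 21+ on the classpath; `sameThreadExecutor()` was deprecated in Guava 18 and removed in Guava 21, with `newDirectExecutorService()` (same return type) and `directExecutor()` (plain `Executor`, cheaper) as its documented replacements:

```java
import java.util.concurrent.Executor;

import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;

public class SameThreadExecutorMigration {
    public static void main(String[] args) {
        // Before (Guava <= 20, removed in 21):
        // ListeningExecutorService service = MoreExecutors.sameThreadExecutor();

        // After: same behavior under the new name
        ListeningExecutorService service = MoreExecutors.newDirectExecutorService();

        // If only a plain Executor is needed, directExecutor() avoids the
        // lifecycle overhead of a full ExecutorService
        Executor executor = MoreExecutors.directExecutor();

        // Both run submitted tasks synchronously on the calling thread
        executor.execute(() -> System.out.println("ran on " + Thread.currentThread().getName()));
        service.shutdown();
    }
}
```

This only removes the binary incompatibility at the call site; code compiled against Guava 15 elsewhere on the classpath still needs the same treatment.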
[jira] [Commented] (OAK-9114) Build Jackrabbit Oak #2793 failed
[ https://issues.apache.org/jira/browse/OAK-9114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17146546#comment-17146546 ]

Hudson commented on OAK-9114:
-----------------------------

Build is still failing.
Failed run: [Jackrabbit Oak #2804|https://builds.apache.org/job/Jackrabbit%20Oak/2804/]
[console log|https://builds.apache.org/job/Jackrabbit%20Oak/2804/console]

> Build Jackrabbit Oak #2793 failed
> ---------------------------------
>
>                 Key: OAK-9114
>                 URL: https://issues.apache.org/jira/browse/OAK-9114
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: continuous integration
>            Reporter: Hudson
>            Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #2793 has failed.
> First failed run: [Jackrabbit Oak #2793|https://builds.apache.org/job/Jackrabbit%20Oak/2793/]
> [console log|https://builds.apache.org/job/Jackrabbit%20Oak/2793/console]

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Resolved] (OAK-9123) Error: Document contains at least one immense term
[ https://issues.apache.org/jira/browse/OAK-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fabrizio Fortino resolved OAK-9123.
-----------------------------------
    Resolution: Fixed

Fixed with revision 1879243

> Error: Document contains at least one immense term
> --------------------------------------------------
>
>                 Key: OAK-9123
>                 URL: https://issues.apache.org/jira/browse/OAK-9123
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: elastic-search, indexing, search
>            Reporter: Fabrizio Fortino
>            Assignee: Fabrizio Fortino
>            Priority: Major
>
> {code:java}
> 11:35:09.400 [I/O dispatcher 1] ERROR o.a.j.o.p.i.e.i.ElasticIndexWriter - Bulk item with id /wikipedia/76/84/National Palace (Mexico) failed
> org.elasticsearch.ElasticsearchException: Elasticsearch exception [type=illegal_argument_exception, reason=Document contains at least one immense term in field="text.keyword" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[123, 123, 73, 110, 102, 111, 98, 111, 120, 32, 104, 105, 115, 116, 111, 114, 105, 99, 32, 98, 117, 105, 108, 100, 105, 110, 103, 10, 124, 110]...', original message: bytes can be at most 32766 in length; got 33409]
> 	at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:496)
> 	at org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:407)
> 	at org.elasticsearch.action.bulk.BulkItemResponse.fromXContent(BulkItemResponse.java:138)
> 	at org.elasticsearch.action.bulk.BulkResponse.fromXContent(BulkResponse.java:196)
> 	at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1888)
> 	at org.elasticsearch.client.RestHighLevelClient.lambda$performRequestAsyncAndParseEntity$10(RestHighLevelClient.java:1676)
> 	at org.elasticsearch.client.RestHighLevelClient$1.onSuccess(RestHighLevelClient.java:1758)
> 	at org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:590)
> 	at org.elasticsearch.client.RestClient$1.completed(RestClient.java:333)
> 	at org.elasticsearch.client.RestClient$1.completed(RestClient.java:327)
> 	at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
> 	at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:181)
> 	at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
> 	at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
> 	at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
> 	at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
> 	at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
> 	at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
> 	at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
> 	at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
> 	at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
> 	at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
> 	at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
> 	at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: org.elasticsearch.ElasticsearchException: Elasticsearch exception [type=max_bytes_length_exceeded_exception, reason=bytes can be at most 32766 in length; got 33409]
> 	at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:496)
> 	at org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:407)
> 	at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:437)
> 	... 24 common frames omitted{code}
> This happens with huge keyword fields, since Lucene does not allow terms longer than 32766 bytes.
> See https://discuss.elastic.co/t/error-document-contains-at-least-one-immense-term-in-field/66486
> We have decided to always create keyword fields to remove the need to specify properties like "ordered" or "facet"; this way, every field can be sorted on or used as a facet.
> In this specific case the keyword field is not needed at all, but it would be hard to decide when to include it or not. To solve this we are going to use `ignore_above=256`, so huge keyword fields will be
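The `ignore_above` fix mentioned above can be expressed in an index mapping. This is a minimal sketch, not Oak's actual mapping; the field layout mirrors the `text.keyword` multi-field from the error, and with `ignore_above: 256` the keyword sub-field silently skips values longer than 256 characters instead of failing the whole bulk item:

```json
{
  "mappings": {
    "properties": {
      "text": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      }
    }
  }
}
```

Note that `ignore_above` counts characters, not bytes; 256 characters is comfortably below Lucene's 32766-byte term limit even for multi-byte UTF-8 text. Long values remain in `_source` and are still searchable via the analyzed `text` field, they just stop being sortable/facetable through the keyword sub-field.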
[jira] [Commented] (OAK-9114) Build Jackrabbit Oak #2793 failed
[ https://issues.apache.org/jira/browse/OAK-9114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17146426#comment-17146426 ]

Hudson commented on OAK-9114:
-----------------------------

Build is still failing.
Failed run: [Jackrabbit Oak #2803|https://builds.apache.org/job/Jackrabbit%20Oak/2803/]
[console log|https://builds.apache.org/job/Jackrabbit%20Oak/2803/console]

> Build Jackrabbit Oak #2793 failed
> ---------------------------------
>
>                 Key: OAK-9114
>                 URL: https://issues.apache.org/jira/browse/OAK-9114
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: continuous integration
>            Reporter: Hudson
>            Priority: Major
>
> No description is provided
> The build Jackrabbit Oak #2793 has failed.
> First failed run: [Jackrabbit Oak #2793|https://builds.apache.org/job/Jackrabbit%20Oak/2793/]
> [console log|https://builds.apache.org/job/Jackrabbit%20Oak/2793/console]

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (OAK-9125) Oak-run indexing with MongoDB logs errors in a background thread
[ https://issues.apache.org/jira/browse/OAK-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas Mueller updated OAK-9125:
--------------------------------
    Affects Version/s: 1.22.3
                       1.8.22

> Oak-run indexing with MongoDB logs errors in a background thread
> ----------------------------------------------------------------
>
>                 Key: OAK-9125
>                 URL: https://issues.apache.org/jira/browse/OAK-9125
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: indexing, oak-run
>    Affects Versions: 1.22.3, 1.8.22
>            Reporter: Thomas Mueller
>            Priority: Major
>
> The following error message can be logged in a background thread:
> {noformat}
> Caused by: com.mongodb.MongoQueryException:
> Query failed with error code 13 and error message
> 'not authorized on ... to execute command
> { find: "nodes", filter: { _id: "0:/" }, limit: 1, singleBatch: true }'
> on server ... at
> com.mongodb.operation.FindOperation$1.call(FindOperation.java:722)
> {noformat}
> Those error messages do not have a negative impact on the index command; however, they are annoying.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (OAK-9125) Oak-run indexing with MongoDB logs errors in a background thread
[ https://issues.apache.org/jira/browse/OAK-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas Mueller updated OAK-9125:
--------------------------------
    Component/s: oak-run
                 indexing

> Oak-run indexing with MongoDB logs errors in a background thread
> ----------------------------------------------------------------
>
>                 Key: OAK-9125
>                 URL: https://issues.apache.org/jira/browse/OAK-9125
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: indexing, oak-run
>            Reporter: Thomas Mueller
>            Priority: Major
>
> The following error message can be logged in a background thread:
> {noformat}
> Caused by: com.mongodb.MongoQueryException:
> Query failed with error code 13 and error message
> 'not authorized on ... to execute command
> { find: "nodes", filter: { _id: "0:/" }, limit: 1, singleBatch: true }'
> on server ... at
> com.mongodb.operation.FindOperation$1.call(FindOperation.java:722)
> {noformat}
> Those error messages do not have a negative impact on the index command; however, they are annoying.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (OAK-9125) Oak-run indexing with MongoDB logs errors in a background thread
Thomas Mueller created OAK-9125:
-----------------------------------

             Summary: Oak-run indexing with MongoDB logs errors in a background thread
                 Key: OAK-9125
                 URL: https://issues.apache.org/jira/browse/OAK-9125
             Project: Jackrabbit Oak
          Issue Type: Improvement
            Reporter: Thomas Mueller

The following error message can be logged in a background thread:

{noformat}
Caused by: com.mongodb.MongoQueryException:
Query failed with error code 13 and error message
'not authorized on ... to execute command
{ find: "nodes", filter: { _id: "0:/" }, limit: 1, singleBatch: true }'
on server ... at
com.mongodb.operation.FindOperation$1.call(FindOperation.java:722)
{noformat}

Those error messages do not have a negative impact on the index command; however, they are annoying.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
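The logged command can be sketched with the MongoDB Java sync driver. This is a hypothetical reproduction, not oak-run's actual code; the connection string and database name "oak" are assumptions, and error code 13 is MongoDB's "Unauthorized" code, so the call fails exactly when the connecting user lacks the read role on the quoted collection:

```java
import org.bson.Document;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;

public class RootDocumentRead {
    public static void main(String[] args) {
        // Hypothetical connection; real deployments supply credentials in the URI
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> nodes =
                    client.getDatabase("oak").getCollection("nodes");

            // Equivalent of the logged command:
            // { find: "nodes", filter: { _id: "0:/" }, limit: 1, singleBatch: true }
            // "0:/" is the DocumentNodeStore id of the repository root
            Document root = nodes.find(new Document("_id", "0:/")).limit(1).first();

            // With a user that lacks the "read" role on this database, the
            // find above throws MongoQueryException with error code 13
            System.out.println(root);
        }
    }
}
```

Since the issue describes the messages as harmless but annoying, the likely fix on the Oak side is to suppress or downgrade the log output rather than to require broader permissions.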