[GitHub] jihoonson commented on issue #6002: Add IRC#druid-dev shields.io into README

2018-07-20 Thread GitBox
jihoonson commented on issue #6002: Add IRC#druid-dev shields.io into README
URL: https://github.com/apache/incubator-druid/pull/6002#issuecomment-406768847
 
 
   LGTM. Please resolve conflicts.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@druid.apache.org
For additional commands, e-mail: dev-h...@druid.apache.org



[GitHub] asdf2014 commented on issue #5980: Various changes about a few coding specifications

2018-07-20 Thread GitBox
asdf2014 commented on issue #5980: Various changes about a few coding 
specifications
URL: https://github.com/apache/incubator-druid/pull/5980#issuecomment-406767668
 
 
   Hi, @leventov. Thanks for your review. The `RedundantTypeArguments` inspection has been added as an ERROR-level check, and the `teamcity` build has also passed. PTAL.





[GitHub] jon-wei closed pull request #6029: Add comment and code tweak to Basic HTTP Authenticator

2018-07-20 Thread GitBox
jon-wei closed pull request #6029: Add comment and code tweak to Basic HTTP 
Authenticator
URL: https://github.com/apache/incubator-druid/pull/6029
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/extensions-core/druid-basic-security/src/main/java/io/druid/security/basic/authentication/BasicHTTPAuthenticator.java b/extensions-core/druid-basic-security/src/main/java/io/druid/security/basic/authentication/BasicHTTPAuthenticator.java
index 1a4d717125f..bdd0aabf9a8 100644
--- a/extensions-core/druid-basic-security/src/main/java/io/druid/security/basic/authentication/BasicHTTPAuthenticator.java
+++ b/extensions-core/druid-basic-security/src/main/java/io/druid/security/basic/authentication/BasicHTTPAuthenticator.java
@@ -149,6 +149,7 @@ public void init(FilterConfig filterConfig)
 
   }
 
+
   @Override
   public void doFilter(
       ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain
@@ -163,9 +164,12 @@ public void doFilter(
       return;
     }
 
+    // At this point, encodedUserSecret is not null, indicating that the request intends to perform
+    // Basic HTTP authentication. If any errors occur with the authentication, we send a 401 response immediately
+    // and do not proceed further down the filter chain.
     String decodedUserSecret = BasicAuthUtils.decodeUserSecret(encodedUserSecret);
     if (decodedUserSecret == null) {
-      // we recognized a Basic auth header, but could not decode the user secret
+      // We recognized a Basic auth header, but could not decode the user secret.
       httpResp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
       return;
     }
@@ -182,12 +186,10 @@ public void doFilter(
     if (checkCredentials(user, password)) {
       AuthenticationResult authenticationResult = new AuthenticationResult(user, authorizerName, name, null);
       servletRequest.setAttribute(AuthConfig.DRUID_AUTHENTICATION_RESULT, authenticationResult);
+      filterChain.doFilter(servletRequest, servletResponse);
     } else {
       httpResp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
-      return;
     }
-
-    filterChain.doFilter(servletRequest, servletResponse);
   }
 
   @Override

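For readers skimming the archive, the behavioral effect of this diff (advancing the filter chain only on successful authentication) can be sketched in a standalone, simplified form. `decodeUserSecret` and `checkCredentials` below are hypothetical stand-ins, not Druid's actual implementations, and the servlet machinery is reduced to an HTTP status code:

```java
import java.util.Base64;

// Minimal sketch of the control flow after this change: the request only
// continues down the filter chain on successful authentication; every
// failure path sends 401 and returns. Servlet types are simplified.
public class BasicAuthFlowSketch
{
  static String decodeUserSecret(String encoded)
  {
    try {
      return new String(Base64.getDecoder().decode(encoded));
    }
    catch (IllegalArgumentException e) {
      return null; // could not decode the user secret
    }
  }

  static boolean checkCredentials(String user, String password)
  {
    // hypothetical stand-in for the real credential check
    return "admin".equals(user) && "secret".equals(password);
  }

  /** Returns the status the filter would produce: 200 = passed down the chain, 401 = rejected. */
  static int doFilter(String encodedUserSecret)
  {
    String decoded = decodeUserSecret(encodedUserSecret);
    if (decoded == null) {
      return 401; // recognized a Basic auth header, but could not decode the user secret
    }
    String[] parts = decoded.split(":", 2);
    if (parts.length != 2) {
      return 401;
    }
    if (checkCredentials(parts[0], parts[1])) {
      return 200; // only the success branch proceeds down the filter chain
    } else {
      return 401;
    }
  }

  public static void main(String[] args)
  {
    String good = Base64.getEncoder().encodeToString("admin:secret".getBytes());
    String bad = Base64.getEncoder().encodeToString("admin:wrong".getBytes());
    System.out.println(doFilter(good));
    System.out.println(doFilter(bad));
  }
}
```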

 





[GitHub] asdf2014 commented on a change in pull request #5980: Various changes about a few coding specifications

2018-07-20 Thread GitBox
asdf2014 commented on a change in pull request #5980: Various changes about a 
few coding specifications
URL: https://github.com/apache/incubator-druid/pull/5980#discussion_r204198679
 
 

 ##
 File path: 
extensions-contrib/time-min-max/src/main/java/io/druid/query/aggregation/TimestampAggregatorFactory.java
 ##
 @@ -148,7 +147,8 @@ public AggregatorFactory getMergingFactory(AggregatorFactory other) throws Aggre
   @Override
   public List getRequiredColumns()
   {
-    return Arrays.asList(new TimestampAggregatorFactory(fieldName, fieldName, timeFormat, comparator, initValue));
+    return Collections.singletonList(
 
 Review comment:
   Fixed.





[GitHub] asdf2014 commented on a change in pull request #5980: Various changes about a few coding specifications

2018-07-20 Thread GitBox
asdf2014 commented on a change in pull request #5980: Various changes about a 
few coding specifications
URL: https://github.com/apache/incubator-druid/pull/5980#discussion_r204198676
 
 

 ##
 File path: 
extensions-contrib/distinctcount/src/main/java/io/druid/query/aggregation/distinctcount/DistinctCountAggregatorFactory.java
 ##
 @@ -137,7 +136,8 @@ public AggregatorFactory getCombiningFactory()
   @Override
   public List getRequiredColumns()
   {
-    return Arrays.asList(new DistinctCountAggregatorFactory(fieldName, fieldName, bitMapFactory));
+    return Collections.singletonList(
+        new DistinctCountAggregatorFactory(fieldName, fieldName, bitMapFactory));
 
 Review comment:
   Fixed.
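For context, the change under review replaces `Arrays.asList` with `Collections.singletonList` for a single-element return value. A minimal illustration of the two calls (the element `"fieldName"` here is just a placeholder):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Collections.singletonList allocates a dedicated one-element immutable
// list, whereas Arrays.asList first allocates a one-element varargs array
// and then wraps it. For a known single element, singletonList is leaner.
public class SingletonListExample
{
  public static void main(String[] args)
  {
    List<String> viaArrays = Arrays.asList("fieldName");
    List<String> viaSingleton = Collections.singletonList("fieldName");

    // Both produce an equal one-element list.
    System.out.println(viaArrays.equals(viaSingleton)); // true
  }
}
```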





[GitHub] asdf2014 commented on a change in pull request #5980: Various changes about a few coding specifications

2018-07-20 Thread GitBox
asdf2014 commented on a change in pull request #5980: Various changes about a 
few coding specifications
URL: https://github.com/apache/incubator-druid/pull/5980#discussion_r204198669
 
 

 ##
 File path: codestyle/checkstyle.xml
 ##
 @@ -171,5 +171,17 @@
   
   
 
+
+
 
 Review comment:
   Added.





[GitHub] joshlemer commented on issue #3236: gitter community channel?

2018-07-20 Thread GitBox
joshlemer commented on issue #3236: gitter community channel?
URL: 
https://github.com/apache/incubator-druid/issues/3236#issuecomment-406758752
 
 
   @gianm @leventov just because you don't personally like to use these tools doesn't mean that others don't find them useful. I personally participate in many Gitter chats, including those for the Scala and Rust communities, and including Apache projects like Beam and Spark (Beam has a Slack channel on the official Apache Slack instance, and Spark has many Gitter channels). While many questions that go on in these channels are beginner questions, that's fine. And at the same time, lots of very senior people in the community hang out in there and talk about very advanced topics.
   
   IRC is terrible for online discussion because you can't see history, you can't share code, you can't see who has seen your messages, there are no avatars / connection to identity, etc. At least consider the fact that multiple people have asked for it, which implies a much larger number of people who would use it but haven't asked.





[GitHub] leventov edited a comment on issue #6028: Error in SqlMetadataRuleManagerTest

2018-07-20 Thread GitBox
leventov edited a comment on issue #6028: Error in SqlMetadataRuleManagerTest
URL: 
https://github.com/apache/incubator-druid/issues/6028#issuecomment-406756225
 
 
   I think the problem is that `SQLMetadataRuleManager.poll()` is scheduled to be called asynchronously in a dedicated executor in `SQLMetadataRuleManager.start()`. `poll()` makes an inner join of the rules table with itself, which appears to be enough to cause a deadlock when the drop of this table is executed in parallel (as it is in `SQLMetadataRuleManagerTest.cleanup()`).
   
   The problem was introduced in #5554, which added `SQLMetadataRuleManagerTest.testMultipleStopAndStart()`, a test that calls `start()` and `stop()` repeatedly. Before that, `SQLMetadataRuleManager.start()` was never called in the context of `SQLMetadataRuleManagerTest`.
   
   An extra, coincidental problem that leads to this condition is that `stop()` (which pairs with the `start()` calls in `testMultipleStopAndStart()`) is not actually synchronous. By the time `stop()` exits, some `poll()` may still be executing. To ensure this is not the case, `exec.awaitTermination()` should be called. But this is awkward, because we don't know what timeout we should use. Alternatively, `poll()` could be synchronized together with `start()` and `stop()` (the `started` flag should be checked under the lock in `poll()`).
   
   Extra:
 - There is no point in `future` and `exec` being volatile.
 - `future.cancel` is redundant before `shutdownNow()`.
 - There is a race on `retryStartTime` updates in `poll()`.
 - It feels to me that `SQLMetadataRuleManager` mixes two separate abstractions: one of scheduled polling, and another of access to the database itself (including the implementation of `poll()`). But I'm uncertain about this.
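The locking alternative described above can be sketched in a self-contained form. This is illustrative only, not Druid's actual `SQLMetadataRuleManager`; the database poll is reduced to a counter increment:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the suggested fix: guard start(), stop(), and poll() with one
// lock, and check the `started` flag inside poll() under that lock, so no
// poll() body can run after stop() returns.
public class GuardedPoller
{
  private final Object lock = new Object();
  private boolean started = false;
  private ScheduledExecutorService exec;
  int polls = 0; // stand-in for observable polling work

  public void start()
  {
    synchronized (lock) {
      if (started) {
        return;
      }
      started = true;
      exec = Executors.newSingleThreadScheduledExecutor();
      exec.scheduleWithFixedDelay(this::poll, 0, 10, TimeUnit.MILLISECONDS);
    }
  }

  public void stop()
  {
    synchronized (lock) {
      if (!started) {
        return;
      }
      started = false;
      exec.shutdownNow();
      // Any poll() racing with stop() will observe started == false
      // under the lock and return without touching the database.
    }
  }

  private void poll()
  {
    synchronized (lock) {
      if (!started) {
        return; // checked under the lock, as suggested above
      }
      polls++; // stand-in for the real database poll
    }
  }

  public static void main(String[] args) throws InterruptedException
  {
    GuardedPoller poller = new GuardedPoller();
    // Repeated start/stop cycles, as in testMultipleStopAndStart().
    for (int i = 0; i < 3; i++) {
      poller.start();
      Thread.sleep(25);
      poller.stop();
    }
    System.out.println(poller.polls);
  }
}
```

Note this trades the unknown `awaitTermination()` timeout for slightly coarser locking: a long-running poll would briefly block `stop()`.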





[GitHub] leventov commented on issue #6028: Error in SqlMetadataRuleManagerTest

2018-07-20 Thread GitBox
leventov commented on issue #6028: Error in SqlMetadataRuleManagerTest
URL: 
https://github.com/apache/incubator-druid/issues/6028#issuecomment-406756225
 
 
   I think the problem is that `SQLMetadataRuleManager.poll()` is scheduled to be called asynchronously in a dedicated executor in `SQLMetadataRuleManager.start()`. `poll()` makes an inner join of the rules table with itself, which appears to be enough to cause a deadlock when the drop of this table is executed in parallel (as it is in `SQLMetadataRuleManagerTest.cleanup()`).
   
   The problem was introduced in #5554, which added `SQLMetadataRuleManagerTest.testMultipleStopAndStart()`, a test that calls `start()` and `stop()` repeatedly. Before that, `SQLMetadataRuleManager.start()` was never called in the context of `SQLMetadataRuleManagerTest`.
   
   An extra, coincidental problem that leads to this condition is that `stop()` (which pairs with the `start()` calls in `testMultipleStopAndStart()`) is not actually synchronous at the moment. By the time `stop()` exits, some `poll()` may still be executing. To ensure this is not the case, `exec.awaitTermination()` should be called. But this is awkward, because we don't know what timeout we should use. Alternatively, `poll()` could be synchronized together with `start()` and `stop()` (the `started` flag should be checked under the lock in `poll()`).
   
   Extra:
 - There is no point in `future` and `exec` being volatile.
 - `future.cancel` is redundant before `shutdownNow()`.
 - There is a race on `retryStartTime` updates in `poll()`.
 - It feels to me that `SQLMetadataRuleManager` mixes two separate abstractions: one of scheduled polling, and another of access to the database itself (including the implementation of `poll()`). But I'm uncertain about this.





[GitHub] clintropolis opened a new pull request #6032: Fix coordinator balancer 'moved' logs and metrics

2018-07-20 Thread GitBox
clintropolis opened a new pull request #6032: Fix coordinator balancer 'moved' 
logs and metrics
URL: https://github.com/apache/incubator-druid/pull/6032
 
 
   Fixes a cosmetic issue where an incorrect 'movedCount' metric and log message were emitted because the value was sourced from `currentlyMovingSegments.size()` rather than the total count of movement operations performed. This could be inaccurate if a load finished before the balancer operation completed and emitted metrics and logs.
   
   This changes metrics and logs to use the counter value rather than a snapshot of currently moving segments at emit time.
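The counter-versus-snapshot distinction described above can be shown in a tiny standalone sketch (illustrative only; class and method names here are hypothetical, not Druid's balancer code):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Counting moves with a dedicated counter survives segments finishing
// their load before metrics are emitted; snapshotting the in-flight map
// at emit time under-reports in exactly that case.
public class MovedCountSketch
{
  final ConcurrentMap<String, Boolean> currentlyMoving = new ConcurrentHashMap<>();
  final AtomicInteger movedCounter = new AtomicInteger();

  void move(String segment)
  {
    currentlyMoving.put(segment, Boolean.TRUE);
    movedCounter.incrementAndGet();
  }

  void loadFinished(String segment)
  {
    currentlyMoving.remove(segment);
  }

  public static void main(String[] args)
  {
    MovedCountSketch balancer = new MovedCountSketch();
    balancer.move("segA");
    balancer.move("segB");
    balancer.loadFinished("segA"); // one load finishes before metrics are emitted

    // Old behavior: snapshot of in-flight moves under-reports.
    System.out.println(balancer.currentlyMoving.size()); // 1

    // New behavior: counter reports the true number of move operations.
    System.out.println(balancer.movedCounter.get()); // 2
  }
}
```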





[GitHub] drcrallen commented on issue #5913: Move Caching Cluster Client to java streams and allow parallel intermediate merges

2018-07-20 Thread GitBox
drcrallen commented on issue #5913: Move Caching Cluster Client to java streams 
and allow parallel intermediate merges
URL: https://github.com/apache/incubator-druid/pull/5913#issuecomment-406755108
 
 
   This will require some changes before going in. A huge portion of the merge work is not accounted for in the parallel mechanism. I'm working on a fix.





[GitHub] yurmix opened a new pull request #6031: Remove JDK 7 from build documentation.

2018-07-20 Thread GitBox
yurmix opened a new pull request #6031: Remove JDK 7 from build documentation.
URL: https://github.com/apache/incubator-druid/pull/6031
 
 
   See issue #6030 





[GitHub] yurmix opened a new issue #6030: "Build from Source" page states an outdated JDK version 7

2018-07-20 Thread GitBox
yurmix opened a new issue #6030: "Build from Source" page states an outdated 
JDK version 7
URL: https://github.com/apache/incubator-druid/issues/6030
 
 
   [Copied from [website issue 
#470](https://github.com/druid-io/druid-io.github.io/issues/470)]
   
   Problem:
   The documentation page [Build from 
Source](http://druid.io/docs/latest/development/build.html) states the 
following:
   
   `Building Druid requires the following: - JDK 7 or JDK 8`
   
   In fact, Druid has mandated JDK 8 since version 0.10.0 (as evident in Druid PR #3914 by @gianm).
   
   Solution:
   Remove the text "JDK 7 or ".





[GitHub] leventov commented on a change in pull request #5980: Various changes about a few coding specifications

2018-07-20 Thread GitBox
leventov commented on a change in pull request #5980: Various changes about a 
few coding specifications
URL: https://github.com/apache/incubator-druid/pull/5980#discussion_r204190823
 
 

 ##
 File path: 
extensions-contrib/time-min-max/src/main/java/io/druid/query/aggregation/TimestampAggregatorFactory.java
 ##
 @@ -148,7 +147,8 @@ public AggregatorFactory getMergingFactory(AggregatorFactory other) throws Aggre
   @Override
   public List getRequiredColumns()
   {
-    return Arrays.asList(new TimestampAggregatorFactory(fieldName, fieldName, timeFormat, comparator, initValue));
+    return Collections.singletonList(
 
 Review comment:
   Same





[GitHub] leventov commented on a change in pull request #5980: Various changes about a few coding specifications

2018-07-20 Thread GitBox
leventov commented on a change in pull request #5980: Various changes about a 
few coding specifications
URL: https://github.com/apache/incubator-druid/pull/5980#discussion_r204190338
 
 

 ##
 File path: codestyle/checkstyle.xml
 ##
 @@ -171,5 +171,17 @@
   
   
 
+
+
 
 Review comment:
   Please add a comment noting that this regex should be replaced with an IntelliJ inspection once teamcity.jetbrains.com updates to at least IntelliJ 2018.1 (it currently uses 2017.2).
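The XML of the check under discussion was stripped by the mailing-list archive. A Checkstyle `Regexp` module of the kind being discussed generally looks like the fragment below; the `format` pattern and message are illustrative only, not the actual regex from PR #5980:

```xml
<module name="Regexp">
  <!-- Illustrative pattern only; the actual regex from the PR was lost in
       the archive. Per the review request, a comment like this one would
       note that the check should become an IntelliJ inspection once
       teamcity.jetbrains.com updates to IntelliJ 2018.1 or later. -->
  <property name="format" value="Maps\.&lt;[^&gt;]*&gt;newHashMap\(\)"/>
  <property name="illegalPattern" value="true"/>
  <property name="message" value="Redundant type arguments"/>
</module>
```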





[GitHub] leventov commented on a change in pull request #5980: Various changes about a few coding specifications

2018-07-20 Thread GitBox
leventov commented on a change in pull request #5980: Various changes about a 
few coding specifications
URL: https://github.com/apache/incubator-druid/pull/5980#discussion_r204190768
 
 

 ##
 File path: 
extensions-contrib/distinctcount/src/main/java/io/druid/query/aggregation/distinctcount/DistinctCountAggregatorFactory.java
 ##
 @@ -137,7 +136,8 @@ public AggregatorFactory getCombiningFactory()
   @Override
   public List getRequiredColumns()
   {
-    return Arrays.asList(new DistinctCountAggregatorFactory(fieldName, fieldName, bitMapFactory));
+    return Collections.singletonList(
+        new DistinctCountAggregatorFactory(fieldName, fieldName, bitMapFactory));
 
 Review comment:
   It's not formatted properly; the closing `);` should be on the next line (or the whole expression on a single line).





[GitHub] leventov commented on a change in pull request #5980: Various changes about a few coding specifications

2018-07-20 Thread GitBox
leventov commented on a change in pull request #5980: Various changes about a 
few coding specifications
URL: https://github.com/apache/incubator-druid/pull/5980#discussion_r204190613
 
 

 ##
 File path: benchmarks/src/main/java/io/druid/benchmark/query/TopNBenchmark.java
 ##
 @@ -372,7 +372,7 @@ public void queryMultiQueryableIndex(Blackhole blackhole)
 
 Sequence> queryResult = theRunner.run(
 QueryPlus.wrap(query),
-Maps.newHashMap()
+Maps.newHashMap()
 
 Review comment:
   Did you remove redundant type arguments in the whole codebase? If so, you 
could try to change the level of the corresponding IntelliJ inspection 
("Redundant type arguments") to ERROR.
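The "Redundant type arguments" inspection mentioned above flags explicit type arguments that the compiler can already infer. A minimal, dependency-free illustration (the archive stripped the actual type arguments from the `Maps.newHashMap()` diff, so plain JDK calls are used here):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// What the inspection flags: explicit type arguments on a generic
// constructor or static factory call that inference already supplies.
public class RedundantTypeArgs
{
  public static void main(String[] args)
  {
    // Redundant: <String, Integer> can be inferred from the target type.
    Map<String, Integer> a = new HashMap<String, Integer>();

    // Preferred since Java 7: the diamond operator.
    Map<String, Integer> b = new HashMap<>();

    // Same idea for generic static factory methods:
    List<String> c = java.util.Collections.<String>emptyList(); // redundant
    List<String> d = java.util.Collections.emptyList();         // inferred

    System.out.println(a.equals(b) && c.equals(d));
  }
}
```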





[GitHub] leventov commented on a change in pull request #5998: Add support to filter on datasource for active tasks

2018-07-20 Thread GitBox
leventov commented on a change in pull request #5998: Add support to filter on 
datasource for active tasks
URL: https://github.com/apache/incubator-druid/pull/5998#discussion_r204187389
 
 

 ##
 File path: 
indexing-service/src/main/java/io/druid/indexing/overlord/TaskStorage.java
 ##
 @@ -127,17 +127,20 @@
    * Returns a list of currently running or pending tasks as stored in the storage facility as {@link TaskInfo}. No particular order
    * is guaranteed, but implementations are encouraged to return tasks in ascending order of creation.
    *
+   * @param datasource datasource
 
 Review comment:
   This line broke TeamCity CI: 
https://teamcity.jetbrains.com/viewLog.html?buildId=1531809=Inspection=OpenSourceProjects_Druid_Inspections





[GitHub] leventov commented on issue #6029: Add comment and code tweak to Basic HTTP Authenticator

2018-07-20 Thread GitBox
leventov commented on issue #6029: Add comment and code tweak to Basic HTTP 
Authenticator
URL: https://github.com/apache/incubator-druid/pull/6029#issuecomment-406736568
 
 
   Thank you





[GitHub] jon-wei opened a new pull request #6029: Add comment and code tweak to Basic HTTP Authenticator

2018-07-20 Thread GitBox
jon-wei opened a new pull request #6029: Add comment and code tweak to Basic 
HTTP Authenticator
URL: https://github.com/apache/incubator-druid/pull/6029
 
 
   Adds a comment clarifying error handling in BasicHTTPAuthenticator and a 
small code tweak.
   
   See https://github.com/apache/incubator-druid/pull/5856#discussion_r204147376
   
   





[GitHub] clintropolis edited a comment on issue #6028: Error in SqlMetadataRuleManagerTest

2018-07-20 Thread GitBox
clintropolis edited a comment on issue #6028: Error in 
SqlMetadataRuleManagerTest
URL: 
https://github.com/apache/incubator-druid/issues/6028#issuecomment-406727444
 
 
   I have encountered this error before as well; it does not happen often, though the last occurrence was fairly recent (~2 weeks ago), so I wonder whether something has changed that has aggravated this.





[GitHub] clintropolis commented on issue #6028: Error in SqlMetadataRuleManagerTest

2018-07-20 Thread GitBox
clintropolis commented on issue #6028: Error in SqlMetadataRuleManagerTest
URL: 
https://github.com/apache/incubator-druid/issues/6028#issuecomment-406727444
 
 
   I have encountered this error before as well, it does not happen often





[GitHub] jihoonson commented on issue #6028: Error in SqlMetadataRuleManagerTest

2018-07-20 Thread GitBox
jihoonson commented on issue #6028: Error in SqlMetadataRuleManagerTest
URL: 
https://github.com/apache/incubator-druid/issues/6028#issuecomment-406726910
 
 
   Where did you see this error? This test passes on my local machine.





[GitHub] leventov opened a new issue #6028: Error in SqlMetadataRuleManagerTest

2018-07-20 Thread GitBox
leventov opened a new issue #6028: Error in SqlMetadataRuleManagerTest
URL: https://github.com/apache/incubator-druid/issues/6028
 
 
   ```
   Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 22.619 sec 
<<< FAILURE! - in io.druid.metadata.SQLMetadataRuleManagerTest
   testMultipleStopAndStart(io.druid.metadata.SQLMetadataRuleManagerTest)  Time 
elapsed: 20.698 sec  <<< ERROR!
   org.skife.jdbi.v2.exceptions.CallbackFailedException: 
   org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException: 
java.sql.SQLTransactionRollbackException: A lock could not be obtained due to a 
deadlock, cycle of locks and waiters is:
   Lock : ROW, SYSCOLUMNS, (5,12)
 Waiting XID : {237, X} , APP, DROP TABLE 
druidTestc0dd9b47c4df453daeba2aca4613783e_rules
 Granted XID : {217, S} 
   Lock : TABLE, DRUIDTESTC0DD9B47C4DF453DAEBA2ACA4613783E_RULES, Tablelock
 Waiting XID : {217, IS} , APP, SELECT r.dataSource, r.payload FROM 
druidTestc0dd9b47c4df453daeba2aca4613783e_rules r INNER JOIN(SELECT dataSource, 
max(version) as version FROM druidTestc0dd9b47c4df453daeba2aca4613783e_rules 
GROUP BY dataSource) ds ON r.datasource = ds.datasource and r.version = 
ds.version
 Granted XID : {237, X} 
   . The selected victim is XID : 237. [statement:"DROP TABLE 
druidTestc0dd9b47c4df453daeba2aca4613783e_rules", located:"DROP TABLE 
druidTestc0dd9b47c4df453daeba2aca4613783e_rules", rewritten:"DROP TABLE 
druidTestc0dd9b47c4df453daeba2aca4613783e_rules", arguments:{ positional:{}, 
named:{}, finder:[]}]
at 
io.druid.metadata.SQLMetadataRuleManagerTest.dropTable(SQLMetadataRuleManagerTest.java:209)
at 
io.druid.metadata.SQLMetadataRuleManagerTest.cleanup(SQLMetadataRuleManagerTest.java:204)
   Caused by: org.skife.jdbi.v2.exceptions.UnableToExecuteStatementException: 
   java.sql.SQLTransactionRollbackException: A lock could not be obtained due 
to a deadlock, cycle of locks and waiters is:
   Lock : ROW, SYSCOLUMNS, (5,12)
 Waiting XID : {237, X} , APP, DROP TABLE 
druidTestc0dd9b47c4df453daeba2aca4613783e_rules
 Granted XID : {217, S} 
   Lock : TABLE, DRUIDTESTC0DD9B47C4DF453DAEBA2ACA4613783E_RULES, Tablelock
 Waiting XID : {217, IS} , APP, SELECT r.dataSource, r.payload FROM 
druidTestc0dd9b47c4df453daeba2aca4613783e_rules r INNER JOIN(SELECT dataSource, 
max(version) as version FROM druidTestc0dd9b47c4df453daeba2aca4613783e_rules 
GROUP BY dataSource) ds ON r.datasource = ds.datasource and r.version = 
ds.version
 Granted XID : {237, X} 
   . The selected victim is XID : 237. [statement:"DROP TABLE 
druidTestc0dd9b47c4df453daeba2aca4613783e_rules", located:"DROP TABLE 
druidTestc0dd9b47c4df453daeba2aca4613783e_rules", rewritten:"DROP TABLE 
druidTestc0dd9b47c4df453daeba2aca4613783e_rules", arguments:{ positional:{}, 
named:{}, finder:[]}]
at 
io.druid.metadata.SQLMetadataRuleManagerTest.dropTable(SQLMetadataRuleManagerTest.java:209)
at 
io.druid.metadata.SQLMetadataRuleManagerTest.cleanup(SQLMetadataRuleManagerTest.java:204)
   Caused by: java.sql.SQLTransactionRollbackException: 
   A lock could not be obtained due to a deadlock, cycle of locks and waiters 
is:
   Lock : ROW, SYSCOLUMNS, (5,12)
 Waiting XID : {237, X} , APP, DROP TABLE 
druidTestc0dd9b47c4df453daeba2aca4613783e_rules
 Granted XID : {217, S} 
   Lock : TABLE, DRUIDTESTC0DD9B47C4DF453DAEBA2ACA4613783E_RULES, Tablelock
 Waiting XID : {217, IS} , APP, SELECT r.dataSource, r.payload FROM 
druidTestc0dd9b47c4df453daeba2aca4613783e_rules r INNER JOIN(SELECT dataSource, 
max(version) as version FROM druidTestc0dd9b47c4df453daeba2aca4613783e_rules 
GROUP BY dataSource) ds ON r.datasource = ds.datasource and r.version = 
ds.version
 Granted XID : {237, X} 
   . The selected victim is XID : 237.
at 
io.druid.metadata.SQLMetadataRuleManagerTest.dropTable(SQLMetadataRuleManagerTest.java:209)
at 
io.druid.metadata.SQLMetadataRuleManagerTest.cleanup(SQLMetadataRuleManagerTest.java:204)
   Caused by: org.apache.derby.iapi.error.StandardException: 
   A lock could not be obtained due to a deadlock, cycle of locks and waiters 
is:
   Lock : ROW, SYSCOLUMNS, (5,12)
 Waiting XID : {237, X} , APP, DROP TABLE 
druidTestc0dd9b47c4df453daeba2aca4613783e_rules
 Granted XID : {217, S} 
   Lock : TABLE, DRUIDTESTC0DD9B47C4DF453DAEBA2ACA4613783E_RULES, Tablelock
 Waiting XID : {217, IS} , APP, SELECT r.dataSource, r.payload FROM 
druidTestc0dd9b47c4df453daeba2aca4613783e_rules r INNER JOIN(SELECT dataSource, 
max(version) as version FROM druidTestc0dd9b47c4df453daeba2aca4613783e_rules 
GROUP BY dataSource) ds ON r.datasource = ds.datasource and r.version = 
ds.version
 Granted XID : {237, X} 
   . The selected victim is XID : 237.
at 
io.druid.metadata.SQLMetadataRuleManagerTest.dropTable(SQLMetadataRuleManagerTest.java:209)
at 
io.druid.metadata.SQLMetadataRuleManagerTest.cleanup(SQLMetadataRuleManagerTest.java:204)
   ```


[GitHub] leventov opened a new pull request #6027: Make Parser.parseToMap() to return a mutable Map

2018-07-20 Thread GitBox
leventov opened a new pull request #6027: Make Parser.parseToMap() to return a 
mutable Map
URL: https://github.com/apache/incubator-druid/pull/6027
 
 
   I encountered this error somewhere inside Druid/Spark/Kryo/Jackson interop:
   ```
   java.lang.UnsupportedOperationException
at 
io.druid.java.util.common.parsers.ObjectFlatteners$1$1.put(ObjectFlatteners.java:121)
 ~[java-util-0.12.1-rc3-SNAPSHOT.jar:0.12.1-rc3-SNAPSHOT]
at 
io.druid.java.util.common.parsers.ObjectFlatteners$1$1.put(ObjectFlatteners.java:77)
 ~[java-util-0.12.1-rc3-SNAPSHOT.jar:0.12.1-rc3-SNAPSHOT]
at 
com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:162)
 ~[kryo-shaded-3.0.3.jar:?]
at 
com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39) 
~[kryo-shaded-3.0.3.jar:?]
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:793) 
~[kryo-shaded-3.0.3.jar:?]
at com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:42) 
~[chill_2.10-0.8.0.jar:0.8.0]
at com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:33) 
~[chill_2.10-0.8.0.jar:0.8.0]
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:793) 
~[kryo-shaded-3.0.3.jar:?]
at 
org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:244)
 ~[spark-core_2.10-2.1.0.jar:2.1.0]
at 
org.apache.spark.serializer.DeserializationStream.readKey(Serializer.scala:157) 
~[spark-core_2.10-2.1.0.jar:2.1.0]
at 
org.apache.spark.serializer.DeserializationStream$$anon$2.getNext(Serializer.scala:189)
 ~[spark-core_2.10-2.1.0.jar:2.1.0]
at 
org.apache.spark.serializer.DeserializationStream$$anon$2.getNext(Serializer.scala:186)
 ~[spark-core_2.10-2.1.0.jar:2.1.0]
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73) 
~[spark-core_2.10-2.1.0.jar:2.1.0]
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) 
~[scala-library-2.10.6.jar:?]
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) 
~[scala-library-2.10.6.jar:?]
at 
org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32) 
~[spark-core_2.10-2.1.0.jar:2.1.0]
at 
org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39) 
~[spark-core_2.10-2.1.0.jar:2.1.0]
at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:966) 
~[scala-library-2.10.6.jar:?]
at 
scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:972) 
~[scala-library-2.10.6.jar:?]
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) 
~[scala-library-2.10.6.jar:?]
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) 
~[scala-library-2.10.6.jar:?]
at scala.collection.Iterator$class.foreach(Iterator.scala:727) 
~[scala-library-2.10.6.jar:?]
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) 
~[scala-library-2.10.6.jar:?]
at 
scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48) 
~[scala-library-2.10.6.jar:?]
at 
scala.collection.mutable.ListBuffer.$plus$plus$eq(ListBuffer.scala:176) 
~[scala-library-2.10.6.jar:?]
at 
scala.collection.mutable.ListBuffer.$plus$plus$eq(ListBuffer.scala:45) 
~[scala-library-2.10.6.jar:?]
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273) 
~[scala-library-2.10.6.jar:?]
at scala.collection.AbstractIterator.to(Iterator.scala:1157) 
~[scala-library-2.10.6.jar:?]
at 
scala.collection.TraversableOnce$class.toList(TraversableOnce.scala:257) 
~[scala-library-2.10.6.jar:?]
at scala.collection.AbstractIterator.toList(Iterator.scala:1157) 
~[scala-library-2.10.6.jar:?]
at 
io.druid.indexer.spark.SparkDruidIndexer$$anonfun$14.apply(SparkDruidIndexer.scala:341)
 ~[classes/:?]
at 
io.druid.indexer.spark.SparkDruidIndexer$$anonfun$14.apply(SparkDruidIndexer.scala:239)
 ~[classes/:?]
at 
org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$26.apply(RDD.scala:843)
 ~[spark-core_2.10-2.1.0.jar:2.1.0]
at 
org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$26.apply(RDD.scala:843)
 ~[spark-core_2.10-2.1.0.jar:2.1.0]
at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
~[spark-core_2.10-2.1.0.jar:2.1.0]
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323) 
~[spark-core_2.10-2.1.0.jar:2.1.0]
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287) 
~[spark-core_2.10-2.1.0.jar:2.1.0]
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 
~[spark-core_2.10-2.1.0.jar:2.1.0]
at org.apache.spark.scheduler.Task.run(Task.scala:99) 
~[spark-core_2.10-2.1.0.jar:2.1.0]
at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282) 
[spark-core_2.10-2.1.0.jar:2.1.0]
at 
```
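The root cause is the standard contract of read-only maps: `put()` on an unmodifiable view throws `UnsupportedOperationException`, which is what Kryo's `MapSerializer.read()` hits when it tries to populate the map the parser returned. A minimal standalone sketch (class, method, and key names here are illustrative, not Druid code) of the failure and of the fix of returning a plain mutable map:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class MutableMapDemo
{
  // Stand-in for a deserializer (e.g. Kryo's MapSerializer) that
  // populates a map handed to it by the parser.
  static void populate(Map<String, Object> map)
  {
    map.put("dim", "value"); // throws if the map is a read-only view
  }

  public static void main(String[] args)
  {
    Map<String, Object> readOnly = Collections.unmodifiableMap(new LinkedHashMap<>());
    try {
      populate(readOnly);
    }
    catch (UnsupportedOperationException e) {
      System.out.println("read-only view: UnsupportedOperationException");
    }

    // The fix proposed in this PR: return a plain mutable map instead.
    Map<String, Object> mutable = new LinkedHashMap<>();
    populate(mutable);
    System.out.println("mutable map: " + mutable);
  }
}
```

Any consumer that mutates the parsed row in place (as the Spark/Kryo path above does) needs the mutable variant.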

[GitHub] jon-wei commented on issue #6026: CORs Issue and Access-Control-Allow-Origin

2018-07-20 Thread GitBox
jon-wei commented on issue #6026: CORs Issue and Access-Control-Allow-Origin
URL: 
https://github.com/apache/incubator-druid/issues/6026#issuecomment-406711317
 
 
   Core Druid doesn't add CORS headers itself. I believe some people have been using this extension for that: https://github.com/acesinc/druid-cors-filter-extension
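For context, a CORS filter of this kind essentially adds a fixed set of `Access-Control-*` response headers so the browser's preflight check passes. A stripped-down sketch using a plain map in place of an `HttpServletResponse` (the header names are the real CORS headers; the class, method, and origin values are illustrative, not the extension's actual code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CorsHeadersSketch
{
  // What a CORS servlet filter effectively does to each response:
  // add the Access-Control-* headers the browser's preflight expects.
  static void addCorsHeaders(Map<String, String> responseHeaders, String allowedOrigin)
  {
    responseHeaders.put("Access-Control-Allow-Origin", allowedOrigin);
    responseHeaders.put("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
    responseHeaders.put("Access-Control-Allow-Headers", "Content-Type, Authorization");
  }

  public static void main(String[] args)
  {
    Map<String, String> headers = new LinkedHashMap<>();
    addCorsHeaders(headers, "https://my-angular-app.example");
    headers.forEach((k, v) -> System.out.println(k + ": " + v));
  }
}
```

Without these headers on the broker's responses, a browser app served from another origin fails the preflight check exactly as described in #6026.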


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@druid.apache.org
For additional commands, e-mail: dev-h...@druid.apache.org



[GitHub] leventov commented on a change in pull request #5856: Immediately send 401 on basic HTTP authentication failure

2018-07-20 Thread GitBox
leventov commented on a change in pull request #5856: Immediately send 401 on 
basic HTTP authentication failure
URL: https://github.com/apache/incubator-druid/pull/5856#discussion_r204147376
 
 

 ##
 File path: 
extensions-core/druid-basic-security/src/main/java/io/druid/security/basic/authentication/BasicHTTPAuthenticator.java
 ##
 @@ -175,6 +182,9 @@ public void doFilter(
   if (checkCredentials(user, password)) {
 AuthenticationResult authenticationResult = new 
AuthenticationResult(user, authorizerName, name, null);
 servletRequest.setAttribute(AuthConfig.DRUID_AUTHENTICATION_RESULT, 
authenticationResult);
+  } else {
+httpResp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
+return;
 
 Review comment:
   @jon-wei if this return is intentional, it would be clearer to move the line 
`filterChain.doFilter(servletRequest, servletResponse);` from under the `if` 
block into the first branch.
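A sketch of the suggested shape, with stub types standing in for the servlet API (the class, method names, and credential values here are illustrative, not the actual `BasicHTTPAuthenticator` code): the success branch alone continues down the filter chain, and the failure branch sends 401 and stops, so neither branch falls through implicitly:

```java
public class AuthFilterSketch
{
  // Minimal stand-in for javax.servlet.FilterChain.
  interface Chain { void doFilter(); }

  static boolean checkCredentials(String user, String password)
  {
    return "admin".equals(user) && "secret".equals(password);
  }

  // Each branch ends the request explicitly: success continues the
  // chain, failure answers 401 immediately.
  static String doFilter(String user, String password, Chain chain)
  {
    if (checkCredentials(user, password)) {
      chain.doFilter();   // continue down the filter chain
      return "200";
    } else {
      return "401";       // sendError(SC_UNAUTHORIZED) and return
    }
  }

  public static void main(String[] args)
  {
    Chain chain = () -> System.out.println("chain invoked");
    System.out.println("good creds -> " + doFilter("admin", "secret", chain));
    System.out.println("bad creds -> " + doFilter("admin", "wrong", chain));
  }
}
```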





[GitHub] jihoonson commented on issue #5471: Implement force push down for nested group by query

2018-07-20 Thread GitBox
jihoonson commented on issue #5471: Implement force push down for nested group 
by query
URL: https://github.com/apache/incubator-druid/pull/5471#issuecomment-406691347
 
 
   @samarthjain thanks. It looks like the huge patch size is because of the test data. I think it's fine.





[GitHub] leventov commented on issue #5496: Segment filtering should be done by looking at the inner most query o…

2018-07-20 Thread GitBox
leventov commented on issue #5496: Segment filtering should be done by looking 
at the inner most query o…
URL: https://github.com/apache/incubator-druid/pull/5496#issuecomment-406690368
 
 
   @gianm @jihoonson you can change any PR title; please do this in such situations to keep commit messages readable.
   





[GitHub] fjy closed pull request #6005: Add concat and textcat SQL functions

2018-07-20 Thread GitBox
fjy closed pull request #6005: Add concat and textcat SQL functions
URL: https://github.com/apache/incubator-druid/pull/6005
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/content/querying/sql.md b/docs/content/querying/sql.md
index 09778119f2c..26ed4202319 100644
--- a/docs/content/querying/sql.md
+++ b/docs/content/querying/sql.md
@@ -125,6 +125,8 @@ String functions accept strings, and return a type 
appropriate to the function.
 |Function|Notes|
 ||-|
 |`x \|\| y`|Concat strings x and y.|
+|`CONCAT(expr, expr...)`|Concats a list of expressions.|
+|`TEXTCAT(expr, expr)`|Two argument version of CONCAT.|
 |`LENGTH(expr)`|Length of expr in UTF-16 code units.|
 |`CHAR_LENGTH(expr)`|Synonym for `LENGTH`.|
 |`CHARACTER_LENGTH(expr)`|Synonym for `LENGTH`.|
diff --git 
a/sql/src/main/java/io/druid/sql/calcite/expression/builtin/ConcatOperatorConversion.java
 
b/sql/src/main/java/io/druid/sql/calcite/expression/builtin/ConcatOperatorConversion.java
new file mode 100644
index 000..21268b98318
--- /dev/null
+++ 
b/sql/src/main/java/io/druid/sql/calcite/expression/builtin/ConcatOperatorConversion.java
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.sql.calcite.expression.builtin;
+
+import io.druid.sql.calcite.expression.DruidExpression;
+import io.druid.sql.calcite.expression.OperatorConversions;
+import io.druid.sql.calcite.expression.SqlOperatorConversion;
+import io.druid.sql.calcite.planner.Calcites;
+import io.druid.sql.calcite.planner.PlannerContext;
+import io.druid.sql.calcite.table.RowSignature;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.sql.SqlFunction;
+import org.apache.calcite.sql.SqlFunctionCategory;
+import org.apache.calcite.sql.SqlKind;
+import org.apache.calcite.sql.type.OperandTypes;
+import org.apache.calcite.sql.type.ReturnTypes;
+import org.apache.calcite.sql.type.SqlTypeName;
+
+public class ConcatOperatorConversion implements SqlOperatorConversion
+{
+  private static final SqlFunction SQL_FUNCTION = new SqlFunction(
+  "CONCAT",
+  SqlKind.OTHER_FUNCTION,
+  ReturnTypes.explicit(
+  factory -> Calcites.createSqlType(factory, SqlTypeName.VARCHAR)
+  ),
+  null,
+  OperandTypes.SAME_VARIADIC,
+  SqlFunctionCategory.STRING
+  );
+
+  @Override
+  public SqlFunction calciteOperator()
+  {
+return SQL_FUNCTION;
+  }
+
+  @Override
+  public DruidExpression toDruidExpression(
+  final PlannerContext plannerContext,
+  final RowSignature rowSignature,
+  final RexNode rexNode
+  )
+  {
+return OperatorConversions.convertCall(
+plannerContext,
+rowSignature,
+rexNode,
+druidExpressions -> DruidExpression.of(
+null,
+DruidExpression.functionCall("concat", druidExpressions)
+)
+);
+  }
+}
diff --git 
a/sql/src/main/java/io/druid/sql/calcite/expression/builtin/TextcatOperatorConversion.java
 
b/sql/src/main/java/io/druid/sql/calcite/expression/builtin/TextcatOperatorConversion.java
new file mode 100644
index 000..a34b57fe05b
--- /dev/null
+++ 
b/sql/src/main/java/io/druid/sql/calcite/expression/builtin/TextcatOperatorConversion.java
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * 

[GitHub] fjy closed pull request #6015: Check the kafka topic when comparing checkpoints from tasks with the one stored in metastore

2018-07-20 Thread GitBox
fjy closed pull request #6015: Check the kafka topic when comparing checkpoints 
from tasks with the one stored in metastore
URL: https://github.com/apache/incubator-druid/pull/6015
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/extensions-core/kafka-indexing-service/src/main/java/io/druid/indexing/kafka/supervisor/KafkaSupervisor.java
 
b/extensions-core/kafka-indexing-service/src/main/java/io/druid/indexing/kafka/supervisor/KafkaSupervisor.java
index ed287fa0591..046331e946f 100644
--- 
a/extensions-core/kafka-indexing-service/src/main/java/io/druid/indexing/kafka/supervisor/KafkaSupervisor.java
+++ 
b/extensions-core/kafka-indexing-service/src/main/java/io/druid/indexing/kafka/supervisor/KafkaSupervisor.java
@@ -514,9 +514,7 @@ public void checkpoint(int taskGroupId, DataSourceMetadata 
previousCheckpoint, D
 Preconditions.checkNotNull(previousCheckpoint, "previousCheckpoint");
 Preconditions.checkNotNull(currentCheckpoint, "current checkpoint cannot 
be null");
 Preconditions.checkArgument(
-ioConfig.getTopic()
-.equals(((KafkaDataSourceMetadata) 
currentCheckpoint).getKafkaPartitions()
- 
.getTopic()),
+ioConfig.getTopic().equals(((KafkaDataSourceMetadata) 
currentCheckpoint).getKafkaPartitions().getTopic()),
 "Supervisor topic [%s] and topic in checkpoint [%s] does not match",
 ioConfig.getTopic(),
 ((KafkaDataSourceMetadata) 
currentCheckpoint).getKafkaPartitions().getTopic()
@@ -661,6 +659,8 @@ public void handle() throws ExecutionException, 
InterruptedException
 int index = checkpoints.size();
 for (int sequenceId : checkpoints.descendingKeySet()) {
   Map checkpoint = checkpoints.get(sequenceId);
+  // We have already verified the topic of the current checkpoint is 
same with that in ioConfig.
+  // See checkpoint().
   if 
(checkpoint.equals(previousCheckpoint.getKafkaPartitions().getPartitionOffsetMap()))
 {
 break;
   }
@@ -1183,16 +1183,22 @@ public void onFailure(Throwable t)
   Futures.allAsList(futures).get(futureTimeoutInSeconds, TimeUnit.SECONDS);
 }
 catch (Exception e) {
-  Throwables.propagate(e);
+  throw new RuntimeException(e);
 }
 
 final KafkaDataSourceMetadata latestDataSourceMetadata = 
(KafkaDataSourceMetadata) indexerMetadataStorageCoordinator
 .getDataSourceMetadata(dataSource);
-final Map latestOffsetsFromDb = (latestDataSourceMetadata 
== null
-|| 
latestDataSourceMetadata.getKafkaPartitions() == null) ? null
-   
   : latestDataSourceMetadata
-   .getKafkaPartitions()
-   
.getPartitionOffsetMap();
+final boolean hasValidOffsetsFromDb = latestDataSourceMetadata != null &&
+  
latestDataSourceMetadata.getKafkaPartitions() != null &&
+  ioConfig.getTopic().equals(
+  
latestDataSourceMetadata.getKafkaPartitions().getTopic()
+  );
+final Map latestOffsetsFromDb;
+if (hasValidOffsetsFromDb) {
+  latestOffsetsFromDb = 
latestDataSourceMetadata.getKafkaPartitions().getPartitionOffsetMap();
+} else {
+  latestOffsetsFromDb = null;
+}
 
 // order tasks of this taskGroup by the latest sequenceId
 taskSequences.sort((o1, o2) -> 
o2.rhs.firstKey().compareTo(o1.rhs.firstKey()));
@@ -1203,22 +1209,21 @@ public void onFailure(Throwable t)
 
 while (taskIndex < taskSequences.size()) {
   if (earliestConsistentSequenceId.get() == -1) {
-// find the first replica task with earliest sequenceId consistent 
with datasource metadata in the metadata store
+// find the first replica task with earliest sequenceId consistent 
with datasource metadata in the metadata
+// store
 if (taskSequences.get(taskIndex).rhs.entrySet().stream().anyMatch(
 sequenceCheckpoint -> 
sequenceCheckpoint.getValue().entrySet().stream().allMatch(
 partitionOffset -> Longs.compare(
 partitionOffset.getValue(),
-latestOffsetsFromDb == null
-?
-partitionOffset.getValue()
-: 
latestOffsetsFromDb.getOrDefault(partitionOffset.getKey(), 
partitionOffset.getValue())
+latestOffsetsFromDb == null ?
+ 
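The guard added in this patch can be reduced to a single null-safe check: offsets from the metadata store are only usable when they exist and were recorded for the same topic the supervisor is consuming. A standalone sketch with plain maps instead of `KafkaDataSourceMetadata` (method names, topic names, and offset values are illustrative):

```java
import java.util.Map;

public class CheckpointGuardSketch
{
  // Returns the stored offsets only when they exist and were recorded for
  // the supervisor's topic; otherwise null, mirroring the
  // hasValidOffsetsFromDb condition in the patch.
  static Map<Integer, Long> validOffsets(
      String storedTopic,
      Map<Integer, Long> storedOffsets,
      String supervisorTopic
  )
  {
    boolean valid = storedOffsets != null
                    && storedTopic != null
                    && supervisorTopic.equals(storedTopic);
    return valid ? storedOffsets : null;
  }

  public static void main(String[] args)
  {
    Map<Integer, Long> offsets = Map.of(0, 100L, 1, 250L);
    System.out.println("same topic -> " + (validOffsets("metrics", offsets, "metrics") != null));
    System.out.println("topic changed -> " + (validOffsets("old-topic", offsets, "metrics") != null));
    System.out.println("no stored metadata -> " + (validOffsets(null, null, "metrics") != null));
  }
}
```

Treating a topic mismatch the same as "no stored offsets" is what prevents the supervisor from comparing checkpoints against offsets recorded for a different topic.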

[GitHub] samarthjain commented on issue #5471: Implement force push down for nested group by query

2018-07-20 Thread GitBox
samarthjain commented on issue #5471: Implement force push down for nested 
group by query
URL: https://github.com/apache/incubator-druid/pull/5471#issuecomment-406683868
 
 
   @jihoonson - the PR contains my changes only.





Re: Build failure on 0.13.SNAPSHOT

2018-07-20 Thread Jihoon Son
Hi Dongjin,

That is weird. It looks like the VM crashed because it ran out of memory while
running the tests. It might or might not be a real issue.
Have you set any memory configuration for your Maven build?

Jihoon

On Thu, Jul 19, 2018 at 7:09 PM Dongjin Lee  wrote:

> Hi Jihoon,
>
> I ran `mvn clean package` following development/build
> <
> https://github.com/apache/incubator-druid/blob/master/docs/content/development/build.md
> >
> .
>
> Dongjin
>
> On Fri, Jul 20, 2018 at 12:30 AM Jihoon Son  wrote:
>
> > Hi Dongjin,
> >
> > what maven command did you run?
> >
> > Jihoon
> >
> > On Wed, Jul 18, 2018 at 10:38 PM Dongjin Lee  wrote:
> >
> > > Hello. I am trying to build druid, but it fails. My environment is like
> > the
> > > following:
> > >
> > > - CPU: Intel(R) Core(TM) i7-7560U CPU @ 2.40GHz
> > > - RAM: 7704 MB
> > > - OS: ubuntu 18.04
> > > - JDK: openjdk version "1.8.0_171" (default configuration, with
> > MaxHeapSize
> > > = 1928 MB)
> > > - Branch: master (commit: cd8ea3d)
> > >
> > > The error message I got is:
> > >
> > > [INFO]
> > > >
> > 
> > > > [INFO] Reactor Summary:
> > > > [INFO]
> > > > [INFO] io.druid:druid . SUCCESS [
> > > > 50.258 s]
> > > > [INFO] java-util .. SUCCESS
> > > [03:57
> > > > min]
> > > > [INFO] druid-api .. SUCCESS [
> > > > 22.694 s]
> > > > [INFO] druid-common ... SUCCESS [
> > > > 14.083 s]
> > > > [INFO] druid-hll .. SUCCESS [
> > > > 17.126 s]
> > > > [INFO] extendedset  SUCCESS [
> > > > 10.856 s]
> > > >
> > > > *[INFO] druid-processing ... FAILURE
> > > > [04:36 min]*[INFO] druid-aws-common
> ...
> > > > SKIPPED
> > > > [INFO] druid-server ... SKIPPED
> > > > [INFO] druid-examples . SKIPPED
> > > > ...
> > > > [INFO]
> > > >
> > 
> > > > [INFO] BUILD FAILURE
> > > > [INFO]
> > > >
> > 
> > > > [INFO] Total time: 10:29 min
> > > > [INFO] Finished at: 2018-07-19T13:23:31+09:00
> > > > [INFO] Final Memory: 88M/777M
> > > > [INFO]
> > > >
> > 
> > > >
> > > > *[ERROR] Failed to execute goal
> > > > org.apache.maven.plugins:maven-surefire-plugin:2.19.1:test
> > (default-test)
> > > > on project druid-processing: Execution default-test of goal
> > > > org.apache.maven.plugins:maven-surefire-plugin:2.19.1:test failed:
> The
> > > > forked VM terminated without properly saying goodbye. VM crash or
> > > > System.exit called?*[ERROR] Command was /bin/sh -c cd
> > > > /home/djlee/workspace/java/druid/processing &&
> > > > /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xmx3000m
> > > -Duser.language=en
> > > > -Duser.country=US -Dfile.encoding=UTF-8 -Duser.timezone=UTC
> > > > -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
> > > > -Ddruid.indexing.doubleStorage=double -jar
> > > >
> > >
> >
> /home/djlee/workspace/java/druid/processing/target/surefire/surefirebooter1075382243904099051.jar
> > > >
> > >
> >
> /home/djlee/workspace/java/druid/processing/target/surefire/surefire559351134757209tmp
> > > >
> > >
> >
> /home/djlee/workspace/java/druid/processing/target/surefire/surefire_5173894389718744688tmp
> > >
> > >
> > > It seems like it fails when it runs tests on `druid-processing` module
> > but
> > > I can't certain. Is there anyone who can give me some hints? Thanks in
> > > advance.
> > >
> > > Best,
> > > Dongjin
> > >
> > > --
> > > *Dongjin Lee*
> > >
> > > *A hitchhiker in the mathematical world.*
> > >
> > > *github:  github.com/dongjinleekr
> > > linkedin:
> > kr.linkedin.com/in/dongjinleekr
> > > slideshare:
> > > www.slideshare.net/dongjinleekr
> > > *
> > >
> >
> > --
> > *Dongjin Lee*
> >
> > *A hitchhiker in the mathematical world.*
> >
> > *github:  github.com/dongjinleekr
> > linkedin:
> kr.linkedin.com/in/dongjinleekr
> > slideshare:
> www.slideshare.net/dongjinleekr
> > *
> >
>


[GitHub] diliptechno commented on issue #5117: Second task fail - out of memory error

2018-07-20 Thread GitBox
diliptechno commented on issue #5117: Second task fail - out of memory error
URL: 
https://github.com/apache/incubator-druid/issues/5117#issuecomment-406658412
 
 
   How did you fix this problem?
   





[GitHub] murphycrosby opened a new issue #6026: CORs Issue and Access-Control-Allow-Origin

2018-07-20 Thread GitBox
murphycrosby opened a new issue #6026: CORs Issue and 
Access-Control-Allow-Origin
URL: https://github.com/apache/incubator-druid/issues/6026
 
 
   I've set the druid.auth.allowUnauthenticatedHttpOptions flag to true, but 
Druid is still not adding the Cross Origin headers, and I am getting "Response 
to preflight request doesn't pass access control check: No 
'Access-Control-Allow-Origin' header is present on the requested resource." in 
my angular app.  
   
   Is there a way to set the headers in the Druid properties conf?





[GitHub] ashukhira opened a new issue #6025: Druid Query Error

2018-07-20 Thread GitBox
ashukhira opened a new issue #6025: Druid Query Error
URL: https://github.com/apache/incubator-druid/issues/6025
 
 
   Running the query below via a curl call to the Druid broker returns an error:
   
   ```
   curl -X POST "http://localhost:8082/druid/v2/?pretty" \
     -H 'content-type: application/json' \
     -d '{
       "aggregations": [
         {
           "fieldName": "impressions",
           "fieldNames": ["impressions"],
           "name": "SUM(impressions)",
           "type": "doubleSum"
         }
       ],
       "dataSource": "vmaxmediation",
       "dimensions": ["adpartner_id", "adspot_format"],
       "granularity": "all",
       "intervals": "2018-07-13T00:00:00+00:00/2018-07-20T13:48:22+00:00",
       "limitSpec": {
         "columns": [
           {
             "dimension": {
               "aggregate": "SUM",
               "column": {
                 "column_name": "impressions",
                 "description": null,
                 "expression": null,
                 "filterable": false,
                 "groupby": false,
                 "is_dttm": null,
                 "optionName": "_col_impressions",
                 "type": "doubleSum",
                 "verbose_name": null
               },
               "expressionType": "SIMPLE",
               "fromFormData": false,
               "hasCustomLabel": false,
               "label": "SUM(impressions)",
               "optionName": "metric_6nkhk39wj0j_ym12qdvuhqa",
               "sqlExpression": null
             },
             "direction": "descending"
           }
         ],
         "limit": 1,
         "type": "default"
       },
       "postAggregations": [],
       "queryType": "groupBy"
     }'
   ```
   
   The error returned:
   {
   "error" : "Unknown exception",
   "errorMessage" : "Can not deserialize instance of java.lang.String out of 
START_OBJECT token\n at [Source: 
HttpInputOverHTTP@68716b4c[c=837,q=1,[0]=EOF,s=STREAM]; line: 1, column: 814] 
(through reference chain: java.util.ArrayList[0])",
   "errorClass" : "com.fasterxml.jackson.databind.JsonMappingException",
   "host" : null
   }

