[GitHub] [druid] AndyLin0128 opened a new issue #10077: build druid-0.18.1 failed

2020-06-25 Thread GitBox


AndyLin0128 opened a new issue #10077:
URL: https://github.com/apache/druid/issues/10077


   [INFO] Analysis Started
   [INFO] Finished Archive Analyzer (3 seconds)
   [INFO] Finished File Name Analyzer (0 seconds)
   [INFO] Finished Jar Analyzer (2 seconds)
   [ERROR] 
   [ERROR] .NET Assembly Analyzer could not be initialized and at least one 'exe' or 'dll' was scanned. The 'dotnet' executable could not be found on the path; either disable the Assembly Analyzer or add the path to dotnet core in the configuration.
   [ERROR] 
   [INFO] Finished Dependency Merging Analyzer (0 seconds)
   [INFO] Finished Version Filter Analyzer (0 seconds)
   [INFO] Finished Hint Analyzer (0 seconds)
   [INFO] Created CPE Index (3 seconds)
   [INFO] Finished CPE Analyzer (6 seconds)
   [INFO] Finished False Positive Analyzer (0 seconds)
   [INFO] Finished NVD CVE Analyzer (0 seconds)
   [ERROR] Exception occurred initializing RetireJS Analyzer.
   [INFO] Finished Sonatype OSS Index Analyzer (0 seconds)
   [INFO] Finished Vulnerability Suppression Analyzer (0 seconds)
   [INFO] Finished Dependency Bundling Analyzer (0 seconds)
   [INFO] Analysis Complete (14 seconds)
   [WARNING]
   
   [INFO] BUILD FAILURE
   [INFO] 

   [INFO] Total time:  01:03 min
   [INFO] Finished at: 2020-06-25T15:05:54+08:00
   [INFO] 

   [ERROR] Failed to execute goal org.owasp:dependency-check-maven:5.3.2:check (default) on project druid-core: One or more exceptions occurred during dependency-check analysis: One or more exceptions occurred during analysis:
   [ERROR]  Failed to initialize the RetireJS repo: `/root/.m2/repository/org/owasp/dependency-check-utils/5.3.2/../../dependency-check-data/4.0/jsrepository.json` appears to be malformed. Please delete the file or run the dependency-check purge command and re-try running dependency-check.
   [ERROR] -> [Help 1]
   [ERROR]
   [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
   [ERROR] Re-run Maven using the -X switch to enable full debug logging.
   [ERROR]
   [ERROR] For more information about the errors and possible solutions, please read the following articles:
   [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
   [ERROR]
   [ERROR] After correcting the problems, you can resume the build with the command
   [ERROR]   mvn  -rf :druid-core
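
   As the error message itself suggests, this is usually fixed by deleting the malformed RetireJS data file or purging the cached dependency-check data. A sketch of the two options (the resolved path follows the log output above; verify it on your machine):

   ```shell
   # Option 1: delete the malformed RetireJS repository file and re-run the build.
   rm /root/.m2/repository/org/owasp/dependency-check-data/4.0/jsrepository.json

   # Option 2: purge all cached dependency-check data via the Maven plugin's purge goal.
   mvn org.owasp:dependency-check-maven:5.3.2:purge
   ```

   After either step, re-running the build lets the plugin re-download its data.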



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] [druid] ArvinZheng closed issue #8882: equals method in SegmentId looks buggy

2020-06-25 Thread GitBox


ArvinZheng closed issue #8882:
URL: https://github.com/apache/druid/issues/8882


   






[GitHub] [druid] ArvinZheng closed issue #8609: Zero filling for TopN

2020-06-25 Thread GitBox


ArvinZheng closed issue #8609:
URL: https://github.com/apache/druid/issues/8609


   






[GitHub] [druid] jp707049 opened a new issue #10078: How to use `environment variables` in runtime.properties?

2020-06-25 Thread GitBox


jp707049 opened a new issue #10078:
URL: https://github.com/apache/druid/issues/10078


   Hello,
   
   - We have an env variable called `DRUID_HOST` defined on each node, which points to the actual hostname of the node.
   - How can we use `DRUID_HOST` in the runtime.properties of a Druid service?
   
   I tried the following, but it didn't work:
   ```
   druid.host={"type":"environment","variable":"DRUID_HOST"}
   ```
   
   Thank you,
   Jeet
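
   For what it's worth, a common workaround is to render runtime.properties from a template at node startup before launching the service. This is a generic sketch, not a built-in Druid feature; the `@DRUID_HOST@` placeholder and the template file name are made up for illustration:

   ```shell
   # Template with a placeholder for the host (hypothetical file layout).
   printf 'druid.host=@DRUID_HOST@\n' > runtime.properties.template

   # At node startup, substitute the environment variable before starting Druid.
   export DRUID_HOST="$(hostname -f)"
   sed "s/@DRUID_HOST@/${DRUID_HOST}/g" runtime.properties.template > runtime.properties

   cat runtime.properties   # druid.host=<this node's hostname>
   ```

   The same pattern works for any other property you want to derive from the environment.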






[GitHub] [druid] pjain1 merged pull request #10070: Fix balancer strategy

2020-06-25 Thread GitBox


pjain1 merged pull request #10070:
URL: https://github.com/apache/druid/pull/10070


   






[GitHub] [druid] pjain1 closed issue #10067: CostBalancerStrategy over assigns segments to historicals over their max size

2020-06-25 Thread GitBox


pjain1 closed issue #10067:
URL: https://github.com/apache/druid/issues/10067


   






[druid] branch master updated: Fix balancer strategy (#10070)

2020-06-25 Thread pjain1
This is an automated email from the ASF dual-hosted git repository.

pjain1 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 422a8af  Fix balancer strategy (#10070)
422a8af is described below

commit 422a8af14e932d4da0cd7b78d4b729884dd25a34
Author: Parag Jain 
AuthorDate: Thu Jun 25 16:45:00 2020 +0530

Fix balancer strategy (#10070)

* fix server overassignment

* fix random balancer strategy, add more tests

* comment

* added more tests

* fix forbidden apis

* fix typo
---
 .../server/coordinator/CostBalancerStrategy.java   |   9 +-
 .../server/coordinator/RandomBalancerStrategy.java |  14 +-
 .../server/coordinator/BalancerStrategyTest.java   | 124 ++
 .../druid/server/coordinator/RunRulesTest.java | 270 -
 4 files changed, 406 insertions(+), 11 deletions(-)

diff --git a/server/src/main/java/org/apache/druid/server/coordinator/CostBalancerStrategy.java b/server/src/main/java/org/apache/druid/server/coordinator/CostBalancerStrategy.java
index 5d656d6..e5e3cb5 100644
--- a/server/src/main/java/org/apache/druid/server/coordinator/CostBalancerStrategy.java
+++ b/server/src/main/java/org/apache/druid/server/coordinator/CostBalancerStrategy.java
@@ -367,7 +367,8 @@ public class CostBalancerStrategy implements BalancerStrategy
       final boolean includeCurrentServer
   )
   {
-    Pair<Double, ServerHolder> bestServer = Pair.of(Double.POSITIVE_INFINITY, null);
+    final Pair<Double, ServerHolder> noServer = Pair.of(Double.POSITIVE_INFINITY, null);
+    Pair<Double, ServerHolder> bestServer = noServer;
 
     List<ListenableFuture<Pair<Double, ServerHolder>>> futures = new ArrayList<>();
 
@@ -391,7 +392,11 @@ public class CostBalancerStrategy implements BalancerStrategy
           bestServers.add(server);
         }
       }
-
+      // If the best server list contains server whose cost of serving the segment is INFINITE then this means
+      // no usable servers are found so return a null server so that segment assignment does not happen
+      if (bestServers.get(0).lhs.isInfinite()) {
+        return noServer;
+      }
       // Randomly choose a server from the best servers
       bestServer = bestServers.get(ThreadLocalRandom.current().nextInt(bestServers.size()));
     }
diff --git a/server/src/main/java/org/apache/druid/server/coordinator/RandomBalancerStrategy.java b/server/src/main/java/org/apache/druid/server/coordinator/RandomBalancerStrategy.java
index 72fdedf..de3e46e 100644
--- a/server/src/main/java/org/apache/druid/server/coordinator/RandomBalancerStrategy.java
+++ b/server/src/main/java/org/apache/druid/server/coordinator/RandomBalancerStrategy.java
@@ -28,20 +28,22 @@ import java.util.List;
 import java.util.NavigableSet;
 import java.util.Set;
 import java.util.concurrent.ThreadLocalRandom;
+import java.util.stream.Collectors;
 
 public class RandomBalancerStrategy implements BalancerStrategy
 {
   @Override
   public ServerHolder findNewSegmentHomeReplicator(DataSegment proposalSegment, List<ServerHolder> serverHolders)
   {
-    if (serverHolders.size() == 1) {
+    // filter out servers whose avaialable size is less than required for this segment and those already serving this segment
+    final List<ServerHolder> usableServerHolders = serverHolders.stream().filter(
+        serverHolder -> serverHolder.getAvailableSize() >= proposalSegment.getSize() && !serverHolder.isServingSegment(proposalSegment)
+    ).collect(Collectors.toList());
+    if (usableServerHolders.size() == 0) {
       return null;
     } else {
-      ServerHolder holder = serverHolders.get(ThreadLocalRandom.current().nextInt(serverHolders.size()));
-      while (holder.isServingSegment(proposalSegment)) {
-        holder = serverHolders.get(ThreadLocalRandom.current().nextInt(serverHolders.size()));
-      }
-      return holder;
+      return usableServerHolders.get(ThreadLocalRandom.current().nextInt(usableServerHolders.size()));
     }
   }
 
 
diff --git a/server/src/test/java/org/apache/druid/server/coordinator/BalancerStrategyTest.java b/server/src/test/java/org/apache/druid/server/coordinator/BalancerStrategyTest.java
new file mode 100644
index 000..b4d3ac5
--- /dev/null
+++ b/server/src/test/java/org/apache/druid/server/coordinator/BalancerStrategyTest.java
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS

[GitHub] [druid] liran-funaro commented on pull request #10001: Optimizing incremental-index ingestion using off-heap key/value map (OakMap)

2020-06-25 Thread GitBox


liran-funaro commented on pull request #10001:
URL: https://github.com/apache/druid/pull/10001#issuecomment-649515567


   Hi @jihoonson, have you had a chance to check out our issue/PR? We will be happy to answer any questions you might have.






[GitHub] [druid] zenfenan commented on issue #6743: IncrementalIndex generally overestimates theta sketch size

2020-06-25 Thread GitBox


zenfenan commented on issue #6743:
URL: https://github.com/apache/druid/issues/6743#issuecomment-649569424


   This is still an issue. We have `thetaSketch`-based metrics, and ingestion flushes frequently with few records per flush. For the time being, we have set `maxBytesInMemory` to -1 and configured `maxRowsInMemory` to a guesstimated number, but it would really help if `maxBytesInMemory` worked, since we could then keep segments in the recommended 500-700MB range.
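
   For anyone hitting the same problem, the interim workaround described above corresponds to a `tuningConfig` fragment along these lines (the `type` and the `maxRowsInMemory` value are illustrative guesses for a Kafka ingestion spec, not recommendations):
   ```
   "tuningConfig": {
     "type": "kafka",
     "maxBytesInMemory": -1,
     "maxRowsInMemory": 150000
   }
   ```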






[GitHub] [druid] suneet-s commented on a change in pull request #10060: More prominent instructions on code coverage failure

2020-06-25 Thread GitBox


suneet-s commented on a change in pull request #10060:
URL: https://github.com/apache/druid/pull/10060#discussion_r445611807



##
File path: .travis.yml
##
@@ -183,7 +183,7 @@ jobs:
   --log-template "totals-complete"
   --log-template "errors"
   --
-  || { printf "\nDiff code coverage check failed. To view coverage 
report, run 'mvn clean test jacoco:report' and open 
'target/site/jacoco/index.html'\n" && false; }
+  || { printf "\n\nFAILED\nDiff code coverage check failed. To 
view coverage report, run 'mvn clean test jacoco:report' and open 
'target/site/jacoco/index.html'\nFor more instructions on how to run code 
coverage locally, follow instructions here - 
https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md#running-code-coverage-locally\n\n";
 && false; }

Review comment:
   ```suggestion
 || { printf "\n\nFAILED\nDiff code coverage check failed. 
To view coverage report, run 'mvn clean test jacoco:report' and open 
'target/site/jacoco/index.html'\nFor more details on how to run code coverage 
locally, follow instructions here - 
https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md#running-code-coverage-locally\n\n";
 && false; }
   ```








[GitHub] [druid] FrankChen021 commented on a change in pull request #9898: support Aliyun OSS service as deep storage

2020-06-25 Thread GitBox


FrankChen021 commented on a change in pull request #9898:
URL: https://github.com/apache/druid/pull/9898#discussion_r445612812



##
File path: 
extensions-contrib/aliyun-oss-extensions/src/main/java/org/apache/druid/data/input/aliyun/OssInputSource.java
##
@@ -0,0 +1,178 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.data.input.aliyun;
+
+import com.aliyun.oss.OSS;
+import com.aliyun.oss.model.OSSObjectSummary;
+import com.fasterxml.jackson.annotation.JacksonInject;
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.google.common.base.Preconditions;
+import com.google.common.base.Supplier;
+import com.google.common.base.Suppliers;
+import org.apache.druid.data.input.InputEntity;
+import org.apache.druid.data.input.InputFileAttribute;
+import org.apache.druid.data.input.InputSplit;
+import org.apache.druid.data.input.SplitHintSpec;
+import org.apache.druid.data.input.impl.CloudObjectInputSource;
+import org.apache.druid.data.input.impl.CloudObjectLocation;
+import org.apache.druid.data.input.impl.SplittableInputSource;
+import org.apache.druid.storage.aliyun.OssInputDataConfig;
+import org.apache.druid.storage.aliyun.OssStorageDruidModule;
+import org.apache.druid.storage.aliyun.OssUtils;
+import org.apache.druid.utils.Streams;
+
+import javax.annotation.Nonnull;
+import javax.annotation.Nullable;
+import java.net.URI;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Objects;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+public class OssInputSource extends CloudObjectInputSource
+{
+  private final Supplier clientSupplier;
+  @JsonProperty("properties")
+  private final OssClientConfig inputSourceConfig;
+  private final OssInputDataConfig inputDataConfig;
+
+  /**
+   * Constructor for OssInputSource
+   *
+   * @param clientThe default client built with all default configs
+   *  from Guice. This injected singleton client is 
used when {@param inputSourceConfig}
+   *  is not provided and hence
+   * @param inputDataConfig   Stores the configuration for options related to 
reading input data
+   * @param uris  User provided uris to read input data
+   * @param prefixes  User provided prefixes to read input data
+   * @param objects   User provided cloud objects values to read input 
data
+   * @param inputSourceConfig User provided properties for overriding the 
default aliyun-oss configuration
+   */
+  @JsonCreator
+  public OssInputSource(
+  @JacksonInject OSS client,
+  @JacksonInject OssInputDataConfig inputDataConfig,
+  @JsonProperty("uris") @Nullable List uris,
+  @JsonProperty("prefixes") @Nullable List prefixes,
+  @JsonProperty("objects") @Nullable List objects,
+  @JsonProperty("properties") @Nullable OssClientConfig inputSourceConfig
+  )
+  {
+super(OssStorageDruidModule.SCHEME, uris, prefixes, objects);
+this.inputDataConfig = Preconditions.checkNotNull(inputDataConfig, 
"inputDataConfig");
+Preconditions.checkNotNull(client, "client");
+this.inputSourceConfig = inputSourceConfig;
+this.clientSupplier = Suppliers.memoize(
+() -> {
+  if (inputSourceConfig != null) {
+return inputSourceConfig.buildClient();
+  } else {
+return client;
+  }
+}
+);
+  }
+
+
+  @Nullable
+  @JsonProperty("properties")
+  public OssClientConfig getOssInputSourceConfig()
+  {
+return inputSourceConfig;
+  }
+
+  @Override
+  protected InputEntity createEntity(CloudObjectLocation location)
+  {
+return new OssEntity(clientSupplier.get(), location);
+  }
+
+  @Override
+  protected Stream>> 
getPrefixesSplitStream(@Nonnull SplitHintSpec splitHintSpec)
+  {
+final Iterator> splitIterator = splitHintSpec.split(
+getIterableObjectsFromPrefixes().iterator(),
+object -> new InputFileAttribute(object.getSize())
+);
+
+return Streams.sequentialStreamFrom(splitIterator)
+  .map(objects -> objects.stream()
+

[GitHub] [druid] FrankChen021 commented on a change in pull request #9898: support Aliyun OSS service as deep storage

2020-06-25 Thread GitBox


FrankChen021 commented on a change in pull request #9898:
URL: https://github.com/apache/druid/pull/9898#discussion_r445615267



##
File path: 
extensions-contrib/aliyun-oss-extensions/src/main/java/org/apache/druid/storage/aliyun/ObjectSummaryIterator.java
##
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.storage.aliyun;
+
+import com.aliyun.oss.OSS;
+import com.aliyun.oss.OSSException;
+import com.aliyun.oss.model.ListObjectsRequest;
+import com.aliyun.oss.model.OSSObjectSummary;
+import com.aliyun.oss.model.ObjectListing;
+import org.apache.druid.java.util.common.RE;
+
+import java.net.URI;
+import java.util.Iterator;
+import java.util.NoSuchElementException;
+
+/**
+ * Iterator class used by {@link OssUtils#objectSummaryIterator}.
+ * 
+ * As required by the specification of that method, this iterator is computed 
incrementally in batches of
+ * {@code maxListLength}. The first call is made at the same time the iterator 
is constructed.
+ *
+ */
+public class ObjectSummaryIterator implements Iterator
+{
+  private final OSS client;
+  private final Iterator prefixesIterator;
+  private final int maxListingLength;
+
+  private ListObjectsRequest request;
+  private ObjectListing result;
+  private Iterator objectSummaryIterator;
+  private OSSObjectSummary currentObjectSummary;
+
+  ObjectSummaryIterator(
+  final OSS client,
+  final Iterable prefixes,
+  final int maxListingLength
+  )
+  {
+this.client = client;
+this.prefixesIterator = prefixes.iterator();
+this.maxListingLength = maxListingLength;
+
+prepareNextRequest();
+fetchNextBatch();
+advanceObjectSummary();
+  }
+
+  @Override
+  public boolean hasNext()
+  {
+return currentObjectSummary != null;
+  }
+
+  @Override
+  public OSSObjectSummary next()
+  {
+if (currentObjectSummary == null) {
+  throw new NoSuchElementException();
+}
+
+final OSSObjectSummary retVal = currentObjectSummary;
+advanceObjectSummary();
+return retVal;
+  }
+
+  private void prepareNextRequest()
+  {
+final URI currentUri = prefixesIterator.next();
+final String currentBucket = currentUri.getAuthority();
+final String currentPrefix = OssUtils.extractKey(currentUri);
+
+request = new ListObjectsRequest(currentBucket, currentPrefix, null, null, 
maxListingLength);
+  }
+
+  private void fetchNextBatch()
+  {
+try {
+  result = OssUtils.retry(() -> client.listObjects(request));
+  request.setMarker(result.getNextMarker());
+  objectSummaryIterator = result.getObjectSummaries().iterator();
+}
+catch (OSSException e) {
+  throw new RE(
+  e,
+  "Failed to get object summaries from S3 bucket[%s], prefix[%s]; S3 
error: %s",
+  request.getBucketName(),
+  request.getPrefix(),
+  e.getMessage()
+  );
+}
+catch (Exception e) {
+  throw new RE(
+  e,
+  "Failed to get object summaries from S3 bucket[%s], prefix[%s]",
+  request.getBucketName(),
+  request.getPrefix()
+  );
+}
+  }
+
+  /**
+   * Advance objectSummaryIterator to the next non-placeholder, updating 
"currentObjectSummary".
+   */
+  private void advanceObjectSummary()
+  {
+while (objectSummaryIterator.hasNext() || result.isTruncated() || 
prefixesIterator.hasNext()) {
+  while (objectSummaryIterator.hasNext()) {
+currentObjectSummary = objectSummaryIterator.next();
+// skips directories and empty objects
+if (!isDirectoryPlaceholder(currentObjectSummary) && 
currentObjectSummary.getSize() > 0) {
+  return;
+}
+  }
+
+  // Exhausted "objectSummaryIterator" without finding a non-placeholder.
+  if (result.isTruncated()) {
+fetchNextBatch();
+  } else if (prefixesIterator.hasNext()) {
+prepareNextRequest();
+fetchNextBatch();
+  }
+}
+
+// Truly nothing left to read.
+currentObjectSummary = null;
+  }
+
+  /**
+   * Checks if a given object is a directory placeholder and should be ignored.

Review comment:
   There's no directory placeholder in aliyun OSS, so I made change to thi

[GitHub] [druid] FrankChen021 commented on a change in pull request #9898: support Aliyun OSS service as deep storage

2020-06-25 Thread GitBox


FrankChen021 commented on a change in pull request #9898:
URL: https://github.com/apache/druid/pull/9898#discussion_r445616439



##
File path: 
extensions-contrib/aliyun-oss-extensions/src/main/java/org/apache/druid/storage/aliyun/OssDataSegmentPusher.java
##
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.storage.aliyun;
+
+import com.aliyun.oss.OSS;
+import com.aliyun.oss.OSSException;
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import com.google.inject.Inject;
+import org.apache.druid.java.util.common.StringUtils;
+import org.apache.druid.java.util.emitter.EmittingLogger;
+import org.apache.druid.segment.SegmentUtils;
+import org.apache.druid.segment.loading.DataSegmentPusher;
+import org.apache.druid.timeline.DataSegment;
+import org.apache.druid.utils.CompressionUtils;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+import java.util.List;
+import java.util.Map;
+
+public class OssDataSegmentPusher implements DataSegmentPusher
+{
+  private static final EmittingLogger log = new 
EmittingLogger(OssDataSegmentPusher.class);
+
+  private final OSS client;
+  private final OssStorageConfig config;
+
+  @Inject
+  public OssDataSegmentPusher(
+  OSS client,
+  OssStorageConfig config
+  )
+  {
+this.client = client;
+this.config = config;
+  }
+
+  @Override
+  public String getPathForHadoop()
+  {
+return StringUtils.format("%s/%s", config.getBucket(), config.getPrefix());

Review comment:
   It has not been tested in a Hadoop cluster yet, because in our test environment all data are ingested from Kafka.








[GitHub] [druid] FrankChen021 commented on pull request #9898: support Aliyun OSS service as deep storage

2020-06-25 Thread GitBox


FrankChen021 commented on pull request #9898:
URL: https://github.com/apache/druid/pull/9898#issuecomment-649602917


   Hi @jon-wei , I've updated the code and re-tested all core functions. The IT test cases run OK, and real-time ingestion, compact, and kill tasks run well in our test cluster. Please check it again.






[GitHub] [druid] FrankChen021 edited a comment on pull request #9898: support Aliyun OSS service as deep storage

2020-06-25 Thread GitBox


FrankChen021 edited a comment on pull request #9898:
URL: https://github.com/apache/druid/pull/9898#issuecomment-649602917


   Hi @jon-wei , I've updated the code and re-tested all core functions. The IT test cases run OK, and real-time ingestion, compact, and kill tasks, as well as index task log persistence, run well in our test cluster. Please check it again.






[GitHub] [druid] jihoonson commented on a change in pull request #10076: ensure ParallelMergeCombiningSequence closes it's closeables

2020-06-25 Thread GitBox


jihoonson commented on a change in pull request #10076:
URL: https://github.com/apache/druid/pull/10076#discussion_r445696870



##
File path: 
core/src/main/java/org/apache/druid/java/util/common/guava/ParallelMergeCombiningSequence.java
##
@@ -1350,4 +1363,24 @@ long getTotalCpuTimeNanos()
   return totalCpuTimeNanos;
 }
   }
+
+  private static <T> void closeAllCursors(final PriorityQueue<BatchedResultsCursor<T>> pQueue)
+  {
+    Closer closer = Closer.create();
+    while (!pQueue.isEmpty()) {
+      final BatchedResultsCursor<T> yielder = pQueue.poll();
+      if (yielder != null) {
+        // Note: yielder can be null if our comparator threw an exception during queue.add.
+        closer.register(yielder);

Review comment:
   nit: `Closer.register()` has a null check. You can use `registerAll()` instead.

##
File path: 
core/src/main/java/org/apache/druid/java/util/common/guava/ParallelMergeCombiningSequence.java
##
@@ -1036,11 +1047,13 @@ public boolean isReleasable()
 @Override
 public void close()
 {
-  try {
-yielder.close();
-  }
-  catch (IOException e) {
-throw new RuntimeException("Failed to close yielder", e);
+  if (yielder != null) {
+try {
+  yielder.close();
+}
+catch (IOException e) {
+  throw new RuntimeException("Failed to close yielder", e);

Review comment:
   It seems the exception will be eventually handled by `CloseQuietly`.

##
File path: 
core/src/main/java/org/apache/druid/java/util/common/guava/ParallelMergeCombiningSequence.java
##
@@ -1350,4 +1363,24 @@ long getTotalCpuTimeNanos()
   return totalCpuTimeNanos;
 }
   }
+
+  private static <T> void closeAllCursors(final PriorityQueue<BatchedResultsCursor<T>> pQueue)
+  {
+    Closer closer = Closer.create();
+    while (!pQueue.isEmpty()) {
+      final BatchedResultsCursor<T> yielder = pQueue.poll();
+      if (yielder != null) {
+        // Note: yielder can be null if our comparator threw an exception during queue.add.
+        closer.register(yielder);
+      }
+    }
+    CloseQuietly.close(closer);
+  }
+
+  private static <T> void closeAllCursors(final List<BatchedResultsCursor<T>> list)

Review comment:
   You can merge these methods if you use `registerAll()` above.








[GitHub] [druid] jihoonson commented on a change in pull request #10027: fix query memory leak

2020-06-25 Thread GitBox


jihoonson commented on a change in pull request #10027:
URL: https://github.com/apache/druid/pull/10027#discussion_r445720081



##
File path: 
processing/src/main/java/org/apache/druid/query/ChainedExecutionQueryRunner.java
##
@@ -141,33 +144,34 @@ public ChainedExecutionQueryRunner(
   );
 }
 )
-)
-);
+);
 
-    queryWatcher.registerQueryFuture(query, futures);
+    ListenableFuture<List<Iterable<T>>> future = Futures.allAsList(futures);
+    queryWatcher.registerQueryFuture(query, future);
 
     try {
       return new MergeIterable<>(
           ordering.nullsFirst(),
           QueryContexts.hasTimeout(query) ?
-              futures.get(QueryContexts.getTimeout(query), TimeUnit.MILLISECONDS) :
-              futures.get()
+              future.get(QueryContexts.getTimeout(query), TimeUnit.MILLISECONDS) :
+              future.get()
       ).iterator();
     }
     catch (InterruptedException e) {
       log.noStackTrace().warn(e, "Query interrupted, cancelling pending results, query id [%s]", query.getId());
-      futures.cancel(true);
+      GuavaUtils.cancelAll(true, ImmutableList.builder().add(future).addAll(futures).build());

Review comment:
   It seems easy to forget to cancel `future`, which is error-prone. How about 
modifying `GuavaUtils.cancelAll()` to take `future` as well? It would look like:
   
   ```java
   public static <F extends Future<?>> void cancelAll(
       boolean mayInterruptIfRunning,
       @Nullable ListenableFuture<?> combinedFuture,
       List<F> futures
   )
   {
     final List<Future<?>> allFuturesToCancel = new ArrayList<>(futures);
     allFuturesToCancel.add(combinedFuture);
     if (allFuturesToCancel.isEmpty()) {
       return;
     }
     allFuturesToCancel.forEach(f -> {
       try {
         f.cancel(mayInterruptIfRunning);
       }
       catch (Throwable t) {
         log.warn(t, "Error while cancelling future.");
       }
     });
   }
   ```
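   To make the suggested signature concrete, here is a stdlib-only sketch of the same idea: plain `java.util.concurrent.Future` in place of Guava's `ListenableFuture`, a null check in place of `@Nullable`, and printing in place of Druid's logger. The class name is hypothetical.

   ```java
   import java.util.ArrayList;
   import java.util.List;
   import java.util.concurrent.Future;

   // Sketch of the suggested cancelAll(): cancel the combined future plus every
   // underlying future, and never let one failing cancel() stop the rest.
   final class CancelAllSketch
   {
     static void cancelAll(boolean mayInterruptIfRunning, Future<?> combinedFuture, List<? extends Future<?>> futures)
     {
       final List<Future<?>> allFuturesToCancel = new ArrayList<>(futures);
       if (combinedFuture != null) {
         allFuturesToCancel.add(combinedFuture);
       }
       for (Future<?> f : allFuturesToCancel) {
         try {
           f.cancel(mayInterruptIfRunning);
         }
         catch (Throwable t) {
           // best effort: report and keep cancelling the remaining futures
           System.err.println("Error while cancelling future: " + t);
         }
       }
     }
   }
   ```

   A call site then only needs the one call the diff's catch block would make, instead of remembering to cancel the combined future separately.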

##
File path: 
processing/src/main/java/org/apache/druid/query/ChainedExecutionQueryRunner.java
##
@@ -141,33 +144,34 @@ public ChainedExecutionQueryRunner(
   );
 }
 )
-)
-);
+);
 
-    queryWatcher.registerQueryFuture(query, futures);
+    ListenableFuture<List<Iterable<T>>> future = Futures.allAsList(futures);
+    queryWatcher.registerQueryFuture(query, future);
 
     try {
       return new MergeIterable<>(
           ordering.nullsFirst(),
           QueryContexts.hasTimeout(query) ?
-              futures.get(QueryContexts.getTimeout(query), TimeUnit.MILLISECONDS) :
-              futures.get()
+              future.get(QueryContexts.getTimeout(query), TimeUnit.MILLISECONDS) :
+              future.get()
       ).iterator();
     }
     catch (InterruptedException e) {
       log.noStackTrace().warn(e, "Query interrupted, cancelling pending results, query id [%s]", query.getId());
-      futures.cancel(true);
+      GuavaUtils.cancelAll(true, ImmutableList.builder().add(future).addAll(futures).build());

Review comment:
   Or, a more structured way to do this could be adding a new `CombinedFuture` 
like this:
   
   ```java
   public static class CombinedFuture<T> implements Future<List<T>>
   {
     private final List<ListenableFuture<T>> underlyingFutures;
     private final ListenableFuture<List<T>> combined;

     public CombinedFuture(List<ListenableFuture<T>> futures)
     {
       this.underlyingFutures = futures;
       this.combined = Futures.allAsList(futures);
     }

     @Override
     public boolean cancel(boolean mayInterruptIfRunning)
     {
       if (combined.isDone() || combined.isCancelled()) {
         return false;
       } else {
         cancelAll(mayInterruptIfRunning, combined, underlyingFutures);
         return true;
       }
     }

     @Override
     public boolean isCancelled()
     {
       return combined.isCancelled();
     }

     @Override
     public boolean isDone()
     {
       return combined.isDone();
     }

     @Override
     public List<T> get() throws InterruptedException, ExecutionException
     {
       return combined.get();
     }

     @Override
     public List<T> get(long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException
     {
       return combined.get(timeout, unit);
     }
   }
   ```
   
   I'm fine with either way.

##
File path: 
processing/src/test/java/org/apache/druid/query/groupby/GroupByQueryRunnerFailureTest.java
##
@@ -281,4 +281,41 @@ public void tes

[GitHub] [druid] jihoonson edited a comment on pull request #10013: Add NonnullPair

2020-06-25 Thread GitBox


jihoonson edited a comment on pull request #10013:
URL: https://github.com/apache/druid/pull/10013#issuecomment-649726856


   @clintropolis thank you for the review.
   
   > Though now I wonder if `Pair` should be renamed `NullablePair` and this 
should be renamed `Pair` 🤔 😜
   
   Yeah, I agree it would be better. However, IntelliJ finds 1286 usages of 
`Pair` as of now, so renaming it would cause a bunch of potential conflicts.






[GitHub] [druid] jihoonson commented on pull request #10013: Add NonnullPair

2020-06-25 Thread GitBox


jihoonson commented on pull request #10013:
URL: https://github.com/apache/druid/pull/10013#issuecomment-649726856


   > Though now I wonder if `Pair` should be renamed `NullablePair` and this 
should be renamed `Pair` 🤔 😜
   
   Yeah, I agree it would be better. However, IntelliJ finds 1286 usages of 
`Pair` as of now, so renaming it would cause a bunch of potential conflicts.






[GitHub] [druid] ccaominh commented on pull request #10033: Allow append to existing datasources when dynamic partitioning is used

2020-06-25 Thread GitBox


ccaominh commented on pull request #10033:
URL: https://github.com/apache/druid/pull/10033#issuecomment-649737980


   > ```
   > 133 F   default boolean sharePartitionSpace(PartialShardSpec partialShardSpec)
   > 134 F   {
   > 135 F | L | B(0/2)    return !partialShardSpec.useNonRootGenerationPartitionSpace();
   > 136     }
   > ```
   > 
   > The test coverage still complains about this default method, but I believe 
it's being tested in the new tests I added in 
[b423e96](https://github.com/apache/druid/commit/b423e964d258d2ba2f9760fa3224026157e98d0d).
   
   I think I know what's going on with the code coverage check:
   
   The coverage check is looking for more unit tests in `core`, since that's 
where `ShardSpec` lives. 
`NumberedOverwriteShardSpecTest.testSharePartitionSpace()` is in `core`; 
however, it is using `NumberedOverwriteShardSpec` and since 
`OverwriteShardSpec` overrides `sharePartitionSpace()`, the test is not 
covering `ShardSpec.sharePartitionSpace()`. The other unit tests that were 
added in 
https://github.com/apache/druid/commit/b423e964d258d2ba2f9760fa3224026157e98d0d 
are in `server`, so they're not counted.






[GitHub] [druid] jon-wei commented on a change in pull request #9898: support Aliyun OSS service as deep storage

2020-06-25 Thread GitBox


jon-wei commented on a change in pull request #9898:
URL: https://github.com/apache/druid/pull/9898#discussion_r445754298



##
File path: 
extensions-contrib/aliyun-oss-extensions/src/main/java/org/apache/druid/storage/aliyun/ObjectSummaryIterator.java
##
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.storage.aliyun;
+
+import com.aliyun.oss.OSS;
+import com.aliyun.oss.OSSException;
+import com.aliyun.oss.model.ListObjectsRequest;
+import com.aliyun.oss.model.OSSObjectSummary;
+import com.aliyun.oss.model.ObjectListing;
+import org.apache.druid.java.util.common.RE;
+
+import java.net.URI;
+import java.util.Iterator;
+import java.util.NoSuchElementException;
+
+/**
+ * Iterator class used by {@link OssUtils#objectSummaryIterator}.
+ * 
+ * As required by the specification of that method, this iterator is computed 
incrementally in batches of
+ * {@code maxListLength}. The first call is made at the same time the iterator 
is constructed.
+ *
+ */
+public class ObjectSummaryIterator implements Iterator<OSSObjectSummary>
+{
+  private final OSS client;
+  private final Iterator<URI> prefixesIterator;
+  private final int maxListingLength;
+
+  private ListObjectsRequest request;
+  private ObjectListing result;
+  private Iterator<OSSObjectSummary> objectSummaryIterator;
+  private OSSObjectSummary currentObjectSummary;
+
+  ObjectSummaryIterator(
+      final OSS client,
+      final Iterable<URI> prefixes,
+      final int maxListingLength
+  )
+  {
+this.client = client;
+this.prefixesIterator = prefixes.iterator();
+this.maxListingLength = maxListingLength;
+
+prepareNextRequest();
+fetchNextBatch();
+advanceObjectSummary();
+  }
+
+  @Override
+  public boolean hasNext()
+  {
+return currentObjectSummary != null;
+  }
+
+  @Override
+  public OSSObjectSummary next()
+  {
+if (currentObjectSummary == null) {
+  throw new NoSuchElementException();
+}
+
+final OSSObjectSummary retVal = currentObjectSummary;
+advanceObjectSummary();
+return retVal;
+  }
+
+  private void prepareNextRequest()
+  {
+final URI currentUri = prefixesIterator.next();
+final String currentBucket = currentUri.getAuthority();
+final String currentPrefix = OssUtils.extractKey(currentUri);
+
+    request = new ListObjectsRequest(currentBucket, currentPrefix, null, null, maxListingLength);
+  }
+
+  private void fetchNextBatch()
+  {
+try {
+  result = OssUtils.retry(() -> client.listObjects(request));
+  request.setMarker(result.getNextMarker());
+  objectSummaryIterator = result.getObjectSummaries().iterator();
+}
+catch (OSSException e) {
+      throw new RE(
+          e,
+          "Failed to get object summaries from OSS bucket[%s], prefix[%s]; OSS error: %s",
+          request.getBucketName(),
+          request.getPrefix(),
+          e.getMessage()
+      );
+}
+catch (Exception e) {
+      throw new RE(
+          e,
+          "Failed to get object summaries from OSS bucket[%s], prefix[%s]",
+          request.getBucketName(),
+          request.getPrefix()
+      );
+}
+  }
+
+  /**
+   * Advance objectSummaryIterator to the next non-placeholder, updating "currentObjectSummary".
+   */
+  private void advanceObjectSummary()
+  {
+    while (objectSummaryIterator.hasNext() || result.isTruncated() || prefixesIterator.hasNext()) {
+  while (objectSummaryIterator.hasNext()) {
+currentObjectSummary = objectSummaryIterator.next();
+// skips directories and empty objects
+        if (!isDirectoryPlaceholder(currentObjectSummary) && currentObjectSummary.getSize() > 0) {
+  return;
+}
+  }
+
+  // Exhausted "objectSummaryIterator" without finding a non-placeholder.
+  if (result.isTruncated()) {
+fetchNextBatch();
+  } else if (prefixesIterator.hasNext()) {
+prepareNextRequest();
+fetchNextBatch();
+  }
+}
+
+// Truly nothing left to read.
+currentObjectSummary = null;
+  }
+
+  /**
+   * Checks if a given object is a directory placeholder and should be ignored.

Review comment:
   @FrankChen021 The code is still derived from jets3t code, so it must be 
acc

[GitHub] [druid] ccaominh commented on pull request #10033: Allow append to existing datasources when dynamic partitioning is used

2020-06-25 Thread GitBox


ccaominh commented on pull request #10033:
URL: https://github.com/apache/druid/pull/10033#issuecomment-649750336


   Looking at the code coverage issue a bit more:
   
   All the ShardSpecs live in `core`:
   ```
   find . -name "*ShardSpec.java"
   
   
./core/src/main/java/org/apache/druid/timeline/partition/NumberedOverwriteShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/OverwriteShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/RangeBucketShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/NumberedPartialShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/HashBucketShardSpec.java
   ./core/src/main/java/org/apache/druid/timeline/partition/NoneShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/SingleDimensionShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/NumberedShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/HashBasedNumberedPartialShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/HashBasedNumberedShardSpec.java
   ./core/src/main/java/org/apache/druid/timeline/partition/LinearShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/SingleDimensionPartialShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/NumberedOverwritePartialShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/BucketNumberedShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/BuildingHashBasedNumberedShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/BuildingNumberedShardSpec.java
   ./core/src/main/java/org/apache/druid/timeline/partition/ShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/PartialShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/LinearPartialShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/BuildingSingleDimensionShardSpec.java
   
./core/src/main/java/org/apache/druid/timeline/partition/BuildingShardSpec.java
   
./server/src/main/java/org/apache/druid/segment/realtime/appenderator/SegmentIdWithShardSpec.java
   
./indexing-hadoop/src/main/java/org/apache/druid/indexer/HadoopyShardSpec.java
   ```
   
   But some of their unit tests live in `server` (e.g., 
`SingleDimensionShardSpecTest`):
   ```
   find . -name "*ShardSpecTest.java"
   
   
./core/src/test/java/org/apache/druid/timeline/partition/NumberedOverwritePartialShardSpecTest.java
   
./core/src/test/java/org/apache/druid/timeline/partition/NoneShardSpecTest.java
   
./core/src/test/java/org/apache/druid/timeline/partition/BuildingSingleDimensionShardSpecTest.java
   
./core/src/test/java/org/apache/druid/timeline/partition/BuildingHashBasedNumberedShardSpecTest.java
   
./core/src/test/java/org/apache/druid/timeline/partition/HashBucketShardSpecTest.java
   
./core/src/test/java/org/apache/druid/timeline/partition/SingleDimensionPartialShardSpecTest.java
   
./core/src/test/java/org/apache/druid/timeline/partition/NumberedOverwriteShardSpecTest.java
   
./core/src/test/java/org/apache/druid/timeline/partition/NumberedPartialShardSpecTest.java
   
./core/src/test/java/org/apache/druid/timeline/partition/HashBasedNumberedPartialShardSpecTest.java
   
./core/src/test/java/org/apache/druid/timeline/partition/BuildingNumberedShardSpecTest.java
   
./core/src/test/java/org/apache/druid/timeline/partition/RangeBucketShardSpecTest.java
   
./server/src/test/java/org/apache/druid/segment/realtime/appenderator/SegmentIdWithShardSpecTest.java
   
./server/src/test/java/org/apache/druid/server/shard/NumberedShardSpecTest.java
   
./server/src/test/java/org/apache/druid/server/shard/SingleDimensionShardSpecTest.java
   
./server/src/test/java/org/apache/druid/timeline/partition/HashBasedNumberedShardSpecTest.java
   ```
   
   If those unit tests had been in `core` instead of `server` then the coverage 
check for this PR would have passed since the relevant unit tests were added to 
`SingleDimensionShardSpecTest`, for example.
   
   I suggest we do a followup PR to move the `ShardSpec` tests from `server` to 
`core` and proceed with merging this PR, since the coverage check failure is a 
result of the prior misplacement of test classes.






[GitHub] [druid] clintropolis merged pull request #10075: fix dropwizard emitter jvm bufferpoolName metric

2020-06-25 Thread GitBox


clintropolis merged pull request #10075:
URL: https://github.com/apache/druid/pull/10075


   






[druid] branch master updated (422a8af -> 0f51b3c)

2020-06-25 Thread cwylie
This is an automated email from the ASF dual-hosted git repository.

cwylie pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 422a8af  Fix balancer strategy (#10070)
 add 0f51b3c  fix dropwizard emitter jvm bufferpoolName metric (#10075)

No new revisions were added by this update.

Summary of changes:
 docs/design/extensions-contrib/dropwizard.md  | 15 +--
 docs/operations/metrics.md|  6 +++---
 .../WhiteListBasedDruidToTimelineEventConverterTest.java  |  2 +-
 .../src/main/resources/defaultMetricDimensions.json   | 15 +--
 .../emitter/graphite/WhiteListBasedConverterTest.java |  2 +-
 website/.spelling |  2 +-
 6 files changed, 24 insertions(+), 18 deletions(-)





[GitHub] [druid] pjain1 closed issue #10068: RandomBalancerStrategy gets stuck into loop

2020-06-25 Thread GitBox


pjain1 closed issue #10068:
URL: https://github.com/apache/druid/issues/10068


   






[GitHub] [druid] pjain1 closed issue #10069: RandomBalancerStrategy does not assign segments if there is only one historical

2020-06-25 Thread GitBox


pjain1 closed issue #10069:
URL: https://github.com/apache/druid/issues/10069


   






[GitHub] [druid] pjain1 commented on issue #10069: RandomBalancerStrategy does not assign segments if there is only one historical

2020-06-25 Thread GitBox


pjain1 commented on issue #10069:
URL: https://github.com/apache/druid/issues/10069#issuecomment-649804334


   Fixed in https://github.com/apache/druid/pull/10070






[GitHub] [druid] pjain1 commented on issue #10068: RandomBalancerStrategy gets stuck into loop

2020-06-25 Thread GitBox


pjain1 commented on issue #10068:
URL: https://github.com/apache/druid/issues/10068#issuecomment-649804164


   fixed in https://github.com/apache/druid/pull/10070






[GitHub] [druid] jihoonson commented on pull request #10033: Allow append to existing datasources when dynamic partitioning is used

2020-06-25 Thread GitBox


jihoonson commented on pull request #10033:
URL: https://github.com/apache/druid/pull/10033#issuecomment-649804890


   > If those unit tests had been in `core` instead of `server` then the 
coverage check for this PR would have passed since the relevant unit tests were 
added to `SingleDimensionShardSpecTest`, for example.
   > 
   > I suggest we do a followup PR to move the `ShardSpec` tests from `server` 
to `core` and proceed with merging this PR, since the coverage check failure is 
a result of the prior misplacement of test classes.
   
   Ah cool, makes sense. I will do as a follow-up. Thank you for the review 
@clintropolis @maytasm @ccaominh!






[GitHub] [druid] jihoonson merged pull request #10033: Allow append to existing datasources when dynamic partitioning is used

2020-06-25 Thread GitBox


jihoonson merged pull request #10033:
URL: https://github.com/apache/druid/pull/10033


   






[GitHub] [druid] jihoonson closed issue #9352: Broken feature: appending linearly partitioned segments into a hash partitioned datasource

2020-06-25 Thread GitBox


jihoonson closed issue #9352:
URL: https://github.com/apache/druid/issues/9352


   






[druid] branch master updated (0f51b3c -> aaee72c)

2020-06-25 Thread jihoonson
This is an automated email from the ASF dual-hosted git repository.

jihoonson pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 0f51b3c  fix dropwizard emitter jvm bufferpoolName metric (#10075)
 add aaee72c  Allow append to existing datasources when dynamic 
partitioning is used (#10033)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/druid/segment/SegmentUtils.java |  13 +
 .../org/apache/druid/timeline/DataSegment.java |   2 +-
 .../partition/BucketNumberedShardSpec.java |   6 -
 .../timeline/partition/BuildingShardSpec.java  |   6 -
 .../HashBasedNumberedPartialShardSpec.java |  18 +-
 .../partition/HashBasedNumberedShardSpec.java  |   8 +-
 .../timeline/partition/LinearPartialShardSpec.java |  13 +-
 .../druid/timeline/partition/LinearShardSpec.java  |   6 -
 .../druid/timeline/partition/NoneShardSpec.java|   6 -
 .../NumberedOverwritePartialShardSpec.java |  25 +-
 .../partition/NumberedOverwriteShardSpec.java  |   6 -
 .../partition/NumberedPartialShardSpec.java|  24 +-
 .../timeline/partition/NumberedShardSpec.java  |   6 -
 .../timeline/partition/OverwriteShardSpec.java |  12 +
 .../druid/timeline/partition/PartialShardSpec.java |  32 ++-
 .../apache/druid/timeline/partition/ShardSpec.java |  12 +-
 .../partition/SingleDimensionPartialShardSpec.java |  29 +--
 .../partition/SingleDimensionShardSpec.java|  17 --
 .../org/apache/druid/segment/SegmentUtilsTest.java |  54 +
 .../org/apache/druid/timeline/DataSegmentTest.java |   6 -
 .../BuildingHashBasedNumberedShardSpecTest.java|   2 +-
 .../HashBasedNumberedPartialShardSpecTest.java |  18 +-
 .../NumberedOverwritePartialShardSpecTest.java |  17 +-
 .../partition/NumberedOverwriteShardSpecTest.java  |  16 ++
 ...Test.java => NumberedPartialShardSpecTest.java} |  33 +--
 .../partition/PartitionHolderCompletenessTest.java |  14 +-
 .../SingleDimensionPartialShardSpecTest.java   |  16 +-
 docs/ingestion/native-batch.md |   4 +-
 .../common/task/AbstractBatchIndexTask.java|  12 +-
 .../indexing/common/task/CompactionInputSpec.java  |   6 +-
 .../common/task/CompactionIntervalSpec.java|   3 +-
 .../druid/indexing/common/task/CompactionTask.java |  26 +-
 .../druid/indexing/common/task/IndexTask.java  |   8 +-
 .../indexing/common/task/SpecificSegmentsSpec.java |   9 +-
 .../druid/indexing/common/task/TaskLockHelper.java |  11 +-
 .../parallel/ParallelIndexSupervisorTask.java  |   4 +-
 .../common/actions/SegmentAllocateActionTest.java  |  32 ---
 .../common/task/CompactionInputSpecTest.java   |   9 +-
 .../common/task/CompactionTaskParallelRunTest.java | 261 +++--
 .../indexing/common/task/CompactionTaskTest.java   |  12 +
 .../indexing/common/task/IndexTaskSerdeTest.java   |   4 +-
 .../druid/indexing/common/task/IndexTaskTest.java  |   4 +-
 .../AbstractMultiPhaseParallelIndexingTest.java|  96 
 .../AbstractParallelIndexSupervisorTaskTest.java   |  95 +---
 ...ashPartitionMultiPhaseParallelIndexingTest.java |  92 +++-
 .../parallel/ParallelIndexSupervisorTaskTest.java  |  92 
 .../parallel/ParallelIndexTuningConfigTest.java| 125 ++
 .../task/batch/parallel/PartialCompactionTest.java | 245 +++
 ...ngePartitionMultiPhaseParallelIndexingTest.java | 123 --
 .../parallel/SinglePhaseParallelIndexingTest.java  | 114 ++---
 .../TestIndexerMetadataStorageCoordinator.java |   2 +-
 .../IndexerSQLMetadataStorageCoordinator.java  |  52 ++--
 .../IndexerSQLMetadataStorageCoordinatorTest.java  |  47 
 .../appenderator/SegmentIdWithShardSpecTest.java   |   2 +-
 .../druid/server/shard/NumberedShardSpecTest.java  |  27 ++-
 .../server/shard/SingleDimensionShardSpecTest.java |  14 ++
 .../partition/HashBasedNumberedShardSpecTest.java  |  21 +-
 57 files changed, 1434 insertions(+), 535 deletions(-)
 copy 
extensions-contrib/graphite-emitter/src/test/java/org/apache/druid/emitter/graphite/DruidToWhiteListBasedConverterTest.java
 => 
core/src/test/java/org/apache/druid/timeline/partition/NumberedOverwritePartialShardSpecTest.java
 (59%)
 copy 
core/src/test/java/org/apache/druid/timeline/partition/{SingleDimensionPartialShardSpecTest.java
 => NumberedPartialShardSpecTest.java} (59%)
 create mode 100644 
indexing-service/src/test/java/org/apache/druid/indexing/common/task/batch/parallel/PartialCompactionTest.java





[GitHub] [druid] jihoonson opened a new pull request #10079: Move shardSpec tests to core

2020-06-25 Thread GitBox


jihoonson opened a new pull request #10079:
URL: https://github.com/apache/druid/pull/10079


   ### Description
   
   Follow-up for 
https://github.com/apache/druid/pull/10033#issuecomment-649750336.
   
   
   
   This PR has:
   - [x] been self-reviewed.
  - [ ] using the [concurrency 
checklist](https://github.com/apache/druid/blob/master/dev/code-review/concurrency.md)
 (Remove this item if the PR doesn't have any relation to concurrency.)
   - [ ] added documentation for new or modified features or behaviors.
   - [ ] added Javadocs for most classes and all non-trivial methods. Linked 
related entities via Javadoc links.
   - [ ] added or updated version, license, or notice information in 
[licenses.yaml](https://github.com/apache/druid/blob/master/licenses.yaml)
   - [ ] added comments explaining the "why" and the intent of the code 
wherever would not be obvious for an unfamiliar reader.
   - [ ] added unit tests or modified existing tests to cover new code paths, 
ensuring the threshold for [code 
coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md)
 is met.
   - [ ] added integration tests.
   - [ ] been tested in a test Druid cluster.






[GitHub] [druid] clintropolis closed issue #9467: Ingestion fails on 'Connection reset by peer' when using native parallel ingestion

2020-06-25 Thread GitBox


clintropolis closed issue #9467:
URL: https://github.com/apache/druid/issues/9467


   






[GitHub] [druid] clintropolis merged pull request #10046: Fix missing temp dir for native single_dim

2020-06-25 Thread GitBox


clintropolis merged pull request #10046:
URL: https://github.com/apache/druid/pull/10046


   






[druid] branch master updated (aaee72c -> f6594ff)

2020-06-25 Thread cwylie
This is an automated email from the ASF dual-hosted git repository.

cwylie pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from aaee72c  Allow append to existing datasources when dynamic 
partitioning is used (#10033)
 add f6594ff  Fix missing temp dir for native single_dim (#10046)

No new revisions were added by this update.

Summary of changes:
 indexing-service/pom.xml |  4 
 .../java/org/apache/druid/indexing/common/TaskToolbox.java   | 10 +-
 .../common/task/AppenderatorDriverRealtimeIndexTask.java |  8 +---
 .../org/apache/druid/indexing/common/task/IndexTask.java |  3 ---
 .../apache/druid/indexing/common/task/RealtimeIndexTask.java |  8 +---
 .../task/batch/parallel/PartialSegmentGenerateTask.java  | 12 ++--
 .../common/task/batch/parallel/SinglePhaseSubTask.java   |  7 +--
 7 files changed, 18 insertions(+), 34 deletions(-)





[GitHub] [druid] maytasm opened a new pull request #10080: Add integration tests for SqlInputSource

2020-06-25 Thread GitBox


maytasm opened a new pull request #10080:
URL: https://github.com/apache/druid/pull/10080


   Add integration tests for SqlInputSource
   
   ### Description
   
   Add integration tests for SqlInputSource
   
   This PR has:
   - [x] been self-reviewed.
   - [ ] added documentation for new or modified features or behaviors.
   - [ ] added Javadocs for most classes and all non-trivial methods. Linked 
related entities via Javadoc links.
   - [ ] added or updated version, license, or notice information in 
[licenses.yaml](https://github.com/apache/druid/blob/master/licenses.yaml)
   - [ ] added comments explaining the "why" and the intent of the code 
wherever would not be obvious for an unfamiliar reader.
   - [ ] added unit tests or modified existing tests to cover new code paths, 
ensuring the threshold for [code 
coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md)
 is met.
   - [x] added integration tests.
   - [ ] been tested in a test Druid cluster.
   






[GitHub] [druid] maytasm commented on pull request #9449: Add Sql InputSource

2020-06-25 Thread GitBox


maytasm commented on pull request #9449:
URL: https://github.com/apache/druid/pull/9449#issuecomment-649888286


   Have a PR up for adding integration tests to SqlInputSource. Please see: 
https://github.com/apache/druid/pull/10080






[druid] branch master updated: More prominent instructions on code coverage failure (#10060)

2020-06-25 Thread fjy
This is an automated email from the ASF dual-hosted git repository.

fjy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new b7d771f  More prominent instructions on code coverage failure (#10060)
b7d771f is described below

commit b7d771f633fb3c54490ea1a1f8df6691ea4bb4e1
Author: Suneet Saldanha 
AuthorDate: Thu Jun 25 19:48:30 2020 -0700

More prominent instructions on code coverage failure (#10060)

* More prominent instructions on code coverage failure

* Update .travis.yml
---
 .travis.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.travis.yml b/.travis.yml
index 9b3c755..61450ef 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -183,7 +183,7 @@ jobs:
   --log-template "totals-complete"
   --log-template "errors"
   --
-  || { printf "\nDiff code coverage check failed. To view coverage report, run 'mvn clean test jacoco:report' and open 'target/site/jacoco/index.html'\n" && false; }
+  || { printf "\n\nFAILED\nDiff code coverage check failed. To view coverage report, run 'mvn clean test jacoco:report' and open 'target/site/jacoco/index.html'\nFor more details on how to run code coverage locally, follow instructions here - https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md#running-code-coverage-locally\n\n" && false; }
   fi
   after_success:
 # retry in case of network error





[GitHub] [druid] fjy merged pull request #10060: More prominent instructions on code coverage failure

2020-06-25 Thread GitBox


fjy merged pull request #10060:
URL: https://github.com/apache/druid/pull/10060


   






[GitHub] [druid] clintropolis opened a new pull request #10081: Information schema doc update

2020-06-25 Thread GitBox


clintropolis opened a new pull request #10081:
URL: https://github.com/apache/druid/pull/10081


   Follow-up to #10041: documents `IS_JOINABLE` and `IS_BROADCAST` and fills in a bit more detail on the `INFORMATION_SCHEMA` tables.
   
   Also fixed a bunch of links on the SQL query docs page to use `.md` instead of `.html`: the docs build substitutes `.html`, and with `.md` the links also work on GitHub.






[GitHub] [druid] clintropolis closed issue #9798: [Draft] 0.18.1 Release notes

2020-06-25 Thread GitBox


clintropolis closed issue #9798:
URL: https://github.com/apache/druid/issues/9798


   






[GitHub] [druid] stale[bot] commented on issue #8456: ingesting with high cardinality dimension have low performance

2020-06-25 Thread GitBox


stale[bot] commented on issue #8456:
URL: https://github.com/apache/druid/issues/8456#issuecomment-649938318


   This issue has been marked as stale due to 280 days of inactivity. It will 
be closed in 4 weeks if no further activity occurs. If this issue is still 
relevant, please simply write any comment. Even if closed, you can still revive 
the issue at any time or discuss it on the d...@druid.apache.org list. Thank 
you for your contributions.
   






[GitHub] [druid] stale[bot] commented on issue #8560: Feature request: Implement bitwise operators & and |

2020-06-25 Thread GitBox


stale[bot] commented on issue #8560:
URL: https://github.com/apache/druid/issues/8560#issuecomment-649938319


   This issue has been marked as stale due to 280 days of inactivity. It will 
be closed in 4 weeks if no further activity occurs. If this issue is still 
relevant, please simply write any comment. Even if closed, you can still revive 
the issue at any time or discuss it on the d...@druid.apache.org list. Thank 
you for your contributions.
   






[GitHub] [druid] ccaominh commented on a change in pull request #9956: Segment timeline doesn't show results older than 3 months

2020-06-25 Thread GitBox


ccaominh commented on a change in pull request #9956:
URL: https://github.com/apache/druid/pull/9956#discussion_r445959200



##
File path: web-console/src/components/segment-timeline/segment-timeline.tsx
##
@@ -394,14 +394,16 @@ ORDER BY "start" DESC`;
   onTimeSpanChange = (e: any) => {
 const dStart = new Date();
 const dEnd = new Date();
-dStart.setMonth(dStart.getMonth() - e);
+const capabilities = this.props.capabilities;
+const timeSpan = parseInt(e, 10) || 3;

Review comment:
   Something like this is what I had in mind: 
https://github.com/apache/druid/compare/master...ccaominh:fix-console-segment-timeline-query-timespan.
 If your fix is reverted so that `this.dataQueryManager.rerunLastQuery()` is 
called instead of `this.dataQueryManager.runQuery()`, then the test that 
selects a new period will fail.








[GitHub] [druid] clintropolis commented on a change in pull request #9938: Add https to druid-influxdb-emitter extension

2020-06-25 Thread GitBox


clintropolis commented on a change in pull request #9938:
URL: https://github.com/apache/druid/pull/9938#discussion_r445968955



##
File path: 
extensions-contrib/influxdb-emitter/src/main/java/org/apache/druid/emitter/influxdb/InfluxdbEmitterConfig.java
##
@@ -130,6 +159,10 @@ public int hashCode()
   {
 int result = getHostname().hashCode();
 result = 31 * result + getPort();
+result = 31 * result + getProtocol().hashCode();

Review comment:
   Is there any reason not to use
   ```
   return Objects.hash(
       hostname,
       port,
       ...
   ```





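Expanded into a compilable sketch, the reviewer's suggestion looks like the following (the class and field names are illustrative, not the actual `InfluxdbEmitterConfig`):

```java
import java.util.Objects;

// Illustrative config class (not the real InfluxdbEmitterConfig): the point is
// that Objects.hash replaces a hand-rolled "31 * result" chain and keeps
// hashCode() and equals() visibly in sync on the same field list.
public class EmitterConfigSketch {
    private final String hostname;
    private final int port;
    private final String protocol;

    public EmitterConfigSketch(String hostname, int port, String protocol) {
        this.hostname = hostname;
        this.port = port;
        this.protocol = protocol;
    }

    @Override
    public int hashCode() {
        // Objects.hash applies the same 31-based combination internally.
        return Objects.hash(hostname, port, protocol);
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof EmitterConfigSketch)) {
            return false;
        }
        EmitterConfigSketch that = (EmitterConfigSketch) o;
        return port == that.port
            && Objects.equals(hostname, that.hostname)
            && Objects.equals(protocol, that.protocol);
    }

    public static void main(String[] args) {
        EmitterConfigSketch a = new EmitterConfigSketch("host", 8086, "https");
        EmitterConfigSketch b = new EmitterConfigSketch("host", 8086, "https");
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // prints true
    }
}
```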



[GitHub] [druid] jihoonson opened a new pull request #10082: Fix RetryQueryRunner to actually do the job

2020-06-25 Thread GitBox


jihoonson opened a new pull request #10082:
URL: https://github.com/apache/druid/pull/10082


   ### Description
   
   `RetryQueryRunner` is responsible for retrying a query when some segments are missing during execution (which is possible because the Coordinator can move segments at any time). However, it currently doesn't work as expected: it checks for missing segments in the response context _before_ issuing the query to the query nodes. This PR fixes that bug and adds a sanity check that makes the missing-segment check fail if the broker hasn't yet received all responses from the query nodes.
   
   Some integration tests will be added as a follow-up.
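The ordering the fix enforces can be sketched in isolation (names and types here are illustrative, not Druid's actual `RetryQueryRunner` API): retry only after the downstream responses have been fully consumed, because that is when the missing-segment reports in the response context are trustworthy.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Illustrative sketch, not Druid's API: the missing-segment list is only
// populated while responses stream back, so it must be consulted *after*
// the results have been consumed, never before issuing the query.
public class RetrySketch {
    public static List<String> runWithRetry(
        Supplier<List<String>> query,
        Supplier<List<String>> missingSegments,
        int maxRetries)
    {
        // Consume the responses first; this is what populates the
        // missing-segment report.
        List<String> results = new ArrayList<>(query.get());
        // Only now is the report meaningful; retry while it is non-empty.
        for (int i = 0; i < maxRetries && !missingSegments.get().isEmpty(); i++) {
            results.addAll(query.get());
        }
        return results;
    }

    public static void main(String[] args) {
        List<String> out = runWithRetry(
            () -> List.of("row1", "row2"),
            List::of,           // no segments reported missing
            2);
        System.out.println(out.size()); // prints 2: no retry was needed
    }
}
```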
   
   
   
   This PR has:
   - [x] been self-reviewed.
  - [ ] using the [concurrency 
checklist](https://github.com/apache/druid/blob/master/dev/code-review/concurrency.md)
 (Remove this item if the PR doesn't have any relation to concurrency.)
   - [ ] added documentation for new or modified features or behaviors.
   - [x] added Javadocs for most classes and all non-trivial methods. Linked 
related entities via Javadoc links.
   - [ ] added or updated version, license, or notice information in 
[licenses.yaml](https://github.com/apache/druid/blob/master/licenses.yaml)
   - [x] added comments explaining the "why" and the intent of the code 
wherever would not be obvious for an unfamiliar reader.
   - [x] added unit tests or modified existing tests to cover new code paths, 
ensuring the threshold for [code 
coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md)
 is met.
   - [ ] added integration tests.
   - [x] been tested in a test Druid cluster.






[GitHub] [druid] stale[bot] commented on issue #8560: Feature request: Implement bitwise operators & and |

2020-06-25 Thread GitBox


stale[bot] commented on issue #8560:
URL: https://github.com/apache/druid/issues/8560#issuecomment-650005065


   This issue is no longer marked as stale.
   






[GitHub] [druid] teyeheimans commented on issue #8560: Feature request: Implement bitwise operators & and |

2020-06-25 Thread GitBox


teyeheimans commented on issue #8560:
URL: https://github.com/apache/druid/issues/8560#issuecomment-650005046


   Any chance this feature is going to be implemented? It would be a great addition to the query language.
   
   Like `SELECT * FROM dataSource WHERE (dimension & 4) = 4`
   
   This is something we frequently use.
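The `(dimension & 4) = 4` predicate requested above is a standard bitmask test; in plain Java (a sketch of the requested semantics, not an existing Druid SQL function) it reads:

```java
// Sketch of the bit-field test requested above: a row matches when all the
// flag bits are set in the dimension value, i.e. (dimension & flag) == flag.
public class BitmaskSketch {
    public static boolean hasFlag(long dimension, long flag) {
        return (dimension & flag) == flag;
    }

    public static void main(String[] args) {
        System.out.println(hasFlag(5, 4)); // 5 = 0b101, bit 4 is set   -> true
        System.out.println(hasFlag(2, 4)); // 2 = 0b010, bit 4 is unset -> false
    }
}
```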






[GitHub] [druid] ShilpaSivanesan opened a new issue #10083: Druid New Console Changes

2020-06-25 Thread GitBox


ShilpaSivanesan opened a new issue #10083:
URL: https://github.com/apache/druid/issues/10083


   **Changes**
   
   **Main Page**
    The old console displayed the historical node count by tier; the new one only shows the total. It's helpful to know the count per tier.
   
   **DataSource Tab**
   
   -  Could show the hot-tier and cold-tier periods highlighted on the timeline
   -  The period filter on the timeline is limited to 1 year; can we have an option to show the full time range or a selected time range?
   -  It would also be better if total segment size were the default instead of segment count
   -  In the legacy console, we had an option to download a daily/monthly unreplicated summary
   
   **Servers Tab**
   
   - It would be useful if the average disk usage for each node type were shown.
   - In the legacy coordinator console, historicals had the average disk usage for each tier, which was really helpful
   
   **Segments tab**
   
   - When looking for a date in the segments view, it lists all the partitions for a day with the size of each, but to get the overall size you have to go back to the Datasource tab, whereas the legacy console showed it directly
   - Also, each partition for a date should show the number of dimensions and metrics available, and list them
   
   
   The legacy console had all of the features above and we were used to them; it would be helpful to get these in the new Druid console.


