(druid) branch master updated: Kafka emitter wasn't given the correct number of threads. It should be 1 thread per scheduled task. (#15719)

2024-01-18 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 18d42cae3ff Kafka emitter wasn't given the correct number of threads. It should be 1 thread per scheduled task. (#15719)
18d42cae3ff is described below

commit 18d42cae3ff9ebe5f255ed356864793b20c034c1
Author: Tom 
AuthorDate: Thu Jan 18 13:27:40 2024 -0800

Kafka emitter wasn't given the correct number of threads. It should be 1 thread per scheduled task. (#15719)

This change provisions the correct number of threads per scheduled task: one for each event type, plus one for logging the lost events.
This makes the current design work, but in the future it would be worthwhile to have the tasks share threads rather than greedily holding one each, so that a dedicated thread per task is no longer needed.
---
 .../src/main/java/org/apache/druid/emitter/kafka/KafkaEmitter.java| 4 +++-
 .../main/java/org/apache/druid/emitter/kafka/KafkaEmitterConfig.java  | 2 ++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/extensions-contrib/kafka-emitter/src/main/java/org/apache/druid/emitter/kafka/KafkaEmitter.java b/extensions-contrib/kafka-emitter/src/main/java/org/apache/druid/emitter/kafka/KafkaEmitter.java
index 7485cbaab6d..87183d62fe7 100644
--- a/extensions-contrib/kafka-emitter/src/main/java/org/apache/druid/emitter/kafka/KafkaEmitter.java
+++ b/extensions-contrib/kafka-emitter/src/main/java/org/apache/druid/emitter/kafka/KafkaEmitter.java
@@ -84,7 +84,9 @@ public class KafkaEmitter implements Emitter
 this.alertQueue = new MemoryBoundLinkedBlockingQueue<>(queueMemoryBound);
 this.requestQueue = new MemoryBoundLinkedBlockingQueue<>(queueMemoryBound);
 this.segmentMetadataQueue = new MemoryBoundLinkedBlockingQueue<>(queueMemoryBound);
-this.scheduler = Executors.newScheduledThreadPool(4);
+// need one thread per scheduled task. Scheduled tasks are per eventType and 1 for reporting the lost events
+int numOfThreads = config.getEventTypes().size() + 1;
+this.scheduler = Executors.newScheduledThreadPool(numOfThreads);
 this.metricLost = new AtomicLong(0L);
 this.alertLost = new AtomicLong(0L);
 this.requestLost = new AtomicLong(0L);
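The sizing logic in the hunk above can be sketched as a small standalone example; the class and method names below are hypothetical illustrations, not part of the Druid codebase:

```java
import java.util.Set;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class SchedulerSizing
{
  // One scheduled task per event type, plus one task that logs lost events,
  // so the pool needs eventTypes.size() + 1 threads to avoid starving a task.
  public static int threadsFor(Set<String> eventTypes)
  {
    return eventTypes.size() + 1;
  }

  public static ScheduledExecutorService makeScheduler(Set<String> eventTypes)
  {
    return Executors.newScheduledThreadPool(threadsFor(eventTypes));
  }

  public static void main(String[] args)
  {
    Set<String> eventTypes = Set.of("metric", "alert", "request", "segment_metadata");
    ScheduledExecutorService scheduler = makeScheduler(eventTypes);
    System.out.println("threads = " + threadsFor(eventTypes)); // prints "threads = 5"
    scheduler.shutdown();
  }
}
```

With a fixed pool of exactly this size, each periodic task can hold a thread indefinitely without delaying the others, which is why the earlier hard-coded `4` broke once the set of event types became configurable.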
diff --git a/extensions-contrib/kafka-emitter/src/main/java/org/apache/druid/emitter/kafka/KafkaEmitterConfig.java b/extensions-contrib/kafka-emitter/src/main/java/org/apache/druid/emitter/kafka/KafkaEmitterConfig.java
index d6d823c0a88..c7038079aa4 100644
--- a/extensions-contrib/kafka-emitter/src/main/java/org/apache/druid/emitter/kafka/KafkaEmitterConfig.java
+++ b/extensions-contrib/kafka-emitter/src/main/java/org/apache/druid/emitter/kafka/KafkaEmitterConfig.java
@@ -30,6 +30,7 @@ import org.apache.druid.metadata.DynamicConfigProvider;
 import org.apache.druid.metadata.MapStringDynamicConfigProvider;
 import org.apache.kafka.clients.producer.ProducerConfig;
 
+import javax.annotation.Nonnull;
 import javax.annotation.Nullable;
 import java.util.HashSet;
 import java.util.Map;
@@ -124,6 +125,7 @@ public class KafkaEmitterConfig
   }
 
   @JsonProperty
+  @Nonnull
   public Set getEventTypes()
   {
 return eventTypes;


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



(druid) branch master updated (141c214b469 -> 5d1e66b8f96)

2024-01-08 Thread jonwei

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


from 141c214b469 docs: add note about finalizeaggregations for sql-based ingestion (#15631)
 add 5d1e66b8f96 Allow broker to use catalog for datasource schemas for SQL queries (#15469)

No new revisions were added by this update.

Summary of changes:
 extensions-core/druid-catalog/pom.xml  | 112 +++---
 .../druid/catalog/guice/CatalogBrokerModule.java   | 100 +
 .../catalog/http/CatalogListenerResource.java  |  18 ++
 .../apache/druid/catalog/http/CatalogResource.java |   3 +-
 .../druid/catalog/sql/LiveCatalogResolver.java | 216 +++
 .../druid/catalog/sync/CachedMetadataCatalog.java  |  41 
 .../apache/druid/catalog/sync/CatalogClient.java   |   4 +-
 .../druid/catalog/sync/CatalogUpdateListener.java  |   2 +
 .../druid/catalog/sync/CatalogUpdateNotifier.java  |  16 +-
 .../druid/catalog/sync/CatalogUpdateReceiver.java  |  56 +
 .../org.apache.druid.initialization.DruidModule|   1 +
 .../apache/druid/catalog/sql/CatalogQueryTest.java | 124 +++
 .../apache/druid/catalog/sql/LiveCatalogTest.java  | 228 +
 .../druid/catalog/storage/TableManagerTest.java|   4 +-
 .../druid/catalog/sync/CatalogCacheTest.java   | 154 ++
 .../apache/druid/catalog/sync/CatalogSyncTest.java |  28 +--
 .../apache/druid/catalog/sync/MockCatalogSync.java |  10 +
 .../server/http/catalog/CatalogResourceTest.java   |   6 +-
 .../druid/server/http/catalog/EditorTest.java  |  12 +-
 .../ingest/insertFromTable-logicalPlan.txt |   3 +
 .../insertWithCatalogClusteredBy-logicalPlan.txt   |   4 +
 .../insertWithCatalogClusteredBy2-logicalPlan.txt  |   4 +
 .../ingest/insertWithClusteredBy-logicalPlan.txt   |   4 +
 .../ingest/insertWithClusteredBy2-logicalPlan.txt  |   4 +
 .../org/apache/druid/msq/test/MSQTestBase.java |   3 +-
 .../catalog/model/table/S3InputSourceDefnTest.java |  32 +--
 .../apache/druid/catalog/model/CatalogUtils.java   |   2 +-
 .../org/apache/druid/catalog/model/ColumnSpec.java |  41 ++--
 .../org/apache/druid/catalog/model/Columns.java| 102 +
 .../org/apache/druid/catalog/model/TableId.java|   5 +
 .../catalog/model/facade/DatasourceFacade.java |  83 
 .../catalog/model/facade/ExternalTableFacade.java  |   3 +-
 .../druid/catalog/model/facade/TableFacade.java|   6 +-
 .../catalog/model/table/BaseInputSourceDefn.java   |   2 +-
 .../druid/catalog/model/table/DatasourceDefn.java  |  12 +-
 .../catalog/model/table/ExternalTableDefn.java |   5 +
 .../model/table/FormattedInputSourceDefn.java  |   4 +-
 .../druid/catalog/model/table/TableBuilder.java|   2 +-
 .../catalog/model/table/BaseExternTableTest.java   |   4 +-
 .../catalog/model/table/CsvInputFormatTest.java|   6 +-
 .../catalog/model/table/DatasourceTableTest.java   | 109 +-
 .../model/table/DelimitedInputFormatTest.java  |   6 +-
 .../catalog/model/table/ExternalTableTest.java |  28 +--
 .../model/table/HttpInputSourceDefnTest.java   |  36 ++--
 .../model/table/InlineInputSourceDefnTest.java |  22 +-
 .../catalog/model/table/JsonInputFormatTest.java   |   6 +-
 .../model/table/LocalInputSourceDefnTest.java  |  32 +--
 .../druid/sql/calcite/schema/DruidSchema.java  |  10 +-
 .../druid/sql/calcite/table/DatasourceTable.java   | 129 +++-
 .../druid/sql/calcite/IngestTableFunctionTest.java |   4 +-
 .../druid/sql/calcite/SqlTestFrameworkConfig.java  |   1 +
 .../calcite/schema/DruidSchemaNoDataInitTest.java  |   3 +-
 .../sql/calcite/schema/InformationSchemaTest.java  |   4 +-
 .../druid/sql/calcite/schema/SystemSchemaTest.java |   3 +-
 .../sql/calcite/util/QueryFrameworkUtils.java  |  18 +-
 .../druid/sql/calcite/util/SqlTestFramework.java   |   8 +-
 56 files changed, 1511 insertions(+), 374 deletions(-)
 create mode 100644 extensions-core/druid-catalog/src/main/java/org/apache/druid/catalog/guice/CatalogBrokerModule.java
 create mode 100644 extensions-core/druid-catalog/src/main/java/org/apache/druid/catalog/sql/LiveCatalogResolver.java
 create mode 100644 extensions-core/druid-catalog/src/main/java/org/apache/druid/catalog/sync/CatalogUpdateReceiver.java
 create mode 100644 extensions-core/druid-catalog/src/test/java/org/apache/druid/catalog/sql/CatalogQueryTest.java
 create mode 100644 extensions-core/druid-catalog/src/test/java/org/apache/druid/catalog/sql/LiveCatalogTest.java
 create mode 100644 extensions-core/druid-catalog/src/test/java/org/apache/druid/catalog/sync/CatalogCacheTest.java
 create mode 100644 extensions-core/druid-catalog/src/test/resources/calcite/expected/ingest/insertFromTable-logicalPlan.txt
 create mode 100644 extensions-core/druid-catalog/src/test/resources/calcite/expected/ingest/insertWithCatalogClusteredBy-logicalPlan.txt
 create mode

(druid) 01/01: Multiarch docker build test

2023-10-31 Thread jonwei

jonwei pushed a commit to branch multiarchtest
in repository https://gitbox.apache.org/repos/asf/druid.git

commit 3a6e202853dbe5f38e0c7117e35dcc8127d1b815
Author: jon-wei 
AuthorDate: Tue Oct 31 13:29:41 2023 -0500

Multiarch docker build test
---
 distribution/docker/Dockerfile | 31 +++-
 .../docker/{Dockerfile => Dockerfile.arm64}| 33 --
 2 files changed, 35 insertions(+), 29 deletions(-)

diff --git a/distribution/docker/Dockerfile b/distribution/docker/Dockerfile
index 1c7933f09d3..07c62c0adbf 100644
--- a/distribution/docker/Dockerfile
+++ b/distribution/docker/Dockerfile
@@ -17,7 +17,7 @@
 # under the License.
 #
 
-ARG JDK_VERSION=11
+ARG JDK_VERSION=17
 
 # The platform is explicitly specified as x64 to build the Druid distribution.
# This is because it's not able to build the distribution on arm64 due to dependency problem of web-console. See: https://github.com/apache/druid/issues/13012
@@ -49,17 +49,8 @@ RUN --mount=type=cache,target=/root/.m2 VERSION=$(mvn -B -q org.apache.maven.plu
  && tar -zxf ./distribution/target/apache-druid-${VERSION}-bin.tar.gz -C /opt \
  && mv /opt/apache-druid-${VERSION} /opt/druid
 
-FROM busybox:1.34.1-glibc as busybox
-
-FROM gcr.io/distroless/java$JDK_VERSION-debian11
-LABEL maintainer="Apache Druid Developers "
-
-COPY --from=busybox /bin/busybox /busybox/busybox
-RUN ["/busybox/busybox", "--install", "/bin"]
-
-# Predefined builtin arg, see: https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope
+FROM alpine:3 as bash-static
 ARG TARGETARCH
-
 #
 # Download bash-static binary to execute scripts that require bash.
# Although bash-static supports multiple platforms, but there's no need for us to support all those platform, amd64 and arm64 are enough.
@@ -73,12 +64,24 @@ RUN if [ "$TARGETARCH" = "arm64" ]; then \
   echo "Unsupported architecture ($TARGETARCH)" && exit 1; \
 fi; \
 echo "Downloading bash-static from ${BASH_URL}" \
-&& wget ${BASH_URL} -O /bin/bash \
-&& chmod 755 /bin/bash
+&& wget ${BASH_URL} -O /bin/bash
+
+FROM busybox:1.35.0-glibc as busybox
+
+FROM gcr.io/distroless/java$JDK_VERSION-debian12
+LABEL maintainer="Apache Druid Developers "
+
+COPY --from=busybox /bin/busybox /busybox/busybox
+RUN ["/busybox/busybox", "--install", "/bin"]
+
 
 RUN addgroup -S -g 1000 druid \
  && adduser -S -u 1000 -D -H -h /opt/druid -s /bin/sh -g '' -G druid druid
 
+
+COPY --from=bash-static /bin/bash /bin/bash
+RUN chmod 755 /bin/bash
+
 COPY --chown=druid:druid --from=builder /opt /opt
 COPY distribution/docker/druid.sh /druid.sh
 COPY distribution/docker/peon.sh /peon.sh
@@ -94,4 +97,4 @@ USER druid
 VOLUME /opt/druid/var
 WORKDIR /opt/druid
 
-ENTRYPOINT ["/druid.sh"]
+ENTRYPOINT ["/druid.sh"]
\ No newline at end of file
diff --git a/distribution/docker/Dockerfile b/distribution/docker/Dockerfile.arm64
similarity index 90%
copy from distribution/docker/Dockerfile
copy to distribution/docker/Dockerfile.arm64
index 1c7933f09d3..1773df14a4a 100644
--- a/distribution/docker/Dockerfile
+++ b/distribution/docker/Dockerfile.arm64
@@ -17,13 +17,13 @@
 # under the License.
 #
 
-ARG JDK_VERSION=11
+ARG JDK_VERSION=17
 
 # The platform is explicitly specified as x64 to build the Druid distribution.
# This is because it's not able to build the distribution on arm64 due to dependency problem of web-console. See: https://github.com/apache/druid/issues/13012
# Since only java jars are shipped in the final image, it's OK to build the distribution on x64.
# Once the web-console dependency problem is resolved, we can remove the --platform directive.
-FROM --platform=linux/amd64 maven:3.8.6-jdk-11-slim as builder
+FROM --platform=linux/arm64 maven:3.8.6-jdk-11-slim as builder
 
 # Rebuild from source in this stage
 # This can be unset if the tarball was already built outside of Docker
@@ -49,17 +49,8 @@ RUN --mount=type=cache,target=/root/.m2 VERSION=$(mvn -B -q org.apache.maven.plu
  && tar -zxf ./distribution/target/apache-druid-${VERSION}-bin.tar.gz -C /opt \
  && mv /opt/apache-druid-${VERSION} /opt/druid
 
-FROM busybox:1.34.1-glibc as busybox
-
-FROM gcr.io/distroless/java$JDK_VERSION-debian11
-LABEL maintainer="Apache Druid Developers "
-
-COPY --from=busybox /bin/busybox /busybox/busybox
-RUN ["/busybox/busybox", "--install", "/bin"]
-
-# Predefined builtin arg, see: https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope
+FROM alpine:3 as bash-static
 ARG TARGETARCH
-
 #
 # Download bash-static binary to execute scripts that require bash.
 # Although bash-stat

(druid) branch multiarchtest created (now 3a6e202853d)

2023-10-31 Thread jonwei

jonwei pushed a change to branch multiarchtest
in repository https://gitbox.apache.org/repos/asf/druid.git


  at 3a6e202853d Multiarch docker build test

This branch includes the following new commits:

 new 3a6e202853d Multiarch docker build test

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






[druid] branch master updated: Skip streaming auto-scaling action if supervisor is idle (#14773)

2023-08-17 Thread jonwei

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new a8eaa1e4ed Skip streaming auto-scaling action if supervisor is idle (#14773)
a8eaa1e4ed is described below

commit a8eaa1e4ed81f94fe53ae14bbb078678e35de105
Author: Jonathan Wei 
AuthorDate: Thu Aug 17 19:43:25 2023 -0500

Skip streaming auto-scaling action if supervisor is idle (#14773)

* Skip streaming auto-scaling action if supervisor is idle

* Update indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/supervisor/SeekableStreamSupervisor.java

Co-authored-by: Abhishek Radhakrishnan 

-

Co-authored-by: Abhishek Radhakrishnan 
---
 .../supervisor/SeekableStreamSupervisor.java   |  7 +++
 .../SeekableStreamSupervisorSpecTest.java  | 65 ++
 2 files changed, 72 insertions(+)

diff --git a/indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/supervisor/SeekableStreamSupervisor.java b/indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/supervisor/SeekableStreamSupervisor.java
index 29fd16d1a4..0d1e32c49b 100644
--- a/indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/supervisor/SeekableStreamSupervisor.java
+++ b/indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/supervisor/SeekableStreamSupervisor.java
@@ -443,6 +443,13 @@ public abstract class SeekableStreamSupervisor

[druid] branch master updated: Add a configurable bufferPeriod between when a segment is marked unused and deleted by KillUnusedSegments duty (#12599)

2023-08-17 Thread jonwei

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 9c124f2cde Add a configurable bufferPeriod between when a segment is marked unused and deleted by KillUnusedSegments duty (#12599)
9c124f2cde is described below

commit 9c124f2cde074d268f95ccf5989f548e495237b0
Author: Lucas Capistrant 
AuthorDate: Thu Aug 17 19:32:51 2023 -0500

Add a configurable bufferPeriod between when a segment is marked unused and deleted by KillUnusedSegments duty (#12599)

* Add new configurable buffer period to create gap between mark unused and kill of segment

* Changes after testing

* fixes and improvements

* changes after initial self review

* self review changes

* update sql statement that was lacking last_used

* shore up some code in SqlMetadataConnector after self review

* fix derby compatibility and improve testing/docs

* fix checkstyle violations

* Fixes post merge with master

* add some unit tests to improve coverage

* ignore test coverage on new UpdateTools cli tool

* another attempt to ignore UpdateTables in coverage check

* change column name to used_flag_last_updated

* fix a method signature after column name switch

* update docs spelling

* Update spelling dictionary

* Fixing up docs/spelling and integrating altering tasks table with my alteration code

* Update NULL values for used_flag_last_updated in the background

* Remove logic to allow segs with null used_flag_last_updated to be killed regardless of bufferPeriod

* remove unneeded things now that the new column is automatically updated

* Test new background row updater method

* fix broken tests

* fix create table statement

* cleanup DDL formatting

* Revert adding columns to entry table by default

* fix compilation issues after merge with master

* discovered and fixed metastore inserts that were breaking integration tests

* fixup forgotten insert by using pattern of sharing now timestamp across columns

* fix issue introduced by merge

* fixup after merge with master

* add some directions to docs in the case of segment table validation issues
---
 docs/configuration/index.md|   1 +
 docs/design/metadata-storage.md|   4 +-
 docs/operations/upgrade-prep.md|  71 
 .../test-data/high-availability-sample-data.sql|  10 +-
 .../docker/test-data/ldap-security-sample-data.sql |   2 +-
 .../docker/test-data/query-error-sample-data.sql   |  10 +-
 .../docker/test-data/query-retry-sample-data.sql   |  10 +-
 .../docker/test-data/query-sample-data.sql |  10 +-
 .../docker/test-data/security-sample-data.sql  |   2 +-
 pom.xml|   3 +
 .../druid/metadata/MetadataStorageConnector.java   |   8 +
 .../metadata/TestMetadataStorageConnector.java |   6 +
 .../SQLMetadataStorageUpdaterJobHandler.java   |  11 +-
 .../IndexerSQLMetadataStorageCoordinator.java  |  10 +-
 .../druid/metadata/SQLMetadataConnector.java   | 180 -
 .../metadata/SQLMetadataSegmentPublisher.java  |  14 +-
 .../druid/metadata/SegmentsMetadataManager.java|  20 ++-
 .../druid/metadata/SqlSegmentsMetadataManager.java | 124 +-
 .../druid/metadata/SqlSegmentsMetadataQuery.java   |  11 +-
 .../metadata/storage/derby/DerbyConnector.java |  58 ---
 .../druid/server/coordinator/DruidCoordinator.java |   2 +
 .../server/coordinator/DruidCoordinatorConfig.java |   4 +
 .../coordinator/duty/KillUnusedSegments.java   |   7 +-
 .../IndexerSQLMetadataStorageCoordinatorTest.java  |  11 +-
 .../druid/metadata/SQLMetadataConnectorTest.java   |  44 -
 .../metadata/SqlSegmentsMetadataManagerTest.java   | 118 +-
 .../apache/druid/metadata/TestDerbyConnector.java  |  26 +++
 .../coordinator/TestDruidCoordinatorConfig.java|  22 ++-
 .../coordinator/duty/KillUnusedSegmentsTest.java   |   7 +-
 .../simulate/TestSegmentsMetadataManager.java  |  12 +-
 .../src/main/java/org/apache/druid/cli/Main.java   |   3 +-
 .../java/org/apache/druid/cli/UpdateTables.java| 134 +++
 website/.spelling  |   1 +
 33 files changed, 832 insertions(+), 124 deletions(-)

diff --git a/docs/configuration/index.md b/docs/configuration/index.md
index 4924fb478f..deb1e7c541 100644
--- a/docs/configuration/index.md
+++ b/docs/configuration/index.md
@@ -858,6 +858,7 @@ These Coordinator static configurations can be defined in the `coordinator/runti
 |`druid.coordinator.kill.period`|How often to send kill tasks

[druid] branch master updated: Fix a resource leak with Window processing (#14573)

2023-07-12 Thread jonwei

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 65e1b27aa7 Fix a resource leak with Window processing (#14573)
65e1b27aa7 is described below

commit 65e1b27aa709dc3e11ae75993656243377744666
Author: imply-cheddar <86940447+imply-ched...@users.noreply.github.com>
AuthorDate: Thu Jul 13 07:25:42 2023 +0900

Fix a resource leak with Window processing (#14573)

* Fix a resource leak with Window processing

Additionally, in order to find the leak, the StupidPool was adjusted to track leaks more precisely. It appears that the pool objects get GC'd during testing for some reason, which was causing leaks to be incorrectly identified for objects that had in fact been returned but were GC'd along with the pool.

* Suppress unused warning
---
 .../org/apache/druid/collections/StupidPool.java   |  62 ++---
 .../query/operator/LimitTimeIntervalOperator.java  |  20 ++---
 .../WindowOperatorQueryQueryRunnerFactory.java |  53 ++-
 .../rowsandcols/LazilyDecoratedRowsAndColumns.java | 100 +
 .../druid/query/rowsandcols/RowsAndColumns.java|  31 +++
 .../druid/query/rowsandcols/SemanticCreator.java   |  37 
 .../concrete/QueryableIndexRowsAndColumns.java |  14 ++-
 .../semantic/DefaultNaiveSortMaker.java|   9 +-
 .../org/apache/druid/segment/IndexBuilder.java |   3 +-
 .../druid/sql/calcite/DrillWindowQueryTest.java|  18 +++-
 10 files changed, 275 insertions(+), 72 deletions(-)

diff --git a/processing/src/main/java/org/apache/druid/collections/StupidPool.java b/processing/src/main/java/org/apache/druid/collections/StupidPool.java
index ced36e3a9d..06536c5d33 100644
--- a/processing/src/main/java/org/apache/druid/collections/StupidPool.java
+++ b/processing/src/main/java/org/apache/druid/collections/StupidPool.java
@@ -24,12 +24,13 @@ import com.google.common.base.Preconditions;
 import com.google.common.base.Supplier;
 import org.apache.druid.java.util.common.Cleaners;
 import org.apache.druid.java.util.common.ISE;
-import org.apache.druid.java.util.common.RE;
+import org.apache.druid.java.util.common.StringUtils;
 import org.apache.druid.java.util.common.logger.Logger;
 
 import java.lang.ref.WeakReference;
 import java.util.Queue;
 import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.CopyOnWriteArrayList;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicLong;
 import java.util.concurrent.atomic.AtomicReference;
@@ -105,7 +106,8 @@ public class StupidPool implements NonBlockingPool
   private final AtomicLong createdObjectsCounter = new AtomicLong(0);
   private final AtomicLong leakedObjectsCounter = new AtomicLong(0);
 
-  private final AtomicReference<RuntimeException> capturedException = new AtomicReference<>(null);
+  private final AtomicReference<CopyOnWriteArrayList<LeakedException>> capturedException =
+  new AtomicReference<>(null);
 
  //note that this is just the max entries in the cache, pool can still create as many buffers as needed.
   private final int objectsCacheMaxCount;
@@ -149,30 +151,41 @@ public class StupidPool implements NonBlockingPool
 ObjectResourceHolder resourceHolder = objects.poll();
 if (resourceHolder == null) {
   if (POISONED.get() && capturedException.get() != null) {
-throw capturedException.get();
+throw makeExceptionForLeaks(capturedException.get());
   }
   return makeObjectWithHandler();
 } else {
   poolSize.decrementAndGet();
   if (POISONED.get()) {
-final RuntimeException exception = capturedException.get();
-if (exception == null) {
-  resourceHolder.notifier.except = new RE("Thread[%s]: leaky leak!", Thread.currentThread().getName());
+final CopyOnWriteArrayList<LeakedException> exceptionList = capturedException.get();
+if (exceptionList == null) {
+  resourceHolder.notifier.except = new LeakedException(Thread.currentThread().getName());
 } else {
-  throw exception;
+  throw makeExceptionForLeaks(exceptionList);
 }
   }
   return resourceHolder;
 }
   }
 
+  private RuntimeException makeExceptionForLeaks(CopyOnWriteArrayList<LeakedException> exceptionList)
+  {
+RuntimeException toThrow = new RuntimeException(
+"Leaks happened, each suppressed exception represents one code path that checked out an object and didn't return it."
+);
+for (LeakedException exception : exceptionList) {
+  toThrow.addSuppressed(exception);
+}
+return toThrow;
+  }
+
   private ObjectResourceHolder makeObjectWithHandler()
   {
 T object = generator.get();
 createdObjectsCounter.incrementAndGet();
 ObjectId objectId = new O
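The aggregation pattern in `makeExceptionForLeaks` above — collecting one exception per leaked object and attaching each as a suppressed exception on a single thrown error — can be sketched in isolation. The names below are illustrative stand-ins, not the actual Druid classes:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class LeakReport
{
  // Collects one exception per leak site, then surfaces them all at once
  // as suppressed exceptions on a single RuntimeException.
  public static RuntimeException makeExceptionForLeaks(List<Exception> leaks)
  {
    RuntimeException toThrow = new RuntimeException(
        "Leaks happened; each suppressed exception is one code path that checked out an object and didn't return it."
    );
    for (Exception leak : leaks) {
      toThrow.addSuppressed(leak);
    }
    return toThrow;
  }

  public static void main(String[] args)
  {
    // CopyOnWriteArrayList lets concurrent checkouts record leaks without locking.
    List<Exception> leaks = new CopyOnWriteArrayList<>();
    leaks.add(new Exception("leak in thread-1"));
    leaks.add(new Exception("leak in thread-2"));
    RuntimeException e = makeExceptionForLeaks(leaks);
    System.out.println(e.getSuppressed().length); // prints 2
  }
}
```

Using suppressed exceptions keeps every leak's stack trace in one report instead of throwing only the first captured exception, which is what the old single-`AtomicReference<RuntimeException>` field did.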

[druid] branch master updated: Better surfacing of invalid pattern errors for SQL REGEXP_EXTRACT function (#14505)

2023-07-05 Thread jonwei

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new f29a9faa94 Better surfacing of invalid pattern errors for SQL REGEXP_EXTRACT function (#14505)
f29a9faa94 is described below

commit f29a9faa94839bacfaad94bdeb4f439fe379dd18
Author: Jonathan Wei 
AuthorDate: Wed Jul 5 17:12:54 2023 -0500

Better surfacing of invalid pattern errors for SQL REGEXP_EXTRACT function (#14505)
---
 .../builtin/RegexpExtractOperatorConversion.java   | 36 --
 .../apache/druid/sql/calcite/CalciteQueryTest.java | 21 +
 2 files changed, 47 insertions(+), 10 deletions(-)

diff --git a/sql/src/main/java/org/apache/druid/sql/calcite/expression/builtin/RegexpExtractOperatorConversion.java b/sql/src/main/java/org/apache/druid/sql/calcite/expression/builtin/RegexpExtractOperatorConversion.java
index d131e5a66a..0de6095413 100644
--- a/sql/src/main/java/org/apache/druid/sql/calcite/expression/builtin/RegexpExtractOperatorConversion.java
+++ b/sql/src/main/java/org/apache/druid/sql/calcite/expression/builtin/RegexpExtractOperatorConversion.java
@@ -24,6 +24,7 @@ import org.apache.calcite.sql.SqlFunction;
 import org.apache.calcite.sql.SqlFunctionCategory;
 import org.apache.calcite.sql.type.SqlTypeFamily;
 import org.apache.calcite.sql.type.SqlTypeName;
+import org.apache.druid.error.InvalidSqlInput;
 import org.apache.druid.java.util.common.StringUtils;
 import org.apache.druid.math.expr.Expr;
 import org.apache.druid.query.extraction.RegexDimExtractionFn;
@@ -33,6 +34,8 @@ import org.apache.druid.sql.calcite.expression.OperatorConversions;
 import org.apache.druid.sql.calcite.expression.SqlOperatorConversion;
 import org.apache.druid.sql.calcite.planner.PlannerContext;
 
+import java.util.regex.PatternSyntaxException;
+
 public class RegexpExtractOperatorConversion implements SqlOperatorConversion
 {
   private static final SqlFunction SQL_FUNCTION = OperatorConversions
@@ -74,16 +77,29 @@ public class RegexpExtractOperatorConversion implements SqlOperatorConversion
   if (arg.isSimpleExtraction() && patternExpr.isLiteral() && (indexExpr == null || indexExpr.isLiteral())) {
 final String pattern = (String) patternExpr.getLiteralValue();
 
-return arg.getSimpleExtraction().cascade(
-new RegexDimExtractionFn(
-// Undo the empty-to-null conversion from patternExpr parsing (patterns cannot be null, even in
-// non-SQL-compliant null handling mode).
-StringUtils.nullToEmptyNonDruidDataString(pattern),
-indexExpr == null ? DEFAULT_INDEX : ((Number) indexExpr.getLiteralValue()).intValue(),
-true,
-null
-)
-);
+
+try {
+  return arg.getSimpleExtraction().cascade(
+  new RegexDimExtractionFn(
+  // Undo the empty-to-null conversion from patternExpr parsing (patterns cannot be null, even in
+  // non-SQL-compliant null handling mode).
+  StringUtils.nullToEmptyNonDruidDataString(pattern),
+  indexExpr == null ? DEFAULT_INDEX : ((Number) indexExpr.getLiteralValue()).intValue(),
+  true,
+  null
+  )
+  );
+}
+catch (PatternSyntaxException e) {
+  throw InvalidSqlInput.exception(
+  e,
+  StringUtils.format(
+  "An invalid pattern [%s] was provided for the REGEXP_EXTRACT function, error: [%s]",
+  e.getPattern(),
+  e.getMessage()
+  )
+  );
+}
   } else {
 return null;
   }
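The error-surfacing pattern in this hunk — compiling the user-supplied regex eagerly so that `PatternSyntaxException` can be rethrown with the offending pattern in the message — can be sketched on its own. Here Druid's `InvalidSqlInput` machinery is replaced with a plain `IllegalArgumentException`, and the class name is a hypothetical illustration:

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class RegexValidation
{
  // Validates a user-supplied regex up front so the caller gets a clear
  // error naming the bad pattern, instead of a raw PatternSyntaxException
  // surfacing later from deep inside query execution.
  public static Pattern compileOrExplain(String pattern)
  {
    try {
      return Pattern.compile(pattern);
    }
    catch (PatternSyntaxException e) {
      throw new IllegalArgumentException(
          String.format(
              "An invalid pattern [%s] was provided for the REGEXP_EXTRACT function, error: [%s]",
              e.getPattern(),
              e.getMessage()
          ),
          e
      );
    }
  }

  public static void main(String[] args)
  {
    // A valid pattern compiles normally.
    System.out.println(compileOrExplain("^(.)").pattern());
    try {
      // The unmatched ')' triggers the wrapped, user-readable error.
      compileOrExplain("^(.))");
    }
    catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```

Chaining the original exception as the cause preserves the full parser diagnostics (including the caret pointing at the bad index) while the message leads with the pattern the user actually typed.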
diff --git a/sql/src/test/java/org/apache/druid/sql/calcite/CalciteQueryTest.java b/sql/src/test/java/org/apache/druid/sql/calcite/CalciteQueryTest.java
index de6697388f..54c8b36070 100644
--- a/sql/src/test/java/org/apache/druid/sql/calcite/CalciteQueryTest.java
+++ b/sql/src/test/java/org/apache/druid/sql/calcite/CalciteQueryTest.java
@@ -7511,6 +7511,27 @@ public class CalciteQueryTest extends BaseCalciteQueryTest
 );
   }
 
+  @Test
+  public void testRegexpExtractWithBadRegexPattern()
+  {
+// Cannot vectorize due to extractionFn in dimension spec.
+cannotVectorize();
+
+expectedException.expect(DruidException.class);
+expectedException.expectMessage(
+"An invalid pattern [^(.))] was provided for the REGEXP_EXTRACT function, " +
+"error: [Unmatched closing ')' near index 3\n^(.))\n   ^]"
+);
+
+testQuery(
+"SELECT DISTINCT\n"
++ &qu

[druid] branch master updated: Support complex variance object inputs for variance SQL agg function (#14463)

2023-06-28 Thread jonwei

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new c36f12f1d8 Support complex variance object inputs for variance SQL agg function (#14463)
c36f12f1d8 is described below

commit c36f12f1d8bf81cf110dd41853627be175816b00
Author: Jonathan Wei 
AuthorDate: Wed Jun 28 13:14:19 2023 -0500

Support complex variance object inputs for variance SQL agg function (#14463)

* Support complex variance object inputs for variance SQL agg function

* Add test

* Include complexTypeChecker, address PR comments

* Checkstyle, javadoc link
---
 .../variance/VarianceAggregatorFactory.java|  2 +-
 .../variance/sql/BaseVarianceSqlAggregator.java| 67 ++
 .../variance/sql/VarianceSqlAggregatorTest.java| 58 ++-
 .../druid/sql/calcite/table/RowSignatures.java | 82 --
 4 files changed, 188 insertions(+), 21 deletions(-)

diff --git a/extensions-core/stats/src/main/java/org/apache/druid/query/aggregation/variance/VarianceAggregatorFactory.java b/extensions-core/stats/src/main/java/org/apache/druid/query/aggregation/variance/VarianceAggregatorFactory.java
index 47eccfbffd..40d06bbbe0 100644
--- a/extensions-core/stats/src/main/java/org/apache/druid/query/aggregation/variance/VarianceAggregatorFactory.java
+++ b/extensions-core/stats/src/main/java/org/apache/druid/query/aggregation/variance/VarianceAggregatorFactory.java
@@ -60,7 +60,7 @@ import java.util.Objects;
 @JsonTypeName("variance")
 public class VarianceAggregatorFactory extends AggregatorFactory
 {
-  private static final String VARIANCE_TYPE_NAME = "variance";
+  public static final String VARIANCE_TYPE_NAME = "variance";
  public static final ColumnType TYPE = ColumnType.ofComplex(VARIANCE_TYPE_NAME);
 
   protected final String fieldName;
diff --git a/extensions-core/stats/src/main/java/org/apache/druid/query/aggregation/variance/sql/BaseVarianceSqlAggregator.java b/extensions-core/stats/src/main/java/org/apache/druid/query/aggregation/variance/sql/BaseVarianceSqlAggregator.java
index 3eb3f49816..0b1562eb83 100644
--- a/extensions-core/stats/src/main/java/org/apache/druid/query/aggregation/variance/sql/BaseVarianceSqlAggregator.java
+++ b/extensions-core/stats/src/main/java/org/apache/druid/query/aggregation/variance/sql/BaseVarianceSqlAggregator.java
@@ -26,7 +26,10 @@ import org.apache.calcite.rel.type.RelDataType;
 import org.apache.calcite.rex.RexBuilder;
 import org.apache.calcite.rex.RexNode;
 import org.apache.calcite.sql.SqlAggFunction;
-import org.apache.calcite.sql.fun.SqlStdOperatorTable;
+import org.apache.calcite.sql.SqlFunctionCategory;
+import org.apache.calcite.sql.SqlKind;
+import org.apache.calcite.sql.type.OperandTypes;
+import org.apache.calcite.sql.type.ReturnTypes;
 import org.apache.druid.java.util.common.IAE;
 import org.apache.druid.java.util.common.StringUtils;
 import org.apache.druid.query.aggregation.AggregatorFactory;
@@ -42,15 +45,33 @@ import org.apache.druid.sql.calcite.aggregation.Aggregations;
 import org.apache.druid.sql.calcite.aggregation.SqlAggregator;
 import org.apache.druid.sql.calcite.expression.DruidExpression;
 import org.apache.druid.sql.calcite.expression.Expressions;
+import org.apache.druid.sql.calcite.expression.OperatorConversions;
 import org.apache.druid.sql.calcite.planner.Calcites;
 import org.apache.druid.sql.calcite.planner.PlannerContext;
 import org.apache.druid.sql.calcite.rel.VirtualColumnRegistry;
+import org.apache.druid.sql.calcite.table.RowSignatures;
 
 import javax.annotation.Nullable;
 import java.util.List;
 
 public abstract class BaseVarianceSqlAggregator implements SqlAggregator
 {
+  private static final String VARIANCE_NAME = "VARIANCE";
+  private static final String STDDEV_NAME = "STDDEV";
+
+  private static final SqlAggFunction VARIANCE_SQL_AGG_FUNC_INSTANCE =
+  buildSqlAvgAggFunction(VARIANCE_NAME);
+  private static final SqlAggFunction VARIANCE_POP_SQL_AGG_FUNC_INSTANCE =
+  buildSqlAvgAggFunction(SqlKind.VAR_POP.name());
+  private static final SqlAggFunction VARIANCE_SAMP_SQL_AGG_FUNC_INSTANCE =
+  buildSqlAvgAggFunction(SqlKind.VAR_SAMP.name());
+  private static final SqlAggFunction STDDEV_SQL_AGG_FUNC_INSTANCE =
+  buildSqlAvgAggFunction(STDDEV_NAME);
+  private static final SqlAggFunction STDDEV_POP_SQL_AGG_FUNC_INSTANCE =
+  buildSqlAvgAggFunction(SqlKind.STDDEV_POP.name());
+  private static final SqlAggFunction STDDEV_SAMP_SQL_AGG_FUNC_INSTANCE =
+  buildSqlAvgAggFunction(SqlKind.STDDEV_SAMP.name());
+
   @Nullable
   @Override
   public Aggregation toDruidAggregation(
@@ -104,12 +125,13 @@ public abstract class BaseVarianceSqlAggregator implements SqlAggregator
 
 if (inputType.isNumeric()
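The hunk cuts off here, but the feature it implements is accepting pre-aggregated variance objects as input to the SQL aggregator. Such partial aggregates combine by pairwise merge; below is a minimal, stdlib-only sketch of that idea, with a hypothetical `VarianceHolder` class standing in for Druid's `VarianceAggregatorCollector` (Welford's online update plus the Chan et al. combine formula; names and structure are illustrative, not the actual Druid code):

```java
public class VarianceMergeDemo {
    // Minimal stand-in for Druid's VarianceAggregatorCollector: a count, a mean,
    // and the sum of squared deviations from the mean (m2). Hypothetical names.
    static final class VarianceHolder {
        long count;
        double mean;
        double m2;

        // Welford's online update for a single value.
        void add(double v) {
            count++;
            double delta = v - mean;
            mean += delta / count;
            m2 += delta * (v - mean);
        }

        // Pairwise merge of two partial aggregates (Chan et al.); this is the kind
        // of combine step that lets pre-aggregated variance objects be re-aggregated.
        void merge(VarianceHolder other) {
            if (other.count == 0) {
                return;
            }
            long n = count + other.count;
            double delta = other.mean - mean;
            m2 = m2 + other.m2 + delta * delta * count * other.count / n;
            mean = mean + delta * other.count / n;
            count = n;
        }

        double populationVariance() {
            return count == 0 ? Double.NaN : m2 / count;
        }
    }

    // Aggregate {1,2,3} and {4,5,6} separately, then merge; the result equals
    // the population variance of 1..6, i.e. 17.5 / 6.
    static double mergedPopulationVariance() {
        VarianceHolder a = new VarianceHolder();
        VarianceHolder b = new VarianceHolder();
        for (double v : new double[]{1, 2, 3}) { a.add(v); }
        for (double v : new double[]{4, 5, 6}) { b.add(v); }
        a.merge(b);
        return a.populationVariance();
    }

    public static void main(String[] args) {
        System.out.println(mergedPopulationVariance());
    }
}
```

The merge intentionally updates `m2` before `mean`, since the formula needs the pre-merge mean of the left-hand aggregate.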

[druid] branch master updated: Return `RESOURCES` in `EXPLAIN PLAN` as an ordered collection (#14323)

2023-05-22 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 338bdb35ea Return `RESOURCES` in `EXPLAIN PLAN` as an ordered 
collection (#14323)
338bdb35ea is described below

commit 338bdb35ea19b495bc6eb587ab99978edb16ce46
Author: Abhishek Radhakrishnan 
AuthorDate: Mon May 22 22:55:00 2023 -0700

Return `RESOURCES` in `EXPLAIN PLAN` as an ordered collection (#14323)

* Make resources an ordered collection so it's deterministic.

* test cleanup

* fixup docs.

* Replace deprecated ObjectNode#put() calls with ObjectNode#set().
---
 docs/querying/sql-translation.md   |   2 +-
 .../druid/sql/calcite/planner/QueryHandler.java|  14 ++-
 .../druid/sql/calcite/CalciteInsertDmlTest.java| 116 +++--
 .../druid/sql/calcite/CalciteReplaceDmlTest.java   |  15 ++-
 4 files changed, 126 insertions(+), 21 deletions(-)

diff --git a/docs/querying/sql-translation.md b/docs/querying/sql-translation.md
index 4b0b2d8fbc..7c2876c68d 100644
--- a/docs/querying/sql-translation.md
+++ b/docs/querying/sql-translation.md
@@ -66,7 +66,7 @@ The [EXPLAIN PLAN](sql.md#explain-plan) functionality can help you understand how queries will
 be translated to native.
 EXPLAIN PLAN statements return:
 - a `PLAN` column that contains a JSON array of native queries that Druid will run
-- a `RESOURCES` column that describes the resource being queried as well as a `PLAN` column that contains a JSON array of native queries that Druid will run
+- a `RESOURCES` column that describes the resources used in the query
 - a `ATTRIBUTES` column that describes the attributes of a query, such as the statement type and target data source
 
 For example, consider the following query:
diff --git a/sql/src/main/java/org/apache/druid/sql/calcite/planner/QueryHandler.java b/sql/src/main/java/org/apache/druid/sql/calcite/planner/QueryHandler.java
index 5dd94f2473..28e9beffc6 100644
--- a/sql/src/main/java/org/apache/druid/sql/calcite/planner/QueryHandler.java
+++ b/sql/src/main/java/org/apache/druid/sql/calcite/planner/QueryHandler.java
@@ -75,6 +75,7 @@ import org.apache.druid.utils.Throwables;
 
 import javax.annotation.Nullable;
 import java.util.ArrayList;
+import java.util.Comparator;
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
@@ -376,8 +377,11 @@ public abstract class QueryHandler extends SqlStatementHandler.BaseStatementHandler
   }
 }
   }
-      final Set<Resource> resources =
-          plannerContext.getResourceActions().stream().map(ResourceAction::getResource).collect(Collectors.toSet());
+      final List<Resource> resources = plannerContext.getResourceActions()
+                                                     .stream()
+                                                     .map(ResourceAction::getResource)
+                                                     .sorted(Comparator.comparing(Resource::getName))
+                                                     .collect(Collectors.toList());
       resourcesString = plannerContext.getJsonMapper().writeValueAsString(resources);
 }
 catch (JsonProcessingException jpe) {
@@ -431,9 +435,9 @@ public abstract class QueryHandler extends SqlStatementHandler.BaseStatementHandler
 for (DruidQuery druidQuery : druidQueryList) {
   Query nativeQuery = druidQuery.getQuery();
   ObjectNode objectNode = jsonMapper.createObjectNode();
-      objectNode.put("query", jsonMapper.convertValue(nativeQuery, ObjectNode.class));
-      objectNode.put("signature", jsonMapper.convertValue(druidQuery.getOutputRowSignature(), ArrayNode.class));
-      objectNode.put(
+      objectNode.set("query", jsonMapper.convertValue(nativeQuery, ObjectNode.class));
+      objectNode.set("signature", jsonMapper.convertValue(druidQuery.getOutputRowSignature(), ArrayNode.class));
+      objectNode.set(
           "columnMappings",
           jsonMapper.convertValue(QueryUtils.buildColumnMappings(relRoot.fields, druidQuery), ArrayNode.class));
   nativeQueriesArrayNode.add(objectNode);
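The sorting change in the hunk above can be reproduced in isolation: collecting resources into a `HashSet` gives nondeterministic iteration order, while sorting by name into a `List` makes the serialized `RESOURCES` column stable across runs. A stdlib-only sketch, with a hypothetical `Resource` class standing in for Druid's:

```java
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class SortedResourcesDemo {
    // Minimal stand-in for Druid's Resource (name + type); hypothetical, for illustration.
    static final class Resource {
        private final String name;
        private final String type;
        Resource(String name, String type) { this.name = name; this.type = type; }
        String getName() { return name; }
    }

    // The patch replaces Collectors.toSet() with a name-sorted List so the
    // JSON written into the RESOURCES column is deterministic.
    static String sortedNames() {
        // HashSet iteration order is unspecified and can differ between runs.
        Set<Resource> actions = new HashSet<>(List.of(
            new Resource("wikipedia", "DATASOURCE"),
            new Resource("EXTERNAL", "EXTERNAL"),
            new Resource("foo", "DATASOURCE")));
        // Sorting by name fixes the order before serialization.
        return actions.stream()
            .sorted(Comparator.comparing(Resource::getName))
            .map(Resource::getName)
            .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        System.out.println(sortedNames()); // EXTERNAL,foo,wikipedia (ASCII order)
    }
}
```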
diff --git a/sql/src/test/java/org/apache/druid/sql/calcite/CalciteInsertDmlTest.java b/sql/src/test/java/org/apache/druid/sql/calcite/CalciteInsertDmlTest.java
index 51fa017670..d0f06fffa6 100644
--- a/sql/src/test/java/org/apache/druid/sql/calcite/CalciteInsertDmlTest.java
+++ b/sql/src/test/java/org/apache/druid/sql/calcite/CalciteInsertDmlTest.java
@@ -856,6 +856,11 @@ public class CalciteInsertDmlTest extends CalciteIngestionDmlTest
     // Skip vectorization since otherwise the "context" will change for each subtest.
 skipVectorize();
 
+    final String query = StringUtils.format(
+        "EXPLAIN PLAN FOR INSERT INTO dst SELECT * FROM %s PARTITIONED BY ALL TIME",
+        externSql(externalDataSource)
+    );
+
 ObjectMapper queryJsonMapper = queryFramework().queryJsonMapper();
 final ScanQuery expectedQuery = new

[druid] branch master updated (6b3a6113c4 -> a5e04d95a4)

2023-05-22 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


from 6b3a6113c4 Doc: List supported values for Kafka `headerFormat` (#14316)
 add a5e04d95a4 Add `TYPE_NAME` to the complex serde classes and replace 
the hardcoded names. (#14317)

No new revisions were added by this update.

Summary of changes:
 .../druid/benchmark/FilterPartitionBenchmark.java|  2 +-
 .../druid/benchmark/FilteredAggregatorBenchmark.java |  2 +-
 .../benchmark/GroupByTypeInterfaceBenchmark.java |  2 +-
 .../druid/benchmark/TopNTypeInterfaceBenchmark.java  |  2 +-
 .../indexing/IncrementalIndexReadBenchmark.java  |  2 +-
 .../benchmark/indexing/IndexIngestionBenchmark.java  |  2 +-
 .../benchmark/indexing/IndexMergeBenchmark.java  |  2 +-
 .../benchmark/indexing/IndexPersistBenchmark.java|  2 +-
 .../druid/benchmark/query/GroupByBenchmark.java  |  2 +-
 .../apache/druid/benchmark/query/ScanBenchmark.java  |  2 +-
 .../druid/benchmark/query/SearchBenchmark.java   |  2 +-
 .../druid/benchmark/query/TimeseriesBenchmark.java   |  2 +-
 .../apache/druid/benchmark/query/TopNBenchmark.java  |  2 +-
 .../query/timecompare/TimeCompareBenchmark.java  |  2 +-
 .../histogram/ApproximateHistogramDruidModule.java   |  2 +-
 .../histogram/ApproximateHistogramFoldingSerde.java  |  4 +++-
 .../query/aggregation/stats/DruidStatsModule.java|  2 +-
 .../query/aggregation/variance/VarianceSerde.java|  4 +++-
 .../org/apache/druid/jackson/AggregatorsModule.java  |  6 +++---
 .../aggregation/hyperloglog/HyperUniquesSerde.java   |  4 +++-
 .../hyperloglog/PreComputedHyperUniquesSerde.java|  2 ++
 .../apache/druid/frame/write/FrameWriterTest.java|  2 +-
 .../apache/druid/frame/write/FrameWritersTest.java   |  2 +-
 .../apache/druid/segment/SchemalessIndexTest.java|  2 +-
 .../java/org/apache/druid/segment/TestIndex.java |  2 +-
 .../druid/segment/generator/SegmentGenerator.java|  4 ++--
 .../druid/segment/serde/ComplexMetricsTest.java  | 20 ++--
 27 files changed, 46 insertions(+), 38 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated: Make RecordSupplierInputSource respect sampler timeout when stream is empty (#13296)

2022-11-03 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 2fdaa2fcab Make RecordSupplierInputSource respect sampler timeout when 
stream is empty (#13296)
2fdaa2fcab is described below

commit 2fdaa2fcabc7ceb91568ce1e6b1fcede2da7602c
Author: Jonathan Wei 
AuthorDate: Thu Nov 3 17:45:35 2022 -0500

Make RecordSupplierInputSource respect sampler timeout when stream is empty 
(#13296)

* Make RecordSupplierInputSource respect sampler timeout when stream is 
empty

* Rename timeout param, make it nullable, add timeout test
---
 .../seekablestream/RecordSupplierInputSource.java  | 24 -
 .../seekablestream/SeekableStreamSamplerSpec.java  |  6 +++--
 .../overlord/sampler/InputSourceSamplerTest.java   |  2 +-
 .../RecordSupplierInputSourceTest.java | 31 +-
 4 files changed, 58 insertions(+), 5 deletions(-)

diff --git a/indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/RecordSupplierInputSource.java b/indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/RecordSupplierInputSource.java
index c387571507..ee54f2ac22 100644
--- a/indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/RecordSupplierInputSource.java
+++ b/indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/RecordSupplierInputSource.java
@@ -31,6 +31,7 @@ import 
org.apache.druid.indexing.overlord.sampler.SamplerException;
 import 
org.apache.druid.indexing.seekablestream.common.OrderedPartitionableRecord;
 import org.apache.druid.indexing.seekablestream.common.RecordSupplier;
 import org.apache.druid.indexing.seekablestream.common.StreamPartition;
+import org.apache.druid.java.util.common.logger.Logger;
 import org.apache.druid.java.util.common.parsers.CloseableIterator;
 
 import javax.annotation.Nullable;
@@ -45,19 +46,28 @@ import java.util.stream.Collectors;
  */
 public class RecordSupplierInputSource extends AbstractInputSource
 {
+  private static final Logger LOG = new Logger(RecordSupplierInputSource.class);
+
   private final String topic;
   private final RecordSupplier recordSupplier;
   private final boolean useEarliestOffset;
 
+  /**
+   * Maximum amount of time in which the entity iterator will return results. If null, no timeout is applied.
+   */
+  private final Integer iteratorTimeoutMs;
+
   public RecordSupplierInputSource(
   String topic,
   RecordSupplier 
recordSupplier,
-  boolean useEarliestOffset
+  boolean useEarliestOffset,
+  Integer iteratorTimeoutMs
   )
   {
 this.topic = topic;
 this.recordSupplier = recordSupplier;
 this.useEarliestOffset = useEarliestOffset;
+this.iteratorTimeoutMs = iteratorTimeoutMs;
 try {
   assignAndSeek(recordSupplier);
 }
@@ -123,13 +133,24 @@ public class RecordSupplierInputSource> recordIterator;
   private Iterator bytesIterator;
   private volatile boolean closed;
+  private final long createTime = System.currentTimeMillis();
+  private final Long terminationTime = iteratorTimeoutMs != null ? createTime + iteratorTimeoutMs : null;
 
   private void waitNextIteratorIfNecessary()
   {
     while (!closed && (bytesIterator == null || !bytesIterator.hasNext())) {
       while (!closed && (recordIterator == null || !recordIterator.hasNext())) {
+        if (terminationTime != null && System.currentTimeMillis() > terminationTime) {
+          LOG.info(
+              "Configured sampler timeout [%s] has been exceeded, returning without a bytesIterator.",
+              iteratorTimeoutMs
+          );
+          bytesIterator = null;
+          return;
+        }
         recordIterator = recordSupplier.poll(SeekableStreamSamplerSpec.POLL_TIMEOUT_MS).iterator();
   }
+
   if (!closed) {
 bytesIterator = recordIterator.next().getData().iterator();
   }
@@ -152,6 +173,7 @@ public class RecordSupplierInputSource(
   ioConfig.getStream(),
   recordSupplier,
-  ioConfig.isUseEarliestSequenceNumber()
+  ioConfig.isUseEarliestSequenceNumber(),
+      samplerConfig.getTimeoutMs() <= 0 ? null : samplerConfig.getTimeoutMs()
   );
   inputFormat = Preconditions.checkNotNull(
   ioConfig.getInputFormat(),
@@ -173,7 +174,8 @@ public abstract class 
SeekableStreamSamplerSpec inputSource = new RecordSupplierInputSource<>(
   ioConfig.getStream(),
   createRecordSupplier(),
-  ioConfig.isUseEarliestSequenceNumber()
+  ioConfig.isUseEarliestSequenceNumber(),
+      samplerConfig.getTimeoutMs() <= 0 ? null : samplerConfig.getTimeoutMs()
   );
   this.entityIterator = i
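The email truncates here, but the shape of the fix is visible above: record the iterator's creation time and stop polling once the optional timeout elapses, so an empty stream can no longer hang the sampler. A stdlib-only sketch of that loop (hypothetical names; `poll` simulates a stream that never returns records):

```java
import java.util.Collections;
import java.util.List;

public class TimeoutPollDemo {
    // Hypothetical stand-in for the record supplier: an empty stream that
    // never yields data, which is exactly the case the patch addresses.
    static List<byte[]> poll(long pollTimeoutMs) {
        try {
            Thread.sleep(pollTimeoutMs);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return Collections.emptyList();
    }

    // Mirrors the patch's logic: a null timeout waits indefinitely; otherwise the
    // loop gives up once createTime + iteratorTimeoutMs has passed.
    static boolean pollUntilTimeout(Integer iteratorTimeoutMs, long pollTimeoutMs) {
        final long createTime = System.currentTimeMillis();
        final Long terminationTime = iteratorTimeoutMs != null ? createTime + iteratorTimeoutMs : null;
        while (true) {
            if (terminationTime != null && System.currentTimeMillis() > terminationTime) {
                return false; // timed out without data
            }
            if (!poll(pollTimeoutMs).isEmpty()) {
                return true; // got records before the deadline
            }
        }
    }

    public static void main(String[] args) {
        // With a 50 ms budget and an empty stream, the loop returns instead of spinning forever.
        System.out.println(pollUntilTimeout(50, 10));
    }
}
```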

[druid] branch master updated: Add inline descriptor Protobuf bytes decoder (#13192)

2022-10-11 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 9b8e69c99a Add inline descriptor Protobuf bytes decoder (#13192)
9b8e69c99a is described below

commit 9b8e69c99a410ba10496e375fc8cbb9c84f6d59b
Author: Jonathan Wei 
AuthorDate: Tue Oct 11 13:37:28 2022 -0500

Add inline descriptor Protobuf bytes decoder (#13192)

* Add inline descriptor Protobuf bytes decoder

* PR comments

* Update tests, check for IllegalArgumentException

* Fix license, add equals test

* Update extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/InlineDescriptorProtobufBytesDecoder.java

Co-authored-by: Frank Chen 

Co-authored-by: Frank Chen 
---
 docs/ingestion/data-formats.md |  20 
 extensions-core/protobuf-extensions/pom.xml|   5 +
 ...va => DescriptorBasedProtobufBytesDecoder.java} |  80 +++---
 .../protobuf/FileBasedProtobufBytesDecoder.java|  80 +++---
 .../InlineDescriptorProtobufBytesDecoder.java  |  95 
 .../data/input/protobuf/ProtobufBytesDecoder.java  |   3 +-
 .../FileBasedProtobufBytesDecoderTest.java |  20 
 .../InlineDescriptorProtobufBytesDecoderTest.java  | 123 +
 .../input/protobuf/ProtobufInputFormatTest.java|  31 +-
 website/.spelling  |   1 +
 10 files changed, 327 insertions(+), 131 deletions(-)

diff --git a/docs/ingestion/data-formats.md b/docs/ingestion/data-formats.md
index 22c1027647..db4e7f062f 100644
--- a/docs/ingestion/data-formats.md
+++ b/docs/ingestion/data-formats.md
@@ -1308,6 +1308,26 @@ Sample spec:
 }
 ```
 
+#### Inline Descriptor Protobuf Bytes Decoder
+
+This Protobuf bytes decoder allows the user to provide the contents of a Protobuf descriptor file inline, encoded as a Base64 string, and then parse it to get the schema used to decode the Protobuf record from bytes.
+
+| Field | Type | Description | Required |
+|---|--|-|--|
+| type | String | Set value to `inline`. | yes |
+| descriptorString | String | A compiled Protobuf descriptor, encoded as a Base64 string. | yes |
+| protoMessageType | String | Protobuf message type in the descriptor. Both short name and fully qualified name are accepted. The parser uses the first message type found in the descriptor if not specified. | no |
+
+Sample spec:
+
+```json
+"protoBytesDecoder": {
+  "type": "inline",
  "descriptorString": <compiled descriptor encoded as a Base64 string>,
+  "protoMessageType": "Metrics"
+}
+```
+
 # Confluent Schema Registry-based Protobuf Bytes Decoder
 
 This Protobuf bytes decoder first extracts a unique `id` from input message 
bytes, and then uses it to look up the schema in the Schema Registry used to 
decode the Avro record from bytes.
diff --git a/extensions-core/protobuf-extensions/pom.xml 
b/extensions-core/protobuf-extensions/pom.xml
index e2e8c4a116..c7b7fc6e8b 100644
--- a/extensions-core/protobuf-extensions/pom.xml
+++ b/extensions-core/protobuf-extensions/pom.xml
@@ -163,6 +163,11 @@
   ${project.parent.version}
   test
 
+
+  nl.jqno.equalsverifier
+  equalsverifier
+  test
+
   
 
   
diff --git a/extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/FileBasedProtobufBytesDecoder.java b/extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/DescriptorBasedProtobufBytesDecoder.java
similarity index 58%
copy from extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/FileBasedProtobufBytesDecoder.java
copy to extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/DescriptorBasedProtobufBytesDecoder.java
index ed52f7443b..d4c65c6f99 100644
--- a/extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/FileBasedProtobufBytesDecoder.java
+++ b/extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/DescriptorBasedProtobufBytesDecoder.java
@@ -19,7 +19,6 @@
 
 package org.apache.druid.data.input.protobuf;
 
-import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.github.os72.protobuf.dynamic.DynamicSchema;
 import com.google.common.annotations.VisibleForTesting;
@@ -29,52 +28,44 @@ import com.google.protobuf.DynamicMessage;
 import org.apache.druid.java.util.common.StringUtils;
 import org.apache.druid.java.util.common.parsers.ParseException;
 
-import java.io.IOException;
-import java.io.InputStream;
-import java.net.MalformedURLException;
-import java.net.URL;
 import java.nio.ByteBuffer;
 import java.util.Objects;
 import java.uti
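The inline decoder's transport format is plain Base64: the operator encodes the compiled descriptor file's bytes into `descriptorString`, and the decoder reverses that at startup before parsing a schema out of the bytes. A minimal stdlib sketch of the round trip (hypothetical class; not the actual Druid implementation, which additionally parses the descriptor with the Protobuf library):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class InlineDescriptorDemo {
    // Encode raw descriptor bytes the way an operator would before pasting
    // them into the ingestion spec's descriptorString field.
    static String encode(byte[] raw) {
        return Base64.getEncoder().encodeToString(raw);
    }

    // What the inline decoder does at startup: recover the compiled descriptor
    // bytes from the Base64 spec string, ready for schema parsing.
    static byte[] decode(String descriptorString) {
        return Base64.getDecoder().decode(descriptorString);
    }

    public static void main(String[] args) {
        // Stand-in payload; a real descriptorString holds a compiled .desc file.
        byte[] descriptor = "fake-compiled-descriptor".getBytes(StandardCharsets.UTF_8);
        String inline = encode(descriptor);
        System.out.println(new String(decode(inline), StandardCharsets.UTF_8));
    }
}
```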

[druid] branch master updated (e839660b6a -> 1f1fced6d4)

2022-09-26 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


from e839660b6a Grab the thread name in a poisoned pool (#13143)
 add 1f1fced6d4 Add JsonInputFormat option to assume newline delimited 
JSON, improve parse exception handling for multiline JSON (#13089)

No new revisions were added by this update.

Summary of changes:
 .../druid/data/input/impl/JsonInputFormat.java |  75 +++--
 .../impl/{JsonReader.java => JsonNodeReader.java}  | 125 +
 .../input/impl/CloudObjectInputSourceTest.java |   8 +-
 .../druid/data/input/impl/JsonInputFormatTest.java |  12 +-
 .../druid/data/input/impl/JsonLineReaderTest.java  |  16 ++-
 ...JsonReaderTest.java => JsonNodeReaderTest.java} |  73 +++-
 .../druid/data/input/impl/JsonReaderTest.java  |  28 +++--
 docs/ingestion/data-formats.md |   7 ++
 .../data/input/aliyun/OssInputSourceTest.java  |  10 +-
 .../google/GoogleCloudStorageInputSourceTest.java  |   8 +-
 .../input/kafkainput/KafkaInputFormatTest.java |  15 ++-
 .../druid/indexing/kafka/KafkaSamplerSpecTest.java |   4 +-
 .../kafka/supervisor/KafkaSupervisorTest.java  |   2 +
 .../indexing/kinesis/KinesisSamplerSpecTest.java   |   2 +-
 .../kinesis/supervisor/KinesisSupervisorTest.java  |   3 +
 .../apache/druid/msq/querykit/DataSourcePlan.java  |   2 +-
 .../org/apache/druid/msq/exec/MSQSelectTest.java   |   2 +-
 .../druid/msq/indexing/error/MSQWarningsTest.java  |   2 +-
 .../external/ExternalInputSpecSlicerTest.java  |   2 +-
 .../druid/data/input/s3/S3InputSourceTest.java |  18 +--
 .../druid/indexing/common/task/IndexTaskTest.java  |   2 +-
 ...ltiPhaseParallelIndexingWithNullColumnTest.java |   6 +-
 .../parallel/ParallelIndexSupervisorTaskTest.java  |   2 +-
 .../parallel/ParallelIndexTestingFactory.java  |   2 +-
 .../PartialHashSegmentGenerateTaskTest.java|   4 +-
 .../parallel/SinglePhaseParallelIndexingTest.java  |   2 +
 .../batch/parallel/SinglePhaseSubTaskSpecTest.java |   2 +-
 .../overlord/sampler/InputSourceSamplerTest.java   |   2 +-
 .../SeekableStreamIndexTaskTestBase.java   |   2 +
 .../SeekableStreamSupervisorSpecTest.java  |   6 +-
 .../seekablestream/StreamChunkParserTest.java  |   6 +-
 .../SeekableStreamSupervisorStateTest.java |   4 +-
 website/.spelling  |   2 +
 33 files changed, 311 insertions(+), 145 deletions(-)
 copy core/src/main/java/org/apache/druid/data/input/impl/{JsonReader.java => JsonNodeReader.java} (56%)
 copy core/src/test/java/org/apache/druid/data/input/impl/{JsonReaderTest.java => JsonNodeReaderTest.java} (88%)





[druid] branch master updated: Worker level task metrics (#12446)

2022-04-26 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 564d6defd4 Worker level task metrics (#12446)
564d6defd4 is described below

commit 564d6defd47749d55dd07e5549d7264cbc1c4019
Author: zachjsh 
AuthorDate: Tue Apr 26 12:44:44 2022 -0400

Worker level task metrics (#12446)

* * fix metric name inconsistency

* * add task slot metrics for middle managers

* * add new WorkerTaskCountStatsMonitor to report task count metrics
  from worker

* * more stuff

* * remove unused variable

* * more stuff

* * add javadocs

* * fix checkstyle

* * fix hadoop test failure

* * cleanup

* * add more code coverage in tests

* * fix test failure

* * add docs

* * increase code coverage

* * fix spelling

* * fix failing tests

* * remove dead code

* * fix spelling
---
 docs/configuration/index.md|   2 +
 docs/operations/metrics.md |   5 +
 .../main/resources/defaultMetricDimensions.json|   8 +-
 .../druid/indexing/overlord/ForkingTaskRunner.java |  73 ++-
 .../indexing/overlord/ForkingTaskRunnerTest.java   |  16 ++
 .../metrics/WorkerTaskCountStatsMonitor.java   |  79 
 .../metrics/WorkerTaskCountStatsProvider.java  |  63 ++
 .../metrics/WorkerTaskCountStatsMonitorTest.java   | 216 +
 .../org/apache/druid/cli/CliMiddleManager.java |   2 +
 website/.spelling  |   1 +
 10 files changed, 454 insertions(+), 11 deletions(-)

diff --git a/docs/configuration/index.md b/docs/configuration/index.md
index f4c268ae9d..e598b36ea0 100644
--- a/docs/configuration/index.md
+++ b/docs/configuration/index.md
@@ -383,6 +383,8 @@ Metric monitoring is an essential part of Druid operations. 
 The following monit
 |`org.apache.druid.server.metrics.QueryCountStatsMonitor`|Reports how many 
queries have been successful/failed/interrupted.|
 |`org.apache.druid.server.emitter.HttpEmittingMonitor`|Reports internal 
metrics of `http` or `parametrized` emitter (see below). Must not be used with 
another emitter type. See the description of the metrics here: 
https://github.com/apache/druid/pull/4973.|
 |`org.apache.druid.server.metrics.TaskCountStatsMonitor`|Reports how many 
ingestion tasks are currently running/pending/waiting and also the number of 
successful/failed tasks per emission period.|
+|`org.apache.druid.server.metrics.TaskSlotCountStatsMonitor`|Reports metrics 
about task slot usage per emission period.|
+|`org.apache.druid.server.metrics.WorkerTaskCountStatsMonitor`|Reports how 
many ingestion tasks are currently running/pending/waiting, the number of 
successful/failed tasks, and metrics about task slot usage for the reporting 
worker, per emission period. Only supported by middleManager node types.|
 
 For example, you might configure monitors on all processes for system and JVM 
information within `common.runtime.properties` as follows:
 
diff --git a/docs/operations/metrics.md b/docs/operations/metrics.md
index a5894ed005..8124acd1c9 100644
--- a/docs/operations/metrics.md
+++ b/docs/operations/metrics.md
@@ -214,6 +214,11 @@ Note: If the JVM does not support CPU time measurement for 
the current thread, i
 |`taskSlot/lazy/count`|Number of total task slots in lazy marked 
MiddleManagers and Indexers per emission period. This metric is only available 
if the TaskSlotCountStatsMonitor module is included.|category.|Varies.|
 |`taskSlot/blacklisted/count`|Number of total task slots in blacklisted 
MiddleManagers and Indexers per emission period. This metric is only available 
if the TaskSlotCountStatsMonitor module is included.|category.|Varies.|
 |`task/segmentAvailability/wait/time`|The amount of milliseconds a batch 
indexing task waited for newly created segments to become available for 
querying.|dataSource, taskType, taskId, segmentAvailabilityConfirmed|Varies.|
+|`worker/task/failed/count`|Number of failed tasks run on the reporting worker 
per emission period. This metric is only available if the 
WorkerTaskCountStatsMonitor module is included, and is only supported for 
middleManager nodes.|category, version.|Varies.|
+|`worker/task/success/count`|Number of successful tasks run on the reporting 
worker per emission period. This metric is only available if the 
WorkerTaskCountStatsMonitor module is included, and is only supported for 
middleManager nodes.|category, version.|Varies.|
+|`worker/taskSlot/idle/count`|Number of idle task slots on the reporting 
worker per emission period. This metric is only available if the 
WorkerTaskCountStatsMonitor module is included, and is only supported for 
middleManager nodes.|category, version.|Varies.|
+|`worker
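The table truncates here, but all of these counters share one reporting convention: each value is reported "per emission period", i.e. as a delta since the last emission rather than a running total. A hedged, stdlib-only sketch of that pattern (hypothetical `DeltaCounter`; the real `WorkerTaskCountStatsMonitor` obtains its counts from the worker's stats provider):

```java
import java.util.concurrent.atomic.AtomicLong;

public class DeltaCounterDemo {
    // Sketch of a per-emission-period counter: tasks increment a running total,
    // and the monitor emits only the change since its previous emission.
    static final class DeltaCounter {
        private final AtomicLong total = new AtomicLong();
        private long lastEmitted;

        void increment() {
            total.incrementAndGet();
        }

        // Called once per emission period by the monitor thread.
        synchronized long emitDelta() {
            long now = total.get();
            long delta = now - lastEmitted;
            lastEmitted = now;
            return delta;
        }
    }

    public static void main(String[] args) {
        DeltaCounter successes = new DeltaCounter();
        successes.increment();
        successes.increment();
        System.out.println(successes.emitDelta()); // 2: both increments in period 1
        successes.increment();
        System.out.println(successes.emitDelta()); // 1: only the new increment in period 2
    }
}
```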

[druid] branch master updated: Re-enable segment metadata cache when using external schema (#12264)

2022-02-22 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new b1640a7  Re-enable segment metadata cache when using external schema 
(#12264)
b1640a7 is described below

commit b1640a72ee1090bd3c90240bf47a5bfd5b690f5a
Author: Jonathan Wei 
AuthorDate: Tue Feb 22 19:50:29 2022 -0600

Re-enable segment metadata cache when using external schema (#12264)
---
 .../java/org/apache/druid/sql/calcite/schema/DruidSchema.java  | 10 ++
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/sql/src/main/java/org/apache/druid/sql/calcite/schema/DruidSchema.java b/sql/src/main/java/org/apache/druid/sql/calcite/schema/DruidSchema.java
index 2cf8b60..702ad82 100644
--- a/sql/src/main/java/org/apache/druid/sql/calcite/schema/DruidSchema.java
+++ b/sql/src/main/java/org/apache/druid/sql/calcite/schema/DruidSchema.java
@@ -228,9 +228,7 @@ public class DruidSchema extends AbstractSchema
 this.brokerInternalQueryConfig = brokerInternalQueryConfig;
 this.druidSchemaManager = druidSchemaManager;
 
-if (druidSchemaManager == null || druidSchemaManager instanceof 
NoopDruidSchemaManager) {
-  initServerViewTimelineCallback(serverView);
-}
+initServerViewTimelineCallback(serverView);
   }
 
   private void initServerViewTimelineCallback(final TimelineServerView 
serverView)
@@ -375,11 +373,7 @@ public class DruidSchema extends AbstractSchema
   @LifecycleStart
   public void start() throws InterruptedException
   {
-if (druidSchemaManager == null || druidSchemaManager instanceof 
NoopDruidSchemaManager) {
-  startCacheExec();
-} else {
-  initialized.countDown();
-}
+startCacheExec();
 
 if (config.isAwaitInitializationOnStart()) {
   final long startNanos = System.nanoTime();




[druid] branch master updated (0d23713 -> 33bc922)

2022-02-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 0d23713  Web console: update dev dependencies (#12240)
 add 33bc922  Move task creation under stateChangeLock in 
SeekableStreamSupervisor (#12178)

No new revisions were added by this update.

Summary of changes:
 .../supervisor/SeekableStreamSupervisor.java   | 25 +-
 1 file changed, 15 insertions(+), 10 deletions(-)




[druid] branch master updated (1b8808c -> f906f2f)

2022-01-27 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 1b8808c  Fix SQL queries for inline datasource with null values 
(#12092)
 add f906f2f  Fix HttpRemoteTaskRunner LifecycleStart / LifecycleStop race 
condition (#12184)

No new revisions were added by this update.

Summary of changes:
 .../overlord/hrtr/HttpRemoteTaskRunner.java|  83 +--
 .../overlord/hrtr/HttpRemoteTaskRunnerTest.java| 272 -
 .../apache/druid/discovery/DruidNodeDiscovery.java |   5 +
 .../discovery/DruidNodeDiscoveryProvider.java  |  10 +
 .../coordination/ChangeRequestHttpSyncer.java  |  25 +-
 .../discovery/DruidNodeDiscoveryProviderTest.java  |   9 +
 6 files changed, 252 insertions(+), 152 deletions(-)




[druid] branch master updated (1dba089 -> 74c876e)

2022-01-13 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 1dba089  fix array type strategy write size tracking (#12150)
 add 74c876e  Throw parse exceptions on schema get errors for 
SchemaRegistryBasedAvroBytesDecoder (#12080)

No new revisions were added by this update.

Summary of changes:
 docs/ingestion/data-formats.md |  6 ++
 .../avro/SchemaRegistryBasedAvroBytesDecoder.java  | 24 --
 .../SchemaRegistryBasedAvroBytesDecoderTest.java   |  5 ++---
 3 files changed, 16 insertions(+), 19 deletions(-)




[druid] branch master updated (377edff -> 3f79453)

2021-12-15 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 377edff  Ingestion metrics doc fix (#12066)
 add 3f79453  Lock count guardrail for parallel single phase/sequential 
task (#12052)

No new revisions were added by this update.

Summary of changes:
 .../common/task/AbstractBatchIndexTask.java|  15 ++-
 .../druid/indexing/common/task/CompactionTask.java |   5 +-
 .../druid/indexing/common/task/IndexTask.java  |   9 +-
 .../batch/MaxAllowedLocksExceededException.java|  17 ++-
 .../task/batch/parallel/AbstractBatchSubtask.java  |   2 +-
 .../batch/parallel/ParallelIndexPhaseRunner.java   |  11 +-
 .../parallel/ParallelIndexSupervisorTask.java  |  26 +++--
 .../batch/parallel/ParallelIndexTaskRunner.java|   4 +-
 .../batch/parallel/ParallelIndexTuningConfig.java  |  25 +++-
 .../SinglePhaseParallelIndexTaskRunner.java|   6 +
 .../common/task/batch/parallel/TaskMonitor.java|   2 +-
 .../apache/druid/indexing/overlord/TaskQueue.java  |  10 +-
 .../task/ClientCompactionTaskQuerySerdeTest.java   |   1 +
 .../common/task/CompactionTaskRunTest.java |   1 +
 .../indexing/common/task/CompactionTaskTest.java   |   1 +
 .../AbstractParallelIndexSupervisorTaskTest.java   |   2 +
 .../batch/parallel/HashPartitionTaskKillTest.java  |   8 +-
 .../ParallelIndexSupervisorTaskKillTest.java   |   1 +
 .../ParallelIndexSupervisorTaskResourceTest.java   |   1 +
 .../ParallelIndexSupervisorTaskSerdeTest.java  |   1 +
 .../parallel/ParallelIndexSupervisorTaskTest.java  |   1 +
 .../parallel/ParallelIndexTestingFactory.java  |   1 +
 .../parallel/ParallelIndexTuningConfigTest.java|   7 ++
 .../batch/parallel/RangePartitionTaskKillTest.java |   8 +-
 .../parallel/SinglePhaseParallelIndexingTest.java  | 127 +
 25 files changed, 262 insertions(+), 30 deletions(-)
 copy 
processing/src/main/java/org/apache/druid/segment/SegmentMissingException.java 
=> 
indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/MaxAllowedLocksExceededException.java
 (68%)




[druid] branch master updated (ffc5ade -> 229f82a)

2021-12-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from ffc5ade  Remove use of deprecated PMD ruleset (#12044)
 add 229f82a  Add parse error list API for stream supervisors, use 
structured object for parse exceptions, simplify parse exception message 
(#11961)

No new revisions were added by this update.

Summary of changes:
 .../data/input/InputRowListPlusRawValues.java  |   2 +-
 .../data/input/IntermediateRowParsingReader.java   |  13 +-
 .../java/org/apache/druid/data/input/Rows.java |   4 +-
 .../apache/druid/data/input/impl/JsonReader.java   |  10 +-
 .../druid/data/input/impl/MapInputRowParser.java   |  12 +-
 .../apache/druid/data/input/impl/RegexReader.java  |   4 +-
 .../data/input/impl/StringInputRowParser.java  |   2 +-
 .../parsers/AbstractFlatTextFormatParser.java  |   4 +-
 .../java/util/common/parsers/JSONPathParser.java   |   2 +-
 .../util/common/parsers/JSONToLowerParser.java |   2 +-
 .../java/util/common/parsers/JavaScriptParser.java |   4 +-
 .../java/util/common/parsers/ParseException.java   |  39 -
 .../java/util/common/parsers/ParserUtils.java  |   2 +-
 .../java/util/common/parsers/RegexParser.java  |   4 +-
 .../parsers/UnparseableColumnsParseException.java  |  33 ++--
 .../druid/data/input/influx/InfluxParser.java  |   6 +-
 .../input/avro/InlineSchemaAvroBytesDecoder.java   |   3 +-
 .../input/avro/InlineSchemasAvroBytesDecoder.java  |   9 +-
 .../avro/SchemaRegistryBasedAvroBytesDecoder.java  |   4 +-
 .../avro/SchemaRepoBasedAvroBytesDecoder.java  |   3 +-
 .../data/input/kafkainput/KafkaInputReader.java|   5 +-
 .../druid/indexing/kafka/KafkaIndexTaskTest.java   |  71 +---
 .../kafka/supervisor/KafkaSupervisorTest.java  |  72 +++-
 .../indexing/kinesis/KinesisIndexTaskTest.java |  74 ++---
 .../protobuf/FileBasedProtobufBytesDecoder.java|  17 +-
 .../input/protobuf/ProtobufInputRowParser.java |   4 +-
 .../input/protobuf/ProtobufInputRowSchema.java |   2 +-
 .../druid/data/input/protobuf/ProtobufReader.java  |   4 +-
 .../SchemaRegistryBasedProtobufBytesDecoder.java   |   6 +-
 .../druid/indexer/HadoopDruidIndexerMapper.java|   2 +-
 .../task/AppenderatorDriverRealtimeIndexTask.java  |  23 +--
 .../druid/indexing/common/task/IndexTask.java  |  32 ++--
 .../druid/indexing/common/task/IndexTaskUtils.java |  31 +++-
 .../parallel/ParallelIndexSupervisorTask.java  |   9 +-
 .../task/batch/parallel/SinglePhaseSubTask.java|  11 +-
 .../overlord/supervisor/SupervisorManager.java |   7 +
 .../overlord/supervisor/SupervisorResource.java|  28 
 .../SeekableStreamIndexTaskClient.java |  53 +-
 .../SeekableStreamIndexTaskRunner.java |   9 +-
 .../supervisor/SeekableStreamSupervisor.java   | 162 +-
 .../apache/druid/indexing/common/TestFirehose.java |   2 +-
 .../AppenderatorDriverRealtimeIndexTaskTest.java   | 101 +---
 .../FilteringCloseableInputRowIteratorTest.java|   6 +-
 .../druid/indexing/common/task/IndexTaskTest.java  | 183 -
 .../parallel/SinglePhaseParallelIndexingTest.java  |  84 --
 .../druid/segment/DimensionHandlerUtils.java   |  12 +-
 .../segment/incremental/IncrementalIndex.java  |  30 +++-
 .../segment/incremental/ParseExceptionHandler.java |  25 ++-
 ...MetersTotals.java => ParseExceptionReport.java} |  67 
 .../druid/segment/transform/Transformer.java   |   5 +
 .../incremental/IncrementalIndexAddResultTest.java |   4 +-
 .../segment/incremental/IncrementalIndexTest.java  |  26 ++-
 .../incremental/ParseExceptionHandlerTest.java |  32 ++--
 .../client/indexing/HttpIndexingServiceClient.java |   9 +-
 .../indexing/overlord/supervisor/Supervisor.java   |   8 +
 55 files changed, 1063 insertions(+), 315 deletions(-)
 copy 
indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/GeneratedPartitionsMetadataReport.java
 => 
core/src/main/java/org/apache/druid/java/util/common/parsers/UnparseableColumnsParseException.java
 (51%)
 copy 
processing/src/main/java/org/apache/druid/segment/incremental/{RowIngestionMetersTotals.java
 => ParseExceptionReport.java} (50%)




[druid] branch master updated (9ca8f1e -> a96aed0)

2021-10-27 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 9ca8f1e  Remove IncrementalIndex template modifier (#11160)
 add a96aed0  Fix indefinite WAITING batch task when lock is revoked 
(#11788)

No new revisions were added by this update.

Summary of changes:
 .../common/actions/TimeChunkLockAcquireAction.java |  2 +-
 .../actions/TimeChunkLockTryAcquireAction.java |  3 +-
 .../common/task/AbstractBatchIndexTask.java|  4 ++
 .../common/task/AbstractFixedIntervalTask.java | 15 +-
 .../task/AppenderatorDriverRealtimeIndexTask.java  | 12 -
 .../indexing/common/task/HadoopIndexTask.java  | 18 ++-
 .../indexing/common/task/RealtimeIndexTask.java| 15 --
 .../SinglePhaseParallelIndexTaskRunner.java|  4 ++
 .../apache/druid/indexing/overlord/LockResult.java | 11 +++--
 .../druid/indexing/overlord/TaskLockbox.java   | 12 +++--
 .../SeekableStreamIndexTaskRunner.java | 12 -
 .../druid/indexing/overlord/TaskLifecycleTest.java | 55 ++
 12 files changed, 145 insertions(+), 18 deletions(-)




[druid] branch master updated: Simplify ITHttpInputSourceTest to mitigate flakiness (#11751)

2021-10-12 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 887cecf  Simplify ITHttpInputSourceTest to mitigate flakiness (#11751)
887cecf is described below

commit 887cecf29e8b813029911fb05745764cce155c94
Author: Agustin Gonzalez 
AuthorDate: Tue Oct 12 09:51:27 2021 -0700

Simplify ITHttpInputSourceTest to mitigate flakiness (#11751)

* Increment retry count to add more time for tests to pass

* Re-enable ITHttpInputSourceTest

* Restore original count

* This test is about the input source; hash partitioning takes longer and is 
not required, thus changing to dynamic

* Further simplify by removing sketches
---
 .../druid/tests/indexer/ITHttpInputSourceTest.java |  2 -
 .../wikipedia_http_inputsource_queries.json| 87 +-
 .../indexer/wikipedia_http_inputsource_task.json   | 18 +
 3 files changed, 19 insertions(+), 88 deletions(-)

diff --git 
a/integration-tests/src/test/java/org/apache/druid/tests/indexer/ITHttpInputSourceTest.java
 
b/integration-tests/src/test/java/org/apache/druid/tests/indexer/ITHttpInputSourceTest.java
index c72f080..bb0d7c5 100644
--- 
a/integration-tests/src/test/java/org/apache/druid/tests/indexer/ITHttpInputSourceTest.java
+++ 
b/integration-tests/src/test/java/org/apache/druid/tests/indexer/ITHttpInputSourceTest.java
@@ -36,8 +36,6 @@ public class ITHttpInputSourceTest extends 
AbstractITBatchIndexTest
   private static final String INDEX_TASK = 
"/indexer/wikipedia_http_inputsource_task.json";
   private static final String INDEX_QUERIES_RESOURCE = 
"/indexer/wikipedia_http_inputsource_queries.json";
 
-  // Ignore while we debug...
-  @Test(enabled = false)
   public void doTest() throws IOException
   {
 final String indexDatasource = "wikipedia_http_inputsource_test_" + 
UUID.randomUUID();
diff --git 
a/integration-tests/src/test/resources/indexer/wikipedia_http_inputsource_queries.json
 
b/integration-tests/src/test/resources/indexer/wikipedia_http_inputsource_queries.json
index 11496c2..f0cbb1c 100644
--- 
a/integration-tests/src/test/resources/indexer/wikipedia_http_inputsource_queries.json
+++ 
b/integration-tests/src/test/resources/indexer/wikipedia_http_inputsource_queries.json
@@ -16,82 +16,31 @@
 ]
 },
 {
-"description": "timeseries, datasketch aggs, all",
+"description": "simple aggr",
 "query":{
-"queryType" : "timeseries",
-"dataSource": "%%DATASOURCE%%",
-"granularity":"day",
-"intervals":[
-"2016-06-27/P1D"
-],
-"filter":null,
-"aggregations":[
+"queryType" : "topN",
+"dataSource" : "%%DATASOURCE%%",
+"intervals" : ["2016-06-27/2016-06-28"],
+"granularity" : "all",
+"dimension" : "page",
+"metric" : "count",
+"threshold" : 3,
+"aggregations" : [
 {
-"type": "HLLSketchMerge",
-"name": "approxCountHLL",
-"fieldName": "HLLSketchBuild",
-"lgK": 12,
-"tgtHllType": "HLL_4",
-"round": true
-},
-{
-"type":"thetaSketch",
-"name":"approxCountTheta",
-"fieldName":"thetaSketch",
-"size":16384,
-"shouldFinalize":true,
-"isInputThetaSketch":false,
-"errorBoundsStdDev":null
-},
-{
-"type":"quantilesDoublesSketch",
-"name":"quantilesSketch",
-"fieldName":"quantilesDoublesSketch",
-"k":128
+"type" : "count",
+"name" : "count"
 }
 ]
 },
 "expectedResults":[
 {
-"timestamp" : "2016-06-27T00:00:00.000Z",
-"result" : {
- 

[druid] branch master updated: Minor processor quota computation fix + docs (#11783)

2021-10-08 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new b6b42d3  Minor processor quota computation fix + docs (#11783)
b6b42d3 is described below

commit b6b42d39367f1ff1d7a1aa7e0064ca0ed9c2e92f
Author: Arun Ramani <84351090+arunram...@users.noreply.github.com>
AuthorDate: Fri Oct 8 20:52:03 2021 -0700

Minor processor quota computation fix + docs (#11783)

* cpu/cpuset cgroup and procfs data gathering

* Renames and default values

* Formatting

* Trigger Build

* Add cgroup monitors

* Return 0 if no period

* Update

* Minor processor quota computation fix + docs

* Address comments

* Address comments

* Fix spellcheck

Co-authored-by: arunramani-imply 
<84351090+arunramani-im...@users.noreply.github.com>
---
 .../druid/java/util/metrics/CgroupCpuMonitor.java| 20 
 .../java/util/metrics/CgroupCpuMonitorTest.java  | 10 ++
 docs/configuration/index.md  |  5 -
 docs/operations/metrics.md   | 18 --
 website/.spelling|  1 +
 5 files changed, 47 insertions(+), 7 deletions(-)

diff --git 
a/core/src/main/java/org/apache/druid/java/util/metrics/CgroupCpuMonitor.java 
b/core/src/main/java/org/apache/druid/java/util/metrics/CgroupCpuMonitor.java
index 826465b..ac4d545 100644
--- 
a/core/src/main/java/org/apache/druid/java/util/metrics/CgroupCpuMonitor.java
+++ 
b/core/src/main/java/org/apache/druid/java/util/metrics/CgroupCpuMonitor.java
@@ -65,12 +65,24 @@ public class CgroupCpuMonitor extends FeedDefiningMonitor
 emitter.emit(builder.build("cgroup/cpu/shares", cpuSnapshot.getShares()));
 emitter.emit(builder.build(
 "cgroup/cpu/cores_quota",
-cpuSnapshot.getPeriodUs() == 0
-? 0
-: ((double) cpuSnapshot.getQuotaUs()
-  ) / cpuSnapshot.getPeriodUs()
+computeProcessorQuota(cpuSnapshot.getQuotaUs(), 
cpuSnapshot.getPeriodUs())
 ));
 
 return true;
   }
+
+  /**
+   * Calculates the total cores allocated through quotas. A negative value 
indicates that no quota has been specified.
+   * We use -1 because that's the default value used in the cgroup.
+   *
+   * @param quotaUs  the cgroup quota value.
+   * @param periodUs the cgroup period value.
+   * @return the calculated processor quota, -1 if no quota or period set.
+   */
+  public static double computeProcessorQuota(long quotaUs, long periodUs)
+  {
+return quotaUs < 0 || periodUs == 0
+   ? -1
+   : (double) quotaUs / periodUs;
+  }
 }
diff --git 
a/core/src/test/java/org/apache/druid/java/util/metrics/CgroupCpuMonitorTest.java
 
b/core/src/test/java/org/apache/druid/java/util/metrics/CgroupCpuMonitorTest.java
index 4a05f5f..67c03d2 100644
--- 
a/core/src/test/java/org/apache/druid/java/util/metrics/CgroupCpuMonitorTest.java
+++ 
b/core/src/test/java/org/apache/druid/java/util/metrics/CgroupCpuMonitorTest.java
@@ -79,4 +79,14 @@ public class CgroupCpuMonitorTest
 Assert.assertEquals("cgroup/cpu/cores_quota", coresEvent.get("metric"));
 Assert.assertEquals(3.0D, coresEvent.get("value"));
   }
+
+  @Test
+  public void testQuotaCompute()
+  {
+Assert.assertEquals(-1, CgroupCpuMonitor.computeProcessorQuota(-1, 
10), 0);
+Assert.assertEquals(0, CgroupCpuMonitor.computeProcessorQuota(0, 10), 
0);
+Assert.assertEquals(-1, CgroupCpuMonitor.computeProcessorQuota(10, 0), 
0);
+Assert.assertEquals(2.0D, CgroupCpuMonitor.computeProcessorQuota(20, 
10), 0);
+Assert.assertEquals(0.5D, CgroupCpuMonitor.computeProcessorQuota(5, 
10), 0);
+  }
 }
diff --git a/docs/configuration/index.md b/docs/configuration/index.md
index 1d20029..c20d801 100644
--- a/docs/configuration/index.md
+++ b/docs/configuration/index.md
@@ -362,12 +362,15 @@ The following monitors are available:
 ||---|
|`org.apache.druid.client.cache.CacheMonitor`|Emits metrics (to logs) about 
the segment results cache for Historical and Broker processes. Reports typical 
cache statistics, including hits, misses, rates, and size (bytes and number of 
entries), as well as timeouts and errors.|
 |`org.apache.druid.java.util.metrics.SysMonitor`|Reports on various system 
activities and statuses using the [SIGAR 
library](https://github.com/hyperic/sigar). Requires execute privileges on 
files in `java.io.tmpdir`. Do not set `java.io.tmpdir` to `noexec` when using 
`SysMonitor`.|
-|`org.apache.druid.server.metrics.HistoricalMetricsMonitor`|Reports statistics 
on Historical processes. Available only on Historical processes.|
 |`org.apache.druid.java

[druid] branch master updated: Avoid primary key violation in segment tables under certain conditions when appending data to same interval (#11714)

2021-09-22 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 2355a60  Avoid primary key violation in segment tables under certain 
conditions when appending data to same interval (#11714)
2355a60 is described below

commit 2355a60419fe423faae9af7d95b97199b11309d7
Author: Agustin Gonzalez 
AuthorDate: Wed Sep 22 17:21:48 2021 -0700

Avoid primary key violation in segment tables under certain conditions when 
appending data to same interval (#11714)

* Fix issue of duplicate key under certain conditions when loading late 
data in streaming. Also fixes a documentation issue with 
skipSegmentLineageCheck.

* maxId may be null at this point, need to check for that

* Remove hypothetical case (it cannot happen)

* Revert compaction is simply "killing" the compacted segment and 
previously, used, overshadowed segments are visible again

* Add comments
---
 .../IndexerSQLMetadataStorageCoordinator.java  |  85 +++--
 .../appenderator/BaseAppenderatorDriver.java   |   2 +-
 .../realtime/appenderator/SegmentAllocator.java|   2 +-
 .../appenderator/StreamAppenderatorDriver.java |   6 +-
 .../IndexerSQLMetadataStorageCoordinatorTest.java  | 407 -
 5 files changed, 476 insertions(+), 26 deletions(-)

diff --git 
a/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java
 
b/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java
index 4887c90..c5081f9 100644
--- 
a/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java
+++ 
b/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java
@@ -253,13 +253,13 @@ public class IndexerSQLMetadataStorageCoordinator 
implements IndexerMetadataStor
 return numSegmentsMarkedUnused;
   }
 
-  private List getPendingSegmentsForIntervalWithHandle(
+  private Set getPendingSegmentsForIntervalWithHandle(
   final Handle handle,
   final String dataSource,
   final Interval interval
   ) throws IOException
   {
-final List identifiers = new ArrayList<>();
+final Set identifiers = new HashSet<>();
 
 final ResultIterator dbSegments =
 handle.createQuery(
@@ -843,15 +843,30 @@ public class IndexerSQLMetadataStorageCoordinator 
implements IndexerMetadataStor
   .execute();
   }
 
+  /**
+   * This function creates a new segment for the given 
datasource/interval/etc. A critical
+   * aspect of the creation is to make sure that the new version and new 
partition number make
+   * sense given the existing segments and pending segments; it is also very 
important to avoid
+   * clashes with existing pending and used/unused segments.
+   * @param handle Database handle
+   * @param dataSource datasource for the new segment
+   * @param interval interval for the new segment
+   * @param partialShardSpec Shard spec info minus segment id stuff
+   * @param existingVersion Version of segments in interval, used to compute 
the version of the very first segment in
+   *interval
+   * @return
+   * @throws IOException
+   */
   @Nullable
   private SegmentIdWithShardSpec createNewSegment(
   final Handle handle,
   final String dataSource,
   final Interval interval,
   final PartialShardSpec partialShardSpec,
-  final String maxVersion
+  final String existingVersion
   ) throws IOException
   {
+// Get the time chunk and associated data segments for the given interval, 
if any
 final List> existingChunks = 
getTimelineForIntervalsWithHandle(
 handle,
 dataSource,
@@ -884,66 +899,94 @@ public class IndexerSQLMetadataStorageCoordinator 
implements IndexerMetadataStor
 // See PartitionIds.
 .filter(segment -> 
segment.getShardSpec().sharePartitionSpace(partialShardSpec))) {
   // Don't use the stream API for performance.
+  // Note that this will compute the max id of existing, visible data 
segments in the time chunk:
   if (maxId == null || maxId.getShardSpec().getPartitionNum() < 
segment.getShardSpec().getPartitionNum()) {
 maxId = SegmentIdWithShardSpec.fromDataSegment(segment);
   }
 }
   }
 
-  final List pendings = 
getPendingSegmentsForIntervalWithHandle(
+  // Get the version of the existing chunk, we might need it in some of 
the cases below
+  // to compute the new identifier's version
+  @Nullable
+  final String versionOfExistingChunk;
+  if (!existingChunks.isEmpty()) {
+// remember: only one chunk is possible for a given interval, so get the 
first and only one
+versionOfExistingChunk = existingChunks.get(0).getVersion();
+  
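The `maxId` scan in the `createNewSegment` diff above is a plain linear maximum over partition numbers. A minimal standalone sketch, where `Seg` is a made-up stand-in for `SegmentIdWithShardSpec` (the real class carries far more state):

```java
import java.util.List;

public class MaxIdSketch
{
  // Stand-in for a segment id; only the shard spec's partition number matters here.
  record Seg(int partitionNum) {}

  // Linear scan for the segment with the highest partition number,
  // mirroring the maxId loop in createNewSegment; returns null when empty.
  static Seg maxId(List<Seg> segments)
  {
    Seg maxId = null;
    for (Seg segment : segments) {
      if (maxId == null || maxId.partitionNum() < segment.partitionNum()) {
        maxId = segment;
      }
    }
    return maxId;
  }

  public static void main(String[] args)
  {
    System.out.println(maxId(List.of(new Seg(0), new Seg(3), new Seg(1))).partitionNum()); // 3
  }
}
```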

[druid] branch master updated: Task reports for parallel task: single phase and sequential mode (#11688)

2021-09-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 22b41dd  Task reports for parallel task: single phase and sequential 
mode (#11688)
22b41dd is described below

commit 22b41ddbbfe2b07b085e295ba171bcdc07e04900
Author: Jonathan Wei 
AuthorDate: Thu Sep 16 13:58:11 2021 -0500

Task reports for parallel task: single phase and sequential mode (#11688)

* Task reports for parallel task: single phase and sequential mode

* Address comments

* Add null check for currentSubTaskHolder
---
 .../druid/indexing/common/task/IndexTask.java  |  17 +-
 .../parallel/ParallelIndexSupervisorTask.java  | 222 ++-
 .../batch/parallel/PartialSegmentMergeTask.java|   6 +-
 .../task/batch/parallel/PushedSegmentsReport.java  |  22 +-
 .../task/batch/parallel/SinglePhaseSubTask.java| 315 ++---
 .../AbstractParallelIndexSupervisorTaskTest.java   |  19 +-
 .../ParallelIndexSupervisorTaskResourceTest.java   |   3 +-
 .../batch/parallel/PushedSegmentsReportTest.java   |  32 +++
 .../parallel/SinglePhaseParallelIndexingTest.java  | 169 ++-
 .../incremental/RowIngestionMetersTotals.java  |  35 +++
 .../incremental/RowIngestionMetersTotalsTest.java  |  32 +++
 .../client/indexing/HttpIndexingServiceClient.java |  26 ++
 .../client/indexing/IndexingServiceClient.java |   3 +
 .../indexing/HttpIndexingServiceClientTest.java|  67 +
 .../client/indexing/NoopIndexingServiceClient.java |   7 +
 15 files changed, 903 insertions(+), 72 deletions(-)

diff --git 
a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/IndexTask.java
 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/IndexTask.java
index a798522..f22c2a0 100644
--- 
a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/IndexTask.java
+++ 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/IndexTask.java
@@ -286,7 +286,12 @@ public class IndexTask extends AbstractBatchIndexTask 
implements ChatHandler
   )
   {
 IndexTaskUtils.datasourceAuthorizationCheck(req, Action.READ, 
getDataSource(), authorizerMapper);
-Map> events = new HashMap<>();
+return Response.ok(doGetUnparseableEvents(full)).build();
+  }
+
+  public Map doGetUnparseableEvents(String full)
+  {
+Map events = new HashMap<>();
 
 boolean needsDeterminePartitions = false;
 boolean needsBuildSegments = false;
@@ -325,11 +330,10 @@ public class IndexTask extends AbstractBatchIndexTask 
implements ChatHandler
   )
   );
 }
-
-return Response.ok(events).build();
+return events;
   }
 
-  private Map doGetRowStats(String full)
+  public Map doGetRowStats(String full)
   {
 Map returnMap = new HashMap<>();
 Map totalsMap = new HashMap<>();
@@ -784,6 +788,11 @@ public class IndexTask extends AbstractBatchIndexTask 
implements ChatHandler
 return hllCollectors;
   }
 
+  public IngestionState getIngestionState()
+  {
+return ingestionState;
+  }
+
   /**
* This method reads input data row by row and adds the read row to a proper 
segment using {@link BaseAppenderatorDriver}.
* If there is no segment for the row, a new one is created.  Segments can 
be published in the middle of reading inputs
diff --git 
a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
index b66f49f..19b965c 100644
--- 
a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
+++ 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
@@ -26,6 +26,7 @@ import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
 import com.google.common.base.Throwables;
 import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.ImmutableMap;
 import com.google.common.collect.Multimap;
 import it.unimi.dsi.fastutil.objects.Object2IntMap;
 import it.unimi.dsi.fastutil.objects.Object2IntOpenHashMap;
@@ -66,6 +67,8 @@ import org.apache.druid.java.util.common.Pair;
 import org.apache.druid.java.util.common.StringUtils;
 import org.apache.druid.java.util.common.granularity.Granularity;
 import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.segment.incremental.RowIngestionMeters;
+import org.apache.druid.segment.incremental.RowIngestionMetersTotals;
 import org.apache.druid.segment.indexing.TuningConfig;
 import org.apache.druid.segment.indexing.granularity.Arbitr

[druid] branch master updated: Allow kill task to mark segments as unused (#11501)

2021-07-29 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 9b250c5  Allow kill task to mark segments as unused (#11501)
9b250c5 is described below

commit 9b250c54aa1b18c21ff8369ee4a4a6015bbafc40
Author: Jonathan Wei 
AuthorDate: Thu Jul 29 10:48:43 2021 -0500

Allow kill task to mark segments as unused (#11501)

* Allow kill task to mark segments as unused

* Add IndexerSQLMetadataStorageCoordinator test

* Update docs/ingestion/data-management.md

Co-authored-by: Jihoon Son 

* Add warning to kill task doc

Co-authored-by: Jihoon Son 
---
 docs/ingestion/data-management.md  |  9 ++-
 .../common/actions/MarkSegmentsAsUnusedAction.java | 67 --
 .../druid/indexing/common/actions/TaskAction.java  |  1 +
 .../common/task/KillUnusedSegmentsTask.java| 23 +++-
 ...ClientKillUnusedSegmentsTaskQuerySerdeTest.java |  8 ++-
 .../common/task/KillUnusedSegmentsTaskTest.java| 53 -
 .../druid/indexing/overlord/TaskLifecycleTest.java |  8 ++-
 .../TestIndexerMetadataStorageCoordinator.java |  6 ++
 .../ClientKillUnusedSegmentsTaskQuery.java | 20 +--
 .../client/indexing/HttpIndexingServiceClient.java |  2 +-
 .../IndexerMetadataStorageCoordinator.java | 10 
 .../IndexerSQLMetadataStorageCoordinator.java  | 27 +
 .../ClientKillUnusedSegmentsTaskQueryTest.java |  9 ++-
 .../IndexerSQLMetadataStorageCoordinatorTest.java  | 32 +++
 14 files changed, 218 insertions(+), 57 deletions(-)

diff --git a/docs/ingestion/data-management.md 
b/docs/ingestion/data-management.md
index c9e592f..eb176a0 100644
--- a/docs/ingestion/data-management.md
+++ b/docs/ingestion/data-management.md
@@ -95,7 +95,9 @@ A data deletion tutorial is available at [Tutorial: Deleting 
data](../tutorials/
 
 ## Kill Task
 
-Kill tasks delete all information about a segment and removes it from deep 
storage. Segments to kill must be unused (used==0) in the Druid segment table. 
The available grammar is:
+The kill task deletes all information about segments and removes them from 
deep storage. Segments to kill must be unused (used==0) in the Druid segment 
table.
+
+The available grammar is:
 
 ```json
 {
@@ -103,10 +105,15 @@ Kill tasks delete all information about a segment and 
removes it from deep stora
 "id": ,
 "dataSource": ,
 "interval" : ,
+"markAsUnused": ,
 "context": 
 }
 ```
 
+If `markAsUnused` is true (default is false), the kill task will first mark 
any segments within the specified interval as unused, before deleting the 
unused segments within the interval.
+
+**WARNING!** The kill task permanently removes all information about the 
affected segments from the metadata store and deep storage. These segments 
cannot be recovered after the kill task runs; this operation cannot be undone.
+
 ## Retention
 
 Druid supports retention rules, which are used to define intervals of time 
where data should be preserved, and intervals where data should be discarded.
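For concreteness, a kill task payload following the grammar in the data-management.md hunk above might look like this (the datasource name and interval are made-up illustration values):

```json
{
  "type": "kill",
  "dataSource": "wikipedia",
  "interval": "2016-06-27/2016-06-28",
  "markAsUnused": true
}
```

With `markAsUnused` set to true, any used segments in the interval are first marked unused and then permanently deleted along with previously unused ones.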
diff --git 
a/server/src/main/java/org/apache/druid/client/indexing/ClientKillUnusedSegmentsTaskQuery.java
 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/actions/MarkSegmentsAsUnusedAction.java
similarity index 52%
copy from 
server/src/main/java/org/apache/druid/client/indexing/ClientKillUnusedSegmentsTaskQuery.java
copy to 
indexing-service/src/main/java/org/apache/druid/indexing/common/actions/MarkSegmentsAsUnusedAction.java
index ec008d3..5ed7b7e 100644
--- 
a/server/src/main/java/org/apache/druid/client/indexing/ClientKillUnusedSegmentsTaskQuery.java
+++ 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/actions/MarkSegmentsAsUnusedAction.java
@@ -17,56 +17,34 @@
  * under the License.
  */
 
-package org.apache.druid.client.indexing;
+package org.apache.druid.indexing.common.actions;
 
 import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
 import com.fasterxml.jackson.annotation.JsonProperty;
-import com.google.common.base.Preconditions;
+import com.fasterxml.jackson.core.type.TypeReference;
+import org.apache.druid.indexing.common.task.Task;
 import org.joda.time.Interval;
 
-import java.util.Objects;
-
-/**
- * Client representation of 
org.apache.druid.indexing.common.task.KillUnusedSegmentsTask. JSON 
searialization
- * fields of this class must correspond to those of 
org.apache.druid.indexing.common.task.KillUnusedSegmentsTask, except
- * for "id" and "context" fields.
- */
-public class ClientKillUnusedSegmentsTaskQuery implements ClientTaskQuery
+public class MarkSegmentsAsUnusedAction implements TaskAction
 {
-  public static final String 

[druid] branch master updated (51f9831 -> 6b272c8)

2021-06-10 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 51f9831  Fix wrong encoding in 
PredicateFilteredDimensionSelector.getRow (#11339)
 add 6b272c8  adjust topn heap algorithm to only use known cardinality path 
when dictionary is unique (#11186)

No new revisions were added by this update.

Summary of changes:
 .../druid/query/topn/HeapBasedTopNAlgorithm.java   |  1 -
 .../query/topn/TimeExtractionTopNAlgorithm.java|  4 ---
 .../apache/druid/query/topn/TopNQueryEngine.java   |  2 +-
 .../types/StringTopNColumnAggregatesProcessor.java | 25 --
 .../TopNColumnAggregatesProcessorFactory.java  |  2 +-
 .../apache/druid/sql/calcite/CalciteQueryTest.java | 39 ++
 6 files changed, 63 insertions(+), 10 deletions(-)




[druid] branch master updated: Revert "Adjust HadoopIndexTask temp segment renaming to avoid potential race conditions (#11075)" (#11151)

2021-04-22 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 49a9c3f  Revert "Adjust HadoopIndexTask temp segment renaming to avoid 
potential race conditions (#11075)" (#11151)
49a9c3f is described below

commit 49a9c3ffb7b2da3401696d583bc2cd52e83f77bf
Author: Jonathan Wei 
AuthorDate: Thu Apr 22 15:33:27 2021 -0700

Revert "Adjust HadoopIndexTask temp segment renaming to avoid potential 
race conditions (#11075)" (#11151)

This reverts commit a2892d9c40793027ba8a8977c85b3de4a949e11c.
---
 indexing-hadoop/pom.xml|  15 -
 .../indexer/DataSegmentAndIndexZipFilePath.java|  97 
 .../org/apache/druid/indexer/FileSystemHelper.java |  38 --
 .../HadoopDruidDetermineConfigurationJob.java  |   7 +-
 .../druid/indexer/HadoopDruidIndexerJob.java   |  13 +-
 .../apache/druid/indexer/IndexGeneratorJob.java|  20 +-
 .../java/org/apache/druid/indexer/JobHelper.java   | 105 ++---
 .../druid/indexer/MetadataStorageUpdaterJob.java   |   4 +-
 .../druid/indexer/BatchDeltaIngestionTest.java |  10 +-
 .../DataSegmentAndIndexZipFilePathTest.java| 185 
 .../druid/indexer/HadoopDruidIndexerJobTest.java   |  76 
 .../druid/indexer/IndexGeneratorJobTest.java   |  16 +-
 .../druid/indexer/JobHelperPowerMockTest.java  | 216 -
 .../indexer/MetadataStorageUpdaterJobTest.java |  82 
 .../indexing/common/task/HadoopIndexTask.java  | 494 ++---
 integration-tests/pom.xml  |   4 -
 .../apache/druid/cli/CliInternalHadoopIndexer.java |   7 +-
 17 files changed, 212 insertions(+), 1177 deletions(-)

diff --git a/indexing-hadoop/pom.xml b/indexing-hadoop/pom.xml
index 8eacc7e..9557ab5 100644
--- a/indexing-hadoop/pom.xml
+++ b/indexing-hadoop/pom.xml
@@ -202,21 +202,6 @@
 mockito-core
 test
 
-
-org.powermock
-powermock-core
-test
-
-
-org.powermock
-powermock-module-junit4
-test
-
-
-org.powermock
-powermock-api-easymock
-test
-
 
 
 
diff --git 
a/indexing-hadoop/src/main/java/org/apache/druid/indexer/DataSegmentAndIndexZipFilePath.java
 
b/indexing-hadoop/src/main/java/org/apache/druid/indexer/DataSegmentAndIndexZipFilePath.java
deleted file mode 100644
index e12f7fb..000
--- 
a/indexing-hadoop/src/main/java/org/apache/druid/indexer/DataSegmentAndIndexZipFilePath.java
+++ /dev/null
@@ -1,97 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-
-package org.apache.druid.indexer;
-
-import com.fasterxml.jackson.annotation.JsonCreator;
-import com.fasterxml.jackson.annotation.JsonProperty;
-import org.apache.druid.timeline.DataSegment;
-
-import java.util.List;
-import java.util.Objects;
-
-/**
- * holds a {@link DataSegment} with the temporary file path where the 
corresponding index zip file is currently stored
- * and the final path where the index zip file should eventually be moved to.
- * see {@link JobHelper#renameIndexFilesForSegments(HadoopIngestionSpec, List)}
- */
-public class DataSegmentAndIndexZipFilePath
-{
-  private final DataSegment segment;
-  private final String tmpIndexZipFilePath;
-  private final String finalIndexZipFilePath;
-
-  @JsonCreator
-  public DataSegmentAndIndexZipFilePath(
-  @JsonProperty("segment") DataSegment segment,
-  @JsonProperty("tmpIndexZipFilePath") String tmpIndexZipFilePath,
-  @JsonProperty("finalIndexZipFilePath") String finalIndexZipFilePath
-  )
-  {
-this.segment = segment;
-this.tmpIndexZipFilePath = tmpIndexZipFilePath;
-this.finalIndexZipFilePath = finalIndexZipFilePath;
-  }
-
-  @JsonProperty
-  public DataSegment getSegment()
-  {
-return segment;
-  }
-
-  @JsonProperty
-  public String getTmpIndexZipFilePath()
-  {
-return tmpIndexZipFilePath;
-  }
-
-  @JsonProperty
-  public St

[druid] branch master updated (6d2b5cd -> a2892d9)

2021-04-21 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 6d2b5cd  Add feature to automatically remove audit logs based on 
retention period (#11084)
 add a2892d9  Adjust HadoopIndexTask temp segment renaming to avoid 
potential race conditions (#11075)

No new revisions were added by this update.

Summary of changes:
 indexing-hadoop/pom.xml|  15 +
 .../indexer/DataSegmentAndIndexZipFilePath.java|  97 
 .../org/apache/druid/indexer/FileSystemHelper.java |  19 +-
 .../HadoopDruidDetermineConfigurationJob.java  |   7 +-
 .../druid/indexer/HadoopDruidIndexerJob.java   |  13 +-
 .../apache/druid/indexer/IndexGeneratorJob.java|  20 +-
 .../java/org/apache/druid/indexer/JobHelper.java   | 105 +++--
 .../druid/indexer/MetadataStorageUpdaterJob.java   |   4 +-
 .../druid/indexer/BatchDeltaIngestionTest.java |  10 +-
 .../DataSegmentAndIndexZipFilePathTest.java| 185 
 .../druid/indexer/HadoopDruidIndexerJobTest.java   |  76 
 .../druid/indexer/IndexGeneratorJobTest.java   |  16 +-
 .../druid/indexer/JobHelperPowerMockTest.java  | 216 +
 .../indexer/MetadataStorageUpdaterJobTest.java |  82 
 .../indexing/common/task/HadoopIndexTask.java  | 494 +++--
 integration-tests/pom.xml  |   4 +
 .../apache/druid/cli/CliInternalHadoopIndexer.java |   7 +-
 17 files changed, 1149 insertions(+), 221 deletions(-)
 create mode 100644 
indexing-hadoop/src/main/java/org/apache/druid/indexer/DataSegmentAndIndexZipFilePath.java
 copy core/src/main/java/org/apache/druid/java/util/common/IOE.java => 
indexing-hadoop/src/main/java/org/apache/druid/indexer/FileSystemHelper.java 
(65%)
 create mode 100644 
indexing-hadoop/src/test/java/org/apache/druid/indexer/DataSegmentAndIndexZipFilePathTest.java
 create mode 100644 
indexing-hadoop/src/test/java/org/apache/druid/indexer/HadoopDruidIndexerJobTest.java
 create mode 100644 
indexing-hadoop/src/test/java/org/apache/druid/indexer/JobHelperPowerMockTest.java
 create mode 100644 
indexing-hadoop/src/test/java/org/apache/druid/indexer/MetadataStorageUpdaterJobTest.java




[druid] branch master updated: Add retry around query loop in ITWikipediaQueryTest.testQueryLaningLaneIsLimited (#11077)

2021-04-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new e7b2ecd  Add retry around query loop in 
ITWikipediaQueryTest.testQueryLaningLaneIsLimited (#11077)
e7b2ecd is described below

commit e7b2ecd0fd3f79069aa8f6065b85e8072ee773eb
Author: Jonathan Wei 
AuthorDate: Fri Apr 9 10:54:34 2021 -0700

Add retry around query loop in 
ITWikipediaQueryTest.testQueryLaningLaneIsLimited (#11077)
---
 .../druid/tests/query/ITWikipediaQueryTest.java| 82 ++
 1 file changed, 51 insertions(+), 31 deletions(-)

diff --git 
a/integration-tests/src/test/java/org/apache/druid/tests/query/ITWikipediaQueryTest.java
 
b/integration-tests/src/test/java/org/apache/druid/tests/query/ITWikipediaQueryTest.java
index b5652b9..9fc5209 100644
--- 
a/integration-tests/src/test/java/org/apache/druid/tests/query/ITWikipediaQueryTest.java
+++ 
b/integration-tests/src/test/java/org/apache/druid/tests/query/ITWikipediaQueryTest.java
@@ -21,6 +21,7 @@ package org.apache.druid.tests.query;
 
 import com.google.common.collect.ImmutableMap;
 import com.google.inject.Inject;
+import org.apache.druid.java.util.common.logger.Logger;
 import org.apache.druid.java.util.http.client.response.StatusResponseHolder;
 import org.apache.druid.query.Druids;
 import org.apache.druid.query.QueryCapacityExceededException;
@@ -47,6 +48,8 @@ import java.util.concurrent.Future;
 @Guice(moduleFactory = DruidTestModuleFactory.class)
 public class ITWikipediaQueryTest
 {
+  private static final Logger LOG = new Logger(ITWikipediaQueryTest.class);
+
   public static final String WIKIPEDIA_DATA_SOURCE = "wikipedia_editstream";
   private static final String WIKI_LOOKUP = "wiki-simple";
   private static final String WIKIPEDIA_QUERIES_RESOURCE = 
"/queries/wikipedia_editstream_queries.json";
@@ -85,37 +88,54 @@ public class ITWikipediaQueryTest
   @Test
   public void testQueryLaningLaneIsLimited() throws Exception
   {
-// the broker is configured with a manually defined query lane, 'one' with 
limit 1
-//  -Ddruid.query.scheduler.laning.type=manual
-//  -Ddruid.query.scheduler.laning.lanes.one=1
-// by issuing 50 queries, at least 1 of them will succeed on 'one', and at 
least 1 of them will overlap enough to
-// get limited
-final int numQueries = 50;
-List> futures = new ArrayList<>(numQueries);
-for (int i = 0; i < numQueries; i++) {
-  futures.add(
-  queryClient.queryAsync(
-  queryHelper.getQueryURL(config.getBrokerUrl()),
-  getQueryBuilder().build()
-  )
-  );
-}
-
-int success = 0;
-int limited = 0;
-
-for (Future future : futures) {
-  StatusResponseHolder status = future.get();
-  if (status.getStatus().getCode() == 
QueryCapacityExceededException.STATUS_CODE) {
-limited++;
-
Assert.assertTrue(status.getContent().contains(QueryCapacityExceededException.makeLaneErrorMessage("one",
 1)));
-  } else if (status.getStatus().getCode() == 
HttpResponseStatus.OK.getCode()) {
-success++;
-  }
-}
-
-Assert.assertTrue(success > 0);
-Assert.assertTrue(limited > 0);
+ITRetryUtil.retryUntil(
+() -> {
+  // the broker is configured with a manually defined query lane, 
'one' with limit 1
+  //  -Ddruid.query.scheduler.laning.type=manual
+  //  -Ddruid.query.scheduler.laning.lanes.one=1
+  // by issuing 50 queries, at least 1 of them will succeed on 'one', 
and at least 1 of them will overlap enough to
+  // get limited.
+  // It's possible but unlikely that these queries execute in a way 
that none of them overlap, so we
+  // retry this test a few times to compensate for this.
+  final int numQueries = 50;
+  List> futures = new 
ArrayList<>(numQueries);
+  for (int i = 0; i < numQueries; i++) {
+futures.add(
+queryClient.queryAsync(
+queryHelper.getQueryURL(config.getBrokerUrl()),
+getQueryBuilder().build()
+)
+);
+  }
+
+  int success = 0;
+  int limited = 0;
+
+  for (Future future : futures) {
+StatusResponseHolder status = future.get();
+if (status.getStatus().getCode() == 
QueryCapacityExceededException.STATUS_CODE) {
+  limited++;
+  
Assert.assertTrue(status.getContent().contains(QueryCapacityExceededException.makeLaneErrorMessage("one",
 1)));
+} else if (status.getStatus().getCode() == 
HttpResponseStatus.OK.getCode()) {
+  success++;
+}
+  }
+
+  try {
+Assert.assertTrue(
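The commit above wraps the laning assertions in ITRetryUtil.retryUntil so that runs where the 50 queries happen not to overlap are retried rather than failed. A minimal sketch of that retry-until pattern — the helper name and parameters here are illustrative, not Druid's actual ITRetryUtil API:

```java
import java.util.concurrent.Callable;

public class RetryUntil
{
  // Retry a condition until it holds or attempts are exhausted. Returns true
  // as soon as the condition passes, false if it never does.
  public static boolean retryUntil(Callable<Boolean> condition, int maxRetries, long sleepMillis)
      throws Exception
  {
    for (int attempt = 0; attempt < maxRetries; attempt++) {
      if (condition.call()) {
        return true;             // the assertions passed on this attempt
      }
      Thread.sleep(sleepMillis); // back off before re-running the queries
    }
    return false;                // condition never held within maxRetries
  }

  public static void main(String[] args) throws Exception
  {
    // A condition that only succeeds on the third attempt, like a test run
    // where the concurrent queries fail to overlap at first.
    final int[] calls = {0};
    boolean ok = retryUntil(() -> ++calls[0] >= 3, 5, 1L);
    System.out.println(ok + " after " + calls[0] + " attempts"); // prints "true after 3 attempts"
  }
}
```

The test body in the commit becomes the Callable: it runs the queries, counts successes and limited responses, and returns whether both counts were positive.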

[druid] branch master updated (67dd61e -> d7f5293)

2021-04-01 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 67dd61e  remove outdated info from faq (#11053)
 add d7f5293  Add an option for ingestion task to drop (mark unused) all 
existing segments that are contained by interval in the ingestionSpec (#11025)

No new revisions were added by this update.

Summary of changes:
 docs/ingestion/compaction.md   |   4 +-
 docs/ingestion/native-batch.md |  35 +-
 .../actions/SegmentTransactionalInsertAction.java  |  37 +-
 .../common/task/AbstractBatchIndexTask.java|  23 +
 .../task/AppenderatorDriverRealtimeIndexTask.java  |  12 +-
 .../druid/indexing/common/task/CompactionTask.java |   3 +-
 .../druid/indexing/common/task/IndexTask.java  |  32 +-
 .../InputSourceSplitParallelIndexTaskRunner.java   |   3 +-
 .../task/batch/parallel/ParallelIndexIOConfig.java |   7 +-
 .../parallel/ParallelIndexSupervisorTask.java  |  12 +-
 .../SinglePhaseParallelIndexTaskRunner.java|   3 +-
 .../indexing/seekablestream/SequenceMetadata.java  |  13 +-
 .../SegmentTransactionalInsertActionTest.java  |  58 ++-
 .../indexing/common/actions/TaskActionTestKit.java |   9 +-
 .../AppenderatorDriverRealtimeIndexTaskTest.java   |   3 +-
 .../common/task/CompactionTaskParallelRunTest.java |  38 +-
 .../common/task/CompactionTaskRunTest.java |  55 ++-
 .../common/task/IndexIngestionSpecTest.java|   4 +
 .../druid/indexing/common/task/IndexTaskTest.java  | 461 +++--
 .../druid/indexing/common/task/TaskSerdeTest.java  |   4 +-
 .../AbstractMultiPhaseParallelIndexingTest.java|   3 +-
 .../ParallelIndexSupervisorTaskKillTest.java   |   9 +-
 .../ParallelIndexSupervisorTaskResourceTest.java   |   6 +-
 .../ParallelIndexSupervisorTaskSerdeTest.java  |   3 +-
 .../parallel/ParallelIndexSupervisorTaskTest.java  |   3 +-
 .../parallel/ParallelIndexTestingFactory.java  |   2 +-
 .../parallel/SinglePhaseParallelIndexingTest.java  |   3 +-
 .../batch/parallel/SinglePhaseSubTaskSpecTest.java |   1 +
 .../druid/indexing/overlord/TaskLifecycleTest.java |   8 +-
 .../seekablestream/SequenceMetadataTest.java   | 148 +++
 .../TestIndexerMetadataStorageCoordinator.java |   1 +
 .../coordinator/duty/ITAutoCompactionTest.java |  26 +-
 .../AbstractLocalInputSourceParallelIndexTest.java |   5 +
 .../tests/indexer/ITAppendBatchIndexTest.java  |   5 +
 .../ITCombiningInputSourceParallelIndexTest.java   |   5 +
 .../tests/indexer/ITOverwriteBatchIndexTest.java   | 156 +++
 ...son => wikipedia_index_queries_only_data3.json} | 122 --
 .../wikipedia_local_input_source_index_task.json   |   1 +
 .../indexing/overlord/DataSourceMetadata.java  |   2 +-
 .../IndexerMetadataStorageCoordinator.java |  19 +-
 .../IndexerSQLMetadataStorageCoordinator.java  | 124 +-
 .../druid/metadata/SqlSegmentsMetadataManager.java |   2 +-
 .../druid/segment/indexing/BatchIOConfig.java  |   2 +
 .../appenderator/BaseAppenderatorDriver.java   |   2 +
 .../appenderator/BatchAppenderatorDriver.java  |   3 +
 .../appenderator/StreamAppenderatorDriver.java |   1 +
 .../TransactionalSegmentPublisher.java |   3 +
 .../IndexerSQLMetadataStorageCoordinatorTest.java  | 239 ++-
 .../appenderator/BatchAppenderatorDriverTest.java  |   6 +-
 .../appenderator/StreamAppenderatorDriverTest.java |   4 +-
 website/.spelling  |   1 +
 51 files changed, 1550 insertions(+), 181 deletions(-)
 create mode 100644 
indexing-service/src/test/java/org/apache/druid/indexing/seekablestream/SequenceMetadataTest.java
 create mode 100644 
integration-tests/src/test/java/org/apache/druid/tests/indexer/ITOverwriteBatchIndexTest.java
 copy 
integration-tests/src/test/resources/indexer/{wikipedia_index_queries_hour_query_granularity.json
 => wikipedia_index_queries_only_data3.json} (56%)




[druid] branch master updated: Add resources used to EXPLAIN PLAN FOR output (#11024)

2021-03-23 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 8296123  Add resources used to EXPLAIN PLAN FOR output (#11024)
8296123 is described below

commit 8296123d895db7d06bc4517db5e767afb7862b83
Author: Jonathan Wei 
AuthorDate: Tue Mar 23 17:21:15 2021 -0700

Add resources used to EXPLAIN PLAN FOR output (#11024)
---
 .../druid/sql/calcite/planner/DruidPlanner.java| 28 ++
 .../druid/sql/calcite/planner/PlannerFactory.java  |  6 +++--
 .../druid/sql/avatica/DruidAvaticaHandlerTest.java |  4 +++-
 .../apache/druid/sql/calcite/CalciteQueryTest.java | 17 +
 .../org/apache/druid/sql/http/SqlResourceTest.java |  4 +++-
 5 files changed, 46 insertions(+), 13 deletions(-)

diff --git 
a/sql/src/main/java/org/apache/druid/sql/calcite/planner/DruidPlanner.java 
b/sql/src/main/java/org/apache/druid/sql/calcite/planner/DruidPlanner.java
index 772f650..6c91673 100644
--- a/sql/src/main/java/org/apache/druid/sql/calcite/planner/DruidPlanner.java
+++ b/sql/src/main/java/org/apache/druid/sql/calcite/planner/DruidPlanner.java
@@ -19,6 +19,8 @@
 
 package org.apache.druid.sql.calcite.planner;
 
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.ObjectMapper;
 import com.google.common.base.Preconditions;
 import com.google.common.base.Supplier;
 import com.google.common.base.Suppliers;
@@ -62,6 +64,7 @@ import org.apache.calcite.util.Pair;
 import org.apache.druid.java.util.common.guava.BaseSequence;
 import org.apache.druid.java.util.common.guava.Sequence;
 import org.apache.druid.java.util.common.guava.Sequences;
+import org.apache.druid.java.util.emitter.EmittingLogger;
 import org.apache.druid.segment.DimensionHandlerUtils;
 import org.apache.druid.sql.calcite.rel.DruidConvention;
 import org.apache.druid.sql.calcite.rel.DruidRel;
@@ -75,19 +78,24 @@ import java.util.Properties;
 
 public class DruidPlanner implements Closeable
 {
+  private static final EmittingLogger log = new 
EmittingLogger(DruidPlanner.class);
+
   private final FrameworkConfig frameworkConfig;
   private final Planner planner;
   private final PlannerContext plannerContext;
+  private final ObjectMapper jsonMapper;
   private RexBuilder rexBuilder;
 
   public DruidPlanner(
   final FrameworkConfig frameworkConfig,
-  final PlannerContext plannerContext
+  final PlannerContext plannerContext,
+  final ObjectMapper jsonMapper
   )
   {
 this.frameworkConfig = frameworkConfig;
 this.planner = Frameworks.getPlanner(frameworkConfig);
 this.plannerContext = plannerContext;
+this.jsonMapper = jsonMapper;
   }
 
   /**
@@ -358,8 +366,17 @@ public class DruidPlanner implements Closeable
   )
   {
 final String explanation = RelOptUtil.dumpPlan("", rel, 
explain.getFormat(), explain.getDetailLevel());
+String resources;
+try {
+  resources = jsonMapper.writeValueAsString(plannerContext.getResources());
+}
+catch (JsonProcessingException jpe) {
+  // this should never happen, we create the Resources here, not a user
+  log.error(jpe, "Encountered exception while serializing Resources for 
explain output");
+  resources = null;
+}
 final Supplier> resultsSupplier = Suppliers.ofInstance(
-Sequences.simple(ImmutableList.of(new Object[]{explanation})));
+Sequences.simple(ImmutableList.of(new Object[]{explanation, 
resources})));
 return new PlannerResult(resultsSupplier, 
getExplainStructType(rel.getCluster().getTypeFactory()));
   }
 
@@ -414,8 +431,11 @@ public class DruidPlanner implements Closeable
   private static RelDataType getExplainStructType(RelDataTypeFactory 
typeFactory)
   {
 return typeFactory.createStructType(
-ImmutableList.of(Calcites.createSqlType(typeFactory, 
SqlTypeName.VARCHAR)),
-ImmutableList.of("PLAN")
+ImmutableList.of(
+Calcites.createSqlType(typeFactory, SqlTypeName.VARCHAR),
+Calcites.createSqlType(typeFactory, SqlTypeName.VARCHAR)
+),
+ImmutableList.of("PLAN", "RESOURCES")
 );
   }
 
diff --git 
a/sql/src/main/java/org/apache/druid/sql/calcite/planner/PlannerFactory.java 
b/sql/src/main/java/org/apache/druid/sql/calcite/planner/PlannerFactory.java
index fc584d7..9f86eda 100644
--- a/sql/src/main/java/org/apache/druid/sql/calcite/planner/PlannerFactory.java
+++ b/sql/src/main/java/org/apache/druid/sql/calcite/planner/PlannerFactory.java
@@ -107,7 +107,8 @@ public class PlannerFactory
 
 return new DruidPlanner(
 frameworkConfig,
-plannerContext
+plannerContext,
+jsonMapper
 );
   }
 
@@ -121,7 +122,8 @@ public class PlannerFactory
 
 return new DruidPlanner(
 frameworkCo
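The DruidPlanner change above adds a second RESOURCES column to EXPLAIN PLAN FOR output, serializing the planner's resource set to JSON (and logging plus falling back to null on JsonProcessingException). A dependency-free sketch of building that two-column row — hand-rolled JSON stands in for Jackson's ObjectMapper, and the names are illustrative only:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ExplainOutput
{
  // Build the two-column EXPLAIN row introduced by the commit: the plan text
  // plus a JSON array of the resource names the query touches. The real code
  // serializes Resource objects via ObjectMapper.writeValueAsString and falls
  // back to null on failure; joining quoted names keeps this sketch
  // dependency-free.
  public static Object[] explainRow(String explanation, List<String> resourceNames)
  {
    final String resources = resourceNames.stream()
        .map(name -> "\"" + name + "\"")
        .collect(Collectors.joining(",", "[", "]"));
    return new Object[]{explanation, resources}; // columns: PLAN, RESOURCES
  }

  public static void main(String[] args)
  {
    Object[] row = explainRow("DruidQueryRel(...)", List.of("wikipedia"));
    System.out.println(row[1]); // prints ["wikipedia"]
  }
}
```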

[druid] branch master updated: fix SQL issue for group by queries with time filter that gets optimized to false (#10968)

2021-03-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 5829432  fix SQL issue for group by queries with time filter that gets 
optimized to false (#10968)
5829432 is described below

commit 58294329b77a563c5eb9327e9365c48ad60c0021
Author: Clint Wylie 
AuthorDate: Tue Mar 9 19:41:16 2021 -0800

fix SQL issue for group by queries with time filter that gets optimized to 
false (#10968)

* fix SQL issue for group by queries with time filter that gets optimized 
to false

* short circuit always false in CombineAndSimplifyBounds

* adjust

* javadocs

* add preconditions for and/or filters to ensure they have children

* add comments, remove preconditions
---
 .../apache/druid/query/groupby/GroupByQuery.java   |  9 +++-
 .../filtration/CombineAndSimplifyBounds.java   | 16 +-
 .../apache/druid/sql/calcite/CalciteQueryTest.java | 25 +-
 3 files changed, 47 insertions(+), 3 deletions(-)

diff --git 
a/processing/src/main/java/org/apache/druid/query/groupby/GroupByQuery.java 
b/processing/src/main/java/org/apache/druid/query/groupby/GroupByQuery.java
index 8162f24..b4fb075 100644
--- a/processing/src/main/java/org/apache/druid/query/groupby/GroupByQuery.java
+++ b/processing/src/main/java/org/apache/druid/query/groupby/GroupByQuery.java
@@ -345,6 +345,8 @@ public class GroupByQuery extends BaseQuery
   /**
* If this query has a single universal timestamp, return it. Otherwise 
return null.
*
+   * If {@link #getIntervals()} is empty, there are no results (or timestamps) 
so this method returns null.
+   *
* This method will return a nonnull timestamp in the following two cases:
*
* 1) CTX_KEY_FUDGE_TIMESTAMP is set (in which case this timestamp will be 
returned).
@@ -715,7 +717,12 @@ public class GroupByQuery extends BaseQuery
 if (!timestampStringFromContext.isEmpty()) {
   return DateTimes.utc(Long.parseLong(timestampStringFromContext));
 } else if (Granularities.ALL.equals(granularity)) {
-  final DateTime timeStart = getIntervals().get(0).getStart();
+  final List intervals = getIntervals();
+  if (intervals.isEmpty()) {
+// null, the "universal timestamp" of nothing
+return null;
+  }
+  final DateTime timeStart = intervals.get(0).getStart();
   return granularity.getIterable(new Interval(timeStart, 
timeStart.plus(1))).iterator().next().getStart();
 } else {
   return null;
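The GroupByQuery hunk above can be reduced to a small sketch: when getIntervals() is empty there are no results, so the universal-timestamp lookup returns null instead of dereferencing the first interval. Here java.time.Instant stands in for Druid's Joda DateTime:

```java
import java.time.Instant;
import java.util.List;

public class UniversalTimestamp
{
  // Mirrors the guard added in GroupByQuery: with ALL granularity an empty
  // interval list means the query has no results, so there is no universal
  // timestamp and null is returned rather than calling get(0) on an empty list.
  public static Instant universalTimestamp(List<Instant> intervalStarts)
  {
    if (intervalStarts.isEmpty()) {
      return null; // the "universal timestamp" of nothing
    }
    return intervalStarts.get(0); // start of the first interval
  }

  public static void main(String[] args)
  {
    System.out.println(universalTimestamp(List.of())); // prints null
  }
}
```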
diff --git 
a/sql/src/main/java/org/apache/druid/sql/calcite/filtration/CombineAndSimplifyBounds.java
 
b/sql/src/main/java/org/apache/druid/sql/calcite/filtration/CombineAndSimplifyBounds.java
index 971ba16..7b4f4b6 100644
--- 
a/sql/src/main/java/org/apache/druid/sql/calcite/filtration/CombineAndSimplifyBounds.java
+++ 
b/sql/src/main/java/org/apache/druid/sql/calcite/filtration/CombineAndSimplifyBounds.java
@@ -27,6 +27,7 @@ import org.apache.druid.java.util.common.ISE;
 import org.apache.druid.query.filter.AndDimFilter;
 import org.apache.druid.query.filter.BoundDimFilter;
 import org.apache.druid.query.filter.DimFilter;
+import org.apache.druid.query.filter.FalseDimFilter;
 import org.apache.druid.query.filter.NotDimFilter;
 import org.apache.druid.query.filter.OrDimFilter;
 
@@ -52,7 +53,10 @@ public class CombineAndSimplifyBounds extends 
BottomUpTransform
   @Override
   public DimFilter process(DimFilter filter)
   {
-if (filter instanceof AndDimFilter) {
+if (filter instanceof FalseDimFilter) {
+  // we might sometimes come into here with just a false from optimizing 
impossible conditions
+  return filter;
+} else if (filter instanceof AndDimFilter) {
   final List children = getAndFilterChildren((AndDimFilter) 
filter);
   final DimFilter one = doSimplifyAnd(children);
   final DimFilter two = negate(doSimplifyOr(negateAll(children)));
@@ -130,15 +134,25 @@ public class CombineAndSimplifyBounds extends 
BottomUpTransform
 // Group Bound filters by dimension, extractionFn, and comparator and 
compute a RangeSet for each one.
 final Map> bounds = new HashMap<>();
 
+// all and/or filters have at least 1 child
+boolean allFalse = true;
 for (final DimFilter child : newChildren) {
   if (child instanceof BoundDimFilter) {
 final BoundDimFilter bound = (BoundDimFilter) child;
 final BoundRefKey boundRefKey = BoundRefKey.from(bound);
 final List filterList = 
bounds.computeIfAbsent(boundRefKey, k -> new ArrayList<>());
 filterList.add(bound);
+allFalse = false;
+  } else {
+allFalse &= child instanceof FalseDimFilter;
   }
 }
 
+// short circuit if can never be true
+if (allFalse) {
+  return Filtration.matc
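The allFalse accumulation added above, sketched standalone: any bound child clears the flag, and every other child must be a FALSE filter for it to survive the loop; if it does, the whole AND/OR can never match and is short-circuited to a constant-false filter. Hypothetical marker types stand in for Druid's DimFilter hierarchy:

```java
import java.util.List;

public class AllFalseCheck
{
  interface Filter {}
  static class BoundFilter implements Filter {}
  static class FalseFilter implements Filter {}
  static class OtherFilter implements Filter {}

  // Mirrors the commit: allFalse starts true (AND/OR filters always have at
  // least one child), any bound filter clears it, and any non-FALSE child
  // also clears it via the &= accumulation.
  public static boolean allFalse(List<Filter> children)
  {
    boolean allFalse = true;
    for (Filter child : children) {
      if (child instanceof BoundFilter) {
        allFalse = false;
      } else {
        allFalse &= child instanceof FalseFilter;
      }
    }
    return allFalse;
  }

  public static void main(String[] args)
  {
    System.out.println(allFalse(List.of(new FalseFilter()))); // prints true
    System.out.println(allFalse(List.of(new OtherFilter()))); // prints false
  }
}
```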

[druid] branch master updated: Ldap integration tests (#10901)

2021-02-23 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 553f5c8  Ldap integration tests (#10901)
553f5c8 is described below

commit 553f5c8570970a951cfef35b60dd5cc217999587
Author: zachjsh 
AuthorDate: Tue Feb 23 16:29:57 2021 -0500

Ldap integration tests (#10901)

* Add integration tests for ldap extension

* * refactor

* * add ldap-security integration test to travis

* * fix license error

* * Fix failing other integration test

* * break up large tests
* refactor
* address review comments

* * fix intellij inspections failure

* * remove dead code
---
 .travis.yml|  20 +-
 integration-tests/docker/docker-compose.base.yml   |  17 +
 .../docker/docker-compose.ldap-security.yml| 132 
 integration-tests/docker/druid.sh  |   2 +-
 .../docker/environment-configs/common-ldap |  80 +++
 .../docker/environment-configs/overlord|   1 -
 .../docker/ldap-configs/bootstrap.ldif | 138 +
 .../docker/test-data/ldap-security-sample-data.sql |  17 +
 integration-tests/script/docker_compose_args.sh|   6 +-
 .../java/org/apache/druid/tests/TestNGGroup.java   |   5 +
 .../security/AbstractAuthConfigurationTest.java| 471 +++
 .../security/ITBasicAuthConfigurationTest.java | 666 ++---
 .../security/ITBasicAuthLdapConfigurationTest.java | 541 +
 13 files changed, 1610 insertions(+), 486 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index 9fbc4a6..355e87c 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -487,6 +487,15 @@ jobs:
   script: *run_integration_test
   after_failure: *integration_test_diags
 
+- _ldap_security
+  name: "(Compile=openjdk8, Run=openjdk8) ldap security integration test"
+  stage: Tests - phase 2
+  jdk: openjdk8
+  services: *integration_test_services
+  env: TESTNG_GROUPS='-Dgroups=ldap-security' 
JVM_RUNTIME='-Djvm.runtime=8' USE_INDEXER='middleManager'
+  script: *run_integration_test
+  after_failure: *integration_test_diags
+
 - _realtime_index
   name: "(Compile=openjdk8, Run=openjdk8) realtime index integration test"
   stage: Tests - phase 2
@@ -527,13 +536,13 @@ jobs:
   stage: Tests - phase 2
   jdk: openjdk8
   services: *integration_test_services
-  env: 
TESTNG_GROUPS='-DexcludedGroups=batch-index,input-format,input-source,perfect-rollup-parallel-batch-index,kafka-index,query,query-retry,realtime-index,security,s3-deep-storage,gcs-deep-storage,azure-deep-storage,hdfs-deep-storage,s3-ingestion,kinesis-index,kinesis-data-format,kafka-transactional-index,kafka-index-slow,kafka-transactional-index-slow,kafka-data-format,hadoop-s3-to-s3-deep-storage,hadoop-s3-to-hdfs-deep-storage,hadoop-azure-to-azure-deep-storage,hadoop-azure-to-h
 [...]
+  env: 
TESTNG_GROUPS='-DexcludedGroups=batch-index,input-format,input-source,perfect-rollup-parallel-batch-index,kafka-index,query,query-retry,realtime-index,security,ldap-security,s3-deep-storage,gcs-deep-storage,azure-deep-storage,hdfs-deep-storage,s3-ingestion,kinesis-index,kinesis-data-format,kafka-transactional-index,kafka-index-slow,kafka-transactional-index-slow,kafka-data-format,hadoop-s3-to-s3-deep-storage,hadoop-s3-to-hdfs-deep-storage,hadoop-azure-to-azure-deep-storage,had
 [...]
   script: *run_integration_test
   after_failure: *integration_test_diags
 
 - <<: *integration_tests
   name: "(Compile=openjdk8, Run=openjdk8) other integration tests with 
Indexer"
-  env: 
TESTNG_GROUPS='-DexcludedGroups=batch-index,input-format,input-source,perfect-rollup-parallel-batch-index,kafka-index,query,query-retry,realtime-index,security,s3-deep-storage,gcs-deep-storage,azure-deep-storage,hdfs-deep-storage,s3-ingestion,kinesis-index,kinesis-data-format,kafka-transactional-index,kafka-index-slow,kafka-transactional-index-slow,kafka-data-format,hadoop-s3-to-s3-deep-storage,hadoop-s3-to-hdfs-deep-storage,hadoop-azure-to-azure-deep-storage,hadoop-azure-to-h
 [...]
+  env: 
TESTNG_GROUPS='-DexcludedGroups=batch-index,input-format,input-source,perfect-rollup-parallel-batch-index,kafka-index,query,query-retry,realtime-index,security,ldap-security,s3-deep-storage,gcs-deep-storage,azure-deep-storage,hdfs-deep-storage,s3-ingestion,kinesis-index,kinesis-data-format,kafka-transactional-index,kafka-index-slow,kafka-transactional-index-slow,kafka-data-format,hadoop-s3-to-s3-deep-storage,hadoop-s3-to-hdfs-deep-storage,hadoop-azure-to-azure-deep-storage,had
 [...]
 
 - <<: *integration_tests
   name: "(Compile=openjdk8, Run=openjdk8) leadership and high availability 
integration tests"
@@ -58

[druid] branch master updated (eabad0f -> 8434173)

2021-02-18 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from eabad0f  Keep query granularity of compacted segments after compaction 
(#10856)
 add 8434173  Add property for binding view manager type (#10895)

No new revisions were added by this update.

Summary of changes:
 .../druid/sql/calcite/view/NoopViewManager.java|   2 +
 .../java/org/apache/druid/sql/guice/SqlModule.java |  31 ++-
 .../org/apache/druid/sql/guice/SqlModuleTest.java  | 256 +
 3 files changed, 285 insertions(+), 4 deletions(-)
 create mode 100644 
sql/src/test/java/org/apache/druid/sql/guice/SqlModuleTest.java





[druid] branch master updated (1e40f51 -> 8ad6813)

2021-02-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 1e40f51  Fix example names of security artifacts in docs (#10882)
 add 8ad6813  Filter unauthorized views in InformationSchema (#10874)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/druid/server/security/AuthorizationUtils.java |  9 +
 .../apache/druid/sql/calcite/schema/InformationSchema.java   | 10 ++
 .../java/org/apache/druid/sql/calcite/CalciteQueryTest.java  |  1 -
 .../java/org/apache/druid/sql/calcite/util/CalciteTests.java | 12 ++--
 4 files changed, 25 insertions(+), 7 deletions(-)





[druid] branch master updated: refactor sql lifecycle, druid planner, views, and view permissions (#10812)

2021-02-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new fe30f4b  refactor sql lifecycle, druid planner, views, and view 
permissions (#10812)
fe30f4b is described below

commit fe30f4b4144b782075db2c7e068f2bd9565a6cfd
Author: Clint Wylie 
AuthorDate: Fri Feb 5 12:56:55 2021 -0800

refactor sql lifecycle, druid planner, views, and view permissions (#10812)

* before i leaped i should've seen, the view from halfway down

* fixes

* fixes, more test

* rename

* fix style

* further refactoring

* review stuffs

* rename

* more javadoc and comments
---
 .../apache/druid/benchmark/query/SqlBenchmark.java |  16 +-
 .../benchmark/query/SqlExpressionBenchmark.java|   9 +-
 .../benchmark/query/SqlVsNativeBenchmark.java  |   7 +-
 ...natorBasicAuthorizerMetadataStorageUpdater.java |  12 +-
 .../org/apache/druid/server/QueryLifecycle.java|  53 ++---
 .../druid/server/security/AuthorizationUtils.java  |   8 +
 .../apache/druid/server/security/ResourceType.java |   1 +
 .../apache/druid/server/QueryLifecycleTest.java| 186 
 .../java/org/apache/druid/sql/SqlLifecycle.java| 242 +---
 .../apache/druid/sql/avatica/DruidStatement.java   |   7 +-
 .../druid/sql/calcite/planner/DruidPlanner.java| 231 +--
 .../druid/sql/calcite/planner/PlannerContext.java  |  66 --
 .../druid/sql/calcite/planner/PlannerFactory.java  |  82 +--
 .../druid/sql/calcite/planner/PlannerResult.java   |  19 +-
 .../druid/sql/calcite/planner/PrepareResult.java   |   4 +
 .../planner/SqlResourceCollectorShuttle.java   |  87 
 .../{PrepareResult.java => ValidationResult.java}  |  31 +--
 .../apache/druid/sql/calcite/rel/QueryMaker.java   |   4 +-
 .../calcite/schema/DruidCalciteSchemaModule.java   |   1 +
 .../druid/sql/calcite/schema/DruidSchema.java  |  20 +-
 .../NamedViewSchema.java}  |  27 ++-
 .../druid/sql/calcite/schema/ViewSchema.java   |  54 +
 .../druid/sql/calcite/view/DruidViewMacro.java |   6 +-
 .../org/apache/druid/sql/http/SqlResource.java |   4 +-
 .../druid/sql/calcite/BaseCalciteQueryTest.java| 123 ---
 .../apache/druid/sql/calcite/CalciteQueryTest.java | 231 +--
 .../calcite/DruidPlannerResourceAnalyzeTest.java   | 246 +
 .../calcite/SqlVectorizedExpressionSanityTest.java |   9 +-
 .../calcite/expression/ExpressionTestHelper.java   |   5 +-
 .../schema/DruidCalciteSchemaModuleTest.java   |   2 +-
 .../calcite/schema/DruidSchemaNoDataInitTest.java  |   2 -
 .../druid/sql/calcite/schema/DruidSchemaTest.java  |   3 -
 .../druid/sql/calcite/schema/SystemSchemaTest.java |   2 -
 .../druid/sql/calcite/util/CalciteTests.java   |   9 +-
 .../org/apache/druid/sql/http/SqlResourceTest.java |   4 +-
 35 files changed, 1447 insertions(+), 366 deletions(-)

diff --git 
a/benchmarks/src/test/java/org/apache/druid/benchmark/query/SqlBenchmark.java 
b/benchmarks/src/test/java/org/apache/druid/benchmark/query/SqlBenchmark.java
index 38b5c3a..8ffed2f 100644
--- 
a/benchmarks/src/test/java/org/apache/druid/benchmark/query/SqlBenchmark.java
+++ 
b/benchmarks/src/test/java/org/apache/druid/benchmark/query/SqlBenchmark.java
@@ -35,8 +35,6 @@ import org.apache.druid.segment.generator.GeneratorSchemaInfo;
 import org.apache.druid.segment.generator.SegmentGenerator;
 import org.apache.druid.server.QueryStackTests;
 import org.apache.druid.server.security.AuthTestUtils;
-import org.apache.druid.server.security.AuthenticationResult;
-import org.apache.druid.server.security.NoopEscalator;
 import org.apache.druid.sql.calcite.planner.Calcites;
 import org.apache.druid.sql.calcite.planner.DruidPlanner;
 import org.apache.druid.sql.calcite.planner.PlannerConfig;
@@ -439,10 +437,9 @@ public class SqlBenchmark
 QueryContexts.VECTORIZE_KEY, vectorize,
 QueryContexts.VECTORIZE_VIRTUAL_COLUMNS_KEY, vectorize
 );
-final AuthenticationResult authenticationResult = 
NoopEscalator.getInstance()
-   
.createEscalatedAuthenticationResult();
-try (final DruidPlanner planner = plannerFactory.createPlanner(context, 
ImmutableList.of(), authenticationResult)) {
-  final PlannerResult plannerResult = 
planner.plan(QUERIES.get(Integer.parseInt(query)));
+final String sql = QUERIES.get(Integer.parseInt(query));
+try (final DruidPlanner planner = 
plannerFactory.createPlannerForTesting(context, sql)) {
+  final PlannerResult plannerResult = planner.plan(sql);
   final Sequence resultSequence = plannerResult.run();
   final Object[] lastRow = resultSequence.accumulate(null, (accumula

[druid] branch master updated (fe0511b -> 3984457)

2021-01-11 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from fe0511b  Coordinator Dynamic Config changes to ease upgrading with new 
config value (#10724)
 add 3984457  Add missing unit tests for segment loading in historicals 
(#10737)

No new revisions were added by this update.

Summary of changes:
 .../loading/SegmentLoaderLocalCacheManager.java|  9 +++-
 .../SegmentLoaderLocalCacheManagerTest.java| 59 ++---
 .../coordination/SegmentLoadDropHandlerTest.java   | 60 --
 3 files changed, 104 insertions(+), 24 deletions(-)





[druid] branch master updated (d2e6240 -> 68bb038)

2021-01-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from d2e6240  k8s-int-test-build: zk-less druid cluster and http based 
segment/task managment (#10686)
 add 68bb038  Multiphase segment merge for IndexMergerV9 (#10689)

No new revisions were added by this update.

Summary of changes:
 .../benchmark/indexing/IndexMergeBenchmark.java|   3 +-
 docs/ingestion/native-batch.md |   1 +
 .../apache/druid/indexer/IndexGeneratorJob.java|   2 +-
 .../druid/indexing/common/task/CompactionTask.java |   3 +-
 .../druid/indexing/common/task/IndexTask.java  |  32 +-
 .../parallel/ParallelIndexSupervisorTask.java  |   3 +-
 .../batch/parallel/ParallelIndexTuningConfig.java  |  10 +-
 .../batch/parallel/PartialSegmentMergeTask.java|   3 +-
 .../task/ClientCompactionTaskQuerySerdeTest.java   |   1 +
 .../common/task/CompactionTaskRunTest.java |   1 +
 .../indexing/common/task/CompactionTaskTest.java   |   6 +
 .../indexing/common/task/IndexTaskSerdeTest.java   |  18 +-
 .../druid/indexing/common/task/IndexTaskTest.java  |  12 +-
 .../druid/indexing/common/task/TaskSerdeTest.java  |   2 +
 .../AbstractParallelIndexSupervisorTaskTest.java   |   2 +
 .../ParallelIndexSupervisorTaskKillTest.java   |   1 +
 .../ParallelIndexSupervisorTaskResourceTest.java   |   1 +
 .../ParallelIndexSupervisorTaskSerdeTest.java  |   1 +
 .../parallel/ParallelIndexSupervisorTaskTest.java  |   1 +
 .../parallel/ParallelIndexTestingFactory.java  |   3 +-
 .../parallel/ParallelIndexTuningConfigTest.java|   7 +
 .../parallel/SinglePhaseParallelIndexingTest.java  |   1 +
 .../druid/indexing/overlord/TaskLifecycleTest.java |   4 +
 .../apache/druid/tests/indexer/ITIndexerTest.java  |  21 ++
 ...ipedia_index_with_merge_column_limit_task.json} |   3 +-
 .../java/org/apache/druid/segment/IndexMerger.java |  10 +-
 .../org/apache/druid/segment/IndexMergerV9.java| 165 ++-
 .../query/aggregation/AggregationTestHelper.java   |   2 +-
 .../org/apache/druid/segment/EmptyIndexTest.java   |   3 +-
 .../org/apache/druid/segment/IndexBuilder.java |   3 +-
 .../druid/segment/IndexMergerRollupTest.java   |   2 +-
 .../apache/druid/segment/IndexMergerTestBase.java  | 328 +++--
 .../segment/IndexMergerV9WithSpatialIndexTest.java |   3 +-
 .../apache/druid/segment/SchemalessIndexTest.java  |  11 +-
 .../java/org/apache/druid/segment/TestIndex.java   |   3 +-
 .../segment/filter/SpatialFilterBonusTest.java |   3 +-
 .../druid/segment/filter/SpatialFilterTest.java|   3 +-
 .../druid/segment/generator/SegmentGenerator.java  |   3 +-
 .../realtime/appenderator/AppenderatorConfig.java  |   5 +
 .../realtime/appenderator/AppenderatorImpl.java|   3 +-
 .../UnifiedIndexerAppenderatorsManager.java|  12 +-
 .../segment/realtime/plumber/RealtimePlumber.java  |   3 +-
 website/.spelling  |   1 +
 43 files changed, 626 insertions(+), 79 deletions(-)
 copy integration-tests/src/test/resources/indexer/{wikipedia_index_task.json 
=> wikipedia_index_with_merge_column_limit_task.json} (97%)





[druid] branch master updated (045b29f -> 769c21c)

2021-01-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 045b29f  Correctly handle null values in time column results (#10642)
 add 769c21c  Add sample method to IndexingServiceClient (#10729)

No new revisions were added by this update.

Summary of changes:
 .../druid/indexing/kafka/KafkaSamplerSpecTest.java |   2 +-
 .../indexing/kinesis/KinesisSamplerSpecTest.java   |   2 +-
 .../overlord/sampler/IndexTaskSamplerSpec.java |   2 +
 .../overlord/sampler/InputSourceSampler.java   |   3 +-
 .../indexing/overlord/sampler/SamplerResource.java |   2 +
 .../seekablestream/SeekableStreamSamplerSpec.java  |   4 +-
 .../sampler/CsvInputSourceSamplerTest.java |   3 +-
 .../overlord/sampler/IndexTaskSamplerSpecTest.java |   1 +
 .../overlord/sampler/InputSourceSamplerTest.java   |   3 +-
 .../overlord/sampler/SamplerResponseTest.java  |   1 +
 .../client/indexing/HttpIndexingServiceClient.java |  28 
 .../client/indexing/IndexingServiceClient.java |   2 +
 .../druid/client/indexing}/SamplerResponse.java|  40 -
 .../apache/druid/client/indexing}/SamplerSpec.java |   2 +-
 .../indexing/HttpIndexingServiceClientTest.java| 164 +
 .../client/indexing/NoopIndexingServiceClient.java |   6 +
 16 files changed, 251 insertions(+), 14 deletions(-)
 rename 
{indexing-service/src/main/java/org/apache/druid/indexing/overlord/sampler => 
server/src/main/java/org/apache/druid/client/indexing}/SamplerResponse.java 
(75%)
 rename 
{indexing-service/src/main/java/org/apache/druid/indexing/overlord/sampler => 
server/src/main/java/org/apache/druid/client/indexing}/SamplerSpec.java (94%)
 create mode 100644 
server/src/test/java/org/apache/druid/client/indexing/HttpIndexingServiceClientTest.java





[druid] branch master updated (6ae8059 -> 796c255)

2020-12-17 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 6ae8059  cleaning up and fixing links (#10528)
 add 796c255  Fix post-aggregator computation when used with subtotals 
(#10653)

No new revisions were added by this update.

Summary of changes:
 .../epinephelinae/RowBasedGrouperHelper.java   |  6 +-
 .../query/groupby/strategy/GroupByStrategyV2.java  | 22 +-
 .../query/groupby/GroupByQueryRunnerTest.java  | 87 ++
 .../apache/druid/sql/calcite/CalciteQueryTest.java | 18 ++---
 4 files changed, 101 insertions(+), 32 deletions(-)





[druid] branch master updated (52d46ce -> 7eb5f59)

2020-12-04 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 52d46ce  Move common configurations to TuningConfig (#10478)
 add 7eb5f59  Fix string byte calculation in StringDimensionIndexer (#10623)

No new revisions were added by this update.

Summary of changes:
 .../druid/segment/StringDimensionIndexer.java  | 10 +++--
 .../incremental/IncrementalIndexRowSizeTest.java   | 45 ++
 .../realtime/appenderator/AppenderatorTest.java| 16 
 3 files changed, 52 insertions(+), 19 deletions(-)
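[Editor's note] The commit above corrects how `StringDimensionIndexer` accounts for the byte size of string values. As a hedged illustration only (this is not Druid's actual code, and the method name is mine), the point is that a Java `String`'s UTF-8 footprint is not a simple function of its character count:

```java
import java.nio.charset.StandardCharsets;

public class StringByteSize {
    // Illustrative helper: the UTF-8 byte count of a string. Multi-byte
    // code points make this diverge from length(), which is why a naive
    // per-character estimate can under- or over-count memory usage.
    static int utf8Bytes(String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        System.out.println(utf8Bytes("druid"));      // 5 chars, 5 bytes (ASCII)
        System.out.println(utf8Bytes("dr\u00fcid")); // 5 chars, 6 bytes (u-umlaut is 2 bytes)
    }
}
```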





[druid] branch master updated (2f4d6da -> ba915b7)

2020-11-19 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 2f4d6da  Updates segment metadata query documentation  (#10589)
 add ba915b7  Security overview documentation (#10339)

No new revisions were added by this update.

Summary of changes:
 docs/assets/security-model-1.png   | Bin 0 -> 85098 bytes
 docs/assets/security-model-2.png   | Bin 0 -> 29613 bytes
 .../extensions-core/druid-basic-security.md| 101 +---
 docs/operations/auth-ldap.md   | 196 +++
 docs/operations/password-provider.md   |  21 +-
 docs/operations/security-overview.md   | 265 +
 docs/operations/security-user-auth.md  | 151 
 docs/operations/tls-support.md |  17 +-
 docs/querying/sql.md   |   2 +-
 website/.spelling  |   4 +
 website/i18n/en.json   |  12 +
 website/sidebars.json  |  35 ++-
 12 files changed, 677 insertions(+), 127 deletions(-)
 create mode 100644 docs/assets/security-model-1.png
 create mode 100644 docs/assets/security-model-2.png
 create mode 100644 docs/operations/auth-ldap.md
 create mode 100644 docs/operations/security-overview.md
 create mode 100644 docs/operations/security-user-auth.md





[druid] branch master updated (8366717 -> cd231d8)

2020-11-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 8366717  Add missing coordinator dynamic config to the web-console 
dialog for dynamic coordinator config (#10545)
 add cd231d8  Run integration test queries once (#10564)

No new revisions were added by this update.

Summary of changes:
 .../testing/utils/AbstractTestQueryHelper.java | 69 +++---
 .../coordinator/duty/ITAutoCompactionTest.java |  2 +-
 .../tests/indexer/AbstractITBatchIndexTest.java| 10 ++--
 .../indexer/AbstractITRealtimeIndexTaskTest.java   |  4 +-
 .../tests/indexer/AbstractStreamIndexingTest.java  |  4 +-
 .../tests/indexer/ITAppendBatchIndexTest.java  |  6 +-
 .../druid/tests/indexer/ITCompactionTaskTest.java  |  4 +-
 .../tests/indexer/ITNestedQueryPushDownTest.java   |  2 +-
 .../tests/query/ITBroadcastJoinQueryTest.java  |  6 +-
 .../druid/tests/query/ITSystemTableQueryTest.java  |  2 +-
 .../druid/tests/query/ITTwitterQueryTest.java  |  2 +-
 .../apache/druid/tests/query/ITUnionQueryTest.java |  4 +-
 .../druid/tests/query/ITWikipediaQueryTest.java|  2 +-
 13 files changed, 56 insertions(+), 61 deletions(-)





[druid] tag druid-0.20.0 created (now acdc6ee)

2020-10-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to tag druid-0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git.


  at acdc6ee  (commit)
No new revisions were added by this update.





[druid-website] 01/01: Merge pull request #105 from apache/20_release

2020-10-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/druid-website.git

commit 1da765b25b0df51ae5d1db14ca5b0de2b570a6b2
Merge: 99746f8 520c3e4
Author: Jonathan Wei 
AuthorDate: Fri Oct 16 17:57:38 2020 -0700

Merge pull request #105 from apache/20_release

0.20.0 release update

 community/index.html   |1 +
 .../comparisons/druid-vs-elasticsearch.html|2 +-
 .../comparisons/druid-vs-key-value.html|2 +-
 .../comparisons/druid-vs-kudu.html |2 +-
 .../comparisons/druid-vs-redshift.html |2 +-
 .../comparisons/druid-vs-spark.html|2 +-
 .../comparisons/druid-vs-sql-on-hadoop.html|2 +-
 docs/0.13.0-incubating/configuration/index.html|2 +-
 docs/0.13.0-incubating/configuration/logging.html  |2 +-
 docs/0.13.0-incubating/configuration/realtime.html |2 +-
 .../dependencies/cassandra-deep-storage.html   |2 +-
 .../dependencies/deep-storage.html |2 +-
 .../dependencies/metadata-storage.html |2 +-
 docs/0.13.0-incubating/dependencies/zookeeper.html |2 +-
 docs/0.13.0-incubating/design/auth.html|2 +-
 docs/0.13.0-incubating/design/broker.html  |2 +-
 docs/0.13.0-incubating/design/coordinator.html |2 +-
 docs/0.13.0-incubating/design/historical.html  |2 +-
 docs/0.13.0-incubating/design/index.html   |2 +-
 .../0.13.0-incubating/design/indexing-service.html |2 +-
 docs/0.13.0-incubating/design/middlemanager.html   |2 +-
 docs/0.13.0-incubating/design/overlord.html|2 +-
 docs/0.13.0-incubating/design/peons.html   |2 +-
 docs/0.13.0-incubating/design/plumber.html |2 +-
 docs/0.13.0-incubating/design/realtime.html|2 +-
 docs/0.13.0-incubating/design/segments.html|2 +-
 docs/0.13.0-incubating/development/build.html  |2 +-
 .../development/experimental.html  |2 +-
 .../extensions-contrib/ambari-metrics-emitter.html |2 +-
 .../development/extensions-contrib/azure.html  |2 +-
 .../development/extensions-contrib/cassandra.html  |2 +-
 .../development/extensions-contrib/cloudfiles.html |2 +-
 .../extensions-contrib/distinctcount.html  |2 +-
 .../development/extensions-contrib/google.html |2 +-
 .../development/extensions-contrib/graphite.html   |2 +-
 .../development/extensions-contrib/influx.html |2 +-
 .../extensions-contrib/kafka-emitter.html  |2 +-
 .../extensions-contrib/kafka-simple.html   |2 +-
 .../extensions-contrib/materialized-view.html  |2 +-
 .../extensions-contrib/opentsdb-emitter.html   |2 +-
 .../development/extensions-contrib/orc.html|2 +-
 .../development/extensions-contrib/parquet.html|2 +-
 .../development/extensions-contrib/rabbitmq.html   |2 +-
 .../extensions-contrib/redis-cache.html|2 +-
 .../development/extensions-contrib/rocketmq.html   |2 +-
 .../development/extensions-contrib/sqlserver.html  |2 +-
 .../development/extensions-contrib/statsd.html |2 +-
 .../development/extensions-contrib/thrift.html |2 +-
 .../extensions-contrib/time-min-max.html   |2 +-
 .../extensions-core/approximate-histograms.html|2 +-
 .../development/extensions-core/avro.html  |2 +-
 .../development/extensions-core/bloom-filter.html  |2 +-
 .../extensions-core/datasketches-extension.html|2 +-
 .../extensions-core/datasketches-hll.html  |2 +-
 .../extensions-core/datasketches-quantiles.html|2 +-
 .../extensions-core/datasketches-theta.html|2 +-
 .../extensions-core/datasketches-tuple.html|2 +-
 .../extensions-core/druid-basic-security.html  |2 +-
 .../extensions-core/druid-kerberos.html|2 +-
 .../development/extensions-core/druid-lookups.html |2 +-
 .../development/extensions-core/examples.html  |2 +-
 .../development/extensions-core/hdfs.html  |2 +-
 .../extensions-core/kafka-eight-firehose.html  |2 +-
 .../kafka-extraction-namespace.html|2 +-
 .../extensions-core/kafka-ingestion.html   |2 +-
 .../extensions-core/lookups-cached-global.html |2 +-
 .../development/extensions-core/mysql.html |2 +-
 .../development/extensions-core/postgresql.html|2 +-
 .../development/extensions-core/protobuf.html  |2 +-
 .../development/extensions-core/s3.html|2 +-
 .../extensions-core/simple-client-sslcontext.html  |2 +-
 .../development/extensions-core/stats.html |2 +-
 .../development/extensions-core/test-stats.html|2 +-
 docs/0.13.0-incubating/development/extensions.html |2 +-
 docs

[druid-website] branch asf-site updated (99746f8 -> 1da765b)

2020-10-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/druid-website.git.


from 99746f8  Autobuild (#103)
 add 520c3e4  0.20.0 release update
 new 1da765b  Merge pull request #105 from apache/20_release

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 community/index.html   |1 +
 .../comparisons/druid-vs-elasticsearch.html|2 +-
 .../comparisons/druid-vs-key-value.html|2 +-
 .../comparisons/druid-vs-kudu.html |2 +-
 .../comparisons/druid-vs-redshift.html |2 +-
 .../comparisons/druid-vs-spark.html|2 +-
 .../comparisons/druid-vs-sql-on-hadoop.html|2 +-
 docs/0.13.0-incubating/configuration/index.html|2 +-
 docs/0.13.0-incubating/configuration/logging.html  |2 +-
 docs/0.13.0-incubating/configuration/realtime.html |2 +-
 .../dependencies/cassandra-deep-storage.html   |2 +-
 .../dependencies/deep-storage.html |2 +-
 .../dependencies/metadata-storage.html |2 +-
 docs/0.13.0-incubating/dependencies/zookeeper.html |2 +-
 docs/0.13.0-incubating/design/auth.html|2 +-
 docs/0.13.0-incubating/design/broker.html  |2 +-
 docs/0.13.0-incubating/design/coordinator.html |2 +-
 docs/0.13.0-incubating/design/historical.html  |2 +-
 docs/0.13.0-incubating/design/index.html   |2 +-
 .../0.13.0-incubating/design/indexing-service.html |2 +-
 docs/0.13.0-incubating/design/middlemanager.html   |2 +-
 docs/0.13.0-incubating/design/overlord.html|2 +-
 docs/0.13.0-incubating/design/peons.html   |2 +-
 docs/0.13.0-incubating/design/plumber.html |2 +-
 docs/0.13.0-incubating/design/realtime.html|2 +-
 docs/0.13.0-incubating/design/segments.html|2 +-
 docs/0.13.0-incubating/development/build.html  |2 +-
 .../development/experimental.html  |2 +-
 .../extensions-contrib/ambari-metrics-emitter.html |2 +-
 .../development/extensions-contrib/azure.html  |2 +-
 .../development/extensions-contrib/cassandra.html  |2 +-
 .../development/extensions-contrib/cloudfiles.html |2 +-
 .../extensions-contrib/distinctcount.html  |2 +-
 .../development/extensions-contrib/google.html |2 +-
 .../development/extensions-contrib/graphite.html   |2 +-
 .../development/extensions-contrib/influx.html |2 +-
 .../extensions-contrib/kafka-emitter.html  |2 +-
 .../extensions-contrib/kafka-simple.html   |2 +-
 .../extensions-contrib/materialized-view.html  |2 +-
 .../extensions-contrib/opentsdb-emitter.html   |2 +-
 .../development/extensions-contrib/orc.html|2 +-
 .../development/extensions-contrib/parquet.html|2 +-
 .../development/extensions-contrib/rabbitmq.html   |2 +-
 .../extensions-contrib/redis-cache.html|2 +-
 .../development/extensions-contrib/rocketmq.html   |2 +-
 .../development/extensions-contrib/sqlserver.html  |2 +-
 .../development/extensions-contrib/statsd.html |2 +-
 .../development/extensions-contrib/thrift.html |2 +-
 .../extensions-contrib/time-min-max.html   |2 +-
 .../extensions-core/approximate-histograms.html|2 +-
 .../development/extensions-core/avro.html  |2 +-
 .../development/extensions-core/bloom-filter.html  |2 +-
 .../extensions-core/datasketches-extension.html|2 +-
 .../extensions-core/datasketches-hll.html  |2 +-
 .../extensions-core/datasketches-quantiles.html|2 +-
 .../extensions-core/datasketches-theta.html|2 +-
 .../extensions-core/datasketches-tuple.html|2 +-
 .../extensions-core/druid-basic-security.html  |2 +-
 .../extensions-core/druid-kerberos.html|2 +-
 .../development/extensions-core/druid-lookups.html |2 +-
 .../development/extensions-core/examples.html  |2 +-
 .../development/extensions-core/hdfs.html  |2 +-
 .../extensions-core/kafka-eight-firehose.html  |2 +-
 .../kafka-extraction-namespace.html|2 +-
 .../extensions-core/kafka-ingestion.html   |2 +-
 .../extensions-core/lookups-cached-global.html |2 +-
 .../development/extensions-core/mysql.html |2 +-
 .../development/extensions-core/postgresql.html|2 +-
 .../development/extensions-core/protobuf.html  |2 +-
 .../development/extensions-core/s3.html|2 +-
 .../extensions-core/simple-client-sslcontext.html  |2

[druid-website-src] branch master updated (8caaf92 -> 0de3a23)

2020-10-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git.


from 8caaf92  Merge pull request #177 from apache/20rc2_update
 add 3e20479  0.20.0 release update
 new 0de3a23  Merge pull request #179 from apache/20_release

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _config.yml   |  8 ++---
 docs/0.20.0/operations/api-reference.html | 50 +++
 docs/0.20.0/operations/metrics.html   | 12 
 docs/latest/operations/api-reference.html | 50 +++
 docs/latest/operations/metrics.html   | 12 
 5 files changed, 104 insertions(+), 28 deletions(-)





[druid-website-src] 01/01: Merge pull request #179 from apache/20_release

2020-10-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git

commit 0de3a2390ded3d90212bce3916c9d0264b89e3e7
Merge: 8caaf92 3e20479
Author: Jonathan Wei 
AuthorDate: Fri Oct 16 17:50:04 2020 -0700

Merge pull request #179 from apache/20_release

0.20.0 release update

 _config.yml   |  8 ++---
 docs/0.20.0/operations/api-reference.html | 50 +++
 docs/0.20.0/operations/metrics.html   | 12 
 docs/latest/operations/api-reference.html | 50 +++
 docs/latest/operations/metrics.html   | 12 
 5 files changed, 104 insertions(+), 28 deletions(-)





[druid-website] branch 20_release created (now 520c3e4)

2020-10-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 20_release
in repository https://gitbox.apache.org/repos/asf/druid-website.git.


  at 520c3e4  0.20.0 release update

This branch includes the following new commits:

 new 520c3e4  0.20.0 release update

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






[druid-website-src] 01/01: 0.20.0 release update

2020-10-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 20_release
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git

commit 3e2047984d032dd066479a7cc2ad3c9d4ab2bf0d
Author: jon-wei 
AuthorDate: Fri Oct 16 17:38:56 2020 -0700

0.20.0 release update
---
 _config.yml   |  8 ++---
 docs/0.20.0/operations/api-reference.html | 50 +++
 docs/0.20.0/operations/metrics.html   | 12 
 docs/latest/operations/api-reference.html | 50 +++
 docs/latest/operations/metrics.html   | 12 
 5 files changed, 104 insertions(+), 28 deletions(-)

diff --git a/_config.yml b/_config.yml
index b0ab908..2c3165e 100644
--- a/_config.yml
+++ b/_config.yml
@@ -26,14 +26,14 @@ description: 'Real²time Exploratory Analytics on Large 
Datasets'
 
 
 druid_versions:
+  - release: 0.20
+versions:
+  - version: 0.20.0
+date: 2020-10-16
   - release: 0.19
 versions:
   - version: 0.19.0
 date: 2020-07-21
-  - release: 0.18
-versions:
-  - version: 0.18.1
-date: 2020-05-13
 
 tranquility_stable_version: 0.8.3
 
diff --git a/docs/0.20.0/operations/api-reference.html 
b/docs/0.20.0/operations/api-reference.html
index 89bacbf..7f8a71d 100644
--- a/docs/0.20.0/operations/api-reference.html
+++ b/docs/0.20.0/operations/api-reference.html
@@ -429,9 +429,35 @@ result of this API call.
 
 Returns the total size of segments awaiting compaction for the given 
dataSource.
 This is only valid for dataSource which has compaction enabled.
-Compa
 
 Removes the compaction config for a dataSource.
 coordinator setting
 which automates this operation to perform periodically.
 Overlord 
Dynamic Configuration for details.
 Note that all interval URL parameters are ISO 8601 strings 
delimited by a _ instead of a /
 (e.g., 2016-06-27_2016-06-28).
-three-server 
configuration.
 {"task":"index_kafka_wikiticker_f7011f8ffba384b_fpeclode"}
 
 task reports for more 
details.
 dynamic 
configuration, then log 
entries for class
diff --git a/docs/latest/operations/api-reference.html 
b/docs/latest/operations/api-reference.html
index 9c46416..dc7a2d0 100644
--- a/docs/latest/operations/api-reference.html
+++ b/docs/latest/operations/api-reference.html
@@ -429,9 +429,35 @@ result of this API call.
 
 Returns the total size of segments awaiting compaction for the given 
dataSource.
 This is only valid for dataSource which has compaction enabled.
-Compa
 
 Removes the compaction config for a dataSource.
 coordinator setting
 which automates this operation to perform periodically.
 Overlord 
Dynamic Configuration for details.
 Note that all interval URL parameters are ISO 8601 strings 
delimited by a _ instead of a /
 (e.g., 2016-06-27_2016-06-28).
-three-server 
configuration.
 {"task":"index_kafka_wikiticker_f7011f8ffba384b_fpeclode"}
 
 task reports for more 
details.
 dynamic 
configuration, then log 
entries for class
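[Editor's note] The API reference quoted above notes that interval URL parameters are ISO 8601 strings delimited by a `_` instead of a `/` (e.g., `2016-06-27_2016-06-28`). A minimal sketch of building such a parameter, assuming a hypothetical helper of my own naming (not part of Druid's API):

```java
import java.time.LocalDate;

public class IntervalParam {
    // Hypothetical helper: joins two dates with '_' for use in a URL path,
    // since '/' would be interpreted as a path separator.
    // LocalDate.toString() already yields the ISO-8601 form yyyy-MM-dd.
    static String intervalParam(LocalDate start, LocalDate end) {
        return start + "_" + end;
    }

    public static void main(String[] args) {
        System.out.println(intervalParam(
            LocalDate.of(2016, 6, 27), LocalDate.of(2016, 6, 28)));
        // 2016-06-27_2016-06-28
    }
}
```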





[druid-website-src] branch 20_release created (now 3e20479)

2020-10-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 20_release
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git.


  at 3e20479  0.20.0 release update

This branch includes the following new commits:

 new 3e20479  0.20.0 release update

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






svn commit: r41943 - /release/druid/0.20.0/

2020-10-15 Thread jonwei
Author: jonwei
Date: Thu Oct 15 22:31:38 2020
New Revision: 41943

Log:
Add 0.20.0 artifacts

Added:
release/druid/0.20.0/
release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz   (with props)
release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz.asc
release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz.sha512
release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz   (with props)
release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz.asc
release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz.sha512

Added: release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz
==
Binary file - no diff available.

Propchange: release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz
--
svn:mime-type = application/octet-stream

Added: release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz.asc
==
--- release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz.asc (added)
+++ release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz.asc Thu Oct 15 22:31:38 
2020
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCgAdFiEEBJXgMcNdEyR5wkHskoPEkhrESD4FAl9/8gAACgkQkoPEkhrE
+SD6LRBAAkZgqvR3U3dUyIRdclf0fQaWh3bpyIAoLEQYy1jytdx0Btfe8dGaVQYQH
+bWdA1r38jn3Uz5eUI5Tn92XWCHr6qRtScB2R2Bnoi+odYWm74JIdeJ0CgnuMlWwv
+/VaFUAV/nyP/8HeH89B/tRKWQgPJKScbRLhi4v4pzvxWeq8OAhGEAU7WKY9RBSY8
+dKldDQct8tH9WsY1K+aNGnNlRwnNwgJ8jOcSTwg/7BZAYG4o3QfTppdQETtdnrkc
+2vJyLLqexgZfAAHhafzfDrkKly1BcfGT/OtFq6kiHnUOJRWx6ZbyEpW062qgE2NP
+aBDYq/J1QlVbqKqCHHWgfmwjx0sQes61y7clF9XmUvtsyeiVa+lsmHmN0ruuOsTJ
+Gda6t02MgR9Zzl6sAapsALKXNsiLmh2Pan3ly+Zg97h8rZUWehU8O2TuCr0CpcfJ
+cUTMbkMDUGYx3NUAqFBWLa4YDqxVu2c94vgY3hnzbiubG7clvljUBPeW72MsEO35
+nKygupcGNFsy8C4WkPHQlai/GymrkQsvvJ/SbNulONSnVOoTf5G3gvdYTYE4Mse4
+oTcFZf4HWLkUzJ5eOzfe/hAbgJBqBJ1Xdo1ozT83JxCU1e45jCYbgce1LhRYxzdi
+nIo0PxKjOWM1XW4UYmRkqAWgtjjKUEucph+jBmAzcaZVZkU0+lg=
+=mnCX
+-END PGP SIGNATURE-

Added: release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz.sha512
==
--- release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz.sha512 (added)
+++ release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz.sha512 Thu Oct 15 
22:31:38 2020
@@ -0,0 +1 @@
+67fbb50071d09bc3328fe2172ee6f27ce1d960a2ce11fa863a38173897126d5e0009495da3f145d1ad110e65db6d6a7e8f37ea2204a2f76a89e10886f6d2aa41
\ No newline at end of file

Added: release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz
==
Binary file - no diff available.

Propchange: release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz
--
svn:mime-type = application/octet-stream

Added: release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz.asc
==
--- release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz.asc (added)
+++ release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz.asc Thu Oct 15 22:31:38 2020
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCgAdFiEEBJXgMcNdEyR5wkHskoPEkhrESD4FAl9/8gEACgkQkoPEkhrE
+SD52pQ/5ATWgPl0VNn1PlASQ6SovmdkRaslukxlcnmpJiGRa7LnLN8O5bvxSQNZj
+XDoHNXGNewDFlaniqLCEMtMwOLjeuY0Lrs0g6v5kiNvw+ygvV1vhTyjE1SiJT5yN
+q2/E4kN3MXl+AZePRNlupE9zq7s0GcmV/8qa/QcELHACfEc5owUHklVQPdD6Ea16
+NsPY+YeELaq9gRGnUka4Sy51yuvfC7jM2hJHhEn6Tze0QkVxHauvXxXr+qKX/XzP
+w3BfGyJi7cXsiTtdPIyazknQQaOd1s15hVFYQre86Uc5oS22AtrMtLHaaxGAAuPc
+YqQgvlx3Rk0RRmNcxUu24IdhjYwXpn+rME6V0XVwPDz7/pTVjzMar90muZKLGyte
+3a3iNu9oB6PtURTvhkpffutouwJxi/JDvMSj4dOIvhFr9v1pzyQMfx9LqsjzhHPP
+1d1qpM3lKBThexheJ/dhllhxS/OzeQ3NTJQFItPxogPbJxQk8j1yCP7v0NlzDcB0
+T9CyIgyoTCSllGBtuRAPZoldiniD4IxX8LN7Ke40Otv8rdM0T79Z0UvQQ/ZJXVpL
+czTDw0ChnSOUeTG2o1tRu8YvlqpiDkaYZeFr6g6uGo5FIccCcqwKFPSu5AN8ZEIc
+ONt6UBMZL23bugGVG1yIKWsMyfHiCME0cfTOYi2y2s6ojt5PMO4=
+=YXA9
+-END PGP SIGNATURE-

Added: release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz.sha512
==
--- release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz.sha512 (added)
+++ release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz.sha512 Thu Oct 15 22:31:38 2020
@@ -0,0 +1 @@
+15a424cb772ed01c081d09cec0b2798186dc32ea2ce2522a78a5ebd032ec9755f186a67926b7a86a1e6c91885a9ead7b77c8b5c78b06f76bd84ac355c036a43d
\ No newline at end of file



-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch 0.20.0 updated (9a2a9ac -> ae5521a)

2020-10-15 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 9a2a9ac  Suppress CVE-2018-11765 for hadoop dependencies (#10485) (#10492)
 add ae5521a  [Backport] Add docs for Auto-compaction snapshot status API (#10510) (#10514)

No new revisions were added by this update.

Summary of changes:
 docs/operations/api-reference.md | 26 ++
 docs/operations/metrics.md   | 14 +-
 2 files changed, 39 insertions(+), 1 deletion(-)





[druid] branch master updated: Any virtual column on "__time" should be a pre-join virtual column (#10451)

2020-10-12 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 567e381  Any virtual column on "__time" should be a pre-join virtual column (#10451)
567e381 is described below

commit 567e38170500d3649cbfaa28cf7aa6f5275d02e7
Author: Abhishek Agarwal <1477457+abhishekagarwa...@users.noreply.github.com>
AuthorDate: Tue Oct 13 01:34:55 2020 +0530

Any virtual column on "__time" should be a pre-join virtual column (#10451)

* Virtual column on __time should be in pre-join

* Add unit test
---
 .../join/HashJoinSegmentStorageAdapter.java|  2 +
 .../BaseHashJoinSegmentStorageAdapterTest.java | 17 +++
 .../join/HashJoinSegmentStorageAdapterTest.java| 53 --
 3 files changed, 59 insertions(+), 13 deletions(-)

diff --git a/processing/src/main/java/org/apache/druid/segment/join/HashJoinSegmentStorageAdapter.java b/processing/src/main/java/org/apache/druid/segment/join/HashJoinSegmentStorageAdapter.java
index 03f3f94..d6517c1 100644
--- a/processing/src/main/java/org/apache/druid/segment/join/HashJoinSegmentStorageAdapter.java
+++ b/processing/src/main/java/org/apache/druid/segment/join/HashJoinSegmentStorageAdapter.java
@@ -34,6 +34,7 @@ import org.apache.druid.segment.StorageAdapter;
 import org.apache.druid.segment.VirtualColumn;
 import org.apache.druid.segment.VirtualColumns;
 import org.apache.druid.segment.column.ColumnCapabilities;
+import org.apache.druid.segment.column.ColumnHolder;
 import org.apache.druid.segment.data.Indexed;
 import org.apache.druid.segment.data.ListIndexed;
 import org.apache.druid.segment.join.filter.JoinFilterAnalyzer;
@@ -305,6 +306,7 @@ public class HashJoinSegmentStorageAdapter implements StorageAdapter
   )
   {
 final Set baseColumns = new HashSet<>();
+baseColumns.add(ColumnHolder.TIME_COLUMN_NAME);
 Iterables.addAll(baseColumns, baseAdapter.getAvailableDimensions());
 Iterables.addAll(baseColumns, baseAdapter.getAvailableMetrics());
 
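The hunk above fixes pre-join classification by seeding the base-column set with the implicit `__time` column before checking which columns a virtual column requires. A minimal, self-contained sketch of that classification rule (the class and method names here are illustrative only, not Druid's actual API):

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the pre-join classification logic; not the real Druid API.
public class PreJoinClassifier
{
  // Columns provided by the base (fact) segment. The fix above ensures the
  // implicit "__time" column (ColumnHolder.TIME_COLUMN_NAME) is always included.
  public static Set<String> baseColumns(Set<String> dimensions, Set<String> metrics)
  {
    Set<String> base = new HashSet<>();
    base.add("__time");
    base.addAll(dimensions);
    base.addAll(metrics);
    return base;
  }

  // A virtual column can be evaluated before the join only if every column
  // it reads comes from the base segment.
  public static boolean isPreJoin(Set<String> requiredColumns, Set<String> baseColumns)
  {
    return baseColumns.containsAll(requiredColumns);
  }
}
```

Before the fix, `__time` was missing from the base set, so a virtual column that read only `__time` failed the `containsAll` check and was treated as a post-join column, which is what this commit corrects.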
diff --git a/processing/src/test/java/org/apache/druid/segment/join/BaseHashJoinSegmentStorageAdapterTest.java b/processing/src/test/java/org/apache/druid/segment/join/BaseHashJoinSegmentStorageAdapterTest.java
index 6a6af72..d5dc9a2 100644
--- a/processing/src/test/java/org/apache/druid/segment/join/BaseHashJoinSegmentStorageAdapterTest.java
+++ b/processing/src/test/java/org/apache/druid/segment/join/BaseHashJoinSegmentStorageAdapterTest.java
@@ -27,7 +27,9 @@ import org.apache.druid.query.QueryContexts;
 import org.apache.druid.query.filter.Filter;
 import org.apache.druid.query.lookup.LookupExtractor;
 import org.apache.druid.segment.QueryableIndexSegment;
+import org.apache.druid.segment.VirtualColumn;
 import org.apache.druid.segment.VirtualColumns;
+import org.apache.druid.segment.column.ValueType;
 import org.apache.druid.segment.join.filter.JoinFilterAnalyzer;
 import org.apache.druid.segment.join.filter.JoinFilterPreAnalysis;
 import org.apache.druid.segment.join.filter.JoinFilterPreAnalysisKey;
@@ -242,4 +244,19 @@ public class BaseHashJoinSegmentStorageAdapterTest
 )
 );
   }
+
+  protected VirtualColumn makeExpressionVirtualColumn(String expression)
+  {
+return makeExpressionVirtualColumn(expression, "virtual");
+  }
+
+  protected VirtualColumn makeExpressionVirtualColumn(String expression, String columnName)
+  {
+return new ExpressionVirtualColumn(
+columnName,
+expression,
+ValueType.STRING,
+ExprMacroTable.nil()
+);
+  }
 }
diff --git a/processing/src/test/java/org/apache/druid/segment/join/HashJoinSegmentStorageAdapterTest.java b/processing/src/test/java/org/apache/druid/segment/join/HashJoinSegmentStorageAdapterTest.java
index 6406d7a..5546962 100644
--- a/processing/src/test/java/org/apache/druid/segment/join/HashJoinSegmentStorageAdapterTest.java
+++ b/processing/src/test/java/org/apache/druid/segment/join/HashJoinSegmentStorageAdapterTest.java
@@ -31,6 +31,7 @@ import org.apache.druid.query.filter.ExpressionDimFilter;
 import org.apache.druid.query.filter.Filter;
 import org.apache.druid.query.filter.OrDimFilter;
 import org.apache.druid.query.filter.SelectorDimFilter;
+import org.apache.druid.segment.VirtualColumn;
 import org.apache.druid.segment.VirtualColumns;
 import org.apache.druid.segment.column.ColumnCapabilities;
 import org.apache.druid.segment.column.ValueType;
@@ -38,10 +39,10 @@ import org.apache.druid.segment.filter.SelectorFilter;
 import org.apache.druid.segment.join.filter.JoinFilterPreAnalysis;
 import org.apache.druid.segment.join.lookup.LookupJoinable;
 import org.apache.druid.segment.join.table.IndexedTableJoinable;
-import org.apache.druid.segment.virtual.ExpressionVirtualColumn;
 import org.junit.Assert;
 import org.junit.Te

[druid-website] branch asf-staging updated (6b6df73 -> ca36d49)

2020-10-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/druid-website.git.


from 6b6df73  Merge pull request #101 from apache/20docs
 add bb16010  0.20.0-rc2 updates
 new ca36d49  Merge pull request #104 from apache/20rc2_update

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 community/index.html  |   1 +
 docs/0.20.0/development/extensions-core/avro.html |  13 -
 docs/0.20.0/ingestion/data-formats.html   |   9 +
 docs/latest/development/extensions-core/avro.html |  13 -
 docs/latest/ingestion/data-formats.html   |   9 +
 img/favicon.png   | Bin 4514 -> 1156 bytes
 index.html|  16 
 libraries.html|   1 +
 technology.html   |   2 +-
 use-cases.html|   2 +-
 10 files changed, 54 insertions(+), 12 deletions(-)





[druid-website-src] branch master updated (7ccaba8 -> 8caaf92)

2020-10-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git.


from 7ccaba8  Merge pull request #176 from implydata/better-favicon
 add bd206f9  0.20.0-rc2 updates
 new 8caaf92  Merge pull request #177 from apache/20rc2_update

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/0.20.0/development/extensions-core/avro.html | 13 -
 docs/0.20.0/ingestion/data-formats.html   |  9 +
 docs/latest/development/extensions-core/avro.html | 13 -
 docs/latest/ingestion/data-formats.html   |  9 +
 4 files changed, 42 insertions(+), 2 deletions(-)





[druid-website] 01/01: Merge pull request #104 from apache/20rc2_update

2020-10-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/druid-website.git

commit ca36d49471d7fdd4e2c64cec9aa35281defebed0
Merge: 6b6df73 bb16010
Author: Jonathan Wei 
AuthorDate: Thu Oct 8 23:38:21 2020 -0700

Merge pull request #104 from apache/20rc2_update

0.20.0-rc2 updates

 community/index.html  |   1 +
 docs/0.20.0/development/extensions-core/avro.html |  13 -
 docs/0.20.0/ingestion/data-formats.html   |   9 +
 docs/latest/development/extensions-core/avro.html |  13 -
 docs/latest/ingestion/data-formats.html   |   9 +
 img/favicon.png   | Bin 4514 -> 1156 bytes
 index.html|  16 
 libraries.html|   1 +
 technology.html   |   2 +-
 use-cases.html|   2 +-
 10 files changed, 54 insertions(+), 12 deletions(-)





[druid-website-src] 01/01: Merge pull request #177 from apache/20rc2_update

2020-10-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git

commit 8caaf924080ac8284addd09532356d7f30f092b8
Merge: 7ccaba8 bd206f9
Author: Jonathan Wei 
AuthorDate: Thu Oct 8 23:38:17 2020 -0700

Merge pull request #177 from apache/20rc2_update

0.20.0-rc2 updates

 docs/0.20.0/development/extensions-core/avro.html | 13 -
 docs/0.20.0/ingestion/data-formats.html   |  9 +
 docs/latest/development/extensions-core/avro.html | 13 -
 docs/latest/ingestion/data-formats.html   |  9 +
 4 files changed, 42 insertions(+), 2 deletions(-)





[druid-website] branch 20rc2_update created (now bb16010)

2020-10-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 20rc2_update
in repository https://gitbox.apache.org/repos/asf/druid-website.git.


  at bb16010  0.20.0-rc2 updates

This branch includes the following new commits:

 new bb16010  0.20.0-rc2 updates

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






[druid-website] 01/01: 0.20.0-rc2 updates

2020-10-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 20rc2_update
in repository https://gitbox.apache.org/repos/asf/druid-website.git

commit bb160106366d98022f4340f4336e27e2ed24da24
Author: jon-wei 
AuthorDate: Thu Oct 8 23:36:41 2020 -0700

0.20.0-rc2 updates
---
 community/index.html  |   1 +
 docs/0.20.0/development/extensions-core/avro.html |  13 -
 docs/0.20.0/ingestion/data-formats.html   |   9 +
 docs/latest/development/extensions-core/avro.html |  13 -
 docs/latest/ingestion/data-formats.html   |   9 +
 img/favicon.png   | Bin 4514 -> 1156 bytes
 index.html|  16 
 libraries.html|   1 +
 technology.html   |   2 +-
 use-cases.html|   2 +-
 10 files changed, 54 insertions(+), 12 deletions(-)

diff --git a/community/index.html b/community/index.html
index 685a0d1..ae2159a 100644
--- a/community/index.html
+++ b/community/index.html
@@ -159,6 +159,7 @@ new features, on https://github.com/apache/druid;>GitHub.
 
 https://www.cloudera.com/;>Cloudera
 https://datumo.io/;>Datumo
+https://www.deep.bi/solutions/apache-druid;>Deep.BI
 https://imply.io/;>Imply
 
 
diff --git a/docs/0.20.0/development/extensions-core/avro.html b/docs/0.20.0/development/extensions-core/avro.html
index b12a1e7..38a808e 100644
--- a/docs/0.20.0/development/extensions-core/avro.html
+++ b/docs/0.20.0/development/extensions-core/avro.html
@@ -82,8 +82,19 @@
 two Avro Parsers for stream ingestion and Hadoop batch ingestion.
 See Avro 
Hadoop Parser and Avro Stream 
Parser
 for more details about how to use these in an ingestion spec.
+Additionally, it provides an InputFormat for reading Avro OCF files when 
using
+native batch indexing, 
see Avro OCF
+for details on how to ingest OCF files.
 Make sure to include 
druid-avro-extensions as an extension.
-← Approximate Histogram 
aggregatorsMicrosoft 
Azure →flattenSpec on 
the parser.
+Druid doesn't currently support Avro logical types, they will be ignored 
and fields will be handled according to the underlying primitive type.
+← Approximate Histogram 
aggregatorsMicrosoft 
Azure →druid-avro-extensions
 as an extension to use the Avro OCF input format.
 
+
+See the Avro 
Types section for how Avro types are handled in Druid
+
 The inputFormat to load data of Avro OCF format. An example 
is:
 "ioConfig": {
   "inputFormat": {
@@ -383,6 +386,9 @@ Each line can be further parsed using parseSpec
 You need to include the druid-avro-extensions
 as an extension to use the Avro Hadoop Parser.
 
+
+See the Avro 
Types section for how Avro types are handled in Druid
+
 This parser is for Hadoop 
batch ingestion.
 The inputFormat of inputSpec in 
ioConfig must be set to 
org.apache.druid.data.input.avro.AvroValueInputFormat.
 You may want to set Avro reader's schema in jobProperties in 
tuningConfig,
@@ -880,6 +886,9 @@ an explicitly defined http://www.joda.org/joda-time/apidocs/org/joda/ti
 
 You need to include the druid-avro-extensions
 as an extension to use the Avro Stream Parser.
 
+
+See the Avro 
Types section for how Avro types are handled in Druid
+
 This parser is for stream ingestion and 
reads Avro data from a stream directly.
 
 
diff --git a/docs/latest/development/extensions-core/avro.html b/docs/latest/development/extensions-core/avro.html
index 1b8a32b..32e689b 100644
--- a/docs/latest/development/extensions-core/avro.html
+++ b/docs/latest/development/extensions-core/avro.html
@@ -82,8 +82,19 @@
 two Avro Parsers for stream ingestion and Hadoop batch ingestion.
 See Avro 
Hadoop Parser and Avro Stream 
Parser
 for more details about how to use these in an ingestion spec.
+Additionally, it provides an InputFormat for reading Avro OCF files when 
using
+native batch indexing, 
see Avro OCF
+for details on how to ingest OCF files.
 Make sure to include 
druid-avro-extensions as an extension.
-← Approximate Histogram 
aggregatorsMicrosoft 
Azure →flattenSpec on 
the parser.
+Druid doesn't currently support Avro logical types, they will be ignored 
and fields will be handled according to the underlying primitive type.
+← Approximate Histogram 
aggregatorsMicrosoft 
Azure →druid-avro-extensions
 as an extension to use the Avro OCF input format.
 
+
+See the Avro 
Types section for how Avro types are handled in Druid
+
 The inputFormat to load data of Avro OCF format. An example 
is:
 "ioConfig": {
   "inputFormat": {
@@ -383,6 +386,9 @@ Each line can be further parsed using parseSpec
 You need to include the druid-avro-extensions
 as an extension to use the Avro Hadoop Parser.
 
+
+See the Avro 
Types section for how Avro types are handled in Druid
+
 This parser is for Hadoop 
batch ingesti

[druid-website-src] 01/01: 0.20.0-rc2 updates

2020-10-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 20rc2_update
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git

commit bd206f9f00961d660fc43f6b38d98b6c9dbd261e
Author: jon-wei 
AuthorDate: Thu Oct 8 23:36:01 2020 -0700

0.20.0-rc2 updates
---
 docs/0.20.0/development/extensions-core/avro.html | 13 -
 docs/0.20.0/ingestion/data-formats.html   |  9 +
 docs/latest/development/extensions-core/avro.html | 13 -
 docs/latest/ingestion/data-formats.html   |  9 +
 4 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/docs/0.20.0/development/extensions-core/avro.html b/docs/0.20.0/development/extensions-core/avro.html
index b12a1e7..38a808e 100644
--- a/docs/0.20.0/development/extensions-core/avro.html
+++ b/docs/0.20.0/development/extensions-core/avro.html
@@ -82,8 +82,19 @@
 two Avro Parsers for stream ingestion and Hadoop batch ingestion.
 See Avro 
Hadoop Parser and Avro Stream 
Parser
 for more details about how to use these in an ingestion spec.
+Additionally, it provides an InputFormat for reading Avro OCF files when 
using
+native batch indexing, 
see Avro OCF
+for details on how to ingest OCF files.
 Make sure to include 
druid-avro-extensions as an extension.
-← Approximate Histogram 
aggregatorsMicrosoft 
Azure →flattenSpec on 
the parser.
+Druid doesn't currently support Avro logical types, they will be ignored 
and fields will be handled according to the underlying primitive type.
+← Approximate Histogram 
aggregatorsMicrosoft 
Azure →druid-avro-extensions
 as an extension to use the Avro OCF input format.
 
+
+See the Avro 
Types section for how Avro types are handled in Druid
+
 The inputFormat to load data of Avro OCF format. An example 
is:
 "ioConfig": {
   "inputFormat": {
@@ -383,6 +386,9 @@ Each line can be further parsed using parseSpec
 You need to include the druid-avro-extensions
 as an extension to use the Avro Hadoop Parser.
 
+
+See the Avro 
Types section for how Avro types are handled in Druid
+
 This parser is for Hadoop 
batch ingestion.
 The inputFormat of inputSpec in 
ioConfig must be set to 
org.apache.druid.data.input.avro.AvroValueInputFormat.
 You may want to set Avro reader's schema in jobProperties in 
tuningConfig,
@@ -880,6 +886,9 @@ an explicitly defined http://www.joda.org/joda-time/apidocs/org/joda/ti
 
 You need to include the druid-avro-extensions
 as an extension to use the Avro Stream Parser.
 
+
+See the Avro 
Types section for how Avro types are handled in Druid
+
 This parser is for stream ingestion and 
reads Avro data from a stream directly.
 
 
diff --git a/docs/latest/development/extensions-core/avro.html b/docs/latest/development/extensions-core/avro.html
index 1b8a32b..32e689b 100644
--- a/docs/latest/development/extensions-core/avro.html
+++ b/docs/latest/development/extensions-core/avro.html
@@ -82,8 +82,19 @@
 two Avro Parsers for stream ingestion and Hadoop batch ingestion.
 See Avro 
Hadoop Parser and Avro Stream 
Parser
 for more details about how to use these in an ingestion spec.
+Additionally, it provides an InputFormat for reading Avro OCF files when 
using
+native batch indexing, 
see Avro OCF
+for details on how to ingest OCF files.
 Make sure to include 
druid-avro-extensions as an extension.
-← Approximate Histogram 
aggregatorsMicrosoft 
Azure →flattenSpec on 
the parser.
+Druid doesn't currently support Avro logical types, they will be ignored 
and fields will be handled according to the underlying primitive type.
+← Approximate Histogram 
aggregatorsMicrosoft 
Azure →druid-avro-extensions
 as an extension to use the Avro OCF input format.
 
+
+See the Avro 
Types section for how Avro types are handled in Druid
+
 The inputFormat to load data of Avro OCF format. An example 
is:
 "ioConfig": {
   "inputFormat": {
@@ -383,6 +386,9 @@ Each line can be further parsed using parseSpec
 You need to include the druid-avro-extensions
 as an extension to use the Avro Hadoop Parser.
 
+
+See the Avro 
Types section for how Avro types are handled in Druid
+
 This parser is for Hadoop 
batch ingestion.
 The inputFormat of inputSpec in 
ioConfig must be set to 
org.apache.druid.data.input.avro.AvroValueInputFormat.
 You may want to set Avro reader's schema in jobProperties in 
tuningConfig,
@@ -880,6 +886,9 @@ an explicitly defined http://www.joda.org/joda-time/apidocs/org/joda/ti
 
 You need to include the druid-avro-extensions
 as an extension to use the Avro Stream Parser.
 
+
+See the Avro 
Types section for how Avro types are handled in Druid
+
 This parser is for stream ingestion and 
reads Avro data from a stream directly.
 
 





[druid-website-src] branch 20rc2_update created (now bd206f9)

2020-10-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 20rc2_update
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git.


  at bd206f9  0.20.0-rc2 updates

This branch includes the following new commits:

 new bd206f9  0.20.0-rc2 updates

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






svn commit: r41858 - in /dev/druid: 0.20.0-rc1/ 0.20.0-rc2/

2020-10-09 Thread jonwei
Author: jonwei
Date: Fri Oct  9 06:07:36 2020
New Revision: 41858

Log:
Add 0.20.0-rc2 artifacts

Added:
dev/druid/0.20.0-rc2/
dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz   (with props)
dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz.asc
dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz.sha512
dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz   (with props)
dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz.asc
dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz.sha512
Removed:
dev/druid/0.20.0-rc1/

Added: dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz
==
Binary file - no diff available.

Propchange: dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz.asc
==
--- dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz.asc (added)
+++ dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz.asc Fri Oct  9 06:07:36 2020
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCgAdFiEEBJXgMcNdEyR5wkHskoPEkhrESD4FAl9/8gAACgkQkoPEkhrE
+SD6LRBAAkZgqvR3U3dUyIRdclf0fQaWh3bpyIAoLEQYy1jytdx0Btfe8dGaVQYQH
+bWdA1r38jn3Uz5eUI5Tn92XWCHr6qRtScB2R2Bnoi+odYWm74JIdeJ0CgnuMlWwv
+/VaFUAV/nyP/8HeH89B/tRKWQgPJKScbRLhi4v4pzvxWeq8OAhGEAU7WKY9RBSY8
+dKldDQct8tH9WsY1K+aNGnNlRwnNwgJ8jOcSTwg/7BZAYG4o3QfTppdQETtdnrkc
+2vJyLLqexgZfAAHhafzfDrkKly1BcfGT/OtFq6kiHnUOJRWx6ZbyEpW062qgE2NP
+aBDYq/J1QlVbqKqCHHWgfmwjx0sQes61y7clF9XmUvtsyeiVa+lsmHmN0ruuOsTJ
+Gda6t02MgR9Zzl6sAapsALKXNsiLmh2Pan3ly+Zg97h8rZUWehU8O2TuCr0CpcfJ
+cUTMbkMDUGYx3NUAqFBWLa4YDqxVu2c94vgY3hnzbiubG7clvljUBPeW72MsEO35
+nKygupcGNFsy8C4WkPHQlai/GymrkQsvvJ/SbNulONSnVOoTf5G3gvdYTYE4Mse4
+oTcFZf4HWLkUzJ5eOzfe/hAbgJBqBJ1Xdo1ozT83JxCU1e45jCYbgce1LhRYxzdi
+nIo0PxKjOWM1XW4UYmRkqAWgtjjKUEucph+jBmAzcaZVZkU0+lg=
+=mnCX
+-END PGP SIGNATURE-

Added: dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz.sha512
==
--- dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz.sha512 (added)
+++ dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz.sha512 Fri Oct  9 06:07:36 2020
@@ -0,0 +1 @@
+67fbb50071d09bc3328fe2172ee6f27ce1d960a2ce11fa863a38173897126d5e0009495da3f145d1ad110e65db6d6a7e8f37ea2204a2f76a89e10886f6d2aa41
\ No newline at end of file

Added: dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz
==
Binary file - no diff available.

Propchange: dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz.asc
==
--- dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz.asc (added)
+++ dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz.asc Fri Oct  9 06:07:36 2020
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCgAdFiEEBJXgMcNdEyR5wkHskoPEkhrESD4FAl9/8gEACgkQkoPEkhrE
+SD52pQ/5ATWgPl0VNn1PlASQ6SovmdkRaslukxlcnmpJiGRa7LnLN8O5bvxSQNZj
+XDoHNXGNewDFlaniqLCEMtMwOLjeuY0Lrs0g6v5kiNvw+ygvV1vhTyjE1SiJT5yN
+q2/E4kN3MXl+AZePRNlupE9zq7s0GcmV/8qa/QcELHACfEc5owUHklVQPdD6Ea16
+NsPY+YeELaq9gRGnUka4Sy51yuvfC7jM2hJHhEn6Tze0QkVxHauvXxXr+qKX/XzP
+w3BfGyJi7cXsiTtdPIyazknQQaOd1s15hVFYQre86Uc5oS22AtrMtLHaaxGAAuPc
+YqQgvlx3Rk0RRmNcxUu24IdhjYwXpn+rME6V0XVwPDz7/pTVjzMar90muZKLGyte
+3a3iNu9oB6PtURTvhkpffutouwJxi/JDvMSj4dOIvhFr9v1pzyQMfx9LqsjzhHPP
+1d1qpM3lKBThexheJ/dhllhxS/OzeQ3NTJQFItPxogPbJxQk8j1yCP7v0NlzDcB0
+T9CyIgyoTCSllGBtuRAPZoldiniD4IxX8LN7Ke40Otv8rdM0T79Z0UvQQ/ZJXVpL
+czTDw0ChnSOUeTG2o1tRu8YvlqpiDkaYZeFr6g6uGo5FIccCcqwKFPSu5AN8ZEIc
+ONt6UBMZL23bugGVG1yIKWsMyfHiCME0cfTOYi2y2s6ojt5PMO4=
+=YXA9
+-END PGP SIGNATURE-

Added: dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz.sha512
==
--- dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz.sha512 (added)
+++ dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz.sha512 Fri Oct  9 06:07:36 2020
@@ -0,0 +1 @@
+15a424cb772ed01c081d09cec0b2798186dc32ea2ce2522a78a5ebd032ec9755f186a67926b7a86a1e6c91885a9ead7b77c8b5c78b06f76bd84ac355c036a43d
\ No newline at end of file






[druid] annotated tag druid-0.20.0-rc2 updated (acdc6ee -> 49703d5)

2020-10-08 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to annotated tag druid-0.20.0-rc2
in repository https://gitbox.apache.org/repos/asf/druid.git.


*** WARNING: tag druid-0.20.0-rc2 was modified! ***

from acdc6ee  (commit)
  to 49703d5  (tag)
 tagging acdc6ee7ea3a81fb3e70b92d7cc682921f988eb5 (commit)
 replaces druid-0.8.0-rc1
  by jon-wei
  on Thu Oct 8 21:35:13 2020 -0700

- Log -
[maven-release-plugin] copy for tag druid-0.20.0-rc2
---


No new revisions were added by this update.

Summary of changes:





[druid] branch 0.20.0 updated: Suppress CVE-2018-11765 for hadoop dependencies (#10485) (#10492)

2020-10-08 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/0.20.0 by this push:
 new 9a2a9ac  Suppress CVE-2018-11765 for hadoop dependencies (#10485) (#10492)
9a2a9ac is described below

commit 9a2a9acb7d34b81feb98b5b4499a4a36f640bdc1
Author: Jonathan Wei 
AuthorDate: Thu Oct 8 01:23:25 2020 -0700

Suppress CVE-2018-11765 for hadoop dependencies (#10485) (#10492)
---
 owasp-dependency-check-suppressions.xml | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/owasp-dependency-check-suppressions.xml b/owasp-dependency-check-suppressions.xml
index 998e5c6..6a532ef 100644
--- a/owasp-dependency-check-suppressions.xml
+++ b/owasp-dependency-check-suppressions.xml
@@ -281,4 +281,11 @@
  CVE-2018-8009
  CVE-2018-8029
   
+  
+ 
+ ^pkg:maven/org\.apache\.hadoop/hadoop\-.*@.*$
+ CVE-2018-11765
+  
 





[druid] branch 0.20.0 updated (b3b2538 -> b3afbb0)

2020-10-08 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git.


from b3b2538  vectorized group by support for nullable numeric columns (#10441) (#10490)
 add b3afbb0  Fix Avro support in Web Console (#10232) (#10491)

No new revisions were added by this update.

Summary of changes:
 docs/development/extensions-core/avro.md   | 21 -
 docs/ingestion/data-formats.md |  6 +++
 .../druid/data/input/avro/AvroFlattenerMaker.java  | 24 --
 .../data/input/AvroStreamInputRowParserTest.java   | 18 +++
 .../data/input/avro/AvroFlattenerMakerTest.java| 12 +++--
 .../druid/data/input/avro/AvroOCFReaderTest.java   | 55 --
 web-console/src/utils/ingestion-spec.spec.ts   | 33 -
 web-console/src/utils/ingestion-spec.tsx   |  4 +-
 .../src/views/load-data-view/load-data-view.tsx|  4 +-
 website/.spelling  |  1 +
 10 files changed, 153 insertions(+), 25 deletions(-)





[druid] branch 0.20.0 updated: vectorized group by support for nullable numeric columns (#10441) (#10490)

2020-10-08 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/0.20.0 by this push:
 new b3b2538  vectorized group by support for nullable numeric columns (#10441) (#10490)
b3b2538 is described below

commit b3b25386479f821248584fb87c7903dc8d99cc9e
Author: Jonathan Wei 
AuthorDate: Wed Oct 7 23:56:48 2020 -0700

vectorized group by support for nullable numeric columns (#10441) (#10490)

* vectorized group by support for numeric null columns

* revert unintended change

* adjust

* review stuffs

Co-authored-by: Clint Wylie 
---
 .../VectorValueMatcherColumnProcessorFactory.java  |  18 ++-
 .../epinephelinae/RowBasedGrouperHelper.java   |  67 +---
 .../GroupByVectorColumnProcessorFactory.java   |  51 ++-
 .../NullableDoubleGroupByVectorColumnSelector.java |  82 ++
 .../NullableFloatGroupByVectorColumnSelector.java  |  82 ++
 .../NullableLongGroupByVectorColumnSelector.java   |  82 ++
 .../epinephelinae/vector/VectorGroupByEngine.java  |   2 +-
 .../druid/segment/DimensionHandlerUtils.java   |   5 +
 .../segment/VectorColumnProcessorFactory.java  |  17 ++-
 ...ctorValueMatcherColumnProcessorFactoryTest.java |  67 +++-
 .../query/groupby/GroupByQueryRunnerTest.java  | 169 -
 .../virtual/VectorizedVirtualColumnTest.java   |  13 --
 .../apache/druid/sql/calcite/CalciteQueryTest.java |  44 --
 13 files changed, 590 insertions(+), 109 deletions(-)

diff --git a/processing/src/main/java/org/apache/druid/query/filter/vector/VectorValueMatcherColumnProcessorFactory.java b/processing/src/main/java/org/apache/druid/query/filter/vector/VectorValueMatcherColumnProcessorFactory.java
index 5ca511f..b2083cc 100644
--- a/processing/src/main/java/org/apache/druid/query/filter/vector/VectorValueMatcherColumnProcessorFactory.java
+++ b/processing/src/main/java/org/apache/druid/query/filter/vector/VectorValueMatcherColumnProcessorFactory.java
@@ -20,6 +20,7 @@
 package org.apache.druid.query.filter.vector;
 
 import org.apache.druid.segment.VectorColumnProcessorFactory;
+import org.apache.druid.segment.column.ColumnCapabilities;
 import org.apache.druid.segment.vector.MultiValueDimensionVectorSelector;
 import org.apache.druid.segment.vector.SingleValueDimensionVectorSelector;
 import org.apache.druid.segment.vector.VectorValueSelector;
@@ -40,6 +41,7 @@ public class VectorValueMatcherColumnProcessorFactory implements VectorColumnPro
 
   @Override
   public VectorValueMatcherFactory makeSingleValueDimensionProcessor(
+  final ColumnCapabilities capabilities,
   final SingleValueDimensionVectorSelector selector
   )
   {
@@ -48,6 +50,7 @@ public class VectorValueMatcherColumnProcessorFactory implements VectorColumnPro
 
   @Override
   public VectorValueMatcherFactory makeMultiValueDimensionProcessor(
+  final ColumnCapabilities capabilities,
   final MultiValueDimensionVectorSelector selector
   )
   {
@@ -55,19 +58,28 @@ public class VectorValueMatcherColumnProcessorFactory implements VectorColumnPro
   }
 
   @Override
-  public VectorValueMatcherFactory makeFloatProcessor(final VectorValueSelector selector)
+  public VectorValueMatcherFactory makeFloatProcessor(
+  final ColumnCapabilities capabilities,
+  final VectorValueSelector selector
+  )
   {
 return new FloatVectorValueMatcher(selector);
   }
 
   @Override
-  public VectorValueMatcherFactory makeDoubleProcessor(final VectorValueSelector selector)
+  public VectorValueMatcherFactory makeDoubleProcessor(
+  final ColumnCapabilities capabilities,
+  final VectorValueSelector selector
+  )
   {
 return new DoubleVectorValueMatcher(selector);
   }
 
   @Override
-  public VectorValueMatcherFactory makeLongProcessor(final VectorValueSelector selector)
+  public VectorValueMatcherFactory makeLongProcessor(
+  final ColumnCapabilities capabilities,
+  final VectorValueSelector selector
+  )
   {
 return new LongVectorValueMatcher(selector);
   }
diff --git a/processing/src/main/java/org/apache/druid/query/groupby/epinephelinae/RowBasedGrouperHelper.java b/processing/src/main/java/org/apache/druid/query/groupby/epinephelinae/RowBasedGrouperHelper.java
index d21e609..c099eed 100644
--- a/processing/src/main/java/org/apache/druid/query/groupby/epinephelinae/RowBasedGrouperHelper.java
+++ b/processing/src/main/java/org/apache/druid/query/groupby/epinephelinae/RowBasedGrouperHelper.java
@@ -736,7 +736,6 @@ public class RowBasedGrouperHelper
   {
 private final boolean includeTimestamp;
 private final boolean sortByDimsFirst;
-private final int dimCount;
 private final long maxDictionarySize;
 private final DefaultLimitSpec limitSpec;
 private final List dimensions;
@@ -756,7 +755,6 @@ public class
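The commit above adds Nullable{Double,Float,Long}GroupByVectorColumnSelector implementations so vectorized group by can key on nullable numeric columns. The core idea can be sketched with hypothetical names (this is not the actual Druid code): prefix each fixed-width primitive value in the grouping key with a one-byte null marker.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of null-aware key writing for a vectorized grouper:
// each key part is a 1-byte null marker followed by the primitive value.
public class NullableLongKeySketch
{
  static final byte IS_NULL = 1;
  static final byte NOT_NULL = 0;

  public static void write(ByteBuffer keySpace, int position, long value, boolean isNull)
  {
    keySpace.put(position, isNull ? IS_NULL : NOT_NULL);
    // Null rows still write a placeholder so every key part has a fixed width.
    keySpace.putLong(position + Byte.BYTES, isNull ? 0L : value);
  }

  public static Long read(ByteBuffer keySpace, int position)
  {
    return keySpace.get(position) == IS_NULL ? null : keySpace.getLong(position + Byte.BYTES);
  }
}
```

Keeping the marker inline preserves fixed-width keys while still distinguishing NULL from 0.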

[druid] branch 0.20.0 updated: Web console: fix compaction status when no compaction config, and small cleanup (#10483) (#10487)

2020-10-08 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/0.20.0 by this push:
 new 18deb16  Web console: fix compaction status when no compaction config, and small cleanup (#10483) (#10487)
18deb16 is described below

commit 18deb1683ad135fbc6d30283572df44c21b9d016
Author: Jonathan Wei 
AuthorDate: Wed Oct 7 23:54:24 2020 -0700

Web console: fix compaction status when no compaction config, and small cleanup (#10483) (#10487)

* move timed button to icons

* cleanup redundant logic

* fix compaction status text

* remove extra style

Co-authored-by: Vadim Ogievetsky 
---
 .../components/refresh-button/refresh-button.tsx   |  2 +-
 .../__snapshots__/timed-button.spec.tsx.snap   | 83 --
 .../src/components/timed-button/timed-button.scss  | 21 --
 .../components/timed-button/timed-button.spec.tsx  | 14 ++--
 .../src/components/timed-button/timed-button.tsx   | 44 +++-
 .../lookup-edit-dialog/lookup-edit-dialog.tsx  | 11 +--
 web-console/src/utils/compaction.spec.ts   |  2 +-
 web-console/src/utils/compaction.ts| 17 +++--
 8 files changed, 93 insertions(+), 101 deletions(-)

diff --git a/web-console/src/components/refresh-button/refresh-button.tsx b/web-console/src/components/refresh-button/refresh-button.tsx
index 681bd42..04fe160 100644
--- a/web-console/src/components/refresh-button/refresh-button.tsx
+++ b/web-console/src/components/refresh-button/refresh-button.tsx
@@ -42,7 +42,7 @@ export const RefreshButton = React.memo(function RefreshButton(props: RefreshBut
   return (
 https://goo.gl/fbAQLP
 
-exports[`Timed button matches snapshot 1`] = `
-
-  
-  
+
+
+  
+}
+defaultIsOpen={false}
+disabled={false}
+fill={false}
+hasBackdrop={false}
+hoverCloseDelay={300}
+hoverOpenDelay={150}
+inheritDarkTheme={true}
+interactionKind="click"
+minimal={false}
+modifiers={Object {}}
+openOnTargetFocus={true}
+position="auto"
+targetTagName="span"
+transitionDuration={300}
+usePortal={true}
+wrapperTagName="span"
   >
-
-  
-
-  
-
-  caret-down
-
-
-  
-
-  
-
-  
-
+
+  
+
 `;
diff --git a/web-console/src/components/timed-button/timed-button.scss b/web-console/src/components/timed-button/timed-button.scss
deleted file mode 100644
index f4d7700..000
--- a/web-console/src/components/timed-button/timed-button.scss
+++ /dev/null
@@ -1,21 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-.timed-button {
-  padding: 10px 10px 5px 10px;
-}
diff --git a/web-console/src/components/timed-button/timed-button.spec.tsx b/web-console/src/components/timed-button/timed-button.spec.tsx
index e5025fb..61c0ca5 100644
--- a/web-console/src/components/timed-button/timed-button.spec.tsx
+++ b/web-console/src/components/timed-button/timed-button.spec.tsx
@@ -16,22 +16,22 @@
  * limitations under the License.
  */
 
-import { render } from '@testing-library/react';
+import { shallow } from 'enzyme';
 import React from 'react';
 
 import { TimedButton } from './timed-button';
 
-describe('Timed button', () => {
+describe('TimedButton', () => {
   it('matches snapshot', () => {
-const timedButton = (
+const timedButton = shallow(
null}
-label={'label'}
+label={'Select delay'}
 defaultDelay={1000}
-  />
+  />,
 );
-const { container } = render(timedButton);
-expect(container.firstChild).toMatchSnapshot();
+
+expect(timedButton).toMatchSnapshot();
   });
 });
diff --git a/web-console/src/components/timed-button/timed-button.tsx b/web-console/src/components/timed-button/timed-button.tsx
index 78a0765..fe7a990 100644
--- a/web-console/src/components/timed-button/timed-button.tsx
+++ b/web-console/src/components/timed-button/timed-button.tsx
@@ -16,15 +16

[druid] branch 0.20.0 updated (18deb16 -> 4cb5f39)

2020-10-08 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 18deb16  Web console: fix compaction status when no compaction config, and small cleanup (#10483) (#10487)
 add 4cb5f39  Close aggregators in HashVectorGrouper.close() (#10452) (#10489)

No new revisions were added by this update.

Summary of changes:
 processing/pom.xml |   5 +
 .../groupby/epinephelinae/HashVectorGrouper.java   |   2 +-
 .../epinephelinae/vector/VectorGroupByEngine.java  |  28 +++---
 ...perTestUtil.java => HashVectorGrouperTest.java} |  34 ---
 .../vector/VectorGroupByEngineIteratorTest.java| 103 +
 .../java/org/apache/druid/segment/TestIndex.java   |   2 +-
 6 files changed, 146 insertions(+), 28 deletions(-)
 copy processing/src/test/java/org/apache/druid/query/groupby/epinephelinae/{GrouperTestUtil.java => HashVectorGrouperTest.java} (53%)
 create mode 100644 processing/src/test/java/org/apache/druid/query/groupby/epinephelinae/vector/VectorGroupByEngineIteratorTest.java
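The "Close aggregators in HashVectorGrouper.close()" fix above is an instance of a holder releasing the resources it owns when it is closed. A generic sketch of the pattern, with hypothetical names (not the Druid implementation):

```java
import java.io.Closeable;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a grouper that creates aggregators must close them
// in its own close(), or the buffers they hold are leaked.
public class OwningGrouperSketch implements Closeable
{
  private final List<Closeable> aggregators = new ArrayList<>();

  public void register(Closeable aggregator)
  {
    aggregators.add(aggregator);
  }

  @Override
  public void close()
  {
    for (Closeable aggregator : aggregators) {
      try {
        aggregator.close();
      }
      catch (Exception e) {
        // Log and continue so one failure doesn't leak the remaining aggregators.
      }
    }
    aggregators.clear();
  }
}
```

Catching per-element failures inside the loop ensures every registered aggregator gets a close attempt.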


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch 0.20.0 updated: Fix compaction task slot computation in auto compaction (#10479) (#10488)

2020-10-07 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/0.20.0 by this push:
 new 8a651ee  Fix compaction task slot computation in auto compaction (#10479) (#10488)
8a651ee is described below

commit 8a651ee7f077d4682a3b1a3a4a50f9d874e00b1d
Author: Jonathan Wei 
AuthorDate: Wed Oct 7 21:58:06 2020 -0700

Fix compaction task slot computation in auto compaction (#10479) (#10488)

* Fix compaction task slot computation in auto compaction

* add tests for task counting

Co-authored-by: Jihoon Son 
---
 .../parallel/ParallelIndexSupervisorTask.java  |   4 +
 .../parallel/ParallelIndexSupervisorTaskTest.java  |   2 +-
 .../server/coordinator/duty/CompactSegments.java   |  63 +--
 .../coordinator/duty/CompactSegmentsTest.java  | 203 +
 4 files changed, 215 insertions(+), 57 deletions(-)

diff --git a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
index 4a218a0..acac279 100644
--- a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
+++ b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
@@ -466,6 +466,10 @@ public class ParallelIndexSupervisorTask extends AbstractBatchIndexTask implemen
 registerResourceCloserOnAbnormalExit(currentSubTaskHolder);
   }
 
+  /**
+   * Returns true if this task can run in the parallel mode with the given inputSource and tuningConfig.
+   * This method should be synchronized with CompactSegments.isParallelMode(ClientCompactionTaskQueryTuningConfig).
+   */
   public static boolean isParallelMode(InputSource inputSource, @Nullable ParallelIndexTuningConfig tuningConfig)
   {
 if (null == tuningConfig) {
diff --git a/indexing-service/src/test/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTaskTest.java b/indexing-service/src/test/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTaskTest.java
index ef1db8a..710df1d 100644
--- a/indexing-service/src/test/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTaskTest.java
+++ b/indexing-service/src/test/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTaskTest.java
@@ -249,7 +249,7 @@ public class ParallelIndexSupervisorTaskTest
 }
   }
 
-  public static class staticUtilsTest
+  public static class StaticUtilsTest
   {
 @Test
 public void testIsParallelModeFalse_nullTuningConfig()
diff --git a/server/src/main/java/org/apache/druid/server/coordinator/duty/CompactSegments.java b/server/src/main/java/org/apache/druid/server/coordinator/duty/CompactSegments.java
index 50b31ca..3b7ee31 100644
--- a/server/src/main/java/org/apache/druid/server/coordinator/duty/CompactSegments.java
+++ b/server/src/main/java/org/apache/druid/server/coordinator/duty/CompactSegments.java
@@ -20,6 +20,7 @@
 package org.apache.druid.server.coordinator.duty;
 
 import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.annotations.VisibleForTesting;
 import com.google.common.collect.Maps;
 import com.google.inject.Inject;
 import org.apache.druid.client.indexing.ClientCompactionTaskQuery;
@@ -27,6 +28,7 @@ import org.apache.druid.client.indexing.ClientCompactionTaskQueryTuningConfig;
 import org.apache.druid.client.indexing.IndexingServiceClient;
 import org.apache.druid.client.indexing.TaskPayloadResponse;
 import org.apache.druid.indexer.TaskStatusPlus;
+import org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec;
 import org.apache.druid.java.util.common.ISE;
 import org.apache.druid.java.util.common.logger.Logger;
 import org.apache.druid.server.coordinator.AutoCompactionSnapshot;
@@ -123,8 +125,9 @@ public class CompactSegments implements CoordinatorDuty
 final ClientCompactionTaskQuery compactionTaskQuery = (ClientCompactionTaskQuery) response.getPayload();
 final Interval interval = compactionTaskQuery.getIoConfig().getInputSpec().getInterval();
 compactionTaskIntervals.computeIfAbsent(status.getDataSource(), k -> new ArrayList<>()).add(interval);
-final int numSubTasks = findNumMaxConcurrentSubTasks(compactionTaskQuery.getTuningConfig());
-numEstimatedNonCompleteCompactionTasks += numSubTasks + 1; // count the compaction task itself
+numEstimatedNonCompleteCompactionTasks += findMaxNumTaskSlotsUsedByOneCompactionTask(
+compactionTaskQuery.getTuningConfig()
+);
   } else {
 throw new ISE
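The hunk above replaces an inline `numSubTasks + 1` with a helper that accounts for every slot one compaction task can occupy. The arithmetic can be sketched standalone (class and parameter names here are hypothetical, not Druid's API):

```java
// Hypothetical sketch of task slot accounting for one compaction task:
// a parallel task occupies one slot per concurrent subtask plus one for
// the supervisor task itself; a sequential task occupies a single slot.
public class CompactionSlotSketch
{
  public static int maxTaskSlotsUsedByOneCompactionTask(boolean parallel, int maxNumConcurrentSubTasks)
  {
    if (!parallel) {
      return 1; // the task itself is the only slot consumer
    }
    return maxNumConcurrentSubTasks + 1; // subtasks + the supervisor task
  }
}
```

The `+ 1` is the supervisor task; in the sequential case the task runs the work itself, so it counts as exactly one slot.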

[druid] branch 0.20.0 updated (eb6b2e6 -> 000e0b6)

2020-10-07 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git.


from eb6b2e6  Allow using jsonpath predicates with AvroFlattener (#10330) (#10475)
 add 000e0b6  Web console: Don't include realtime segments in size calculations. (#10482) (#10486)

No new revisions were added by this update.

Summary of changes:
 .../src/views/datasource-view/datasource-view.tsx| 16 
 1 file changed, 8 insertions(+), 8 deletions(-)





[druid] branch master updated: Suppress CVE-2018-11765 for hadoop dependencies (#10485)

2020-10-07 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 0aa2a8e  Suppress CVE-2018-11765 for hadoop dependencies (#10485)
0aa2a8e is described below

commit 0aa2a8e2c641aa8eb8722b76b205f70f7bbff8cf
Author: Jonathan Wei 
AuthorDate: Wed Oct 7 21:55:34 2020 -0700

Suppress CVE-2018-11765 for hadoop dependencies (#10485)
---
 owasp-dependency-check-suppressions.xml | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/owasp-dependency-check-suppressions.xml b/owasp-dependency-check-suppressions.xml
index 998e5c6..6a532ef 100644
--- a/owasp-dependency-check-suppressions.xml
+++ b/owasp-dependency-check-suppressions.xml
@@ -281,4 +281,11 @@
  CVE-2018-8009
  CVE-2018-8029
   
+  
+ 
+ ^pkg:maven/org\.apache\.hadoop/hadoop\-.*@.*$
+ CVE-2018-11765
+  
 





[druid-website-src] 01/01: Merge pull request #173 from apache/20docs

2020-10-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git

commit 504c40c7a84a4d6507e5effa0cfc448b8ecb3119
Merge: c32ee24 a28ee78
Author: Jonathan Wei 
AuthorDate: Mon Oct 5 22:40:53 2020 -0700

Merge pull request #173 from apache/20docs

Add 0.20.0 docs

 docs/0.20.0/About-Experimental-Features.html   |   8 +
 docs/0.20.0/Aggregations.html  |   8 +
 docs/0.20.0/ApproxHisto.html   |   8 +
 docs/0.20.0/Batch-ingestion.html   |   8 +
 docs/0.20.0/Booting-a-production-cluster.html  |   8 +
 docs/0.20.0/Broker-Config.html |   8 +
 docs/0.20.0/Broker.html|   8 +
 docs/0.20.0/Build-from-source.html |   8 +
 docs/0.20.0/Cassandra-Deep-Storage.html|   8 +
 docs/0.20.0/Cluster-setup.html |   8 +
 docs/0.20.0/Compute.html   |   8 +
 docs/0.20.0/Concepts-and-Terminology.html  |   8 +
 docs/0.20.0/Configuration.html |   8 +
 docs/0.20.0/Contribute.html|   8 +
 docs/0.20.0/Coordinator-Config.html|   8 +
 docs/0.20.0/Coordinator.html   |   8 +
 docs/0.20.0/DataSource.html|   8 +
 docs/0.20.0/DataSourceMetadataQuery.html   |   8 +
 docs/0.20.0/Data_formats.html  |   8 +
 docs/0.20.0/Deep-Storage.html  |   8 +
 docs/0.20.0/Design.html|   8 +
 docs/0.20.0/DimensionSpecs.html|   8 +
 docs/0.20.0/Download.html  |   8 +
 docs/0.20.0/Druid-Personal-Demo-Cluster.html   |   8 +
 docs/0.20.0/Druid-vs-Cassandra.html|   8 +
 docs/0.20.0/Druid-vs-Elasticsearch.html|   8 +
 docs/0.20.0/Druid-vs-Hadoop.html   |   8 +
 docs/0.20.0/Druid-vs-Impala-or-Shark.html  |   8 +
 docs/0.20.0/Druid-vs-Redshift.html |   8 +
 docs/0.20.0/Druid-vs-Spark.html|   8 +
 docs/0.20.0/Druid-vs-Vertica.html  |   8 +
 docs/0.20.0/Evaluate.html  |   8 +
 docs/0.20.0/Examples.html  |   8 +
 docs/0.20.0/Filters.html   |   8 +
 docs/0.20.0/Firehose.html  |   8 +
 docs/0.20.0/GeographicQueries.html |   8 +
 docs/0.20.0/Granularities.html |   8 +
 docs/0.20.0/GroupByQuery.html  |   8 +
 docs/0.20.0/Hadoop-Configuration.html  |   8 +
 docs/0.20.0/Having.html|   8 +
 docs/0.20.0/Historical-Config.html |   8 +
 docs/0.20.0/Historical.html|   8 +
 docs/0.20.0/Home.html  |   8 +
 docs/0.20.0/Including-Extensions.html  |   8 +
 docs/0.20.0/Indexing-Service-Config.html   |   8 +
 docs/0.20.0/Indexing-Service.html  |   8 +
 docs/0.20.0/Ingestion-FAQ.html |   8 +
 docs/0.20.0/Ingestion-overview.html|   8 +
 docs/0.20.0/Ingestion.html |   8 +
 .../Integrating-Druid-With-Other-Technologies.html |   8 +
 docs/0.20.0/Kafka-Eight.html   |   8 +
 docs/0.20.0/Libraries.html |   8 +
 docs/0.20.0/LimitSpec.html |   8 +
 docs/0.20.0/Loading-Your-Data.html |   8 +
 docs/0.20.0/Logging.html   |   8 +
 docs/0.20.0/Master.html|   8 +
 docs/0.20.0/Metadata-storage.html  |   8 +
 docs/0.20.0/Metrics.html   |   8 +
 docs/0.20.0/Middlemanager.html |   8 +
 docs/0.20.0/Modules.html   |   8 +
 docs/0.20.0/MySQL.html |   8 +
 docs/0.20.0/OrderBy.html   |   8 +
 docs/0.20.0/Other-Hadoop.html  |   8 +
 docs/0.20.0/Papers-and-talks.html  |   8 +
 docs/0.20.0/Peons.html |   8 +
 docs/0.20.0/Performance-FAQ.html   |   8 +
 docs/0.20.0/Plumber.html   |   8 +
 docs/0.20.0/Post-aggregations.html |   8 +
 docs/0.20.0/Production-Cluster-Configuration.html  |   8 +
 docs/0.20.0/Query-Context.html |   8 +
 docs/0.20.0/Querying-your-data.html|   8 +
 docs/0.20.0/Querying.html  |   8 +
 docs/0.20.0/Realtime-Config.html   |   8 +
 docs/0.20.0/Realtime-ingestion.html|   8 +
 docs/0.20.0/Realtime.html  |   8 +
 docs/0.20.0/Recommendations.html   |   8 +
 docs/0.20.0/Rolling-Updates.html

[druid-website-src] branch master updated (c32ee24 -> 504c40c)

2020-10-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git.


from c32ee24  Merge pull request #171 from druid-matt/patch-23
 add a28ee78  Add 0.20.0 docs
 new 504c40c  Merge pull request #173 from apache/20docs

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../About-Experimental-Features.html   |   0
 docs/{latest => 0.20.0}/Aggregations.html  |   0
 docs/{latest => 0.20.0}/ApproxHisto.html   |   0
 docs/{latest => 0.20.0}/Batch-ingestion.html   |   0
 .../Booting-a-production-cluster.html  |   0
 docs/{latest => 0.20.0}/Broker-Config.html |   0
 docs/{latest => 0.20.0}/Broker.html|   0
 docs/{latest => 0.20.0}/Build-from-source.html |   0
 .../{latest => 0.20.0}/Cassandra-Deep-Storage.html |   0
 docs/{latest => 0.20.0}/Cluster-setup.html |   0
 docs/{latest => 0.20.0}/Compute.html   |   0
 .../Concepts-and-Terminology.html  |   0
 docs/{latest => 0.20.0}/Configuration.html |   0
 docs/{latest => 0.20.0}/Contribute.html|   0
 docs/{latest => 0.20.0}/Coordinator-Config.html|   0
 docs/{latest => 0.20.0}/Coordinator.html   |   0
 docs/{latest => 0.20.0}/DataSource.html|   0
 .../DataSourceMetadataQuery.html   |   0
 docs/{latest => 0.20.0}/Data_formats.html  |   0
 docs/{latest => 0.20.0}/Deep-Storage.html  |   0
 docs/{latest => 0.20.0}/Design.html|   0
 docs/{latest => 0.20.0}/DimensionSpecs.html|   0
 docs/{latest => 0.20.0}/Download.html  |   0
 .../Druid-Personal-Demo-Cluster.html   |   0
 docs/{latest => 0.20.0}/Druid-vs-Cassandra.html|   0
 .../{latest => 0.20.0}/Druid-vs-Elasticsearch.html |   0
 docs/{latest => 0.20.0}/Druid-vs-Hadoop.html   |   0
 .../Druid-vs-Impala-or-Shark.html  |   0
 docs/{latest => 0.20.0}/Druid-vs-Redshift.html |   0
 docs/{latest => 0.20.0}/Druid-vs-Spark.html|   0
 docs/{latest => 0.20.0}/Druid-vs-Vertica.html  |   0
 docs/{latest => 0.20.0}/Evaluate.html  |   0
 docs/{latest => 0.20.0}/Examples.html  |   0
 docs/{latest => 0.20.0}/Filters.html   |   0
 docs/{latest => 0.20.0}/Firehose.html  |   0
 docs/{latest => 0.20.0}/GeographicQueries.html |   0
 docs/{latest => 0.20.0}/Granularities.html |   0
 docs/{latest => 0.20.0}/GroupByQuery.html  |   0
 docs/{latest => 0.20.0}/Hadoop-Configuration.html  |   0
 docs/{latest => 0.20.0}/Having.html|   0
 docs/{latest => 0.20.0}/Historical-Config.html |   0
 docs/{latest => 0.20.0}/Historical.html|   0
 docs/{latest => 0.20.0}/Home.html  |   0
 docs/{latest => 0.20.0}/Including-Extensions.html  |   0
 .../Indexing-Service-Config.html   |   0
 docs/{latest => 0.20.0}/Indexing-Service.html  |   0
 docs/{latest => 0.20.0}/Ingestion-FAQ.html |   0
 docs/{latest => 0.20.0}/Ingestion-overview.html|   0
 docs/{latest => 0.20.0}/Ingestion.html |   0
 .../Integrating-Druid-With-Other-Technologies.html |   0
 docs/{latest => 0.20.0}/Kafka-Eight.html   |   0
 docs/{latest => 0.20.0}/Libraries.html |   0
 docs/{latest => 0.20.0}/LimitSpec.html |   0
 docs/{latest => 0.20.0}/Loading-Your-Data.html |   0
 docs/{latest => 0.20.0}/Logging.html   |   0
 docs/{latest => 0.20.0}/Master.html|   0
 docs/{latest => 0.20.0}/Metadata-storage.html  |   0
 docs/{latest => 0.20.0}/Metrics.html   |   0
 docs/{latest => 0.20.0}/Middlemanager.html |   0
 docs/{latest => 0.20.0}/Modules.html   |   0
 docs/{latest => 0.20.0}/MySQL.html |   0
 docs/{latest => 0.20.0}/OrderBy.html   |   0
 docs/{latest => 0.20.0}/Other-Hadoop.html  |   0
 docs/{latest => 0.20.0}/Papers-and-talks.html  |   0
 docs/{latest => 0.20.0}/Peons.html |   0
 docs/{latest => 0.20.0}/Performance-FAQ.html   |   0
 docs/{latest => 0.20.0}/Plumber.html   |   0
 docs/{latest => 0.20.0}/Post-aggregations.html |   0
 .../Production-Cluster-Configuration.html  |   0
 docs/{latest => 0.20.0}/Query-Context.html |   0
 docs/{latest => 0.20.0}/Querying-your-data.html|   0
 docs/{latest => 0.20.0}/Querying.html  |   0
 docs/{lat

[druid-website] 01/01: Merge pull request #101 from apache/20docs

2020-10-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/druid-website.git

commit 6b6df736785489add3f42a9e4d7bb92608217e69
Merge: 350df82 fab4b1e
Author: Jonathan Wei 
AuthorDate: Mon Oct 5 22:40:59 2020 -0700

Merge pull request #101 from apache/20docs

Add 0.20.0 docs for staging

 blog/2011/04/30/introducing-druid.html |2 +-
 blog/2011/05/20/druid-part-deux.html   |2 +-
 blog/2012/01/19/scaling-the-druid-data-store.html  |2 +-
 ...-right-cardinality-estimation-for-big-data.html |2 +-
 blog/2012/09/21/druid-bitmap-compression.html  |2 +-
 ...ond-hadoop-fast-ad-hoc-queries-on-big-data.html |2 +-
 blog/2012/10/24/introducing-druid.html |2 +-
 .../interactive-queries-meet-real-time-data.html   |2 +-
 blog/2013/04/03/15-minutes-to-live-druid.html  |2 +-
 blog/2013/04/03/druid-r-meetup.html|2 +-
 blog/2013/04/26/meet-the-druid.html|2 +-
 blog/2013/05/10/real-time-for-real.html|2 +-
 blog/2013/08/06/twitter-tutorial.html  |2 +-
 blog/2013/08/30/loading-data.html  |2 +-
 .../12/the-art-of-approximating-distributions.html |2 +-
 blog/2013/09/16/upcoming-events.html   |2 +-
 .../09/19/launching-druid-with-apache-whirr.html   |2 +-
 blog/2013/09/20/druid-at-xldb.html |2 +-
 blog/2013/11/04/querying-your-data.html|2 +-
 blog/2014/02/03/rdruid-and-twitterstream.html  |2 +-
 ...oglog-optimizations-for-real-world-systems.html |2 +-
 blog/2014/03/12/batch-ingestion.html   |2 +-
 blog/2014/03/17/benchmarking-druid.html|2 +-
 blog/2014/04/15/intro-to-pydruid.html  |2 +-
 ...ff-on-the-rise-of-the-real-time-data-stack.html |2 +-
 .../07/23/five-tips-for-a-f-ing-great-logo.html|2 +-
 blog/2015/02/20/towards-a-community-led-druid.html |2 +-
 blog/2015/11/03/seeking-new-committers.html|2 +-
 blog/2016/01/06/announcing-new-committers.html |2 +-
 blog/2016/06/28/druid-0-9-1.html   |2 +-
 blog/2016/12/01/druid-0-9-2.html   |2 +-
 blog/2017/04/18/druid-0-10-0.html  |2 +-
 blog/2017/08/22/druid-0-10-1.html  |2 +-
 blog/2017/12/04/druid-0-11-0.html  |2 +-
 blog/2018/03/08/druid-0-12-0.html  |2 +-
 blog/2018/06/08/druid-0-12-1.html  |2 +-
 blog/index.html|2 +-
 community/cla.html |2 +-
 community/index.html   |2 +-
 css/base.css   |  280 ---
 css/blogs.css  |   68 -
 css/bootstrap-pure.css | 1855 -
 css/docs.css   |  126 --
 css/footer.css |   29 -
 css/header.css |  110 -
 css/index.css  |   50 -
 css/news-list.css  |   63 -
 css/reset.css  |   44 -
 css/syntax.css |  281 ---
 css/variables.css  |0
 .../comparisons/druid-vs-elasticsearch.html|4 +-
 .../comparisons/druid-vs-key-value.html|4 +-
 .../comparisons/druid-vs-kudu.html |4 +-
 .../comparisons/druid-vs-redshift.html |4 +-
 .../comparisons/druid-vs-spark.html|4 +-
 .../comparisons/druid-vs-sql-on-hadoop.html|4 +-
 docs/0.13.0-incubating/configuration/index.html|4 +-
 docs/0.13.0-incubating/configuration/logging.html  |4 +-
 docs/0.13.0-incubating/configuration/realtime.html |4 +-
 .../dependencies/cassandra-deep-storage.html   |4 +-
 .../dependencies/deep-storage.html |4 +-
 .../dependencies/metadata-storage.html |4 +-
 docs/0.13.0-incubating/dependencies/zookeeper.html |4 +-
 docs/0.13.0-incubating/design/auth.html|4 +-
 docs/0.13.0-incubating/design/broker.html  |4 +-
 docs/0.13.0-incubating/design/coordinator.html |4 +-
 docs/0.13.0-incubating/design/historical.html  |4 +-
 docs/0.13.0-incubating/design/index.html   |4 +-
 .../0.13.0-incubating/design/indexing-service.html |4 +-
 docs/0.13.0-incubating/design/middlemanager.html   |4 +-
 docs/0.13.0-incubating/design/overlord.html|4 +-
 docs/0.13.0-incubating/design/peons.html   |4 +-
 docs/0.13.0-incubating/design/plumber.html |4 +-
 docs/0.13.0-incubating/design/realtime.html|4

[druid-website] branch asf-staging updated (350df82 -> 6b6df73)

2020-10-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/druid-website.git.


from 350df82  latest 0.19.0 docs for staging
 add fab4b1e  Add 0.20.0 docs
 new 6b6df73  Merge pull request #101 from apache/20docs

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 blog/2011/04/30/introducing-druid.html |2 +-
 blog/2011/05/20/druid-part-deux.html   |2 +-
 blog/2012/01/19/scaling-the-druid-data-store.html  |2 +-
 ...-right-cardinality-estimation-for-big-data.html |2 +-
 blog/2012/09/21/druid-bitmap-compression.html  |2 +-
 ...ond-hadoop-fast-ad-hoc-queries-on-big-data.html |2 +-
 blog/2012/10/24/introducing-druid.html |2 +-
 .../interactive-queries-meet-real-time-data.html   |2 +-
 blog/2013/04/03/15-minutes-to-live-druid.html  |2 +-
 blog/2013/04/03/druid-r-meetup.html|2 +-
 blog/2013/04/26/meet-the-druid.html|2 +-
 blog/2013/05/10/real-time-for-real.html|2 +-
 blog/2013/08/06/twitter-tutorial.html  |2 +-
 blog/2013/08/30/loading-data.html  |2 +-
 .../12/the-art-of-approximating-distributions.html |2 +-
 blog/2013/09/16/upcoming-events.html   |2 +-
 .../09/19/launching-druid-with-apache-whirr.html   |2 +-
 blog/2013/09/20/druid-at-xldb.html |2 +-
 blog/2013/11/04/querying-your-data.html|2 +-
 blog/2014/02/03/rdruid-and-twitterstream.html  |2 +-
 ...oglog-optimizations-for-real-world-systems.html |2 +-
 blog/2014/03/12/batch-ingestion.html   |2 +-
 blog/2014/03/17/benchmarking-druid.html|2 +-
 blog/2014/04/15/intro-to-pydruid.html  |2 +-
 ...ff-on-the-rise-of-the-real-time-data-stack.html |2 +-
 .../07/23/five-tips-for-a-f-ing-great-logo.html|2 +-
 blog/2015/02/20/towards-a-community-led-druid.html |2 +-
 blog/2015/11/03/seeking-new-committers.html|2 +-
 blog/2016/01/06/announcing-new-committers.html |2 +-
 blog/2016/06/28/druid-0-9-1.html   |2 +-
 blog/2016/12/01/druid-0-9-2.html   |2 +-
 blog/2017/04/18/druid-0-10-0.html  |2 +-
 blog/2017/08/22/druid-0-10-1.html  |2 +-
 blog/2017/12/04/druid-0-11-0.html  |2 +-
 blog/2018/03/08/druid-0-12-0.html  |2 +-
 blog/2018/06/08/druid-0-12-1.html  |2 +-
 blog/index.html|2 +-
 community/cla.html |2 +-
 community/index.html   |2 +-
 css/base.css   |  280 ---
 css/blogs.css  |   68 -
 css/bootstrap-pure.css | 1855 -
 css/docs.css   |  126 --
 css/footer.css |   29 -
 css/header.css |  110 -
 css/index.css  |   50 -
 css/news-list.css  |   63 -
 css/reset.css  |   44 -
 css/syntax.css |  281 ---
 css/variables.css  |0
 .../comparisons/druid-vs-elasticsearch.html|4 +-
 .../comparisons/druid-vs-key-value.html|4 +-
 .../comparisons/druid-vs-kudu.html |4 +-
 .../comparisons/druid-vs-redshift.html |4 +-
 .../comparisons/druid-vs-spark.html|4 +-
 .../comparisons/druid-vs-sql-on-hadoop.html|4 +-
 docs/0.13.0-incubating/configuration/index.html|4 +-
 docs/0.13.0-incubating/configuration/logging.html  |4 +-
 docs/0.13.0-incubating/configuration/realtime.html |4 +-
 .../dependencies/cassandra-deep-storage.html   |4 +-
 .../dependencies/deep-storage.html |4 +-
 .../dependencies/metadata-storage.html |4 +-
 docs/0.13.0-incubating/dependencies/zookeeper.html |4 +-
 docs/0.13.0-incubating/design/auth.html|4 +-
 docs/0.13.0-incubating/design/broker.html  |4 +-
 docs/0.13.0-incubating/design/coordinator.html |4 +-
 docs/0.13.0-incubating/design/historical.html  |4 +-
 docs/0.13.0-incubating/design/index.html   |4 +-
 .../0.13.0-incubating/design/indexing-service.html |4 +-
 docs/0.13.0-incubating/design/middlemanager.html   |4 +-
 docs/0.13.0-incubating/design/overlord.html   

[druid-website] branch 20docs created (now fab4b1e)

2020-10-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 20docs
in repository https://gitbox.apache.org/repos/asf/druid-website.git.


  at fab4b1e  Add 0.20.0 docs

This branch includes the following new commits:

 new fab4b1e  Add 0.20.0 docs

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






[druid-website-src] branch 20docs created (now a28ee78)

2020-10-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 20docs
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git.


  at a28ee78  Add 0.20.0 docs

This branch includes the following new commits:

 new a28ee78  Add 0.20.0 docs

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email. The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






svn commit: r41708 - /dev/druid/0.20.0-rc1/

2020-10-05 Thread jonwei
Author: jonwei
Date: Tue Oct  6 05:16:54 2020
New Revision: 41708

Log:
Add 0.20.0-rc1 artifacts

Added:
dev/druid/0.20.0-rc1/
dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz   (with props)
dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz.asc
dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz.sha512
dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz   (with props)
dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz.asc
dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz.sha512

Added: dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz
==
Binary file - no diff available.

Propchange: dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz.asc
==
--- dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz.asc (added)
+++ dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz.asc Tue Oct  6 05:16:54 
2020
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCgAdFiEEBJXgMcNdEyR5wkHskoPEkhrESD4FAl978G8ACgkQkoPEkhrE
+SD5NzQ/+Ik3H92fOEdvMYisAoO1HrXfqD++1gh0YzgFW7Eh0kJjxT4A6tK2UgLyo
+bAE+z/t3V9+wDSc7xaaPHO+xDW9kR+YaWF8ZKbvQoXmAdQRhkEpgScR95JOFLiOs
+YJgS5V9FhRKf9P1vCAT/BZkpgHNH4GxVPYkJ0tj0043RgSLECaJ4arFhPrskdYa7
++98CvQnHbwdoPtXcsjfdk754j7SEAHq3IoVgTfzYOO81fSJX3qZ2JQIrdCZlv7wj
+So+2dO3z+28v7zzHAhomx2FCTbXtEcTGh+EN8/i0IuAX3fmVxQPCcD0oc2+/nd6y
+GPbtqSohjK4g0nNofvatvM1ifad0ZfBas0fsHOK16AF9S14vH70XbnQ+D81Is1ZC
+sT4zSdeaUmQvs+IvMpY3Tm3vWvbFa5xvAvO7t2ybsZJ4M9dwfip4Zl1A7ixzUcyr
+nk6EaiVET0F1CBQBC2nfSfWkrMCXD9vKa9yIYjofH3WrpPrnIVT3UelmR0Yk56aJ
+wKPkaZrHTrEX/AM/GnHmpr4NFWf7EX8OJvTx1wIKtTsMiCAEWicwoIvvK2ECYQPc
+pCbqrrrjdl/xAvQRcj3ScJfnjIefNRJidUHJJ9orq6JMSis1OX1PF30rRsYpywNY
+MJBdscDsm6p7aV2f5UGYTpUsrTG28OEuHfEcnoZUBzfTQMmQEBk=
+=tmv9
+-END PGP SIGNATURE-

Added: dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz.sha512
==
--- dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz.sha512 (added)
+++ dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz.sha512 Tue Oct  6 
05:16:54 2020
@@ -0,0 +1 @@
+a1225947af35cac6483d50694e4cfb3f8e5a97741fbe171edc6528959049f1b38f3e7d4044e0644afd9b209d890f74282a4befb40d570919fb077f6522b7737e
\ No newline at end of file

Added: dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz
==
Binary file - no diff available.

Propchange: dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz.asc
==
--- dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz.asc (added)
+++ dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz.asc Tue Oct  6 05:16:54 
2020
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCgAdFiEEBJXgMcNdEyR5wkHskoPEkhrESD4FAl978G8ACgkQkoPEkhrE
+SD4M8w//f0Bj75eqXymv1wDy+QFLiz/GVeCuXP/n124hJCRn8i0WRLrTWKHSQBaJ
+NjkkTM66dRiYCbczA4Bt3rqUJpvqzVTf1Ls/M8boDRudwpIBRp0ZjfNxqLp1wGdH
+kDVQAbdYlgFnrB01TM6PhvigvXJ4Fy8dHZfxGXX5mzPG9+a98/f4hAJayUplLDPa
+NVg2m/7qdLc7TWYEZ+wL7hvFb70mDUlD1je7hvMctkzP1DxiDNkIOBPUxeIFQehD
+w7fy6FilHzgBqXVa8ghciRAD2YhIPJQddMxbotKD0TsceTFnyxTvGD6eVs0CMMve
+rG2hqd4uZsPhJ+Zz54cqqDSIhC22Ve9vNclDbGBPUMnhLNKjfE8Da/q9k/5227v/
+v8i5T1hqNWMQcUezRXIQ1Q+TXdXI5mLgMJZR3XF6HwTr6WDokzRvLxPo1BXypnkz
+snZniV9C2j90kXWHll2E9YYhz76p6Ur4Gf+DjIRe0etDdxXS/Qw5DrxSZClhv//O
+Cg42ctXxcbmj9Lqa0450WvntHEjy8Uqw6gUdQO6mi/7J8lQkdVMuC6f46DyDYrSZ
+x1zzsoZYP2gIjLj7Bxpa8Wq6oy6eXhvcVDLxeVIWq8yCupYivPPPhzWmheRts7Zs
+3Aqe67J9iyS9sE4vLie7JrUeiwHET8R5QqNEnJPOLrxgYh3Kwdc=
+=g13j
+-END PGP SIGNATURE-

Added: dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz.sha512
==
--- dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz.sha512 (added)
+++ dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz.sha512 Tue Oct  6 
05:16:54 2020
@@ -0,0 +1 @@
+c1718ad615d689ec0350f170bdefdbc9e7aa1628693ebb7d5e6de74475e5088b9e99383a68a627f08e434aa89e43fcae84847ec3c9869687176ff8d43a3848c6
\ No newline at end of file






[druid] annotated tag druid-0.20.0-rc1 updated (2d6d036 -> 4cb964e)

2020-10-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to annotated tag druid-0.20.0-rc1
in repository https://gitbox.apache.org/repos/asf/druid.git.


*** WARNING: tag druid-0.20.0-rc1 was modified! ***

from 2d6d036  (commit)
  to 4cb964e  (tag)
 tagging 2d6d03688bbb1d2321baec6555887fe9317f5eb4 (commit)
 replaces druid-0.8.0-rc1
  by jon-wei
  on Mon Oct 5 19:17:57 2020 -0700

- Log -
[maven-release-plugin] copy for tag druid-0.20.0-rc1
---


No new revisions were added by this update.

Summary of changes:





[druid] branch 0.20.0 updated: Allow using jsonpath predicates with AvroFlattener (#10330) (#10475)

2020-10-04 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/0.20.0 by this push:
 new eb6b2e6  Allow using jsonpath predicates with AvroFlattener (#10330) 
(#10475)
eb6b2e6 is described below

commit eb6b2e6d05788ce83d4a60094cc8ad528ee1137c
Author: Jonathan Wei 
AuthorDate: Sun Oct 4 15:56:07 2020 -0700

Allow using jsonpath predicates with AvroFlattener (#10330) (#10475)

Co-authored-by: Lasse Krogh Mammen 
---
 .../druid/data/input/avro/GenericAvroJsonProvider.java |  2 +-
 .../druid/data/input/avro/AvroFlattenerMakerTest.java  | 18 ++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git 
a/extensions-core/avro-extensions/src/main/java/org/apache/druid/data/input/avro/GenericAvroJsonProvider.java
 
b/extensions-core/avro-extensions/src/main/java/org/apache/druid/data/input/avro/GenericAvroJsonProvider.java
index 42195ca..ab6a53e 100644
--- 
a/extensions-core/avro-extensions/src/main/java/org/apache/druid/data/input/avro/GenericAvroJsonProvider.java
+++ 
b/extensions-core/avro-extensions/src/main/java/org/apache/druid/data/input/avro/GenericAvroJsonProvider.java
@@ -193,6 +193,6 @@ public class GenericAvroJsonProvider implements JsonProvider
   @Override
   public Object unwrap(final Object o)
   {
-throw new UnsupportedOperationException("Unused");
+return o;
   }
 }
diff --git 
a/extensions-core/avro-extensions/src/test/java/org/apache/druid/data/input/avro/AvroFlattenerMakerTest.java
 
b/extensions-core/avro-extensions/src/test/java/org/apache/druid/data/input/avro/AvroFlattenerMakerTest.java
index d3faaf4..6becdf7 100644
--- 
a/extensions-core/avro-extensions/src/test/java/org/apache/druid/data/input/avro/AvroFlattenerMakerTest.java
+++ 
b/extensions-core/avro-extensions/src/test/java/org/apache/druid/data/input/avro/AvroFlattenerMakerTest.java
@@ -23,6 +23,8 @@ import 
org.apache.druid.data.input.AvroStreamInputRowParserTest;
 import org.apache.druid.data.input.SomeAvroDatum;
 import org.junit.Assert;
 import org.junit.Test;
+import java.util.Collections;
+import java.util.List;
 
 public class AvroFlattenerMakerTest
 {
@@ -195,6 +197,22 @@ public class AvroFlattenerMakerTest
 record.getSomeRecordArray(),
 flattener.makeJsonPathExtractor("$.someRecordArray").apply(record)
 );
+
+Assert.assertEquals(
+record.getSomeRecordArray().get(0).getNestedString(),
+
flattener.makeJsonPathExtractor("$.someRecordArray[0].nestedString").apply(record)
+);
+
+Assert.assertEquals(
+record.getSomeRecordArray(),
+
flattener.makeJsonPathExtractor("$.someRecordArray[?(@.nestedString)]").apply(record)
+);
+
+List<String> nestedStringArray = Collections.singletonList(record.getSomeRecordArray().get(0).getNestedString().toString());
+Assert.assertEquals(
+nestedStringArray,
+
flattener.makeJsonPathExtractor("$.someRecordArray[?(@.nestedString=='string in 
record')].nestedString").apply(record)
+);
   }
 
   @Test(expected = UnsupportedOperationException.class)


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org


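With unwrap() returning the object instead of throwing, json-path filter predicates such as $.someRecordArray[?(@.nestedString=='string in record')].nestedString can evaluate against Avro records, as the new tests show. A stdlib-only Java sketch of the filter-then-project semantics such a predicate expresses (filterAndProject and its arguments are illustrative names, not Druid or json-path API):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class JsonPathPredicateSketch
{
  // Keep array elements whose field matches the predicate, then project
  // that field -- roughly what [?(@.field=='value')].field evaluates to.
  public static List<String> filterAndProject(
      List<Map<String, Object>> records,
      String field,
      String expected
  )
  {
    return records.stream()
                  .filter(r -> expected.equals(r.get(field)))
                  .map(r -> (String) r.get(field))
                  .collect(Collectors.toList());
  }

  public static void main(String[] args)
  {
    List<Map<String, Object>> array = Arrays.asList(
        Collections.singletonMap("nestedString", (Object) "string in record"),
        Collections.singletonMap("nestedString", (Object) "other")
    );
    // prints [string in record]
    System.out.println(filterAndProject(array, "nestedString", "string in record"));
  }
}
```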

[druid] branch 0.20.0 updated: Web console: fix lookup edit dialog version setting (#10461) (#10473)

2020-10-04 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/0.20.0 by this push:
 new 795bec8  Web console: fix lookup edit dialog version setting (#10461) 
(#10473)
795bec8 is described below

commit 795bec866e29209ff8fcf49cae6a5b0110563050
Author: Jonathan Wei 
AuthorDate: Sun Oct 4 13:57:13 2020 -0700

Web console: fix lookup edit dialog version setting (#10461) (#10473)

* fix lookup edit dialog

* update snapshots

* clean up test

Co-authored-by: Vadim Ogievetsky 
---
 web-console/src/components/auto-form/auto-form.tsx |   2 +
 .../__snapshots__/form-json-selector.spec.tsx.snap |  43 ++
 .../form-json-selector.spec.tsx}   |  33 +-
 .../form-json-selector/form-json-selector.tsx} |  39 +-
 .../__snapshots__/compaction-dialog.spec.tsx.snap  |  88 +--
 .../compaction-dialog/compaction-dialog.scss   |   2 +-
 .../compaction-dialog/compaction-dialog.tsx|  25 +-
 .../__snapshots__/lookup-edit-dialog.spec.tsx.snap | 745 ++---
 .../lookup-edit-dialog/lookup-edit-dialog.scss |  16 +-
 .../lookup-edit-dialog/lookup-edit-dialog.spec.tsx |   6 +-
 .../lookup-edit-dialog/lookup-edit-dialog.tsx  | 144 ++--
 .../src/views/datasource-view/datasource-view.tsx  |   5 +-
 .../src/views/lookups-view/lookups-view.tsx|   4 +-
 13 files changed, 538 insertions(+), 614 deletions(-)

diff --git a/web-console/src/components/auto-form/auto-form.tsx 
b/web-console/src/components/auto-form/auto-form.tsx
index 59561ac..ce26cad 100644
--- a/web-console/src/components/auto-form/auto-form.tsx
+++ b/web-console/src/components/auto-form/auto-form.tsx
@@ -50,6 +50,7 @@ export interface Field {
   placeholder?: Functor;
   min?: number;
   zeroMeansUndefined?: boolean;
+  height?: string;
   disabled?: Functor;
   defined?: Functor;
   required?: Functor;
@@ -272,6 +273,7 @@ export class AutoForm> 
extends React.PureComponent
 value={deepGet(model as any, field.name)}
 onChange={(v: any) => this.fieldChange(field, v)}
 placeholder={AutoForm.evaluateFunctor(field.placeholder, model, '')}
+height={field.height}
   />
 );
   }
diff --git 
a/web-console/src/components/form-json-selector/__snapshots__/form-json-selector.spec.tsx.snap
 
b/web-console/src/components/form-json-selector/__snapshots__/form-json-selector.spec.tsx.snap
new file mode 100644
index 000..d2ec216
--- /dev/null
+++ 
b/web-console/src/components/form-json-selector/__snapshots__/form-json-selector.spec.tsx.snap
@@ -0,0 +1,43 @@
+// Jest Snapshot v1, https://goo.gl/fbAQLP
+
+exports[`FormJsonSelector matches snapshot form json 1`] = `
+
+  
+
+
+  
+
+`;
+
+exports[`FormJsonSelector matches snapshot form tab 1`] = `
+
+  
+
+
+  
+
+`;
diff --git a/web-console/src/dialogs/lookup-edit-dialog/lookup-edit-dialog.scss 
b/web-console/src/components/form-json-selector/form-json-selector.spec.tsx
similarity index 60%
copy from web-console/src/dialogs/lookup-edit-dialog/lookup-edit-dialog.scss
copy to 
web-console/src/components/form-json-selector/form-json-selector.spec.tsx
index 7ee469b..ae7c3a9 100644
--- a/web-console/src/dialogs/lookup-edit-dialog/lookup-edit-dialog.scss
+++ b/web-console/src/components/form-json-selector/form-json-selector.spec.tsx
@@ -16,28 +16,21 @@
  * limitations under the License.
  */
 
-.lookup-edit-dialog {
-  &.bp3-dialog {
-top: 10vh;
+import { shallow } from 'enzyme';
+import React from 'react';
 
-width: 600px;
-  }
+import { FormJsonSelector } from './form-json-selector';
 
-  .auto-form {
-margin: 5px 20px 10px;
-  }
+describe('FormJsonSelector', () => {
+  it('matches snapshot form tab', () => {
+const formJsonSelector = shallow(<FormJsonSelector tab="form" onChange={() => {}} />);
 
-  .lookup-label {
-padding: 0 20px;
-margin-top: 5px;
-margin-bottom: 5px;
-  }
+expect(formJsonSelector).toMatchSnapshot();
+  });
 
-  .ace-solarized-dark {
-background-color: #232c35;
-  }
+  it('matches snapshot form json', () => {
+const formJsonSelector = shallow(<FormJsonSelector tab="json" onChange={() => {}} />);
 
-  .ace_gutter-layer {
-background-color: #27313c;
-  }
-}
+expect(formJsonSelector).toMatchSnapshot();
+  });
+});
diff --git a/web-console/src/dialogs/lookup-edit-dialog/lookup-edit-dialog.scss 
b/web-console/src/components/form-json-selector/form-json-selector.tsx
similarity index 54%
copy from web-console/src/dialogs/lookup-edit-dialog/lookup-edit-dialog.scss
copy to web-console/src/components/form-json-selector/form-json-selector.tsx
index 7ee469b..4999826 100644
--- a/web-console/src/dialogs/lookup-edit-dialog/lookup-edit-dialog.scss
+++ b/web-console/src/components/form-json-selector/form-json-selector.tsx
@@ -16,28 +16,25 @@
  * limitations under the License.
  */
 
-.lookup-edit-dialog {
-  &.bp3-dialog {
-top: 10v

[druid] branch 0.20.0 updated: Fix the task id creation in CompactionTask (#10445) (#10472)

2020-10-04 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/0.20.0 by this push:
 new 38f392d  Fix the task id creation in CompactionTask (#10445) (#10472)
38f392d is described below

commit 38f392d2de5937b8a54be5f9ff9faa85d03b981e
Author: Jonathan Wei 
AuthorDate: Sun Oct 4 13:57:02 2020 -0700

Fix the task id creation in CompactionTask (#10445) (#10472)

* Fix the task id creation in CompactionTask

* review comments

* Ignore test for range partitioning and segment lock

Co-authored-by: Abhishek Agarwal 
<1477457+abhishekagarwa...@users.noreply.github.com>
---
 .../druid/indexing/common/task/CompactionTask.java | 13 +++--
 .../parallel/ParallelIndexSupervisorTask.java  | 21 ---
 .../common/task/CompactionTaskParallelRunTest.java | 46 +++
 .../parallel/ParallelIndexSupervisorTaskTest.java  | 67 ++
 4 files changed, 126 insertions(+), 21 deletions(-)

diff --git 
a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/CompactionTask.java
 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/CompactionTask.java
index 62a9f26..ba2502f 100644
--- 
a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/CompactionTask.java
+++ 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/CompactionTask.java
@@ -32,6 +32,7 @@ import com.google.common.collect.Lists;
 import org.apache.curator.shaded.com.google.common.base.Verify;
 import org.apache.druid.client.coordinator.CoordinatorClient;
 import org.apache.druid.client.indexing.ClientCompactionTaskQuery;
+import org.apache.druid.data.input.InputSource;
 import org.apache.druid.data.input.impl.DimensionSchema;
 import org.apache.druid.data.input.impl.DimensionSchema.MultiValueHandling;
 import org.apache.druid.data.input.impl.DimensionsSpec;
@@ -361,10 +362,14 @@ public class CompactionTask extends AbstractBatchIndexTask
   // a new Appenderator on its own instead. As a result, they should 
use different sequence names to allocate
   // new segmentIds properly. See 
IndexerSQLMetadataStorageCoordinator.allocatePendingSegments() for details.
   // In this case, we use different fake IDs for each created index 
task.
-  final String subtaskId = tuningConfig == null || 
tuningConfig.getMaxNumConcurrentSubTasks() == 1
-   ? createIndexTaskSpecId(i)
-   : getId();
-  return newTask(subtaskId, ingestionSpecs.get(i));
+  ParallelIndexIngestionSpec ingestionSpec = ingestionSpecs.get(i);
+  InputSource inputSource = 
ingestionSpec.getIOConfig().getNonNullInputSource(
+  ingestionSpec.getDataSchema().getParser()
+  );
+  final String subtaskId = 
ParallelIndexSupervisorTask.isParallelMode(inputSource, tuningConfig)
+   ? getId()
+   : createIndexTaskSpecId(i);
+  return newTask(subtaskId, ingestionSpec);
 })
 .collect(Collectors.toList());
 
diff --git 
a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
index dd0e759..4a218a0 100644
--- 
a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
+++ 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
@@ -466,18 +466,25 @@ public class ParallelIndexSupervisorTask extends 
AbstractBatchIndexTask implemen
 registerResourceCloserOnAbnormalExit(currentSubTaskHolder);
   }
 
-  private boolean isParallelMode()
+  public static boolean isParallelMode(InputSource inputSource, @Nullable 
ParallelIndexTuningConfig tuningConfig)
   {
+if (null == tuningConfig) {
+  return false;
+}
+boolean useRangePartitions = useRangePartitions(tuningConfig);
 // Range partitioning is not implemented for runSequential() (but hash 
partitioning is)
-int minRequiredNumConcurrentSubTasks = useRangePartitions() ? 1 : 2;
+int minRequiredNumConcurrentSubTasks = useRangePartitions ? 1 : 2;
+return inputSource.isSplittable() && 
tuningConfig.getMaxNumConcurrentSubTasks() >= minRequiredNumConcurrentSubTasks;
+  }
 
-return baseInputSource.isSplittable()
-   && ingestionSchema.getTuningConfig().getMaxNumConcurrentSubTasks() 
>= minRequiredNumConcurrentSubTasks;
+  private static boolean useRangePartitions(ParallelIndexTuningConfig 
tuningConfig)
+  {
+return tuningConfig.getGivenOrDefaultPartitionsSpec() instanceof 
SingleDimensionParti

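The isParallelMode() refactor above turns the parallel-vs-sequential decision into a static method so CompactionTask can reuse it when picking subtask ids. A condensed, stand-alone Java sketch of that decision (the flattened booleans stand in for the InputSource and tuning-config objects; names are illustrative):

```java
public class ParallelModeSketch
{
  // Parallel mode needs a splittable input source and enough concurrent
  // subtasks. Range partitioning only needs 1 because it is not implemented
  // for the sequential path; hash partitioning needs at least 2.
  public static boolean isParallelMode(
      boolean inputSourceIsSplittable,
      Integer maxNumConcurrentSubTasks,  // null models a null tuning config
      boolean useRangePartitions
  )
  {
    if (maxNumConcurrentSubTasks == null) {
      return false;
    }
    int minRequiredNumConcurrentSubTasks = useRangePartitions ? 1 : 2;
    return inputSourceIsSplittable
           && maxNumConcurrentSubTasks >= minRequiredNumConcurrentSubTasks;
  }

  public static void main(String[] args)
  {
    System.out.println(isParallelMode(true, 1, false));   // false: hash needs >= 2
    System.out.println(isParallelMode(true, 1, true));    // true: range needs >= 1
    System.out.println(isParallelMode(false, 10, false)); // false: not splittable
  }
}
```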
[druid] branch 0.20.0 updated (239e9f0 -> e174586)

2020-10-04 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 239e9f0  fix array types from escaping into wider query engine 
(#10460) (#10474)
 add e174586  Web console: switch to switches instead of checkboxes 
(#10454) (#10470)

No new revisions were added by this update.

Summary of changes:
 .../__snapshots__/menu-checkbox.spec.tsx.snap  | 73 +++---
 .../components/menu-checkbox/menu-checkbox.scss| 26 
 .../menu-checkbox/menu-checkbox.spec.tsx   | 12 +++-
 .../src/components/menu-checkbox/menu-checkbox.tsx | 23 +--
 .../show-log/__snapshots__/show-log.spec.tsx.snap  |  2 +-
 web-console/src/components/show-log/show-log.scss  | 13 ++--
 web-console/src/components/show-log/show-log.tsx   |  6 +-
 .../table-column-selector.tsx  |  2 +-
 .../__snapshots__/warning-checklist.spec.tsx.snap  |  6 +-
 .../warning-checklist/warning-checklist.tsx|  8 +--
 .../src/views/load-data-view/load-data-view.tsx|  5 +-
 .../src/views/query-view/run-button/run-button.tsx | 34 +-
 12 files changed, 131 insertions(+), 79 deletions(-)
 delete mode 100644 web-console/src/components/menu-checkbox/menu-checkbox.scss





[druid] branch master updated: Update version to 0.21.0-SNAPSHOT (#10450)

2020-10-03 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 65c0d64  Update version to 0.21.0-SNAPSHOT (#10450)
65c0d64 is described below

commit 65c0d64676080ae618c76ecd08fb4dfb9decc679
Author: Jonathan Wei 
AuthorDate: Sat Oct 3 16:08:34 2020 -0700

Update version to 0.21.0-SNAPSHOT (#10450)

* [maven-release-plugin] prepare release druid-0.21.0

* [maven-release-plugin] prepare for next development iteration

* Update web-console versions
---
 benchmarks/pom.xml   |  2 +-
 cloud/aws-common/pom.xml |  2 +-
 cloud/gcp-common/pom.xml |  2 +-
 core/pom.xml |  6 ++
 distribution/pom.xml |  9 -
 extendedset/pom.xml  |  5 ++---
 extensions-contrib/aliyun-oss-extensions/pom.xml |  5 ++---
 extensions-contrib/ambari-metrics-emitter/pom.xml|  5 ++---
 extensions-contrib/cassandra-storage/pom.xml |  2 +-
 extensions-contrib/cloudfiles-extensions/pom.xml |  5 ++---
 extensions-contrib/distinctcount/pom.xml |  5 ++---
 extensions-contrib/dropwizard-emitter/pom.xml|  5 ++---
 extensions-contrib/gce-extensions/pom.xml|  5 ++---
 extensions-contrib/graphite-emitter/pom.xml  |  5 ++---
 extensions-contrib/influx-extensions/pom.xml |  5 ++---
 extensions-contrib/influxdb-emitter/pom.xml  |  6 ++
 extensions-contrib/kafka-emitter/pom.xml |  5 ++---
 extensions-contrib/materialized-view-maintenance/pom.xml |  6 ++
 extensions-contrib/materialized-view-selection/pom.xml   |  6 ++
 extensions-contrib/momentsketch/pom.xml  |  6 ++
 extensions-contrib/moving-average-query/pom.xml  |  5 ++---
 extensions-contrib/opentsdb-emitter/pom.xml  |  6 ++
 extensions-contrib/redis-cache/pom.xml   |  5 ++---
 extensions-contrib/sqlserver-metadata-storage/pom.xml|  2 +-
 extensions-contrib/statsd-emitter/pom.xml|  6 ++
 extensions-contrib/tdigestsketch/pom.xml |  6 ++
 extensions-contrib/thrift-extensions/pom.xml |  6 ++
 extensions-contrib/time-min-max/pom.xml  |  6 ++
 extensions-contrib/virtual-columns/pom.xml   |  2 +-
 extensions-core/avro-extensions/pom.xml  |  5 ++---
 extensions-core/azure-extensions/pom.xml |  5 ++---
 extensions-core/datasketches/pom.xml |  5 ++---
 extensions-core/druid-basic-security/pom.xml |  6 ++
 extensions-core/druid-bloom-filter/pom.xml   |  5 ++---
 extensions-core/druid-kerberos/pom.xml   |  5 ++---
 extensions-core/druid-pac4j/pom.xml  |  2 +-
 extensions-core/druid-ranger-security/pom.xml|  6 ++
 extensions-core/ec2-extensions/pom.xml   |  5 ++---
 extensions-core/google-extensions/pom.xml|  2 +-
 extensions-core/hdfs-storage/pom.xml |  2 +-
 extensions-core/histogram/pom.xml|  2 +-
 extensions-core/kafka-extraction-namespace/pom.xml   |  5 ++---
 extensions-core/kafka-indexing-service/pom.xml   |  2 +-
 extensions-core/kinesis-indexing-service/pom.xml |  5 ++---
 extensions-core/lookups-cached-global/pom.xml|  5 ++---
 extensions-core/lookups-cached-single/pom.xml|  5 ++---
 extensions-core/mysql-metadata-storage/pom.xml   |  2 +-
 extensions-core/orc-extensions/pom.xml   |  6 ++
 extensions-core/parquet-extensions/pom.xml   |  6 ++
 extensions-core/postgresql-metadata-storage/pom.xml  |  2 +-
 extensions-core/protobuf-extensions/pom.xml  |  6 ++
 extensions-core/s3-extensions/pom.xml|  5 ++---
 extensions-core/simple-client-sslcontext/pom.xml |  6 ++
 extensions-core/stats/pom.xml|  2 +-
 hll/pom.xml  |  2 +-
 indexing-hadoop/pom.xml  |  2 +-
 indexing-service/pom.xml |  2 +-
 integration-tests/pom.xml|  6 +++---
 pom.xml  | 13 ++---
 processing/pom.xml   |  2 +-
 server/pom.xml   |  2 +-
 services/pom.xml |  4 ++--
 sql/pom.xml  |  5 ++---
 web

[druid] branch master updated: fix array types from escaping into wider query engine (#10460)

2020-10-03 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 9ec5c08  fix array types from escaping into wider query engine (#10460)
9ec5c08 is described below

commit 9ec5c08e2a3c1210fefc78e26fbafe75702c7c2f
Author: Clint Wylie 
AuthorDate: Sat Oct 3 15:30:34 2020 -0700

fix array types from escaping into wider query engine (#10460)

* fix array types from escaping into wider query engine

* oops

* adjust

* fix lgtm
---
 .../apache/druid/math/expr/BinaryOperatorExpr.java |   8 +-
 .../org/apache/druid/math/expr/ConstantExpr.java   |  11 ++
 .../main/java/org/apache/druid/math/expr/Expr.java |   6 +
 .../java/org/apache/druid/math/expr/ExprType.java  | 104 --
 .../apache/druid/math/expr/ExprTypeConversion.java | 159 +
 .../java/org/apache/druid/math/expr/Function.java  |  33 +++--
 .../org/apache/druid/math/expr/OutputTypeTest.java | 142 ++
 .../druid/segment/virtual/ExpressionPlanner.java   |  12 +-
 .../segment/virtual/ExpressionVirtualColumn.java   |  10 +-
 .../druid/sql/calcite/expression/Expressions.java  |  17 ---
 .../builtin/ReductionOperatorConversionHelper.java |   3 +-
 .../apache/druid/sql/calcite/planner/Calcites.java |   9 +-
 .../apache/druid/sql/calcite/rel/QueryMaker.java   |  16 ++-
 .../apache/druid/sql/calcite/CalciteQueryTest.java |  30 +++-
 14 files changed, 354 insertions(+), 206 deletions(-)

diff --git 
a/core/src/main/java/org/apache/druid/math/expr/BinaryOperatorExpr.java 
b/core/src/main/java/org/apache/druid/math/expr/BinaryOperatorExpr.java
index 128a780..20ecc5d 100644
--- a/core/src/main/java/org/apache/druid/math/expr/BinaryOperatorExpr.java
+++ b/core/src/main/java/org/apache/druid/math/expr/BinaryOperatorExpr.java
@@ -83,7 +83,13 @@ abstract class BinaryOpExprBase implements Expr
   @Override
   public ExprType getOutputType(InputBindingTypes inputTypes)
   {
-return ExprType.operatorAutoTypeConversion(left.getOutputType(inputTypes), 
right.getOutputType(inputTypes));
+if (left.isNullLiteral()) {
+  return right.getOutputType(inputTypes);
+}
+if (right.isNullLiteral()) {
+  return left.getOutputType(inputTypes);
+}
+return ExprTypeConversion.operator(left.getOutputType(inputTypes), 
right.getOutputType(inputTypes));
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/druid/math/expr/ConstantExpr.java 
b/core/src/main/java/org/apache/druid/math/expr/ConstantExpr.java
index 279600d..57ae900 100644
--- a/core/src/main/java/org/apache/druid/math/expr/ConstantExpr.java
+++ b/core/src/main/java/org/apache/druid/math/expr/ConstantExpr.java
@@ -99,6 +99,11 @@ abstract class NullNumericConstantExpr extends ConstantExpr
   }
 
 
+  @Override
+  public boolean isNullLiteral()
+  {
+return true;
+  }
 }
 
 class LongExpr extends ConstantExpr
@@ -429,6 +434,12 @@ class StringExpr extends ConstantExpr
   }
 
   @Override
+  public boolean isNullLiteral()
+  {
+return value == null;
+  }
+
+  @Override
   public String toString()
   {
 return value;
diff --git a/core/src/main/java/org/apache/druid/math/expr/Expr.java 
b/core/src/main/java/org/apache/druid/math/expr/Expr.java
index be0a32e..ff646fe 100644
--- a/core/src/main/java/org/apache/druid/math/expr/Expr.java
+++ b/core/src/main/java/org/apache/druid/math/expr/Expr.java
@@ -53,6 +53,12 @@ public interface Expr
 return false;
   }
 
+  default boolean isNullLiteral()
+  {
+// Overridden by things that are null literals.
+return false;
+  }
+
   /**
* Returns the value of expr if expr is a literal, or throws an exception 
otherwise.
*
diff --git a/core/src/main/java/org/apache/druid/math/expr/ExprType.java 
b/core/src/main/java/org/apache/druid/math/expr/ExprType.java
index e11b8ace..ebdf64a 100644
--- a/core/src/main/java/org/apache/druid/math/expr/ExprType.java
+++ b/core/src/main/java/org/apache/druid/math/expr/ExprType.java
@@ -19,7 +19,6 @@
 
 package org.apache.druid.math.expr;
 
-import org.apache.druid.java.util.common.IAE;
 import org.apache.druid.java.util.common.ISE;
 import org.apache.druid.segment.column.ValueType;
 
@@ -169,107 +168,4 @@ public enum ExprType
 return elementType;
   }
 
-  /**
-   * Given 2 'input' types, choose the most appropriate combined type, if 
possible
-   *
-   * arrays must be the same type
-   * if both types are {@link #STRING}, the output type will be preserved as 
string
-   * if both types are {@link #LONG}, the output type will be preserved as long
-   *
-   */
-  @Nullable
-  public static ExprType operatorAutoTypeConversion(@Nullable ExprType type, 
@Nullable ExprType other)
-  {
-if (type == null || other == null) {
-  // cannot auto conversion unknown types
-  return null;
-}
-// arrays cannot be auto converted

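The isNullLiteral() overrides above let binary operators skip type conversion when one side is a null literal: the other operand's type simply wins, instead of a null forcing a lossy conversion. A self-contained Java sketch of that output-type rule (the enum and the fallback widening are simplified stand-ins for Druid's ExprType and ExprTypeConversion, not the real classes):

```java
public class NullLiteralTypeSketch
{
  public enum ExprType { LONG, DOUBLE, STRING }

  // Mirrors the new getOutputType(): if either operand is a null literal,
  // the other operand's type is the output type; otherwise fall back to a
  // (simplified) operator type-conversion rule.
  public static ExprType operatorOutputType(
      ExprType left, boolean leftIsNullLiteral,
      ExprType right, boolean rightIsNullLiteral
  )
  {
    if (leftIsNullLiteral) {
      return right;
    }
    if (rightIsNullLiteral) {
      return left;
    }
    // simplified stand-in for ExprTypeConversion.operator():
    // strings stay strings, two longs stay long, anything else widens to double
    if (left == ExprType.STRING || right == ExprType.STRING) {
      return ExprType.STRING;
    }
    return (left == ExprType.LONG && right == ExprType.LONG)
           ? ExprType.LONG
           : ExprType.DOUBLE;
  }

  public static void main(String[] args)
  {
    // null literal on the right: left operand's type wins
    System.out.println(operatorOutputType(ExprType.LONG, false, ExprType.STRING, true));
  }
}
```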
[druid] branch 0.20.0 updated: vectorize constant expressions with optimized selectors (#10440) (#10457)

2020-09-30 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/0.20.0 by this push:
 new 51a4b1c  vectorize constant expressions with optimized selectors 
(#10440) (#10457)
51a4b1c is described below

commit 51a4b1cde69a8eb6fa523aba7c7e82042ba89254
Author: Clint Wylie 
AuthorDate: Wed Sep 30 16:58:01 2020 -0700

vectorize constant expressions with optimized selectors (#10440) (#10457)
---
 .../segment/vector/ConstantVectorSelectors.java| 172 +
 .../druid/segment/virtual/ExpressionPlan.java  |   5 +
 .../segment/virtual/ExpressionVectorSelectors.java |  34 
 .../segment/virtual/ExpressionVirtualColumn.java   |  24 ++-
 .../virtual/ExpressionVectorSelectorsTest.java |  97 +++-
 .../calcite/SqlVectorizedExpressionSanityTest.java |   1 +
 6 files changed, 297 insertions(+), 36 deletions(-)

diff --git 
a/processing/src/main/java/org/apache/druid/segment/vector/ConstantVectorSelectors.java
 
b/processing/src/main/java/org/apache/druid/segment/vector/ConstantVectorSelectors.java
new file mode 100644
index 000..c1e3c3b
--- /dev/null
+++ 
b/processing/src/main/java/org/apache/druid/segment/vector/ConstantVectorSelectors.java
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
[druid] branch master updated: vectorize constant expressions with optimized selectors (#10440)

2020-09-29 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 753bce3  vectorize constant expressions with optimized selectors (#10440)
753bce3 is described below

commit 753bce324bdf8c7c5b2b602f89c720749bfa6e22
Author: Clint Wylie 
AuthorDate: Tue Sep 29 13:19:06 2020 -0700

vectorize constant expressions with optimized selectors (#10440)
---
 .../segment/vector/ConstantVectorSelectors.java| 172 +
 .../druid/segment/virtual/ExpressionPlan.java  |   5 +
 .../segment/virtual/ExpressionVectorSelectors.java |  34 
 .../segment/virtual/ExpressionVirtualColumn.java   |  24 ++-
 .../virtual/ExpressionVectorSelectorsTest.java |  97 +++-
 .../calcite/SqlVectorizedExpressionSanityTest.java |   1 +
 6 files changed, 297 insertions(+), 36 deletions(-)

diff --git a/processing/src/main/java/org/apache/druid/segment/vector/ConstantVectorSelectors.java b/processing/src/main/java/org/apache/druid/segment/vector/ConstantVectorSelectors.java
new file mode 100644
index 000..c1e3c3b
--- /dev/null
+++ b/processing/src/main/java/org/apache/druid/segment/vector/ConstantVectorSelectors.java
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment.vector;
+
+import org.apache.druid.segment.IdLookup;
+
+import javax.annotation.Nullable;
+import java.util.Arrays;
+
+public class ConstantVectorSelectors
+{
+  public static VectorValueSelector vectorValueSelector(VectorSizeInspector inspector, @Nullable Number constant)
+  {
+if (constant == null) {
+  return NilVectorSelector.create(inspector);
+}
+final long[] longVector = new long[inspector.getMaxVectorSize()];
+final float[] floatVector = new float[inspector.getMaxVectorSize()];
+final double[] doubleVector = new double[inspector.getMaxVectorSize()];
+Arrays.fill(longVector, constant.longValue());
+Arrays.fill(floatVector, constant.floatValue());
+Arrays.fill(doubleVector, constant.doubleValue());
+return new VectorValueSelector()
+{
+  @Override
+  public long[] getLongVector()
+  {
+return longVector;
+  }
+
+  @Override
+  public float[] getFloatVector()
+  {
+return floatVector;
+  }
+
+  @Override
+  public double[] getDoubleVector()
+  {
+return doubleVector;
+  }
+
+  @Nullable
+  @Override
+  public boolean[] getNullVector()
+  {
+return null;
+  }
+
+  @Override
+  public int getMaxVectorSize()
+  {
+return inspector.getMaxVectorSize();
+  }
+
+  @Override
+  public int getCurrentVectorSize()
+  {
+return inspector.getCurrentVectorSize();
+  }
+};
+  }
+
+  public static VectorObjectSelector vectorObjectSelector(
+  VectorSizeInspector inspector,
+  @Nullable Object object
+  )
+  {
+if (object == null) {
+  return NilVectorSelector.create(inspector);
+}
+
+final Object[] objects = new Object[inspector.getMaxVectorSize()];
+Arrays.fill(objects, object);
+
+return new VectorObjectSelector()
+{
+  @Override
+  public Object[] getObjectVector()
+  {
+return objects;
+  }
+
+  @Override
+  public int getMaxVectorSize()
+  {
+return inspector.getMaxVectorSize();
+  }
+
+  @Override
+  public int getCurrentVectorSize()
+  {
+return inspector.getCurrentVectorSize();
+  }
+};
+  }
+
+  public static SingleValueDimensionVectorSelector singleValueDimensionVectorSelector(
+  VectorSizeInspector inspector,
+  @Nullable String value
+  )
+  {
+if (value == null) {
+  return NilVectorSelector.create(inspector);
+}
+
+final int[] row = new int[inspector.getMaxVectorSize()];
+return new SingleValueDimensionVectorSelector()
+{
+  @Override
+  public int[] getRowVector()
+  {
+return row;
+  }
+
+  @Override
+  public int getValueCardinality()
+ 
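The core idea of the patch is visible in the diff above: for a constant column, each typed array is filled exactly once when the selector is built, and every subsequent call hands back the same array, so reading the "column" costs nothing per batch. The sketch below illustrates that pattern outside Druid; `ValueSelector` and `constant` are simplified stand-ins, not Druid's actual `VectorValueSelector` API.

```java
import java.util.Arrays;

public class ConstantVectorSketch
{
  // Simplified stand-in for Druid's VectorValueSelector interface.
  interface ValueSelector
  {
    long[] getLongVector();
    double[] getDoubleVector();
  }

  // Fill each typed array once at construction; every getter call afterwards
  // returns the same pre-filled instance, so there is no per-batch work.
  static ValueSelector constant(int maxVectorSize, Number value)
  {
    final long[] longs = new long[maxVectorSize];
    final double[] doubles = new double[maxVectorSize];
    Arrays.fill(longs, value.longValue());
    Arrays.fill(doubles, value.doubleValue());
    return new ValueSelector()
    {
      @Override
      public long[] getLongVector()
      {
        return longs;
      }

      @Override
      public double[] getDoubleVector()
      {
        return doubles;
      }
    };
  }

  public static void main(String[] args)
  {
    ValueSelector selector = constant(4, 2.5);
    System.out.println(Arrays.toString(selector.getLongVector()));
    // The identical array instance is returned on every call.
    System.out.println(selector.getLongVector() == selector.getLongVector());
  }
}
```

The trade-off is memory proportional to the maximum vector size per typed representation, in exchange for zero work on the hot read path.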

[druid] branch 0.20.0 created (now 8168e14)

2020-09-29 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git.


  at 8168e14  Adding task slot count metrics to Druid Overlord (#10379)

No new revisions were added by this update.


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated: Adding task slot count metrics to Druid Overlord (#10379)

2020-09-29 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 8168e14  Adding task slot count metrics to Druid Overlord (#10379)
8168e14 is described below

commit 8168e14e9224c9459efda07b038269815975cf50
Author: Mainak Ghosh 
AuthorDate: Mon Sep 28 23:50:38 2020 -0700

Adding task slot count metrics to Druid Overlord (#10379)

* Adding more worker metrics to Druid Overlord

* Changing the nomenclature from worker to peon as that represents the metrics that we want to monitor better

* Few more instance of worker usage replaced with peon

* Modifying the peon idle count logic to only use eligible workers available capacity

* Changing the naming to task slot count instead of peon

* Adding some unit test coverage for the new test runner apis

* Addressing Review Comments

* Modifying the TaskSlotCountStatsProvider apis so that overlords which are not leader do not emit these metrics

* Fixing the spelling issue in the docs

* Setting the annotation Nullable on the TaskSlotCountStatsProvider methods
---
 docs/operations/metrics.md |  5 ++
 .../main/resources/defaultMetricDimensions.json|  6 ++
 .../druid/indexing/overlord/ForkingTaskRunner.java | 33 +
 .../apache/druid/indexing/overlord/PortFinder.java |  5 ++
 .../druid/indexing/overlord/RemoteTaskRunner.java  | 80 +---
 .../overlord/SingleTaskBackgroundRunner.java   | 30 
 .../apache/druid/indexing/overlord/TaskMaster.java | 64 +++-
 .../apache/druid/indexing/overlord/TaskRunner.java | 13 
 .../indexing/overlord/ThreadingTaskRunner.java | 32 
 .../overlord/hrtr/HttpRemoteTaskRunner.java| 61 +++
 .../indexing/common/task/IngestionTestBase.java| 30 
 .../indexing/overlord/RemoteTaskRunnerTest.java| 25 ++-
 .../druid/indexing/overlord/TestTaskRunner.java| 30 
 .../overlord/hrtr/HttpRemoteTaskRunnerTest.java| 30 
 .../druid/indexing/overlord/http/OverlordTest.java | 30 
 .../server/metrics/TaskSlotCountStatsMonitor.java  | 57 ++
 .../server/metrics/TaskSlotCountStatsProvider.java | 55 ++
 .../metrics/TaskSlotCountStatsMonitorTest.java | 86 ++
 .../java/org/apache/druid/cli/CliOverlord.java |  2 +
 website/.spelling  |  1 +
 20 files changed, 662 insertions(+), 13 deletions(-)

diff --git a/docs/operations/metrics.md b/docs/operations/metrics.md
index 62b6f57..1b4ed7f 100644
--- a/docs/operations/metrics.md
+++ b/docs/operations/metrics.md
@@ -196,6 +196,11 @@ Note: If the JVM does not support CPU time measurement for the current thread, i
 |`task/running/count`|Number of current running tasks. This metric is only available if the TaskCountStatsMonitor module is included.|dataSource.|Varies.|
 |`task/pending/count`|Number of current pending tasks. This metric is only available if the TaskCountStatsMonitor module is included.|dataSource.|Varies.|
 |`task/waiting/count`|Number of current waiting tasks. This metric is only available if the TaskCountStatsMonitor module is included.|dataSource.|Varies.|
+|`taskSlot/total/count`|Number of total task slots per emission period. This metric is only available if the TaskSlotCountStatsMonitor module is included.| |Varies.|
+|`taskSlot/idle/count`|Number of idle task slots per emission period. This metric is only available if the TaskSlotCountStatsMonitor module is included.| |Varies.|
+|`taskSlot/used/count`|Number of busy task slots per emission period. This metric is only available if the TaskSlotCountStatsMonitor module is included.| |Varies.|
+|`taskSlot/lazy/count`|Number of total task slots in lazy marked MiddleManagers and Indexers per emission period. This metric is only available if the TaskSlotCountStatsMonitor module is included.| |Varies.|
+|`taskSlot/blacklisted/count`|Number of total task slots in blacklisted MiddleManagers and Indexers per emission period. This metric is only available if the TaskSlotCountStatsMonitor module is included.| |Varies.|
 
 ## Coordination
 
diff --git a/extensions-contrib/statsd-emitter/src/main/resources/defaultMetricDimensions.json b/extensions-contrib/statsd-emitter/src/main/resources/defaultMetricDimensions.json
index 859a9c6..1a62d70 100644
--- a/extensions-contrib/statsd-emitter/src/main/resources/defaultMetricDimensions.json
+++ b/extensions-contrib/statsd-emitter/src/main/resources/defaultMetricDimensions.json
@@ -62,6 +62,12 @@
   "task/pending/count" : { "dimensions" : ["dataSource"], "type" : "count" },
   "task/waiting/count" : { "dimensions" : ["da

[druid] branch master updated: add vectorizeVirtualColumns query context parameter (#10432)

2020-09-28 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 1d6cb62  add vectorizeVirtualColumns query context parameter (#10432)
1d6cb62 is described below

commit 1d6cb624f4a455f45f41ef4b773cf21859a09ef4
Author: Clint Wylie 
AuthorDate: Mon Sep 28 18:48:34 2020 -0700

add vectorizeVirtualColumns query context parameter (#10432)

* add vectorizeVirtualColumns query context parameter

* oops

* spelling

* default to false, more docs

* fix test

* fix spelling
---
 .../benchmark/FilteredAggregatorBenchmark.java |   8 +-
 .../apache/druid/benchmark/query/SqlBenchmark.java |  11 +-
 .../benchmark/query/SqlExpressionBenchmark.java|   6 +-
 docs/misc/math-expr.md |  15 +-
 docs/querying/query-context.md |   3 +-
 .../java/org/apache/druid/query/QueryContexts.java |  12 +
 .../druid/query/groupby/GroupByQueryConfig.java|   4 +-
 .../epinephelinae/vector/VectorGroupByEngine.java  |   2 +
 .../query/timeseries/TimeseriesQueryEngine.java|   6 +-
 .../segment/QueryableIndexStorageAdapter.java  |   2 +-
 .../org/apache/druid/segment/VirtualColumns.java   |  12 +-
 .../mean/DoubleMeanAggregationTest.java|   2 +-
 .../query/groupby/GroupByQueryRunnerTest.java  |   3 +-
 .../timeseries/TimeseriesQueryRunnerTest.java  |   6 +-
 .../virtual/AlwaysTwoCounterAggregatorFactory.java |   4 +-
 .../virtual/AlwaysTwoVectorizedVirtualColumn.java  |  18 +-
 .../virtual/VectorizedVirtualColumnTest.java   | 302 -
 .../druid/sql/calcite/BaseCalciteQueryTest.java|   5 +-
 .../apache/druid/sql/calcite/CalciteQueryTest.java |  12 +-
 .../calcite/SqlVectorizedExpressionSanityTest.java |  11 +-
 website/.spelling  |   1 +
 21 files changed, 410 insertions(+), 35 deletions(-)

diff --git a/benchmarks/src/test/java/org/apache/druid/benchmark/FilteredAggregatorBenchmark.java b/benchmarks/src/test/java/org/apache/druid/benchmark/FilteredAggregatorBenchmark.java
index 47b5317..560148b 100644
--- a/benchmarks/src/test/java/org/apache/druid/benchmark/FilteredAggregatorBenchmark.java
+++ b/benchmarks/src/test/java/org/apache/druid/benchmark/FilteredAggregatorBenchmark.java
@@ -32,6 +32,7 @@ import org.apache.druid.java.util.common.logger.Logger;
 import org.apache.druid.query.Druids;
 import org.apache.druid.query.FinalizeResultsQueryRunner;
 import org.apache.druid.query.Query;
+import org.apache.druid.query.QueryContexts;
 import org.apache.druid.query.QueryPlus;
 import org.apache.druid.query.QueryRunner;
 import org.apache.druid.query.QueryRunnerFactory;
@@ -239,7 +240,12 @@ public class FilteredAggregatorBenchmark
 );
 
 final QueryPlus queryToRun = QueryPlus.wrap(
-query.withOverriddenContext(ImmutableMap.of("vectorize", vectorize))
+query.withOverriddenContext(
+ImmutableMap.of(
+QueryContexts.VECTORIZE_KEY, vectorize,
+QueryContexts.VECTORIZE_VIRTUAL_COLUMNS_KEY, vectorize
+)
+)
 );
 Sequence queryResult = theRunner.run(queryToRun, ResponseContext.createEmpty());
 return queryResult.toList();
diff --git a/benchmarks/src/test/java/org/apache/druid/benchmark/query/SqlBenchmark.java b/benchmarks/src/test/java/org/apache/druid/benchmark/query/SqlBenchmark.java
index 55dc74c..38b5c3a 100644
--- a/benchmarks/src/test/java/org/apache/druid/benchmark/query/SqlBenchmark.java
+++ b/benchmarks/src/test/java/org/apache/druid/benchmark/query/SqlBenchmark.java
@@ -27,6 +27,7 @@ import org.apache.druid.java.util.common.granularity.Granularities;
 import org.apache.druid.java.util.common.guava.Sequence;
 import org.apache.druid.java.util.common.io.Closer;
 import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.query.QueryContexts;
 import org.apache.druid.query.QueryRunnerFactoryConglomerate;
 import org.apache.druid.segment.QueryableIndex;
 import org.apache.druid.segment.generator.GeneratorBasicSchemas;
@@ -434,7 +435,10 @@ public class SqlBenchmark
   @OutputTimeUnit(TimeUnit.MILLISECONDS)
   public void querySql(Blackhole blackhole) throws Exception
   {
-final Map context = ImmutableMap.of("vectorize", vectorize);
+final Map context = ImmutableMap.of(
+QueryContexts.VECTORIZE_KEY, vectorize,
+QueryContexts.VECTORIZE_VIRTUAL_COLUMNS_KEY, vectorize
+);
 final AuthenticationResult authenticationResult = NoopEscalator.getInstance()
 .createEscalatedAuthenticationResult();
 try (final DruidPlanner planner = plannerFactory.createPlanner(context, ImmutableList.of(), authenticationResult)) {
@@ -450,7 +45
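The benchmark changes above set the new flag alongside `vectorize` in the query context map. In a native JSON query the same pair of context parameters would look like the following; the datasource, interval, and aggregator here are illustrative, and per the commit message `vectorizeVirtualColumns` defaults to false:

```json
{
  "queryType": "timeseries",
  "dataSource": "example_datasource",
  "intervals": ["2020-01-01/2020-02-01"],
  "granularity": "all",
  "aggregations": [{"type": "count", "name": "rows"}],
  "context": {
    "vectorize": "force",
    "vectorizeVirtualColumns": "force"
  }
}
```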

[druid] branch master updated (cb30b1f -> 0cc9eb4)

2020-09-24 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


 from cb30b1f  Automatically determine numShards for parallel ingestion hash partitioning (#10419)
  add 0cc9eb4  Store hash partition function in dataSegment and allow segment pruning only when hash partition function is provided (#10288)

No new revisions were added by this update.

Summary of changes:
 .../indexer/partitions/HashedPartitionsSpec.java   |  31 ++-
 .../BuildingHashBasedNumberedShardSpec.java|  17 +-
 .../timeline/partition/BuildingShardSpec.java  |   7 -
 .../HashBasedNumberedPartialShardSpec.java |  19 +-
 .../partition/HashBasedNumberedShardSpec.java  | 245 ++---
 .../timeline/partition/HashBucketShardSpec.java|  40 +++-
 .../timeline/partition/HashPartitionFunction.java  |  62 ++
 .../druid/timeline/partition/HashPartitioner.java  | 101 +
 .../druid/timeline/partition/LinearShardSpec.java  |   6 -
 .../druid/timeline/partition/NoneShardSpec.java|   6 -
 .../partition/NumberedOverwriteShardSpec.java  |   6 -
 .../timeline/partition/NumberedShardSpec.java  |   6 -
 .../timeline/partition/RangeBucketShardSpec.java   |  13 +-
 .../apache/druid/timeline/partition/ShardSpec.java |   4 -
 .../partition/SingleDimensionShardSpec.java|   7 +-
 .../org/apache/druid/timeline/DataSegmentTest.java |   7 -
 .../BuildingHashBasedNumberedShardSpecTest.java|  22 +-
 .../HashBasedNumberedPartialShardSpecTest.java |  14 +-
 .../partition/HashBasedNumberedShardSpecTest.java  | 245 ++---
 .../partition/HashBucketShardSpecTest.java |  35 ++-
 .../partition/NumberedOverwriteShardSpecTest.java  |   2 +-
 .../timeline/partition/NumberedShardSpecTest.java  |   2 +-
 .../partition/PartitionHolderCompletenessTest.java |   6 +-
 .../partition/SingleDimensionShardSpecTest.java|   4 +-
 docs/ingestion/hadoop.md   |  11 +
 docs/ingestion/index.md|   2 +-
 docs/ingestion/native-batch.md |  23 +-
 docs/querying/query-context.md |   1 +
 .../MaterializedViewSupervisorTest.java|  16 +-
 indexing-hadoop/pom.xml|   5 +
 .../indexer/DetermineHashedPartitionsJob.java  |  13 ++
 .../HadoopDruidDetermineConfigurationJob.java  |   5 +
 .../druid/indexer/BatchDeltaIngestionTest.java |  11 +-
 .../indexer/DetermineHashedPartitionsJobTest.java  |  39 +++-
 .../HadoopDruidDetermineConfigurationJobTest.java  | 127 +++
 .../indexer/HadoopDruidIndexerConfigTest.java  |  19 +-
 .../druid/indexer/IndexGeneratorJobTest.java   |  20 +-
 .../partitions/HashedPartitionsSpecTest.java   |  11 +
 .../parallel/PartialDimensionCardinalityTask.java  |   9 +-
 .../batch/partition/HashPartitionAnalysis.java |   1 +
 .../common/actions/SegmentAllocateActionTest.java  |  10 +-
 .../druid/indexing/common/task/IndexTaskTest.java  |  78 ++-
 .../druid/indexing/common/task/ShardSpecsTest.java |   5 +-
 .../batch/parallel/GenericPartitionStatTest.java   |   2 +
 ...ashPartitionMultiPhaseParallelIndexingTest.java |  31 ++-
 .../parallel/ParallelIndexSupervisorTaskTest.java  |  10 +-
 .../parallel/ParallelIndexTestingFactory.java  |   2 +
 .../parallel/PerfectRollupWorkerTaskTest.java  |   1 +
 .../druid/indexing/overlord/TaskLockboxTest.java   |   4 +-
 .../druid/tests/hadoop/ITHadoopIndexTest.java  |   2 +
 .../indexer/ITPerfectRollupParallelIndexTest.java  |   4 +-
 .../java/org/apache/druid/query/QueryContexts.java |   6 +
 .../org/apache/druid/query/QueryContextsTest.java  |  24 ++
 .../druid/client/CachingClusteredClient.java   |  18 +-
 .../druid/client/CachingClusteredClientTest.java   | 238 ++--
 .../IndexerSQLMetadataStorageCoordinatorTest.java  |   4 +-
 .../appenderator/SegmentPublisherHelperTest.java   |  40 +++-
 .../coordinator/duty/CompactSegmentsTest.java  |   1 +
 website/.spelling  |   2 +
 59 files changed, 1311 insertions(+), 391 deletions(-)
 create mode 100644 core/src/main/java/org/apache/druid/timeline/partition/HashPartitionFunction.java
 create mode 100644 core/src/main/java/org/apache/druid/timeline/partition/HashPartitioner.java
 create mode 100644 indexing-hadoop/src/test/java/org/apache/druid/indexer/HadoopDruidDetermineConfigurationJobTest.java
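The reason commit 0cc9eb4 stores the hash partition function with the segment is that hash-based pruning is only safe when reader and writer agree on the function: the broker hashes a query's filter values on the partition dimensions and skips every segment whose bucket does not match. A deliberately simplified sketch of that idea (not Druid's actual `HashPartitionFunction` API; `bucket` is a hypothetical stand-in):

```java
import java.util.Arrays;
import java.util.List;

public class HashPruningSketch
{
  // Hash the row's partition-dimension values into one of numBuckets buckets.
  // In Druid the concrete function must be recorded in the segment metadata,
  // because pruning with a different function would silently miss rows.
  static int bucket(List<String> partitionDimensionValues, int numBuckets)
  {
    int hash = Arrays.hashCode(partitionDimensionValues.toArray());
    return Math.floorMod(hash, numBuckets);
  }

  public static void main(String[] args)
  {
    int numBuckets = 4;
    // A query filtering category = 'shoes' AND country = 'US' only needs the
    // segment whose bucket matches; all other segments can be pruned.
    int target = bucket(List.of("shoes", "US"), numBuckets);
    for (int segmentBucket = 0; segmentBucket < numBuckets; segmentBucket++) {
      System.out.println(segmentBucket + (segmentBucket == target ? " -> scan" : " -> prune"));
    }
  }
}
```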


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated (89160c2 -> cb30b1f)

2020-09-24 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 89160c2  better query view initial state (#10431)
  add cb30b1f  Automatically determine numShards for parallel ingestion hash partitioning (#10419)

No new revisions were added by this update.

Summary of changes:
 .../indexer/partitions/HashedPartitionsSpec.java   |   4 +-
 .../partition/HashBasedNumberedShardSpec.java  |   5 +-
 docs/ingestion/native-batch.md |  11 +-
 .../druid/indexing/common/task/IndexTask.java  |   6 +-
 .../apache/druid/indexing/common/task/Task.java|   2 +
 .../batch/parallel/DimensionCardinalityReport.java | 109 +
 .../parallel/ParallelIndexSupervisorTask.java  | 126 ++-
 ...mensionCardinalityParallelIndexTaskRunner.java} |  21 +-
 .../parallel/PartialDimensionCardinalityTask.java  | 245 
 ...HashSegmentGenerateParallelIndexTaskRunner.java |   9 +-
 .../parallel/PartialHashSegmentGenerateTask.java   |  23 +-
 .../common/task/batch/parallel/SubTaskReport.java  |   1 +
 .../AbstractParallelIndexSupervisorTaskTest.java   |   3 +-
 .../parallel/DimensionCardinalityReportTest.java   | 142 
 ...ashPartitionMultiPhaseParallelIndexingTest.java |  36 ++-
 .../ParallelIndexSupervisorTaskSerdeTest.java  |  15 +-
 ...va => PartialDimensionCardinalityTaskTest.java} | 246 +++--
 .../PartialHashSegmentGenerateTaskTest.java|   6 +-
 .../parallel/PerfectRollupWorkerTaskTest.java  |   5 +-
 .../tests/indexer/AbstractITBatchIndexTest.java|   2 +
 20 files changed, 791 insertions(+), 226 deletions(-)
 create mode 100644 indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/DimensionCardinalityReport.java
 copy indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/{PartialDimensionDistributionParallelIndexTaskRunner.java => PartialDimensionCardinalityParallelIndexTaskRunner.java} (75%)
 create mode 100644 indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/PartialDimensionCardinalityTask.java
 create mode 100644 indexing-service/src/test/java/org/apache/druid/indexing/common/task/batch/parallel/DimensionCardinalityReportTest.java
 copy indexing-service/src/test/java/org/apache/druid/indexing/common/task/batch/parallel/{PartialDimensionDistributionTaskTest.java => PartialDimensionCardinalityTaskTest.java} (53%)


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated (e5f0da3 -> a5c46dc)

2020-09-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from e5f0da3  Fix stringFirst/stringLast rollup during ingestion (#10332)
 add a5c46dc  Add vectorization for druid-histogram extension (#10304)

No new revisions were added by this update.

Summary of changes:
 docs/querying/query-context.md |   3 +-
 .../ApproximateHistogramAggregatorFactory.java |  22 +++
 .../ApproximateHistogramBufferAggregator.java  |  34 +---
 ...ApproximateHistogramBufferAggregatorHelper.java |  70 +++
 ...proximateHistogramFoldingAggregatorFactory.java |  31 ++-
 ...pproximateHistogramFoldingBufferAggregator.java |  40 +---
 ...ateHistogramFoldingBufferAggregatorHelper.java} |  71 +++
 ...pproximateHistogramFoldingVectorAggregator.java |  90 +
 .../ApproximateHistogramVectorAggregator.java  |  51 ++---
 .../histogram/FixedBucketsHistogram.java   |  30 +++
 .../histogram/FixedBucketsHistogramAggregator.java |  19 +-
 .../FixedBucketsHistogramAggregatorFactory.java|  33 
 .../FixedBucketsHistogramBufferAggregator.java |  37 +---
 ...ixedBucketsHistogramBufferAggregatorHelper.java |  88 +
 .../FixedBucketsHistogramVectorAggregator.java |  99 ++
 ...ximateHistogramFoldingVectorAggregatorTest.java | 143 ++
 .../ApproximateHistogramVectorAggregatorTest.java  | 152 +++
 .../histogram/FixedBucketsHistogramTest.java   |  88 +
 .../FixedBucketsHistogramVectorAggregatorTest.java | 209 +
 .../histogram/sql/QuantileSqlAggregatorTest.java   |   2 +
 website/.spelling  |   1 +
 21 files changed, 1131 insertions(+), 182 deletions(-)
 create mode 100644 extensions-core/histogram/src/main/java/org/apache/druid/query/aggregation/histogram/ApproximateHistogramBufferAggregatorHelper.java
 copy extensions-core/histogram/src/main/java/org/apache/druid/query/aggregation/histogram/{ApproximateHistogramFoldingBufferAggregator.java => ApproximateHistogramFoldingBufferAggregatorHelper.java} (55%)
 create mode 100644 extensions-core/histogram/src/main/java/org/apache/druid/query/aggregation/histogram/ApproximateHistogramFoldingVectorAggregator.java
 copy processing/src/main/java/org/apache/druid/query/aggregation/FloatMinVectorAggregator.java => extensions-core/histogram/src/main/java/org/apache/druid/query/aggregation/histogram/ApproximateHistogramVectorAggregator.java (55%)
 create mode 100644 extensions-core/histogram/src/main/java/org/apache/druid/query/aggregation/histogram/FixedBucketsHistogramBufferAggregatorHelper.java
 create mode 100644 extensions-core/histogram/src/main/java/org/apache/druid/query/aggregation/histogram/FixedBucketsHistogramVectorAggregator.java
 create mode 100644 extensions-core/histogram/src/test/java/org/apache/druid/query/aggregation/histogram/ApproximateHistogramFoldingVectorAggregatorTest.java
 create mode 100644 extensions-core/histogram/src/test/java/org/apache/druid/query/aggregation/histogram/ApproximateHistogramVectorAggregatorTest.java
 create mode 100644 extensions-core/histogram/src/test/java/org/apache/druid/query/aggregation/histogram/FixedBucketsHistogramVectorAggregatorTest.java


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated (21703d8 -> f82fd22)

2020-08-26 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 21703d8  Fix handling of 'join' on top of 'union' datasources. (#10318)
 add f82fd22  Move tools for indexing to TaskToolbox instead of injecting them in constructor (#10308)

No new revisions were added by this update.

Summary of changes:
 .../IncrementalPublishingKafkaIndexTaskRunner.java |  10 --
 .../druid/indexing/kafka/KafkaIndexTask.java   |  19 +--
 .../indexing/kafka/supervisor/KafkaSupervisor.java |   6 +-
 .../druid/indexing/kafka/KafkaIndexTaskTest.java   |  17 ++-
 .../kafka/supervisor/KafkaSupervisorTest.java  |   7 +-
 .../druid/indexing/kinesis/KinesisIndexTask.java   |  19 +--
 .../indexing/kinesis/KinesisIndexTaskRunner.java   |  10 --
 .../kinesis/supervisor/KinesisSupervisor.java  |   6 +-
 .../kinesis/KinesisIndexTaskSerdeTest.java |   4 -
 .../indexing/kinesis/KinesisIndexTaskTest.java |  31 +++--
 .../kinesis/supervisor/KinesisSupervisorTest.java  |   7 +-
 .../apache/druid/indexing/common/TaskToolbox.java  |  78 +++-
 .../druid/indexing/common/TaskToolboxFactory.java  |  47 ++-
 .../task/AppenderatorDriverRealtimeIndexTask.java  |  48 ++-
 .../druid/indexing/common/task/CompactionTask.java |  84 +
 .../indexing/common/task/HadoopIndexTask.java  |   5 +-
 .../druid/indexing/common/task/IndexTask.java  |  88 +
 .../GeneratedPartitionsMetadataReport.java |   2 +-
 .../InputSourceSplitParallelIndexTaskRunner.java   |   7 +-
 .../batch/parallel/LegacySinglePhaseSubTask.java   |  14 +--
 .../batch/parallel/ParallelIndexPhaseRunner.java   |  14 +--
 .../parallel/ParallelIndexSupervisorTask.java  |  59 +++--
 ...mensionDistributionParallelIndexTaskRunner.java |  39 +-
 .../parallel/PartialDimensionDistributionTask.java |  23 +---
 ...GenericSegmentMergeParallelIndexTaskRunner.java |  11 +-
 .../parallel/PartialGenericSegmentMergeTask.java   |  13 +-
 ...HashSegmentGenerateParallelIndexTaskRunner.java |  11 +-
 .../parallel/PartialHashSegmentGenerateTask.java   |  12 +-
 ...angeSegmentGenerateParallelIndexTaskRunner.java |   9 +-
 .../parallel/PartialRangeSegmentGenerateTask.java  |  12 +-
 .../batch/parallel/PartialSegmentGenerateTask.java |  21 +---
 .../batch/parallel/PartialSegmentMergeTask.java|  19 +--
 .../SinglePhaseParallelIndexTaskRunner.java|   7 +-
 .../task/batch/parallel/SinglePhaseSubTask.java|  21 +---
 .../batch/parallel/SinglePhaseSubTaskSpec.java |  11 +-
 .../seekablestream/SeekableStreamIndexTask.java|  24 +---
 .../SeekableStreamIndexTaskRunner.java |  40 +++---
 .../druid/indexing/common/TaskToolboxTest.java |  13 ++
 .../AppenderatorDriverRealtimeIndexTaskTest.java   |  35 ++
 .../task/ClientCompactionTaskQuerySerdeTest.java   |   9 +-
 .../common/task/CompactionTaskParallelRunTest.java |  60 +
 .../common/task/CompactionTaskRunTest.java | 104 +++
 .../indexing/common/task/CompactionTaskTest.java   |  86 -
 .../druid/indexing/common/task/IndexTaskTest.java  | 139 -
 .../indexing/common/task/IngestionTestBase.java|  11 ++
 .../common/task/RealtimeIndexTaskTest.java |  22 ++--
 .../druid/indexing/common/task/TaskSerdeTest.java  |   8 --
 .../AbstractMultiPhaseParallelIndexingTest.java|   7 +-
 .../AbstractParallelIndexSupervisorTaskTest.java   |  82 +++-
 .../parallel/ParallelIndexPhaseRunnerTest.java |   7 +-
 .../ParallelIndexSupervisorTaskKillTest.java   |  47 +++
 .../ParallelIndexSupervisorTaskResourceTest.java   |  36 ++
 .../ParallelIndexSupervisorTaskSerdeTest.java  |  21 +---
 .../parallel/ParallelIndexSupervisorTaskTest.java  |   5 -
 .../task/batch/parallel/PartialCompactionTest.java |  13 +-
 .../PartialDimensionDistributionTaskTest.java  |  49 
 .../PartialGenericSegmentMergeTaskTest.java|   5 +-
 .../PartialHashSegmentGenerateTaskTest.java|   5 +-
 .../PartialRangeSegmentGenerateTaskTest.java   |   9 +-
 .../parallel/SinglePhaseParallelIndexingTest.java  |  13 +-
 .../overlord/SingleTaskBackgroundRunnerTest.java   |  12 ++
 .../druid/indexing/overlord/TaskLifecycleTest.java |  66 +-
 .../SeekableStreamSupervisorStateTest.java |  14 +--
 .../indexing/worker/WorkerTaskManagerTest.java |  28 ++---
 .../indexing/worker/WorkerTaskMonitorTest.java |  15 ++-
 .../org/apache/druid/cli/CliMiddleManager.java |   3 +-
 .../java/org/apache/druid/cli/CliOverlord.java |   3 +-
 67 files changed, 549 insertions(+), 1233 deletions(-)
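The commit above moves indexing tools onto the shared TaskToolbox rather than injecting each one through every task's constructor. A minimal sketch of that pattern follows; all names here (Toolbox, ChatHandlerProvider, IndexTaskSketch) are illustrative stand-ins, not Druid's actual API.

```java
// Sketch of the dependency-consolidation pattern, under assumed names.
// Shared runtime tools that many task types need:
class ChatHandlerProvider { }
class RowIngestionMetersFactory { }

// The toolbox owns the shared tools and is built once, in one place.
class Toolbox
{
  private final ChatHandlerProvider chatHandlerProvider;
  private final RowIngestionMetersFactory metersFactory;

  Toolbox(ChatHandlerProvider chatHandlerProvider, RowIngestionMetersFactory metersFactory)
  {
    this.chatHandlerProvider = chatHandlerProvider;
    this.metersFactory = metersFactory;
  }

  ChatHandlerProvider getChatHandlerProvider() { return chatHandlerProvider; }
  RowIngestionMetersFactory getMetersFactory() { return metersFactory; }
}

// The task is constructed from its spec alone and receives the toolbox only
// when it runs, so runtime-only dependencies stay out of the task's fields
// and out of its serialized form.
class IndexTaskSketch
{
  private final String id;

  IndexTaskSketch(String id) { this.id = id; }

  String run(Toolbox toolbox)
  {
    // Tools are fetched on demand instead of being constructor parameters.
    return toolbox.getChatHandlerProvider() != null && toolbox.getMetersFactory() != null
        ? "SUCCESS" : "FAILED";
  }
}

public class ToolboxPatternSketch
{
  public static void main(String[] args)
  {
    Toolbox toolbox = new Toolbox(new ChatHandlerProvider(), new RowIngestionMetersFactory());
    System.out.println(new IndexTaskSketch("index_example").run(toolbox));
  }
}
```

The payoff visible in the diffstat is that each task constructor (and its serde tests) shrinks, since per-tool constructor parameters disappear in favor of one toolbox argument at run time.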





[druid] branch master updated (0891b1f -> 9a81740)

2020-08-18 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 0891b1f  Add note about aggregations on floats (#10285)
 add 9a81740  Don't log the entire task spec (#10278)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/druid/common/utils/IdUtils.java |  47 ++
 .../org/apache/druid/common/utils/IdUtilsTest.java |  49 ++
 .../druid/indexing/common/task/AbstractTask.java   |  39 +---
 .../druid/indexing/common/task/CompactionTask.java |   4 +-
 .../batch/parallel/ParallelIndexPhaseRunner.java   |  28 +-
 .../parallel/ParallelIndexSupervisorTask.java  |  16 +---
 .../common/task/batch/parallel/TaskMonitor.java|   4 +-
 .../task/ClientCompactionTaskQuerySerdeTest.java   | 100 -
 ...ClientKillUnusedSegmentsTaskQuerySerdeTest.java |  80 +
 .../druid/indexing/common/task/TaskSerdeTest.java  |  37 
 .../AbstractParallelIndexSupervisorTaskTest.java   |   2 +-
 .../ParallelIndexSupervisorTaskKillTest.java   |   2 +-
 .../ParallelIndexSupervisorTaskResourceTest.java   |   2 +-
 .../task/batch/parallel/TaskMonitorTest.java   |   2 +-
 .../client/indexing/ClientCompactionTaskQuery.java |  25 --
 .../ClientKillUnusedSegmentsTaskQuery.java |  38 +++-
 .../druid/client/indexing/ClientTaskQuery.java |   6 +-
 .../client/indexing/HttpIndexingServiceClient.java |  39 +---
 .../client/indexing/IndexingServiceClient.java |   5 +-
 .../server/coordinator/duty/CompactSegments.java   |   1 +
 .../coordinator/duty/KillUnusedSegments.java   |   2 +-
 .../druid/server/http/DataSourcesResource.java |   2 +-
 ... => ClientKillUnusedSegmentsTaskQueryTest.java} |  14 ++-
 .../client/indexing/NoopIndexingServiceClient.java |   5 +-
 .../coordinator/duty/CompactSegmentsTest.java  |   9 +-
 .../druid/server/http/DataSourcesResourceTest.java |   2 +-
 26 files changed, 401 insertions(+), 159 deletions(-)
 create mode 100644 indexing-service/src/test/java/org/apache/druid/indexing/common/task/ClientKillUnusedSegmentsTaskQuerySerdeTest.java
 rename server/src/test/java/org/apache/druid/client/indexing/{ClientKillUnusedSegmentsQueryTest.java => ClientKillUnusedSegmentsTaskQueryTest.java} (82%)
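The commit above ("Don't log the entire task spec") stops embedding the full serialized spec in log messages, since a spec can be very large and may carry sensitive configuration. A hedged sketch of the before/after pattern, with purely illustrative names (TaskSketch, conciseMessage):

```java
// Illustrative only: not Druid's actual Task class or log messages.
class TaskSketch
{
  final String id;
  final String type;
  final String spec; // stands in for the full serialized task spec

  TaskSketch(String id, String type, String spec)
  {
    this.id = id;
    this.type = type;
    this.spec = spec;
  }
}

public class TaskLogSketch
{
  // Before: the whole spec ends up in the log line.
  static String verboseMessage(TaskSketch task)
  {
    return "Running task: " + task.spec;
  }

  // After: only the compact identifiers reach the log.
  static String conciseMessage(TaskSketch task)
  {
    return "Running task[" + task.id + "] of type[" + task.type + "]";
  }

  public static void main(String[] args)
  {
    TaskSketch task = new TaskSketch("index_parallel_wiki", "index_parallel", "{ ...many KB of JSON... }");
    System.out.println(conciseMessage(task));
  }
}
```

The id/type pair is enough to find the full spec via the task API when needed, so the log stays compact without losing traceability.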




