[GitHub] [druid] ziggythehamster commented on issue #10983: SQL JSON interface should have a context option to write decimals as strings to avoid unintended float coercion

2021-03-23 Thread GitBox


ziggythehamster commented on issue #10983:
URL: https://github.com/apache/druid/issues/10983#issuecomment-805547456


   That's a good point that I hadn't remembered when I wrote this bug. 
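   For illustration, a minimal Jackson sketch (not Druid code; the serializer wiring only illustrates what such a context option would do) showing how a high-precision decimal survives the round trip only when written as a string:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.module.SimpleModule;
import com.fasterxml.jackson.databind.ser.std.ToStringSerializer;

import java.math.BigDecimal;

public class DecimalAsStringDemo
{
  public static void main(String[] args) throws Exception
  {
    BigDecimal value = new BigDecimal("0.1234567890123456789012345");

    // Default behavior: the decimal is written as a JSON number. A client
    // that parses JSON numbers as IEEE-754 doubles silently loses precision.
    System.out.println(new ObjectMapper().writeValueAsString(value));

    // With a decimals-as-strings option: register a serializer that writes
    // BigDecimal as a string, so clients can parse it losslessly.
    ObjectMapper stringMapper = new ObjectMapper();
    SimpleModule module = new SimpleModule();
    module.addSerializer(BigDecimal.class, ToStringSerializer.instance);
    stringMapper.registerModule(module);
    System.out.println(stringMapper.writeValueAsString(value));
  }
}
```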


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] [druid] maytasm opened a new pull request #11025: Add an option for ingestion task to drop (mark unused) segments that are old

2021-03-23 Thread GitBox


maytasm opened a new pull request #11025:
URL: https://github.com/apache/druid/pull/11025


   Add an option for ingestion task to drop (mark unused) segments that are old
   
   ### Description
   
   Will write more details and use cases here tomorrow morning.
   
   This PR has:
   - [ ] been self-reviewed.
      - [ ] using the [concurrency checklist](https://github.com/apache/druid/blob/master/dev/code-review/concurrency.md) (Remove this item if the PR doesn't have any relation to concurrency.)
   - [ ] added documentation for new or modified features or behaviors.
   - [ ] added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
   - [ ] added or updated version, license, or notice information in [licenses.yaml](https://github.com/apache/druid/blob/master/dev/license.md)
   - [ ] added comments explaining the "why" and the intent of the code wherever it would not be obvious for an unfamiliar reader.
   - [ ] added unit tests or modified existing tests to cover new code paths, ensuring the threshold for [code coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md) is met.
   - [ ] added integration tests.
   - [ ] been tested in a test Druid cluster.
   





[GitHub] [druid] clintropolis commented on a change in pull request #11018: add protobuf inputformat

2021-03-23 Thread GitBox


clintropolis commented on a change in pull request #11018:
URL: https://github.com/apache/druid/pull/11018#discussion_r600143986



##
File path: 
extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/ProtobufExtensionsModule.java
##
@@ -37,7 +37,8 @@
 return Collections.singletonList(
 new SimpleModule("ProtobufInputRowParserModule")
 .registerSubtypes(
-new NamedType(ProtobufInputRowParser.class, "protobuf")
+new NamedType(ProtobufInputRowParser.class, "protobuf"),
+new NamedType(ProtobufInputFormat.class, "protobuf_format")

Review comment:
   I think this could just be `protobuf`, the same as the parser name, since they are separate interfaces.
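   As a side note, a minimal Jackson sketch (illustrative classes, not Druid's) of why reusing the name is safe: subtype names are resolved against the declared base type, so two hierarchies can both register the name "protobuf" without clashing:

```java
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.jsontype.NamedType;

public class SubtypeNameScopeDemo
{
  @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
  interface Parser {}

  @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
  interface Format {}

  static class ProtoParser implements Parser {}

  static class ProtoFormat implements Format {}

  public static void main(String[] args) throws Exception
  {
    ObjectMapper mapper = new ObjectMapper();
    // Same type-id string on two unrelated base types:
    mapper.registerSubtypes(
        new NamedType(ProtoParser.class, "protobuf"),
        new NamedType(ProtoFormat.class, "protobuf")
    );
    // The target base type disambiguates: each readValue resolves "protobuf"
    // only among subtypes of its own hierarchy.
    Parser parser = mapper.readValue("{\"type\": \"protobuf\"}", Parser.class);
    Format format = mapper.readValue("{\"type\": \"protobuf\"}", Format.class);
    System.out.println(parser.getClass().getSimpleName() + " / " + format.getClass().getSimpleName());
  }
}
```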

##
File path: 
extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/ProtobufReader.java
##
@@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.data.input.protobuf;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.collect.Iterators;
+import com.google.protobuf.util.JsonFormat;
+import org.apache.commons.io.IOUtils;
+import org.apache.druid.data.input.InputEntity;
+import org.apache.druid.data.input.InputRow;
+import org.apache.druid.data.input.InputRowSchema;
+import org.apache.druid.data.input.IntermediateRowParsingReader;
+import org.apache.druid.data.input.impl.MapInputRowParser;
+import org.apache.druid.java.util.common.CloseableIterators;
+import org.apache.druid.java.util.common.parsers.CloseableIterator;
+import org.apache.druid.java.util.common.parsers.JSONFlattenerMaker;
+import org.apache.druid.java.util.common.parsers.JSONPathSpec;
+import org.apache.druid.java.util.common.parsers.ObjectFlattener;
+import org.apache.druid.java.util.common.parsers.ObjectFlatteners;
+import org.apache.druid.java.util.common.parsers.ParseException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+public class ProtobufReader extends IntermediateRowParsingReader<String>
+{
+  private final InputRowSchema inputRowSchema;
+  private final InputEntity source;
+  private final ObjectFlattener<JsonNode> recordFlattener;
+  private final ProtobufBytesDecoder protobufBytesDecoder;
+
+  ProtobufReader(
+      InputRowSchema inputRowSchema,
+      InputEntity source,
+      ProtobufBytesDecoder protobufBytesDecoder,
+      JSONPathSpec flattenSpec
+  )
+  {
+    this.inputRowSchema = inputRowSchema;
+    this.source = source;
+    this.protobufBytesDecoder = protobufBytesDecoder;
+    this.recordFlattener = ObjectFlatteners.create(flattenSpec, new JSONFlattenerMaker(true));
+  }
+
+  @Override
+  protected CloseableIterator<String> intermediateRowIterator() throws IOException
+  {
+    return CloseableIterators.withEmptyBaggage(
+        Iterators.singletonIterator(JsonFormat.printer().print(protobufBytesDecoder.parse(ByteBuffer.wrap(IOUtils.toByteArray(source.open())))))
+    );
+  }

Review comment:
   The `InputRowParser` implementation for protobuf has an optimization that skips the conversion to JSON if a flattenSpec is not defined (see #), since the overhead of converting just to be able to flatten can slow input processing a fair bit (from the numbers in that PR).
   
   To retain this, it might make sense to make the intermediary format be `ByteBuffer` or `byte[]`, and handle the cases of having a `flattenSpec` or not separately. I think these could probably both be done within this same class; just make `parseInputRows` behave differently for each situation, and it maybe makes sense to use JSON conversion for the `toMap` method (it is used by `InputSourceSampler` for the sampler API).
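   For context, a rough sketch of the suggested two-path idea (the class and the `needsFlatten` switch are illustrative, not the PR's actual code): read fields directly off the decoded `DynamicMessage` when no flattening is needed, and only pay for the JSON conversion when it is:

```java
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.protobuf.Descriptors;
import com.google.protobuf.DynamicMessage;
import com.google.protobuf.util.JsonFormat;

import java.util.HashMap;
import java.util.Map;

public class TwoPathRowSketch
{
  static Map<String, Object> toPlainMap(DynamicMessage message, boolean needsFlatten) throws Exception
  {
    if (!needsFlatten) {
      // Fast path: read fields straight off the decoded message, skipping
      // the protobuf -> JSON round trip entirely.
      Map<String, Object> row = new HashMap<>();
      for (Map.Entry<Descriptors.FieldDescriptor, Object> entry : message.getAllFields().entrySet()) {
        row.put(entry.getKey().getName(), entry.getValue());
      }
      return row;
    }
    // Slow path: convert to JSON only when a flattenSpec actually needs to
    // navigate nested structure. A real reader would hand this JSON to the
    // flattener; parsing it to a Map is a stand-in here.
    String json = JsonFormat.printer().print(message);
    return new ObjectMapper().readValue(json, new TypeReference<Map<String, Object>>() {});
  }
}
```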





[GitHub] [druid] maytasm commented on a change in pull request #11019: Auto-compaction with segment granularity retrieve incomplete segments from timeline when interval overlap

2021-03-23 Thread GitBox


maytasm commented on a change in pull request #11019:
URL: https://github.com/apache/druid/pull/11019#discussion_r60014



##
File path: 
server/src/main/java/org/apache/druid/server/coordinator/duty/NewestSegmentFirstIterator.java
##
@@ -249,14 +246,13 @@ private void updateQueue(String dataSourceName, 
DataSourceCompactionConfig confi
   private static class CompactibleTimelineObjectHolderCursor implements Iterator<List<DataSegment>>
   {
 private final List<TimelineObjectHolder<String, DataSegment>> holders;
-private final Map, ShardSpec> originalShardSpecs;
-private final Map, String> originalVersion;
+private final VersionedIntervalTimeline<String, DataSegment> originalTimeline;

Review comment:
   Done







[GitHub] [druid] maytasm commented on a change in pull request #11019: Auto-compaction with segment granularity retrieve incomplete segments from timeline when interval overlap

2021-03-23 Thread GitBox


maytasm commented on a change in pull request #11019:
URL: https://github.com/apache/druid/pull/11019#discussion_r600099799



##
File path: 
integration-tests/src/test/java/org/apache/druid/tests/coordinator/duty/ITAutoCompactionTest.java
##
@@ -438,6 +438,67 @@ public void testAutoCompactionDutyWithSegmentGranularityAndExistingCompactedSegm
 }
   }
 
+  @Test
+  public void testAutoCompactionDutyWithSegmentGranularityAndSmallerSegmentGranularityCoveringMultipleSegmentsInTimeline() throws Exception
+  {
+    loadData(INDEX_TASK);
+    try (final Closeable ignored = unloader(fullDatasourceName)) {
+      final List<String> intervalsBeforeCompaction = coordinator.getSegmentIntervals(fullDatasourceName);
+      intervalsBeforeCompaction.sort(null);
+      // 4 segments across 2 days (4 total)...
+      verifySegmentsCount(4);
+      verifyQuery(INDEX_QUERIES_RESOURCE);
+
+      Granularity newGranularity = Granularities.YEAR;
+      submitCompactionConfig(MAX_ROWS_PER_SEGMENT_COMPACTED, NO_SKIP_OFFSET, new UserCompactionTaskGranularityConfig(newGranularity, null));
+
+      List<String> expectedIntervalAfterCompaction = new ArrayList<>();
+      // We will have one segment with interval of 2013-01-01/2014-01-01 (compacted with YEAR)
+      for (String interval : intervalsBeforeCompaction) {
+        for (Interval newinterval : newGranularity.getIterable(new Interval(interval, ISOChronology.getInstanceUTC()))) {
+          expectedIntervalAfterCompaction.add(newinterval.toString());
+        }
+      }
+      forceTriggerAutoCompaction(1);
+      verifyQuery(INDEX_QUERIES_RESOURCE);
+      verifySegmentsCompacted(1, MAX_ROWS_PER_SEGMENT_COMPACTED);
+      checkCompactionIntervals(expectedIntervalAfterCompaction);
+
+      loadData(INDEX_TASK);
+      verifySegmentsCount(5);
+      verifyQuery(INDEX_QUERIES_RESOURCE);
+      // 5 segments. 1 compacted YEAR segment and 4 newly ingested DAY segments across 2 days
+      // We will have one segment with interval of 2013-01-01/2014-01-01 (compacted with YEAR) from the compaction
+      // two segments with interval of 2013-09-01/2013-10-01 (newly ingested with DAY)

Review comment:
   Done







[GitHub] [druid] maytasm commented on a change in pull request #11019: Auto-compaction with segment granularity retrieve incomplete segments from timeline when interval overlap

2021-03-23 Thread GitBox


maytasm commented on a change in pull request #11019:
URL: https://github.com/apache/druid/pull/11019#discussion_r600098544



##
File path: 
integration-tests/src/test/java/org/apache/druid/tests/coordinator/duty/ITAutoCompactionTest.java
##
@@ -438,6 +438,67 @@ public void testAutoCompactionDutyWithSegmentGranularityAndExistingCompactedSegm
 }
   }
 
+  @Test
+  public void testAutoCompactionDutyWithSegmentGranularityAndSmallerSegmentGranularityCoveringMultipleSegmentsInTimeline() throws Exception
+  {
+    loadData(INDEX_TASK);
+    try (final Closeable ignored = unloader(fullDatasourceName)) {
+      final List<String> intervalsBeforeCompaction = coordinator.getSegmentIntervals(fullDatasourceName);
+      intervalsBeforeCompaction.sort(null);
+      // 4 segments across 2 days (4 total)...
+      verifySegmentsCount(4);
+      verifyQuery(INDEX_QUERIES_RESOURCE);
+
+      Granularity newGranularity = Granularities.YEAR;
+      submitCompactionConfig(MAX_ROWS_PER_SEGMENT_COMPACTED, NO_SKIP_OFFSET, new UserCompactionTaskGranularityConfig(newGranularity, null));
+
+      List<String> expectedIntervalAfterCompaction = new ArrayList<>();
+      // We will have one segment with interval of 2013-01-01/2014-01-01 (compacted with YEAR)
+      for (String interval : intervalsBeforeCompaction) {
+        for (Interval newinterval : newGranularity.getIterable(new Interval(interval, ISOChronology.getInstanceUTC()))) {
+          expectedIntervalAfterCompaction.add(newinterval.toString());
+        }
+      }
+      forceTriggerAutoCompaction(1);
+      verifyQuery(INDEX_QUERIES_RESOURCE);
+      verifySegmentsCompacted(1, MAX_ROWS_PER_SEGMENT_COMPACTED);
+      checkCompactionIntervals(expectedIntervalAfterCompaction);
+
+      loadData(INDEX_TASK);
+      verifySegmentsCount(5);
+      verifyQuery(INDEX_QUERIES_RESOURCE);
+      // 5 segments. 1 compacted YEAR segment and 4 newly ingested DAY segments across 2 days
+      // We will have one segment with interval of 2013-01-01/2014-01-01 (compacted with YEAR) from the compaction
+      // two segments with interval of 2013-09-01/2013-10-01 (newly ingested with DAY)

Review comment:
   The interval in the comment is incorrect. It should be a one-day interval.







[GitHub] [druid] jon-wei commented on a change in pull request #11019: Auto-compaction with segment granularity retrieve incomplete segments from timeline when interval overlap

2021-03-23 Thread GitBox


jon-wei commented on a change in pull request #11019:
URL: https://github.com/apache/druid/pull/11019#discussion_r600089575



##
File path: 
integration-tests/src/test/java/org/apache/druid/tests/coordinator/duty/ITAutoCompactionTest.java
##
@@ -438,6 +438,67 @@ public void testAutoCompactionDutyWithSegmentGranularityAndExistingCompactedSegm
 }
   }
 
+  @Test
+  public void testAutoCompactionDutyWithSegmentGranularityAndSmallerSegmentGranularityCoveringMultipleSegmentsInTimeline() throws Exception
+  {
+    loadData(INDEX_TASK);
+    try (final Closeable ignored = unloader(fullDatasourceName)) {
+      final List<String> intervalsBeforeCompaction = coordinator.getSegmentIntervals(fullDatasourceName);
+      intervalsBeforeCompaction.sort(null);
+      // 4 segments across 2 days (4 total)...
+      verifySegmentsCount(4);
+      verifyQuery(INDEX_QUERIES_RESOURCE);
+
+      Granularity newGranularity = Granularities.YEAR;
+      submitCompactionConfig(MAX_ROWS_PER_SEGMENT_COMPACTED, NO_SKIP_OFFSET, new UserCompactionTaskGranularityConfig(newGranularity, null));
+
+      List<String> expectedIntervalAfterCompaction = new ArrayList<>();
+      // We will have one segment with interval of 2013-01-01/2014-01-01 (compacted with YEAR)
+      for (String interval : intervalsBeforeCompaction) {
+        for (Interval newinterval : newGranularity.getIterable(new Interval(interval, ISOChronology.getInstanceUTC()))) {
+          expectedIntervalAfterCompaction.add(newinterval.toString());
+        }
+      }
+      forceTriggerAutoCompaction(1);
+      verifyQuery(INDEX_QUERIES_RESOURCE);
+      verifySegmentsCompacted(1, MAX_ROWS_PER_SEGMENT_COMPACTED);
+      checkCompactionIntervals(expectedIntervalAfterCompaction);
+
+      loadData(INDEX_TASK);
+      verifySegmentsCount(5);
+      verifyQuery(INDEX_QUERIES_RESOURCE);
+      // 5 segments. 1 compacted YEAR segment and 4 newly ingested DAY segments across 2 days
+      // We will have one segment with interval of 2013-01-01/2014-01-01 (compacted with YEAR) from the compaction
+      // two segments with interval of 2013-09-01/2013-10-01 (newly ingested with DAY)

Review comment:
   Is the comment here correct? The comment has monthly intervals, but the ingestion spec uses day granularity.







[GitHub] [druid] bananaaggle closed pull request #11018: add protobuf inputformat

2021-03-23 Thread GitBox


bananaaggle closed pull request #11018:
URL: https://github.com/apache/druid/pull/11018


   





[GitHub] [druid] bananaaggle commented on pull request #11018: add protobuf inputformat

2021-03-23 Thread GitBox


bananaaggle commented on pull request #11018:
URL: https://github.com/apache/druid/pull/11018#issuecomment-805401266


   > > @clintropolis Hi, I created ProtobufInputFormat following your suggestion. I haven't used this interface before, so I'm not very familiar with it. Can you review my code and tell me whether this implementation meets the requirements? If it is correct, I will add more unit tests. By the way, where in the documentation should I describe this feature?
   > 
   > Thanks! I will have a look this weekend. I think https://github.com/apache/druid/blob/master/docs/ingestion/data-formats.md is the appropriate place to document the new `InputFormat` (I guess we also forgot to update the protobuf section of this in the last PR, https://github.com/apache/druid/blob/master/docs/ingestion/data-formats.md#protobuf-parser)
   > 
   > > @clintropolis I reviewed the code for the Avro `InputFormat` and learned that it only supports batch ingestion jobs. Why do we not support stream ingestion jobs? I think it's not very hard to implement, and I'm glad to do it.
   > 
   > 👍 The only reason streaming Avro isn't supported yet is basically the same reason it wasn't done for Protobuf: simply that no one has done the conversion. I think it would be great if you would like to take that on, especially since I think Avro and Protobuf (until this PR) are the only "core" extensions that do not yet support `InputFormat`. It would make ingestion consistent for native batch and streaming, and be much appreciated!





[GitHub] [druid] jihoonson commented on a change in pull request #11019: Auto-compaction with segment granularity retrieve incomplete segments from timeline when interval overlap

2021-03-23 Thread GitBox


jihoonson commented on a change in pull request #11019:
URL: https://github.com/apache/druid/pull/11019#discussion_r600078907



##
File path: 
server/src/main/java/org/apache/druid/server/coordinator/duty/NewestSegmentFirstIterator.java
##
@@ -249,14 +246,13 @@ private void updateQueue(String dataSourceName, 
DataSourceCompactionConfig confi
   private static class CompactibleTimelineObjectHolderCursor implements Iterator<List<DataSegment>>
   {
 private final List<TimelineObjectHolder<String, DataSegment>> holders;
-private final Map, ShardSpec> originalShardSpecs;
-private final Map, String> originalVersion;
+private final VersionedIntervalTimeline<String, DataSegment> originalTimeline;

Review comment:
   Please add `@Nullable` here too.







[druid] branch master updated: Add resources used to EXPLAIN PLAN FOR output (#11024)

2021-03-23 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 8296123  Add resources used to EXPLAIN PLAN FOR output (#11024)
8296123 is described below

commit 8296123d895db7d06bc4517db5e767afb7862b83
Author: Jonathan Wei 
AuthorDate: Tue Mar 23 17:21:15 2021 -0700

Add resources used to EXPLAIN PLAN FOR output (#11024)
---
 .../druid/sql/calcite/planner/DruidPlanner.java| 28 ++
 .../druid/sql/calcite/planner/PlannerFactory.java  |  6 +++--
 .../druid/sql/avatica/DruidAvaticaHandlerTest.java |  4 +++-
 .../apache/druid/sql/calcite/CalciteQueryTest.java | 17 +
 .../org/apache/druid/sql/http/SqlResourceTest.java |  4 +++-
 5 files changed, 46 insertions(+), 13 deletions(-)

diff --git 
a/sql/src/main/java/org/apache/druid/sql/calcite/planner/DruidPlanner.java 
b/sql/src/main/java/org/apache/druid/sql/calcite/planner/DruidPlanner.java
index 772f650..6c91673 100644
--- a/sql/src/main/java/org/apache/druid/sql/calcite/planner/DruidPlanner.java
+++ b/sql/src/main/java/org/apache/druid/sql/calcite/planner/DruidPlanner.java
@@ -19,6 +19,8 @@
 
 package org.apache.druid.sql.calcite.planner;
 
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.ObjectMapper;
 import com.google.common.base.Preconditions;
 import com.google.common.base.Supplier;
 import com.google.common.base.Suppliers;
@@ -62,6 +64,7 @@ import org.apache.calcite.util.Pair;
 import org.apache.druid.java.util.common.guava.BaseSequence;
 import org.apache.druid.java.util.common.guava.Sequence;
 import org.apache.druid.java.util.common.guava.Sequences;
+import org.apache.druid.java.util.emitter.EmittingLogger;
 import org.apache.druid.segment.DimensionHandlerUtils;
 import org.apache.druid.sql.calcite.rel.DruidConvention;
 import org.apache.druid.sql.calcite.rel.DruidRel;
@@ -75,19 +78,24 @@ import java.util.Properties;
 
 public class DruidPlanner implements Closeable
 {
+  private static final EmittingLogger log = new 
EmittingLogger(DruidPlanner.class);
+
   private final FrameworkConfig frameworkConfig;
   private final Planner planner;
   private final PlannerContext plannerContext;
+  private final ObjectMapper jsonMapper;
   private RexBuilder rexBuilder;
 
   public DruidPlanner(
   final FrameworkConfig frameworkConfig,
-  final PlannerContext plannerContext
+  final PlannerContext plannerContext,
+  final ObjectMapper jsonMapper
   )
   {
 this.frameworkConfig = frameworkConfig;
 this.planner = Frameworks.getPlanner(frameworkConfig);
 this.plannerContext = plannerContext;
+this.jsonMapper = jsonMapper;
   }
 
   /**
@@ -358,8 +366,17 @@ public class DruidPlanner implements Closeable
   )
   {
 final String explanation = RelOptUtil.dumpPlan("", rel, 
explain.getFormat(), explain.getDetailLevel());
+String resources;
+try {
+  resources = jsonMapper.writeValueAsString(plannerContext.getResources());
+}
+catch (JsonProcessingException jpe) {
+  // this should never happen, we create the Resources here, not a user
+  log.error(jpe, "Encountered exception while serializing Resources for 
explain output");
+  resources = null;
+}
 final Supplier<Sequence<Object[]>> resultsSupplier = Suppliers.ofInstance(
-Sequences.simple(ImmutableList.of(new Object[]{explanation})));
+Sequences.simple(ImmutableList.of(new Object[]{explanation, 
resources})));
 return new PlannerResult(resultsSupplier, 
getExplainStructType(rel.getCluster().getTypeFactory()));
   }
 
@@ -414,8 +431,11 @@ public class DruidPlanner implements Closeable
   private static RelDataType getExplainStructType(RelDataTypeFactory 
typeFactory)
   {
 return typeFactory.createStructType(
-ImmutableList.of(Calcites.createSqlType(typeFactory, 
SqlTypeName.VARCHAR)),
-ImmutableList.of("PLAN")
+ImmutableList.of(
+Calcites.createSqlType(typeFactory, SqlTypeName.VARCHAR),
+Calcites.createSqlType(typeFactory, SqlTypeName.VARCHAR)
+),
+ImmutableList.of("PLAN", "RESOURCES")
 );
   }
 
diff --git 
a/sql/src/main/java/org/apache/druid/sql/calcite/planner/PlannerFactory.java 
b/sql/src/main/java/org/apache/druid/sql/calcite/planner/PlannerFactory.java
index fc584d7..9f86eda 100644
--- a/sql/src/main/java/org/apache/druid/sql/calcite/planner/PlannerFactory.java
+++ b/sql/src/main/java/org/apache/druid/sql/calcite/planner/PlannerFactory.java
@@ -107,7 +107,8 @@ public class PlannerFactory
 
 return new DruidPlanner(
 frameworkConfig,
-plannerContext
+plannerContext,
+jsonMapper
 );
   }
 
@@ -121,7 +122,8 @@ public class PlannerFactory
 
 return new DruidPlanner(
 frameworkConfig,
-plannerContext
+plannerContext,
+json

[GitHub] [druid] jon-wei merged pull request #11024: Add resources used to EXPLAIN PLAN FOR output

2021-03-23 Thread GitBox


jon-wei merged pull request #11024:
URL: https://github.com/apache/druid/pull/11024


   





[druid] 02/02: Test UI to trigger auto compaction (#10469)

2021-03-23 Thread jihoonson
This is an automated email from the ASF dual-hosted git repository.

jihoonson pushed a commit to branch 0.20.2
in repository https://gitbox.apache.org/repos/asf/druid.git

commit 2016b52abbff68c5e7f4583640a02dc9fb7cb054
Author: Chi Cao Minh 
AuthorDate: Sun Oct 4 00:06:07 2020 -0700

Test UI to trigger auto compaction (#10469)

In the web console E2E tests, Use the new UI to trigger auto compaction
instead of calling the REST API directly so that the UI is covered by
tests.
---
 web-console/e2e-tests/auto-compaction.spec.ts  | 21 +++--
 .../e2e-tests/component/datasources/overview.ts| 36 ++
 2 files changed, 41 insertions(+), 16 deletions(-)

diff --git a/web-console/e2e-tests/auto-compaction.spec.ts 
b/web-console/e2e-tests/auto-compaction.spec.ts
index 68cd00b..496e56a 100644
--- a/web-console/e2e-tests/auto-compaction.spec.ts
+++ b/web-console/e2e-tests/auto-compaction.spec.ts
@@ -16,7 +16,6 @@
  * limitations under the License.
  */
 
-import axios from 'axios';
 import path from 'path';
 import * as playwright from 'playwright-core';
 
@@ -25,7 +24,6 @@ import { Datasource } from 
'./component/datasources/datasource';
 import { DatasourcesOverview } from './component/datasources/overview';
 import { HashedPartitionsSpec } from './component/load-data/config/partition';
 import { saveScreenshotIfError } from './util/debug';
-import { COORDINATOR_URL } from './util/druid';
 import { DRUID_EXAMPLES_QUICKSTART_TUTORIAL_DIR } from './util/druid';
 import { UNIFIED_CONSOLE_URL } from './util/druid';
 import { runIndexTask } from './util/druid';
@@ -77,7 +75,7 @@ describe('Auto-compaction', () => {
   // need several iterations if several time chunks need compaction
   let currNumSegment = uncompactedNumSegment;
   await retryIfJestAssertionError(async () => {
-await triggerCompaction();
+await triggerCompaction(page);
 currNumSegment = await waitForCompaction(page, datasourceName, 
currNumSegment);
 
 const compactedNumSegment = 2;
@@ -127,15 +125,18 @@ async function configureCompaction(
   const datasourcesOverview = new DatasourcesOverview(page, 
UNIFIED_CONSOLE_URL);
   await datasourcesOverview.setCompactionConfiguration(datasourceName, 
compactionConfig);
 
-  const savedCompactionConfig = await 
datasourcesOverview.getCompactionConfiguration(
-datasourceName,
-  );
-  expect(savedCompactionConfig).toEqual(compactionConfig);
+  // Saving the compaction config is not instantaneous
+  await retryIfJestAssertionError(async () => {
+const savedCompactionConfig = await 
datasourcesOverview.getCompactionConfiguration(
+  datasourceName,
+);
+expect(savedCompactionConfig).toEqual(compactionConfig);
+  });
 }
 
-async function triggerCompaction() {
-  const res = await 
axios.post(`${COORDINATOR_URL}/druid/coordinator/v1/compaction/compact`);
-  expect(res.status).toBe(200);
+async function triggerCompaction(page: playwright.Page) {
+  const datasourcesOverview = new DatasourcesOverview(page, 
UNIFIED_CONSOLE_URL);
+  await datasourcesOverview.triggerCompaction();
 }
 
 async function waitForCompaction(
diff --git a/web-console/e2e-tests/component/datasources/overview.ts 
b/web-console/e2e-tests/component/datasources/overview.ts
index 11f44b4..397f514 100644
--- a/web-console/e2e-tests/component/datasources/overview.ts
+++ b/web-console/e2e-tests/component/datasources/overview.ts
@@ -44,7 +44,6 @@ enum DatasourceColumn {
   ACTIONS,
 }
 
-const EDIT_COMPACTION_CONFIGURATION = 'Edit compaction configuration';
 const SKIP_OFFSET_FROM_LATEST = 'Skip offset from latest';
 
 /**
@@ -83,9 +82,8 @@ export class DatasourcesOverview {
 datasourceName: string,
 compactionConfig: CompactionConfig,
  ): Promise<void> {
-await this.openEditActions(datasourceName);
+await this.openCompactionConfigurationDialog(datasourceName);
 
-await this.page.click(`"${EDIT_COMPACTION_CONFIGURATION}"`);
 await setLabeledInput(
   this.page,
   SKIP_OFFSET_FROM_LATEST,
@@ -96,10 +94,20 @@ export class DatasourcesOverview {
 await clickButton(this.page, 'Submit');
   }
 
-  async getCompactionConfiguration(datasourceName: string): Promise<CompactionConfig> {
+  private async openCompactionConfigurationDialog(datasourceName: string): Promise<void> {
 await this.openEditActions(datasourceName);
+await this.clickMenuItem('Edit compaction configuration');
+await this.page.waitForSelector('div.compaction-dialog');
+  }
+
+  private async clickMenuItem(text: string): Promise<void> {
+const menuItemSelector = `//a[*[contains(text(),"${text}")]]`;
+await this.page.click(menuItemSelector);
+  }
+
+  async getCompactionConfiguration(datasourceName: string): Promise<CompactionConfig> {
+await this.openCompactionConfigurationDialog(datasourceName);
 
-await this.page.click(`"${EDIT_COMPACTION_CONFIGURATION}"`);
 const skipOffsetFromLatest = await getLabeledInput(this.page, 
SKIP_OFFSET_FROM_LATEST);
 const part

[druid] 01/02: fix tests for java 11

2021-03-23 Thread jihoonson
This is an automated email from the ASF dual-hosted git repository.

jihoonson pushed a commit to branch 0.20.2
in repository https://gitbox.apache.org/repos/asf/druid.git

commit 3282b6a7f0679578d8e03f4eacc7f37bc73fc3d1
Author: Jihoon Son 
AuthorDate: Tue Mar 23 15:40:00 2021 -0700

fix tests for java 11
---
 .../druid/firehose/PostgresqlFirehoseDatabaseConnectorTest.java | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git 
a/extensions-core/postgresql-metadata-storage/src/test/java/org/apache/druid/firehose/PostgresqlFirehoseDatabaseConnectorTest.java
 
b/extensions-core/postgresql-metadata-storage/src/test/java/org/apache/druid/firehose/PostgresqlFirehoseDatabaseConnectorTest.java
index 229f1a4..4ab9ab3 100644
--- 
a/extensions-core/postgresql-metadata-storage/src/test/java/org/apache/druid/firehose/PostgresqlFirehoseDatabaseConnectorTest.java
+++ 
b/extensions-core/postgresql-metadata-storage/src/test/java/org/apache/druid/firehose/PostgresqlFirehoseDatabaseConnectorTest.java
@@ -87,7 +87,7 @@ public class PostgresqlFirehoseDatabaseConnectorTest
 
 JdbcAccessSecurityConfig securityConfig = 
newSecurityConfigEnforcingAllowList(ImmutableSet.of(""));
 
-expectedException.expectMessage("The property [keyonly] is not in the 
allowed list");
+expectedException.expectMessage("is not in the allowed list");
 expectedException.expect(IllegalArgumentException.class);
 
 new PostgresqlFirehoseDatabaseConnector(
@@ -132,7 +132,7 @@ public class PostgresqlFirehoseDatabaseConnectorTest
 
 JdbcAccessSecurityConfig securityConfig = 
newSecurityConfigEnforcingAllowList(ImmutableSet.of("none", "nonenone"));
 
-expectedException.expectMessage("The property [keyonly] is not in the 
allowed list");
+expectedException.expectMessage("is not in the allowed list");
 expectedException.expect(IllegalArgumentException.class);
 
 new PostgresqlFirehoseDatabaseConnector(
@@ -155,7 +155,7 @@ public class PostgresqlFirehoseDatabaseConnectorTest
 
 JdbcAccessSecurityConfig securityConfig = 
newSecurityConfigEnforcingAllowList(ImmutableSet.of("user", "nonenone"));
 
-expectedException.expectMessage("The property [keyonly] is not in the 
allowed list");
+expectedException.expectMessage("is not in the allowed list");
 expectedException.expect(IllegalArgumentException.class);
 
 new PostgresqlFirehoseDatabaseConnector(




[druid] branch 0.20.2 updated (48953e35 -> 2016b52)

2021-03-23 Thread jihoonson
This is an automated email from the ASF dual-hosted git repository.

jihoonson pushed a change to branch 0.20.2
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 48953e35 Allow list for JDBC properties
 new 3282b6a  fix tests for java 11
 new 2016b52  Test UI to trigger auto compaction (#10469)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../PostgresqlFirehoseDatabaseConnectorTest.java   |  6 ++--
 web-console/e2e-tests/auto-compaction.spec.ts  | 21 +++--
 .../e2e-tests/component/datasources/overview.ts| 36 ++
 3 files changed, 44 insertions(+), 19 deletions(-)




[druid] branch master updated: allow multiple ldap bootstrap files for integration tests (#11023)

2021-03-23 Thread jihoonson
This is an automated email from the ASF dual-hosted git repository.

jihoonson pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 6aec8f0  allow multiple ldap bootstrap files for integration tests 
(#11023)
6aec8f0 is described below

commit 6aec8f0c1b3968c94a1cdb741281af907299978c
Author: Jihoon Son 
AuthorDate: Tue Mar 23 13:18:36 2021 -0700

allow multiple ldap bootstrap files for integration tests (#11023)
---
 integration-tests/docker/docker-compose.base.yml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/integration-tests/docker/docker-compose.base.yml 
b/integration-tests/docker/docker-compose.base.yml
index 46dda1d..f8119de 100644
--- a/integration-tests/docker/docker-compose.base.yml
+++ b/integration-tests/docker/docker-compose.base.yml
@@ -60,7 +60,7 @@ services:
 image: druid/cluster
 container_name: druid-metadata-storage
 ports:
-  - 3306:3306
+  - 13306:3306
 networks:
   druid-it-net:
 ipv4_address: 172.172.172.3
@@ -371,7 +371,7 @@ services:
   - 8636:636
 privileged: true
 volumes:
-  - 
./ldap-configs/bootstrap.ldif:/container/service/slapd/assets/config/bootstrap/ldif/bootstrap.ldif
+  - 
./ldap-configs:/container/service/slapd/assets/config/bootstrap/ldif/custom
   - ${HOME}/shared:/shared
 env_file:
   - ./environment-configs/common




[GitHub] [druid] jihoonson merged pull request #11023: allow multiple ldap bootstrap files for integration tests

2021-03-23 Thread GitBox


jihoonson merged pull request #11023:
URL: https://github.com/apache/druid/pull/11023


   





[druid] branch master updated: Allow overlapping intervals for the compaction task (#10912)

2021-03-23 Thread maytasm
This is an automated email from the ASF dual-hosted git repository.

maytasm pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new a041933  Allow overlapping intervals for the compaction task (#10912)
a041933 is described below

commit a04193301732b08977b2085f2f858751ab614141
Author: Jihoon Son 
AuthorDate: Tue Mar 23 11:21:54 2021 -0700

Allow overlapping intervals for the compaction task (#10912)

* Allow overlapping intervals for the compaction task

* unused import

* line indentation

Co-authored-by: Maytas Monsereenusorn 
---
 .../common/granularity/IntervalsByGranularity.java | 25 +---
 .../util/common/IntervalsByGranularityTest.java| 60 ++
 .../druid/indexing/common/task/CompactionTask.java |  6 +-
 .../common/task/CompactionTaskRunTest.java | 74 ++
 .../indexing/granularity/BaseGranularitySpec.java  |  9 +--
 5 files changed, 117 insertions(+), 57 deletions(-)

diff --git 
a/core/src/main/java/org/apache/druid/java/util/common/granularity/IntervalsByGranularity.java
 
b/core/src/main/java/org/apache/druid/java/util/common/granularity/IntervalsByGranularity.java
index 7065535..ff076d4 100644
--- 
a/core/src/main/java/org/apache/druid/java/util/common/granularity/IntervalsByGranularity.java
+++ 
b/core/src/main/java/org/apache/druid/java/util/common/granularity/IntervalsByGranularity.java
@@ -23,16 +23,12 @@ import com.google.common.collect.FluentIterable;
 import org.apache.druid.common.guava.SettableSupplier;
 import org.apache.druid.java.util.common.IAE;
 import org.apache.druid.java.util.common.JodaUtils;
-import org.apache.druid.java.util.common.guava.Comparators;
 import org.joda.time.Interval;
 
-import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
-import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
-import java.util.Set;
 
 /**
  * Produce a stream of intervals generated by a given set of intervals as 
input and a given
@@ -51,19 +47,7 @@ public class IntervalsByGranularity
*/
   public IntervalsByGranularity(Collection<Interval> intervals, Granularity granularity)
   {
-// eliminate dups, sort intervals:
-Set<Interval> intervalSet = new HashSet<>(intervals);
-List<Interval> inputIntervals = new ArrayList<>(intervals.size());
-inputIntervals.addAll(intervalSet);
-inputIntervals.sort(Comparators.intervalsByStartThenEnd());
-
-// sanity check
-if (JodaUtils.containOverlappingIntervals(inputIntervals)) {
-  throw new IAE("Intervals contain overlapping intervals [%s]", intervals);
-}
-
-// all good:
-sortedNonOverlappingIntervals = inputIntervals;
+this.sortedNonOverlappingIntervals = 
JodaUtils.condenseIntervals(intervals);
 this.granularity = granularity;
   }
 
@@ -73,9 +57,8 @@ public class IntervalsByGranularity
*/
   public Iterator<Interval> granularityIntervalsIterator()
   {
-Iterator<Interval> ite;
 if (sortedNonOverlappingIntervals.isEmpty()) {
-  ite = Collections.emptyIterator();
+  return Collections.emptyIterator();
 } else {
   // The filter after transform & concat is to remove duplicates.
   // This can happen when condense left intervals that did not overlap but
@@ -85,7 +68,7 @@ public class IntervalsByGranularity
   // intervals will be returned, both with the same value 
2013-01-01T00:00:00.000Z/2013-02-01T00:00:00.000Z.
   // Thus dups can be created given the right conditions
   final SettableSupplier<Interval> previous = new SettableSupplier<>();
-  ite = 
FluentIterable.from(sortedNonOverlappingIntervals).transformAndConcat(granularity::getIterable)
+  return 
FluentIterable.from(sortedNonOverlappingIntervals).transformAndConcat(granularity::getIterable)
   .filter(interval -> {
 if (previous.get() != null && 
previous.get().equals(interval)) {
   return false;
@@ -94,7 +77,5 @@ public class IntervalsByGranularity
 return true;
   }).iterator();
 }
-return ite;
   }
-
 }
diff --git 
a/core/src/test/java/org/apache/druid/java/util/common/IntervalsByGranularityTest.java
 
b/core/src/test/java/org/apache/druid/java/util/common/IntervalsByGranularityTest.java
index a38e6d5..ee01aa0 100644
--- 
a/core/src/test/java/org/apache/druid/java/util/common/IntervalsByGranularityTest.java
+++ 
b/core/src/test/java/org/apache/druid/java/util/common/IntervalsByGranularityTest.java
@@ -21,11 +21,13 @@ package org.apache.druid.java.util.common;
 
 import com.google.common.collect.ImmutableList;
 import org.apache.druid.java.util.common.granularity.Granularities;
-import org.apache.druid.java.util.common.granularity.Granularity;
 import org.apache.druid.java.util.common.granularity.IntervalsByGranularity;
 import org.joda.time.Interva
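A hedged usage sketch of the new behavior (class and method names come from the diff above; the intervals themselves are illustrative): overlapping input intervals are now condensed instead of rejected with an IAE:

```java
import com.google.common.collect.ImmutableList;
import org.apache.druid.java.util.common.Intervals;
import org.apache.druid.java.util.common.granularity.Granularities;
import org.apache.druid.java.util.common.granularity.IntervalsByGranularity;
import org.joda.time.Interval;

import java.util.Iterator;

public class CondenseOverlappingIntervalsDemo
{
  public static void main(String[] args)
  {
    // These two intervals overlap in February; before this change the
    // constructor threw an IAE, now they are condensed to one span.
    IntervalsByGranularity intervals = new IntervalsByGranularity(
        ImmutableList.of(
            Intervals.of("2013-01-01/2013-03-01"),
            Intervals.of("2013-02-01/2013-04-01")
        ),
        Granularities.MONTH
    );
    Iterator<Interval> it = intervals.granularityIntervalsIterator();
    while (it.hasNext()) {
      System.out.println(it.next()); // three MONTH buckets, no duplicates
    }
  }
}
```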

[GitHub] [druid] maytasm merged pull request #10912: Allow overlapping intervals for the compaction task

2021-03-23 Thread GitBox


maytasm merged pull request #10912:
URL: https://github.com/apache/druid/pull/10912


   





[GitHub] [druid-website-src] fjy merged pull request #215: Update druid-powered.md to add societe generale

2021-03-23 Thread GitBox


fjy merged pull request #215:
URL: https://github.com/apache/druid-website-src/pull/215


   





[druid-website-src] branch master updated: Update druid-powered.md to add societe generale

2021-03-23 Thread fjy
This is an automated email from the ASF dual-hosted git repository.

fjy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git


The following commit(s) were added to refs/heads/master by this push:
 new 724169e  Update druid-powered.md to add societe generale
 new e608489  Merge pull request #215 from jelenazanko/patch-3
724169e is described below

commit 724169e81d5da75a7fe9fe3ec254da3f28f93b9e
Author: Jelena Zanko <59612355+jelenaza...@users.noreply.github.com>
AuthorDate: Mon Mar 22 09:28:28 2021 -0500

Update druid-powered.md to add societe generale
---
 druid-powered.md | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/druid-powered.md b/druid-powered.md
index 2184cd6..f267220 100644
--- a/druid-powered.md
+++ b/druid-powered.md
@@ -619,6 +619,12 @@ Smyte provides an API and UI for detecting and blocking 
bad actors on the intern
 
 * [Data Analytics and Processing at 
Snap](https://www.slideshare.net/CharlesAllen9/data-analytics-and-processing-at-snap-druid-meetup-la-september-2018)
 
+## Societe Generale
+
+Societe Generale, one of Europe's leading financial services groups and a 
major player in the economy for over 150 years, supports 29 million clients 
every day with 138,000 staff in 62 countries.
+
+Within the Societe Generale IT department, Apache Druid is used as a time-series database to store performance metrics generated in real time by thousands of servers, databases, and middleware components. These data are stored in multiple Druid clusters across multiple regions (840+ vCPUs, 7,000+ GB of RAM, 300+ billion events) and are used for many purposes, such as dashboarding and predictive-maintenance use cases.
+
 ## Splunk
 
 We went through the journey of deploying Apache Druid clusters on Kubernetes 
and created a [druid-operator](https://github.com/druid-io/druid-operator). We 
use this operator to deploy Druid clusters at Splunk.




[GitHub] [druid] tanisdlj commented on issue #10866: Ingestion task fails with InterruptedException when handling the segments

2021-03-23 Thread GitBox


tanisdlj commented on issue #10866:
URL: https://github.com/apache/druid/issues/10866#issuecomment-805030653


   Mhmhmh... how do I get those logs?
   As far as I know, the supervisor doesn't have specific logs, and I have no idea how to get the Kafka indexing task logs.





[GitHub] [druid] tanisdlj commented on issue #10868: Middlemanager Task failing on apparently graceful shutdown

2021-03-23 Thread GitBox


tanisdlj commented on issue #10868:
URL: https://github.com/apache/druid/issues/10868#issuecomment-805026726


   A bit more info that might be useful. Another task was killed the same way; from the overlord:
   ```
   Mar 22 19:07:00 druid-master-1 java[9686]: 2021-03-22T19:07:00,487 INFO 
[KafkaSupervisor-events-hourly] 
org.apache.druid.indexing.overlord.RemoteTaskRunner - Shutdown 
[index_kafka_events-hourly_6b40993d6374451_mfkjmnbg] because: [No task in the 
corresponding pending completion taskGroup[6] succeeded before completion 
timeout elapsed]
   ```

