[GitHub] [incubator-druid] gianm commented on issue #7571: Optimize coordinator API to retrieve segments with overshadowed status

2019-05-21 Thread GitBox
gianm commented on issue #7571: Optimize coordinator API to retrieve segments 
with overshadowed status
URL: 
https://github.com/apache/incubator-druid/issues/7571#issuecomment-494659852
 
 
   > Added to 0.15 milestone because if SegmentWithOvershadowedStatus leaks 
into the Druid 0.15 API, it will be much harder to remove later (it would 
require transition PRs through several consecutive Druid versions, and 
temporary glue code). So I would really like to see it go away before Druid 
0.15, if other people agree that this needs to be done. Please provide 
arguments if you disagree.
   
   @leventov - which API leak are you concerned about? From context I'm 
guessing it's the HTTP API. I suggest we address that by not documenting it and 
treating it as an internal API. The info is still exposed through system tables 
(the original motivation for creating this HTTP API), and that won't change 
even if we end up wanting to alter the underlying API.
   
   By the way, I'm still not really sure that a mutable DataSegment is the way 
to go. It just feels wrong. It's a class that is meant to represent the 
'payload' in the druid_segments table and the announcement in ZK of an 
available segment. In my experience these sorts of widely used modeling classes 
work best when they are immutable. Also, if we added mutable fields like 
'isOvershadowed', it would often be invalid (for example: in code that is 
reading or writing an individual segment announcement in ZK, where the 
overshadowedness concept does not have meaning).
   
   Besides, I don't think that having mutable DataSegments would do much to 
reduce churn. DataSegments aren't updated very often. Most are either 
never updated, or are just updated once (during realtime-to-historical 
handoff). If someone is doing a deep storage migration they might update all 
their segments, but this would be quite infrequent. If we use wrappers in 
situations that want to track extra state, like overshadowedness, then those 
wrappers could be mutable to avoid churn in the wrappers. (Meaning: we could 
make SegmentWithOvershadowedStatus mutable.)
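   The immutable-model-plus-mutable-wrapper split described above can be 
sketched in plain Java. This is a hypothetical illustration with stand-in 
names (`Segment`, `SegmentWithStatus`); Druid's real DataSegment has many more 
fields.
   
   ```java
   // Sketch: keep the widely shared model class immutable, and track
   // transient state (like overshadowed-ness) in a small mutable wrapper
   // owned by the code that needs it, so the shared object never churns.
   final class ImmutableSegmentDemo
   {
     // stands in for DataSegment: widely shared, so kept immutable
     static final class Segment
     {
       private final String id;
   
       Segment(String id)
       {
         this.id = id;
       }
   
       String getId()
       {
         return id;
       }
     }
   
     // stands in for SegmentWithOvershadowedStatus: the mutable bookkeeping
     static final class SegmentWithStatus
     {
       final Segment segment;          // shared, immutable payload
       private boolean overshadowed;   // only the wrapper mutates
   
       SegmentWithStatus(Segment segment)
       {
         this.segment = segment;
       }
   
       boolean isOvershadowed()
       {
         return overshadowed;
       }
   
       void setOvershadowed(boolean overshadowed)
       {
         this.overshadowed = overshadowed;
       }
     }
   
     public static void main(String[] args)
     {
       Segment segment = new Segment("wiki_2019-05-01_v1");
       SegmentWithStatus status = new SegmentWithStatus(segment);
       status.setOvershadowed(true);  // mutate the wrapper, not the segment
       System.out.println(status.segment.getId() + " overshadowed=" + status.isOvershadowed());
     }
   }
   ```
   
   Code that only reads or writes segment announcements never sees the 
wrapper, so the invalid-field problem mentioned above does not arise.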


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] [incubator-druid] gianm commented on issue #7653: Refactor SQLMetadataSegmentManager; Change contract of REST methods in DataSourcesResource

2019-05-21 Thread GitBox
gianm commented on issue #7653: Refactor SQLMetadataSegmentManager; Change 
contract of REST methods in DataSourcesResource
URL: https://github.com/apache/incubator-druid/pull/7653#issuecomment-494655199
 
 
   I'll take a look at the SQLMetadataSegmentManager changes.





[GitHub] [incubator-druid] gianm edited a comment on issue #7595: Optimize overshadowed segments computation

2019-05-21 Thread GitBox
gianm edited a comment on issue #7595: Optimize overshadowed segments 
computation
URL: https://github.com/apache/incubator-druid/pull/7595#issuecomment-494654440
 
 
   > For the concern about DruidCoordinatorRuleRunner seeing a stale view, do 
you foresee issues with that acting on an older view of the overshadowed 
segments?
   
   It's not just a view of overshadowed segments, right? The entire timeline 
would be an older snapshot. I think as long as it's not 'surprisingly rolled 
back' then it is OK. Meaning: no run should use an older snapshot than a prior 
run.
   
   Within any given coordinator leadership epoch, the snapshot at time T will 
be newer than the snapshot at previous times < T. So avoiding surprising 
rollbacks really boils down to avoiding a situation where the _new leader_ uses 
an older snapshot than the _old leader_. I think the best way to avoid this is 
to make sure that any snapshot used in a given leadership epoch must have been 
taken after the start of that epoch. (If it was from prior to the start of the 
epoch, it might be out of date and using it might lead to a surprising 
rollback.)
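   The invariant above can be sketched as a tiny guard. This is a hypothetical 
illustration, not Druid's actual coordinator code: the epoch records when 
leadership started and refuses any snapshot taken before that instant.
   
   ```java
   // Sketch: within a leadership epoch, only snapshots taken after the epoch
   // began are usable, so a new leader can never act on state older than
   // whatever the old leader last ran against.
   final class EpochSnapshotGuard
   {
     private final long epochStartMillis;  // when this coordinator became leader
   
     EpochSnapshotGuard(long epochStartMillis)
     {
       this.epochStartMillis = epochStartMillis;
     }
   
     /** A snapshot is usable only if it was taken after this epoch began. */
     boolean isUsable(long snapshotTakenAtMillis)
     {
       // A snapshot from before we became leader might predate the previous
       // leader's last run; using it could look like a surprising rollback.
       return snapshotTakenAtMillis >= epochStartMillis;
     }
   
     public static void main(String[] args)
     {
       EpochSnapshotGuard guard = new EpochSnapshotGuard(10_000L);
       System.out.println(guard.isUsable(9_000L));   // stale: predates the epoch
       System.out.println(guard.isUsable(11_000L));  // fresh: taken within the epoch
     }
   }
   ```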





[GitHub] [incubator-druid] hellobabygogo opened a new pull request #7727: fix Select query fails with columns called "timestamp" bug

2019-05-21 Thread GitBox
hellobabygogo opened a new pull request #7727: fix Select query fails with 
columns called "timestamp" bug
URL: https://github.com/apache/incubator-druid/pull/7727
 
 
   If you have a dimension or metric called "timestamp", it will override the 
actual timestamp from the row, leading to an exception later on when other 
code tries to read that value as if it were a timestamp.
   
   Related issue: https://github.com/apache/incubator-druid/issues/3303





[GitHub] [incubator-druid] vsharathchandra commented on issue #2320: [Proposal] support for setting javaOpts per a task

2019-05-21 Thread GitBox
vsharathchandra commented on issue #2320: [Proposal] support for setting 
javaOpts per a task
URL: 
https://github.com/apache/incubator-druid/issues/2320#issuecomment-494643903
 
 
   Thanks. Tried it, and it's working.





[GitHub] [incubator-druid] dijkspicy opened a new issue #7726: longMin/longMax has different result, should unify?

2019-05-21 Thread GitBox
dijkspicy opened a new issue #7726: longMin/longMax has different result, 
should unify?
URL: https://github.com/apache/incubator-druid/issues/7726
 
 
   Two SQL statements, where (col < 0) is always false:
   1. select max(col) filter(where col < 0) from datasource;
   2. select max(col) from datasource where col < 0;
   
   We convert these into two HTTP queries, and both use the longMax 
aggregator, but SQL 1 returns Long.MIN_VALUE while SQL 2 returns an empty 
result. Should these two results be unified?
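   The divergence can be reproduced outside Druid with plain Java streams. 
This is an illustrative analogy, not Druid's longMax implementation: a 
filtered aggregation folds from the aggregator's identity value, so an 
always-false filter still yields Long.MIN_VALUE for a max, while filtering 
the rows away first leaves nothing to aggregate.
   
   ```java
   import java.util.Arrays;
   import java.util.OptionalLong;
   
   final class LongMaxSemantics
   {
     /** Like query 1: max(col) filter(where col < 0) -- fold from the identity. */
     static long filteredAggregate(long[] col)
     {
       return Arrays.stream(col).filter(v -> v < 0).reduce(Long.MIN_VALUE, Math::max);
     }
   
     /** Like query 2: where col < 0 before aggregating -- no rows, no value. */
     static OptionalLong aggregateAfterWhere(long[] col)
     {
       return Arrays.stream(col).filter(v -> v < 0).max();
     }
   
     public static void main(String[] args)
     {
       long[] col = {1L, 2L, 3L};
       System.out.println(filteredAggregate(col));               // identity value
       System.out.println(aggregateAfterWhere(col).isPresent()); // no result row
     }
   }
   ```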





[GitHub] [incubator-druid] surekhasaharan commented on issue #7706: Get correct curr_size attribute value for historical servers

2019-05-21 Thread GitBox
surekhasaharan commented on issue #7706: Get correct curr_size attribute value 
for historical servers
URL: https://github.com/apache/incubator-druid/pull/7706#issuecomment-494628398
 
 
   @fjy this is for 0.16, I think.





[GitHub] [incubator-druid] viongpanzi commented on a change in pull request #7716: AggregatorUtil should cache parsed expression to avoid memory problem (OOM/FGC) when Expression is used in metricsSpe

2019-05-21 Thread GitBox
viongpanzi commented on a change in pull request #7716: AggregatorUtil should 
cache parsed expression to avoid memory problem (OOM/FGC) when Expression is 
used in metricsSpec
URL: https://github.com/apache/incubator-druid/pull/7716#discussion_r286290056
 
 

 ##
 File path: 
processing/src/main/java/org/apache/druid/query/aggregation/AggregatorUtil.java
 ##
 @@ -196,7 +246,7 @@ static BaseFloatColumnValueSelector 
makeColumnValueSelectorWithFloatDefault(
 if (fieldName != null) {
   return metricFactory.makeColumnValueSelector(fieldName);
 } else {
-  final Expr expr = Parser.parse(fieldExpression, macroTable);
+  final Expr expr = parseIfAbsent(fieldExpression, macroTable);
 
 Review comment:
   @himanshug Good advice! I'll push a new commit when I resolve it.
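   The parseIfAbsent idea under discussion can be sketched with 
ConcurrentHashMap.computeIfAbsent. This is a hypothetical stand-in: ParsedExpr 
replaces Druid's Expr, and the counter exists only to make the caching 
visible.
   
   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.atomic.AtomicInteger;
   
   // Sketch: memoize the parsed form keyed by the expression string, so
   // aggregator setup parses each distinct expression once instead of on
   // every call (the memory/GC problem this PR addresses).
   final class ExpressionCache
   {
     static final class ParsedExpr
     {
       final String source;
   
       ParsedExpr(String source)
       {
         this.source = source;
       }
     }
   
     private final Map<String, ParsedExpr> cache = new ConcurrentHashMap<>();
     final AtomicInteger parseCount = new AtomicInteger();
   
     ParsedExpr parseIfAbsent(String expression)
     {
       // computeIfAbsent runs the mapping function at most once per distinct key
       return cache.computeIfAbsent(expression, expr -> {
         parseCount.incrementAndGet();  // pretend this is the expensive parse
         return new ParsedExpr(expr);
       });
     }
   
     public static void main(String[] args)
     {
       ExpressionCache cache = new ExpressionCache();
       cache.parseIfAbsent("x + 1");
       cache.parseIfAbsent("x + 1");    // served from the cache
       System.out.println("parses: " + cache.parseCount.get());
     }
   }
   ```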





[GitHub] [incubator-druid] viongpanzi commented on a change in pull request #7716: AggregatorUtil should cache parsed expression to avoid memory problem (OOM/FGC) when Expression is used in metricsSpe

2019-05-21 Thread GitBox
viongpanzi commented on a change in pull request #7716: AggregatorUtil should 
cache parsed expression to avoid memory problem (OOM/FGC) when Expression is 
used in metricsSpec
URL: https://github.com/apache/incubator-druid/pull/7716#discussion_r286288369
 
 

 ##
 File path: core/src/main/java/org/apache/druid/math/expr/Parser.java
 ##
 @@ -75,6 +75,9 @@ public static Expr parse(String in, ExprMacroTable 
macroTable)
   @VisibleForTesting
   static Expr parse(String in, ExprMacroTable macroTable, boolean withFlatten)
   {
+if (log.isDebugEnabled()) {
 
 Review comment:
   @himanshug Thanks for your advice. I'll remove this check in the next 
commit; it was just for debugging purposes.





[GitHub] [incubator-druid] mcbrewster opened a new pull request #7725: Web-Console: add lookup-table-action-dialog to lookup-view

2019-05-21 Thread GitBox
mcbrewster opened a new pull request #7725: Web-Console: add 
lookup-table-action-dialog to lookup-view
URL: https://github.com/apache/incubator-druid/pull/7725
 
 
   Screenshots:
   https://user-images.githubusercontent.com/37322608/58136479-0d279000-7be3-11e9-9f83-d158a1c008a2.png
   https://user-images.githubusercontent.com/37322608/58136480-0d279000-7be3-11e9-9bb6-3194ff4160b6.png
   https://user-images.githubusercontent.com/37322608/58136486-10bb1700-7be3-11e9-9268-37c64b1d2bc3.png
   https://user-images.githubusercontent.com/37322608/58136487-10bb1700-7be3-11e9-953d-f1a528eb2b41.png
   
   lookup-table-action-dialog shows the result of a GET request to 
/druid/v1/lookups/introspect/{lookupId} under the map of values tab, and the 
result of a GET request to /druid/v1/lookups/introspect/{lookupId}/values 
under the values tab. 
   
   The actions are now accessible via the wrench icon, which is consistent 
with the tasks and supervisors views. The actions are obtained via the 
getlookupActions function; although the same actions are returned for all 
types of lookups, this way of getting actions is consistent with the task and 
supervisor actions and can easily be filtered if necessary. 





[GitHub] [incubator-druid] jon-wei commented on a change in pull request #7595: Optimize overshadowed segments computation

2019-05-21 Thread GitBox
jon-wei commented on a change in pull request #7595: Optimize overshadowed 
segments computation
URL: https://github.com/apache/incubator-druid/pull/7595#discussion_r286243330
 
 

 ##
 File path: 
server/src/main/java/org/apache/druid/metadata/SQLMetadataSegmentManager.java
 ##
 @@ -744,6 +756,32 @@ public DataSegment map(int index, ResultSet r, 
StatementContext ctx) throws SQLException
 
 // Replace "dataSources" atomically.
 dataSources = newDataSources;
+overshadowedSegments = 
ImmutableSet.copyOf(determineOvershadowedSegments(segments));
+  }
+
+  /**
+   * This method builds a timeline from given segments and finds the 
overshadowed segments
 
 Review comment:
   nit: the comment refers to a single timeline but multiple timelines are 
being built





[GitHub] [incubator-druid] jon-wei commented on a change in pull request #7595: Optimize overshadowed segments computation

2019-05-21 Thread GitBox
jon-wei commented on a change in pull request #7595: Optimize overshadowed 
segments computation
URL: https://github.com/apache/incubator-druid/pull/7595#discussion_r286247200
 
 

 ##
 File path: 
core/src/main/java/org/apache/druid/timeline/DataSegmentWithOvershadowedStatus.java
 ##
 @@ -25,16 +25,16 @@
 /**
  * DataSegment object plus the overshadowed status for the segment. An 
immutable object.
  *
- * SegmentWithOvershadowedStatus's {@link #compareTo} method considers only 
the {@link SegmentId}
+ * DataSegmentWithOvershadowedStatus's {@link #compareTo} method considers 
only the {@link SegmentId}
  * of the DataSegment object.
  */
-public class SegmentWithOvershadowedStatus implements 
Comparable<SegmentWithOvershadowedStatus>
+public class DataSegmentWithOvershadowedStatus implements 
Comparable<DataSegmentWithOvershadowedStatus>
 
 Review comment:
   I would undo the rename; I don't think it's really necessary here.





[GitHub] [incubator-druid] himanshug commented on a change in pull request #7717: Adding influxdb emitter as a contrib extension

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7717: Adding influxdb emitter 
as a contrib extension
URL: https://github.com/apache/incubator-druid/pull/7717#discussion_r286233035
 
 

 ##
 File path: 
extensions-contrib/influxdb-emitter/src/main/java/org/apache/druid/emitter/influxdb/InfluxdbEmitter.java
 ##
 @@ -0,0 +1,231 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.emitter.influxdb;
+
+import com.google.common.collect.ImmutableSet;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.emitter.core.Emitter;
+import org.apache.druid.java.util.emitter.core.Event;
+import org.apache.druid.java.util.emitter.service.ServiceMetricEvent;
+import org.apache.http.client.HttpClient;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.entity.ContentType;
+import org.apache.http.entity.StringEntity;
+import org.apache.http.impl.client.HttpClientBuilder;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.regex.Pattern;
+
+
+public class InfluxdbEmitter implements Emitter
+{
+
+  private static final Logger log = new Logger(InfluxdbEmitter.class);
+  private HttpClient influxdbClient;
+  private final InfluxdbEmitterConfig influxdbEmitterConfig;
+  private final AtomicBoolean started = new AtomicBoolean(false);
+  private final ScheduledExecutorService exec = 
Executors.newScheduledThreadPool(1, new ThreadFactoryBuilder()
+  .setDaemon(true)
+  .setNameFormat("InfluxdbEmitter-%s")
+  .build());
+
+  private final ImmutableSet<String> dimensionWhiteList;
+
+  private final LinkedBlockingQueue<ServiceMetricEvent> eventsQueue;
+
+  public InfluxdbEmitter(InfluxdbEmitterConfig influxdbEmitterConfig)
+  {
+this.influxdbEmitterConfig = influxdbEmitterConfig;
+this.influxdbClient = HttpClientBuilder.create().build();
+this.eventsQueue = new 
LinkedBlockingQueue<>(influxdbEmitterConfig.getMaxQueueSize());
+
+this.dimensionWhiteList = ImmutableSet.of(
+"dataSource",
+"type",
+"numMetrics",
+"numDimensions",
+"threshold",
+"dimension",
+"taskType",
+"taskStatus",
+"tier"
+);
+
+log.info("constructing influxdb emitter");
+  }
+
+  @Override
+  public void start()
+  {
+synchronized (started) {
+  if (!started.get()) {
+exec.scheduleAtFixedRate(
+new ConsumerRunnable(),
 
 Review comment:
   nit: drop ConsumerRunnable and
   ```suggestion
   () -> transformAndSendToInfluxdb(eventsQueue),
   ```





[GitHub] [incubator-druid] himanshug commented on a change in pull request #7717: Adding influxdb emitter as a contrib extension

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7717: Adding influxdb emitter 
as a contrib extension
URL: https://github.com/apache/incubator-druid/pull/7717#discussion_r286194227
 
 

 ##
 File path: 
extensions-contrib/influxdb-emitter/src/main/java/org/apache/druid/emitter/influxdb/InfluxdbEmitter.java
 ##
 @@ -0,0 +1,231 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.emitter.influxdb;
+
+import com.google.common.collect.ImmutableSet;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.emitter.core.Emitter;
+import org.apache.druid.java.util.emitter.core.Event;
+import org.apache.druid.java.util.emitter.service.ServiceMetricEvent;
+import org.apache.http.client.HttpClient;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.entity.ContentType;
+import org.apache.http.entity.StringEntity;
+import org.apache.http.impl.client.HttpClientBuilder;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.regex.Pattern;
+
+
+public class InfluxdbEmitter implements Emitter
+{
+
+  private static final Logger log = new Logger(InfluxdbEmitter.class);
+  private HttpClient influxdbClient;
 
 Review comment:
   ```suggestion
 private final HttpClient influxdbClient;
   ```





[GitHub] [incubator-druid] himanshug commented on a change in pull request #7717: Adding influxdb emitter as a contrib extension

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7717: Adding influxdb emitter 
as a contrib extension
URL: https://github.com/apache/incubator-druid/pull/7717#discussion_r286231085
 
 

 ##
 File path: 
extensions-contrib/influxdb-emitter/src/main/java/org/apache/druid/emitter/influxdb/InfluxdbEmitter.java
 ##
 @@ -0,0 +1,231 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.emitter.influxdb;
+
+import com.google.common.collect.ImmutableSet;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.emitter.core.Emitter;
+import org.apache.druid.java.util.emitter.core.Event;
+import org.apache.druid.java.util.emitter.service.ServiceMetricEvent;
+import org.apache.http.client.HttpClient;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.entity.ContentType;
+import org.apache.http.entity.StringEntity;
+import org.apache.http.impl.client.HttpClientBuilder;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.regex.Pattern;
+
+
+public class InfluxdbEmitter implements Emitter
+{
+
+  private static final Logger log = new Logger(InfluxdbEmitter.class);
+  private HttpClient influxdbClient;
+  private final InfluxdbEmitterConfig influxdbEmitterConfig;
+  private final AtomicBoolean started = new AtomicBoolean(false);
+  private final ScheduledExecutorService exec = 
Executors.newScheduledThreadPool(1, new ThreadFactoryBuilder()
+  .setDaemon(true)
+  .setNameFormat("InfluxdbEmitter-%s")
+  .build());
+
+  private final ImmutableSet<String> dimensionWhiteList;
+
+  private final LinkedBlockingQueue<ServiceMetricEvent> eventsQueue;
+
+  public InfluxdbEmitter(InfluxdbEmitterConfig influxdbEmitterConfig)
+  {
+this.influxdbEmitterConfig = influxdbEmitterConfig;
+this.influxdbClient = HttpClientBuilder.create().build();
+this.eventsQueue = new 
LinkedBlockingQueue<>(influxdbEmitterConfig.getMaxQueueSize());
+
+this.dimensionWhiteList = ImmutableSet.of(
+"dataSource",
+"type",
+"numMetrics",
+"numDimensions",
+"threshold",
+"dimension",
+"taskType",
+"taskStatus",
+"tier"
+);
+
+log.info("constructing influxdb emitter");
+  }
+
+  @Override
+  public void start()
+  {
+synchronized (started) {
+  if (!started.get()) {
+exec.scheduleAtFixedRate(
+new ConsumerRunnable(),
+influxdbEmitterConfig.getFlushDelay(),
+influxdbEmitterConfig.getFlushPeriod(),
+TimeUnit.MILLISECONDS
+);
+started.set(true);
+  }
+}
+  }
+
+  @Override
+  public void emit(Event event)
+  {
+if (event instanceof ServiceMetricEvent) {
+  ServiceMetricEvent metricEvent = (ServiceMetricEvent) event;
+  try {
+eventsQueue.put(metricEvent);
+  }
+  catch (InterruptedException exception) {
+log.error(exception.toString());
+Thread.currentThread().interrupt();
+  }
+}
+  }
+
+  public void postToInflux(String payload)
+  {
+HttpPost post = new HttpPost(
+"http://" + influxdbEmitterConfig.getHostname()
++ ":" + influxdbEmitterConfig.getPort()
++ "/write?db=" + influxdbEmitterConfig.getDatabaseName()
++ "&u=" + influxdbEmitterConfig.getInfluxdbUserName()
++ "&p=" + influxdbEmitterConfig.getInfluxdbPassword()
+);
+
+post.setEntity(new StringEntity(payload, ContentType.DEFAULT_TEXT));
+post.setHeader("Content-Type", "application/x-www-form-urlencoded");
+
+try {
+  influxdbClient.execute(post);
+}
+catch (IOException ex) {
+  log.info(ex.toString());
+}
+finally {
+  post.releaseConnection();
+}
+  }
+
+  public String transformForInfluxSystems(ServiceMetricEvent event)
+  {
+// split Druid metric on slashes and join middle parts (if any) with "_"
+String[] parts = 

[GitHub] [incubator-druid] himanshug commented on a change in pull request #7717: Adding influxdb emitter as a contrib extension

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7717: Adding influxdb emitter 
as a contrib extension
URL: https://github.com/apache/incubator-druid/pull/7717#discussion_r286231336
 
 


[GitHub] [incubator-druid] himanshug commented on a change in pull request #7717: Adding influxdb emitter as a contrib extension

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7717: Adding influxdb emitter 
as a contrib extension
URL: https://github.com/apache/incubator-druid/pull/7717#discussion_r286231618
 
 

 ##
 File path: 
extensions-contrib/influxdb-emitter/src/main/java/org/apache/druid/emitter/influxdb/InfluxdbEmitter.java
 ##
 @@ -0,0 +1,231 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.emitter.influxdb;
+
+import com.google.common.collect.ImmutableSet;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.emitter.core.Emitter;
+import org.apache.druid.java.util.emitter.core.Event;
+import org.apache.druid.java.util.emitter.service.ServiceMetricEvent;
+import org.apache.http.client.HttpClient;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.entity.ContentType;
+import org.apache.http.entity.StringEntity;
+import org.apache.http.impl.client.HttpClientBuilder;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.regex.Pattern;
+
+
+public class InfluxdbEmitter implements Emitter
+{
+
+  private static final Logger log = new Logger(InfluxdbEmitter.class);
+  private HttpClient influxdbClient;
+  private final InfluxdbEmitterConfig influxdbEmitterConfig;
+  private final AtomicBoolean started = new AtomicBoolean(false);
+  private final ScheduledExecutorService exec = Executors.newScheduledThreadPool(1, new ThreadFactoryBuilder()
+      .setDaemon(true)
+      .setNameFormat("InfluxdbEmitter-%s")
+      .build());
+
+  private final ImmutableSet<String> dimensionWhiteList;
+
+  private final LinkedBlockingQueue<ServiceMetricEvent> eventsQueue;
+
+  public InfluxdbEmitter(InfluxdbEmitterConfig influxdbEmitterConfig)
+  {
+this.influxdbEmitterConfig = influxdbEmitterConfig;
+this.influxdbClient = HttpClientBuilder.create().build();
+    this.eventsQueue = new LinkedBlockingQueue<>(influxdbEmitterConfig.getMaxQueueSize());
+
+this.dimensionWhiteList = ImmutableSet.of(
+"dataSource",
+"type",
+"numMetrics",
+"numDimensions",
+"threshold",
+"dimension",
+"taskType",
+"taskStatus",
+"tier"
+);
+
+log.info("constructing influxdb emitter");
+  }
+
+  @Override
+  public void start()
+  {
+synchronized (started) {
+  if (!started.get()) {
+exec.scheduleAtFixedRate(
+new ConsumerRunnable(),
+influxdbEmitterConfig.getFlushDelay(),
+influxdbEmitterConfig.getFlushPeriod(),
+TimeUnit.MILLISECONDS
+);
+started.set(true);
+  }
+}
+  }
+
+  @Override
+  public void emit(Event event)
+  {
+if (event instanceof ServiceMetricEvent) {
+  ServiceMetricEvent metricEvent = (ServiceMetricEvent) event;
+  try {
+eventsQueue.put(metricEvent);
+  }
+  catch (InterruptedException exception) {
+log.error(exception.toString());
+Thread.currentThread().interrupt();
+  }
+}
+  }
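As an aside on the `emit` method above: it hands events to the bounded `eventsQueue` via the blocking `put`, so callers stall rather than drop events once `maxQueueSize` is reached. A standalone sketch (not the PR's code) contrasting a bounded `LinkedBlockingQueue` at capacity using the non-blocking `offer`, which would drop instead:

```java
import java.util.concurrent.LinkedBlockingQueue;

public class BoundedQueueSketch
{
  // With capacity 1, the second non-blocking offer() is rejected;
  // put() in the same situation would block until space frees up.
  static boolean offerIntoFullQueue()
  {
    LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(1);
    queue.offer("first");          // accepted: queue now at capacity
    return queue.offer("second");  // rejected immediately: queue is full
  }

  public static void main(String[] args)
  {
    System.out.println(offerIntoFullQueue()); // prints false
  }
}
```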
+
+  public void postToInflux(String payload)
+  {
+HttpPost post = new HttpPost(
+        "http://" + influxdbEmitterConfig.getHostname()
+        + ":" + influxdbEmitterConfig.getPort()
+        + "/write?db=" + influxdbEmitterConfig.getDatabaseName()
+        + "&u=" + influxdbEmitterConfig.getInfluxdbUserName()
+        + "&p=" + influxdbEmitterConfig.getInfluxdbPassword()
+);
+
+post.setEntity(new StringEntity(payload, ContentType.DEFAULT_TEXT));
+post.setHeader("Content-Type", "application/x-www-form-urlencoded");
+
+try {
+  influxdbClient.execute(post);
+}
+catch (IOException ex) {
+  log.info(ex.toString());
+}
+finally {
+  post.releaseConnection();
+}
+  }
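The write URL in `postToInflux` targets InfluxDB 1.x's `/write` endpoint, which takes the database name and credentials as `db`, `u`, and `p` query parameters. A hedged sketch of the same concatenation with each parameter value URL-encoded (a hypothetical helper, not part of the patch):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class InfluxWriteUrlSketch
{
  // Hypothetical helper mirroring the concatenation above, but encoding
  // each query-parameter value so special characters cannot break the URL.
  static String writeUrl(String host, int port, String db, String user, String password)
  {
    return "http://" + host + ":" + port
        + "/write?db=" + URLEncoder.encode(db, StandardCharsets.UTF_8)
        + "&u=" + URLEncoder.encode(user, StandardCharsets.UTF_8)
        + "&p=" + URLEncoder.encode(password, StandardCharsets.UTF_8);
  }

  public static void main(String[] args)
  {
    System.out.println(writeUrl("localhost", 8086, "druid", "admin", "s3cret"));
  }
}
```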
+
+  public String transformForInfluxSystems(ServiceMetricEvent event)
+  {
+// split Druid metric on slashes and join middle parts (if any) with "_"
+String[] parts = 
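The comment above describes splitting the Druid metric name on slashes and joining the middle parts with `_`; the rest of the method is truncated in this archive. A minimal illustration of that splitting idea (a hypothetical helper, not the PR's actual implementation):

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class MetricNameSketch
{
  // Hypothetical: for "query/node/backpressure/time", the middle parts
  // "node" and "backpressure" are joined as "node_backpressure".
  static String joinMiddleParts(String metric)
  {
    String[] parts = metric.split("/");
    if (parts.length <= 2) {
      return "";  // no middle parts to join
    }
    return Arrays.stream(parts, 1, parts.length - 1).collect(Collectors.joining("_"));
  }

  public static void main(String[] args)
  {
    System.out.println(joinMiddleParts("query/node/backpressure/time")); // node_backpressure
  }
}
```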

[GitHub] [incubator-druid] himanshug commented on a change in pull request #7717: Adding influxdb emitter as a contrib extension

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7717: Adding influxdb emitter 
as a contrib extension
URL: https://github.com/apache/incubator-druid/pull/7717#discussion_r286196992
 
 

 ##
 File path: 
extensions-contrib/influxdb-emitter/src/main/java/org/apache/druid/emitter/influxdb/InfluxdbEmitter.java
 ##
 @@ -0,0 +1,231 @@
+  private final ScheduledExecutorService exec = Executors.newScheduledThreadPool(1, new ThreadFactoryBuilder()
+      .setDaemon(true)
+      .setNameFormat("InfluxdbEmitter-%s")
+      .build());
 
 Review comment:
   ```suggestion
  private final ScheduledExecutorService exec = ScheduledExecutors.fixed(1, "InfluxdbEmitter-%s");
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] [incubator-druid] himanshug commented on a change in pull request #7717: Adding influxdb emitter as a contrib extension

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7717: Adding influxdb emitter 
as a contrib extension
URL: https://github.com/apache/incubator-druid/pull/7717#discussion_r286232467
 
 

 ##
 File path: 
extensions-contrib/influxdb-emitter/src/main/java/org/apache/druid/emitter/influxdb/InfluxdbEmitter.java
 ##
 @@ -0,0 +1,231 @@

[GitHub] [incubator-druid] himanshug commented on a change in pull request #7717: Adding influxdb emitter as a contrib extension

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7717: Adding influxdb emitter 
as a contrib extension
URL: https://github.com/apache/incubator-druid/pull/7717#discussion_r286194864
 
 

 ##
 File path: 
extensions-contrib/influxdb-emitter/src/main/java/org/apache/druid/emitter/influxdb/InfluxdbEmitter.java
 ##
 @@ -0,0 +1,231 @@
+log.info("constructing influxdb emitter");
 
 Review comment:
   ```suggestion
   log.info("constructed influxdb emitter");
   ```





[GitHub] [incubator-druid] himanshug commented on a change in pull request #7717: Adding influxdb emitter as a contrib extension

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7717: Adding influxdb emitter 
as a contrib extension
URL: https://github.com/apache/incubator-druid/pull/7717#discussion_r286233630
 
 

 ##
 File path: 
extensions-contrib/influxdb-emitter/src/main/resources/META-INF/services/org.apache.druid.initialization.DruidModule
 ##
 @@ -0,0 +1,16 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+org.apache.druid.emitter.influxdb.InfluxdbEmitterModule
 
 Review comment:
   nit: newline





[GitHub] [incubator-druid] himanshug commented on a change in pull request #7717: Adding influxdb emitter as a contrib extension

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7717: Adding influxdb emitter 
as a contrib extension
URL: https://github.com/apache/incubator-druid/pull/7717#discussion_r286232561
 
 

 ##
 File path: 
extensions-contrib/influxdb-emitter/src/main/java/org/apache/druid/emitter/influxdb/InfluxdbEmitter.java
 ##
 @@ -0,0 +1,231 @@

[GitHub] [incubator-druid] himanshug commented on a change in pull request #7717: Adding influxdb emitter as a contrib extension

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7717: Adding influxdb emitter 
as a contrib extension
URL: https://github.com/apache/incubator-druid/pull/7717#discussion_r286195480
 
 

 ##
 File path: 
extensions-contrib/influxdb-emitter/src/main/java/org/apache/druid/emitter/influxdb/InfluxdbEmitter.java
 ##
 @@ -0,0 +1,231 @@
+this.dimensionWhiteList = ImmutableSet.of(
+"dataSource",
+"type",
+"numMetrics",
+"numDimensions",
+"threshold",
+"dimension",
+"taskType",
+"taskStatus",
+"tier"
+);
 
 Review comment:
   would be nice to have this configurable.





[GitHub] [incubator-druid] himanshug commented on a change in pull request #7717: Adding influxdb emitter as a contrib extension

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7717: Adding influxdb emitter 
as a contrib extension
URL: https://github.com/apache/incubator-druid/pull/7717#discussion_r286233348
 
 

 ##
 File path: 
extensions-contrib/influxdb-emitter/src/main/java/org/apache/druid/emitter/influxdb/InfluxdbEmitter.java
 ##
 @@ -0,0 +1,231 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.emitter.influxdb;
+
+import com.google.common.collect.ImmutableSet;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.emitter.core.Emitter;
+import org.apache.druid.java.util.emitter.core.Event;
+import org.apache.druid.java.util.emitter.service.ServiceMetricEvent;
+import org.apache.http.client.HttpClient;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.entity.ContentType;
+import org.apache.http.entity.StringEntity;
+import org.apache.http.impl.client.HttpClientBuilder;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.regex.Pattern;
+
+
+public class InfluxdbEmitter implements Emitter
+{
+
+  private static final Logger log = new Logger(InfluxdbEmitter.class);
+  private HttpClient influxdbClient;
+  private final InfluxdbEmitterConfig influxdbEmitterConfig;
+  private final AtomicBoolean started = new AtomicBoolean(false);
+  private final ScheduledExecutorService exec = 
Executors.newScheduledThreadPool(1, new ThreadFactoryBuilder()
+  .setDaemon(true)
+  .setNameFormat("InfluxdbEmitter-%s")
+  .build());
+
+  private final ImmutableSet dimensionWhiteList;
+
+  private final LinkedBlockingQueue eventsQueue;
+
+  public InfluxdbEmitter(InfluxdbEmitterConfig influxdbEmitterConfig)
+  {
+this.influxdbEmitterConfig = influxdbEmitterConfig;
+this.influxdbClient = HttpClientBuilder.create().build();
+this.eventsQueue = new 
LinkedBlockingQueue<>(influxdbEmitterConfig.getMaxQueueSize());
+
+this.dimensionWhiteList = ImmutableSet.of(
+"dataSource",
+"type",
+"numMetrics",
+"numDimensions",
+"threshold",
+"dimension",
+"taskType",
+"taskStatus",
+"tier"
+);
+
+log.info("constructing influxdb emitter");
+  }
+
+  @Override
+  public void start()
+  {
+synchronized (started) {
+  if (!started.get()) {
+exec.scheduleAtFixedRate(
+new ConsumerRunnable(),
+influxdbEmitterConfig.getFlushDelay(),
+influxdbEmitterConfig.getFlushPeriod(),
+TimeUnit.MILLISECONDS
+);
+started.set(true);
+  }
+}
+  }
+
+  @Override
+  public void emit(Event event)
+  {
+if (event instanceof ServiceMetricEvent) {
+  ServiceMetricEvent metricEvent = (ServiceMetricEvent) event;
+  try {
+eventsQueue.put(metricEvent);
+  }
+  catch (InterruptedException exception) {
+log.error(exception.toString());
+Thread.currentThread().interrupt();
+  }
+}
+  }
+
+  public void postToInflux(String payload)
+  {
+HttpPost post = new HttpPost(
+"http://" + influxdbEmitterConfig.getHostname()
++ ":" + influxdbEmitterConfig.getPort()
++ "/write?db=" + influxdbEmitterConfig.getDatabaseName()
++ "&u=" + influxdbEmitterConfig.getInfluxdbUserName()
++ "&p=" + influxdbEmitterConfig.getInfluxdbPassword()
+);
+
+post.setEntity(new StringEntity(payload, ContentType.DEFAULT_TEXT));
+post.setHeader("Content-Type", "application/x-www-form-urlencoded");
+
+try {
+  influxdbClient.execute(post);
+}
+catch (IOException ex) {
+  log.info(ex.toString());
 
 Review comment:
   this swallows the stacktrace.
   
   ```suggestion
 log.info(ex, "Failed to post events to InfluxDB.");
   ```


This is an automated message from the 

[GitHub] [incubator-druid] himanshug commented on a change in pull request #7717: Adding influxdb emitter as a contrib extension

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7717: Adding influxdb emitter 
as a contrib extension
URL: https://github.com/apache/incubator-druid/pull/7717#discussion_r286197759
 
 

 ##
 File path: 
extensions-contrib/influxdb-emitter/src/main/java/org/apache/druid/emitter/influxdb/InfluxdbEmitter.java
 ##
 @@ -0,0 +1,231 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.emitter.influxdb;
+
+import com.google.common.collect.ImmutableSet;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.java.util.emitter.core.Emitter;
+import org.apache.druid.java.util.emitter.core.Event;
+import org.apache.druid.java.util.emitter.service.ServiceMetricEvent;
+import org.apache.http.client.HttpClient;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.entity.ContentType;
+import org.apache.http.entity.StringEntity;
+import org.apache.http.impl.client.HttpClientBuilder;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.regex.Pattern;
+
+
+public class InfluxdbEmitter implements Emitter
+{
+
+  private static final Logger log = new Logger(InfluxdbEmitter.class);
+  private HttpClient influxdbClient;
+  private final InfluxdbEmitterConfig influxdbEmitterConfig;
+  private final AtomicBoolean started = new AtomicBoolean(false);
+  private final ScheduledExecutorService exec = Executors.newScheduledThreadPool(1, new ThreadFactoryBuilder()
+  .setDaemon(true)
+  .setNameFormat("InfluxdbEmitter-%s")
+  .build());
+
+  private final ImmutableSet<String> dimensionWhiteList;
+
+  private final LinkedBlockingQueue<ServiceMetricEvent> eventsQueue;
+
+  public InfluxdbEmitter(InfluxdbEmitterConfig influxdbEmitterConfig)
+  {
+this.influxdbEmitterConfig = influxdbEmitterConfig;
+this.influxdbClient = HttpClientBuilder.create().build();
+this.eventsQueue = new LinkedBlockingQueue<>(influxdbEmitterConfig.getMaxQueueSize());
+
+this.dimensionWhiteList = ImmutableSet.of(
+"dataSource",
+"type",
+"numMetrics",
+"numDimensions",
+"threshold",
+"dimension",
+"taskType",
+"taskStatus",
+"tier"
+);
+
+log.info("constructing influxdb emitter");
+  }
+
+  @Override
+  public void start()
+  {
+synchronized (started) {
+  if (!started.get()) {
+exec.scheduleAtFixedRate(
+new ConsumerRunnable(),
+influxdbEmitterConfig.getFlushDelay(),
+influxdbEmitterConfig.getFlushPeriod(),
+TimeUnit.MILLISECONDS
+);
+started.set(true);
+  }
+}
+  }
+
+  @Override
+  public void emit(Event event)
+  {
+if (event instanceof ServiceMetricEvent) {
+  ServiceMetricEvent metricEvent = (ServiceMetricEvent) event;
+  try {
+eventsQueue.put(metricEvent);
+  }
+  catch (InterruptedException exception) {
+log.error(exception.toString());
 
 Review comment:
   this would swallow the stacktrace.
   
   ```suggestion
   log.error(exception, "Failed to add metricEvent to events queue.");
   ```
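Both review comments above make the same point: logging `exception.toString()` keeps only the exception class and message, while passing the `Throwable` itself to the logger preserves the stack trace. A minimal stdlib sketch of the difference (this is plain `java.io`, not Druid's `Logger`, whose `error(Throwable, String)` overload renders the trace internally):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class LoggingDemo
{
  // toString() yields only the exception class and message -- no stack frames.
  public static String messageOnly(Throwable t)
  {
    return t.toString();
  }

  // Rendering the full stack trace keeps every frame; logger methods that take
  // the Throwable itself do the equivalent of this internally.
  public static String fullTrace(Throwable t)
  {
    StringWriter sw = new StringWriter();
    t.printStackTrace(new PrintWriter(sw));
    return sw.toString();
  }

  public static void main(String[] args)
  {
    Throwable t = new RuntimeException("boom");
    System.out.println(messageOnly(t));                  // java.lang.RuntimeException: boom
    System.out.println(fullTrace(t).contains("\tat "));  // true -- frames survive
  }
}
```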


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] [incubator-druid] gerbal opened a new issue #7724: Lookup transform expression not defined during ingest from kafka

2019-05-21 Thread GitBox
gerbal opened a new issue #7724: Lookup transform expression not defined during 
ingest from kafka
URL: https://github.com/apache/incubator-druid/issues/7724
 
 
   ### Affected Version
   
   master
   
   ### Description
   
   When trying to resolve an attribute via a lookup at ingestion time I get the 
following error: 
   
   
   ```
   2019-05-21T21:06:54,247 ERROR [KafkaSupervisor-example] 
org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor - 
SeekableStreamSupervisor[example] failed to handle notice: 
{class=org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor,
 exceptionType=class org.apache.druid.java.util.common.RE, 
exceptionMessage=function 'lookup' is not defined., noticeClass=RunNotice}
   org.apache.druid.java.util.common.RE: function 'lookup' is not defined.
   at 
org.apache.druid.math.expr.ExprListenerImpl.exitFunctionExpr(ExprListenerImpl.java:303)
 ~[druid-core-0.15.0-incubating-SNAPSHOT.jar:0.15.0-incubating-SNAPSHOT]
   at 
org.apache.druid.math.expr.antlr.ExprParser$FunctionExprContext.exitRule(ExprParser.java:212)
 ~[druid-core-0.15.0-incubating-SNAPSHOT.jar:0.15.0-incubating-SNAPSHOT]
   at 
org.antlr.v4.runtime.tree.ParseTreeWalker.exitRule(ParseTreeWalker.java:71) 
~[antlr4-runtime-4.5.1.jar:4.5.1]
   at 
org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:54) 
~[antlr4-runtime-4.5.1.jar:4.5.1]
   at org.apache.druid.math.expr.Parser.parse(Parser.java:85) 
~[druid-core-0.15.0-incubating-SNAPSHOT.jar:0.15.0-incubating-SNAPSHOT]
   at org.apache.druid.math.expr.Parser.parse(Parser.java:72) 
~[druid-core-0.15.0-incubating-SNAPSHOT.jar:0.15.0-incubating-SNAPSHOT]
   at 
org.apache.druid.segment.transform.ExpressionTransform.getRowFunction(ExpressionTransform.java:68)
 ~[druid-processing-0.15.0-incubating-SNAPSHOT.jar:0.15.0-incubating-SNAPSHOT]
   at 
org.apache.druid.segment.transform.Transformer.<init>(Transformer.java:50) 
~[druid-processing-0.15.0-incubating-SNAPSHOT.jar:0.15.0-incubating-SNAPSHOT]
   at 
org.apache.druid.segment.transform.TransformSpec.toTransformer(TransformSpec.java:122)
 ~[druid-processing-0.15.0-incubating-SNAPSHOT.jar:0.15.0-incubating-SNAPSHOT]
   at 
org.apache.druid.segment.transform.TransformingStringInputRowParser.<init>(TransformingStringInputRowParser.java:44)
 ~[druid-processing-0.15.0-incubating-SNAPSHOT.jar:0.15.0-incubating-SNAPSHOT]
   at 
org.apache.druid.segment.transform.TransformSpec.decorate(TransformSpec.java:108)
 ~[druid-processing-0.15.0-incubating-SNAPSHOT.jar:0.15.0-incubating-SNAPSHOT]
   at 
org.apache.druid.segment.indexing.DataSchema.getParser(DataSchema.java:125) 
~[druid-server-0.15.0-incubating-SNAPSHOT.jar:0.15.0-incubating-SNAPSHOT]
   at 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTask.<init>(SeekableStreamIndexTask.java:102)
 
~[druid-indexing-service-0.15.0-incubating-SNAPSHOT.jar:0.15.0-incubating-SNAPSHOT]
   at 
org.apache.druid.indexing.kafka.KafkaIndexTask.<init>(KafkaIndexTask.java:70) 
~[?:?]
   at 
org.apache.druid.indexing.kafka.supervisor.KafkaSupervisor.createIndexTasks(KafkaSupervisor.java:246)
 ~[?:?]
   at 
org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor.createTasksForGroup(SeekableStreamSupervisor.java:2492)
 
~[druid-indexing-service-0.15.0-incubating-SNAPSHOT.jar:0.15.0-incubating-SNAPSHOT]
   at 
org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor.createNewTasks(SeekableStreamSupervisor.java:2306)
 
~[druid-indexing-service-0.15.0-incubating-SNAPSHOT.jar:0.15.0-incubating-SNAPSHOT]
   at 
org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor.runInternal(SeekableStreamSupervisor.java:1012)
 
~[druid-indexing-service-0.15.0-incubating-SNAPSHOT.jar:0.15.0-incubating-SNAPSHOT]
   at 
org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor$RunNotice.handle(SeekableStreamSupervisor.java:264)
 
~[druid-indexing-service-0.15.0-incubating-SNAPSHOT.jar:0.15.0-incubating-SNAPSHOT]
   at 
org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor.lambda$tryInit$3(SeekableStreamSupervisor.java:723)
 
~[druid-indexing-service-0.15.0-incubating-SNAPSHOT.jar:0.15.0-incubating-SNAPSHOT]
   at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[?:1.8.0_181]
   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_181]
   at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_181]
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_181]
   at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
   ```
   
   
   Here is the ingestion spec:
   
   ```json
   {
 "type": "kafka",
 "dataSchema": {
   "dataSource": "example",
  

[GitHub] [incubator-druid] mcbrewster commented on a change in pull request #7723: Web-console: add resizable split screen layout to tasks and servers views

2019-05-21 Thread GitBox
mcbrewster commented on a change in pull request #7723: Web-console: add 
resizable split screen layout to tasks and servers views 
URL: https://github.com/apache/incubator-druid/pull/7723#discussion_r286200999
 
 

 ##
 File path: web-console/package.json
 ##
 @@ -35,6 +35,7 @@
   },
   "dependencies": {
 "@blueprintjs/core": "^3.15.1",
+"@types/react-splitter-layout": "^3.0.0",
 
 Review comment:
   fixed!





[GitHub] [incubator-druid] jihoonson commented on issue #7706: Get correct curr_size attribute value for historical servers

2019-05-21 Thread GitBox
jihoonson commented on issue #7706: Get correct curr_size attribute value for 
historical servers
URL: https://github.com/apache/incubator-druid/pull/7706#issuecomment-494535952
 
 
   @fjy this bug was introduced in 
https://github.com/apache/incubator-druid/pull/7654.





[GitHub] [incubator-druid] vogievetsky commented on a change in pull request #7723: Web-console: add resizable split screen layout to tasks and servers views

2019-05-21 Thread GitBox
vogievetsky commented on a change in pull request #7723: Web-console: add 
resizable split screen layout to tasks and servers views 
URL: https://github.com/apache/incubator-druid/pull/7723#discussion_r286199454
 
 

 ##
 File path: web-console/package.json
 ##
 @@ -35,6 +35,7 @@
   },
   "dependencies": {
 "@blueprintjs/core": "^3.15.1",
+"@types/react-splitter-layout": "^3.0.0",
 
 Review comment:
   the types are dev dependencies





[GitHub] [incubator-druid] mcbrewster opened a new pull request #7723: Web-console: add resizable split screen layout to tasks and servers views

2019-05-21 Thread GitBox
mcbrewster opened a new pull request #7723: Web-console: add resizable split 
screen layout to tasks and servers views 
URL: https://github.com/apache/incubator-druid/pull/7723
 
 
   https://user-images.githubusercontent.com/37322608/58125346-39ccaf00-7bc5-11e9-9422-e49d7d6dcaed.png
   
   React-splitter-layout allows the split-screen views to be dragged and
resized. When the user finishes dragging, the current size of the secondary pane
is stored in local storage so the layout remains the same if the page is
refreshed. If there is no size value in local storage, the view defaults to a
60/40 split.
   
   Additionally, the navigation buttons now follow the pane when resized.





[GitHub] [incubator-druid] himanshug commented on a change in pull request #7718: allow quantiles merge aggregator to also accept doubles

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7718: allow quantiles merge 
aggregator to also accept doubles
URL: https://github.com/apache/incubator-druid/pull/7718#discussion_r286192541
 
 

 ##
 File path: 
extensions-core/datasketches/src/main/java/org/apache/druid/query/aggregation/datasketches/quantiles/DoublesSketchMergeBufferAggregator.java
 ##
 @@ -62,12 +62,16 @@ public synchronized void init(final ByteBuffer buffer, 
final int position)
   @Override
   public synchronized void aggregate(final ByteBuffer buffer, final int 
position)
   {
-final DoublesSketch sketch = selector.getObject();
-if (sketch == null) {
+final Object object = selector.getObject();
+if (object == null) {
   return;
 }
 final DoublesUnion union = unions.get(buffer).get(position);
-union.update(sketch);
+if (object instanceof DoublesSketch) {
+  union.update((DoublesSketch) object);
+} else {
+  union.update(selector.getDouble());
+}
 
 Review comment:
   nit: would be nice to extract last 5 lines in either of the aggregator 
classes.
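The extraction the nit asks for might look like the following hypothetical helper (`updateUnion` is an invented name, and the two classes below are bare stand-ins for the DataSketches types, included only so the sketch compiles on its own):

```java
// Stand-ins for the DataSketches types, only to make this sketch runnable.
class DoublesSketch
{
}

class DoublesUnion
{
  int sketchUpdates;
  int doubleUpdates;

  void update(DoublesSketch sketch)
  {
    sketchUpdates++;
  }

  void update(double value)
  {
    doubleUpdates++;
  }
}

public class UnionHelper
{
  // Hypothetical shared helper extracting the repeated instanceof branch so
  // both aggregator classes can reuse it instead of duplicating the logic.
  public static void updateUnion(DoublesUnion union, Object object, double fallback)
  {
    if (object instanceof DoublesSketch) {
      union.update((DoublesSketch) object);
    } else {
      union.update(fallback);
    }
  }

  public static void main(String[] args)
  {
    DoublesUnion union = new DoublesUnion();
    updateUnion(union, new DoublesSketch(), 0.0);  // routed to the sketch overload
    updateUnion(union, "not a sketch", 1.5);       // routed to the double overload
    System.out.println(union.sketchUpdates + " " + union.doubleUpdates);  // 1 1
  }
}
```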





[GitHub] [incubator-druid] himanshug commented on a change in pull request #7716: AggregatorUtil should cache parsed expression to avoid memory problem (OOM/FGC) when Expression is used in metricsSpec

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7716: AggregatorUtil should 
cache parsed expression to avoid memory problem (OOM/FGC) when Expression is 
used in metricsSpec
URL: https://github.com/apache/incubator-druid/pull/7716#discussion_r286188705
 
 

 ##
 File path: core/src/main/java/org/apache/druid/math/expr/Parser.java
 ##
 @@ -75,6 +75,9 @@ public static Expr parse(String in, ExprMacroTable 
macroTable)
   @VisibleForTesting
   static Expr parse(String in, ExprMacroTable macroTable, boolean withFlatten)
   {
+if (log.isDebugEnabled()) {
 
 Review comment:
   log.debug(..) has this check inside it, so you can use string format instead of string concatenation and remove this check.
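To illustrate the point: a logger's `debug(format, args...)` method performs the level check itself, so the caller needs neither an explicit guard nor eager concatenation. A tiny stand-in (not the actual Druid logger) shows the shape:

```java
public class DebugLogDemo
{
  static boolean debugEnabled = false;
  static int formatCalls = 0;

  // Stand-in for a logger's debug(format, args...): the enabled check lives
  // inside the method, so callers don't need their own isDebugEnabled() guard,
  // and the format string is only rendered when debug logging is on.
  static void debug(String format, Object... args)
  {
    if (debugEnabled) {
      formatCalls++;
      System.out.println(String.format(format, args));
    }
  }

  public static void main(String[] args)
  {
    debug("parsing expression [%s]", "x + 1");  // level off: nothing is formatted
    System.out.println(formatCalls);            // 0
    debugEnabled = true;
    debug("parsing expression [%s]", "x + 1");  // now formatted and printed
    System.out.println(formatCalls);            // 1
  }
}
```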





[GitHub] [incubator-druid] himanshug commented on a change in pull request #7716: AggregatorUtil should cache parsed expression to avoid memory problem (OOM/FGC) when Expression is used in metricsSpec

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7716: AggregatorUtil should 
cache parsed expression to avoid memory problem (OOM/FGC) when Expression is 
used in metricsSpec
URL: https://github.com/apache/incubator-druid/pull/7716#discussion_r286190859
 
 

 ##
 File path: 
processing/src/main/java/org/apache/druid/query/aggregation/AggregatorUtil.java
 ##
 @@ -196,7 +246,7 @@ static BaseFloatColumnValueSelector 
makeColumnValueSelectorWithFloatDefault(
 if (fieldName != null) {
   return metricFactory.makeColumnValueSelector(fieldName);
 } else {
-  final Expr expr = Parser.parse(fieldExpression, macroTable);
+  final Expr expr = parseIfAbsent(fieldExpression, macroTable);
 
 Review comment:
   It would be better not to have an explicit cache added in this class. Instead, if you change the arguments of this (and other similar methods) to...
   
   ```
  static BaseFloatColumnValueSelector makeColumnValueSelectorWithFloatDefault(
 final ColumnSelectorFactory metricFactory,
 @Nullable final String fieldName,
 @Nullable final Expr fieldExpression,
 final float nullValue
 )
   ```
   
   and let caching happen in the `SimpleXXAggregatorFactory` classes. Don't parse
in the constructor of those classes, because those objects could be created in
many places where the parsed expression isn't needed, so do the parsing lazily.
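A minimal sketch of the lazy-parse-and-cache idea (the `parse` method and names here are invented stand-ins; Guava's `Suppliers.memoize` gives the same behavior off the shelf):

```java
import java.util.function.Supplier;

public class LazyExpr
{
  static int parseCount = 0;

  // Stand-in for an expression parser; counts invocations so laziness is visible.
  static String parse(String expression)
  {
    parseCount++;
    return "parsed:" + expression;
  }

  // A memoizing supplier: parsing happens on the first get() only, so factory
  // objects created where the parsed expression is never needed pay nothing.
  static Supplier<String> memoizedParse(final String expression)
  {
    return new Supplier<String>()
    {
      private String cached;

      @Override
      public synchronized String get()
      {
        if (cached == null) {
          cached = parse(expression);
        }
        return cached;
      }
    };
  }

  public static void main(String[] args)
  {
    Supplier<String> expr = memoizedParse("a + b");
    System.out.println(parseCount);  // 0 -- nothing parsed at construction time
    expr.get();
    expr.get();
    System.out.println(parseCount);  // 1 -- parsed once, then cached
  }
}
```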





[GitHub] [incubator-druid] himanshug commented on a change in pull request #7716: AggregatorUtil should cache parsed expression to avoid memory problem (OOM/FGC) when Expression is used in metricsSpec

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7716: AggregatorUtil should 
cache parsed expression to avoid memory problem (OOM/FGC) when Expression is 
used in metricsSpec
URL: https://github.com/apache/incubator-druid/pull/7716#discussion_r286191367
 
 

 ##
 File path: core/src/main/java/org/apache/druid/math/expr/Parser.java
 ##
 @@ -75,6 +75,9 @@ public static Expr parse(String in, ExprMacroTable 
macroTable)
   @VisibleForTesting
   static Expr parse(String in, ExprMacroTable macroTable, boolean withFlatten)
   {
+if (log.isDebugEnabled()) {
 
 Review comment:
   also, not sure if you really need this log at all.





[GitHub] [incubator-druid] jihoonson commented on issue #7571: Optimize coordinator API to retrieve segments with overshadowed status

2019-05-21 Thread GitBox
jihoonson commented on issue #7571: Optimize coordinator API to retrieve 
segments with overshadowed status
URL: 
https://github.com/apache/incubator-druid/issues/7571#issuecomment-494516109
 
 
   For segment metadata, perhaps it's better to have different data structures
for segments that are actively generated by stream ingestion tasks and for all
other segments. For example, the former might not have `loadSpec` and `size`
fields, to reduce confusion. For proper interning, we might need a separate map
(segmentId -> segment metadata) for each data structure, with key sets that
never overlap.
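A minimal sketch of that interning scheme, with `String` standing in for the real segment-metadata type and two maps whose key sets are assumed disjoint:

```java
import java.util.concurrent.ConcurrentHashMap;

public class SegmentInterner
{
  // Two interning maps: one for segments from active streaming tasks and one
  // for published segments; their key sets are assumed never to overlap.
  private final ConcurrentHashMap<String, String> realtime = new ConcurrentHashMap<>();
  private final ConcurrentHashMap<String, String> published = new ConcurrentHashMap<>();

  // computeIfAbsent returns the canonical instance for a segment id, so equal
  // metadata is stored once and shared by reference across all callers.
  public String intern(String segmentId, String metadata, boolean isRealtime)
  {
    ConcurrentHashMap<String, String> map = isRealtime ? realtime : published;
    return map.computeIfAbsent(segmentId, id -> metadata);
  }

  public static void main(String[] args)
  {
    SegmentInterner interner = new SegmentInterner();
    String first = interner.intern("seg1", new String("meta"), false);
    String second = interner.intern("seg1", new String("meta"), false);
    System.out.println(first == second);  // true: one shared canonical instance
  }
}
```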





[GitHub] [incubator-druid] fjy commented on issue #7706: Get correct curr_size attribute value for historical servers

2019-05-21 Thread GitBox
fjy commented on issue #7706: Get correct curr_size attribute value for 
historical servers
URL: https://github.com/apache/incubator-druid/pull/7706#issuecomment-494514888
 
 
   Merging given that TC is broken.





[incubator-druid] branch master updated: Fix currSize attribute of historical server type (#7706)

2019-05-21 Thread fjy
This is an automated email from the ASF dual-hosted git repository.

fjy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 1fe0de1  Fix currSize attribute of historical server type (#7706)
1fe0de1 is described below

commit 1fe0de1c962400321ce0ca3e2a6f8528e64466cf
Author: Surekha 
AuthorDate: Tue May 21 11:55:58 2019 -0700

Fix currSize attribute of historical server type (#7706)
---
 .../druid/sql/calcite/schema/SystemSchema.java | 13 +--
 .../druid/sql/calcite/schema/SystemSchemaTest.java | 26 +-
 .../druid/sql/calcite/util/CalciteTests.java   |  2 ++
 3 files changed, 33 insertions(+), 8 deletions(-)

diff --git 
a/sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java 
b/sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
index bf0e3b6..f863104 100644
--- a/sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
+++ b/sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java
@@ -43,7 +43,9 @@ import org.apache.calcite.schema.ScannableTable;
 import org.apache.calcite.schema.Table;
 import org.apache.calcite.schema.impl.AbstractSchema;
 import org.apache.calcite.schema.impl.AbstractTable;
+import org.apache.druid.client.DruidServer;
 import org.apache.druid.client.ImmutableDruidServer;
+import org.apache.druid.client.InventoryView;
 import org.apache.druid.client.JsonParserIterator;
 import org.apache.druid.client.TimelineServerView;
 import org.apache.druid.client.coordinator.Coordinator;
@@ -184,6 +186,7 @@ public class SystemSchema extends AbstractSchema
   final DruidSchema druidSchema,
   final MetadataSegmentView metadataView,
   final TimelineServerView serverView,
+  final InventoryView serverInventoryView,
   final AuthorizerMapper authorizerMapper,
   final @Coordinator DruidLeaderClient coordinatorDruidLeaderClient,
   final @IndexingService DruidLeaderClient overlordDruidLeaderClient,
@@ -201,7 +204,7 @@ public class SystemSchema extends AbstractSchema
 );
 this.tableMap = ImmutableMap.of(
 SEGMENTS_TABLE, segmentsTable,
-SERVERS_TABLE, new ServersTable(druidNodeDiscoveryProvider, 
authorizerMapper),
+SERVERS_TABLE, new ServersTable(druidNodeDiscoveryProvider, 
serverInventoryView, authorizerMapper),
 SERVER_SEGMENTS_TABLE, new ServerSegmentsTable(serverView, 
authorizerMapper),
 TASKS_TABLE, new TasksTable(overlordDruidLeaderClient, jsonMapper, 
responseHandler, authorizerMapper)
 );
@@ -441,14 +444,17 @@ public class SystemSchema extends AbstractSchema
   {
 private final AuthorizerMapper authorizerMapper;
 private final DruidNodeDiscoveryProvider druidNodeDiscoveryProvider;
+private final InventoryView serverInventoryView;
 
 public ServersTable(
 DruidNodeDiscoveryProvider druidNodeDiscoveryProvider,
+InventoryView serverInventoryView,
 AuthorizerMapper authorizerMapper
 )
 {
   this.authorizerMapper = authorizerMapper;
   this.druidNodeDiscoveryProvider = druidNodeDiscoveryProvider;
+  this.serverInventoryView = serverInventoryView;
 }
 
 @Override
@@ -477,7 +483,10 @@ public class SystemSchema extends AbstractSchema
   .transform(val -> {
 boolean isDataNode = false;
 final DruidNode node = val.getDruidNode();
+long currHistoricalSize = 0;
 if (val.getNodeType().equals(NodeType.HISTORICAL)) {
+  final DruidServer server = 
serverInventoryView.getInventoryValue(val.toDruidServer().getName());
+  currHistoricalSize = server.getCurrSize();
   isDataNode = true;
 }
 return new Object[]{
@@ -487,7 +496,7 @@ public class SystemSchema extends AbstractSchema
 (long) extractPort(node.getHostAndTlsPort()),
 StringUtils.toLowerCase(toStringOrNull(val.getNodeType())),
 isDataNode ? val.toDruidServer().getTier() : null,
-isDataNode ? val.toDruidServer().getCurrSize() : 
CURRENT_SERVER_SIZE,
+isDataNode ? currHistoricalSize : CURRENT_SERVER_SIZE,
 isDataNode ? val.toDruidServer().getMaxSize() : MAX_SERVER_SIZE
 };
   });
diff --git 
a/sql/src/test/java/org/apache/druid/sql/calcite/schema/SystemSchemaTest.java 
b/sql/src/test/java/org/apache/druid/sql/calcite/schema/SystemSchemaTest.java
index 2d8e1d6..49e406b 100644
--- 
a/sql/src/test/java/org/apache/druid/sql/calcite/schema/SystemSchemaTest.java
+++ 
b/sql/src/test/java/org/apache/druid/sql/calcite/schema/SystemSchemaTest.java
@@ -37,6 +37,8 @@ import org.apache.calcite.sql.type.SqlTypeName;
 import org.apache.druid.client.DruidServer;
 import org.apache.druid.client.ImmutableDruidDataSource;
 import 

[GitHub] [incubator-druid] fjy merged pull request #7706: Get correct curr_size attribute value for historical servers

2019-05-21 Thread GitBox
fjy merged pull request #7706: Get correct curr_size attribute value for 
historical servers
URL: https://github.com/apache/incubator-druid/pull/7706
 
 
   





[GitHub] [incubator-druid] fjy commented on issue #7706: Get correct curr_size attribute value for historical servers

2019-05-21 Thread GitBox
fjy commented on issue #7706: Get correct curr_size attribute value for 
historical servers
URL: https://github.com/apache/incubator-druid/pull/7706#issuecomment-494514859
 
 
   @surekhasaharan which release is this for?





[incubator-druid] branch master updated: SQL: Allow NULLs in place of optional arguments in many functions. (#7709)

2019-05-21 Thread fjy
This is an automated email from the ASF dual-hosted git repository.

fjy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new cbbce95  SQL: Allow NULLs in place of optional arguments in many 
functions. (#7709)
cbbce95 is described below

commit cbbce955de588330797fda986dd1031fceb3b679
Author: Gian Merlino 
AuthorDate: Tue May 21 11:54:34 2019 -0700

SQL: Allow NULLs in place of optional arguments in many functions. (#7709)

* SQL: Allow NULLs in place of optional arguments in many functions.

Also adjust SQL docs to describe how to make time literals using
TIME_PARSE (which is now possible in a nicer way).

* Be less forbidden.
---
 docs/content/querying/sql.md   |  15 +-
 .../calcite/expression/OperatorConversions.java| 194 -
 .../apache/druid/sql/calcite/CalciteQueryTest.java |  35 
 3 files changed, 232 insertions(+), 12 deletions(-)

diff --git a/docs/content/querying/sql.md b/docs/content/querying/sql.md
index 5b6e308..5d562f6 100644
--- a/docs/content/querying/sql.md
+++ b/docs/content/querying/sql.md
@@ -214,6 +214,10 @@ context parameter "sqlTimeZone" to the name of another 
time zone, like "America/
 the connection time zone, some functions also accept time zones as parameters. 
These parameters always take precedence
 over the connection time zone.
 
+Literal timestamps in the connection time zone can be written using `TIMESTAMP 
'2000-01-01 00:00:00'` syntax. The
+simplest way to write literal timestamps in other time zones is to use 
TIME_PARSE, like
+`TIME_PARSE('2000-02-01 00:00:00', NULL, 'America/Los_Angeles')`.
+
 |Function|Notes|
 ||-|
 |`CURRENT_TIMESTAMP`|Current timestamp in the connection's time zone.|
@@ -291,11 +295,12 @@ Additionally, some Druid features are not supported by 
the SQL language. Some un
 
 Druid natively supports five basic column types: "long" (64 bit signed int), 
"float" (32 bit float), "double" (64 bit
 float) "string" (UTF-8 encoded strings), and "complex" (catch-all for more 
exotic data types like hyperUnique and
-approxHistogram columns). Timestamps (including the `__time` column) are 
stored as longs, with the value being the
-number of milliseconds since 1 January 1970 UTC.
+approxHistogram columns).
 
-At runtime, Druid may widen 32-bit floats to 64-bit for certain operators, 
like SUM aggregators. The reverse will not
-happen: 64-bit floats are not be narrowed to 32-bit.
+Timestamps (including the `__time` column) are treated by Druid as longs, with 
the value being the number of
+milliseconds since 1970-01-01 00:00:00 UTC, not counting leap seconds. 
Therefore, timestamps in Druid do not carry any
+timezone information, but only carry information about the exact moment in 
time they represent. See the
+[Time functions](#time-functions) section for more information about timestamp 
handling.
 
 Druid generally treats NULLs and empty strings interchangeably, rather than 
according to the SQL standard. As such,
 Druid SQL only has partial support for NULLs. For example, the expressions 
`col IS NULL` and `col = ''` are equivalent,
@@ -307,7 +312,7 @@ datasource, then it will be treated as zero for rows from 
those segments.
 
 For mathematical operations, Druid SQL will use integer math if all operands 
involved in an expression are integers.
 Otherwise, Druid will switch to floating point math. You can force this to 
happen by casting one of your operands
-to FLOAT.
+to FLOAT. At runtime, Druid may widen 32-bit floats to 64-bit for certain 
operators, like SUM aggregators.
 
 The following table describes how SQL types map onto Druid types during query 
runtime. Casts between two SQL types
 that have the same Druid runtime type will have no effect, other than 
exceptions noted in the table. Casts between two
diff --git 
a/sql/src/main/java/org/apache/druid/sql/calcite/expression/OperatorConversions.java
 
b/sql/src/main/java/org/apache/druid/sql/calcite/expression/OperatorConversions.java
index a4dbfd2..8a73cbd 100644
--- 
a/sql/src/main/java/org/apache/druid/sql/calcite/expression/OperatorConversions.java
+++ 
b/sql/src/main/java/org/apache/druid/sql/calcite/expression/OperatorConversions.java
@@ -20,18 +20,29 @@
 package org.apache.druid.sql.calcite.expression;
 
 import com.google.common.base.Preconditions;
+import com.google.common.collect.Iterables;
+import it.unimi.dsi.fastutil.ints.IntArraySet;
+import it.unimi.dsi.fastutil.ints.IntSet;
+import org.apache.calcite.rel.type.RelDataType;
 import org.apache.calcite.rex.RexCall;
 import org.apache.calcite.rex.RexLiteral;
 import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.sql.SqlCallBinding;
 import org.apache.calcite.sql.SqlFunction;
 import org.apache.calcite.sql.SqlFunctionCategory;
 import org.apache.calcite.sql.SqlKind;
-import 

[GitHub] [incubator-druid] himanshug commented on issue #2320: [Proposal] support for setting javaOpts per a task

2019-05-21 Thread GitBox
himanshug commented on issue #2320: [Proposal] support for setting javaOpts per 
a task
URL: 
https://github.com/apache/incubator-druid/issues/2320#issuecomment-494513344
 
 
  @vsharathchandra You can put the same setting in the Kafka supervisor 
context, and it will be propagated to all the launched KafkaIndexTasks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[GitHub] [incubator-druid] jihoonson commented on issue #7571: Optimize coordinator API to retrieve segments with overshadowed status

2019-05-21 Thread GitBox
jihoonson commented on issue #7571: Optimize coordinator API to retrieve 
segments with overshadowed status
URL: 
https://github.com/apache/incubator-druid/issues/7571#issuecomment-494510520
 
 
   > Added to 0.15 milestone because If SegmentWithOvershadowedStatus will leak 
into Druid 0.15 API, it will be much harder to remove it from later
   
   This sounds like going back to a feature-based release instead of a 
time-based one. Per the 
[discussion](https://groups.google.com/forum/#!msg/druid-development/QPZUIzLtZ2I/hc3jkMATCgAJ;context-place=searchin/druid-development/roman%7Csort:date)
 you started, we are trying to freeze code every 3 months. After code freeze, 
we are backporting only regression bug fixes, security bug fixes, and doc 
changes if necessary. If you want to talk about a better release policy, please 
start a discussion on dev mailing list.
   
   Since this issue is neither a bug fix nor a doc change, I don't think it 
should necessarily be a blocker for the 0.15.0 release. IMO, if it's not done 
before code freeze, it should simply wait for the next release. I think this is 
the least controversial way to choose the features that go into each release. 
Especially for this issue, even the discussion on the proper solution is still 
ongoing.
   
   > Regarding DataSegment's immutability, it might need to go away anyway, as 
#6358 should be fixed (note @surekhasaharan: you can fix that problem, incl. 
across Coordinator -> Broker segment streaming, in the same PR). 
   
   I agree that the current way DataSegment is updated and maintained is 
somewhat unintuitive. However, I still think it's not a good idea to make it 
mutable. Currently, the variables that can be updated in DataSegment are 
`loadSpec` and `size`, and the update happens only once, after a segment 
created by stream ingestion tasks is published. So, it's not an update of 
state, but an update of metadata.
   
   `overshadowedStatus` is different. It represents a state of a DataSegment 
that can have different values depending on the context. This means the 
`overshadowedStatus` must be computed correctly before it's used. If it's added 
to DataSegment as a mutable variable, I guess it would be very confusing unless 
there is a way to figure out under what context that state was computed.
   
   So, I'm also curious about the benefit we would get if `DataSegment` became 
mutable. Is there anything else besides memory savings?
   
   Finally, I don't think it's a good idea to recommend fixing several problems 
in a single PR. It makes the PR complicated, which in turn makes review harder. 
If the author wants to fix several problems in a single PR, I'm OK with that as 
long as the PR is not overly complicated or large. But simple PRs are more 
likely to get reviewed because they are easy to review.
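   To illustrate the immutable-metadata pattern being discussed, here is a 
hypothetical sketch (this is NOT Druid's actual DataSegment API; the class and 
method names are invented for illustration). A one-time metadata update is 
expressed as a copy rather than a mutation, while context-dependent state such 
as overshadowed status stays outside the class:

```java
public class SegmentMeta {
    // Hypothetical immutable modeling class. Metadata "updates" produce a
    // new instance rather than mutating shared state, so every holder of a
    // reference sees a consistent value.
    private final String id;
    private final long size;

    public SegmentMeta(String id, long size) {
        this.id = id;
        this.size = size;
    }

    public long size() {
        return size;
    }

    // The one-time metadata update after handoff becomes a copy, not a mutation.
    public SegmentMeta withSize(long newSize) {
        return new SegmentMeta(id, newSize);
    }

    public static void main(String[] args) {
        SegmentMeta published = new SegmentMeta("seg-1", 0);
        SegmentMeta updated = published.withSize(1024);
        // The original instance is untouched by the "update".
        System.out.println(published.size() + " " + updated.size()); // 0 1024
    }
}
```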





[GitHub] [incubator-druid] fjy merged pull request #7704: SQL: Respect default timezone for TIME_PARSE and TIME_SHIFT.

2019-05-21 Thread GitBox
fjy merged pull request #7704: SQL: Respect default timezone for TIME_PARSE and 
TIME_SHIFT.
URL: https://github.com/apache/incubator-druid/pull/7704
 
 
   





[GitHub] [incubator-druid] fjy commented on issue #7704: SQL: Respect default timezone for TIME_PARSE and TIME_SHIFT.

2019-05-21 Thread GitBox
fjy commented on issue #7704: SQL: Respect default timezone for TIME_PARSE and 
TIME_SHIFT.
URL: https://github.com/apache/incubator-druid/pull/7704#issuecomment-494509467
 
 
   Merging given that TC is broken.





[incubator-druid] branch master updated: SQL: Respect default timezone for TIME_PARSE and TIME_SHIFT. (#7704)

2019-05-21 Thread fjy
This is an automated email from the ASF dual-hosted git repository.

fjy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 43c5438  SQL: Respect default timezone for TIME_PARSE and TIME_SHIFT. 
(#7704)
43c5438 is described below

commit 43c54385f6f7705507570c50e048b08cf5902b10
Author: Gian Merlino 
AuthorDate: Tue May 21 11:40:44 2019 -0700

SQL: Respect default timezone for TIME_PARSE and TIME_SHIFT. (#7704)

* SQL: Respect default timezone for TIME_PARSE and TIME_SHIFT.

They were inadvertently using UTC rather than the default timezone.
Also, harmonize how time functions handle their parameters.

* Fix tests

* Add another TIME_SHIFT test.
---
 .../calcite/expression/OperatorConversions.java| 19 +
 .../builtin/TimeArithmeticOperatorConversion.java  |  3 +-
 .../builtin/TimeExtractOperatorConversion.java |  9 ++--
 .../builtin/TimeFloorOperatorConversion.java   | 23 +++
 .../builtin/TimeFormatOperatorConversion.java  | 21 +++---
 .../builtin/TimeParseOperatorConversion.java   | 38 -
 .../builtin/TimeShiftOperatorConversion.java   | 48 +-
 .../apache/druid/sql/calcite/CalciteQueryTest.java | 13 +++---
 .../sql/calcite/expression/ExpressionsTest.java| 20 +++--
 9 files changed, 165 insertions(+), 29 deletions(-)

diff --git 
a/sql/src/main/java/org/apache/druid/sql/calcite/expression/OperatorConversions.java
 
b/sql/src/main/java/org/apache/druid/sql/calcite/expression/OperatorConversions.java
index f982d80..a4dbfd2 100644
--- 
a/sql/src/main/java/org/apache/druid/sql/calcite/expression/OperatorConversions.java
+++ 
b/sql/src/main/java/org/apache/druid/sql/calcite/expression/OperatorConversions.java
@@ -21,6 +21,7 @@ package org.apache.druid.sql.calcite.expression;
 
 import com.google.common.base.Preconditions;
 import org.apache.calcite.rex.RexCall;
+import org.apache.calcite.rex.RexLiteral;
 import org.apache.calcite.rex.RexNode;
 import org.apache.calcite.sql.SqlFunction;
 import org.apache.calcite.sql.SqlFunctionCategory;
@@ -105,6 +106,24 @@ public class OperatorConversions
 return expressionFunction.apply(druidExpressions);
   }
 
+  /**
+   * Gets operand "i" from "operands", or returns a default value if it 
doesn't exist (operands is too short)
+   * or is null.
+   */
+  public static <T> T getOperandWithDefault(
+  final List<RexNode> operands,
+  final int i,
+  final Function<RexNode, T> f,
+  final T defaultReturnValue
+  )
+  {
+if (operands.size() > i && !RexLiteral.isNullLiteral(operands.get(i))) {
+  return f.apply(operands.get(i));
+} else {
+  return defaultReturnValue;
+}
+  }
+
   public static OperatorBuilder operatorBuilder(final String name)
   {
 return new OperatorBuilder(name);
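A minimal, Calcite-free sketch of the semantics of the `getOperandWithDefault` 
helper added above. The generic re-implementation below is illustrative only: 
it substitutes plain strings and a null check for the actual `RexNode` and 
`RexLiteral.isNullLiteral` types used in Druid:

```java
import java.util.List;
import java.util.function.Function;

public class OperandDefaults {
    // Returns operand i transformed by f, or a default value when the operand
    // list is too short or the operand at that position is null.
    public static <O, T> T getOperandWithDefault(
            List<O> operands,
            int i,
            Function<O, T> f,
            T defaultReturnValue
    ) {
        if (operands.size() > i && operands.get(i) != null) {
            return f.apply(operands.get(i));
        }
        return defaultReturnValue;
    }

    public static void main(String[] args) {
        // TIME_EXTRACT-style call: the time zone operand (index 2) is optional.
        List<String> operands = List.of("__time", "HOUR");
        String tz = getOperandWithDefault(operands, 2, s -> s, "UTC");
        System.out.println(tz); // UTC
    }
}
```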
diff --git 
a/sql/src/main/java/org/apache/druid/sql/calcite/expression/builtin/TimeArithmeticOperatorConversion.java
 
b/sql/src/main/java/org/apache/druid/sql/calcite/expression/builtin/TimeArithmeticOperatorConversion.java
index 815cb7e..66f69ff 100644
--- 
a/sql/src/main/java/org/apache/druid/sql/calcite/expression/builtin/TimeArithmeticOperatorConversion.java
+++ 
b/sql/src/main/java/org/apache/druid/sql/calcite/expression/builtin/TimeArithmeticOperatorConversion.java
@@ -91,7 +91,8 @@ public abstract class TimeArithmeticOperatorConversion 
implements SqlOperatorCon
   simpleExtraction -> null,
   expression -> StringUtils.format("concat('P', %s, 'M')", 
expression)
   ),
-  
DruidExpression.fromExpression(DruidExpression.numberLiteral(direction > 0 ? 1 
: -1))
+  
DruidExpression.fromExpression(DruidExpression.numberLiteral(direction > 0 ? 1 
: -1)),
+  
DruidExpression.fromExpression(DruidExpression.stringLiteral(plannerContext.getTimeZone().getID()))
   )
   );
 } else if (rightRexNode.getType().getFamily() == 
SqlTypeFamily.INTERVAL_DAY_TIME) {
diff --git 
a/sql/src/main/java/org/apache/druid/sql/calcite/expression/builtin/TimeExtractOperatorConversion.java
 
b/sql/src/main/java/org/apache/druid/sql/calcite/expression/builtin/TimeExtractOperatorConversion.java
index 5f1c77d..c8d7f3e 100644
--- 
a/sql/src/main/java/org/apache/druid/sql/calcite/expression/builtin/TimeExtractOperatorConversion.java
+++ 
b/sql/src/main/java/org/apache/druid/sql/calcite/expression/builtin/TimeExtractOperatorConversion.java
@@ -88,9 +88,12 @@ public class TimeExtractOperatorConversion implements 
SqlOperatorConversion
 
StringUtils.toUpperCase(RexLiteral.stringValue(call.getOperands().get(1)))
 );
 
-final DateTimeZone timeZone = call.getOperands().size() > 2 && 
!RexLiteral.isNullLiteral(call.getOperands().get(2))
-  ? 
DateTimes.inferTzFromString(RexLiteral.stringValue(call.getOperands().get(2)))

[GitHub] [incubator-druid] fjy commented on issue #7710: SQL: TIME_EXTRACT should have 2 required operands.

2019-05-21 Thread GitBox
fjy commented on issue #7710: SQL: TIME_EXTRACT should have 2 required operands.
URL: https://github.com/apache/incubator-druid/pull/7710#issuecomment-494506736
 
 
   Merging given that TC is broken.





[incubator-druid] branch master updated: SQL: TIME_EXTRACT should have 2 required operands. (#7710)

2019-05-21 Thread fjy
This is an automated email from the ASF dual-hosted git repository.

fjy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 69b2ea3  SQL: TIME_EXTRACT should have 2 required operands. (#7710)
69b2ea3 is described below

commit 69b2ea3ddc0fd6831f5ca4cfa2a5fcde4ebfc3cc
Author: Gian Merlino 
AuthorDate: Tue May 21 11:32:36 2019 -0700

SQL: TIME_EXTRACT should have 2 required operands. (#7710)

* SQL: TIME_EXTRACT should have 2 required operands.

Timestamp and time unit are both required.

* Add regression test.
---
 .../builtin/TimeExtractOperatorConversion.java |  2 +-
 .../org/apache/druid/sql/calcite/CalciteQueryTest.java | 18 ++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git 
a/sql/src/main/java/org/apache/druid/sql/calcite/expression/builtin/TimeExtractOperatorConversion.java
 
b/sql/src/main/java/org/apache/druid/sql/calcite/expression/builtin/TimeExtractOperatorConversion.java
index 28464fa..5f1c77d 100644
--- 
a/sql/src/main/java/org/apache/druid/sql/calcite/expression/builtin/TimeExtractOperatorConversion.java
+++ 
b/sql/src/main/java/org/apache/druid/sql/calcite/expression/builtin/TimeExtractOperatorConversion.java
@@ -43,7 +43,7 @@ public class TimeExtractOperatorConversion implements 
SqlOperatorConversion
   private static final SqlFunction SQL_FUNCTION = OperatorConversions
   .operatorBuilder("TIME_EXTRACT")
   .operandTypes(SqlTypeFamily.TIMESTAMP, SqlTypeFamily.CHARACTER, 
SqlTypeFamily.CHARACTER)
-  .requiredOperands(1)
+  .requiredOperands(2)
   .returnType(SqlTypeName.BIGINT)
   .functionCategory(SqlFunctionCategory.TIMEDATE)
   .build();
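The effect of changing `requiredOperands(1)` to `requiredOperands(2)` with 
three declared operand types is that calls must pass between 2 and 3 
arguments. A self-contained sketch of that validation rule (illustrative names, 
not Calcite's actual operand-checking API):

```java
import java.util.List;

public class OperandCountCheck {
    // A call is valid when its argument count is at least the required number
    // of operands and at most the number of declared operand types.
    static boolean isValidCall(int requiredOperands, int declaredOperands, List<String> args) {
        return args.size() >= requiredOperands && args.size() <= declaredOperands;
    }

    public static void main(String[] args) {
        // TIME_EXTRACT(__time) has too few arguments once requiredOperands is 2.
        System.out.println(isValidCall(2, 3, List.of("__time")));         // false
        System.out.println(isValidCall(2, 3, List.of("__time", "HOUR"))); // true
    }
}
```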
diff --git 
a/sql/src/test/java/org/apache/druid/sql/calcite/CalciteQueryTest.java 
b/sql/src/test/java/org/apache/druid/sql/calcite/CalciteQueryTest.java
index ecef1b5..e67cde8 100644
--- a/sql/src/test/java/org/apache/druid/sql/calcite/CalciteQueryTest.java
+++ b/sql/src/test/java/org/apache/druid/sql/calcite/CalciteQueryTest.java
@@ -21,6 +21,8 @@ package org.apache.druid.sql.calcite;
 
 import com.google.common.base.Joiner;
 import com.google.common.collect.ImmutableList;
+import org.apache.calcite.runtime.CalciteContextException;
+import org.apache.calcite.tools.ValidationException;
 import org.apache.druid.common.config.NullHandling;
 import org.apache.druid.java.util.common.DateTimes;
 import org.apache.druid.java.util.common.Intervals;
@@ -6696,6 +6698,22 @@ public class CalciteQueryTest extends 
BaseCalciteQueryTest
   }
 
   @Test
+  public void testTimeExtractWithTooFewArguments() throws Exception
+  {
+// Regression test for https://github.com/apache/incubator-druid/pull/7710.
+expectedException.expect(ValidationException.class);
+
expectedException.expectCause(CoreMatchers.instanceOf(CalciteContextException.class));
+expectedException.expectCause(
+ThrowableMessageMatcher.hasMessage(
+CoreMatchers.containsString(
+"Invalid number of arguments to function 'TIME_EXTRACT'. Was 
expecting 2 arguments"
+)
+)
+);
+testQuery("SELECT TIME_EXTRACT(__time) FROM druid.foo", 
ImmutableList.of(), ImmutableList.of());
+  }
+
+  @Test
   public void testUsingSubqueryAsFilterForbiddenByConfig()
   {
 assertQueryIsUnplannable(





[GitHub] [incubator-druid] fjy merged pull request #7710: SQL: TIME_EXTRACT should have 2 required operands.

2019-05-21 Thread GitBox
fjy merged pull request #7710: SQL: TIME_EXTRACT should have 2 required 
operands.
URL: https://github.com/apache/incubator-druid/pull/7710
 
 
   





[GitHub] [incubator-druid] fjy merged pull request #7707: SQL: Fix exception with OR of impossible filters.

2019-05-21 Thread GitBox
fjy merged pull request #7707: SQL: Fix exception with OR of impossible filters.
URL: https://github.com/apache/incubator-druid/pull/7707
 
 
   





[GitHub] [incubator-druid] fjy closed issue #7671: SQL: Exception with OR of impossible filters

2019-05-21 Thread GitBox
fjy closed issue #7671: SQL: Exception with OR of impossible filters
URL: https://github.com/apache/incubator-druid/issues/7671
 
 
   





[incubator-druid] branch master updated: SQL: Fix exception with OR of impossible filters. (#7707)

2019-05-21 Thread fjy
This is an automated email from the ASF dual-hosted git repository.

fjy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new bcea05e  SQL: Fix exception with OR of impossible filters. (#7707)
bcea05e is described below

commit bcea05e4e8d4cabd698aedbc8f60fdfa8666e2ca
Author: Gian Merlino 
AuthorDate: Tue May 21 11:32:09 2019 -0700

SQL: Fix exception with OR of impossible filters. (#7707)

Fixes #7671.
---
 .../filtration/CombineAndSimplifyBounds.java   | 89 +++---
 .../apache/druid/sql/calcite/CalciteQueryTest.java | 22 ++
 2 files changed, 66 insertions(+), 45 deletions(-)

diff --git 
a/sql/src/main/java/org/apache/druid/sql/calcite/filtration/CombineAndSimplifyBounds.java
 
b/sql/src/main/java/org/apache/druid/sql/calcite/filtration/CombineAndSimplifyBounds.java
index a8eb0a5..ce29494 100644
--- 
a/sql/src/main/java/org/apache/druid/sql/calcite/filtration/CombineAndSimplifyBounds.java
+++ 
b/sql/src/main/java/org/apache/druid/sql/calcite/filtration/CombineAndSimplifyBounds.java
@@ -118,48 +118,23 @@ public class CombineAndSimplifyBounds extends 
BottomUpTransform
* Simplify BoundDimFilters that are children of an OR or an AND.
*
* @param childrenthe filters
-   * @param disjunction true for disjunction, false for conjunction
+   * @param disjunction true for OR, false for AND
*
* @return simplified filters
*/
   private static DimFilter doSimplify(final List<DimFilter> children, boolean 
disjunction)
   {
-// Copy children list
+// Copy the list of child filters. We'll modify the copy and eventually 
return it.
 final List<DimFilter> newChildren = Lists.newArrayList(children);
 
 // Group Bound filters by dimension, extractionFn, and comparator and 
compute a RangeSet for each one.
 final Map<BoundRefKey, List<BoundDimFilter>> bounds = new HashMap<>();
 
-final Iterator<DimFilter> iterator = newChildren.iterator();
-while (iterator.hasNext()) {
-  final DimFilter child = iterator.next();
-
-  if (child.equals(Filtration.matchNothing())) {
-// Child matches nothing, equivalent to FALSE
-// OR with FALSE => ignore
-// AND with FALSE => always false, short circuit
-if (disjunction) {
-  iterator.remove();
-} else {
-  return Filtration.matchNothing();
-}
-  } else if (child.equals(Filtration.matchEverything())) {
-// Child matches everything, equivalent to TRUE
-// OR with TRUE => always true, short circuit
-// AND with TRUE => ignore
-if (disjunction) {
-  return Filtration.matchEverything();
-} else {
-  iterator.remove();
-}
-  } else if (child instanceof BoundDimFilter) {
+for (final DimFilter child : newChildren) {
+  if (child instanceof BoundDimFilter) {
 final BoundDimFilter bound = (BoundDimFilter) child;
 final BoundRefKey boundRefKey = BoundRefKey.from(bound);
-List<BoundDimFilter> filterList = bounds.get(boundRefKey);
-if (filterList == null) {
-  filterList = new ArrayList<>();
-  bounds.put(boundRefKey, filterList);
-}
+final List<BoundDimFilter> filterList = 
bounds.computeIfAbsent(boundRefKey, k -> new ArrayList<>());
 filterList.add(bound);
   }
 }
@@ -184,25 +159,13 @@ public class CombineAndSimplifyBounds extends 
BottomUpTransform
 
 if (rangeSet.asRanges().isEmpty()) {
   // range set matches nothing, equivalent to FALSE
-  // OR with FALSE => ignore
-  // AND with FALSE => always false, short circuit
-  if (disjunction) {
-newChildren.add(Filtration.matchNothing());
-  } else {
-return Filtration.matchNothing();
-  }
+  newChildren.add(Filtration.matchNothing());
 }
 
 for (final Range<BoundValue> range : rangeSet.asRanges()) {
   if (!range.hasLowerBound() && !range.hasUpperBound()) {
 // range matches all, equivalent to TRUE
-// AND with TRUE => ignore
-// OR with TRUE => always true; short circuit
-if (disjunction) {
-  return Filtration.matchEverything();
-} else {
-  newChildren.add(Filtration.matchEverything());
-}
+newChildren.add(Filtration.matchEverything());
   } else {
 newChildren.add(Bounds.toFilter(boundRefKey, range));
   }
@@ -210,8 +173,44 @@ public class CombineAndSimplifyBounds extends 
BottomUpTransform
   }
 }
 
+// Finally: Go through newChildren, removing or potentially exiting early 
based on TRUE / FALSE marker filters.
 Preconditions.checkState(newChildren.size() > 0, "newChildren.size > 0");
-if (newChildren.size() == 1) {
+
+final Iterator<DimFilter> iterator = newChildren.iterator();
+while (iterator.hasNext()) {
+  final DimFilter newChild = 

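The TRUE/FALSE marker handling that this refactor consolidates follows 
standard boolean short-circuit rules: in an OR, FALSE children are dropped and 
a TRUE child short-circuits the whole expression; in an AND the roles are 
swapped. A self-contained sketch (the enum here is illustrative, standing in 
for Druid's DimFilter / matchNothing / matchEverything types):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class MarkerSimplify {
    enum Filter { TRUE, FALSE, OTHER }

    // Simplify a list of children of an OR (disjunction=true) or AND
    // (disjunction=false) by removing or short-circuiting on TRUE/FALSE markers.
    static List<Filter> simplify(List<Filter> children, boolean disjunction) {
        List<Filter> out = new ArrayList<>(children);
        Iterator<Filter> it = out.iterator();
        while (it.hasNext()) {
            Filter child = it.next();
            if (child == Filter.FALSE) {
                // OR with FALSE => ignore; AND with FALSE => always false.
                if (disjunction) {
                    it.remove();
                } else {
                    return List.of(Filter.FALSE);
                }
            } else if (child == Filter.TRUE) {
                // OR with TRUE => always true; AND with TRUE => ignore.
                if (disjunction) {
                    return List.of(Filter.TRUE);
                } else {
                    it.remove();
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // OR(FALSE, OTHER) simplifies to just OTHER.
        System.out.println(simplify(List.of(Filter.FALSE, Filter.OTHER), true));
    }
}
```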
[incubator-druid] branch 0.15.0-incubating updated: fix issue where result level cache was recomputing post aggs that were already cached, causing issues with finalizing aggregators (#7708) (#7711)

2019-05-21 Thread fjy
This is an automated email from the ASF dual-hosted git repository.

fjy pushed a commit to branch 0.15.0-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/0.15.0-incubating by this push:
 new 1909dda  fix issue where result level cache was recomputing post aggs 
that were already cached, causing issues with finalizing aggregators (#7708) 
(#7711)
1909dda is described below

commit 1909dda5274d7e0316b5d6c0326ee68b1ca4f936
Author: Jihoon Son 
AuthorDate: Tue May 21 11:31:38 2019 -0700

fix issue where result level cache was recomputing post aggs that were 
already cached, causing issues with finalizing aggregators (#7708) (#7711)
---
 .../druid/query/topn/TopNQueryQueryToolChest.java  |  10 +-
 .../query/topn/TopNQueryQueryToolChestTest.java| 131 +
 2 files changed, 136 insertions(+), 5 deletions(-)

diff --git 
a/processing/src/main/java/org/apache/druid/query/topn/TopNQueryQueryToolChest.java
 
b/processing/src/main/java/org/apache/druid/query/topn/TopNQueryQueryToolChest.java
index d87a178..53af0908 100644
--- 
a/processing/src/main/java/org/apache/druid/query/topn/TopNQueryQueryToolChest.java
+++ 
b/processing/src/main/java/org/apache/druid/query/topn/TopNQueryQueryToolChest.java
@@ -294,8 +294,7 @@ public class TopNQueryQueryToolChest extends 
QueryToolChest<Result<TopNResultValue>, TopNQuery>
   private final List<AggregatorFactory> aggs = 
Lists.newArrayList(query.getAggregatorSpecs());
   private final List<PostAggregator> postAggs = 
AggregatorUtil.pruneDependentPostAgg(
   query.getPostAggregatorSpecs(),
-  query.getTopNMetricSpec()
-   .getMetricName(query.getDimensionSpec())
+  query.getTopNMetricSpec().getMetricName(query.getDimensionSpec())
   );
 
   @Override
@@ -419,14 +418,15 @@ public class TopNQueryQueryToolChest extends 
QueryToolChest<Result<TopNResultValue>, TopNQuery>
 final Iterator<PostAggregator> postItr = 
query.getPostAggregatorSpecs().iterator();
 while (postItr.hasNext() && resultIter.hasNext()) {
   vals.put(postItr.next().getName(), resultIter.next());
 }
+  } else {
+for (PostAggregator postAgg : postAggs) {
+  vals.put(postAgg.getName(), postAgg.compute(vals));
+}
   }
   retVal.add(vals);
 }
diff --git 
a/processing/src/test/java/org/apache/druid/query/topn/TopNQueryQueryToolChestTest.java
 
b/processing/src/test/java/org/apache/druid/query/topn/TopNQueryQueryToolChestTest.java
index 191cc55..f9080e7 100644
--- 
a/processing/src/test/java/org/apache/druid/query/topn/TopNQueryQueryToolChestTest.java
+++ 
b/processing/src/test/java/org/apache/druid/query/topn/TopNQueryQueryToolChestTest.java
@@ -24,6 +24,7 @@ import com.google.common.collect.ImmutableList;
 import com.google.common.collect.ImmutableMap;
 import org.apache.druid.collections.CloseableStupidPool;
 import org.apache.druid.collections.SerializablePair;
+import org.apache.druid.hll.HyperLogLogCollector;
 import org.apache.druid.java.util.common.DateTimes;
 import org.apache.druid.java.util.common.Intervals;
 import org.apache.druid.java.util.common.granularity.Granularities;
@@ -40,6 +41,8 @@ import org.apache.druid.query.aggregation.AggregatorFactory;
 import org.apache.druid.query.aggregation.CountAggregatorFactory;
 import org.apache.druid.query.aggregation.LongSumAggregatorFactory;
 import org.apache.druid.query.aggregation.SerializablePairLongString;
+import org.apache.druid.query.aggregation.cardinality.CardinalityAggregator;
+import 
org.apache.druid.query.aggregation.hyperloglog.HyperUniquesAggregatorFactory;
 import org.apache.druid.query.aggregation.last.DoubleLastAggregatorFactory;
 import org.apache.druid.query.aggregation.last.FloatLastAggregatorFactory;
 import org.apache.druid.query.aggregation.last.LongLastAggregatorFactory;
@@ -47,6 +50,7 @@ import 
org.apache.druid.query.aggregation.last.StringLastAggregatorFactory;
 import org.apache.druid.query.aggregation.post.ArithmeticPostAggregator;
 import org.apache.druid.query.aggregation.post.ConstantPostAggregator;
 import org.apache.druid.query.aggregation.post.FieldAccessPostAggregator;
+import 
org.apache.druid.query.aggregation.post.FinalizingFieldAccessPostAggregator;
 import org.apache.druid.query.dimension.DefaultDimensionSpec;
 import org.apache.druid.query.spec.MultipleIntervalSegmentSpec;
 import org.apache.druid.segment.IncrementalIndexSegment;
@@ -80,6 +84,15 @@ public class TopNQueryQueryToolChestTest
   }
 
   @Test
+  public void testCacheStrategyOrderByPostAggs() throws Exception
+  {
+doTestCacheStrategyOrderByPost(ValueType.STRING, "val1");
+doTestCacheStrategyOrderByPost(ValueType.FLOAT, 2.1f);
+doTestCacheStrategyOrderByPost(ValueType.DOUBLE, 2.1d);
+doTestCacheStrategyOrderByPost(ValueType.LONG, 2L);
+  }
+
+  @Test
   public void testComputeCacheKeyWithDifferentPostAgg()
   {
 final TopNQuery query1 = new TopNQuery(
@@ -306,6 +319,28 @@ public class 

[GitHub] [incubator-druid] fjy merged pull request #7711: [Backport] fix result level cache issue with topN when ordering by post-aggregators

2019-05-21 Thread GitBox
fjy merged pull request #7711: [Backport] fix result level cache issue with 
topN when ordering by post-aggregators
URL: https://github.com/apache/incubator-druid/pull/7711
 
 
   





[GitHub] [incubator-druid] fjy commented on issue #7711: [Backport] fix result level cache issue with topN when ordering by post-aggregators

2019-05-21 Thread GitBox
fjy commented on issue #7711: [Backport] fix result level cache issue with topN 
when ordering by post-aggregators
URL: https://github.com/apache/incubator-druid/pull/7711#issuecomment-494506428
 
 
   Merging given that TC is broken.





[GitHub] [incubator-druid] fjy commented on issue #7707: SQL: Fix exception with OR of impossible filters.

2019-05-21 Thread GitBox
fjy commented on issue #7707: SQL: Fix exception with OR of impossible filters.
URL: https://github.com/apache/incubator-druid/pull/7707#issuecomment-494506569
 
 
   Merging given that TC is broken.





[incubator-druid] branch 0.15.0-incubating updated: Fix case insensitive of ParserUtils.findDuplicates (#7692) (#7712)

2019-05-21 Thread fjy
This is an automated email from the ASF dual-hosted git repository.

fjy pushed a commit to branch 0.15.0-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/0.15.0-incubating by this push:
 new c529edc  Fix case insensitive of ParserUtils.findDuplicates (#7692) 
(#7712)
c529edc is described below

commit c529edc3b717545e332403ea4409cfc040e9c12b
Author: Jihoon Son 
AuthorDate: Tue May 21 11:31:20 2019 -0700

Fix case insensitive of ParserUtils.findDuplicates (#7692) (#7712)

* Fix case insensitive ParserUtils.findDuplicates

* unused import
---
 .../druid/data/input/impl/DimensionsSpec.java  |  4 +-
 .../java/util/common/parsers/ParserUtils.java  | 12 +++---
 .../java/util/common/parsers/ParserUtilsTest.java  | 45 ++
 3 files changed, 52 insertions(+), 9 deletions(-)

diff --git 
a/core/src/main/java/org/apache/druid/data/input/impl/DimensionsSpec.java 
b/core/src/main/java/org/apache/druid/data/input/impl/DimensionsSpec.java
index ab23b4a..54c90e0 100644
--- a/core/src/main/java/org/apache/druid/data/input/impl/DimensionsSpec.java
+++ b/core/src/main/java/org/apache/druid/data/input/impl/DimensionsSpec.java
@@ -199,9 +199,6 @@ public class DimensionsSpec
 "dimensions and dimensions exclusions cannot overlap"
 );
 
-ParserUtils.validateFields(dimNames);
-ParserUtils.validateFields(dimensionExclusions);
-
 List<String> spatialDimNames = Lists.transform(
 spatialDimensions,
 new Function()
@@ -216,6 +213,7 @@ public class DimensionsSpec
 
 // Don't allow duplicates between main list and deprecated spatial list
 ParserUtils.validateFields(Iterables.concat(dimNames, spatialDimNames));
+ParserUtils.validateFields(dimensionExclusions);
   }
 
   @Override
diff --git 
a/core/src/main/java/org/apache/druid/java/util/common/parsers/ParserUtils.java 
b/core/src/main/java/org/apache/druid/java/util/common/parsers/ParserUtils.java
index bfe732c..57f52c8 100644
--- 
a/core/src/main/java/org/apache/druid/java/util/common/parsers/ParserUtils.java
+++ 
b/core/src/main/java/org/apache/druid/java/util/common/parsers/ParserUtils.java
@@ -19,10 +19,10 @@
 
 package org.apache.druid.java.util.common.parsers;
 
+import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Function;
 import com.google.common.base.Splitter;
 import org.apache.druid.common.config.NullHandling;
-import org.apache.druid.java.util.common.StringUtils;
 import org.joda.time.DateTimeZone;
 
 import javax.annotation.Nullable;
@@ -77,17 +77,17 @@ public class ParserUtils
 return names;
   }
 
-  public static Set<String> findDuplicates(Iterable<String> fieldNames)
+  @VisibleForTesting
+  static Set<String> findDuplicates(Iterable<String> fieldNames)
   {
 Set duplicates = new HashSet<>();
 Set uniqueNames = new HashSet<>();
 
 for (String fieldName : fieldNames) {
-  String next = StringUtils.toLowerCase(fieldName);
-  if (uniqueNames.contains(next)) {
-duplicates.add(next);
+  if (uniqueNames.contains(fieldName)) {
+duplicates.add(fieldName);
   }
-  uniqueNames.add(next);
+  uniqueNames.add(fieldName);
 }
 
 return duplicates;
diff --git 
a/core/src/test/java/org/apache/druid/java/util/common/parsers/ParserUtilsTest.java
 
b/core/src/test/java/org/apache/druid/java/util/common/parsers/ParserUtilsTest.java
new file mode 100644
index 000..5645733
--- /dev/null
+++ 
b/core/src/test/java/org/apache/druid/java/util/common/parsers/ParserUtilsTest.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.java.util.common.parsers;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableSet;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.Collections;
+import java.util.List;
+
+public class ParserUtilsTest
+{
+  @Test
+  public void testFindDuplicatesMixedCases()
+  {
+final List<String> fields = ImmutableList.of("f1", "f2", "F1", "F2", "f3");
+Assert.assertEquals(Collections.emptySet(), 
ParserUtils.findDuplicates(fields));
+  }
+
+  @Test
+  public void 
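For context on the patch above, here is a standalone sketch of the case-sensitive duplicate detection it switches to (hypothetical class name, not the Druid class itself; assumes Java 11+ for `List.of`):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FindDuplicatesSketch
{
  // Case-sensitive duplicate detection: after the patch, "F1" and "f1"
  // are distinct field names and no longer flagged as duplicates.
  static Set<String> findDuplicates(Iterable<String> fieldNames)
  {
    Set<String> duplicates = new HashSet<>();
    Set<String> uniqueNames = new HashSet<>();
    for (String fieldName : fieldNames) {
      // Set.add returns false if the element was already present.
      if (!uniqueNames.add(fieldName)) {
        duplicates.add(fieldName);
      }
    }
    return duplicates;
  }

  public static void main(String[] args)
  {
    // Mixed-case names are not duplicates anymore.
    System.out.println(findDuplicates(List.of("f1", "f2", "F1", "F2", "f3"))); // prints []
    // An exact repeat still is.
    System.out.println(findDuplicates(List.of("f1", "f2", "f1"))); // prints [f1]
  }
}
```

This mirrors the new `ParserUtilsTest.testFindDuplicatesMixedCases` expectation that a mixed-case field list yields an empty duplicate set.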

[GitHub] [incubator-druid] fjy merged pull request #7712: [Backport] Fix case insensitive of ParserUtils.findDuplicates

2019-05-21 Thread GitBox
fjy merged pull request #7712: [Backport] Fix case insensitive of 
ParserUtils.findDuplicates
URL: https://github.com/apache/incubator-druid/pull/7712
 
 
   





[GitHub] [incubator-druid] fjy commented on issue #7712: [Backport] Fix case insensitive of ParserUtils.findDuplicates

2019-05-21 Thread GitBox
fjy commented on issue #7712: [Backport] Fix case insensitive of 
ParserUtils.findDuplicates
URL: https://github.com/apache/incubator-druid/pull/7712#issuecomment-494506314
 
 
   Merging given that TC is broken.





[incubator-druid] branch 0.15.0-incubating updated: Update Druid Console docs for 0.15.0 (#7697) (#7720)

2019-05-21 Thread fjy
This is an automated email from the ASF dual-hosted git repository.

fjy pushed a commit to branch 0.15.0-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/0.15.0-incubating by this push:
 new 7266e47  Update Druid Console docs for 0.15.0 (#7697) (#7720)
7266e47 is described below

commit 7266e4708e0fb3eb156438657c96db5a2ce25d75
Author: Clint Wylie 
AuthorDate: Tue May 21 11:30:47 2019 -0700

Update Druid Console docs for 0.15.0 (#7697) (#7720)

* Update Druid Console docs for 0.15.0

* SQL -> query

* added links and fix typos
---
 docs/content/operations/druid-console.md |  48 +--
 docs/content/operations/img/01-home-view.png | Bin 60287 -> 58587 bytes
 docs/content/operations/img/02-data-loader-1.png | Bin 0 -> 68576 bytes
 docs/content/operations/img/02-datasources.png   | Bin 163824 -> 0 bytes
 docs/content/operations/img/03-data-loader-2.png | Bin 0 -> 456607 bytes
 docs/content/operations/img/03-retention.png | Bin 123857 -> 0 bytes
 docs/content/operations/img/04-datasources.png   | Bin 0 -> 178133 bytes
 docs/content/operations/img/04-segments.png  | Bin 125873 -> 0 bytes
 docs/content/operations/img/05-retention.png | Bin 0 -> 173350 bytes
 docs/content/operations/img/05-tasks-1.png   | Bin 101635 -> 0 bytes
 docs/content/operations/img/06-segments.png  | Bin 0 -> 209772 bytes
 docs/content/operations/img/06-tasks-2.png   | Bin 221977 -> 0 bytes
 docs/content/operations/img/07-supervisors.png   | Bin 0 -> 120310 bytes
 docs/content/operations/img/07-tasks-3.png   | Bin 195170 -> 0 bytes
 docs/content/operations/img/08-servers.png   | Bin 119310 -> 0 bytes
 docs/content/operations/img/08-tasks.png | Bin 0 -> 64362 bytes
 docs/content/operations/img/09-sql-1.png | Bin 80580 -> 0 bytes
 docs/content/operations/img/09-task-status.png   | Bin 0 -> 94299 bytes
 docs/content/operations/img/10-servers.png   | Bin 0 -> 79421 bytes
 docs/content/operations/img/10-sql-2.png | Bin 179193 -> 0 bytes
 docs/content/operations/img/11-query-sql.png | Bin 0 -> 111209 bytes
 docs/content/operations/img/12-query-rune.png| Bin 0 -> 137679 bytes
 docs/content/operations/img/13-lookups.png   | Bin 0 -> 54480 bytes
 23 files changed, 35 insertions(+), 13 deletions(-)

diff --git a/docs/content/operations/druid-console.md 
b/docs/content/operations/druid-console.md
index c8b0696..3dbc491 100644
--- a/docs/content/operations/druid-console.md
+++ b/docs/content/operations/druid-console.md
@@ -45,49 +45,71 @@ The home view provides a high level overview of the 
cluster. Each card is clicka
 
 ![home-view](./img/01-home-view.png)
 
+## Data loader
+
+The data loader view allows you to load data by building an ingestion spec 
with a step-by-step wizard. 
+
+![data-loader-1](./img/02-data-loader-1.png)
+
+After picking the source of your data just follow the series of steps that 
will show you incremental previews of the data as it will be ingested.
+After filling in the required details on every step you can navigate to the 
next step by clicking the `Next` button.
+You can also freely navigate between the steps from the top navigation.
+
+Navigating with the top navigation will leave the underlying spec unmodified 
while clicking the `Next` button will attempt to fill in the subsequent steps 
with appropriate defaults.
+
+![data-loader-2](./img/03-data-loader-2.png)
+
 ## Datasources
 
 The datasources view shows all the currently enabled datasources. From this 
view you can see the sizes and availability of the different datasources. You 
can edit the retention rules and drop data (as well as issue kill tasks).
 Like any view that is powered by a DruidSQL query you can click “Go to SQL” to 
run the underlying SQL query directly.
 
-![datasources](./img/02-datasources.png)
+![datasources](./img/04-datasources.png)
 
 You can view and edit retention rules to determine the general availability of 
a datasource.
 
-![retention](./img/03-retention.png)
+![retention](./img/05-retention.png)
 
 ## Segments
 
 The segment view shows every single segment in the cluster. Each segment can 
be expanded to provide more information. The Segment ID is also conveniently 
broken down into Datasource, Start, End, Version, and Partition columns for 
ease of filtering and sorting.
 
-![segments](./img/04-segments.png)
+![segments](./img/06-segments.png)
 
 ## Tasks and supervisors
 
 The task view is also the home of supervisors. From this view you can check 
the status of existing supervisors as well as suspend and resume them. You can 
also submit new supervisors by entering their JSON spec.
 
-![tasks-1](./img/05-tasks-1.png)
+![supervisors](./img/07-supervisors.png)
 
The tasks table allows you to see the currently running and recently completed 
tasks. From this table you can monitor individual tasks and also submit new 

[GitHub] [incubator-druid] fjy merged pull request #7720: [Backport] Update Druid Console docs for 0.15.0

2019-05-21 Thread GitBox
fjy merged pull request #7720: [Backport] Update Druid Console docs for 0.15.0
URL: https://github.com/apache/incubator-druid/pull/7720
 
 
   





[GitHub] [incubator-druid] fjy commented on issue #7720: [Backport] Update Druid Console docs for 0.15.0

2019-05-21 Thread GitBox
fjy commented on issue #7720: [Backport] Update Druid Console docs for 0.15.0
URL: https://github.com/apache/incubator-druid/pull/7720#issuecomment-494506099
 
 
   Merging given that TC is broken.





[incubator-druid] branch 0.15.0-incubating updated: Web console: fix missing value input in timestampSpec step (#7698) (#7721)

2019-05-21 Thread fjy
This is an automated email from the ASF dual-hosted git repository.

fjy pushed a commit to branch 0.15.0-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/0.15.0-incubating by this push:
 new f44496c  Web console: fix missing value input in timestampSpec step 
(#7698) (#7721)
f44496c is described below

commit f44496c89f318dc2d8771b37d3eeefb576207394
Author: Clint Wylie 
AuthorDate: Tue May 21 11:30:32 2019 -0700

Web console: fix missing value input in timestampSpec step (#7698) (#7721)
---
 web-console/src/utils/ingestion-spec.tsx | 27 +++
 web-console/src/views/load-data-view.tsx |  2 +-
 2 files changed, 24 insertions(+), 5 deletions(-)

diff --git a/web-console/src/utils/ingestion-spec.tsx 
b/web-console/src/utils/ingestion-spec.tsx
index f89dadb..8363d0d 100644
--- a/web-console/src/utils/ingestion-spec.tsx
+++ b/web-console/src/utils/ingestion-spec.tsx
@@ -250,11 +250,12 @@ const TIMESTAMP_SPEC_FORM_FIELDS: Field[] 
= [
   {
 name: 'column',
 type: 'string',
-isDefined: (timestampSpec: TimestampSpec) => 
isColumnTimestampSpec(timestampSpec)
+defaultValue: 'timestamp'
   },
   {
 name: 'format',
 type: 'string',
+defaultValue: 'auto',
 suggestions: ['auto'].concat(TIMESTAMP_FORMAT_VALUES),
 isDefined: (timestampSpec: TimestampSpec) => 
isColumnTimestampSpec(timestampSpec),
 info: 
@@ -264,12 +265,30 @@ const TIMESTAMP_SPEC_FORM_FIELDS: Field[] 
= [
   {
 name: 'missingValue',
 type: 'string',
-isDefined: (timestampSpec: TimestampSpec) => 
!isColumnTimestampSpec(timestampSpec)
+placeholder: '(optional)',
+info: 
+  This value will be used if the specified column can not be found.
+
   }
 ];
 
-export function getTimestampSpecFormFields() {
-  return TIMESTAMP_SPEC_FORM_FIELDS;
+const CONSTANT_TIMESTAMP_SPEC_FORM_FIELDS: Field<TimestampSpec>[] = [
+  {
+name: 'missingValue',
+label: 'Constant value',
+type: 'string',
+info: 
+  The dummy value that will be used as the timestamp.
+
+  }
+];
+
+export function getTimestampSpecFormFields(timestampSpec: TimestampSpec) {
+  if (isColumnTimestampSpec(timestampSpec)) {
+return TIMESTAMP_SPEC_FORM_FIELDS;
+  } else {
+return CONSTANT_TIMESTAMP_SPEC_FORM_FIELDS;
+  }
 }
 
 export function issueWithTimestampSpec(timestampSpec: TimestampSpec | 
undefined): string | null {
diff --git a/web-console/src/views/load-data-view.tsx 
b/web-console/src/views/load-data-view.tsx
index bbd9e76..ca144fc 100644
--- a/web-console/src/views/load-data-view.tsx
+++ b/web-console/src/views/load-data-view.tsx
@@ -1026,7 +1026,7 @@ export class LoadDataView extends 
React.Component
 
  {
 this.updateSpec(deepSet(spec, 
'dataSchema.parser.parseSpec.timestampSpec', timestampSpec));





[GitHub] [incubator-druid] fjy merged pull request #7721: [Backport] Web console: fix missing value input in timestampSpec step

2019-05-21 Thread GitBox
fjy merged pull request #7721: [Backport] Web console: fix missing value input 
in timestampSpec step
URL: https://github.com/apache/incubator-druid/pull/7721
 
 
   





[incubator-druid] branch master updated: Upgrade various build and doc links to https. (#7722)

2019-05-21 Thread fjy
This is an automated email from the ASF dual-hosted git repository.

fjy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new b694155  Upgrade various build and doc links to https. (#7722)
b694155 is described below

commit b6941551aee143074049c8187c84b84aa1014a49
Author: Gian Merlino 
AuthorDate: Tue May 21 11:30:14 2019 -0700

Upgrade various build and doc links to https. (#7722)

* Upgrade various build and doc links to https.

Where it wasn't possible to upgrade build-time dependencies to https,
I kept http in place but used hardcoded checksums or GPG keys to ensure
that artifacts fetched over http are verified properly.

* Switch to https://apache.org.
---
 distribution/docker/Dockerfile.mysql   |  2 +-
 .../extensions-core/datasketches-extension.md  |  2 +-
 .../extensions-core/datasketches-hll.md|  2 +-
 .../extensions-core/datasketches-quantiles.md  |  2 +-
 .../extensions-core/datasketches-theta.md  |  2 +-
 .../extensions-core/datasketches-tuple.md  |  2 +-
 docs/content/development/extensions.md |  2 +-
 docs/content/querying/aggregations.md  |  4 +-
 .../quickstart/tutorial/hadoop/docker/Dockerfile   |  5 +-
 .../tutorial/hadoop/docker/setup-zulu-repo.sh  | 67 ++
 extensions-contrib/ambari-metrics-emitter/pom.xml  |  2 +-
 extensions-core/avro-extensions/pom.xml|  2 +-
 extensions-core/datasketches/README.md |  2 +-
 extensions-core/datasketches/pom.xml   |  2 +-
 hll/pom.xml|  2 +-
 integration-tests/docker-base/setup.sh | 14 +++--
 pom.xml|  6 +-
 17 files changed, 95 insertions(+), 25 deletions(-)

diff --git a/distribution/docker/Dockerfile.mysql 
b/distribution/docker/Dockerfile.mysql
index d2d4288..5664dc8 100644
--- a/distribution/docker/Dockerfile.mysql
+++ b/distribution/docker/Dockerfile.mysql
@@ -21,7 +21,7 @@ ARG DRUID_RELEASE=druid/druid:0.14.0
 FROM $DRUID_RELEASE
 
 COPY sha256sums.txt /tmp
-RUN wget -O 
/opt/druid/extensions/mysql-metadata-storage/mysql-connector-java-5.1.38.jar 
http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar
 \
+RUN wget -O 
/opt/druid/extensions/mysql-metadata-storage/mysql-connector-java-5.1.38.jar 
https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar
 \
  && sed -e '/^#/d' /tmp/sha256sums.txt > /tmp/sha256sums-stripped.txt \
  && sha256sum -c /tmp/sha256sums-stripped.txt \
  && rm -f /opt/druid/lib/mysql-connector-java-5.1.38.jar \
diff --git a/docs/content/development/extensions-core/datasketches-extension.md 
b/docs/content/development/extensions-core/datasketches-extension.md
index 3a5b126..49ac225 100644
--- a/docs/content/development/extensions-core/datasketches-extension.md
+++ b/docs/content/development/extensions-core/datasketches-extension.md
@@ -24,7 +24,7 @@ title: "DataSketches extension"
 
 # DataSketches extension
 
-Apache Druid (incubating) aggregators based on 
[datasketches](http://datasketches.github.io/) library. Sketches are data 
structures implementing approximate streaming mergeable algorithms. Sketches 
can be ingested from the outside of Druid or built from raw data at ingestion 
time. Sketches can be stored in Druid segments as additive metrics.
+Apache Druid (incubating) aggregators based on 
[datasketches](https://datasketches.github.io/) library. Sketches are data 
structures implementing approximate streaming mergeable algorithms. Sketches 
can be ingested from the outside of Druid or built from raw data at ingestion 
time. Sketches can be stored in Druid segments as additive metrics.
 
 To use the datasketches aggregators, make sure you 
[include](../../operations/including-extensions.html) the extension in your 
config file:
 
diff --git a/docs/content/development/extensions-core/datasketches-hll.md 
b/docs/content/development/extensions-core/datasketches-hll.md
index 799cbc0..90e284f 100644
--- a/docs/content/development/extensions-core/datasketches-hll.md
+++ b/docs/content/development/extensions-core/datasketches-hll.md
@@ -24,7 +24,7 @@ title: "DataSketches HLL Sketch module"
 
 # DataSketches HLL Sketch module
 
-This module provides Apache Druid (incubating) aggregators for distinct 
counting based on HLL sketch from 
[datasketches](http://datasketches.github.io/) library. At ingestion time, this 
aggregator creates the HLL sketch objects to be stored in Druid segments. At 
query time, sketches are read and merged together. In the end, by default, you 
receive the estimate of the number of distinct values presented to the sketch. 
Also, you can use post aggregator to produce a union of sketch columns  [...]
+This module provides Apache 
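The commit message above notes that artifacts still fetched over http are verified against hardcoded checksums. A minimal Java sketch of that kind of SHA-256 pinning (hypothetical class and method names; assumes Java 17+ for `HexFormat`):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

public class ChecksumCheck
{
  // Compute the SHA-256 hex digest of a file, analogous to `sha256sum FILE`.
  static String sha256Hex(Path file) throws IOException, NoSuchAlgorithmException
  {
    byte[] digest = MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(file));
    return HexFormat.of().formatHex(digest);
  }

  // Verify a downloaded artifact against a pinned checksum, analogous to
  // the `sha256sum -c sha256sums.txt` step in the Dockerfile; fail loudly on mismatch.
  static void verify(Path file, String expectedSha256) throws IOException, NoSuchAlgorithmException
  {
    String actual = sha256Hex(file);
    if (!actual.equalsIgnoreCase(expectedSha256)) {
      throw new IllegalStateException("Checksum mismatch for " + file + ": got " + actual);
    }
  }
}
```

The design point is the same as in the PR: when the transport (http) cannot be trusted, integrity comes from a checksum committed alongside the build scripts rather than from the connection.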

[GitHub] [incubator-druid] fjy merged pull request #7722: Upgrade various build and doc links to https.

2019-05-21 Thread GitBox
fjy merged pull request #7722: Upgrade various build and doc links to https.
URL: https://github.com/apache/incubator-druid/pull/7722
 
 
   





[GitHub] [incubator-druid] fjy commented on issue #7722: Upgrade various build and doc links to https.

2019-05-21 Thread GitBox
fjy commented on issue #7722: Upgrade various build and doc links to https.
URL: https://github.com/apache/incubator-druid/pull/7722#issuecomment-494505841
 
 
   Merging given that TC is broken.





[GitHub] [incubator-druid] himanshug commented on a change in pull request #7722: Upgrade various build and doc links to https.

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7722: Upgrade various build 
and doc links to https.
URL: https://github.com/apache/incubator-druid/pull/7722#discussion_r286155773
 
 

 ##
 File path: examples/quickstart/tutorial/hadoop/docker/setup-zulu-repo.sh
 ##
 @@ -0,0 +1,67 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# Script to set up the Azul Zulu JDK yum repository.
+#
+
+# Hardcode GPG key so we don't have to fetch it over http.
 
 Review comment:
   Ah, now I see why we are using yum there. sure, feel free to keep it in that 
case.





[GitHub] [incubator-druid] gianm commented on a change in pull request #7722: Upgrade various build and doc links to https.

2019-05-21 Thread GitBox
gianm commented on a change in pull request #7722: Upgrade various build and 
doc links to https.
URL: https://github.com/apache/incubator-druid/pull/7722#discussion_r286128710
 
 

 ##
 File path: integration-tests/docker-base/setup.sh
 ##
 @@ -34,14 +34,30 @@ apt-get install -y mysql-server
 apt-get install -y supervisor
 
 # Zookeeper
-wget -q -O - 
http://www.us.apache.org/dist/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz
 | tar -xzf - -C /usr/local \
-  && cp /usr/local/zookeeper-3.4.13/conf/zoo_sample.cfg 
/usr/local/zookeeper-3.4.13/conf/zoo.cfg \
-  && ln -s /usr/local/zookeeper-3.4.13 /usr/local/zookeeper
+wget -q -O /tmp/zookeeper-3.4.14.tar.gz 
"http://www.us.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz"
 
 Review comment:
   OK, updated.





[GitHub] [incubator-druid] gianm commented on a change in pull request #7722: Upgrade various build and doc links to https.

2019-05-21 Thread GitBox
gianm commented on a change in pull request #7722: Upgrade various build and 
doc links to https.
URL: https://github.com/apache/incubator-druid/pull/7722#discussion_r286128681
 
 

 ##
 File path: examples/quickstart/tutorial/hadoop/docker/setup-zulu-repo.sh
 ##
 @@ -0,0 +1,67 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# Script to set up the Azul Zulu JDK yum repository.
+#
+
+# Hardcode GPG key so we don't have to fetch it over http.
 
 Review comment:
   Do you mind if we keep the yum repo? That way, it'll always use the latest 
jdk.





[GitHub] [incubator-druid] himanshug commented on a change in pull request #7722: Upgrade various build and doc links to https.

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7722: Upgrade various build 
and doc links to https.
URL: https://github.com/apache/incubator-druid/pull/7722#discussion_r286100060
 
 

 ##
 File path: examples/quickstart/tutorial/hadoop/docker/setup-zulu-repo.sh
 ##
 @@ -0,0 +1,67 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# Script to set up the Azul Zulu JDK yum repository.
+#
+
+# Hardcode GPG key so we don't have to fetch it over http.
 
 Review comment:
   if we are using them only for jre , we can directly download same from 
https://cdn.azul.com/zulu/bin/zulu8.38.0.13-ca-jre8.0.212-linux_x64.tar.gz and 
not configure the yum repo.
   Ref: https://www.azul.com/downloads/zulu/





[GitHub] [incubator-druid] himanshug commented on a change in pull request #7722: Upgrade various build and doc links to https.

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7722: Upgrade various build 
and doc links to https.
URL: https://github.com/apache/incubator-druid/pull/7722#discussion_r286095404
 
 

 ##
 File path: integration-tests/docker-base/setup.sh
 ##
 @@ -34,14 +34,30 @@ apt-get install -y mysql-server
 apt-get install -y supervisor
 
 # Zookeeper
-wget -q -O - 
http://www.us.apache.org/dist/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz
 | tar -xzf - -C /usr/local \
-  && cp /usr/local/zookeeper-3.4.13/conf/zoo_sample.cfg 
/usr/local/zookeeper-3.4.13/conf/zoo.cfg \
-  && ln -s /usr/local/zookeeper-3.4.13 /usr/local/zookeeper
+wget -q -O /tmp/zookeeper-3.4.14.tar.gz 
"http://www.us.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz"
 
 Review comment:
   sure





[GitHub] [incubator-druid] himanshug commented on a change in pull request #7722: Upgrade various build and doc links to https.

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7722: Upgrade various build 
and doc links to https.
URL: https://github.com/apache/incubator-druid/pull/7722#discussion_r286093387
 
 

 ##
 File path: integration-tests/docker-base/setup.sh
 ##
 @@ -34,14 +34,30 @@ apt-get install -y mysql-server
 apt-get install -y supervisor
 
 # Zookeeper
-wget -q -O - 
http://www.us.apache.org/dist/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz
 | tar -xzf - -C /usr/local \
-  && cp /usr/local/zookeeper-3.4.13/conf/zoo_sample.cfg 
/usr/local/zookeeper-3.4.13/conf/zoo.cfg \
-  && ln -s /usr/local/zookeeper-3.4.13 /usr/local/zookeeper
+wget -q -O /tmp/zookeeper-3.4.14.tar.gz 
"http://www.us.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz"
 
 Review comment:
   because that is "https" , we can probably do same for kafka url below.





[GitHub] [incubator-druid] gianm commented on a change in pull request #7722: Upgrade various build and doc links to https.

2019-05-21 Thread GitBox
gianm commented on a change in pull request #7722: Upgrade various build and 
doc links to https.
URL: https://github.com/apache/incubator-druid/pull/7722#discussion_r286094458
 
 

 ##
 File path: integration-tests/docker-base/setup.sh
 ##
 @@ -34,14 +34,30 @@ apt-get install -y mysql-server
 apt-get install -y supervisor
 
 # Zookeeper
-wget -q -O - http://www.us.apache.org/dist/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz | tar -xzf - -C /usr/local \
-  && cp /usr/local/zookeeper-3.4.13/conf/zoo_sample.cfg /usr/local/zookeeper-3.4.13/conf/zoo.cfg \
-  && ln -s /usr/local/zookeeper-3.4.13 /usr/local/zookeeper
+wget -q -O /tmp/zookeeper-3.4.14.tar.gz "http://www.us.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz"
 
 Review comment:
   Ah I didn't notice the https. I guess I can remove the shasum checks too. 
What do you all think?





[GitHub] [incubator-druid] himanshug commented on a change in pull request #7722: Upgrade various build and doc links to https.

2019-05-21 Thread GitBox
himanshug commented on a change in pull request #7722: Upgrade various build 
and doc links to https.
URL: https://github.com/apache/incubator-druid/pull/7722#discussion_r286093387
 
 

 ##
 File path: integration-tests/docker-base/setup.sh
 ##
 @@ -34,14 +34,30 @@ apt-get install -y mysql-server
 apt-get install -y supervisor
 
 # Zookeeper
-wget -q -O - http://www.us.apache.org/dist/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz | tar -xzf - -C /usr/local \
-  && cp /usr/local/zookeeper-3.4.13/conf/zoo_sample.cfg /usr/local/zookeeper-3.4.13/conf/zoo.cfg \
-  && ln -s /usr/local/zookeeper-3.4.13 /usr/local/zookeeper
+wget -q -O /tmp/zookeeper-3.4.14.tar.gz "http://www.us.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz"
 
 Review comment:
   probably because that is "https", we can do the same for the kafka URL below.





[GitHub] [incubator-druid] leventov commented on issue #7653: Refactor SQLMetadataSegmentManager; Change contract of REST methods in DataSourcesResource

2019-05-21 Thread GitBox
leventov commented on issue #7653: Refactor SQLMetadataSegmentManager; Change 
contract of REST methods in DataSourcesResource
URL: https://github.com/apache/incubator-druid/pull/7653#issuecomment-49252
 
 
   @gianm @surekhasaharan @dampcake @egor-ryashin could you please review this PR (at least the respective parts for which you are mentioned in the first message)?





[GitHub] [incubator-druid] leventov commented on issue #7571: Optimize coordinator API to retrieve segments with overshadowed status

2019-05-21 Thread GitBox
leventov commented on issue #7571: Optimize coordinator API to retrieve 
segments with overshadowed status
URL: 
https://github.com/apache/incubator-druid/issues/7571#issuecomment-494443286
 
 
   Added to the 0.15 milestone because if `SegmentWithOvershadowedStatus` leaks into the Druid 0.15 API, it will be much harder to remove later (it will require transition PRs through several consecutive Druid versions, and temporary glue code). So I would really like to see it go away before Druid 0.15 if other people agree that this needs to be done. Please provide arguments if you disagree.





[GitHub] [incubator-druid] leventov commented on a change in pull request #7595: Optimize overshadowed segments computation

2019-05-21 Thread GitBox
leventov commented on a change in pull request #7595: Optimize overshadowed 
segments computation
URL: https://github.com/apache/incubator-druid/pull/7595#discussion_r286089955
 
 

 ##
 File path: 
server/src/main/java/org/apache/druid/server/coordinator/helper/DruidCoordinatorRuleRunner.java
 ##
 @@ -84,8 +85,10 @@ public DruidCoordinatorRuntimeParams 
run(DruidCoordinatorRuntimeParams params)
 // find available segments which are not overshadowed by other segments in 
DB
 // only those would need to be loaded/dropped
 // anything overshadowed by served segments is dropped automatically by 
DruidCoordinatorCleanupOvershadowed
-final Set overshadowed = ImmutableDruidDataSource
-.determineOvershadowedSegments(params.getAvailableSegments());
+// If metadata store hasn't been polled yet, use empty overshadowed list
+final Collection overshadowed = Optional
+
.ofNullable(coordinator.getMetadataSegmentManager().findOvershadowedSegments())
 
 Review comment:
   Why are you reluctant to make changes to `DataSegment`? You refer to the 
design as "debated" or "not settled", well, we should resolve the debate and 
settle somewhere. I expressed my view in #7571. If you disagree with some parts 
of it, please respond in that issue.
   
   I'm concerned with keeping the status quo because if `DataSegmentWithOvershadowedStatus` leaks into the Druid 0.15 API, it will be much harder (transition PRs, compatibility, etc.) to remove it eventually. So I really want it to go away before Druid 0.15 is released.
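   To illustrate the null-safe fallback pattern in the quoted diff, here is a minimal, self-contained sketch. The stand-in `findOvershadowedSegments` and the segment names are hypothetical; the real code calls into the coordinator's `MetadataSegmentManager`, which returns `null` until the first metadata-store poll completes.

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Optional;
import java.util.Set;

public class Main {
    // Hypothetical stand-in for the metadata segment manager: returns null
    // until the first metadata-store poll has completed.
    static Set<String> findOvershadowedSegments(boolean polledYet) {
        return polledYet ? Collections.singleton("segment-1") : null;
    }

    // The pattern from the quoted diff: fall back to an empty collection
    // when the metadata store has not been polled yet.
    static Collection<String> overshadowedOrEmpty(boolean polledYet) {
        return Optional.ofNullable(findOvershadowedSegments(polledYet))
                       .orElse(Collections.emptySet());
    }

    public static void main(String[] args) {
        System.out.println(overshadowedOrEmpty(false).size()); // 0 before the first poll
        System.out.println(overshadowedOrEmpty(true).size());  // 1 after polling
    }
}
```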





[GitHub] [incubator-druid] xvrl commented on a change in pull request #7722: Upgrade various build and doc links to https.

2019-05-21 Thread GitBox
xvrl commented on a change in pull request #7722: Upgrade various build and doc 
links to https.
URL: https://github.com/apache/incubator-druid/pull/7722#discussion_r286088776
 
 

 ##
 File path: examples/quickstart/tutorial/hadoop/docker/setup-zulu-repo.sh
 ##
 @@ -0,0 +1,67 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# Script to set up the Azul Zulu JDK yum repository.
+#
+
+# Hardcode GPG key so we don't have to fetch it over http.
 
 Review comment:
   how can you be sure the one you got was legit ;) ?





[GitHub] [incubator-druid] gianm commented on a change in pull request #7685: Remove unnecessary principal handling in KerberosAuthenticator

2019-05-21 Thread GitBox
gianm commented on a change in pull request #7685: Remove unnecessary principal 
handling in KerberosAuthenticator
URL: https://github.com/apache/incubator-druid/pull/7685#discussion_r286087615
 
 

 ##
 File path: 
extensions-core/druid-kerberos/src/main/java/org/apache/druid/security/kerberos/KerberosAuthenticator.java
 ##
 @@ -230,56 +218,36 @@ protected AuthenticationToken 
getToken(HttpServletRequest request) throws Authen
   public void doFilter(ServletRequest request, ServletResponse response, 
FilterChain filterChain)
   throws IOException, ServletException
   {
-HttpServletRequest httpReq = (HttpServletRequest) request;
-
 // If there's already an auth result, then we have authenticated 
already, skip this.
 if (request.getAttribute(AuthConfig.DRUID_AUTHENTICATION_RESULT) != 
null) {
   filterChain.doFilter(request, response);
   return;
 }
 
+// In the hadoop-auth 2.7.3 code that this was adapted from, the login 
would've occurred during init() of
+// the AuthenticationFilter via 
`initializeAuthHandler(authHandlerClassName, filterConfig)`.
+// Since we co-exist with other authentication schemes, don't login 
until we've checked that
+// some other Authenticator didn't already validate this request.
 if (loginContext == null) {
   initializeKerberosLogin();
 }
 
+// Checking for excluded paths is Druid-specific, not from hadoop-auth
 String path = ((HttpServletRequest) request).getRequestURI();
 if (isExcluded(path)) {
   filterChain.doFilter(request, response);
 } else {
-  String clientPrincipal;
-  try {
-Cookie[] cookies = httpReq.getCookies();
-if (cookies == null) {
-  clientPrincipal = 
getPrincipalFromRequestNew((HttpServletRequest) request);
-} else {
-  clientPrincipal = null;
-  for (Cookie cookie : cookies) {
-if ("hadoop.auth".equals(cookie.getName())) {
-  Matcher matcher = 
HADOOP_AUTH_COOKIE_REGEX.matcher(cookie.getValue());
-  if (matcher.matches()) {
-clientPrincipal = matcher.group(1);
-break;
-  }
-}
-  }
-}
-  }
-  catch (Exception ex) {
-clientPrincipal = null;
-  }
-
-  if (clientPrincipal != null) {
-request.setAttribute(
-AuthConfig.DRUID_AUTHENTICATION_RESULT,
-new AuthenticationResult(clientPrincipal, authorizerName, 
name, null)
-);
-  }
+  // Run the original doFilter method, but with modifications to error 
handling
+  doFilterSuper(request, response, filterChain);
 }
-
-doFilterSuper(request, response, filterChain);
 
 Review comment:
   This used to call doFilterSuper in both branches of the above `if`, but now it will only do so if `!isExcluded(path)`. Is this new behavior better?
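   To make the behavior change concrete, here is a toy model of the two control flows being compared (the stand-ins are hypothetical; the real code deals with `HttpServletRequest`, `FilterChain`, and the Kerberos `AuthenticationFilter`):

```java
public class Main {
    static int doFilterSuperCalls = 0;

    // Hypothetical exclusion rule, standing in for KerberosAuthenticator.isExcluded().
    static boolean isExcluded(String path) {
        return path.startsWith("/status");
    }

    static void doFilterSuper() {
        doFilterSuperCalls++;
    }

    // Old flow: doFilterSuper ran after BOTH branches of the if.
    static void doFilterOld(String path) {
        if (isExcluded(path)) {
            // filterChain.doFilter(request, response)
        } else {
            // principal extraction happened here
        }
        doFilterSuper(); // always reached
    }

    // New flow: doFilterSuper runs only when the path is NOT excluded.
    static void doFilterNew(String path) {
        if (isExcluded(path)) {
            // filterChain.doFilter(request, response) only
        } else {
            doFilterSuper();
        }
    }

    public static void main(String[] args) {
        doFilterOld("/status/health");
        System.out.println(doFilterSuperCalls); // 1: old code ran doFilterSuper on excluded paths too
        doFilterSuperCalls = 0;
        doFilterNew("/status/health");
        System.out.println(doFilterSuperCalls); // 0: excluded paths now skip doFilterSuper
    }
}
```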





[GitHub] [incubator-druid] gianm commented on a change in pull request #7722: Upgrade various build and doc links to https.

2019-05-21 Thread GitBox
gianm commented on a change in pull request #7722: Upgrade various build and 
doc links to https.
URL: https://github.com/apache/incubator-druid/pull/7722#discussion_r286086695
 
 

 ##
 File path: integration-tests/docker-base/setup.sh
 ##
 @@ -34,14 +34,30 @@ apt-get install -y mysql-server
 apt-get install -y supervisor
 
 # Zookeeper
-wget -q -O - http://www.us.apache.org/dist/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz | tar -xzf - -C /usr/local \
-  && cp /usr/local/zookeeper-3.4.13/conf/zoo_sample.cfg /usr/local/zookeeper-3.4.13/conf/zoo.cfg \
-  && ln -s /usr/local/zookeeper-3.4.13 /usr/local/zookeeper
+wget -q -O /tmp/zookeeper-3.4.14.tar.gz "http://www.us.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz"
 
 Review comment:
   Sure, is it better for some reason?





[GitHub] [incubator-druid] leventov commented on a change in pull request #7595: Optimize overshadowed segments computation

2019-05-21 Thread GitBox
leventov commented on a change in pull request #7595: Optimize overshadowed 
segments computation
URL: https://github.com/apache/incubator-druid/pull/7595#discussion_r286086550
 
 

 ##
 File path: 
server/src/main/java/org/apache/druid/metadata/SQLMetadataSegmentManager.java
 ##
 @@ -744,6 +757,32 @@ public DataSegment map(int index, ResultSet r, 
StatementContext ctx) throws SQLE
 
 // Replace "dataSources" atomically.
 dataSources = newDataSources;
+overshadowedSegments = 
ImmutableSet.copyOf(determineOvershadowedSegments(segments));
 
 Review comment:
   > when some dataSources are enabled or disabled outside doPoll
   
   Yes. Also note that there are additional changes here: #7653 (see section 
"SQLMetadataSegmentManager: remove data from dataSources in 
markAsUnusedSegmentsInInterval and markSegmentsAsUnused"), which were missed in 
#7490.
   
   > Also, if they do become invalid, would a comment be enough, or something 
should be done in code to prevent invalid overshadowed segments ?
   
   I think `overshadowedSegments` should become invalidated.
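   A minimal sketch of the invalidation approach suggested above, assuming a single volatile field holding an immutable snapshot (the class and method names here are simplified, hypothetical stand-ins for `SQLMetadataSegmentManager`):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class Main {
    // The snapshot is replaced atomically on each poll and invalidated
    // (set to null) when segments are mutated outside the poll.
    private volatile Set<String> overshadowedSegments = null;

    void doPoll(Set<String> computed) {
        // Single volatile write: readers see either the old or the new snapshot.
        overshadowedSegments = Collections.unmodifiableSet(new HashSet<>(computed));
    }

    void markSegmentsAsUnused() {
        // The cached snapshot may no longer be accurate; invalidate it.
        overshadowedSegments = null;
    }

    /** Returns null when no valid snapshot exists (not polled yet, or invalidated). */
    Set<String> getOvershadowedSegments() {
        return overshadowedSegments;
    }

    public static void main(String[] args) {
        Main m = new Main();
        System.out.println(m.getOvershadowedSegments()); // null before the first poll
        m.doPoll(new HashSet<>(Arrays.asList("seg1")));
        System.out.println(m.getOvershadowedSegments()); // [seg1]
        m.markSegmentsAsUnused();
        System.out.println(m.getOvershadowedSegments()); // null again: invalidated
    }
}
```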





[GitHub] [incubator-druid] b-slim commented on a change in pull request #7722: Upgrade various build and doc links to https.

2019-05-21 Thread GitBox
b-slim commented on a change in pull request #7722: Upgrade various build and 
doc links to https.
URL: https://github.com/apache/incubator-druid/pull/7722#discussion_r286080941
 
 

 ##
 File path: integration-tests/docker-base/setup.sh
 ##
 @@ -34,14 +34,30 @@ apt-get install -y mysql-server
 apt-get install -y supervisor
 
 # Zookeeper
-wget -q -O - http://www.us.apache.org/dist/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz | tar -xzf - -C /usr/local \
-  && cp /usr/local/zookeeper-3.4.13/conf/zoo_sample.cfg /usr/local/zookeeper-3.4.13/conf/zoo.cfg \
-  && ln -s /usr/local/zookeeper-3.4.13 /usr/local/zookeeper
+wget -q -O /tmp/zookeeper-3.4.14.tar.gz "http://www.us.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz"
 
 Review comment:
   how about using this one 
https://apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz





[incubator-druid] branch master updated: Add TIMESTAMPDIFF sql support (#7695)

2019-05-21 Thread gian
This is an automated email from the ASF dual-hosted git repository.

gian pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new dd7dace  Add TIMESTAMPDIFF sql support (#7695)
dd7dace is described below

commit dd7dace70a26a6fdcf5617957f508b6fe7e176bc
Author: Xue Yu <278006...@qq.com>
AuthorDate: Tue May 21 23:05:38 2019 +0800

Add TIMESTAMPDIFF sql support (#7695)

* add timestampdiff sql support

* feedback address
---
 .../apache/druid/java/util/common/DateTimes.java   |  9 
 .../java/org/apache/druid/math/expr/Function.java  | 29 
 docs/content/querying/sql.md   |  3 +-
 .../expression/builtin/CastOperatorConversion.java |  8 
 .../builtin/TimeArithmeticOperatorConversion.java  | 51 +-
 .../sql/calcite/planner/DruidConvertletTable.java  |  1 +
 .../apache/druid/sql/calcite/CalciteQueryTest.java | 44 +++
 7 files changed, 132 insertions(+), 13 deletions(-)

diff --git 
a/core/src/main/java/org/apache/druid/java/util/common/DateTimes.java 
b/core/src/main/java/org/apache/druid/java/util/common/DateTimes.java
index de1fc40..c1718bd 100644
--- a/core/src/main/java/org/apache/druid/java/util/common/DateTimes.java
+++ b/core/src/main/java/org/apache/druid/java/util/common/DateTimes.java
@@ -23,6 +23,7 @@ import io.netty.util.SuppressForbidden;
 import org.joda.time.Chronology;
 import org.joda.time.DateTime;
 import org.joda.time.DateTimeZone;
+import org.joda.time.Months;
 import org.joda.time.chrono.ISOChronology;
 import org.joda.time.format.DateTimeFormatter;
 import org.joda.time.format.ISODateTimeFormat;
@@ -146,6 +147,14 @@ public final class DateTimes
 return dt1.compareTo(dt2) < 0 ? dt1 : dt2;
   }
 
+  public static int subMonths(long timestamp1, long timestamp2, DateTimeZone 
timeZone)
+  {
+DateTime time1 = new DateTime(timestamp1, timeZone);
+DateTime time2 = new DateTime(timestamp2, timeZone);
+
+return Months.monthsBetween(time1, time2).getMonths();
+  }
+
   private DateTimes()
   {
   }
diff --git a/core/src/main/java/org/apache/druid/math/expr/Function.java 
b/core/src/main/java/org/apache/druid/math/expr/Function.java
index 31cdd8e..14aa44b 100644
--- a/core/src/main/java/org/apache/druid/math/expr/Function.java
+++ b/core/src/main/java/org/apache/druid/math/expr/Function.java
@@ -24,6 +24,7 @@ import org.apache.druid.java.util.common.DateTimes;
 import org.apache.druid.java.util.common.IAE;
 import org.apache.druid.java.util.common.StringUtils;
 import org.joda.time.DateTime;
+import org.joda.time.DateTimeZone;
 import org.joda.time.format.DateTimeFormat;
 
 import java.math.BigDecimal;
@@ -1424,4 +1425,32 @@ interface Function
 }
   }
 
+  class SubMonthFunc implements Function
+  {
+@Override
+public String name()
+{
+  return "subtract_months";
+}
+
+@Override
+public ExprEval apply(List<Expr> args, Expr.ObjectBinding bindings)
+{
+  if (args.size() != 3) {
+throw new IAE("Function[%s] needs 3 arguments", name());
+  }
+
+  Long left = args.get(0).eval(bindings).asLong();
+  Long right = args.get(1).eval(bindings).asLong();
+  DateTimeZone timeZone = 
DateTimes.inferTzFromString(args.get(2).eval(bindings).asString());
+
+  if (left == null || right == null) {
+return ExprEval.of(null);
+  } else {
+return ExprEval.of(DateTimes.subMonths(right, left, timeZone));
+  }
+
+}
+  }
+
 }
diff --git a/docs/content/querying/sql.md b/docs/content/querying/sql.md
index 169fa47..5b6e308 100644
--- a/docs/content/querying/sql.md
+++ b/docs/content/querying/sql.md
@@ -230,6 +230,7 @@ over the connection time zone.
 |`FLOOR(timestamp_expr TO <unit>)`|Rounds down a timestamp, returning it as a new timestamp. Unit can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, or YEAR.|
 |`CEIL(timestamp_expr TO <unit>)`|Rounds up a timestamp, returning it as a new timestamp. Unit can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, or YEAR.|
 |`TIMESTAMPADD(<unit>, <count>, <timestamp>)`|Equivalent to `timestamp + count * INTERVAL '1' UNIT`.|
+|`TIMESTAMPDIFF(<unit>, <timestamp1>, <timestamp2>)`|Returns the (signed) number of `unit` between `timestamp1` and `timestamp2`. Unit can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, or YEAR.|
 |`timestamp_expr { + | - } <interval_expr>`|Add or subtract an amount of time from a timestamp. interval_expr can include interval literals like `INTERVAL '2' HOUR`, and may include interval arithmetic as well. This operator treats days as uniformly 86400 seconds long, and does not take into account daylight savings time. To account for daylight savings time, use TIME_SHIFT instead.|
 
 ### Comparison operators
@@ -744,4 +745,4 @@ Broker will emit the following metrics for SQL.
 
 ## Authorization Permissions
 
-Please see [Defining SQL 
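The `subtract_months` function in this commit computes a signed whole-month difference in an explicit time zone. A rough `java.time` analogue of that logic (an assumption-level sketch; the commit itself uses Joda-Time's `Months.monthsBetween`, not `java.time`) could look like:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;

public class Main {
    // Signed whole-month difference between two epoch-millisecond timestamps,
    // computed in an explicit time zone (as TIMESTAMPDIFF(MONTH, ...) requires).
    static long subMonths(long timestamp1Millis, long timestamp2Millis, ZoneId zone) {
        ZonedDateTime t1 = Instant.ofEpochMilli(timestamp1Millis).atZone(zone);
        ZonedDateTime t2 = Instant.ofEpochMilli(timestamp2Millis).atZone(zone);
        return ChronoUnit.MONTHS.between(t1, t2);
    }

    public static void main(String[] args) {
        ZoneId utc = ZoneId.of("UTC");
        long jan1 = Instant.parse("2019-01-01T00:00:00Z").toEpochMilli();
        long may1 = Instant.parse("2019-05-01T00:00:00Z").toEpochMilli();
        System.out.println(subMonths(jan1, may1, utc)); // 4
        System.out.println(subMonths(may1, jan1, utc)); // -4 (the difference is signed)
    }
}
```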

[GitHub] [incubator-druid] gianm merged pull request #7695: Add TIMESTAMPDIFF sql support

2019-05-21 Thread GitBox
gianm merged pull request #7695: Add TIMESTAMPDIFF sql support
URL: https://github.com/apache/incubator-druid/pull/7695
 
 
   





[GitHub] [incubator-druid] gianm opened a new pull request #7722: Upgrade various build and doc links to https.

2019-05-21 Thread GitBox
gianm opened a new pull request #7722: Upgrade various build and doc links to 
https.
URL: https://github.com/apache/incubator-druid/pull/7722
 
 
   Where it wasn't possible to upgrade build-time dependencies to https,
   I kept http in place but used hardcoded checksums or GPG keys to ensure
   that artifacts fetched over http are verified properly.





[GitHub] [incubator-druid] sanastas commented on issue #7676: Add OakIncrementalIndex to Druid

2019-05-21 Thread GitBox
sanastas commented on issue #7676: Add OakIncrementalIndex to Druid
URL: https://github.com/apache/incubator-druid/pull/7676#issuecomment-494413285
 
 
   Here we have some results for 
org.apache.druid.benchmark.indexing.IndexIngestionBenchmark 
(IndexIngestionBenchmark)
   
   We try to insert 3M rows, which should be about 4GB of data. We try to give the same amount of data to the Oak case and to the native Druid IncrementalIndex. Note that, as IndexIngestionBenchmark is written, the rows are generated prior to the benchmark and hold almost 4GB of on-heap memory anyway. So the native Druid IncrementalIndex has some advantage, as it just needs to reference something already on-heap, while Oak needs to copy the data and therefore needs additional memory. A lot of on-heap space also goes to the StringIndexer and other structures.
   
   
[IngestionOakvsIncrIdx.pdf](https://github.com/apache/incubator-druid/files/3203010/IngestionOakvsIncrIdx.pdf)
   
   Finally, this is single-threaded. We expect Oak to show a much bigger advantage in a multithreaded case, which we will describe shortly.
   
   The command lines for reference:
   ```
   java -Xmx15g -XX:MaxDirectMemorySize=0g -jar 
benchmarks/target/benchmarks.jar IndexIngestionBenchmark -p 
rowsPerSegment=300 -p indexType=onheap
   java -Xmx9g -XX:MaxDirectMemorySize=6g -jar benchmarks/target/benchmarks.jar 
IndexIngestionBenchmark -p rowsPerSegment=300 -p indexType=oak
   ```





[GitHub] [incubator-druid] clintropolis opened a new pull request #7721: [Backport] Web console: fix missing value input in timestampSpec step

2019-05-21 Thread GitBox
clintropolis opened a new pull request #7721: [Backport] Web console: fix 
missing value input in timestampSpec step
URL: https://github.com/apache/incubator-druid/pull/7721
 
 
   Backport of #7698 to 0.15.0-incubating.





[GitHub] [incubator-druid] clintropolis opened a new pull request #7720: [Backport] Update Druid Console docs for 0.15.0

2019-05-21 Thread GitBox
clintropolis opened a new pull request #7720: [Backport] Update Druid Console 
docs for 0.15.0
URL: https://github.com/apache/incubator-druid/pull/7720
 
 
   Backport of #7697 to 0.15.0-incubating.





[GitHub] [incubator-druid] clintropolis opened a new pull request #7719: add bloom filter fallback aggregator when types are unknown

2019-05-21 Thread GitBox
clintropolis opened a new pull request #7719: add bloom filter fallback 
aggregator when types are unknown
URL: https://github.com/apache/incubator-druid/pull/7719
 
 
   I discovered an issue similar to #7660 while working on #7718 with the bloom filter aggregator, where it behaved in an even stricter manner than the quantiles aggregator, not working at all if `ColumnCapabilities` are not available. This PR remedies the issue by adding a fallback aggregator, `ObjectBloomFilterAggregator`, which examines the objects and aggregates to the best of its ability.
   
   This (and many other) aggregators could perhaps be improved by using something like a functional interface inside `bufferAdd`, with the initial version of the function checking types and then locking in a selector-specialized function after the first non-null value. However, since I'm unsure whether the cost of the check is significant relative to the rest of the work, and since this is not the only aggregator using this per-row check, I'll save exploring this optimization for future work revisiting complex value aggregators as a whole.
   
   The added test only works for group by v2 because the bloom filter aggregator only has stub methods for its `ComplexMetricSerde`, which group by v1 requires to be a bit more fully implemented to perform nested queries; this results in some confusing `Bloom filter aggregators are query-time only` error messages that should probably be fixed in a follow-up PR.





[GitHub] [incubator-druid] clintropolis opened a new pull request #7718: allow quantiles merge aggregator to also accept doubles

2019-05-21 Thread GitBox
clintropolis opened a new pull request #7718: allow quantiles merge aggregator 
to also accept doubles
URL: https://github.com/apache/incubator-druid/pull/7718
 
 
   Fixes #7660.
   
   This PR just does the quicker fix, modifying the merge aggregator to allow 
it to operate on either `DoubleSketch` selectors or `Double` selectors.
   
   The added test triggers a failure of the form:
   
   ```
   java.lang.ClassCastException: java.lang.Double cannot be cast to 
com.yahoo.sketches.quantiles.DoublesSketch
   ```
   
   without the modification of this patch.
   
   While investigating this issue, I have been reviewing many of the complex value aggregators, which are not especially consistent with each other in terms of usage and construction, but all have very similar needs: being able to build complex values, and being able to merge complex values. Refactoring to find a common pattern feels out of scope for fixing this issue, though, so I will hopefully revisit this in a follow-up proposal or PR.
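   The fix described above can be sketched with a type-dispatching aggregate method. This is a minimal, hypothetical sketch: `SimpleUnion` stands in for the DataSketches `DoublesUnion`, and the real selector and sketch types differ.

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Hypothetical stand-in for a quantiles union; the real code merges
    // com.yahoo.sketches.quantiles.DoublesSketch objects.
    static class SimpleUnion {
        final List<Double> values = new ArrayList<>();
        void update(double v) { values.add(v); }
        void merge(SimpleUnion other) { values.addAll(other.values); }
    }

    static class MergeAggregator {
        final SimpleUnion union = new SimpleUnion();

        // The essence of the fix: accept either an already-built sketch
        // (the merge path) or a raw Double (the build path), instead of
        // blindly casting and hitting a ClassCastException.
        void aggregate(Object selected) {
            if (selected == null) {
                return;
            } else if (selected instanceof SimpleUnion) {
                union.merge((SimpleUnion) selected);
            } else if (selected instanceof Number) {
                union.update(((Number) selected).doubleValue());
            } else {
                throw new ClassCastException("cannot aggregate " + selected.getClass());
            }
        }

        int count() { return union.values.size(); }
    }

    public static void main(String[] args) {
        MergeAggregator agg = new MergeAggregator();
        agg.aggregate(1.5);               // raw double: build path
        SimpleUnion other = new SimpleUnion();
        other.update(2.5);
        agg.aggregate(other);             // sketch: merge path
        System.out.println(agg.count()); // 2
    }
}
```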





[GitHub] [incubator-druid] awelsh93 opened a new pull request #7717: Adding influxdb emitter as a contrib extension

2019-05-21 Thread GitBox
awelsh93 opened a new pull request #7717: Adding influxdb emitter as a contrib 
extension
URL: https://github.com/apache/incubator-druid/pull/7717
 
 
   This pull request adds the influxdb emitter extension which we've been using 
successfully to monitor our clusters (>100 nodes) for over a year. We would 
like to contribute this back to the druid community.
   
   For more info on the extension, please see the docs.





[incubator-druid] branch master updated: Update Druid Console docs for 0.15.0 (#7697)

2019-05-21 Thread cwylie
This is an automated email from the ASF dual-hosted git repository.

cwylie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 1563229  Update Druid Console docs for 0.15.0 (#7697)
1563229 is described below

commit 156322932f124e953f60c248cc2eecfc0e26d936
Author: Vadim Ogievetsky 
AuthorDate: Tue May 21 04:00:42 2019 -0700

Update Druid Console docs for 0.15.0 (#7697)

* Update Druid Console docs for 0.15.0

* SQL -> query

* added links and fix typos
---
 docs/content/operations/druid-console.md |  48 +--
 docs/content/operations/img/01-home-view.png | Bin 60287 -> 58587 bytes
 docs/content/operations/img/02-data-loader-1.png | Bin 0 -> 68576 bytes
 docs/content/operations/img/02-datasources.png   | Bin 163824 -> 0 bytes
 docs/content/operations/img/03-data-loader-2.png | Bin 0 -> 456607 bytes
 docs/content/operations/img/03-retention.png | Bin 123857 -> 0 bytes
 docs/content/operations/img/04-datasources.png   | Bin 0 -> 178133 bytes
 docs/content/operations/img/04-segments.png  | Bin 125873 -> 0 bytes
 docs/content/operations/img/05-retention.png | Bin 0 -> 173350 bytes
 docs/content/operations/img/05-tasks-1.png   | Bin 101635 -> 0 bytes
 docs/content/operations/img/06-segments.png  | Bin 0 -> 209772 bytes
 docs/content/operations/img/06-tasks-2.png   | Bin 221977 -> 0 bytes
 docs/content/operations/img/07-supervisors.png   | Bin 0 -> 120310 bytes
 docs/content/operations/img/07-tasks-3.png   | Bin 195170 -> 0 bytes
 docs/content/operations/img/08-servers.png   | Bin 119310 -> 0 bytes
 docs/content/operations/img/08-tasks.png | Bin 0 -> 64362 bytes
 docs/content/operations/img/09-sql-1.png | Bin 80580 -> 0 bytes
 docs/content/operations/img/09-task-status.png   | Bin 0 -> 94299 bytes
 docs/content/operations/img/10-servers.png   | Bin 0 -> 79421 bytes
 docs/content/operations/img/10-sql-2.png | Bin 179193 -> 0 bytes
 docs/content/operations/img/11-query-sql.png | Bin 0 -> 111209 bytes
 docs/content/operations/img/12-query-rune.png| Bin 0 -> 137679 bytes
 docs/content/operations/img/13-lookups.png   | Bin 0 -> 54480 bytes
 23 files changed, 35 insertions(+), 13 deletions(-)

diff --git a/docs/content/operations/druid-console.md 
b/docs/content/operations/druid-console.md
index c8b0696..3dbc491 100644
--- a/docs/content/operations/druid-console.md
+++ b/docs/content/operations/druid-console.md
@@ -45,49 +45,71 @@ The home view provides a high level overview of the 
cluster. Each card is clicka
 
 ![home-view](./img/01-home-view.png)
 
+## Data loader
+
+The data loader view allows you to load data by building an ingestion spec 
with a step-by-step wizard. 
+
+![data-loader-1](./img/02-data-loader-1.png)
+
+After picking the source of your data, just follow the series of steps, which 
will show you incremental previews of the data as it will be ingested.
+After filling in the required details on every step you can navigate to the 
next step by clicking the `Next` button.
+You can also freely navigate between the steps from the top navigation.
+
+Navigating with the top navigation will leave the underlying spec unmodified 
while clicking the `Next` button will attempt to fill in the subsequent steps 
with appropriate defaults.
+
+![data-loader-2](./img/03-data-loader-2.png)
+
 ## Datasources
 
 The datasources view shows all the currently enabled datasources. From this 
view you can see the sizes and availability of the different datasources. You 
can edit the retention rules and drop data (as well as issue kill tasks).
 Like any view that is powered by a DruidSQL query you can click “Go to SQL” to 
run the underlying SQL query directly.
 
-![datasources](./img/02-datasources.png)
+![datasources](./img/04-datasources.png)
 
 You can view and edit retention rules to determine the general availability of 
a datasource.
 
-![retention](./img/03-retention.png)
+![retention](./img/05-retention.png)
 
 ## Segments
 
 The segment view shows every single segment in the cluster. Each segment can 
be expanded to provide more information. The Segment ID is also conveniently 
broken down into Datasource, Start, End, Version, and Partition columns for 
ease of filtering and sorting.
 
-![segments](./img/04-segments.png)
+![segments](./img/06-segments.png)
 
 ## Tasks and supervisors
 
 The task view is also the home of supervisors. From this view you can check 
the status of existing supervisors as well as suspend and resume them. You can 
also submit new supervisors by entering their JSON spec.
 
-![tasks-1](./img/05-tasks-1.png)
+![supervisors](./img/07-supervisors.png)
 
 The tasks table allows you to see the currently running and recently completed 
tasks. From this table you can monitor individual tasks and also submit new 
tasks by entering their JSON 

[GitHub] [incubator-druid] clintropolis merged pull request #7697: Update Druid Console docs for 0.15.0

2019-05-21 Thread GitBox
clintropolis merged pull request #7697: Update Druid Console docs for 0.15.0
URL: https://github.com/apache/incubator-druid/pull/7697
 
 
   




[incubator-druid] branch master updated: Web console: fix missing value input in timestampSpec step (#7698)

2019-05-21 Thread cwylie
This is an automated email from the ASF dual-hosted git repository.

cwylie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 169d249  Web console: fix missing value input in timestampSpec step 
(#7698)
169d249 is described below

commit 169d2493bc71aa4d4a6fe4835c13b95d02e33246
Author: Vadim Ogievetsky 
AuthorDate: Tue May 21 03:59:48 2019 -0700

Web console: fix missing value input in timestampSpec step (#7698)
---
 web-console/src/utils/ingestion-spec.tsx | 27 +++
 web-console/src/views/load-data-view.tsx |  2 +-
 2 files changed, 24 insertions(+), 5 deletions(-)

diff --git a/web-console/src/utils/ingestion-spec.tsx 
b/web-console/src/utils/ingestion-spec.tsx
index 9e50618..678adbb 100644
--- a/web-console/src/utils/ingestion-spec.tsx
+++ b/web-console/src/utils/ingestion-spec.tsx
@@ -255,11 +255,12 @@ const TIMESTAMP_SPEC_FORM_FIELDS: Field[] 
= [
   {
 name: 'column',
 type: 'string',
-isDefined: (timestampSpec: TimestampSpec) => 
isColumnTimestampSpec(timestampSpec)
+defaultValue: 'timestamp'
   },
   {
 name: 'format',
 type: 'string',
+defaultValue: 'auto',
 suggestions: ['auto'].concat(TIMESTAMP_FORMAT_VALUES),
 isDefined: (timestampSpec: TimestampSpec) => 
isColumnTimestampSpec(timestampSpec),
 info: 
@@ -269,12 +270,30 @@ const TIMESTAMP_SPEC_FORM_FIELDS: Field[] 
= [
   {
 name: 'missingValue',
 type: 'string',
-isDefined: (timestampSpec: TimestampSpec) => 
!isColumnTimestampSpec(timestampSpec)
+placeholder: '(optional)',
+info: 
+  This value will be used if the specified column can not be found.
+
   }
 ];
 
-export function getTimestampSpecFormFields() {
-  return TIMESTAMP_SPEC_FORM_FIELDS;
+const CONSTANT_TIMESTAMP_SPEC_FORM_FIELDS: Field[] = [
+  {
+name: 'missingValue',
+label: 'Constant value',
+type: 'string',
+info: 
+  The dummy value that will be used as the timestamp.
+
+  }
+];
+
+export function getTimestampSpecFormFields(timestampSpec: TimestampSpec) {
+  if (isColumnTimestampSpec(timestampSpec)) {
+return TIMESTAMP_SPEC_FORM_FIELDS;
+  } else {
+return CONSTANT_TIMESTAMP_SPEC_FORM_FIELDS;
+  }
 }
 
 export function issueWithTimestampSpec(timestampSpec: TimestampSpec | 
undefined): string | null {
diff --git a/web-console/src/views/load-data-view.tsx 
b/web-console/src/views/load-data-view.tsx
index e69bf26..dbaecd3 100644
--- a/web-console/src/views/load-data-view.tsx
+++ b/web-console/src/views/load-data-view.tsx
@@ -1112,7 +1112,7 @@ export class LoadDataView extends 
React.Component
 
  {
 this.updateSpec(deepSet(spec, 
'dataSchema.parser.parseSpec.timestampSpec', timestampSpec));





[GitHub] [incubator-druid] clintropolis merged pull request #7698: Web console: fix missing value input in timestampSpec step

2019-05-21 Thread GitBox
clintropolis merged pull request #7698: Web console: fix missing value input in 
timestampSpec step
URL: https://github.com/apache/incubator-druid/pull/7698
 
 
   





[incubator-druid] branch 0.15.0-incubating updated: Move 'Query' to the right again (#7699)

2019-05-21 Thread cwylie
This is an automated email from the ASF dual-hosted git repository.

cwylie pushed a commit to branch 0.15.0-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/0.15.0-incubating by this push:
 new 8920315  Move 'Query' to the right again (#7699)
8920315 is described below

commit 8920315bb2cb0ff6a60481a0f15811a9726f2d41
Author: Vadim Ogievetsky 
AuthorDate: Tue May 21 03:59:22 2019 -0700

Move 'Query' to the right again (#7699)
---
 web-console/src/components/header-bar.tsx | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/web-console/src/components/header-bar.tsx 
b/web-console/src/components/header-bar.tsx
index 538f8a3..523ffc2 100644
--- a/web-console/src/components/header-bar.tsx
+++ b/web-console/src/components/header-bar.tsx
@@ -160,15 +160,15 @@ export class HeaderBar extends 
React.Component {
   minimal={!loadDataPrimary}
   intent={loadDataPrimary ? Intent.PRIMARY : Intent.NONE}
 />
-
 
 
 
 
 
+
 
 
-
+
 
   
   
@@ -179,7 +179,7 @@ export class HeaderBar extends 
React.Component {
   
 }
 
-  
+  
 
 
   





[GitHub] [incubator-druid] clintropolis merged pull request #7699: Move 'Query' to the right again

2019-05-21 Thread GitBox
clintropolis merged pull request #7699: Move 'Query' to the right again
URL: https://github.com/apache/incubator-druid/pull/7699
 
 
   





[GitHub] [incubator-druid] viongpanzi opened a new pull request #7716: AggregatorUtil should cache parsed expression to avoid memory problem (OOM/FGC) when Expression is used in metricsSpec

2019-05-21 Thread GitBox
viongpanzi opened a new pull request #7716: AggregatorUtil should cache parsed 
expression to avoid memory problem (OOM/FGC) when Expression is used in 
metricsSpec
URL: https://github.com/apache/incubator-druid/pull/7716
 
 
   Fix #7715 
   
   In this pull request:
   1. Add parsed expression cache in AggregatorUtil
   2. Add some test cases
   3. Add expression schema in BenchmarkSchemas 
   
   With the new code, the steps used to reproduce the issue in #7715 all succeed 
and complete quickly.
   
   The benchmark results show that expression processing is twice as fast as 
before when memory is sufficient.





[GitHub] [incubator-druid] viongpanzi opened a new issue #7715: AggregatorUtil should cache parsed expression to avoid memory problem (OOM/FGC) when Expression is used in metricsSpec

2019-05-21 Thread GitBox
viongpanzi opened a new issue #7715: AggregatorUtil should cache parsed 
expression to avoid memory problem (OOM/FGC) when Expression is used in 
metricsSpec
URL: https://github.com/apache/incubator-druid/issues/7715
 
 
   Affected Version: 0.13.0
   
   The expression field is defined in the metricsSpec of the JSON:
   ```json
   {
       "type": "longSum",
       "name": "sum",
       "description": "",
       "expression": "..."
   }
   ```
   maxRowsInMemory is set to 30 and the heap size is set to 12g. When adding 
21 rows to the facts via OnheapIncrementalIndex.add, the JVM slows down due to 
frequent full GC (observed with the jstat command). After replacing the 
expression with a javascript aggregator, everything is fine.
   
   ```json
   {
       "type": "javascript",
       "name": "sum",
       "fieldNames": [
           "xxx"
       ],
       "fnAggregate": "...",
       "fnCombine": "...",
       "fnReset": ".."
   }
   ```
   
   Checking the code against the following stack trace:
   
   ```
   "task-runner-0-priority-0" #135 daemon prio=5 os_prio=0 tid=0x7fc49b1f6000 nid=0x43067 runnable [0x7fc0bfef8000]
      java.lang.Thread.State: RUNNABLE
   at java.lang.Object.hashCode(Native Method)
   at java.util.HashMap.hash(HashMap.java:338)
   at java.util.HashMap.put(HashMap.java:611)
   at org.apache.druid.math.expr.ExprListenerImpl.exitIdentifierExpr(ExprListenerImpl.java:318)
   at org.apache.druid.math.expr.antlr.ExprParser$IdentifierExprContext.exitRule(ExprParser.java:283)
   at org.antlr.v4.runtime.tree.ParseTreeWalker.exitRule(ParseTreeWalker.java:71)
   at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:54)
   at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:52)
   at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:52)
   at org.apache.druid.math.expr.Parser.parse(Parser.java:85)
   at org.apache.druid.math.expr.Parser.parse(Parser.java:72)
   at org.apache.druid.query.aggregation.AggregatorUtil.makeColumnValueSelectorWithDoubleDefault(AggregatorUtil.java:275)
   at org.apache.druid.query.aggregation.SimpleDoubleAggregatorFactory.getDoubleColumnSelector(SimpleDoubleAggregatorFactory.java:68)
   at org.apache.druid.query.aggregation.DoubleSumAggregatorFactory.selector(DoubleSumAggregatorFactory.java:58)
   at org.apache.druid.query.aggregation.DoubleSumAggregatorFactory.selector(DoubleSumAggregatorFactory.java:37)
   at org.apache.druid.query.aggregation.NullableAggregatorFactory.factorize(NullableAggregatorFactory.java:40)
   at org.apache.druid.segment.incremental.OnheapIncrementalIndex.factorizeAggs(OnheapIncrementalIndex.java:234)
   at org.apache.druid.segment.incremental.OnheapIncrementalIndex.addToFacts(OnheapIncrementalIndex.java:166)
   at org.apache.druid.segment.incremental.IncrementalIndex.add(IncrementalIndex.java:610)
   at org.apache.druid.segment.realtime.plumber.Sink.add(Sink.java:181)
   - locked <0x0004c1c1e1b8> (a java.lang.Object)
   at org.apache.druid.segment.realtime.appenderator.AppenderatorImpl.add(AppenderatorImpl.java:246)
   at org.apache.druid.segment.realtime.appenderator.BaseAppenderatorDriver.append(BaseAppenderatorDriver.java:403)
   at org.apache.druid.segment.realtime.appenderator.StreamAppenderatorDriver.add(StreamAppenderatorDriver.java:180)
   at org.apache.druid.indexing.kafka.IncrementalPublishingKafkaIndexTaskRunner.runInternal(IncrementalPublishingKafkaIndexTaskRunner.java:513)
   at org.apache.druid.indexing.kafka.IncrementalPublishingKafkaIndexTaskRunner.run(IncrementalPublishingKafkaIndexTaskRunner.java:232)
   at org.apache.druid.indexing.kafka.KafkaIndexTask.run(KafkaIndexTask.java:211)
   at org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner$SingleTaskBackgroundRunnerCallable.call(SingleTaskBackgroundRunner.java:421)
   at org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner$SingleTaskBackgroundRunnerCallable.call(SingleTaskBackgroundRunner.java:393)
   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   at java.lang.Thread.run(Thread.java:748)
   ```
   
   The cause is that AggregatorUtil.makeColumnValueSelectorWithDoubleDefault 
parses the same expression every time a new row is added, and the parsed 
expression is wrapped in the baseSelector, which is actually stored in the 
OnheapIncrementalIndex.aggregators array.
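   The fix proposed in #7716 is to cache the parsed expression so the ANTLR 
parser runs once per distinct expression string instead of once per row. A 
minimal sketch of that idea (not the actual Druid code; ParsedExpr, parse, and 
getOrParse here are hypothetical stand-ins for Druid's Expr and Parser.parse):
   
   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   
   public class ExprCacheSketch {
     // Stand-in for a parsed expression tree (Druid's Expr).
     static final class ParsedExpr {
       final String source;
       ParsedExpr(String source) { this.source = source; }
     }
   
     private static final Map<String, ParsedExpr> CACHE = new ConcurrentHashMap<>();
     static int parseCalls = 0; // counts actual parser invocations
   
     // Stand-in for the expensive Parser.parse call that walks the ANTLR tree.
     static ParsedExpr parse(String expression) {
       parseCalls++;
       return new ParsedExpr(expression);
     }
   
     // Parse on first sight of an expression string, reuse thereafter.
     static ParsedExpr getOrParse(String expression) {
       return CACHE.computeIfAbsent(expression, ExprCacheSketch::parse);
     }
   
     public static void main(String[] args) {
       // Simulate many rows hitting the same metricsSpec expression.
       for (int row = 0; row < 1000; row++) {
         getOrParse("x + y");
       }
       System.out.println(parseCalls); // parsed once, reused for every row
     }
   }
   ```
   
   Keying the cache on the expression string keeps per-row work to a single 
map lookup, which also avoids accumulating duplicate parse trees on the heap.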
   
   The Druid benchmark is a simple tool for reproducing the problem.
   
   1. Add the following code to BenchmarkSchemas.java
   ```
   List 

[GitHub] [incubator-druid] JackyYangPassion commented on issue #6988: [Improvement] historical fast restart by lazy load columns metadata(20X faster)

2019-05-21 Thread GitBox
JackyYangPassion commented on issue #6988: [Improvement] historical fast 
restart by lazy load columns metadata(20X faster)
URL: https://github.com/apache/incubator-druid/pull/6988#issuecomment-494310929
 
 
   Applied this PR to 0.12.1:
   
`https://github.com/JackyYangPassion/incubator-druid/tree/0.12.1-fast-restart-historical`




