[GitHub] [incubator-pinot] fx19880617 opened a new pull request #5453: Make Literal transformer return string literals
fx19880617 opened a new pull request #5453: URL: https://github.com/apache/incubator-pinot/pull/5453
- Let LiteralTransformFunction return strings.
- Convert Literal object to string for Selection List

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[incubator-pinot] branch fixing_literal_in_selection created (now ca63e41)
This is an automated email from the ASF dual-hosted git repository. xiangfu pushed a change to branch fixing_literal_in_selection in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git. at ca63e41 Make Literal transformer return string literals This branch includes the following new commits: new ca63e41 Make Literal transformer return string literals The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[incubator-pinot] 01/01: Make Literal transformer return string literals
This is an automated email from the ASF dual-hosted git repository. xiangfu pushed a commit to branch fixing_literal_in_selection in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit ca63e41d1ba9f690e12e4d98e2c6078d0b689c0e
Author: Xiang Fu
AuthorDate: Tue May 26 23:55:03 2020 -0700

    Make Literal transformer return string literals
---
 .../pinot/pql/parsers/PinotQuery2BrokerRequestConverter.java |  2 +-
 .../transform/function/LiteralTransformFunction.java        | 12 +---
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/pinot-common/src/main/java/org/apache/pinot/pql/parsers/PinotQuery2BrokerRequestConverter.java b/pinot-common/src/main/java/org/apache/pinot/pql/parsers/PinotQuery2BrokerRequestConverter.java
index 1ce9a06..f4a9639 100644
--- a/pinot-common/src/main/java/org/apache/pinot/pql/parsers/PinotQuery2BrokerRequestConverter.java
+++ b/pinot-common/src/main/java/org/apache/pinot/pql/parsers/PinotQuery2BrokerRequestConverter.java
@@ -128,7 +128,7 @@ public class PinotQuery2BrokerRequestConverter {
         if (selection == null) {
           selection = new Selection();
         }
-        selection.addToSelectionColumns(expression.getLiteral().getStringValue());
+        selection.addToSelectionColumns(expression.getLiteral().getFieldValue().toString());
         break;
       case IDENTIFIER:
         if (selection == null) {

diff --git a/pinot-core/src/main/java/org/apache/pinot/core/operator/transform/function/LiteralTransformFunction.java b/pinot-core/src/main/java/org/apache/pinot/core/operator/transform/function/LiteralTransformFunction.java
index 24a2374..4317a2b 100644
--- a/pinot-core/src/main/java/org/apache/pinot/core/operator/transform/function/LiteralTransformFunction.java
+++ b/pinot-core/src/main/java/org/apache/pinot/core/operator/transform/function/LiteralTransformFunction.java
@@ -18,11 +18,13 @@
  */
 package org.apache.pinot.core.operator.transform.function;

+import java.util.Arrays;
 import java.util.List;
 import java.util.Map;
 import org.apache.pinot.core.common.DataSource;
 import org.apache.pinot.core.operator.blocks.ProjectionBlock;
 import org.apache.pinot.core.operator.transform.TransformResultMetadata;
+import org.apache.pinot.core.plan.DocIdSetPlanNode;
 import org.apache.pinot.core.segment.index.readers.Dictionary;

@@ -32,6 +34,7 @@ import org.apache.pinot.core.segment.index.readers.Dictionary;
  */
 public class LiteralTransformFunction implements TransformFunction {
   private final String _literal;
+  private String[] _result;

   public LiteralTransformFunction(String literal) {
     _literal = literal;
@@ -48,12 +51,11 @@ public class LiteralTransformFunction implements TransformFunction {

   @Override
   public void init(List arguments, Map dataSourceMap) {
-    throw new UnsupportedOperationException();
   }

   @Override
   public TransformResultMetadata getResultMetadata() {
-    throw new UnsupportedOperationException();
+    return BaseTransformFunction.STRING_SV_NO_DICTIONARY_METADATA;
   }

@@ -93,7 +95,11 @@ public class LiteralTransformFunction implements TransformFunction {

   @Override
   public String[] transformToStringValuesSV(ProjectionBlock projectionBlock) {
-    throw new UnsupportedOperationException();
+    if (_result == null) {
+      _result = new String[DocIdSetPlanNode.MAX_DOC_PER_CALL];
+      Arrays.fill(_result, _literal);
+    }
+    return _result;
   }

   @Override

- To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] xiaohui-sun commented on pull request #5435: [TE] clean up decprecated/unused code
xiaohui-sun commented on pull request #5435: URL: https://github.com/apache/incubator-pinot/pull/5435#issuecomment-634438517 @cecilynie @cyenjung FYI. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] xiaohui-sun commented on a change in pull request #5435: [TE] clean up decprecated/unused code
xiaohui-sun commented on a change in pull request #5435: URL: https://github.com/apache/incubator-pinot/pull/5435#discussion_r430861217 ## File path: thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/anomaly/task/TaskConstants.java ## @@ -26,15 +26,8 @@ DETECTION, DETECTION_ALERT, YAML_DETECTION_ONBOARD, -ANOMALY_DETECTION, -MERGE, // TODO: deprecate ALERT task type Review comment: Remove //TODO ## File path: thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/common/BaseThirdEyeApplication.java ## @@ -51,11 +46,9 @@ protected final Logger LOG = LoggerFactory.getLogger(this.getClass()); protected AnomalyFunctionManager anomalyFunctionDAO; Review comment: anomalyFunctionDAO is not used, including AnomalyFunctionManager. ## File path: thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/dashboard/resources/AnomalyResource.java ## @@ -101,21 +99,18 @@ private MergedAnomalyResultManager anomalyMergedResultDAO; private AlertConfigManager emailConfigurationDAO; private MergedAnomalyResultManager mergedAnomalyResultDAO; - private AutotuneConfigManager autotuneConfigDAO; private DatasetConfigManager datasetConfigDAO; private AnomalyFunctionFactory anomalyFunctionFactory; private AlertFilterFactory alertFilterFactory; Review comment: Not used. ## File path: thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/dashboard/ThirdEyeDashboardApplication.java ## @@ -160,14 +155,12 @@ public void run(ThirdEyeDashboardConfiguration config, Environment env) AnomalyFunctionFactory anomalyFunctionFactory = new AnomalyFunctionFactory(config.getFunctionConfigPath()); Review comment: AnomalyFunctionFactory is not used. ## File path: thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/dashboard/resources/AnomalyResource.java ## @@ -101,21 +99,18 @@ private MergedAnomalyResultManager anomalyMergedResultDAO; private AlertConfigManager emailConfigurationDAO; private MergedAnomalyResultManager mergedAnomalyResultDAO; - private AutotuneConfigManager autotuneConfigDAO; private DatasetConfigManager datasetConfigDAO; private AnomalyFunctionFactory anomalyFunctionFactory; private AlertFilterFactory alertFilterFactory; private LoadingCache collectionMaxDataTimeCache; private static final DAORegistry DAO_REGISTRY = DAORegistry.getInstance(); - public AnomalyResource(AnomalyFunctionFactory anomalyFunctionFactory, AlertFilterFactory alertFilterFactory, - AlertFilterAutotuneFactory alertFilterAutotuneFactory) { + public AnomalyResource(AnomalyFunctionFactory anomalyFunctionFactory, AlertFilterFactory alertFilterFactory) { this.anomalyFunctionDAO = DAO_REGISTRY.getAnomalyFunctionDAO(); this.anomalyMergedResultDAO = DAO_REGISTRY.getMergedAnomalyResultDAO(); this.emailConfigurationDAO = DAO_REGISTRY.getAlertConfigDAO(); this.mergedAnomalyResultDAO = DAO_REGISTRY.getMergedAnomalyResultDAO(); -this.autotuneConfigDAO = DAO_REGISTRY.getAutotuneConfigDAO(); this.datasetConfigDAO = DAO_REGISTRY.getDatasetConfigDAO(); this.anomalyFunctionFactory = anomalyFunctionFactory; Review comment: This is not used. ## File path: thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/datalayer/pojo/DatasetConfigBean.java ## @@ -93,10 +89,6 @@ private boolean requiresCompletenessCheck = false; Review comment: Not used. 
## File path: thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/anomaly/task/TaskInfoFactory.java ## @@ -58,25 +54,9 @@ public static TaskInfo getTaskInfoFromTaskType(TaskType taskType, String taskInf case YAML_DETECTION_ONBOARD: taskInfo = OBJECT_MAPPER.readValue(taskInfoString, YamlOnboardingTaskInfo.class); break; -case ANOMALY_DETECTION: - taskInfo = OBJECT_MAPPER.readValue(taskInfoString, DetectionTaskInfo.class); - break; -case MERGE: - LOG.error("TaskType MERGE not supported"); - break; case MONITOR: taskInfo = OBJECT_MAPPER.readValue(taskInfoString, MonitorTaskInfo.class); break; -case ALERT: -case ALERT2: - taskInfo = OBJECT_MAPPER.readValue(taskInfoString, AlertTaskInfo.class); - break; -case DATA_COMPLETENESS: - taskInfo = OBJECT_MAPPER.readValue(taskInfoString, DataCompletenessTaskInfo.class); - break; -case CLASSIFICATION: - taskInfo = OBJECT_MAPPER.readValue(taskInfoString, ClassificationTaskInfo.class); - break; default: LOG.error("TaskType must be one of ANOMALY_DETECTION, MONITOR, ALERT, ALERT2, DATA_COMPLETENESS, CLASSIFICATION"); Review comment: Update error log accordingly. ## File path: thirdeye/th
[GitHub] [incubator-pinot] apucher opened a new pull request #5452: [TE] fix dockerfile to account for #5428 changes
apucher opened a new pull request #5452: URL: https://github.com/apache/incubator-pinot/pull/5452 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[incubator-pinot] 01/01: [TE] fix dockerfile to account for #5428 changes
This is an automated email from the ASF dual-hosted git repository. apucher pushed a commit to branch dockerfile-fix-for-5428 in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit c85dbed65534e7b060c7fb55f293f64e1f079972
Author: Alexander Pucher
AuthorDate: Tue May 26 22:21:32 2020 -0700

    [TE] fix dockerfile to account for #5428 changes
---
 docker/images/pinot-thirdeye/Dockerfile | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/docker/images/pinot-thirdeye/Dockerfile b/docker/images/pinot-thirdeye/Dockerfile
index a6d9a73..42e23c7 100644
--- a/docker/images/pinot-thirdeye/Dockerfile
+++ b/docker/images/pinot-thirdeye/Dockerfile
@@ -50,6 +50,9 @@ RUN npm install -g phantomjs --unsafe-perm --ignore-scripts
 RUN git clone ${PINOT_GIT_URL} ${TE_BUILD_DIR} \
     && cd ${TE_BUILD_DIR}/thirdeye \
     && git checkout ${PINOT_BRANCH} \
+    && cd thirdeye-frontend \
+    && mvn clean install -X -DskipTests || exit 1 \
+    && cd .. \
     && mvn clean install -X -DskipTests || exit 1 \
     && mkdir -p ${TE_HOME}/config/default \
     && mkdir -p ${TE_HOME}/bin \

- To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[incubator-pinot] branch dockerfile-fix-for-5428 created (now c85dbed)
This is an automated email from the ASF dual-hosted git repository. apucher pushed a change to branch dockerfile-fix-for-5428 in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git. at c85dbed [TE] fix dockerfile to account for #5428 changes This branch includes the following new commits: new c85dbed [TE] fix dockerfile to account for #5428 changes The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] mcvsubbu commented on a change in pull request #5266: Add PinotServiceManager to start Pinot components
mcvsubbu commented on a change in pull request #5266: URL: https://github.com/apache/incubator-pinot/pull/5266#discussion_r430838191 ## File path: pinot-common/src/main/java/org/apache/pinot/common/utils/ServiceStatus.java ## @@ -43,17 +44,79 @@ */ @SuppressWarnings("unused") public class ServiceStatus { + public static final String STATUS_DESCRIPTION_NONE = "None"; + public static final String STATUS_DESCRIPTION_INIT = "Init"; + public static final String STATUS_DESCRIPTION_STARTED = "Started"; + public static final String STATUS_DESCRIPTION_NO_HELIX_STATE = "Helix state does not exist"; private static final Logger LOGGER = LoggerFactory.getLogger(ServiceStatus.class); + private static final int MAX_RESOURCE_NAMES_TO_LOG = 5; + private static final Map serviceStatusCallbackMap = new ConcurrentHashMap<>(); + private static final ServiceStatusCallback serviceStatusCallback = + new MapBasedMultipleCallbackServiceStatusCallback(serviceStatusCallbackMap); - public enum Status { -STARTING, GOOD, BAD + public static void setServiceStatusCallback(String name, ServiceStatusCallback serviceStatusCallback) { +ServiceStatus.serviceStatusCallbackMap.put(name, serviceStatusCallback); } - public static final String STATUS_DESCRIPTION_NONE = "None"; - public static final String STATUS_DESCRIPTION_INIT = "Init"; - public static final String STATUS_DESCRIPTION_NO_HELIX_STATE = "Helix state does not exist"; + public static void removeServiceStatusCallback(String name) { +ServiceStatus.serviceStatusCallbackMap.remove(name); + } - private static final int MAX_RESOURCE_NAMES_TO_LOG = 5; + public static Status getServiceStatus() { +return getServiceStatus(serviceStatusCallback); + } + + public static Status getServiceStatus(String name) { +if (serviceStatusCallbackMap.containsKey(name)) { + return getServiceStatus(serviceStatusCallbackMap.get(name)); +} else { + return Status.NOT_STARTED; Review comment: But then the shut down sequence is not instantaneous. You call shutdown and then wait for it to shutdown. During that time, the status should be SHUTTING_DOWN. And then you can change it to NOT_STARTED when the shutdown is complete. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
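A minimal sketch of the lifecycle being suggested here: the component reports SHUTTING_DOWN while it drains, and only drops back to NOT_STARTED once shutdown completes. The class below and the extra enum values are illustrative assumptions, not the actual ServiceStatus API.

```java
// Illustrative only; Pinot's real ServiceStatus tracks named callbacks per component.
enum Status {
  NOT_STARTED, STARTING, GOOD, BAD, SHUTTING_DOWN
}

final class LifecycleSketch {
  private volatile Status _status = Status.NOT_STARTED;

  void start() {
    _status = Status.STARTING;
    // ... join Helix, wait for resources to converge ...
    _status = Status.GOOD;
  }

  void shutdown() {
    // Shutdown is not instantaneous: report SHUTTING_DOWN while draining,
    // then NOT_STARTED once the component has fully stopped.
    _status = Status.SHUTTING_DOWN;
    // ... disconnect from Helix, drain in-flight work ...
    _status = Status.NOT_STARTED;
  }

  Status getStatus() {
    return _status;
  }
}
```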
[GitHub] [incubator-pinot] Jackie-Jiang commented on a change in pull request #5409: Faster bit unpacking (Part 1)
Jackie-Jiang commented on a change in pull request #5409: URL: https://github.com/apache/incubator-pinot/pull/5409#discussion_r430592580 ## File path: pinot-core/src/main/java/org/apache/pinot/core/io/util/FixedBitIntReaderWriterV2.java ## @@ -0,0 +1,151 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.pinot.core.io.util; + +import com.google.common.base.Preconditions; +import java.io.Closeable; +import java.io.IOException; +import org.apache.pinot.core.plan.DocIdSetPlanNode; +import org.apache.pinot.core.segment.memory.PinotDataBuffer; + + +public final class FixedBitIntReaderWriterV2 implements Closeable { + private volatile PinotDataBitSetV2 _dataBitSet; Review comment: You don't need to make this volatile and set it to null in close(). This issue has been addressed in #4764 ## File path: pinot-core/src/main/java/org/apache/pinot/core/io/util/PinotDataBitSetV2.java ## @@ -0,0 +1,516 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.pinot.core.io.util; + +import java.io.Closeable; +import java.io.IOException; +import org.apache.pinot.core.plan.DocIdSetPlanNode; +import org.apache.pinot.core.segment.memory.PinotDataBuffer; + + +public abstract class PinotDataBitSetV2 implements Closeable { + private static final int BYTE_MASK = 0xFF; + static final int MAX_VALUES_UNPACKED_SINGLE_ALIGNED_READ = 16; // comes from 2-bit encoding + + private static final ThreadLocal THREAD_LOCAL_DICT_IDS = + ThreadLocal.withInitial(() -> new int[DocIdSetPlanNode.MAX_DOC_PER_CALL]); + + protected PinotDataBuffer _dataBuffer; + protected int _numBitsPerValue; + + /** + * Unpack single dictId at the given docId. This is efficient Review comment: Don't limit it to `dictId` and `docId` in the javadoc. The `BitSet` is a general reader/writer which can be used for different purposes. 
## File path: pinot-core/src/main/java/org/apache/pinot/core/io/util/PinotDataBitSetV2.java ## @@ -0,0 +1,516 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.pinot.core.io.util; + +import java.io.Closeable; +import java.io.IOException; +import org.apache.pinot.core.plan.DocIdSetPlanNode; +import org.apache.pinot.core.segment.memory.PinotDataBuffer; + + +public abstract class PinotDataBitSetV2 implements Closeable { + private static final int BYTE_MASK = 0xFF; + static final int MAX_VALUES_UNPACKED_SINGLE_ALIGNED_READ = 16; // comes from 2-bit encoding + + private static final ThreadLocal THREAD_LOCAL_DICT_IDS = + ThreadLocal.withInitial(() -> new int[DocIdSetPlanNode.MAX_DOC_PER_CALL]); + + prote
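One technique in the new class above (its generic type parameters were stripped in this archive) is a per-thread scratch buffer: ThreadLocal.withInitial gives each query thread its own reusable int[] so bulk unpacking does not allocate on every call. A generic illustration, not the PR's exact field:

```java
final class ScratchBufferSketch {
  // Stand-in for DocIdSetPlanNode.MAX_DOC_PER_CALL; the concrete value here is arbitrary.
  private static final int MAX_DOC_PER_CALL = 10_000;

  // Each thread lazily creates its own buffer, so concurrent queries never contend or reallocate.
  private static final ThreadLocal<int[]> THREAD_LOCAL_DICT_IDS =
      ThreadLocal.withInitial(() -> new int[MAX_DOC_PER_CALL]);

  static int[] dictIdScratch() {
    return THREAD_LOCAL_DICT_IDS.get();
  }
}
```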
[GitHub] [incubator-pinot] fx19880617 opened a new pull request #5442: Update pinot-admin and quickstart log4j config
fx19880617 opened a new pull request #5442: URL: https://github.com/apache/incubator-pinot/pull/5442
- Logs everything at info level to a rolling file.
- Makes the log directory and log pattern configurable via environment variables.
- Reduces console logging to warn level.
- Still logs at info level for Pinot tools related output.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] jihaozh merged pull request #5432: [TE] endpoints for alerts page filters
jihaozh merged pull request #5432: URL: https://github.com/apache/incubator-pinot/pull/5432 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] KKcorps commented on a change in pull request #5293: Adding support for Protobuf input format
KKcorps commented on a change in pull request #5293: URL: https://github.com/apache/incubator-pinot/pull/5293#discussion_r430657887 ## File path: pinot-plugins/pinot-input-format/pinot-protobuf/src/main/java/org/apache/pinot/plugin/inputformat/protobuf/ProtoBufRecordReader.java ## @@ -0,0 +1,147 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.pinot.plugin.inputformat.protobuf; + +import com.google.protobuf.DescriptorProtos; +import com.google.protobuf.Descriptors; +import com.google.protobuf.DynamicMessage; +import com.google.protobuf.Message; +import java.io.File; +import java.io.FileInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.net.URI; +import java.util.Set; +import javax.annotation.Nullable; +import org.apache.pinot.spi.data.Schema; +import org.apache.pinot.spi.data.readers.GenericRow; +import org.apache.pinot.spi.data.readers.RecordReader; +import org.apache.pinot.spi.data.readers.RecordReaderConfig; +import org.apache.pinot.spi.data.readers.RecordReaderUtils; +import org.apache.pinot.spi.utils.ResourceFinder; +import org.apache.pinot.spi.utils.SchemaFieldExtractorUtils; + + +public class ProtoBufRecordReader implements RecordReader { + private File _dataFile; + private Schema _schema; + private ProtoBufRecordExtractor _recordExtractor; + + private InputStream _inputStream; + private boolean _hasNext; + private Descriptors.Descriptor _descriptor; + + private boolean hasMoreToRead() + throws IOException { +_inputStream.mark(1); +int nextByte = _inputStream.read(); +_inputStream.reset(); +return nextByte != -1; + } + + private void init() + throws IOException { +_inputStream = RecordReaderUtils.getBufferedInputStream(_dataFile); +try { + _hasNext = hasMoreToRead(); +} catch (Exception e) { + _inputStream.close(); + throw e; +} + } + + @Override + public void init(File dataFile, Schema schema, @Nullable RecordReaderConfig recordReaderConfig) + throws IOException { +_dataFile = dataFile; +_schema = schema; +Set sourceFields = SchemaFieldExtractorUtils.extract(schema); +ProtoBufRecordExtractorConfig recordExtractorConfig = new ProtoBufRecordExtractorConfig(); +ProtoBufRecordReaderConfig protoBufRecordReaderConfig = (ProtoBufRecordReaderConfig) recordReaderConfig; +InputStream fin = getDescriptorFileInputStream(protoBufRecordReaderConfig); +buildProtoBufDescriptor(fin); +_recordExtractor = new ProtoBufRecordExtractor(); +_recordExtractor.init(sourceFields, recordExtractorConfig); +init(); + } + + private void buildProtoBufDescriptor(InputStream fin) + throws IOException { +try { + DescriptorProtos.FileDescriptorSet set = DescriptorProtos.FileDescriptorSet.parseFrom(fin); + Descriptors.FileDescriptor fileDescriptor = + Descriptors.FileDescriptor.buildFrom(set.getFile(0), 
new Descriptors.FileDescriptor[]{}); + _descriptor = fileDescriptor.getMessageTypes().get(0); +} catch (Descriptors.DescriptorValidationException e) { + throw new IOException("Descriptor file validation failed", e); +} + } + + private InputStream getDescriptorFileInputStream(ProtoBufRecordReaderConfig protoBufRecordReaderConfig) + throws IOException { +URI descriptorFileURI = protoBufRecordReaderConfig.getDescriptorFile(); +return ResourceFinder.openResource(descriptorFileURI); + } + + @Override + public boolean hasNext() { +return _hasNext; + } + + @Override + public GenericRow next() + throws IOException { +return next(new GenericRow()); + } + + @Override + public GenericRow next(GenericRow reuse) + throws IOException { +Message message = null; +try { + DynamicMessage tmp = DynamicMessage.getDefaultInstance(_descriptor); + Message.Builder builder = tmp.newBuilderForType(); Review comment: not sure. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, pleas
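On the builder question left open above ("not sure"): protobuf also exposes a builder tied directly to a Descriptor, which is one possible alternative to going through getDefaultInstance()/newBuilderForType(). The sketch below shows only that alternative, under the assumption of a length-delimited stream; it is not the PR's final code.

```java
import com.google.protobuf.Descriptors;
import com.google.protobuf.DynamicMessage;

import java.io.IOException;
import java.io.InputStream;

final class DynamicMessageSketch {
  // Reads one length-delimited message of the given descriptor's type, or null at end of stream.
  static DynamicMessage readOne(Descriptors.Descriptor descriptor, InputStream in)
      throws IOException {
    DynamicMessage.Builder builder = DynamicMessage.newBuilder(descriptor);
    if (!builder.mergeDelimitedFrom(in)) {
      return null;
    }
    return builder.build();
  }
}
```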
[GitHub] [incubator-pinot] npawar commented on pull request #5449: Include trailing empty string in group key split
npawar commented on pull request #5449: URL: https://github.com/apache/incubator-pinot/pull/5449#issuecomment-634275733
> can we add a test case

added

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] kishred commented on a change in pull request #5442: Update pinot-admin and quickstart log4j config
kishred commented on a change in pull request #5442: URL: https://github.com/apache/incubator-pinot/pull/5442#discussion_r430527249 ## File path: pinot-tools/src/main/resources/conf/log4j2.xml ## @@ -20,33 +20,31 @@ --> - - - -%d{/MM/dd HH:mm:ss.SSS} %p [%c{1}] [%t] %m%n - - - - -%d{/MM/dd HH:mm:ss.SSS} %p [%c{1}] [%t] %m%n - - + +logs +%d{/MM/dd HH:mm:ss.SSS} %p [%c{1}] [%t] %m%n +all + + + + Review comment: Few comments: 1. Log file name need not have date pattern since it is only size based rollover 2. Enable compression to reduce the space used and increase the amount of logs to back in date. Use bzip2 compression since it has `bzgrep` tool to grep logs without uncompressing them. 3. Follow the convention of *NIX rolled logs by moving `%i ` after `.log` extension Suggested filePattern that takes care of #1 and #2: `filePattern="${env:LOG_ROOT}/pinot-${env:PINOT_COMPONENT}.log.%i.bz2"` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] vincentchenjl commented on a change in pull request #5435: [TE] clean up decprecated/unused code
vincentchenjl commented on a change in pull request #5435: URL: https://github.com/apache/incubator-pinot/pull/5435#discussion_r430689663 ## File path: thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/anomaly/task/TaskRunnerFactory.java ## @@ -59,16 +56,6 @@ public static TaskRunner getTaskRunnerFromTaskType(TaskType taskType) { case MONITOR: taskRunner = new MonitorTaskRunner(); break; - case DATA_COMPLETENESS: Review comment: Removed. ## File path: thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/completeness/checker/Wo4WAvgDataCompletenessAlgorithm.java ## @@ -41,6 +41,7 @@ /** * This is the implementation of the WO4W Average function or checking data completeness of datasets */ +@Deprecated public class Wo4WAvgDataCompletenessAlgorithm implements DataCompletenessAlgorithm { public static double DEFAULT_EXPECTED_COMPLETENESS = 80; Review comment: The issue is that `Wo4WAvgDataCompletenessAlgorithm` is used as default value for `DatasetConfigBean.dataCompletenessAlgorithm`. I understand that it is not used, but i am not sure about the impact if i remove this field from `DatasetConfigBean` and still have this field inside the JSON in DB. ## File path: thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/completeness/checker/DataCompletenessUtils.java ## @@ -261,13 +261,4 @@ public static DateTimeFormatter getDateTimeFormatterForDataset(TimeSpec timeSpec return bucketNameToCountStar; } - public static double getPercentCompleteness(PercentCompletenessFunctionInput input) { Review comment: This class is used in `Wo4WAvgDataCompletenessAlgorithm`. We need to drop both of them if we want to drop this. `Wo4WAvgDataCompletenessAlgorithm` cannot be dropped due to the issue below. ## File path: thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/completeness/checker/DataCompletenessUtils.java ## @@ -261,13 +261,4 @@ public static DateTimeFormatter getDateTimeFormatterForDataset(TimeSpec timeSpec return bucketNameToCountStar; } - public static double getPercentCompleteness(PercentCompletenessFunctionInput input) { Review comment: I remove `Wo4WAvgDataCompletenessAlgorithm` completely and also the field `DatasetConfigBean.dataCompletenessAlgorithm` from `DatasetConfigBean`. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] Jackie-Jiang commented on pull request #5409: Faster bit unpacking (Part 1)
Jackie-Jiang commented on pull request #5409: URL: https://github.com/apache/incubator-pinot/pull/5409#issuecomment-634189690 For the benchmark, you should also compare the worst case scenario such as 3, 5, 9, 17 bits This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] siddharthteotia commented on pull request #5409: Faster bit unpacking (Part 1)
siddharthteotia commented on pull request #5409: URL: https://github.com/apache/incubator-pinot/pull/5409#issuecomment-634394494 > For the benchmark, you should also compare the worst case scenario such as 3, 5, 9, 17 bits I am still working on adding faster methods for non power of 2. Follow up will address this. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
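To make the worst-case point concrete: power-of-2 widths divide a 32-bit word evenly, so a whole word can be unpacked with fixed shifts and masks, whereas 3-, 5-, 9- or 17-bit values can straddle word boundaries and fall back to a slower generic path. The snippet below only illustrates that alignment property and is not code from this PR.

```java
final class UnpackSketch {
  // With 4 bits per value, exactly eight values fit in one int and none straddles a boundary.
  // Values are assumed packed most-significant-first within the word.
  static void unpackEight4BitValues(int word, int[] out) {
    for (int i = 0; i < 8; i++) {
      out[i] = (word >>> (28 - (i << 2))) & 0xF;
    }
  }

  public static void main(String[] args) {
    int[] out = new int[8];
    unpackEight4BitValues(0x12345678, out);
    for (int v : out) {
      System.out.print(v + " ");   // prints: 1 2 3 4 5 6 7 8
    }
  }
}
```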
[GitHub] [incubator-pinot] mayankshriv commented on a change in pull request #5443: Add broker request info to the error msg in combine operator.
mayankshriv commented on a change in pull request #5443: URL: https://github.com/apache/incubator-pinot/pull/5443#discussion_r430114816 ## File path: pinot-core/src/main/java/org/apache/pinot/core/operator/CombineOperator.java ## @@ -168,13 +168,13 @@ public IntermediateResultsBlock callJob() try { mergedBlock = mergedBlockFuture.get(endTimeMs - System.currentTimeMillis(), TimeUnit.MILLISECONDS); } catch (InterruptedException e) { - LOGGER.error("Caught InterruptedException.", e); + LOGGER.error("Caught InterruptedException. (brokerRequest = {})", _brokerRequest, e); Review comment: Do we need brokerRequest, or just requestId should be fine? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
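A side note on the logging call in this diff: SLF4J fills the {} placeholders from the leading arguments and still treats a trailing Throwable as the exception to print, so either variant logs the stack trace plus the extra context. A standalone illustration (the logger name and the requestId parameter are arbitrary):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class LoggingSketch {
  private static final Logger LOGGER = LoggerFactory.getLogger(LoggingSketch.class);

  static void report(long requestId, Exception e) {
    // requestId fills the placeholder; the trailing Throwable is printed with its stack trace.
    LOGGER.error("Caught InterruptedException. (requestId = {})", requestId, e);
  }
}
```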
[GitHub] [incubator-pinot] npawar opened a new pull request #5449: Include trailing empty string in group key split
npawar opened a new pull request #5449: URL: https://github.com/apache/incubator-pinot/pull/5449 https://github.com/apache/incubator-pinot/issues/5448 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] kishoreg commented on pull request #5446: Make Pinot Broker/Server can start by just passing a config file
kishoreg commented on pull request #5446: URL: https://github.com/apache/incubator-pinot/pull/5446#issuecomment-634197569
Let's ensure all parameter names follow the same convention - config or configFile or configFileName. All configs seem to have a prefix as well to indicate what config it is, so maybe something like this:
- serverConfigFile
- brokerConfigFile
- tableConfigFile
- schemaFile
- jobSpecFile

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] fx19880617 commented on a change in pull request #5442: Update pinot-admin and quickstart log4j config
fx19880617 commented on a change in pull request #5442: URL: https://github.com/apache/incubator-pinot/pull/5442#discussion_r430621171 ## File path: pinot-tools/src/main/resources/conf/log4j2.xml ## @@ -20,33 +20,31 @@ --> - - - -%d{/MM/dd HH:mm:ss.SSS} %p [%c{1}] [%t] %m%n - - - - -%d{/MM/dd HH:mm:ss.SSS} %p [%c{1}] [%t] %m%n - - + +logs +%d{/MM/dd HH:mm:ss.SSS} %p [%c{1}] [%t] %m%n +all + + + + Review comment: Sounds good! I feel it's still better to use gzip for log files, which can be open by vim and grep by zgrep also. Changed to `${env:LOG_ROOT}/pinot-${env:PINOT_COMPONENT}.log.%i.gz` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] KKcorps commented on pull request #5440: Add UDFs for String Transformation
KKcorps commented on pull request #5440: URL: https://github.com/apache/incubator-pinot/pull/5440#issuecomment-634088300 @fx19880617 @siddharthteotia Should I add tests in CalciteSQL for all the functions? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] npawar merged pull request #5449: Include trailing empty string in group key split
npawar merged pull request #5449: URL: https://github.com/apache/incubator-pinot/pull/5449 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] fx19880617 opened a new pull request #5445: Update Quickstart to take tmp dir as a parameter and default to current dir
fx19880617 opened a new pull request #5445: URL: https://github.com/apache/incubator-pinot/pull/5445
- Move quickstart main function to use pinot-admin
- Update Quickstart to take as a parameter to host pinot data
- Using as default tmp dir for quickstart to prevent the issue of using `java.io.tmp`.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] harleyjj opened a new pull request #5447: [TE] frontend - harleyjj/rca - update frontend for new AC event format
harleyjj opened a new pull request #5447: URL: https://github.com/apache/incubator-pinot/pull/5447 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] kishred opened a new issue #5448: java.lang.ClassCastException when running SQL queries
kishred opened a new issue #5448: URL: https://github.com/apache/incubator-pinot/issues/5448
Getting java.lang.ClassCastException when running this SQL query:
`SELECT colA, colB, SUM(metricA) from table GROUP BY colA, colB`
`colA` and `colB` are STRING typed. This SQL query works fine when blank column values in colB are discarded using WHERE clause:
`SELECT colA, colB, SUM(metricA) FROM table WHERE colB != '' GROUP BY colA, colB`
Note: Exact same query returns results on PQL interface. Below is the exception stacktrace printed by pinotServer:
```
java.lang.ClassCastException: java.lang.Double cannot be cast to java.lang.String
    at org.apache.pinot.core.operator.blocks.IntermediateResultsBlock.setDataTableColumn(IntermediateResultsBlock.java:290) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-6f840db6c84c69d2ebf137f1407332b517ce9322]
    at org.apache.pinot.core.operator.blocks.IntermediateResultsBlock.getResultDataTable(IntermediateResultsBlock.java:264) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-6f840db6c84c69d2ebf137f1407332b517ce9322]
    at org.apache.pinot.core.operator.blocks.IntermediateResultsBlock.getDataTable(IntermediateResultsBlock.java:229) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-6f840db6c84c69d2ebf137f1407332b517ce9322]
    at org.apache.pinot.core.operator.blocks.InstanceResponseBlock.<init>(InstanceResponseBlock.java:43) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-6f840db6c84c69d2ebf137f1407332b517ce9322]
    at org.apache.pinot.core.operator.InstanceResponseOperator.getNextBlock(InstanceResponseOperator.java:37) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-6f840db6c84c69d2ebf137f1407332b517ce9322]
    at org.apache.pinot.core.operator.InstanceResponseOperator.getNextBlock(InstanceResponseOperator.java:26) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-6f840db6c84c69d2ebf137f1407332b517ce9322]
    at org.apache.pinot.core.operator.BaseOperator.nextBlock(BaseOperator.java:49) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-6f840db6c84c69d2ebf137f1407332b517ce9322]
    at org.apache.pinot.core.plan.GlobalPlanImplV0.execute(GlobalPlanImplV0.java:48) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-6f840db6c84c69d2ebf137f1407332b517ce9322]
    at org.apache.pinot.core.query.executor.ServerQueryExecutorV1Impl.processQuery(ServerQueryExecutorV1Impl.java:220) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-6f840db6c84c69d2ebf137f1407332b517ce9322]
    at org.apache.pinot.core.query.scheduler.QueryScheduler.processQueryAndSerialize(QueryScheduler.java:152) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-6f840db6c84c69d2ebf137f1407332b517ce9322]
    at org.apache.pinot.core.query.scheduler.QueryScheduler.lambda$createQueryFutureTask$0(QueryScheduler.java:136) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-6f840db6c84c69d2ebf137f1407332b517ce9322]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_232]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_232]
    at shaded.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111) [pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-6f840db6c84c69d2ebf137f1407332b517ce9322]
    at shaded.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58) [pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-6f840db6c84c69d2ebf137f1407332b517ce9322]
    at shaded.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75) [pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-6f840db6c84c69d2ebf137f1407332b517ce9322]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_232]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_232]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
```
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
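A plain-Java illustration of the trailing-empty-string loss behind this report, which the linked fix (#5449, "Include trailing empty string in group key split") guards against; the tab separator and the sample key are just for illustration, not Pinot's actual group-key code.

```java
public final class SplitSketch {
  public static void main(String[] args) {
    String groupKey = "colAValue\t";                       // second group-by column is blank
    System.out.println(groupKey.split("\t").length);       // 1 -> trailing empty column dropped
    System.out.println(groupKey.split("\t", -1).length);   // 2 -> trailing empty column kept
  }
}
```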
[GitHub] [incubator-pinot] siddharthteotia commented on a change in pull request #5409: Faster bit unpacking (Part 1)
siddharthteotia commented on a change in pull request #5409: URL: https://github.com/apache/incubator-pinot/pull/5409#discussion_r430205187 ## File path: pinot-core/src/main/java/org/apache/pinot/core/io/util/FixedBitIntReaderWriterV2.java ## @@ -0,0 +1,104 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.pinot.core.io.util; + +import com.google.common.base.Preconditions; +import java.io.Closeable; +import java.io.IOException; +import org.apache.pinot.core.plan.DocIdSetPlanNode; +import org.apache.pinot.core.segment.memory.PinotDataBuffer; + + +public final class FixedBitIntReaderWriterV2 implements Closeable { + private volatile PinotDataBitSetV2 _dataBitSet; + private final int _numBitsPerValue; + + public FixedBitIntReaderWriterV2(PinotDataBuffer dataBuffer, int numValues, int numBitsPerValue) { +Preconditions +.checkState(dataBuffer.size() == (int) (((long) numValues * numBitsPerValue + Byte.SIZE - 1) / Byte.SIZE)); +_dataBitSet = PinotDataBitSetV2.createBitSet(dataBuffer, numBitsPerValue); +_numBitsPerValue = numBitsPerValue; + } + + /** + * Read dictionaryId for a particular docId + * @param index docId to get the dictionaryId for + * @return dictionaryId + */ + public int readInt(int index) { +return _dataBitSet.readInt(index); + } + + /** + * Array based API to read dictionaryIds for a contiguous + * range of docIds starting at startDocId for a given length + * @param startDocId docId range start + * @param length length of contiguous docId range + * @param buffer out buffer to read dictionaryIds into + */ + public void readInt(int startDocId, int length, int[] buffer) { +_dataBitSet.readInt(startDocId, length, buffer); + } + + /** + * Array based API to read dictionaryIds for an array of docIds + * which are monotonically increasing but not necessarily contiguous + * @param docIds array of docIds to read the dictionaryIds for + * @param docIdStartIndex start index in docIds array + * @param docIdLength length to process in docIds array + * @param values out array to store the dictionaryIds into + * @param valuesStartIndex start index in values array + */ + public void readValues(int[] docIds, int docIdStartIndex, int docIdLength, int[] values, int valuesStartIndex) { +int docIdEndIndex = docIdStartIndex + docIdLength - 1; +if (shouldBulkRead(docIds, docIdStartIndex, docIdEndIndex)) { + _dataBitSet.readInt(docIds, docIdStartIndex, docIdLength, values, valuesStartIndex); +} else { + for (int i = docIdStartIndex; i <= docIdEndIndex; i++) { +values[valuesStartIndex++] = _dataBitSet.readInt(docIds[i]); + } +} + } + + private boolean shouldBulkRead(int[] docIds, int startIndex, int endIndex) { +int numDocsToRead = endIndex - startIndex + 1; +int docIdRange = docIds[endIndex] - docIds[startIndex] + 1; +if 
(docIdRange > DocIdSetPlanNode.MAX_DOC_PER_CALL) { Review comment: I think this is coming from the previous commit. The latest version of the PE doesn't have this check. ## File path: pinot-core/src/main/java/org/apache/pinot/core/io/util/FixedBitIntReaderWriterV2.java ## @@ -0,0 +1,151 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.pinot.core.io.util; + +import com.google.common.base.Preconditions; +import java.io.Closeable; +import java.io.IOExceptio
[GitHub] [incubator-pinot] npawar commented on issue #5448: java.lang.ClassCastException when running SQL queries
npawar commented on issue #5448: URL: https://github.com/apache/incubator-pinot/issues/5448#issuecomment-634321674 Thanks for reporting! This has been fixed in the linked PR This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] Jackie-Jiang commented on a change in pull request #5444: Enhance and simplify the filtering
Jackie-Jiang commented on a change in pull request #5444: URL: https://github.com/apache/incubator-pinot/pull/5444#discussion_r430789063 ## File path: pinot-core/src/main/java/org/apache/pinot/core/io/readerwriter/impl/FixedByteSingleColumnMultiValueReaderWriter.java ## @@ -277,42 +260,6 @@ public void setStringArray(int row, String[] stringArray) { } } - @Override Review comment: No, I only removed the ones that does not apply to Pinot types (we don't support MV BYTES). ## File path: pinot-core/src/main/java/org/apache/pinot/core/io/writer/SingleColumnMultiValueWriter.java ## @@ -19,60 +19,14 @@ package org.apache.pinot.core.io.writer; public interface SingleColumnMultiValueWriter extends DataFileWriter { - /** Review comment: Same here. I only removed the ones that does not apply to Pinot data types. ## File path: pinot-core/src/main/java/org/apache/pinot/core/common/BlockValIterator.java ## @@ -22,6 +22,8 @@ boolean hasNext(); + int getNextDocId(); Review comment: This iterator is not only iterator but supports `skipTo(docId)` and `reset()`. It is easier to track the docId here comparing to tracking it on the caller side. I'll add some javadoc to explain this ## File path: pinot-core/src/main/java/org/apache/pinot/core/common/BlockDocIdSet.java ## @@ -21,6 +21,4 @@ public interface BlockDocIdSet { BlockDocIdIterator iterator(); Review comment: Good point, will add javadoc This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] Jackie-Jiang commented on pull request #5443: Add broker request info to the error msg in combine operator.
Jackie-Jiang commented on pull request #5443: URL: https://github.com/apache/incubator-pinot/pull/5443#issuecomment-634210017 We have 3 different combine operators, you might need to add this to all of them This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] kishoreg commented on a change in pull request #5444: Enhance and simplify the filtering
kishoreg commented on a change in pull request #5444: URL: https://github.com/apache/incubator-pinot/pull/5444#discussion_r430777214 ## File path: pinot-core/src/main/java/org/apache/pinot/core/io/writer/SingleColumnMultiValueWriter.java ## @@ -19,60 +19,14 @@ package org.apache.pinot.core.io.writer; public interface SingleColumnMultiValueWriter extends DataFileWriter { - /** Review comment: we don't want to remove these. There are usecases where we will support no-dictionary modes ## File path: pinot-core/src/main/java/org/apache/pinot/core/io/readerwriter/impl/FixedByteSingleColumnMultiValueReaderWriter.java ## @@ -277,42 +260,6 @@ public void setStringArray(int row, String[] stringArray) { } } - @Override Review comment: why are we removing these methods, dont we need them for no-dictionary? ## File path: pinot-core/src/main/java/org/apache/pinot/core/common/BlockValIterator.java ## @@ -22,6 +22,8 @@ boolean hasNext(); + int getNextDocId(); Review comment: why is blockValIterator returning getNextDocId ? ## File path: pinot-core/src/main/java/org/apache/pinot/core/common/BlockDocIdSet.java ## @@ -21,6 +21,4 @@ public interface BlockDocIdSet { BlockDocIdIterator iterator(); - Review comment: this is awesome This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] npawar opened a new issue #5450: Clear results from Pinot Data Explorer
npawar opened a new issue #5450: URL: https://github.com/apache/incubator-pinot/issues/5450 Request from @kishred In the Pinot Data Explorer, it is hard to tell if the new query ran and gave results. There are buttons for Copy, Excel, CSV. Can we have another button to Clear old results? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] fx19880617 commented on pull request #5446: Make Pinot Broker/Server can start by just passing a config file
fx19880617 commented on pull request #5446: URL: https://github.com/apache/incubator-pinot/pull/5446#issuecomment-63454
> Let's ensure all parameter names follow the same convention - config or configFile or configFileName
>
> All configs seem to have a prefix as well to indicate what config it is, so maybe something like this:
>
> * serverConfigFile
> * brokerConfigFile
> * tableConfigFile
> * schemaFile
> * jobSpecFile

Updated by adding configs alias.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] fx19880617 opened a new pull request #5446: Make Pinot Broker/Server can start by just passing a config file
fx19880617 opened a new pull request #5446: URL: https://github.com/apache/incubator-pinot/pull/5446
- Adding constants for the Helix cluster name and ZooKeeper server
- Create constructors for `HelixBrokerStarter` and `HelixServerStarter` that take just a `Configuration`, and move all Pinot internal usage to the new constructors
- Deprecate the old constructors for `HelixBrokerStarter` and `HelixServerStarter`

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] Jackie-Jiang opened a new pull request #5444: Enhance and simplify the filtering
Jackie-Jiang opened a new pull request #5444: URL: https://github.com/apache/incubator-pinot/pull/5444
Removed the methods that complicate the code but provide no value:
- BlockDocIdSet
  - getRaw
- FilterBlockDocIdSet
  - getMinDocId
  - getMaxDocId
  - setStartDocId
  - setEndDocId
- BlockDocIdIterator
  - currentDocId
- ScanBasedDocIdIterator
  - isMatch

Unified the behavior of all filter-related classes to bound the returned docIds by numDocs.
Simplified the logic of AND/OR handling.
Pushed down the AND smart-handling logic to BlockDocIdIterator to ensure the best performance:
- AndDocIdSet might return RangelessBitmapDocIdIterator, which can be merged with other iterators
- OrDocIdSet might return BitmapDocIdIterator, which can be merged with other iterators

Tested all the queries (10K PQL, 10K SQL) within the query file using HybridClusterIntegrationTest

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
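For the AND pushdown described above, the core operation is intersecting the sorted docId streams of the child filters: advance whichever side is behind until both agree on a docId. The sketch below shows that merge with plain arrays rather than Pinot's BlockDocIdIterator classes.

```java
import java.util.ArrayList;
import java.util.List;

final class AndDocIdSketch {
  // Intersects two sorted, duplicate-free docId arrays, emitting matches in increasing order.
  static List<Integer> intersect(int[] left, int[] right) {
    List<Integer> matched = new ArrayList<>();
    int i = 0;
    int j = 0;
    while (i < left.length && j < right.length) {
      if (left[i] == right[j]) {
        matched.add(left[i]);
        i++;
        j++;
      } else if (left[i] < right[j]) {
        i++;   // left side is behind, advance it
      } else {
        j++;   // right side is behind, advance it
      }
    }
    return matched;
  }

  public static void main(String[] args) {
    // docIds matched by two child filters; the AND result is [3, 8]
    System.out.println(intersect(new int[]{1, 3, 5, 8}, new int[]{2, 3, 8, 9}));
  }
}
```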
[GitHub] [incubator-pinot] siddharthteotia edited a comment on pull request #5409: Faster bit unpacking (Part 1)
siddharthteotia edited a comment on pull request #5409: URL: https://github.com/apache/incubator-pinot/pull/5409#issuecomment-634394494
> For the benchmark, you should also compare the worst case scenario such as 3, 5, 9, 17 bits

I am still working on adding faster methods for non-power-of-2 bit widths; a follow-up will address this. Right now it is standalone code (not yet wired in). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
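For readers unfamiliar with what "faster bit unpacking" is optimizing: dictionary ids are packed back-to-back at a fixed bit width, and the slow baseline extracts them one bit at a time. The reference decoder below is illustrative only (not the PR's classes) and shows the per-bit arithmetic that the specialized per-width routines in the PR are meant to replace.

```
import java.util.Arrays;

// Naive reference decoder for values packed back-to-back at a fixed bit width
// (big-endian bit order within each byte). Illustrative, not the PR's code.
public final class NaiveBitUnpacker {
  private final byte[] _packed;
  private final int _numBitsPerValue;

  public NaiveBitUnpacker(byte[] packed, int numBitsPerValue) {
    _packed = packed;
    _numBitsPerValue = numBitsPerValue;
  }

  // Reads the index-th packed value, one bit at a time (the slow path the PR avoids).
  public int readInt(int index) {
    long startBit = (long) index * _numBitsPerValue;
    int value = 0;
    for (int i = 0; i < _numBitsPerValue; i++) {
      long bit = startBit + i;
      int b = _packed[(int) (bit >>> 3)] & 0xFF;
      int bitInByte = 7 - (int) (bit & 7);
      value = (value << 1) | ((b >>> bitInByte) & 1);
    }
    return value;
  }

  public static void main(String[] args) {
    // Three 3-bit values 5, 1, 7 packed as 101 001 111 0... -> bytes 0xA7, 0x80
    NaiveBitUnpacker unpacker = new NaiveBitUnpacker(new byte[]{(byte) 0xA7, (byte) 0x80}, 3);
    int[] out = new int[3];
    for (int i = 0; i < out.length; i++) {
      out[i] = unpacker.readInt(i);
    }
    System.out.println(Arrays.toString(out)); // [5, 1, 7]
  }
}
```

Non-power-of-2 widths like 3, 5, 9, or 17 bits are the worst case precisely because values straddle byte boundaries at irregular offsets, which is why the follow-up work is called out separately.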
[GitHub] [incubator-pinot] kishoreg opened a new pull request #5441: Adding files generated by running quickstart to gitignore
kishoreg opened a new pull request #5441: URL: https://github.com/apache/incubator-pinot/pull/5441 Quickstart generates additional files in the directory, and it's annoying to see them as part of `git status`.
```
Untracked files:
  (use "git add <file>..." to include in what will be committed)
        examples/
        quickStartData1589785258792/
        quickStartData1589786369065/
        quickStartData1589786485590/
        quickStartData1589786542192/
```
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] snleee opened a new pull request #5443: Add broker request info to the error msg in combine operator.
snleee opened a new pull request #5443: URL: https://github.com/apache/incubator-pinot/pull/5443 The current log indicates an exception/timeout in the combine operator but does not give much information for debugging. Adding the broker request to the log will help with debugging. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] snleee commented on pull request #5443: Add broker request info to the error msg in combine operator.
snleee commented on pull request #5443: URL: https://github.com/apache/incubator-pinot/pull/5443#issuecomment-634218297 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] Jackie-Jiang opened a new pull request #5451: Refactor DistinctTable to use PriorityQueue based algorithm
Jackie-Jiang opened a new pull request #5451: URL: https://github.com/apache/incubator-pinot/pull/5451 Currently DISTINCT queries are solved the same way as GROUP-BY queries, which is unnecessary (it consumes much more memory and CPU) and does not guarantee accuracy of the result. Instead, a DISTINCT query can be solved efficiently with a set and a heap (similar to SelectionOrderBy, but unique records need to be tracked). The new DistinctTable does not implement the Table interface because that interface is designed mostly for GROUP-BY queries and is not efficient for DISTINCT. If in the future we want to use the Table interface to unify all the input/output, we can redesign it to make it suitable for all types of queries. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
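A minimal sketch of the "set plus heap" idea described in the PR text: the set deduplicates records, and a bounded min-heap keeps only the top-n distinct records for a LIMIT with ORDER BY. The class name, the String-record simplification, and the eviction rule are assumptions for illustration; the real DistinctTable handles typed records and order-by comparators.

```
import java.util.Comparator;
import java.util.HashSet;
import java.util.PriorityQueue;
import java.util.Set;

// Sketch of "a set plus a heap" for DISTINCT ... ORDER BY ... LIMIT n:
// the set deduplicates, the bounded min-heap keeps only the top-n distinct records.
public final class BoundedDistinctCollector {
  private final int _limit;
  private final Comparator<String> _orderBy;
  private final Set<String> _uniqueRecords = new HashSet<>();
  private final PriorityQueue<String> _minHeap;

  public BoundedDistinctCollector(int limit, Comparator<String> orderBy) {
    _limit = limit;
    _orderBy = orderBy;
    _minHeap = new PriorityQueue<>(limit, orderBy);
  }

  public void add(String record) {
    if (_uniqueRecords.contains(record)) {
      return;                                    // already tracked: DISTINCT ignores duplicates
    }
    if (_uniqueRecords.size() < _limit) {
      _uniqueRecords.add(record);
      _minHeap.offer(record);
    } else if (_orderBy.compare(record, _minHeap.peek()) > 0) {
      _uniqueRecords.remove(_minHeap.poll());    // evict the lowest-ranked record to stay within LIMIT
      _uniqueRecords.add(record);
      _minHeap.offer(record);
    }
  }

  public Set<String> getRecords() {
    return _uniqueRecords;
  }

  public static void main(String[] args) {
    BoundedDistinctCollector collector = new BoundedDistinctCollector(2, Comparator.naturalOrder());
    for (String value : new String[]{"b", "a", "b", "d", "c", "d"}) {
      collector.add(value);
    }
    System.out.println(collector.getRecords()); // the two largest distinct values: c and d
  }
}
```

With no ORDER BY the heap is unnecessary: the set alone can simply stop accepting new records once it holds the LIMIT number of distinct ones, which is much cheaper than a group-by table.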
[GitHub] [incubator-pinot] KKcorps commented on a change in pull request #5440: Add UDFs for String Transformation
KKcorps commented on a change in pull request #5440: URL: https://github.com/apache/incubator-pinot/pull/5440#discussion_r430488587 ## File path: pinot-core/src/main/java/org/apache/pinot/core/operator/transform/function/GenericTransformFunction.java ## @@ -0,0 +1,170 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.pinot.core.operator.transform.function; + +import com.google.common.base.Preconditions; +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; +import java.util.Map; +import org.apache.pinot.common.function.FunctionInfo; +import org.apache.pinot.common.function.FunctionInvoker; +import org.apache.pinot.core.common.DataSource; +import org.apache.pinot.core.operator.blocks.ProjectionBlock; +import org.apache.pinot.core.operator.transform.TransformResultMetadata; +import org.apache.pinot.core.plan.DocIdSetPlanNode; +import org.apache.pinot.spi.data.FieldSpec; + + +public class GenericTransformFunction extends BaseTransformFunction { + + private FunctionInfo _info; + FunctionInvoker _functionInvoker; + String _name; + Object[] _args; + List _nonLiteralArgIndices; + List _nonLiteralArgType; + List _nonLiteralTransformFunction; + String[] _stringResult; + + public GenericTransformFunction() { +_nonLiteralArgIndices = new ArrayList<>(); +_nonLiteralArgType = new ArrayList<>(); +_nonLiteralTransformFunction = new ArrayList<>(); + } + + @Override + public String getName() { +return _name; + } + + public void setFunction(String functionName, FunctionInfo info) + throws Exception { +_name = functionName; +_info = info; +_functionInvoker = new FunctionInvoker(info); + } + + @Override + public void init(List arguments, Map dataSourceMap) { +Preconditions.checkArgument(arguments.size() == _functionInvoker.getParameterTypes().length, +"The number of arguments are not same for scalar function and transform function: %s", getName()); + +_args = new Object[arguments.size()]; +for (int i = 0; i < arguments.size(); i++) { + TransformFunction function = arguments.get(i); + if (function instanceof LiteralTransformFunction) { +String literal = ((LiteralTransformFunction) function).getLiteral(); +Class paramType = _functionInvoker.getParameterTypes()[i]; +switch (paramType.getTypeName()) { + case "java.lang.Integer": +_args[i] = Integer.parseInt(literal); +break; + case "java.lang.String": +_args[i] = literal; +break; + case "java.lang.Double": +_args[i] = Double.valueOf(literal); +break; + case "java.lang.Long": +_args[i] = Long.valueOf(literal); +break; + default: +throw new RuntimeException( +"Unsupported data type " + paramType.getTypeName() + "for transform function " + getName()); +} + } else { 
+_nonLiteralArgIndices.add(i); +_nonLiteralTransformFunction.add(function); +Class paramType = _functionInvoker.getParameterTypes()[i]; + +switch (paramType.getTypeName()) { + case "java.lang.Integer": +_nonLiteralArgType.add(FieldSpec.DataType.INT); +break; + case "java.lang.String": +_nonLiteralArgType.add(FieldSpec.DataType.STRING); +break; + case "java.lang.Double": +_nonLiteralArgType.add(FieldSpec.DataType.DOUBLE); +break; + case "java.lang.Long": +_nonLiteralArgType.add(FieldSpec.DataType.LONG); +break; + default: +throw new RuntimeException( +"Unsupported data type " + paramType.getTypeName() + "for transform function " + getName()); +} + } +} + } + + @Override + public TransformResultMetadata getResultMetadata() { +return STRING_SV_NO_DICTIONARY_METADATA; Review comment: does it look better now? This is an automated
[GitHub] [incubator-pinot] kishoreg commented on a change in pull request #5409: Faster bit unpacking (Part 1)
kishoreg commented on a change in pull request #5409: URL: https://github.com/apache/incubator-pinot/pull/5409#discussion_r426940117 ## File path: pinot-core/src/main/java/org/apache/pinot/core/io/util/FixedBitIntReaderWriterV2.java ## @@ -0,0 +1,104 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.pinot.core.io.util; + +import com.google.common.base.Preconditions; +import java.io.Closeable; +import java.io.IOException; +import org.apache.pinot.core.plan.DocIdSetPlanNode; +import org.apache.pinot.core.segment.memory.PinotDataBuffer; + + +public final class FixedBitIntReaderWriterV2 implements Closeable { + private volatile PinotDataBitSetV2 _dataBitSet; + private final int _numBitsPerValue; + + public FixedBitIntReaderWriterV2(PinotDataBuffer dataBuffer, int numValues, int numBitsPerValue) { +Preconditions +.checkState(dataBuffer.size() == (int) (((long) numValues * numBitsPerValue + Byte.SIZE - 1) / Byte.SIZE)); +_dataBitSet = PinotDataBitSetV2.createBitSet(dataBuffer, numBitsPerValue); +_numBitsPerValue = numBitsPerValue; + } + + /** + * Read dictionaryId for a particular docId + * @param index docId to get the dictionaryId for + * @return dictionaryId + */ + public int readInt(int index) { +return _dataBitSet.readInt(index); + } + + /** + * Array based API to read dictionaryIds for a contiguous + * range of docIds starting at startDocId for a given length + * @param startDocId docId range start + * @param length length of contiguous docId range + * @param buffer out buffer to read dictionaryIds into + */ + public void readInt(int startDocId, int length, int[] buffer) { +_dataBitSet.readInt(startDocId, length, buffer); + } + + /** + * Array based API to read dictionaryIds for an array of docIds + * which are monotonically increasing but not necessarily contiguous + * @param docIds array of docIds to read the dictionaryIds for + * @param docIdStartIndex start index in docIds array + * @param docIdLength length to process in docIds array + * @param values out array to store the dictionaryIds into + * @param valuesStartIndex start index in values array + */ + public void readValues(int[] docIds, int docIdStartIndex, int docIdLength, int[] values, int valuesStartIndex) { +int docIdEndIndex = docIdStartIndex + docIdLength - 1; +if (shouldBulkRead(docIds, docIdStartIndex, docIdEndIndex)) { + _dataBitSet.readInt(docIds, docIdStartIndex, docIdLength, values, valuesStartIndex); +} else { + for (int i = docIdStartIndex; i <= docIdEndIndex; i++) { +values[valuesStartIndex++] = _dataBitSet.readInt(docIds[i]); + } +} + } + + private boolean shouldBulkRead(int[] docIds, int startIndex, int endIndex) { +int numDocsToRead = endIndex - startIndex + 1; +int docIdRange = docIds[endIndex] - docIds[startIndex] + 1; +if 
(docIdRange > DocIdSetPlanNode.MAX_DOC_PER_CALL) { Review comment: why is this check needed? ## File path: pinot-core/src/main/java/org/apache/pinot/core/io/util/FixedBitIntReaderWriterV2.java ## @@ -0,0 +1,104 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.pinot.core.io.util; + +import com.google.common.base.Preconditions; +import java.io.Closeable; +import java.io.IOException; +import org.apache.pinot.core.plan.DocIdSetPlanNode; +import org.apache.pinot.
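On the "why is this check needed?" question just above, one plausible reading (an assumption, not taken from the PR) is that the bulk path decodes every packed slot between the first and last requested docId, so it only pays off when the requested ids are dense within that span, and capping the span at the per-call batch size also keeps any scratch buffers bounded. A sketch of that kind of heuristic, with an assumed threshold:

```
// Hypothetical reading of the shouldBulkRead() check shown above; the batch-size constant and
// the one-half density threshold are assumptions for illustration, not Pinot's actual values.
final class BulkReadHeuristic {
  static final int MAX_DOC_PER_CALL = 10_000;   // assumed per-call batch size

  static boolean shouldBulkRead(int[] docIds, int startIndex, int endIndex) {
    int numDocsToRead = endIndex - startIndex + 1;
    int docIdRange = docIds[endIndex] - docIds[startIndex] + 1;
    if (docIdRange > MAX_DOC_PER_CALL) {
      // The span is wider than the largest batch a single call can request, so bulk-decoding the
      // whole span would read far more packed slots than it returns; fall back to point reads.
      return false;
    }
    // Otherwise bulk-decode only when the requested docIds cover a large fraction of the span.
    return numDocsToRead >= (docIdRange >> 1);
  }
}
```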
[GitHub] [incubator-pinot] chenboat commented on a change in pull request #5444: Enhance and simplify the filtering
chenboat commented on a change in pull request #5444: URL: https://github.com/apache/incubator-pinot/pull/5444#discussion_r430783521 ## File path: pinot-core/src/main/java/org/apache/pinot/core/common/BlockDocIdSet.java ## @@ -21,6 +21,4 @@ public interface BlockDocIdSet { BlockDocIdIterator iterator(); Review comment: Java doc over this method and/or the interface? Since we are refactoring this interface, it is a good time to add javadoc for a public interface as well. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] xiaohui-sun commented on a change in pull request #5435: [TE] clean up deprecated/unused code
xiaohui-sun commented on a change in pull request #5435: URL: https://github.com/apache/incubator-pinot/pull/5435#discussion_r430698991 ## File path: thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/completeness/checker/DataCompletenessUtils.java ## @@ -261,13 +261,4 @@ public static DateTimeFormatter getDateTimeFormatterForDataset(TimeSpec timeSpec return bucketNameToCountStar; } - public static double getPercentCompleteness(PercentCompletenessFunctionInput input) { Review comment: We should remove it and also remove getter/setter. That should be safe. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] snleee merged pull request #5443: Add broker request info to the error msg in combine operator.
snleee merged pull request #5443: URL: https://github.com/apache/incubator-pinot/pull/5443 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] fx19880617 commented on a change in pull request #5441: Adding files generated by running quickstart to gitignore
fx19880617 commented on a change in pull request #5441: URL: https://github.com/apache/incubator-pinot/pull/5441#discussion_r430280846 ## File path: .gitignore ## @@ -44,4 +44,8 @@ yarn.lock npm-debug.log* yarn-debug.log* -yarn-error.log* \ No newline at end of file +yarn-error.log* + +#quickstart files +examples/ Review comment: we have some directories named examples under `resources`; will this make git ignore them? E.g.
```
./docker/images/pinot-superset/examples
./pinot-tools/src/main/resources/examples
```
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
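For reference on the question above: a gitignore pattern without a leading slash matches at every depth, so `examples/` would indeed also ignore the nested `resources` directories listed, while a leading slash anchors the pattern to the repository root. The snippet below is an illustration of that difference, not the PR's final content.

```
# Unanchored: matches a directory named "examples" at any depth, including
# ./docker/images/pinot-superset/examples and ./pinot-tools/src/main/resources/examples
examples/

# Anchored to the repository root: only the ./examples directory generated by the quickstart
/examples/
```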
[GitHub] [incubator-pinot] chenboat closed pull request #4914: [POC] By-passing deep-store requirement for Realtime segment completion
chenboat closed pull request #4914: URL: https://github.com/apache/incubator-pinot/pull/4914 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] snleee commented on a change in pull request #5443: Add broker request info to the error msg in combine operator.
snleee commented on a change in pull request #5443: URL: https://github.com/apache/incubator-pinot/pull/5443#discussion_r430115405 ## File path: pinot-core/src/main/java/org/apache/pinot/core/operator/CombineOperator.java ## @@ -168,13 +168,13 @@ public IntermediateResultsBlock callJob() try { mergedBlock = mergedBlockFuture.get(endTimeMs - System.currentTimeMillis(), TimeUnit.MILLISECONDS); } catch (InterruptedException e) { - LOGGER.error("Caught InterruptedException.", e); + LOGGER.error("Caught InterruptedException. (brokerRequest = {})", _brokerRequest, e); Review comment: I prefer brokerRequest. Otherwise, we need one more lookup from the broker side, which will increase the time for debugging. Also, it's possible that the broker side uses the log sampling feature. What do you think? ## File path: pinot-core/src/main/java/org/apache/pinot/core/operator/CombineOperator.java ## @@ -168,13 +168,13 @@ public IntermediateResultsBlock callJob() try { mergedBlock = mergedBlockFuture.get(endTimeMs - System.currentTimeMillis(), TimeUnit.MILLISECONDS); } catch (InterruptedException e) { - LOGGER.error("Caught InterruptedException.", e); + LOGGER.error("Caught InterruptedException. (brokerRequest = {})", _brokerRequest, e); Review comment: Also, I checked the code and `brokerRequest` doesn't have the `requestId`. `instanceRequest` has the `requestId`. So, printing `brokerRequest` here looks like a better option to us. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
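For reference on the logging pattern being discussed: with SLF4J, a Throwable passed as the last argument after the `{}` placeholders are satisfied is still logged with its full stack trace, so appending the broker request costs one placeholder and nothing else. A self-contained illustration (the class name and request string here are made up):

```
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class CombineLoggingExample {
  private static final Logger LOGGER = LoggerFactory.getLogger(CombineLoggingExample.class);

  public static void main(String[] args) {
    String brokerRequest = "SELECT COUNT(*) FROM myTable";  // stand-in for the real BrokerRequest
    try {
      throw new InterruptedException("simulated");
    } catch (InterruptedException e) {
      // One {} is bound to brokerRequest; the trailing exception is treated as the Throwable
      // and printed with its stack trace, matching the pattern in the patched CombineOperator.
      LOGGER.error("Caught InterruptedException. (brokerRequest = {})", brokerRequest, e);
    }
  }
}
```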
[GitHub] [incubator-pinot] siddharthteotia commented on a change in pull request #5440: Add UDFs for String Transformation
siddharthteotia commented on a change in pull request #5440: URL: https://github.com/apache/incubator-pinot/pull/5440#discussion_r430077444 ## File path: pinot-core/src/main/java/org/apache/pinot/core/operator/transform/function/TransformFunctionFactory.java ## @@ -112,13 +115,24 @@ public static TransformFunction get(TransformExpressionTree expression, Map transformFunctionClass = TRANSFORM_FUNCTION_MAP.get(functionName); +Class transformFunctionClass; +FunctionInfo functionInfo = null; +if (FunctionRegistry.containsFunctionByName(functionName)) { + transformFunctionClass = GenericTransformFunction.class; + functionInfo = FunctionRegistry.getFunctionByName(functionName); Review comment: I am not sure I follow the logic here. Is GenericTransformFunction going to be the wrapper or single point of entry for all transform functions in Pinot. The name seems to suggest so but this code implies that only the functions registered in the registry can be treated as GenericTransformFunction. This also brings the point that we should add good javadocs to the new class. ## File path: pinot-core/src/test/java/org/apache/pinot/core/operator/transform/function/GenericTransformFunctionTest.java ## @@ -0,0 +1,173 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.pinot.core.operator.transform.function; + +import org.apache.commons.lang3.StringUtils; +import org.apache.pinot.common.request.transform.TransformExpressionTree; +import org.testng.Assert; +import org.testng.annotations.Test; + + +public class GenericTransformFunctionTest extends BaseTransformFunctionTest { + + @Test Review comment: I believe this exercises the compilation path. You should add tests in CalciteSqlCompiler test file as well. Also, we should add tests for end to end query execution. See TransformQueriesTest class or consider adding these to one of the existing query tests ## File path: pinot-core/src/test/java/org/apache/pinot/core/operator/transform/function/GenericTransformFunctionTest.java ## @@ -0,0 +1,173 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. 
See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.pinot.core.operator.transform.function; + +import org.apache.commons.lang3.StringUtils; +import org.apache.pinot.common.request.transform.TransformExpressionTree; +import org.testng.Assert; +import org.testng.annotations.Test; + + +public class GenericTransformFunctionTest extends BaseTransformFunctionTest { + + @Test Review comment: I believe this exercises the compilation path. You should add tests in CalciteSqlCompiler test file as well. Also, we should add tests for end to end query execution. See TransformQueriesTest class or consider adding execution tests to one of the existing query tests This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] kishoreg commented on issue #5450: Clear results from Pinot Data Explorer
kishoreg commented on issue #5450: URL: https://github.com/apache/incubator-pinot/issues/5450#issuecomment-634324469 instead of a clear button, we can automatically clear every time we click on run (do this before making a call to Pinot) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[GitHub] [incubator-pinot] kishoreg commented on a change in pull request #5440: Add UDFs for String Transformation
kishoreg commented on a change in pull request #5440: URL: https://github.com/apache/incubator-pinot/pull/5440#discussion_r430095391 ## File path: pinot-core/src/main/java/org/apache/pinot/core/operator/transform/function/GenericTransformFunction.java ## @@ -0,0 +1,170 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.pinot.core.operator.transform.function; + +import com.google.common.base.Preconditions; +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; +import java.util.Map; +import org.apache.pinot.common.function.FunctionInfo; +import org.apache.pinot.common.function.FunctionInvoker; +import org.apache.pinot.core.common.DataSource; +import org.apache.pinot.core.operator.blocks.ProjectionBlock; +import org.apache.pinot.core.operator.transform.TransformResultMetadata; +import org.apache.pinot.core.plan.DocIdSetPlanNode; +import org.apache.pinot.spi.data.FieldSpec; + + +public class GenericTransformFunction extends BaseTransformFunction { + + private FunctionInfo _info; + FunctionInvoker _functionInvoker; + String _name; + Object[] _args; + List _nonLiteralArgIndices; + List _nonLiteralArgType; + List _nonLiteralTransformFunction; + String[] _stringResult; + + public GenericTransformFunction() { +_nonLiteralArgIndices = new ArrayList<>(); +_nonLiteralArgType = new ArrayList<>(); +_nonLiteralTransformFunction = new ArrayList<>(); + } + + @Override + public String getName() { +return _name; + } + + public void setFunction(String functionName, FunctionInfo info) + throws Exception { +_name = functionName; +_info = info; +_functionInvoker = new FunctionInvoker(info); + } + + @Override + public void init(List arguments, Map dataSourceMap) { +Preconditions.checkArgument(arguments.size() == _functionInvoker.getParameterTypes().length, +"The number of arguments are not same for scalar function and transform function: %s", getName()); + +_args = new Object[arguments.size()]; +for (int i = 0; i < arguments.size(); i++) { + TransformFunction function = arguments.get(i); + if (function instanceof LiteralTransformFunction) { +String literal = ((LiteralTransformFunction) function).getLiteral(); +Class paramType = _functionInvoker.getParameterTypes()[i]; +switch (paramType.getTypeName()) { + case "java.lang.Integer": +_args[i] = Integer.parseInt(literal); +break; + case "java.lang.String": +_args[i] = literal; +break; + case "java.lang.Double": +_args[i] = Double.valueOf(literal); +break; + case "java.lang.Long": +_args[i] = Long.valueOf(literal); +break; + default: +throw new RuntimeException( +"Unsupported data type " + paramType.getTypeName() + "for transform function " + getName()); +} + } else { 
+_nonLiteralArgIndices.add(i); +_nonLiteralTransformFunction.add(function); +Class paramType = _functionInvoker.getParameterTypes()[i]; + +switch (paramType.getTypeName()) { + case "java.lang.Integer": +_nonLiteralArgType.add(FieldSpec.DataType.INT); +break; + case "java.lang.String": +_nonLiteralArgType.add(FieldSpec.DataType.STRING); +break; + case "java.lang.Double": +_nonLiteralArgType.add(FieldSpec.DataType.DOUBLE); +break; + case "java.lang.Long": +_nonLiteralArgType.add(FieldSpec.DataType.LONG); +break; + default: +throw new RuntimeException( +"Unsupported data type " + paramType.getTypeName() + "for transform function " + getName()); +} + } +} + } + + @Override + public TransformResultMetadata getResultMetadata() { +return STRING_SV_NO_DICTIONARY_METADATA; Review comment: You are right, we need to override methods for int, long, double, float. Shouldn't be hard, its mostly copy-past
[GitHub] [incubator-pinot] npawar closed issue #5448: java.lang.ClassCastException when running SQL queries
npawar closed issue #5448: URL: https://github.com/apache/incubator-pinot/issues/5448 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[incubator-pinot] branch master updated: Include trailing empty string in group key split (#5449)
This is an automated email from the ASF dual-hosted git repository. nehapawar pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git The following commit(s) were added to refs/heads/master by this push: new 41f7ef5 Include trailing empty string in group key split (#5449) 41f7ef5 is described below commit 41f7ef575109ab940bd26dc3e8077a12c4373a8b Author: Neha Pawar AuthorDate: Tue May 26 15:50:39 2020 -0700 Include trailing empty string in group key split (#5449) --- .../operator/CombineGroupByOrderByOperator.java| 2 +- .../aggregation/groupby/GroupKeyGenerator.java | 8 +++ .../query/aggregation/groupby/GroupKeyTest.java| 66 ++ 3 files changed, 75 insertions(+), 1 deletion(-) diff --git a/pinot-core/src/main/java/org/apache/pinot/core/operator/CombineGroupByOrderByOperator.java b/pinot-core/src/main/java/org/apache/pinot/core/operator/CombineGroupByOrderByOperator.java index 610218a..8ec1841 100644 --- a/pinot-core/src/main/java/org/apache/pinot/core/operator/CombineGroupByOrderByOperator.java +++ b/pinot-core/src/main/java/org/apache/pinot/core/operator/CombineGroupByOrderByOperator.java @@ -169,7 +169,7 @@ public class CombineGroupByOrderByOperator extends BaseOperatorhttp://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.pinot.query.aggregation.groupby; + +import org.apache.pinot.core.query.aggregation.groupby.GroupKeyGenerator; +import org.testng.Assert; +import org.testng.annotations.Test; + + +public class GroupKeyTest { + + @Test + public void testGetKeys() { +GroupKeyGenerator.GroupKey groupKey = new GroupKeyGenerator.GroupKey(); +groupKey._stringKey = "foo\tbar\t"; +String[] keys = groupKey.getKeys(); +Assert.assertEquals(keys.length, 3); +Assert.assertEquals(keys, new String[]{"foo", "bar", ""}); + +groupKey._stringKey = "foo\t\tbar"; +keys = groupKey.getKeys(); +Assert.assertEquals(keys.length, 3); +Assert.assertEquals(keys, new String[]{"foo", "", "bar"}); + +groupKey._stringKey = "\tfoo\tbar"; +keys = groupKey.getKeys(); +Assert.assertEquals(keys.length, 3); +Assert.assertEquals(keys, new String[]{"", "foo", "bar"}); + +groupKey._stringKey = "foo\t\t"; +keys = groupKey.getKeys(); +Assert.assertEquals(keys.length, 3); +Assert.assertEquals(keys, new String[]{"foo", "", ""}); + +groupKey._stringKey = "\t\t"; +keys = groupKey.getKeys(); +Assert.assertEquals(keys.length, 3); +Assert.assertEquals(keys, new String[]{"", "", ""}); + +groupKey._stringKey = ""; +keys = groupKey.getKeys(); +Assert.assertEquals(keys.length, 1); +Assert.assertEquals(keys, new String[]{""}); + +groupKey._stringKey = "\t"; +keys = groupKey.getKeys(); +Assert.assertEquals(keys.length, 2); +Assert.assertEquals(keys, new String[]{"", ""}); + } +} - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
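The behavior being fixed comes straight from Java's `String.split` contract: with the default limit of zero, trailing empty strings are dropped, so a group key ending in an empty group value loses a column, while a negative limit keeps it. Whether the production fix uses exactly this call is not visible in the excerpt above; the standalone snippet below just demonstrates the underlying language behavior that the new GroupKeyTest cases exercise.

```
import java.util.Arrays;

public final class TrailingEmptyStringSplit {
  public static void main(String[] args) {
    String groupKey = "foo\tbar\t";  // three group values, the last one empty

    // Default limit (0) silently drops trailing empty strings -> prints [foo, bar]
    System.out.println(Arrays.toString(groupKey.split("\t")));

    // Negative limit keeps trailing empty strings -> prints [foo, bar, ] with length 3
    System.out.println(Arrays.toString(groupKey.split("\t", -1)));
  }
}
```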
[incubator-pinot] branch master updated: Add broker request info to the error msg in combine operator. (#5443)
This is an automated email from the ASF dual-hosted git repository. snlee pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git The following commit(s) were added to refs/heads/master by this push: new 0bdb89c Add broker request info to the error msg in combine operator. (#5443) 0bdb89c is described below commit 0bdb89c4d4e8ecd4352a60a66b23806d7e39219a Author: Seunghyun Lee AuthorDate: Tue May 26 15:29:40 2020 -0700 Add broker request info to the error msg in combine operator. (#5443) * Add broker request info to the error msg in combine operator. Current log indicates exception/timeout in combine operator but this does not give much information for debugging. Adding broker request to the log will help for debugging. * Add broker requests to the error log in other combine operators --- .../org/apache/pinot/core/operator/CombineGroupByOperator.java| 8 +--- .../apache/pinot/core/operator/CombineGroupByOrderByOperator.java | 8 +--- .../main/java/org/apache/pinot/core/operator/CombineOperator.java | 6 +++--- 3 files changed, 13 insertions(+), 9 deletions(-) diff --git a/pinot-core/src/main/java/org/apache/pinot/core/operator/CombineGroupByOperator.java b/pinot-core/src/main/java/org/apache/pinot/core/operator/CombineGroupByOperator.java index a7bb5c2..4ce2a6e 100644 --- a/pinot-core/src/main/java/org/apache/pinot/core/operator/CombineGroupByOperator.java +++ b/pinot-core/src/main/java/org/apache/pinot/core/operator/CombineGroupByOperator.java @@ -174,8 +174,8 @@ public class CombineGroupByOperator extends BaseOperator { try { mergedBlock = mergedBlockFuture.get(endTimeMs - System.currentTimeMillis(), TimeUnit.MILLISECONDS); } catch (InterruptedException e) { - LOGGER.error("Caught InterruptedException.", e); + LOGGER.error("Caught InterruptedException. (brokerRequest = {})", _brokerRequest, e); mergedBlock = new IntermediateResultsBlock(QueryException.getException(QueryException.FUTURE_CALL_ERROR, e)); } catch (ExecutionException e) { - LOGGER.error("Caught ExecutionException.", e); + LOGGER.error("Caught ExecutionException. (brokerRequest = {})", _brokerRequest, e); mergedBlock = new IntermediateResultsBlock(QueryException.getException(QueryException.MERGE_RESPONSE_ERROR, e)); } catch (TimeoutException e) { - LOGGER.error("Caught TimeoutException", e); + LOGGER.error("Caught TimeoutException. (brokerRequest = {})", _brokerRequest, e); mergedBlockFuture.cancel(true); mergedBlock = new IntermediateResultsBlock(QueryException.getException(QueryException.EXECUTION_TIMEOUT_ERROR, e)); - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[incubator-pinot] branch default_pinot_startable_to_use_config_files updated (51b3e96 -> 8626188)
This is an automated email from the ASF dual-hosted git repository. xiangfu pushed a change to branch default_pinot_startable_to_use_config_files in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git. from 51b3e96 Make Pinot Broker/Server can start by just passing a config file add 8626188 Adding more config alias No new revisions were added by this update. Summary of changes: .../main/java/org/apache/pinot/tools/admin/command/AddTableCommand.java | 2 +- .../apache/pinot/tools/admin/command/LaunchDataIngestionJobCommand.java | 2 +- .../java/org/apache/pinot/tools/admin/command/StartBrokerCommand.java | 2 +- .../org/apache/pinot/tools/admin/command/StartControllerCommand.java| 2 +- .../java/org/apache/pinot/tools/admin/command/StartServerCommand.java | 2 +- 5 files changed, 5 insertions(+), 5 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[incubator-pinot] branch enhance_log4j updated (9b7be2f -> 72c7411)
This is an automated email from the ASF dual-hosted git repository. xiangfu pushed a change to branch enhance_log4j in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git. from 9b7be2f Update Pinot-admin and quickstart log4j config add 72c7411 Update log4j2.xml No new revisions were added by this update. Summary of changes: pinot-tools/src/main/resources/conf/log4j2.xml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[incubator-pinot] branch master updated: [TE] endpoints for alerts page filters (#5432)
This is an automated email from the ASF dual-hosted git repository. jihao pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git The following commit(s) were added to refs/heads/master by this push: new 4270def [TE] endpoints for alerts page filters (#5432) 4270def is described below commit 4270defa916eab338e461c2537204e8f321fc834 Author: Jihao Zhang AuthorDate: Tue May 26 10:17:59 2020 -0700 [TE] endpoints for alerts page filters (#5432) This commit adds the endpoints for the filters in the new alert list page. Auto-complete for detections, subscription groups, applications, and alert owners. Search for the alerts that are subscribed by a user. The endpoints to return a list of all rules in the detection pipeline. --- .../dashboard/ThirdEyeDashboardApplication.java| 2 +- .../dashboard/resources/v2/DataResource.java | 98 +++--- .../resources/v2/alerts/AlertResource.java | 6 +- .../resources/v2/alerts/AlertSearchFilter.java | 17 +++- .../resources/v2/alerts/AlertSearcher.java | 29 +-- .../DetectionConfigurationResource.java| 28 +-- .../resources/v2/alerts/AlertSearcherTest.java | 2 +- 7 files changed, 152 insertions(+), 30 deletions(-) diff --git a/thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/dashboard/ThirdEyeDashboardApplication.java b/thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/dashboard/ThirdEyeDashboardApplication.java index ccd70d3..97e24b5 100644 --- a/thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/dashboard/ThirdEyeDashboardApplication.java +++ b/thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/dashboard/ThirdEyeDashboardApplication.java @@ -75,7 +75,7 @@ import org.apache.pinot.thirdeye.datasource.loader.DefaultTimeSeriesLoader; import org.apache.pinot.thirdeye.datasource.loader.TimeSeriesLoader; import org.apache.pinot.thirdeye.datasource.sql.resources.SqlDataSourceResource; import org.apache.pinot.thirdeye.detection.DetectionResource; -import org.apache.pinot.thirdeye.detection.annotation.DetectionConfigurationResource; +import org.apache.pinot.thirdeye.detection.DetectionConfigurationResource; import org.apache.pinot.thirdeye.detection.yaml.YamlResource; import org.apache.pinot.thirdeye.detector.email.filter.AlertFilterFactory; import org.apache.pinot.thirdeye.detector.function.AnomalyFunctionFactory; diff --git a/thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/dashboard/resources/v2/DataResource.java b/thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/dashboard/resources/v2/DataResource.java index 34b8b5a..8b81e66 100644 --- a/thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/dashboard/resources/v2/DataResource.java +++ b/thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/dashboard/resources/v2/DataResource.java @@ -19,7 +19,10 @@ package org.apache.pinot.thirdeye.dashboard.resources.v2; -import org.apache.pinot.thirdeye.api.Constants; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.google.common.base.Strings; +import com.google.common.cache.LoadingCache; +import com.google.common.collect.Multimap; import io.swagger.annotations.Api; import io.swagger.annotations.ApiOperation; import io.swagger.annotations.ApiParam; @@ -36,8 +39,9 @@ import java.util.LinkedHashMap; import java.util.List; import java.util.Map; import java.util.Set; +import java.util.TreeSet; import java.util.concurrent.TimeUnit; - +import java.util.stream.Collectors; import javax.ws.rs.GET; import javax.ws.rs.Path; import 
javax.ws.rs.PathParam; @@ -45,17 +49,8 @@ import javax.ws.rs.Produces; import javax.ws.rs.QueryParam; import javax.ws.rs.WebApplicationException; import javax.ws.rs.core.MediaType; - import org.apache.commons.lang3.StringUtils; -import org.joda.time.DateTime; -import org.joda.time.DateTimeZone; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; - -import com.fasterxml.jackson.databind.ObjectMapper; -import com.google.common.base.Strings; -import com.google.common.cache.LoadingCache; -import com.google.common.collect.Multimap; +import org.apache.pinot.thirdeye.api.Constants; import org.apache.pinot.thirdeye.common.time.TimeRange; import org.apache.pinot.thirdeye.dashboard.Utils; import org.apache.pinot.thirdeye.dashboard.views.heatmap.HeatMapViewHandler; @@ -63,13 +58,23 @@ import org.apache.pinot.thirdeye.dashboard.views.heatmap.HeatMapViewRequest; import org.apache.pinot.thirdeye.dashboard.views.heatmap.HeatMapViewResponse; import org.apache.pinot.thirdeye.datalayer.bao.AlertConfigManager; import org.apache.pinot.thirdeye.datalayer.bao.AnomalyFunctionManager; +import org.apache.pinot.thirdeye.datalayer.bao.ApplicationManager; import org.apache.pinot.thirdeye.datalayer.bao.DatasetConfigManager; +import org.apache.pinot.thirdeye.datalayer.bao.Detec
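For a concrete picture of what one such filter auto-complete endpoint can look like with the JAX-RS annotations already imported above, here is a hypothetical sketch; the path, query parameter name, and in-memory name list are assumptions for illustration rather than ThirdEye's actual DataResource code.

```
import java.util.List;
import java.util.stream.Collectors;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

// Hypothetical auto-complete endpoint: returns detection names containing the typed fragment.
@Path("/data/autocomplete")
@Produces(MediaType.APPLICATION_JSON)
public class DetectionNameAutoCompleteResource {
  private final List<String> _allDetectionNames;  // in the real resource this would come from a DAO/manager

  public DetectionNameAutoCompleteResource(List<String> allDetectionNames) {
    _allDetectionNames = allDetectionNames;
  }

  @GET
  @Path("/detections")
  public List<String> autoCompleteDetections(@QueryParam("name") String fragment) {
    String needle = fragment == null ? "" : fragment.toLowerCase();
    return _allDetectionNames.stream()
        .filter(name -> name.toLowerCase().contains(needle))
        .sorted()
        .collect(Collectors.toList());
  }
}
```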
[incubator-pinot] branch default_pinot_startable_to_use_config_files updated (17fc399 -> 51b3e96)
This is an automated email from the ASF dual-hosted git repository. xiangfu pushed a change to branch default_pinot_startable_to_use_config_files in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git. discard 17fc399 Make Pinot Broker/Server can start by just passing a config file add 51b3e96 Make Pinot Broker/Server can start by just passing a config file This update added new revisions after undoing existing revisions. That is to say, some revisions that were in the old version of the branch are not in the new version. This situation occurs when a user --force pushes a change and generates a repository containing something like this: * -- * -- B -- O -- O -- O (17fc399) \ N -- N -- N refs/heads/default_pinot_startable_to_use_config_files (51b3e96) You should already have received notification emails for all of the O revisions, and so the following emails describe only the N revisions from the common base, B. Any revisions marked "omit" are not gone; other references still refer to them. Any revisions marked "discard" are gone forever. No new revisions were added by this update. Summary of changes: pinot-tools/src/main/resources/conf/pinot-controller.conf | 2 +- pinot-tools/src/main/resources/conf/pinot-server.conf | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[incubator-pinot] branch default_pinot_startable_to_use_config_files updated (6534f74 -> 17fc399)
This is an automated email from the ASF dual-hosted git repository. xiangfu pushed a change to branch default_pinot_startable_to_use_config_files in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git. discard 6534f74 Make Pinot Broker/Server can start by just passing a config file add 17fc399 Make Pinot Broker/Server can start by just passing a config file This update added new revisions after undoing existing revisions. That is to say, some revisions that were in the old version of the branch are not in the new version. This situation occurs when a user --force pushes a change and generates a repository containing something like this: * -- * -- B -- O -- O -- O (6534f74) \ N -- N -- N refs/heads/default_pinot_startable_to_use_config_files (17fc399) You should already have received notification emails for all of the O revisions, and so the following emails describe only the N revisions from the common base, B. Any revisions marked "omit" are not gone; other references still refer to them. Any revisions marked "discard" are gone forever. No new revisions were added by this update. Summary of changes: .../admin/command/AbstractBaseAdminCommand.java | 1 + .../tools/admin/command/StartControllerCommand.java | 6 +- .../tools/admin/command/StartServerCommand.java | 20 ++-- .../tools/admin/command/StartZookeeperCommand.java | 2 +- 4 files changed, 21 insertions(+), 8 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[incubator-pinot] branch default_pinot_startable_to_use_config_files updated (0d798dc -> 6534f74)
This is an automated email from the ASF dual-hosted git repository. xiangfu pushed a change to branch default_pinot_startable_to_use_config_files in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git. discard 0d798dc Make Pinot Broker/Server can start by just passing a config file add 6534f74 Make Pinot Broker/Server can start by just passing a config file This update added new revisions after undoing existing revisions. That is to say, some revisions that were in the old version of the branch are not in the new version. This situation occurs when a user --force pushes a change and generates a repository containing something like this: * -- * -- B -- O -- O -- O (0d798dc) \ N -- N -- N refs/heads/default_pinot_startable_to_use_config_files (6534f74) You should already have received notification emails for all of the O revisions, and so the following emails describe only the N revisions from the common base, B. Any revisions marked "omit" are not gone; other references still refer to them. Any revisions marked "discard" are gone forever. No new revisions were added by this update. Summary of changes: .../test/java/org/apache/pinot/integration/tests/ClusterTest.java | 6 ++ .../main/java/org/apache/pinot/tools/perf/PerfBenchmarkDriver.java | 7 ++- 2 files changed, 4 insertions(+), 9 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[incubator-pinot] branch default_pinot_startable_to_use_config_files updated (b10ea9b -> 0d798dc)
This is an automated email from the ASF dual-hosted git repository. xiangfu pushed a change to branch default_pinot_startable_to_use_config_files in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git. discard b10ea9b Make Pinot Broker/Server can start by just passing a config file add 0d798dc Make Pinot Broker/Server can start by just passing a config file This update added new revisions after undoing existing revisions. That is to say, some revisions that were in the old version of the branch are not in the new version. This situation occurs when a user --force pushes a change and generates a repository containing something like this: * -- * -- B -- O -- O -- O (b10ea9b) \ N -- N -- N refs/heads/default_pinot_startable_to_use_config_files (0d798dc) You should already have received notification emails for all of the O revisions, and so the following emails describe only the N revisions from the common base, B. Any revisions marked "omit" are not gone; other references still refer to them. Any revisions marked "discard" are gone forever. No new revisions were added by this update. Summary of changes: pinot-tools/src/main/resources/conf/pinot-broker.conf | 2 +- pinot-tools/src/main/resources/conf/pinot-controller.conf | 2 +- pinot-tools/src/main/resources/conf/pinot-server.conf | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[incubator-pinot] branch default_pinot_startable_to_use_config_files updated (01b6224 -> b10ea9b)
This is an automated email from the ASF dual-hosted git repository. xiangfu pushed a change to branch default_pinot_startable_to_use_config_files in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git. discard 01b6224 Make Pinot Broker/Server can start by just passing a config file add b10ea9b Make Pinot Broker/Server can start by just passing a config file This update added new revisions after undoing existing revisions. That is to say, some revisions that were in the old version of the branch are not in the new version. This situation occurs when a user --force pushes a change and generates a repository containing something like this: * -- * -- B -- O -- O -- O (01b6224) \ N -- N -- N refs/heads/default_pinot_startable_to_use_config_files (b10ea9b) You should already have received notification emails for all of the O revisions, and so the following emails describe only the N revisions from the common base, B. Any revisions marked "omit" are not gone; other references still refer to them. Any revisions marked "discard" are gone forever. No new revisions were added by this update. Summary of changes: pinot-tools/src/main/resources/conf/pinot-broker.conf | 9 + pinot-tools/src/main/resources/conf/pinot-controller.conf | 13 + pinot-tools/src/main/resources/conf/pinot-server.conf | 13 + 3 files changed, 35 insertions(+) - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[incubator-pinot] branch default_pinot_startable_to_use_config_files updated (1a1fac8 -> 01b6224)
This is an automated email from the ASF dual-hosted git repository. xiangfu pushed a change to branch default_pinot_startable_to_use_config_files in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git. discard 1a1fac8 Make Pinot Broker/Server can start by just passing a config file add 01b6224 Make Pinot Broker/Server can start by just passing a config file This update added new revisions after undoing existing revisions. That is to say, some revisions that were in the old version of the branch are not in the new version. This situation occurs when a user --force pushes a change and generates a repository containing something like this: * -- * -- B -- O -- O -- O (1a1fac8) \ N -- N -- N refs/heads/default_pinot_startable_to_use_config_files (01b6224) You should already have received notification emails for all of the O revisions, and so the following emails describe only the N revisions from the common base, B. Any revisions marked "omit" are not gone; other references still refer to them. Any revisions marked "discard" are gone forever. No new revisions were added by this update. Summary of changes: .../main/java/org/apache/pinot/controller/ControllerConf.java | 11 +++ pinot-tools/src/main/resources/conf/pinot-broker.conf | 8 pinot-tools/src/main/resources/conf/pinot-controller.conf | 8 pinot-tools/src/main/resources/conf/pinot-server.conf | 8 4 files changed, 19 insertions(+), 16 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org
[incubator-pinot] 01/01: Make Pinot Broker/Server can start by just passing a config file
This is an automated email from the ASF dual-hosted git repository. xiangfu pushed a commit to branch default_pinot_startable_to_use_config_files in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git commit 1a1fac812b974ed9723691b8e2f090f8dccb9017 Author: Xiang Fu AuthorDate: Tue May 26 02:01:55 2020 -0700 Make Pinot Broker/Server can start by just passing a config file --- .../broker/broker/helix/HelixBrokerStarter.java| 28 ++ .../broker/broker/HelixBrokerStarterTest.java | 5 +++- .../apache/pinot/common/utils/CommonConstants.java | 3 +++ .../pinot/integration/tests/ClusterTest.java | 12 -- .../tests/ServerStarterIntegrationTest.java| 6 +++-- .../server/starter/helix/HelixServerStarter.java | 22 + .../tools/admin/command/StartBrokerCommand.java| 5 +++- .../tools/admin/command/StartServerCommand.java| 5 +++- .../pinot/tools/perf/PerfBenchmarkDriver.java | 11 +++-- .../src/main/resources/conf/pinot-broker.conf | 5 .../src/main/resources/conf/pinot-controller.conf | 7 ++ .../src/main/resources/conf/pinot-server.conf | 7 ++ 12 files changed, 99 insertions(+), 17 deletions(-) diff --git a/pinot-broker/src/main/java/org/apache/pinot/broker/broker/helix/HelixBrokerStarter.java b/pinot-broker/src/main/java/org/apache/pinot/broker/broker/helix/HelixBrokerStarter.java index 0fc14fb..b801044 100644 --- a/pinot-broker/src/main/java/org/apache/pinot/broker/broker/helix/HelixBrokerStarter.java +++ b/pinot-broker/src/main/java/org/apache/pinot/broker/broker/helix/HelixBrokerStarter.java @@ -93,21 +93,39 @@ public class HelixBrokerStarter { // Participant Helix manager handles Helix functionality such as state transitions and messages private HelixManager _participantHelixManager; + @Deprecated public HelixBrokerStarter(Configuration brokerConf, String clusterName, String zkServer) throws Exception { this(brokerConf, clusterName, zkServer, null); } + @Deprecated public HelixBrokerStarter(Configuration brokerConf, String clusterName, String zkServer, @Nullable String brokerHost) throws Exception { +this(applyBrokerConfigs(brokerConf, clusterName, zkServer, brokerHost)); + } + + @Deprecated + private static Configuration applyBrokerConfigs(Configuration brokerConf, String clusterName, String zkServers, @Nullable String brokerHost) { +brokerConf.setProperty(Helix.CONFIG_OF_CLUSTER_NAME, clusterName); +brokerConf.setProperty(Helix.CONFIG_OF_ZOOKEEPR_SERVER, zkServers); +if (brokerHost == null) { + brokerConf.clearProperty(Broker.CONFIG_OF_BROKER_HOSTNAME); +} else { + brokerConf.setProperty(Broker.CONFIG_OF_BROKER_HOSTNAME, brokerHost); +} +return brokerConf; + } + + public HelixBrokerStarter(Configuration brokerConf) throws Exception { _brokerConf = brokerConf; setupHelixSystemProperties(); -_clusterName = clusterName; +_clusterName = brokerConf.getString(Helix.CONFIG_OF_CLUSTER_NAME); // Remove all white-spaces from the list of zkServers (if any). -_zkServers = zkServer.replaceAll("\\s+", ""); - +_zkServers = brokerConf.getString(Helix.CONFIG_OF_ZOOKEEPR_SERVER).replaceAll("\\s+", ""); +String brokerHost = brokerConf.getString(Broker.CONFIG_OF_BROKER_HOSTNAME); if (brokerHost == null) { brokerHost = _brokerConf.getBoolean(CommonConstants.Helix.SET_INSTANCE_ID_TO_HOSTNAME_KEY, false) ? 
NetUtil .getHostnameOrAddress() : NetUtil.getHostAddress(); @@ -354,7 +372,9 @@ public class HelixBrokerStarter { int port = 5001; brokerConf.addProperty(Helix.KEY_OF_BROKER_QUERY_PORT, port); brokerConf.addProperty(Broker.CONFIG_OF_BROKER_TIMEOUT_MS, 60 * 1000L); -return new HelixBrokerStarter(brokerConf, "quickstart", "localhost:2122"); +brokerConf.addProperty(CommonConstants.Helix.CONFIG_OF_CLUSTER_NAME, "quickstart"); +brokerConf.addProperty(CommonConstants.Helix.CONFIG_OF_ZOOKEEPR_SERVER, "localhost:2122"); +return new HelixBrokerStarter(brokerConf); } public static void main(String[] args) diff --git a/pinot-broker/src/test/java/org/apache/pinot/broker/broker/HelixBrokerStarterTest.java b/pinot-broker/src/test/java/org/apache/pinot/broker/broker/HelixBrokerStarterTest.java index a2970ab..44f249f 100644 --- a/pinot-broker/src/test/java/org/apache/pinot/broker/broker/HelixBrokerStarterTest.java +++ b/pinot-broker/src/test/java/org/apache/pinot/broker/broker/HelixBrokerStarterTest.java @@ -31,6 +31,7 @@ import org.apache.pinot.broker.routing.RoutingTable; import org.apache.pinot.broker.routing.timeboundary.TimeBoundaryInfo; import org.apache.pinot.common.metadata.segment.OfflineSegmentZKMetadata; import org.apache.pinot.common.request.BrokerRequest; +import org.apache.pinot.common.utils.CommonConstants; import org.apache.pinot.common.utils.CommonConstants.Helix; import org.apache.pi
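A short usage sketch of the new single-argument `HelixBrokerStarter(Configuration)` constructor introduced in the commit above, loading the broker settings from a properties file the way the branch name suggests. The file path, property key names, use of Commons Configuration's `PropertiesConfiguration`, and the `start()` entry point are assumptions for illustration.

```
import java.io.File;
import org.apache.commons.configuration.PropertiesConfiguration;
import org.apache.pinot.broker.broker.helix.HelixBrokerStarter;

public final class StartBrokerFromConfigFile {
  public static void main(String[] args) throws Exception {
    // The config file is expected to carry the cluster name and ZooKeeper address,
    // e.g. pinot.cluster.name=quickstart and a ZooKeeper server entry (key names assumed).
    PropertiesConfiguration brokerConf = new PropertiesConfiguration(new File("conf/pinot-broker.conf"));

    // The new constructor reads everything it needs from the Configuration object itself.
    HelixBrokerStarter brokerStarter = new HelixBrokerStarter(brokerConf);
    brokerStarter.start();  // assumed entry point, mirroring how the quickstarts launch the broker
  }
}
```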
[incubator-pinot] branch default_pinot_startable_to_use_config_files created (now 1a1fac8)
This is an automated email from the ASF dual-hosted git repository. xiangfu pushed a change to branch default_pinot_startable_to_use_config_files in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git. at 1a1fac8 Make Pinot Broker/Server can start by just passing a config file This branch includes the following new commits: new 1a1fac8 Make Pinot Broker/Server can start by just passing a config file The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. - To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org For additional commands, e-mail: commits-h...@pinot.apache.org