[GitHub] [drill] cgivre opened a new pull request, #2725: DRILL-8179: Convert LTSV Format Plugin to EVF2

2022-12-19 Thread GitBox


cgivre opened a new pull request, #2725:
URL: https://github.com/apache/drill/pull/2725

   # [DRILL-8179](https://issues.apache.org/jira/browse/DRILL-8179): Convert 
LTSV Format Plugin to EVF2
   
   ## Description
   With this PR, all format plugins now use the EVF readers. This is part of 
[DRILL-8132](https://issues.apache.org/jira/browse/DRILL-8132).
   
   ## Documentation
   In addition to refactoring the plugin to use EVF V2, this code replaces the 
homegrown LTSV reader with a dedicated parsing module and introduces new 
configuration variables. These variables are all noted in the updated README. 
They are all optional, however, so most users are unlikely to notice any real 
difference.
   
   One exception is the new variable that controls error tolerance.
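   
   For readers unfamiliar with the format: LTSV (Labeled Tab-Separated Values) 
encodes each record as tab-separated `label:value` pairs. A minimal parsing 
sketch, assuming well-formed input (illustrative only, not the parsing module 
this PR adopts):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LtsvLineParser {
  // Parse one LTSV record, e.g. "host:127.0.0.1\tstatus:200\tsize:5316".
  public static Map<String, String> parse(String line) {
    Map<String, String> record = new LinkedHashMap<>();
    for (String field : line.split("\t")) {
      int sep = field.indexOf(':');
      if (sep < 0) {
        continue; // an error-tolerant reader could skip or report this field
      }
      record.put(field.substring(0, sep), field.substring(sep + 1));
    }
    return record;
  }

  public static void main(String[] args) {
    System.out.println(parse("host:127.0.0.1\tstatus:200\tsize:5316"));
  }
}
```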
   
   ## Testing
   Ran existing unit tests.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@drill.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [drill] weijunlu commented on issue #2693: Order by expression failed to execute in mysql plugin

2022-12-19 Thread GitBox


weijunlu commented on issue #2693:
URL: https://github.com/apache/drill/issues/2693#issuecomment-1358751539

   @vvysotskyi @cgivre If ONLY_FULL_GROUP_BY is disabled in MySQL, the SQL 
executes successfully.
   
   Jupiter (mysql.test)> select
   2..semicolon> extract(year from o_orderdate) as o_year
   3..semicolon> from orders
   4..semicolon> group by o_year
   5..semicolon> order by o_year;
   +--------+
   | o_year |
   +--------+
   | 1992   |
   | 1993   |
   | 1994   |
   | 1995   |
   | 1996   |
   | 1997   |
   | 1998   |
   +--------+
   7 rows selected (4.079 seconds)
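   
   For reference, the same behavior can be reproduced per session from a Java 
client. The JDBC URL and credentials below are placeholders, and stripping 
ONLY_FULL_GROUP_BY out of @@sql_mode is a common MySQL idiom:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OnlyFullGroupByDemo {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/tpch", "user", "password");
         Statement stmt = conn.createStatement()) {
      // Drop ONLY_FULL_GROUP_BY from this session's sql_mode only.
      stmt.execute("SET SESSION sql_mode = "
          + "(SELECT REPLACE(@@sql_mode, 'ONLY_FULL_GROUP_BY', ''))");
      try (ResultSet rs = stmt.executeQuery(
          "SELECT EXTRACT(YEAR FROM o_orderdate) AS o_year "
              + "FROM orders GROUP BY o_year ORDER BY o_year")) {
        while (rs.next()) {
          System.out.println(rs.getInt("o_year"));
        }
      }
    }
  }
}
```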





[jira] [Resolved] (DRILL-8198) XML EVF2 reader provideSchema usage

2022-12-19 Thread Charles Givre (Jira)


 [ 
https://issues.apache.org/jira/browse/DRILL-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Givre resolved DRILL-8198.
--
Resolution: Fixed

> XML EVF2 reader provideSchema usage
> ---
>
> Key: DRILL-8198
> URL: https://issues.apache.org/jira/browse/DRILL-8198
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Storage - XML
>Affects Versions: 1.20.0
>Reporter: Vitalii Diravka
>Assignee: Vitalii Diravka
>Priority: Major
> Fix For: 2.0.0
>
>
> XMLBatchReader has been converted to an EVF2 reader, but it does not use
> provideSchema for the Schema Provisioning feature.
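
For context, a rough sketch of wiring a provided schema into an EVF2 reader
through its schema negotiator; the method names used here (providedSchema,
tableSchema, build) follow Drill's EVF2 API but are assumptions, not the
committed patch:

```java
// Sketch only: an EVF2 reader honoring a user-provided schema.
public XMLBatchReader(SchemaNegotiator negotiator) {
  TupleMetadata provided = negotiator.providedSchema();
  if (provided != null) {
    // Trust the user-provided column types instead of inferring them.
    negotiator.tableSchema(provided, true);
  }
  rowWriter = negotiator.build().writer();
}
```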



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [drill] jnturton commented on a diff in pull request #2722: DRILL-8371: Add Write/Insert Capability to Splunk Plugin

2022-12-19 Thread GitBox


jnturton commented on code in PR #2722:
URL: https://github.com/apache/drill/pull/2722#discussion_r1052404918


##
contrib/storage-splunk/src/main/java/org/apache/drill/exec/store/splunk/SplunkBatchWriter.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.splunk;
+
+
+import com.splunk.Args;
+import com.splunk.Index;
+import com.splunk.IndexCollection;
+import com.splunk.Service;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.exec.proto.UserBitShared.UserCredentials;
+import org.apache.drill.exec.record.VectorAccessible;
+import org.apache.drill.exec.store.AbstractRecordWriter;
+import org.apache.drill.exec.store.EventBasedRecordWriter.FieldConverter;
+import org.apache.drill.exec.vector.complex.reader.FieldReader;
+import org.json.simple.JSONObject;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+
+public class SplunkBatchWriter extends AbstractRecordWriter {
+
+  private static final Logger logger = 
LoggerFactory.getLogger(SplunkBatchWriter.class);
+  private static final String DEFAULT_SOURCETYPE = "drill";
+  private final UserCredentials userCredentials;
+  private final List<String> tableIdentifier;
+  private final SplunkWriter config;
+  private final Args eventArgs;
+  protected final Service splunkService;
+  private JSONObject splunkEvent;
+  protected Index destinationIndex;
+
+
+  public SplunkBatchWriter(UserCredentials userCredentials, List<String>
tableIdentifier, SplunkWriter config) {
+this.config = config;
+this.tableIdentifier = tableIdentifier;
+this.userCredentials = userCredentials;
+
+SplunkConnection connection = new 
SplunkConnection(config.getPluginConfig(), userCredentials.getUserName());
+this.splunkService = connection.connect();
+
+// Populate event arguments
+this.eventArgs = new Args();
+eventArgs.put("sourcetype", DEFAULT_SOURCETYPE);
+  }
+
+  @Override
+  public void init(Map<String, String> writerOptions) throws IOException {
+// No op
+  }
+
+  /**
+   * Update the schema in RecordWriter. Called at least once before starting 
writing the records. In this case,
+   * we add the index to Splunk here. Splunk's API is a little sparse and 
doesn't really do much in the way
+   * of error checking or providing feedback if the operation fails.
+   *
+   * @param batch {@link VectorAccessible} The incoming batch
+   */
+  @Override
+  public void updateSchema(VectorAccessible batch) {
+logger.debug("Updating schema for Splunk");
+
+//Get the collection of indexes
+IndexCollection indexes = splunkService.getIndexes();
+try {
+  String indexName = tableIdentifier.get(0);
+  indexes.create(indexName);
+  destinationIndex = splunkService.getIndexes().get(indexName);
+} catch (Exception e) {
+  // We have to catch a generic exception here, as Splunk's SDK does not 
really provide any kind of
+  // failure messaging.
+  throw UserException.systemError(e)
+.message("Error creating new index in Splunk plugin: " + 
e.getMessage())
+.build(logger);
+}
+  }
+
+
+  @Override
+  public void startRecord() {
+logger.debug("Starting record");
+splunkEvent = new JSONObject();
+  }
+
+  @Override
+  public void endRecord() throws IOException {
+logger.debug("Ending record");
+// Write the event to the Splunk index
+destinationIndex.submit(eventArgs, splunkEvent.toJSONString());

Review Comment:
   @cgivre can we leave a comment explaining this to readers then?
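   
   One possible version of such a comment, folding in the explanation given 
later in this thread (treating the rationale as the author's observation, not 
verified SDK behavior):

```java
@Override
public void endRecord() throws IOException {
  logger.debug("Ending record");
  // Submit the event to the Splunk index. We intentionally use
  // Index.submit(), which posts one event per call: the SDK's streaming
  // attach() approach could not be made to persist events when we tried
  // it, and the SDK gives no error feedback either way.
  destinationIndex.submit(eventArgs, splunkEvent.toJSONString());
  // Reset the event so the next record starts from a clean slate.
  splunkEvent.clear();
}
```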



##
contrib/storage-splunk/src/main/java/org/apache/drill/exec/store/splunk/SplunkBatchWriter.java:
##
@@ -0,0 +1,309 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.

[GitHub] [drill] cgivre commented on a diff in pull request #2722: DRILL-8371: Add Write/Insert Capability to Splunk Plugin

2022-12-19 Thread GitBox


cgivre commented on code in PR #2722:
URL: https://github.com/apache/drill/pull/2722#discussion_r1052374395


##
contrib/storage-splunk/src/main/java/org/apache/drill/exec/store/splunk/SplunkBatchWriter.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.splunk;
+
+
+import com.splunk.Args;
+import com.splunk.Index;
+import com.splunk.IndexCollection;
+import com.splunk.Service;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.exec.proto.UserBitShared.UserCredentials;
+import org.apache.drill.exec.record.VectorAccessible;
+import org.apache.drill.exec.store.AbstractRecordWriter;
+import org.apache.drill.exec.store.EventBasedRecordWriter.FieldConverter;
+import org.apache.drill.exec.vector.complex.reader.FieldReader;
+import org.json.simple.JSONObject;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+
+public class SplunkBatchWriter extends AbstractRecordWriter {
+
+  private static final Logger logger = 
LoggerFactory.getLogger(SplunkBatchWriter.class);
+  private static final String DEFAULT_SOURCETYPE = "drill";
+  private final UserCredentials userCredentials;
+  private final List<String> tableIdentifier;
+  private final SplunkWriter config;
+  private final Args eventArgs;
+  protected final Service splunkService;
+  private JSONObject splunkEvent;
+  protected Index destinationIndex;
+
+
+  public SplunkBatchWriter(UserCredentials userCredentials, List<String>
tableIdentifier, SplunkWriter config) {
+this.config = config;
+this.tableIdentifier = tableIdentifier;
+this.userCredentials = userCredentials;
+
+SplunkConnection connection = new 
SplunkConnection(config.getPluginConfig(), userCredentials.getUserName());
+this.splunkService = connection.connect();
+
+// Populate event arguments
+this.eventArgs = new Args();
+eventArgs.put("sourcetype", DEFAULT_SOURCETYPE);
+  }
+
+  @Override
+  public void init(Map<String, String> writerOptions) throws IOException {
+// No op
+  }
+
+  /**
+   * Update the schema in RecordWriter. Called at least once before starting 
writing the records. In this case,
+   * we add the index to Splunk here. Splunk's API is a little sparse and 
doesn't really do much in the way
+   * of error checking or providing feedback if the operation fails.
+   *
+   * @param batch {@link VectorAccessible} The incoming batch
+   */
+  @Override
+  public void updateSchema(VectorAccessible batch) {
+logger.debug("Updating schema for Splunk");
+
+//Get the collection of indexes
+IndexCollection indexes = splunkService.getIndexes();
+try {
+  String indexName = tableIdentifier.get(0);
+  indexes.create(indexName);
+  destinationIndex = splunkService.getIndexes().get(indexName);
+} catch (Exception e) {
+  // We have to catch a generic exception here, as Splunk's SDK does not 
really provide any kind of
+  // failure messaging.
+  throw UserException.systemError(e)
+.message("Error creating new index in Splunk plugin: " + 
e.getMessage())
+.build(logger);
+}
+  }
+
+
+  @Override
+  public void startRecord() {
+logger.debug("Starting record");
+splunkEvent = new JSONObject();
+  }
+
+  @Override
+  public void endRecord() throws IOException {
+logger.debug("Ending record");
+// Write the event to the Splunk index
+destinationIndex.submit(eventArgs, splunkEvent.toJSONString());
+// Clear out the splunk event.
+splunkEvent = new JSONObject();

Review Comment:
   Yes. This line clears out the event so that each row starts fresh. I 
discovered there is a `clear` method, so I called that rather than creating a 
new object every time.
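   
   For the record, json-simple's `JSONObject` extends `java.util.HashMap`, so 
`clear()` empties the same instance in place:

```java
// Reusing one event object avoids a per-record allocation:
splunkEvent.clear();            // empty the existing map in place
// rather than:
splunkEvent = new JSONObject(); // allocate a fresh object per record
```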






[GitHub] [drill] cgivre commented on a diff in pull request #2722: DRILL-8371: Add Write/Insert Capability to Splunk Plugin

2022-12-19 Thread GitBox


cgivre commented on code in PR #2722:
URL: https://github.com/apache/drill/pull/2722#discussion_r1052368638


##
contrib/storage-splunk/src/main/java/org/apache/drill/exec/store/splunk/SplunkBatchWriter.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.splunk;
+
+
+import com.splunk.Args;
+import com.splunk.Index;
+import com.splunk.IndexCollection;
+import com.splunk.Service;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.exec.proto.UserBitShared.UserCredentials;
+import org.apache.drill.exec.record.VectorAccessible;
+import org.apache.drill.exec.store.AbstractRecordWriter;
+import org.apache.drill.exec.store.EventBasedRecordWriter.FieldConverter;
+import org.apache.drill.exec.vector.complex.reader.FieldReader;
+import org.json.simple.JSONObject;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+
+public class SplunkBatchWriter extends AbstractRecordWriter {
+
+  private static final Logger logger = 
LoggerFactory.getLogger(SplunkBatchWriter.class);
+  private static final String DEFAULT_SOURCETYPE = "drill";
+  private final UserCredentials userCredentials;
+  private final List<String> tableIdentifier;
+  private final SplunkWriter config;
+  private final Args eventArgs;
+  protected final Service splunkService;
+  private JSONObject splunkEvent;
+  protected Index destinationIndex;
+
+
+  public SplunkBatchWriter(UserCredentials userCredentials, List<String>
tableIdentifier, SplunkWriter config) {
+this.config = config;
+this.tableIdentifier = tableIdentifier;
+this.userCredentials = userCredentials;
+
+SplunkConnection connection = new 
SplunkConnection(config.getPluginConfig(), userCredentials.getUserName());
+this.splunkService = connection.connect();
+
+// Populate event arguments
+this.eventArgs = new Args();
+eventArgs.put("sourcetype", DEFAULT_SOURCETYPE);
+  }
+
+  @Override
+  public void init(Map<String, String> writerOptions) throws IOException {
+// No op
+  }
+
+  /**
+   * Update the schema in RecordWriter. Called at least once before starting 
writing the records. In this case,

Review Comment:
   Sorry, I clarified the comment: this is called once before the records are 
written.
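   
   For readers following along, the write path described in this thread, based 
on the methods visible in the diff above (treat the exact ordering guarantees 
as an assumption):

```java
// Lifecycle of this RecordWriter, per the javadoc and this thread:
//   init(writerOptions)         - once, before any writing
//   updateSchema(batch)         - at least once, before the first record
//   startRecord() / endRecord() - once per row
//   abort() or cleanup()        - once, at the end
```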






[GitHub] [drill] cgivre commented on a diff in pull request #2722: DRILL-8371: Add Write/Insert Capability to Splunk Plugin

2022-12-19 Thread GitBox


cgivre commented on code in PR #2722:
URL: https://github.com/apache/drill/pull/2722#discussion_r1052367809


##
contrib/storage-splunk/src/main/java/org/apache/drill/exec/store/splunk/SplunkInsertWriter.java:
##
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.splunk;
+
+import com.fasterxml.jackson.annotation.JacksonInject;
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import org.apache.drill.exec.physical.base.PhysicalOperator;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+
+import java.util.List;
+
+public class SplunkInsertWriter extends SplunkWriter {
+  public static final String OPERATOR_TYPE = "SPLUNK_INSERT_WRITER";
+
+  private final SplunkStoragePlugin plugin;
+  private final List<String> tableIdentifier;
+
+  @JsonCreator
+  public SplunkInsertWriter(
+  @JsonProperty("child") PhysicalOperator child,
+  @JsonProperty("tableIdentifier") List tableIdentifier,
+  @JsonProperty("storage") SplunkPluginConfig storageConfig,
+  @JacksonInject StoragePluginRegistry engineRegistry) {
+super(child, tableIdentifier, engineRegistry.resolve(storageConfig, 
SplunkStoragePlugin.class));

Review Comment:
   Fixed.






[GitHub] [drill] cgivre commented on a diff in pull request #2722: DRILL-8371: Add Write/Insert Capability to Splunk Plugin

2022-12-19 Thread GitBox


cgivre commented on code in PR #2722:
URL: https://github.com/apache/drill/pull/2722#discussion_r1052363847


##
contrib/storage-splunk/src/main/java/org/apache/drill/exec/store/splunk/SplunkBatchWriter.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.splunk;
+
+
+import com.splunk.Args;
+import com.splunk.Index;
+import com.splunk.IndexCollection;
+import com.splunk.Service;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.exec.proto.UserBitShared.UserCredentials;
+import org.apache.drill.exec.record.VectorAccessible;
+import org.apache.drill.exec.store.AbstractRecordWriter;
+import org.apache.drill.exec.store.EventBasedRecordWriter.FieldConverter;
+import org.apache.drill.exec.vector.complex.reader.FieldReader;
+import org.json.simple.JSONObject;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+
+public class SplunkBatchWriter extends AbstractRecordWriter {
+
+  private static final Logger logger = 
LoggerFactory.getLogger(SplunkBatchWriter.class);
+  private static final String DEFAULT_SOURCETYPE = "drill";
+  private final UserCredentials userCredentials;
+  private final List<String> tableIdentifier;
+  private final SplunkWriter config;
+  private final Args eventArgs;
+  protected final Service splunkService;
+  private JSONObject splunkEvent;
+  protected Index destinationIndex;
+
+
+  public SplunkBatchWriter(UserCredentials userCredentials, List<String>
tableIdentifier, SplunkWriter config) {
+this.config = config;
+this.tableIdentifier = tableIdentifier;
+this.userCredentials = userCredentials;
+
+SplunkConnection connection = new 
SplunkConnection(config.getPluginConfig(), userCredentials.getUserName());
+this.splunkService = connection.connect();
+
+// Populate event arguments
+this.eventArgs = new Args();
+eventArgs.put("sourcetype", DEFAULT_SOURCETYPE);
+  }
+
+  @Override
+  public void init(Map<String, String> writerOptions) throws IOException {
+// No op
+  }
+
+  /**
+   * Update the schema in RecordWriter. Called at least once before starting 
writing the records. In this case,
+   * we add the index to Splunk here. Splunk's API is a little sparse and 
doesn't really do much in the way
+   * of error checking or providing feedback if the operation fails.
+   *
+   * @param batch {@link VectorAccessible} The incoming batch
+   */
+  @Override
+  public void updateSchema(VectorAccessible batch) {
+logger.debug("Updating schema for Splunk");
+
+//Get the collection of indexes
+IndexCollection indexes = splunkService.getIndexes();
+try {
+  String indexName = tableIdentifier.get(0);
+  indexes.create(indexName);
+  destinationIndex = splunkService.getIndexes().get(indexName);
+} catch (Exception e) {
+  // We have to catch a generic exception here, as Splunk's SDK does not 
really provide any kind of
+  // failure messaging.
+  throw UserException.systemError(e)
+.message("Error creating new index in Splunk plugin: " + 
e.getMessage())
+.build(logger);
+}
+  }
+
+
+  @Override
+  public void startRecord() {
+logger.debug("Starting record");
+splunkEvent = new JSONObject();
+  }
+
+  @Override
+  public void endRecord() throws IOException {
+logger.debug("Ending record");
+// Write the event to the Splunk index
+destinationIndex.submit(eventArgs, splunkEvent.toJSONString());

Review Comment:
   I think there may be a bug in the Splunk SDK.






[GitHub] [drill] cgivre commented on a diff in pull request #2722: DRILL-8371: Add Write/Insert Capability to Splunk Plugin

2022-12-19 Thread GitBox


cgivre commented on code in PR #2722:
URL: https://github.com/apache/drill/pull/2722#discussion_r1052356461


##
contrib/storage-splunk/src/test/java/org/apache/drill/exec/store/splunk/SplunkWriterTest.java:
##
@@ -0,0 +1,191 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.splunk;
+
+import org.apache.drill.categories.SlowTest;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.common.types.TypeProtos.MinorType;
+import org.apache.drill.exec.physical.rowSet.DirectRowSet;
+import org.apache.drill.exec.physical.rowSet.RowSet;
+import org.apache.drill.exec.physical.rowSet.RowSetBuilder;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.test.QueryBuilder.QuerySummary;
+import org.apache.drill.test.rowSet.RowSetUtilities;
+import org.junit.FixMethodOrder;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.runners.MethodSorters;
+
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+@FixMethodOrder(MethodSorters.JVM)
+@Category({SlowTest.class})
+public class SplunkWriterTest extends SplunkBaseTest {
+
+  @Test
+  public void testBasicCTAS() throws Exception {
+
+// Verify that there is no index called t1 in Splunk
+String sql = "SELECT * FROM INFORMATION_SCHEMA.`TABLES` WHERE TABLE_SCHEMA 
= 'splunk' AND TABLE_NAME LIKE 't1'";
+RowSet results = client.queryBuilder().sql(sql).rowSet();
+assertEquals(0, results.rowCount());
+results.clear();
+
+// Now create the table
+sql = "CREATE TABLE `splunk`.`t1` AS SELECT * FROM cp.`test_data.csvh`";
+QuerySummary summary = client.queryBuilder().sql(sql).run();
+assertTrue(summary.succeeded());
+
+// Verify that an index was created called t1 in Splunk
+sql = "SELECT * FROM INFORMATION_SCHEMA.`TABLES` WHERE TABLE_SCHEMA = 
'splunk' AND TABLE_NAME LIKE 't1'";
+results = client.queryBuilder().sql(sql).rowSet();
+assertEquals(1, results.rowCount());
+results.clear();
+
+// There seems to be some delay between the Drill query writing the data 
and the data being made
+// accessible.
+Thread.sleep(3);

Review Comment:
   Yeah, there seems to be a processing delay between inserting data and it 
actually becoming queryable. I don't think this is a Drill issue.
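   
   A hedged alternative for the test, assuming the delay is on the Splunk 
side: poll with a deadline instead of a fixed sleep. (Note also that 
`Thread.sleep(3)` pauses for 3 milliseconds, not 3 seconds.)

```java
// Hypothetical helper for SplunkWriterTest: retry the row-count query
// until rows appear or a deadline passes, instead of sleeping blindly.
private RowSet waitForRowCount(String sql, int expected, long timeoutMs)
    throws Exception {
  long deadline = System.currentTimeMillis() + timeoutMs;
  while (true) {
    RowSet results = client.queryBuilder().sql(sql).rowSet();
    if (results.rowCount() >= expected
        || System.currentTimeMillis() >= deadline) {
      return results; // caller asserts on, then clears, the row set
    }
    results.clear();
    Thread.sleep(250); // brief pause between polls
  }
}
```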






[GitHub] [drill] cgivre commented on a diff in pull request #2722: DRILL-8371: Add Write/Insert Capability to Splunk Plugin

2022-12-19 Thread GitBox


cgivre commented on code in PR #2722:
URL: https://github.com/apache/drill/pull/2722#discussion_r1052354949


##
contrib/storage-splunk/src/main/java/org/apache/drill/exec/store/splunk/SplunkBatchWriter.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.splunk;
+
+
+import com.splunk.Args;
+import com.splunk.Index;
+import com.splunk.IndexCollection;
+import com.splunk.Service;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.exec.proto.UserBitShared.UserCredentials;
+import org.apache.drill.exec.record.VectorAccessible;
+import org.apache.drill.exec.store.AbstractRecordWriter;
+import org.apache.drill.exec.store.EventBasedRecordWriter.FieldConverter;
+import org.apache.drill.exec.vector.complex.reader.FieldReader;
+import org.json.simple.JSONObject;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+
+public class SplunkBatchWriter extends AbstractRecordWriter {
+
+  private static final Logger logger = 
LoggerFactory.getLogger(SplunkBatchWriter.class);
+  private static final String DEFAULT_SOURCETYPE = "drill";
+  private final UserCredentials userCredentials;
+  private final List<String> tableIdentifier;
+  private final SplunkWriter config;
+  private final Args eventArgs;
+  protected final Service splunkService;
+  private JSONObject splunkEvent;
+  protected Index destinationIndex;
+
+
+  public SplunkBatchWriter(UserCredentials userCredentials, List<String>
tableIdentifier, SplunkWriter config) {
+this.config = config;
+this.tableIdentifier = tableIdentifier;
+this.userCredentials = userCredentials;
+
+SplunkConnection connection = new 
SplunkConnection(config.getPluginConfig(), userCredentials.getUserName());
+this.splunkService = connection.connect();
+
+// Populate event arguments
+this.eventArgs = new Args();
+eventArgs.put("sourcetype", DEFAULT_SOURCETYPE);
+  }
+
+  @Override
+  public void init(Map<String, String> writerOptions) throws IOException {
+// No op
+  }
+
+  /**
+   * Update the schema in RecordWriter. Called at least once before starting 
writing the records. In this case,
+   * we add the index to Splunk here. Splunk's API is a little sparse and 
doesn't really do much in the way
+   * of error checking or providing feedback if the operation fails.
+   *
+   * @param batch {@link VectorAccessible} The incoming batch
+   */
+  @Override
+  public void updateSchema(VectorAccessible batch) {
+logger.debug("Updating schema for Splunk");
+
+//Get the collection of indexes
+IndexCollection indexes = splunkService.getIndexes();
+try {
+  String indexName = tableIdentifier.get(0);
+  indexes.create(indexName);
+  destinationIndex = splunkService.getIndexes().get(indexName);
+} catch (Exception e) {
+  // We have to catch a generic exception here, as Splunk's SDK does not 
really provide any kind of
+  // failure messaging.
+  throw UserException.systemError(e)
+.message("Error creating new index in Splunk plugin: " + 
e.getMessage())
+.build(logger);
+}
+  }
+
+
+  @Override
+  public void startRecord() {
+logger.debug("Starting record");
+splunkEvent = new JSONObject();
+  }
+
+  @Override
+  public void endRecord() throws IOException {
+logger.debug("Ending record");
+// Write the event to the Splunk index
+destinationIndex.submit(eventArgs, splunkEvent.toJSONString());

Review Comment:
   @jnturton 
   @jnturton 
   I actually tried this first and couldn't get Splunk to write any data. I 
literally copied and pasted their sample code into Drill, to no avail.
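   
   For anyone retracing this, the streaming pattern in question looks roughly 
like the following (sketched from the Splunk Java SDK docs; treat the exact 
signatures as assumptions):

```java
// Streaming events into an index via the SDK's attach(); this is the
// approach that could not be made to persist events in testing.
Index index = splunkService.getIndexes().get("t1");
try (java.net.Socket socket = index.attach()) {
  java.io.Writer out = new java.io.OutputStreamWriter(
      socket.getOutputStream(), java.nio.charset.StandardCharsets.UTF_8);
  out.write("{\"event\": \"hello from drill\"}\r\n");
  out.flush();
}
```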






[GitHub] [drill] cgivre commented on a diff in pull request #2722: DRILL-8371: Add Write/Insert Capability to Splunk Plugin

2022-12-19 Thread GitBox


cgivre commented on code in PR #2722:
URL: https://github.com/apache/drill/pull/2722#discussion_r1052352513


##
contrib/storage-splunk/src/main/java/org/apache/drill/exec/store/splunk/SplunkBatchWriter.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.splunk;
+
+
+import com.splunk.Args;
+import com.splunk.Index;
+import com.splunk.IndexCollection;
+import com.splunk.Service;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.exec.proto.UserBitShared.UserCredentials;
+import org.apache.drill.exec.record.VectorAccessible;
+import org.apache.drill.exec.store.AbstractRecordWriter;
+import org.apache.drill.exec.store.EventBasedRecordWriter.FieldConverter;
+import org.apache.drill.exec.vector.complex.reader.FieldReader;
+import org.json.simple.JSONObject;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+
+public class SplunkBatchWriter extends AbstractRecordWriter {
+
+  private static final Logger logger = 
LoggerFactory.getLogger(SplunkBatchWriter.class);
+  private static final String DEFAULT_SOURCETYPE = "drill";
+  private final UserCredentials userCredentials;
+  private final List<String> tableIdentifier;
+  private final SplunkWriter config;
+  private final Args eventArgs;
+  protected final Service splunkService;
+  private JSONObject splunkEvent;
+  protected Index destinationIndex;
+
+
+  public SplunkBatchWriter(UserCredentials userCredentials, List<String>
tableIdentifier, SplunkWriter config) {
+this.config = config;
+this.tableIdentifier = tableIdentifier;
+this.userCredentials = userCredentials;
+
+SplunkConnection connection = new 
SplunkConnection(config.getPluginConfig(), userCredentials.getUserName());
+this.splunkService = connection.connect();
+
+// Populate event arguments
+this.eventArgs = new Args();
+eventArgs.put("sourcetype", DEFAULT_SOURCETYPE);
+  }
+
+  @Override
+  public void init(Map<String, String> writerOptions) throws IOException {
+// No op
+  }
+
+  /**
+   * Update the schema in RecordWriter. Called at least once before starting 
writing the records. In this case,
+   * we add the index to Splunk here. Splunk's API is a little sparse and 
doesn't really do much in the way
+   * of error checking or providing feedback if the operation fails.
+   *
+   * @param batch {@link VectorAccessible} The incoming batch
+   */
+  @Override
+  public void updateSchema(VectorAccessible batch) {
+logger.debug("Updating schema for Splunk");
+
+//Get the collection of indexes
+IndexCollection indexes = splunkService.getIndexes();
+try {
+  String indexName = tableIdentifier.get(0);
+  indexes.create(indexName);
+  destinationIndex = splunkService.getIndexes().get(indexName);
+} catch (Exception e) {
+  // We have to catch a generic exception here, as Splunk's SDK does not 
really provide any kind of
+  // failure messaging.
+  throw UserException.systemError(e)
+.message("Error creating new index in Splunk plugin: " + 
e.getMessage())
+.build(logger);
+}
+  }
+
+
+  @Override
+  public void startRecord() {
+logger.debug("Starting record");
+splunkEvent = new JSONObject();
+  }
+
+  @Override
+  public void endRecord() throws IOException {
+logger.debug("Ending record");
+// Write the event to the Splunk index
+destinationIndex.submit(eventArgs, splunkEvent.toJSONString());
+// Clear out the splunk event.
+splunkEvent = new JSONObject();
+  }
+
+  @Override
+  public void abort() {
+// No op
+  }
+
+  @Override
+  public void cleanup() {
+// No op
+  }
+
+
+  @Override
+  public FieldConverter getNewNullableIntConverter(int fieldId, String 
fieldName, FieldReader reader) {
+return new ScalarSplunkConverter(fieldId, fieldName, reader);
+  }
+
+  @Override
+  public FieldConverter getNewIntConverter(int fieldId, String fieldName, 
FieldReader reader) {
+return new ScalarSplunkConverter(fieldId, fieldName, reader);
+  }
+
+  @Override
+  public FieldConverter getNewNullableBigIntConverter(int fieldId, String 
fieldName, Fiel

[GitHub] [drill] jnturton commented on a diff in pull request #2722: DRILL-8371: Add Write/Insert Capability to Splunk Plugin

2022-12-19 Thread GitBox


jnturton commented on code in PR #2722:
URL: https://github.com/apache/drill/pull/2722#discussion_r1052337380


##
contrib/storage-splunk/src/main/java/org/apache/drill/exec/store/splunk/SplunkBatchWriter.java:
##
@@ -0,0 +1,308 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.splunk;
+
+
+import com.splunk.Args;
+import com.splunk.Index;
+import com.splunk.IndexCollection;
+import com.splunk.Service;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.exec.proto.UserBitShared.UserCredentials;
+import org.apache.drill.exec.record.VectorAccessible;
+import org.apache.drill.exec.store.AbstractRecordWriter;
+import org.apache.drill.exec.store.EventBasedRecordWriter.FieldConverter;
+import org.apache.drill.exec.vector.complex.reader.FieldReader;
+import org.json.simple.JSONObject;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+
+public class SplunkBatchWriter extends AbstractRecordWriter {
+
+  private static final Logger logger = 
LoggerFactory.getLogger(SplunkBatchWriter.class);
+  private static final String DEFAULT_SOURCETYPE = "drill";
+  private final UserCredentials userCredentials;
+  private final List<String> tableIdentifier;
+  private final SplunkWriter config;
+  private final Args eventArgs;
+  protected final Service splunkService;
+  private JSONObject splunkEvent;
+  protected Index destinationIndex;
+
+
+  public SplunkBatchWriter(UserCredentials userCredentials, List<String>
tableIdentifier, SplunkWriter config) {
+this.config = config;
+this.tableIdentifier = tableIdentifier;
+this.userCredentials = userCredentials;
+
+SplunkConnection connection = new 
SplunkConnection(config.getPluginConfig(), userCredentials.getUserName());
+this.splunkService = connection.connect();
+
+// Populate event arguments
+this.eventArgs = new Args();
+eventArgs.put("sourcetype", DEFAULT_SOURCETYPE);
+  }
+
+  @Override
+  public void init(Map<String, String> writerOptions) throws IOException {
+// No op
+  }
+
+  /**
+   * Update the schema in RecordWriter. Called at least once before starting 
writing the records. In this case,
+   * we add the index to Splunk here. Splunk's API is a little sparse and 
doesn't really do much in the way
+   * of error checking or providing feedback if the operation fails.
+   *
+   * @param batch {@link VectorAccessible} The incoming batch
+   */
+  @Override
+  public void updateSchema(VectorAccessible batch) {
+logger.debug("Updating schema for Splunk");
+
+//Get the collection of indexes
+IndexCollection indexes = splunkService.getIndexes();
+try {
+  String indexName = tableIdentifier.get(0);
+  indexes.create(indexName);
+  destinationIndex = splunkService.getIndexes().get(indexName);
+} catch (Exception e) {
+  // We have to catch a generic exception here, as Splunk's SDK does not 
really provide any kind of
+  // failure messaging.
+  throw UserException.systemError(e)
+.message("Error creating new index in Splunk plugin: " + 
e.getMessage())
+.build(logger);
+}
+  }
+
+
+  @Override
+  public void startRecord() {
+logger.debug("Starting record");
+splunkEvent = new JSONObject();
+  }
+
+  @Override
+  public void endRecord() throws IOException {
+logger.debug("Ending record");
+// Write the event to the Splunk index
+destinationIndex.submit(eventArgs, splunkEvent.toJSONString());
+// Clear out the splunk event.
+splunkEvent = new JSONObject();
+  }
+
+  @Override
+  public void abort() {
+// No op
+  }
+
+  @Override
+  public void cleanup() {
+// No op
+  }
+
+
+  @Override
+  public FieldConverter getNewNullableIntConverter(int fieldId, String 
fieldName, FieldReader reader) {
+return new ScalarSplunkConverter(fieldId, fieldName, reader);
+  }
+
+  @Override
+  public FieldConverter getNewIntConverter(int fieldId, String fieldName, 
FieldReader reader) {
+return new ScalarSplunkConverter(fieldId, fieldName, reader);
+  }
+
+  @Override
+  public FieldConverter getNewNullableBigIntConverter(int fieldId, String 
fieldName, Fi

[GitHub] [drill] jnturton commented on a diff in pull request #2722: DRILL-8371: Add Write/Insert Capability to Splunk Plugin

2022-12-19 Thread GitBox


jnturton commented on code in PR #2722:
URL: https://github.com/apache/drill/pull/2722#discussion_r1052328977


##
contrib/storage-splunk/src/main/java/org/apache/drill/exec/store/splunk/SplunkInsertWriter.java:
##
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.splunk;
+
+import com.fasterxml.jackson.annotation.JacksonInject;
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import org.apache.drill.exec.physical.base.PhysicalOperator;
+import org.apache.drill.exec.store.StoragePluginRegistry;
+
+import java.util.List;
+
+public class SplunkInsertWriter extends SplunkWriter {
+  public static final String OPERATOR_TYPE = "SPLUNK_INSERT_WRITER";
+
+  private final SplunkStoragePlugin plugin;
+  private final List<String> tableIdentifier;
+
+  @JsonCreator
+  public SplunkInsertWriter(
+  @JsonProperty("child") PhysicalOperator child,
+  @JsonProperty("tableIdentifier") List tableIdentifier,
+  @JsonProperty("storage") SplunkPluginConfig storageConfig,
+  @JacksonInject StoragePluginRegistry engineRegistry) {
+super(child, tableIdentifier, engineRegistry.resolve(storageConfig, 
SplunkStoragePlugin.class));

Review Comment:
   Did you mean to name this engineRegistry rather than, say, pluginRegistry?






RE: Towards Drill 2.0

2022-12-19 Thread Максим Римар
Hi Charles,

Could you please tell me which pieces you wanted wrapped up in the next
release?

One other thought: does Drill actually have breaking changes that justify
releasing it as Drill 2.0? The Calcite update is certainly significant, but
it is only one of the many changes we planned for release 2.0
(https://github.com/apache/drill/wiki/Drill-2.0-Proposal). Maybe it makes
sense to release the current Drill as 1.21.0 and hold release 2.0 until we
have made the structural changes
(https://github.com/apache/drill/wiki/Drill-2.0-Proposal#project-structure-packaging-and-distribution)?

Regards,
Maksym

On 2022/12/13 14:51:56 Charles Givre wrote:
> Hello all,
> We've been doing a lot of really great work on Drill in the last few
months, and I wanted to thank those who have been involved with releasing
the bug fix releases throughout the year.  With all that said, there's been
enormous progress on Drill 2.0 and while it isn't quite what we had
originally discussed, IMHO, it's time to get this in people's hands.
>
> The work that Vova did updating Drill to Calcite 1.32 and getting us off
of the fork has been a HUGE improvement in performance and stability.
Likewise the casting work that James T. did has made Drill significantly
more usable.  From my perspective, there are a few pieces that I'd like to
see wrapped up but I wanted to start the dialogue to see if we might be
able to get Drill 2.0 released either by the end of the year or early
January.  What do you think?
>
> Best,
> -- C


[GitHub] [drill] jnturton opened a new pull request, #2724: [BACKPORT-TO-STABLE] Bugfix Release 1.20.3 Phase 3

2022-12-19 Thread GitBox


jnturton opened a new pull request, #2724:
URL: https://github.com/apache/drill/pull/2724

   # [BACKPORT-TO-STABLE] Bugfix Release 1.20.3 Phase 3
   
   ## Description
   
   Merges the following backport-to-stable commits into the 1.20 branch.
   
   - 3861abe76 (HEAD -> 1.20, jnturton/1.20) [MINOR-UPDATE] Bump commons-net 
from 3.6 to 3.9.0 (#2720)
   - f2515c416 Update bug_report.md to include the user's Drill version number
   - f82468f6c DRILL-8368: Update Yauaa to 7.9.0 (#2717)
   - 713b7222c DRILL-8366: Late release of compressor memory in the Parquet 
writer (#2716)
   - 032fa12a5 DRILL-8365: HTTP Plugin Places Parameters in Wrong Place (#2715)
   - ea955d2dc DRILL-8363: Upgrade postgresql to 42.4.3 due to security issue 
(#2712)
   - 2d5f6bb15 DRILL-8362: bump excel-streaming-reader to v4.0.5 (#2711)
   - ea11455ec DRILL-8296: Possible type mismatch bug in SplunkBatchReader 
(#2700)
   - 50ad8fa85 DRILL-8238: Translation of IS NOT NULL($1) is not supported by 
MongoProject
   - 77fa95ed7 Access dirModifCheckMap using a Path not a String. (#2697)
   - bb1ec3835 DRILL-8348: Cannot delete disabled storage plugins (#2696)
   - c474e1cfb DRILL-8338: Upgrade jQuery to 3.6.1 and DataTables to 1.12.1 due 
to sonatype-2020-0988 (#2685)
   - dc4152497 DRILL-8337: Upgrade Hive libs to 3.1.3 due to sonatype-2019-0400 
(#2684)
   
   ## Documentation
   N/A
   
   ## Testing
   Unit test suite.
   





[GitHub] [drill] jnturton merged pull request #2692: [BACKPORT-TO-STABLE] Bugfix Release 1.20.3 Phase 2

2022-12-19 Thread GitBox


jnturton merged PR #2692:
URL: https://github.com/apache/drill/pull/2692

