[GitHub] vvysotskyi edited a comment on issue #1481: DRILL-6763: Codegen optimization of SQL functions with constant values
vvysotskyi edited a comment on issue #1481: DRILL-6763: Codegen optimization of SQL functions with constant values URL: https://github.com/apache/drill/pull/1481#issuecomment-424602274 @lushuifeng, thanks for the contribution! Could you please provide the execution time of the query with and without your change? Please note that Drill has its own framework for scalar replacement, and passing value holders to methods as arguments may break scalar replacement for those holders. Also, Drill can generate classes that are too large; to avoid class constant pool overflow, splitting such classes into smaller ones was implemented. Does this change incorporate that mechanism, or at least not break it? This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] vvysotskyi edited a comment on issue #1481: DRILL-6763: Codegen optimization of SQL functions with constant values
vvysotskyi edited a comment on issue #1481: DRILL-6763: Codegen optimization of SQL functions with constant values URL: https://github.com/apache/drill/pull/1481#issuecomment-424602274 @lushuifeng, thanks for the contribution! Besides the setters added to the generated code, are there any changes which optimize the generated code? Are these setters used anywhere? Please note that Drill has its own framework for scalar replacement, and passing value holders to methods as arguments may break scalar replacement for those holders. Also, Drill can generate classes that are too large; to avoid class constant pool overflow, splitting such classes into smaller ones was implemented. Does this change incorporate that mechanism, or at least not break it?
[GitHub] vvysotskyi commented on issue #1481: DRILL-6763: Codegen optimization of SQL functions with constant values
vvysotskyi commented on issue #1481: DRILL-6763: Codegen optimization of SQL functions with constant values URL: https://github.com/apache/drill/pull/1481#issuecomment-424602274 @lushuifeng, thanks for the contribution! Besides the setters added to the generated code, are there any changes which optimize the generated code? Are these setters used anywhere? Please note that Drill has its own framework for scalar replacement, and passing value holders to methods as arguments makes scalar replacement for those holders impossible. Also, Drill can generate classes that are too large; to avoid class constant pool overflow, splitting such classes into smaller ones was implemented. Does this change incorporate that mechanism, or at least not break it?
[GitHub] lushuifeng opened a new pull request #1481: DRILL-6763: Codegen optimization of SQL functions with constant values
lushuifeng opened a new pull request #1481: DRILL-6763: Codegen optimization of SQL functions with constant values URL: https://github.com/apache/drill/pull/1481 Details are in DRILL-6763. Here is a description of the change: 1. Add a system option `exec.optimize_function_compilation` to toggle the state of this functionality. 2. Codegen is changed by declaring a setter method in EvaluationVisitor#visitXXXconstants. 3. The member declared in step 2 is initialized when the instance of the class is created in XXBatch and others. 4. The attachment is the code for the same query mentioned in DRILL-6763, generated with exec.optimize_function_compilation set to true. @arina-ielchiieva @vdiravka Would you please take a look? What kind of unit tests should be added? [query.txt](https://github.com/apache/drill/files/2418035/query.txt)
[jira] [Created] (DRILL-6763) Codegen optimization of SQL functions with constant values
shuifeng lu created DRILL-6763: -- Summary: Codegen optimization of SQL functions with constant values Key: DRILL-6763 URL: https://issues.apache.org/jira/browse/DRILL-6763 Project: Apache Drill Issue Type: Improvement Components: Execution - Codegen Affects Versions: 1.14.0 Reporter: shuifeng lu Assignee: shuifeng lu Attachments: Query1.java, Query2.java, code_compare.png, compilation_time.png Codegen class compilation takes tens to hundreds of milliseconds, and the class cache is hit only when the generifiedCode of the code generator is exactly the same. This works fine when a UDF takes only columns or symbols, but it is not efficient when one or more UDF parameters are always distinct from the others. Take face recognition for example: the face images are almost always distinct from each other, owing to lighting, facial expressions, and details. It is important to reduce redundant class compilation, especially for low-latency queries. Eliminating the redundant classes also reduces the cache miss rate and metaspace GC. Here is a query to get the persons whose last name is Brunner and who were hired on or after 1st Jan 1990: SELECT full_name, hire_date FROM cp.`employee.json` where last_name = 'Brunner' and hire_date >= '1990-01-01 00:00:00.0'; Now get the persons whose last name is Bernard, hired on or after 1st Jan 1990: SELECT full_name, hire_date FROM cp.`employee.json` where last_name = 'Bernard' and hire_date >= '1990-01-01 00:00:00.0'; Figure !compilation_time.png! shows the compilation time of the code generated for the above query in FilterRecordBatch on my laptop. Figure !code_compare.png! shows that the only difference between the generated code in the attachments is the last_name value at line 156. It is straightforward to eliminate the redundant class compilation by making string12 a member of the class and setting its value when the instance is created. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
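The member-plus-setter idea described above can be sketched in plain Java. This is an illustrative sketch only; the class, field, and method names are hypothetical, not Drill's actual generated code:

```java
// Before: the literal is baked into the generated source, so each new
// constant value produces different source text, a different compiled
// class, and therefore a class-cache miss.
class FilterTemplateBefore {
  boolean doEval(String lastName) {
    return "Brunner".equals(lastName); // differs per query -> no cache reuse
  }
}

// After: the constant becomes a member with a setter. The generated source
// is now identical across queries, so the compiled class can be cached and
// only the field value changes per query instance.
class FilterTemplateAfter {
  private String constant0; // the per-query constant (e.g. the last_name value)

  void setConstant0(String value) {
    this.constant0 = value;
  }

  boolean doEval(String lastName) {
    return constant0.equals(lastName); // source text is query-independent
  }
}

public class CodegenCacheSketch {
  public static void main(String[] args) {
    FilterTemplateAfter filter = new FilterTemplateAfter();
    filter.setConstant0("Brunner");
    System.out.println(filter.doEval("Brunner")); // true
    filter.setConstant0("Bernard"); // same compiled class, new constant value
    System.out.println(filter.doEval("Brunner")); // false
  }
}
```

The cache key (the generated source) no longer varies with the constant, which is exactly why the two queries in the example above could share one compiled class.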
Multi Commit PRs (Re: Drill Hangout tomorrow 09/18)
More on splitting a PR into multiple commits - link [1] below shows how to take the last commit and break it in two (thanks, Hanumath). I just practiced this method on a PR (1480 - see [2]); this separates the actual logic of the change from the less relevant definitions, cleanups, etc. This does require careful manual work from the developer; for example, if two changes are adjacent (i.e., they become a single "hunk"), then you need to select the "e" option and edit that hunk. An open question: must we eventually squash those multiple commits, or would it work better to keep them separate when committed into master? Thanks, Boaz [1] https://stackoverflow.com/questions/1440050/how-to-split-last-commit-into-two-in-git/1440200 [2] https://github.com/apache/drill/pull/1480/commits On 9/24/18 1:49 PM, Jyothsna Reddy wrote: Notes from the Hangout session Attendees: Jyothsna, Boaz, Sorabh, Arina, Bohdan, Ihor, Hanumath, Pritesh, Vitali, Kunal, Robert Interesting thing shared by Boaz: all the minor fragments are assigned to Drillbits in round-robin fashion, not in sequential order. Boaz brought up the topic of improving the quality of code reviews. Topic of the Hangout: how do we improve the process of code review? It is very difficult for a reviewer to do a code review if he/she doesn't know the context, and hard to tell whether the PR contains too many code changes. Ideas to improve the code review process: - One idea is to break the work into smaller commits so that each commit is coherent, keeping the refactoring changes in a separate commit. But it's hard for developers to separate changes into multiple commits if they are too deeply tangled. Although this creates more work for developers, it makes the reviewer's job easier. It helps in finding bugs at earlier stages too. - It would help if someone could find ways in which Git allows splitting commits. Hanumath had tried this earlier.
- Mandating check style before code review; it shouldn't be the code reviewer's job to point those out. - Bring a reviewer early into the code review process rather than dumping a large code change in one go. - Push smaller commits into master if they make sense. - Do some live code review sessions where external contributors and the reviewer can discuss pull requests in a hangout. - Don't squash the commits unless needed. - Reviewers should give a full set of comments in one go, and there shouldn't be more than 4-5 rounds of code review. - Check style should cover spaces and the like; developers should try to use the IntelliJ IDE and pay attention to its warnings. - It's helpful for reviewers if developers provide screenshots of the UI for UI changes, and attach before and after code if changes are made to code generators. Please feel free to add to the above in case you have any ideas to improve the code review process. Thank you, Jyothsna On Mon, Sep 17, 2018 at 12:55 PM Jyothsna Reddy wrote: The Apache Drill Hangout will be held tomorrow at 10:00am PST; please let us know should you have a topic for tomorrow's hangout. We will also ask for topics at the beginning of the hangout. Hangout Link - https://urldefense.proofpoint.com/v2/url?u=https-3A__hangouts.google.com_hangouts_-5F_event_ci4rdiju8bv04a64efj5fedd0lc&d=DwIBaQ&c=cskdkSMqhcnjZxdQVpwTXg&r=7lXQnf0aC8VQ0iMXwVgNHw&m=9AQiac0o0ILqquFD8t1gtRKb9VgnUsNWPhyNGEa7x4Q&s=tNdr_LHgocB7NB3XiSCrp296AMXJgG7YHuOaKD95X74&e= Thank you, Jyothsna
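The split-the-last-commit workflow from link [1] can be sketched end to end in a throwaway repository. File names, paths, and commit messages below are illustrative only:

```shell
# Demonstrates splitting the last commit in two, per link [1].
set -e
rm -rf /tmp/split-demo
git init -q /tmp/split-demo
cd /tmp/split-demo
git config user.email demo@example.com
git config user.name demo
git commit -qm "initial" --allow-empty

# One "mixed" commit that tangles a cleanup with the actual logic change.
echo cleanup > refactor.txt
echo logic > feature.txt
git add .
git commit -qm "mixed commit"

# 1. Undo the last commit, keeping its changes in the working tree.
git reset HEAD~

# 2. Stage and commit the first logical piece (when two changes land in
#    the same hunk, use `git add -p` and its "e" option to edit the hunk)...
git add refactor.txt
git commit -qm "Preparations and cleanups"

# 3. ...then commit the remainder separately.
git add feature.txt
git commit -qm "Actual logic change"

# History now shows the mixed commit replaced by two coherent commits.
git log --oneline
```

For a commit deeper in history, the same steps apply after an interactive rebase (`git rebase -i`) marks that commit for editing.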
Re: [DISCUSS] Drill Hackathon/Drill Developers Day - 2018!
Hi there, I am interested in attending, but it may have to be virtual as I am based in DC. I’d love to hear about (and possibly present) use cases of how Drill is actually being used. I am also interested in further discussion about the possible Arrow integration. —C > On Sep 25, 2018, at 18:13, Gautam Parai wrote: > > Hi Drill Developers, > > It has already been one year since our last Drill Hackathon/Developers Day! > I would like to get the ball rolling for this year's Drill Developer Day ™. > The tentative date is Nov 14 (Wednesday). > > The goal is to get the community together for a half-day long technical > discussion on key topics in preparation for a Drill 2.0 release as well as > potential improvements in the upcoming releases. Depending on the interest > areas, we could form groups and have a volunteer lead each group. > > There is already a list of topics from last year but we would like to > solicit new topics for this year as well. > > Please reply back to this email if you are interested in attending the > event, and mention any topics you would like to hear more about! Based on > the interest, we would follow up with community members for volunteers, > topics and the rest. > > We hope to see you all at the next Developer Day! > > Keep Drilling, > Gautam
[GitHub] Ben-Zvi opened a new pull request #1480: DRILL-6755: Avoid building Hash Table for inner/left join when probe side is empty
Ben-Zvi opened a new pull request #1480: DRILL-6755: Avoid building Hash Table for inner/left join when probe side is empty URL: https://github.com/apache/drill/pull/1480 This PR is split into two commits to help the reviewers: (1) Preparations and cleanups: no change to the code's logic, only new definitions and minor cleaning. (2) The actual change: define the flag `skipHashTableBuild` and use it to skip the initial hash-table setup, and later to kill the build side upstream and skip the hash table build. (This also avoids useless work upstream, e.g., more scanning.) Also included is a test of an inner join with a NONE probe input and multiple input batches on the build side.
Re: storage plugin test case
You can try looking at existing unit tests that look up the classpath to see where the resource might need to be. You probably want it here: /src/test/resources/ For example, many of the java-exec module's test files are here: drill/exec/java-exec/src/test/resources https://github.com/apache/drill/tree/master/exec/java-exec/src/test/resources On 9/25/2018 2:00:15 PM, Jean-Claude Cote wrote: I have been writing a msgpack storage plugin for Drill. https://github.com/jcmcote/drill/tree/master/contrib/storage-msgpack I'm now trying to write test cases like testBuilder() .sqlQuery("select * from cp.`msgpack/testBasic.mp`") .ordered() .baselineColumns("a").baselineValues("1").baselineValues("1") .baselineColumns("b").baselineValues("2").baselineValues("2") .build().run(); However, when I run the test case it says it cannot find the msgpack/testBasic.mp file, even though it is in my src/test/resources folder. Should this work? Am I going at it the right way? Thanks jc
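As a quick diagnostic, plain Java (nothing Drill-specific) can show whether a resource is actually visible on the test classpath; the path below is the one from the question:

```java
import java.net.URL;

// If getResource returns null, the file was not copied onto the test
// classpath (e.g. the module build skipped test-resource processing, or the
// path/case does not match the layout under src/test/resources).
public class ResourceCheck {
  public static void main(String[] args) {
    // Path is relative to src/test/resources.
    String path = "msgpack/testBasic.mp";
    URL url = ResourceCheck.class.getClassLoader().getResource(path);
    System.out.println(url == null
        ? "NOT on classpath: " + path
        : "Found at: " + url);
  }
}
```

Running this from the same test module (so the same classpath Maven/Surefire builds for the tests) narrows the problem down to either the build copying the resource or the plugin's resolution of `cp.` paths.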
[DISCUSS] Drill Hackathon/Drill Developers Day - 2018!
Hi Drill Developers, It has already been one year since our last Drill Hackathon/Developers Day! I would like to get the ball rolling for this year's Drill Developer Day ™. The tentative date is Nov 14 (Wednesday). The goal is to get the community together for a half-day long technical discussion on key topics in preparation for a Drill 2.0 release as well as potential improvements in the upcoming releases. Depending on the interest areas, we could form groups and have a volunteer lead each group. There is already a list of topics from last year but we would like to solicit new topics for this year as well. Please reply back to this email if you are interested in attending the event, and mention any topics you would like to hear more about! Based on the interest, we would follow up with community members for volunteers, topics and the rest. We hope to see you all at the next Developer Day! Keep Drilling, Gautam
storage plugin test case
I have been writing a msgpack storage plugin for Drill. https://github.com/jcmcote/drill/tree/master/contrib/storage-msgpack I'm now trying to write test cases like testBuilder() .sqlQuery("select * from cp.`msgpack/testBasic.mp`") .ordered() .baselineColumns("a").baselineValues("1").baselineValues("1") .baselineColumns("b").baselineValues("2").baselineValues("2") .build().run(); However, when I run the test case it says it cannot find the msgpack/testBasic.mp file, even though it is in my src/test/resources folder. Should this work? Am I going at it the right way? Thanks jc
[GitHub] KazydubB commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution
KazydubB commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution URL: https://github.com/apache/drill/pull/1455#discussion_r219799183 ## File path: exec/java-exec/src/test/java/org/apache/drill/TestOperatorDump.java ## @@ -0,0 +1,180 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.drill; + +import ch.qos.logback.classic.spi.ILoggingEvent; +import ch.qos.logback.core.ConsoleAppender; +import org.apache.drill.common.exceptions.UserRemoteException; +import org.apache.drill.exec.exception.OutOfMemoryException; +import org.apache.drill.exec.physical.impl.ScanBatch; +import org.apache.drill.exec.physical.impl.join.HashJoinBatch; +import org.apache.drill.exec.physical.impl.limit.LimitRecordBatch; +import org.apache.drill.exec.testing.Controls; +import org.apache.drill.exec.testing.ControlsInjectionUtil; +import org.apache.drill.test.ClusterFixture; +import org.apache.drill.test.ClusterFixtureBuilder; +import org.apache.drill.test.ClusterTest; +import org.apache.drill.test.LogFixture; +import org.junit.After; +import org.junit.Before; +import org.junit.BeforeClass; +import org.junit.Test; + +import java.nio.file.Paths; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Iterator; +import java.util.List; +import java.util.Set; +import java.util.stream.Collectors; + +import static org.junit.Assert.assertTrue; + +public class TestOperatorDump extends ClusterTest { + + private static final String ENTRY_DUMP_COMPLETED = "Operator dump completed"; + private static final String ENTRY_DUMP_STARTED = "Operator dump started"; + + private LogFixture logFixture; + private TestAppender appender; + + @BeforeClass + public static void setupFiles() { +dirTestWatcher.copyResourceToRoot(Paths.get("multilevel")); + } + + @Before + public void setup() throws Exception { +ClusterFixtureBuilder builder = ClusterFixture.builder(dirTestWatcher); +appender = new TestAppender(); +logFixture = LogFixture.builder() +.toConsole(appender, LogFixture.DEFAULT_CONSOLE_FORMAT) +.build(); +startCluster(builder); + } + + @After + public void tearUp(){ Review comment: Done. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] KazydubB commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution
KazydubB commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution URL: https://github.com/apache/drill/pull/1455#discussion_r219799239 ## File path: exec/java-exec/src/test/java/org/apache/drill/test/LogFixture.java ## @@ -204,10 +210,14 @@ private void setupConsole(LogFixtureBuilder builder) { ple.setContext(lc); ple.start(); -appender = new ConsoleAppender<>( ); +if (builder.appender == null) { Review comment: Done. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] KazydubB commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution
KazydubB commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution URL: https://github.com/apache/drill/pull/1455#discussion_r220253584 ## File path: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/join/HashJoinBatch.java ## @@ -435,6 +440,8 @@ public IterOutcome innerNext() { // Build the hash table, using the build side record batches. final IterOutcome buildExecuteTermination = executeBuildPhase(); +injector.injectUnchecked(context.getExecutionControls(), "hashjoin-innerNext"); Review comment: Yes, of course it is better to re-use existing ones. Done. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] KazydubB commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution
KazydubB commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution URL: https://github.com/apache/drill/pull/1455#discussion_r220254605 ## File path: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/BaseRootExec.java ## @@ -125,17 +126,24 @@ public void receivingFragmentFinished(final FragmentHandle handle) { @Override public void dumpOperators() { -if (!operators.isEmpty()) { - logger.info("Operator dump started."); - for (CloseableRecordBatch batch : operators) { -batch.dump(); -if (batch.isFailed()) { - // No need to proceed further as this batch is the one that failed - return; +final int numberOfOperatorsToDump = 2; Review comment: Actually, batches are being dumped there. Renamed constants to represent this. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] KazydubB commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution
KazydubB commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution URL: https://github.com/apache/drill/pull/1455#discussion_r219799214 ## File path: exec/java-exec/src/test/java/org/apache/drill/TestOperatorDump.java ## @@ -0,0 +1,180 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.drill; + +import ch.qos.logback.classic.spi.ILoggingEvent; +import ch.qos.logback.core.ConsoleAppender; +import org.apache.drill.common.exceptions.UserRemoteException; +import org.apache.drill.exec.exception.OutOfMemoryException; +import org.apache.drill.exec.physical.impl.ScanBatch; +import org.apache.drill.exec.physical.impl.join.HashJoinBatch; +import org.apache.drill.exec.physical.impl.limit.LimitRecordBatch; +import org.apache.drill.exec.testing.Controls; +import org.apache.drill.exec.testing.ControlsInjectionUtil; +import org.apache.drill.test.ClusterFixture; +import org.apache.drill.test.ClusterFixtureBuilder; +import org.apache.drill.test.ClusterTest; +import org.apache.drill.test.LogFixture; +import org.junit.After; +import org.junit.Before; +import org.junit.BeforeClass; +import org.junit.Test; + +import java.nio.file.Paths; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Iterator; +import java.util.List; +import java.util.Set; +import java.util.stream.Collectors; + +import static org.junit.Assert.assertTrue; + +public class TestOperatorDump extends ClusterTest { + + private static final String ENTRY_DUMP_COMPLETED = "Operator dump completed"; + private static final String ENTRY_DUMP_STARTED = "Operator dump started"; + + private LogFixture logFixture; + private TestAppender appender; + + @BeforeClass + public static void setupFiles() { +dirTestWatcher.copyResourceToRoot(Paths.get("multilevel")); + } + + @Before + public void setup() throws Exception { +ClusterFixtureBuilder builder = ClusterFixture.builder(dirTestWatcher); +appender = new TestAppender(); +logFixture = LogFixture.builder() +.toConsole(appender, LogFixture.DEFAULT_CONSOLE_FORMAT) +.build(); +startCluster(builder); + } + + @After + public void tearUp(){ +logFixture.close(); + } + + @Test(expected = UserRemoteException.class) + public void testScanBatchChecked() throws Exception { +String exceptionDesc = "next-allocate"; +final String 
controls = Controls.newBuilder() +.addException(ScanBatch.class, exceptionDesc, OutOfMemoryException.class, 0, 1) +.build(); +ControlsInjectionUtil.setControls(client.client(), controls); +String query = "select * from dfs.`multilevel/parquet` limit 100"; +try { + client.queryBuilder().sql(query).run(); +} catch (UserRemoteException e) { + assertTrue(e.getMessage().contains(exceptionDesc)); + + String[] expectedEntries = new String[] {ENTRY_DUMP_STARTED, ENTRY_DUMP_COMPLETED}; + validateContainsEntries(expectedEntries, ScanBatch.class.getName()); + throw e; +} + } + + @Test(expected = UserRemoteException.class) + public void testLimitRecordBatchUnchecked() throws Exception { +String exceptionDesc = "limit-do-work"; +final String controls = Controls.newBuilder() +.addException(LimitRecordBatch.class, exceptionDesc, IndexOutOfBoundsException.class, 0, 1) +.build(); +ControlsInjectionUtil.setControls(client.client(), controls); +String query = "select * from dfs.`multilevel/parquet` limit 5"; +try { + client.queryBuilder().sql(query).run(); +} catch (UserRemoteException e) { + assertTrue(e.getMessage().contains(exceptionDesc)); + + String[] expectedEntries = new String[] {ENTRY_DUMP_STARTED, ENTRY_DUMP_COMPLETED}; + validateContainsEntries(expectedEntries, LimitRecordBatch.class.getName()); + throw e; +} + } + + @Test(expected = UserRemoteException.class) + public void testHashJoinBatchUnchecked() throws Exception { +String exceptionDesc = "hashjoin-innerNext"; +final String controls = Controls.newBuilder() +.addException(HashJoinBatch.class, exceptionDesc, RuntimeException.class, 0, 1) +.build(); +
[GitHub] KazydubB commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution
KazydubB commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution URL: https://github.com/apache/drill/pull/1455#discussion_r219863225 ## File path: exec/java-exec/src/main/java/org/apache/drill/exec/record/AbstractRecordBatch.java ## @@ -47,7 +47,8 @@ protected final boolean unionTypeEnabled; protected BatchState state; - // In case of Exception will be IterOutcome.STOP + // Used for state dump + protected boolean failed; Review comment: The `state` is converted to `lastOutcome` in `next()`, which represents the last outcome of a `next()` invocation, and `failed` represents whether an `Exception` was thrown during `next()` execution. But you are right that there were unnecessary settings of the `failed` and `lastOutcome` fields, so I moved management of those fields from the child classes into this class only.
[GitHub] KazydubB commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution
KazydubB commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution URL: https://github.com/apache/drill/pull/1455#discussion_r219799067 ## File path: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/filter/RuntimeFilterRecordBatch.java ## @@ -250,7 +254,8 @@ private void computeBitSet(int fieldId, BloomFilter bloomFilter, BitSet bitSet) @Override public void dump() { -logger.info("RuntimeFilterRecordBatch[selectionVector={}, toFilterFields={}, originalRecordCount={}, " + -"batchSchema={}]", sv2, toFilterFields, originalRecordCount, incoming.getSchema()); +logger.error("RuntimeFilterRecordBatch[container={}, selectionVector={}, toFilterFields={}, " ++ "originalRecordCount={}, batchSchema={}]", +container, sv2, toFilterFields, originalRecordCount, incoming.getSchema()); } } Review comment: Done. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] KazydubB commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution
KazydubB commented on a change in pull request #1455: DRILL-6724: Dump operator context to logs when error occurs during query execution URL: https://github.com/apache/drill/pull/1455#discussion_r220261501 ## File path: common/src/main/java/org/apache/drill/common/exceptions/UserExceptionContext.java ## @@ -141,7 +141,7 @@ String generateContextMessage(boolean includeErrorIdAndIdentity, boolean include } if (includeSeeLogsMessage) { - sb.append("Please, refer to logs for more information.\n"); + sb.append("\nPlease, refer to logs for more information.\n"); Review comment: Done. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Created] (DRILL-6762) Dynamic UDFs registered on one Drillbit are not visible on other Drillbits
Kunal Khatua created DRILL-6762: --- Summary: Dynamic UDFs registered on one Drillbit are not visible on other Drillbits Key: DRILL-6762 URL: https://issues.apache.org/jira/browse/DRILL-6762 Project: Apache Drill Issue Type: Bug Components: Functions - Drill Affects Versions: 1.14.0 Reporter: Kunal Khatua Fix For: 1.15.0 Originally Reported : https://stackoverflow.com/questions/52480160/dynamic-udf-in-apache-drill-cluster When using a 4-node Drill 1.14 cluster, UDF jars registered on one node are not usable on other nodes despite the {{/registry}} and ZK showing the UDFs as registered. This was previously working on 1.14.0 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] denysord88 commented on issue #1479: DRILL-6761: Updated table of contents on the REST-API page
denysord88 commented on issue #1479: DRILL-6761: Updated table of contents on the REST-API page URL: https://github.com/apache/drill/pull/1479#issuecomment-424288851 @bbevens could you please review this PR?
[GitHub] denysord88 opened a new pull request #1479: DRILL-6761: Updated table of contents on the REST-API page
denysord88 opened a new pull request #1479: DRILL-6761: Updated table of contents on the REST-API page URL: https://github.com/apache/drill/pull/1479
[GitHub] denysord88 closed pull request #1478: DRILL-6761: Updated table of contents on the REST-API page
denysord88 closed pull request #1478: DRILL-6761: Updated table of contents on the REST-API page
URL: https://github.com/apache/drill/pull/1478

This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance:

diff --git a/_docs/developer-information/rest-api/010-rest-api-introduction.md b/_docs/developer-information/rest-api/010-rest-api-introduction.md
index 7b0f90f2cdc..7db18ad4a65 100644
--- a/_docs/developer-information/rest-api/010-rest-api-introduction.md
+++ b/_docs/developer-information/rest-api/010-rest-api-introduction.md
@@ -18,17 +18,17 @@ Several examples in the document use the donuts.json file. To download this file
 This documentation presents HTTP methods in the same order as functions appear in the Web Console:
-[**Query**]({{site.baseurl}}/docs/rest-api/#query)
+[**Query**]({{site.baseurl}}/docs/rest-api-introduction/#query)
 Submit a query and return results.
-[**Profiles**](({{site.baseurl}}/docs/rest-api/#profiles))
+[**Profiles**]({{site.baseurl}}/docs/rest-api-introduction/#profiles)
 * Get the profiles of running and completed queries.
 * Get the profile of the query that has the given queryid.
 * Cancel the query that has the given queryid.
-[**Storage**]({{site.baseurl}}/docs/rest-api/#storage)
+[**Storage**]({{site.baseurl}}/docs/rest-api-introduction/#storage)
 * Get the list of storage plugin names and configurations.
 * Get the definition of the named storage plugin.
@@ -38,11 +38,11 @@ Submit a query and return results.
 * Get Drillbit information, such as ports numbers.
 * Get the current memory metrics.
-[**Threads**](({{site.baseurl}}/docs/rest-api/#threads))
+[**Threads**]({{site.baseurl}}/docs/rest-api-introduction/#threads)
 Get the status of threads.
-[**Options**]({{site.baseurl}}/docs/rest-api/#options)
+[**Options**]({{site.baseurl}}/docs/rest-api-introduction/#options)
 List information about system/session options.
[jira] [Created] (DRILL-6761) Documentation - broken table of contents on the REST-API page
Denys Ordynskiy created DRILL-6761:
Summary: Documentation - broken table of contents on the REST-API page
Key: DRILL-6761
URL: https://issues.apache.org/jira/browse/DRILL-6761
Project: Apache Drill
Issue Type: Bug
Affects Versions: 1.15.0
Reporter: Denys Ordynskiy
Assignee: Denys Ordynskiy

Page [https://drill.apache.org/docs/rest-api-introduction/] has broken links in its table of contents:
[https://drill.apache.org/docs/rest-api/#query]
[https://drill.apache.org/docs/rest-api-introduction/(/docs/rest-api/#profiles)]
[https://drill.apache.org/docs/rest-api/#storage]
[https://drill.apache.org/docs/rest-api-introduction/(/docs/rest-api/#threads)]
[https://drill.apache.org/docs/rest-api/#options]
[GitHub] denysord88 opened a new pull request #1478: Updated table of contents on the REST-API page
denysord88 opened a new pull request #1478: Updated table of contents on the REST-API page URL: https://github.com/apache/drill/pull/1478