[jira] [Updated] (DRILL-7417) Add user logged in/out event in info level logs
[ https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sorabh Hamirwasia updated DRILL-7417:
-------------------------------------
    Reviewer: Arina Ielchiieva

> Add user logged in/out event in info level logs
> -----------------------------------------------
>
>             Key: DRILL-7417
>             URL: https://issues.apache.org/jira/browse/DRILL-7417
>         Project: Apache Drill
>      Issue Type: Improvement
>      Components: Security
>        Reporter: Sorabh Hamirwasia
>        Assignee: Sorabh Hamirwasia
>        Priority: Major
>
> Sample output logs:
> WebUser:
> Note: for WebUser log in/out events the ports may differ between events, since Web-based connections are stateless.
> {code:java}
> 2019-10-22 13:47:24,888 [qtp480678786-70] INFO  o.a.d.e.s.r.a.DrillRestLoginService - WebUser alice logged in from 172.30.8.49:60558
> 2019-10-22 13:47:30,508 [qtp480678786-64] INFO  o.a.d.e.s.rest.LogInLogOutResources - WebUser alice logged out from 172.30.8.49:60567
> {code}
> JDBC/ODBC:
> {code:java}
> 2019-10-22 13:48:16,977 [UserServer-1] INFO  o.a.drill.exec.rpc.user.UserServer - User alice logged in from /10.10.100.163:59846
> 2019-10-22 13:48:19,858 [UserServer-1] INFO  o.a.drill.exec.rpc.user.UserServer - User alice logged out from /10.10.100.163:59846
> {code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (DRILL-7417) Add user logged in/out event in info level logs
[ https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sorabh Hamirwasia updated DRILL-7417:
-------------------------------------
    Description:
Sample output logs:
WebUser:
Note: for WebUser log in/out events the ports may differ between events, since Web-based connections are stateless.
{code:java}
2019-10-22 13:47:24,888 [qtp480678786-70] INFO  o.a.d.e.s.r.a.DrillRestLoginService - WebUser alice logged in from 172.30.8.49:60558
2019-10-22 13:47:30,508 [qtp480678786-64] INFO  o.a.d.e.s.rest.LogInLogOutResources - WebUser alice logged out from 172.30.8.49:60567
{code}
JDBC/ODBC:
{code:java}
2019-10-22 13:48:16,977 [UserServer-1] INFO  o.a.drill.exec.rpc.user.UserServer - User alice logged in from /10.10.100.163:59846
2019-10-22 13:48:19,858 [UserServer-1] INFO  o.a.drill.exec.rpc.user.UserServer - User alice logged out from /10.10.100.163:59846
{code}

> Add user logged in/out event in info level logs
> -----------------------------------------------
>
>             Key: DRILL-7417
>             URL: https://issues.apache.org/jira/browse/DRILL-7417
>         Project: Apache Drill
>      Issue Type: Improvement
>      Components: Security
>        Reporter: Sorabh Hamirwasia
>        Assignee: Sorabh Hamirwasia
>        Priority: Major
>
> Sample output logs:
> WebUser:
> Note: for WebUser log in/out events the ports may differ between events, since Web-based connections are stateless.
> {code:java}
> 2019-10-22 13:47:24,888 [qtp480678786-70] INFO  o.a.d.e.s.r.a.DrillRestLoginService - WebUser alice logged in from 172.30.8.49:60558
> 2019-10-22 13:47:30,508 [qtp480678786-64] INFO  o.a.d.e.s.rest.LogInLogOutResources - WebUser alice logged out from 172.30.8.49:60567
> {code}
> JDBC/ODBC:
> {code:java}
> 2019-10-22 13:48:16,977 [UserServer-1] INFO  o.a.drill.exec.rpc.user.UserServer - User alice logged in from /10.10.100.163:59846
> 2019-10-22 13:48:19,858 [UserServer-1] INFO  o.a.drill.exec.rpc.user.UserServer - User alice logged out from /10.10.100.163:59846
> {code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Commented] (DRILL-7177) Format Plugin for Excel Files
[ https://issues.apache.org/jira/browse/DRILL-7177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959205#comment-16959205 ]

ASF GitHub Bot commented on DRILL-7177:
---------------------------------------

cgivre commented on pull request #1749: DRILL-7177: Format Plugin for Excel Files
URL: https://github.com/apache/drill/pull/1749#discussion_r338761283

## File path: contrib/format-excel/src/main/java/org/apache/drill/exec/store/excel/ExcelBatchReader.java

(Generic type parameters below were stripped by HTML extraction and have been restored from the imports and field usage.)

@@ -0,0 +1,444 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.excel;
+
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.common.types.TypeProtos.MinorType;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.ColumnMetadata;
+import org.apache.drill.exec.record.metadata.MetadataUtils;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.drill.exec.vector.accessor.TupleWriter;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.mapred.FileSplit;
+import org.apache.poi.ss.usermodel.Cell;
+import org.apache.poi.ss.usermodel.CellValue;
+import org.apache.poi.ss.usermodel.DateUtil;
+import org.apache.poi.ss.usermodel.FormulaEvaluator;
+import org.apache.poi.ss.usermodel.Row;
+import org.apache.poi.xssf.usermodel.XSSFSheet;
+import org.apache.poi.xssf.usermodel.XSSFWorkbook;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.joda.time.Instant;
+import java.util.Iterator;
+import java.io.IOException;
+import java.util.ArrayList;
+
+public class ExcelBatchReader implements ManagedReader<FileSchemaNegotiator> {
+  private ExcelReaderConfig readerConfig;
+
+  private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(ExcelBatchReader.class);
+
+  private static final String SAFE_WILDCARD = "_$";
+
+  private static final String SAFE_SEPARATOR = "_";
+
+  private static final String PARSER_WILDCARD = ".*";
+
+  private static final String HEADER_NEW_LINE_REPLACEMENT = "__";
+
+  private static final String MISSING_FIELD_NAME_HEADER = "field_";
+
+  private XSSFSheet sheet;
+
+  private XSSFWorkbook workbook;
+
+  private FSDataInputStream fsStream;
+
+  private FormulaEvaluator evaluator;
+
+  private ArrayList<String> excelFieldNames;
+
+  private ArrayList<ScalarWriter> columnWriters;
+
+  private Iterator<Row> rowIterator;
+
+  private RowSetLoader rowWriter;
+
+  private int totalColumnCount;
+
+  private int lineCount;
+
+  private boolean firstLine;
+
+  private FileSplit split;
+
+  private ResultSetLoader loader;
+
+  private int recordCount;
+
+  public static class ExcelReaderConfig {
+    protected final ExcelFormatPlugin plugin;
+
+    protected final int headerRow;
+
+    protected final int lastRow;
+
+    protected final int firstColumn;
+
+    protected final int lastColumn;
+
+    protected final boolean readAllFieldsAsVarChar;
+
+    protected String sheetName;
+
+    public ExcelReaderConfig(ExcelFormatPlugin plugin) {
+      this.plugin = plugin;
+      headerRow = plugin.getConfig().getHeaderRow();
+      lastRow = plugin.getConfig().getLastRow();
+      firstColumn = plugin.getConfig().getFirstColumn();
+      lastColumn = plugin.getConfig().getLastColumn();
+      readAllFieldsAsVarChar = plugin.getConfig().getReadAllFieldsAsVarChar();
+      sheetName = plugin.getConfig().getSheetName();
+    }
+  }
+
+  public ExcelBatchReader(ExcelReaderConfig readerConfig) {
+    this.readerConfig = readerConfig;
+    firstLine = true;
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    verifyConfigOptions();
+    split = negotiator.split();
+    loader = negotiator.build();
+    rowWriter = loader.writer
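The constants in the diff above (`HEADER_NEW_LINE_REPLACEMENT = "__"`, `MISSING_FIELD_NAME_HEADER = "field_"`) hint at how the reader normalizes Excel column headers into usable Drill column names. The sketch below is a guess at that intent, not the plugin's actual code; the class and method names are invented for illustration.

```java
// Hypothetical sketch of the header normalization implied by the constants in
// ExcelBatchReader; the plugin's real rules may differ.
public class HeaderSanitizer {
  private static final String HEADER_NEW_LINE_REPLACEMENT = "__";
  private static final String MISSING_FIELD_NAME_HEADER = "field_";

  // Blank headers get a generated name; embedded newlines are replaced so the
  // resulting name stays usable in a SQL projection list.
  public static String sanitize(String raw, int columnIndex) {
    if (raw == null || raw.trim().isEmpty()) {
      return MISSING_FIELD_NAME_HEADER + columnIndex;
    }
    return raw.trim().replace("\n", HEADER_NEW_LINE_REPLACEMENT);
  }

  public static void main(String[] args) {
    System.out.println(sanitize("unit\ncost", 0)); // unit__cost
    System.out.println(sanitize("   ", 3));        // field_3
  }
}
```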
[jira] [Commented] (DRILL-7177) Format Plugin for Excel Files
[ https://issues.apache.org/jira/browse/DRILL-7177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959143#comment-16959143 ]

ASF GitHub Bot commented on DRILL-7177:
---------------------------------------

cgivre commented on pull request #1749: DRILL-7177: Format Plugin for Excel Files
URL: https://github.com/apache/drill/pull/1749#discussion_r338733373

## File path: contrib/format-excel/src/main/java/org/apache/drill/exec/store/excel/ExcelBatchReader.java

(Generic type parameters below were stripped by HTML extraction and have been restored from the imports and field usage.)

@@ -0,0 +1,398 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.excel;
+
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.ColumnMetadata;
+import org.apache.drill.exec.record.metadata.MetadataUtils;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.drill.exec.vector.accessor.TupleWriter;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapred.FileSplit;
+
+import org.apache.poi.ss.usermodel.Cell;
+import org.apache.poi.ss.usermodel.CellValue;
+import org.apache.poi.ss.usermodel.DateUtil;
+import org.apache.poi.ss.usermodel.FormulaEvaluator;
+import org.apache.poi.ss.usermodel.Row;
+import org.apache.poi.xssf.usermodel.XSSFSheet;
+import org.apache.poi.xssf.usermodel.XSSFWorkbook;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.joda.time.Instant;
+
+import java.util.Iterator;
+import java.io.IOException;
+import java.util.ArrayList;
+
+public class ExcelBatchReader implements ManagedReader<FileSchemaNegotiator> {
+  private ExcelReaderConfig readerConfig;
+
+  private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(ExcelBatchReader.class);
+
+  private XSSFWorkbook workbook;
+
+  private FSDataInputStream fsStream;
+
+  private static final String SAFE_WILDCARD = "_$";
+
+  private static final String SAFE_SEPARATOR = "_";
+
+  private static final String PARSER_WILDCARD = ".*";
+
+  private static final String MISSING_FIELD_NAME_HEADER = "field_";
+
+  private static final String SAFE_NEWLINE_REPLACEMENT = " ";
+
+  private XSSFSheet sheet;
+
+  private FormulaEvaluator evaluator;
+
+  private ArrayList<String> excelFieldNames;
+
+  private Iterator<Row> rowIterator;
+
+  private int totalColumnCount;
+
+  private int lineCount;
+
+  private FileSplit split;
+
+  private ResultSetLoader loader;
+
+  private int recordCount;
+
+  public static class ExcelReaderConfig {
+    protected final ExcelFormatPlugin plugin;
+
+    protected int headerRow;
+
+    protected int lastRow;
+
+    protected int firstColumn;
+
+    protected int lastColumn;
+
+    protected boolean readAllFieldsAsVarChar;
+
+    protected boolean evaluateFormulae;
+
+    protected TupleMetadata schema;
+
+    protected String sheetName;
+
+    public ExcelReaderConfig(ExcelFormatPlugin plugin, int headerRow, int lastRow, int firstColumn, int lastColumn,
+        boolean readAllFieldsAsVarChar, boolean evaluateFormulae,
+        //TupleMetadata schema,
+        String sheetName) {
+      this.plugin = plugin;
+      this.headerRow = headerRow;
+      this.lastRow = lastRow;
+      this.firstColumn = firstColumn;
+      this.lastColumn = lastColumn;
+      this.readAllFieldsAsVarChar = readAllFieldsAsVarChar;
+      this.evaluateFormulae = evaluateFormulae;
+      this.sheetName = sheetName;
+    }
+  }
+
+  public ExcelBatchReader(ExcelReaderConfig readerConfig) {
+    this.readerConfig = readerConfig;
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    verifyConfigOptions();
+    split = negotiator.split();
+    op
[jira] [Commented] (DRILL-7417) Add user logged in/out event in info level logs
[ https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959133#comment-16959133 ]

ASF GitHub Bot commented on DRILL-7417:
---------------------------------------

sohami commented on issue #1880: DRILL-7417: Add user logged in/out event in info level logs
URL: https://github.com/apache/drill/pull/1880#issuecomment-546042281

@arina-ielchiieva - Made the changes to add import for logger.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

> Add user logged in/out event in info level logs
> -----------------------------------------------
>
>             Key: DRILL-7417
>             URL: https://issues.apache.org/jira/browse/DRILL-7417
>         Project: Apache Drill
>      Issue Type: Improvement
>      Components: Security
>        Reporter: Sorabh Hamirwasia
>        Assignee: Sorabh Hamirwasia
>        Priority: Major
>

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Commented] (DRILL-7417) Add user logged in/out event in info level logs
[ https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959130#comment-16959130 ]

ASF GitHub Bot commented on DRILL-7417:
---------------------------------------

sohami commented on pull request #1880: DRILL-7417: Add user logged in/out event in info level logs
URL: https://github.com/apache/drill/pull/1880#discussion_r338723489

## File path: exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/LogInLogOutResources.java

@@ -120,6 +123,12 @@ public Viewable getLoginPageAfterValidationError() {
 public void logout(@Context HttpServletRequest req, @Context HttpServletResponse resp) throws Exception {
   final HttpSession session = req.getSession();
   if (session != null) {
+    final Object authCreds = session.getAttribute(SessionAuthentication.__J_AUTHENTICATED);
+    if (authCreds != null) {
+      final SessionAuthentication sessionAuth = (SessionAuthentication) authCreds;
+      logger.info("WebUser {} logged out from {}:{}", sessionAuth.getUserIdentity().getUserPrincipal().getName(), req

Review comment:
I would prefer it without a space, since that's the usual convention for an `ip:port` string.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

> Add user logged in/out event in info level logs
> -----------------------------------------------
>
>             Key: DRILL-7417
>             URL: https://issues.apache.org/jira/browse/DRILL-7417
>         Project: Apache Drill
>      Issue Type: Improvement
>      Components: Security
>        Reporter: Sorabh Hamirwasia
>        Assignee: Sorabh Hamirwasia
>        Priority: Major
>

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
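The `ip:port` convention discussed in this review can be illustrated with plain string formatting; the real code uses an SLF4J logger with `{}` placeholders, which produces the same text. The helper class below is illustrative only and is not Drill's implementation.

```java
// Illustrative only: reproduces the layout of the DRILL-7417 log lines
// ("WebUser <name> logged (in|out) from <ip>:<port>"). The actual code logs
// via SLF4J's parameterized {} placeholders rather than String.format.
public class WebUserLogFormat {

  // ip and port are joined with ':' and no space, the convention agreed on
  // in the review thread above.
  public static String message(String user, boolean loggedIn, String ip, int port) {
    return String.format("WebUser %s logged %s from %s:%d",
        user, loggedIn ? "in" : "out", ip, port);
  }

  public static void main(String[] args) {
    System.out.println(message("alice", true, "172.30.8.49", 60558));
    System.out.println(message("alice", false, "172.30.8.49", 60567));
  }
}
```

Run against the sample values from the issue description, this reproduces the exact message text shown in the DRILL-7417 sample logs.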
[jira] [Updated] (DRILL-7421) FILTER clause for aggregate functions is ignored
[ https://issues.apache.org/jira/browse/DRILL-7421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vova Vysotskyi updated DRILL-7421:
----------------------------------
    Description:
Drill ignores the FILTER clause for aggregate functions. The following query returns the wrong result:
{code:sql}
select count(*), count(employee_id) filter(where employee_id < 5) from cp.`employee.json`;
{code}
{noformat}
+--------+--------+
| EXPR$0 | EXPR$1 |
+--------+--------+
| 1155   | 1155   |
+--------+--------+
{noformat}
Calcite already supports this feature (CALCITE-704), and as mentioned in that Jira ticket, the syntax is allowed by the SQL standard.
As a short-term solution, we should throw an exception for such queries stating that this functionality is not supported.
As mentioned in Calcite's Jira, some queries may be rewritten using a CASE expression:
{code:sql}
select count(*), count(case when employee_id < 5 then employee_id else null end) from cp.`employee.json`;
{code}
It is possible to add functionality to Drill to rewrite filtered aggregate calls in this way, but some aggregate functions still would not be supported, for example {{count(\*)}}.

  was:
Drill ignores the FILTER clause for aggregate functions. The following query returns the wrong result:
{code:sql}
select count(*), count(employee_id) filter(where employee_id < 5) from cp.`employee.json`;
{code}
{noformat}
+--------+--------+
| EXPR$0 | EXPR$1 |
+--------+--------+
| 1155   | 1155   |
+--------+--------+
{noformat}
Calcite already supports this feature (CALCITE-704), and as mentioned in that Jira ticket, the syntax is allowed by the SQL standard.
As a short-term solution, we should throw an exception for such queries stating that this functionality is not supported.
As mentioned in Calcite's Jira, some queries may be rewritten using a CASE expression:
{code:sql}
select count(*), count(case when employee_id < 5 then employee_id else null end) from cp.`employee.json`;
{code}
It is possible to add functionality to Drill to rewrite filtered aggregate calls in this way, but some aggregate functions still would not be supported, for example {{count(*)}}.

> FILTER clause for aggregate functions is ignored
> ------------------------------------------------
>
>             Key: DRILL-7421
>             URL: https://issues.apache.org/jira/browse/DRILL-7421
>         Project: Apache Drill
>      Issue Type: Bug
>    Affects Versions: 1.16.0
>        Reporter: Vova Vysotskyi
>        Priority: Major
>         Fix For: Future
>
> Drill ignores the FILTER clause for aggregate functions.
> The following query returns the wrong result:
> {code:sql}
> select count(*), count(employee_id) filter(where employee_id < 5) from cp.`employee.json`;
> {code}
> {noformat}
> +--------+--------+
> | EXPR$0 | EXPR$1 |
> +--------+--------+
> | 1155   | 1155   |
> +--------+--------+
> {noformat}
> Calcite already supports this feature (CALCITE-704), and as mentioned in that Jira ticket, the syntax is allowed by the SQL standard.
> As a short-term solution, we should throw an exception for such queries stating that this functionality is not supported.
> As mentioned in Calcite's Jira, some queries may be rewritten using a CASE expression:
> {code:sql}
> select count(*), count(case when employee_id < 5 then employee_id else null end) from cp.`employee.json`;
> {code}
> It is possible to add functionality to Drill to rewrite filtered aggregate calls in this way, but some aggregate functions still would not be supported, for example {{count(\*)}}.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (DRILL-7421) FILTER clause for aggregate functions is ignored
Vova Vysotskyi created DRILL-7421:
----------------------------------

             Summary: FILTER clause for aggregate functions is ignored
                 Key: DRILL-7421
                 URL: https://issues.apache.org/jira/browse/DRILL-7421
             Project: Apache Drill
          Issue Type: Bug
    Affects Versions: 1.16.0
            Reporter: Vova Vysotskyi
             Fix For: Future

Drill ignores the FILTER clause for aggregate functions. The following query returns the wrong result:
{code:sql}
select count(*), count(employee_id) filter(where employee_id < 5) from cp.`employee.json`;
{code}
{noformat}
+--------+--------+
| EXPR$0 | EXPR$1 |
+--------+--------+
| 1155   | 1155   |
+--------+--------+
{noformat}
Calcite already supports this feature (CALCITE-704), and as mentioned in that Jira ticket, the syntax is allowed by the SQL standard.
As a short-term solution, we should throw an exception for such queries stating that this functionality is not supported.
As mentioned in Calcite's Jira, some queries may be rewritten using a CASE expression:
{code:sql}
select count(*), count(case when employee_id < 5 then employee_id else null end) from cp.`employee.json`;
{code}
It is possible to add functionality to Drill to rewrite filtered aggregate calls in this way, but some aggregate functions still would not be supported, for example {{count(*)}}.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
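The equivalence the ticket relies on — `count(x) filter (where p)` versus `count(case when p then x else null end)` — holds because SQL's COUNT over an expression skips nulls. It can be checked outside SQL with a small Java sketch; the class name and sample data are invented for illustration.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

// Demonstrates that a filtered count equals a count over a CASE-style
// expression that maps non-matching rows to null (COUNT skips nulls).
public class FilterVsCase {

  // Analogue of: count(employee_id) filter (where employee_id < 5)
  static long filteredCount(List<Integer> ids) {
    return ids.stream().filter(id -> id < 5).count();
  }

  // Analogue of: count(case when employee_id < 5 then employee_id else null end)
  static long caseRewriteCount(List<Integer> ids) {
    return ids.stream()
        .map(id -> id < 5 ? id : null) // non-matches become null
        .filter(Objects::nonNull)      // COUNT(expr) ignores nulls
        .count();
  }

  public static void main(String[] args) {
    List<Integer> ids = Arrays.asList(1, 2, 3, 4, 5, 6, 7);
    System.out.println(filteredCount(ids) + " == " + caseRewriteCount(ids)); // 4 == 4
  }
}
```

As the ticket notes, this rewrite cannot cover `count(*)` with a filter, since `count(*)` counts rows regardless of nulls.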
[jira] [Updated] (DRILL-7290) “Failed to construct kafka consumer” using Apache Drill
[ https://issues.apache.org/jira/browse/DRILL-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-7290: Priority: Major (was: Blocker) > “Failed to construct kafka consumer” using Apache Drill > --- > > Key: DRILL-7290 > URL: https://issues.apache.org/jira/browse/DRILL-7290 > Project: Apache Drill > Issue Type: Bug > Components: Functions - Drill >Affects Versions: 1.14.0, 1.16.0 >Reporter: Aravind Voruganti >Priority: Major > > {noformat} > {noformat} > I am using the Apache Drill (1.14) JDBC driver in my application which > consumes the data from the Kafka. The application works just fine for some > time and after few iterations it fails to execute due to the following *Too > many files open* issue. I made sure there are no file handle leaks in my code > but still nor sure why this issue is happening? > > It looks like the issue is happening from with-in the Apache drill libraries > when constructing the Kafka consumer. Can any one please guide me help this > problem fixed? > The problem perishes when I restart my Apache drillbit but very soon it > happens again. I did check the file descriptor count on my unix machine using > *{{ulimit -a | wc -l}} & {{lsof -a -p | wc -l}}* before and after the > drill process restart and it seems the drill process is considerably taking a > lot of file descriptors. I tried increasing the file descriptor count on the > system but still no luck. > I have followed the Apache Drill storage plugin documentation in configuring > the Kafka plugin into Apache Drill at > [https://drill.apache.org/docs/kafka-storage-plugin/] > Any help on this issue is highly appreciated. Thanks. 
> JDBC URL: *{{jdbc:drill:drillbit=localhost:31010;schema=kafka}}* > NOTE: I am pushing down the filters in my query {{SELECT * FROM myKafkaTopic > WHERE kafkaMsgTimestamp > 1560210931626}} > > 2019-06-11 08:43:13,639 [230033ed-d410-ae7c-90cb-ac01d3b404cc:foreman] INFO > o.a.d.e.store.kafka.KafkaGroupScan - User Error Occurred: Failed to fetch > start/end offsets of the topic myKafkaTopic (Failed to construct kafka > consumer) > org.apache.drill.common.exceptions.UserException: DATA_READ ERROR: Failed to > fetch start/end offsets of the topic myKafkaTopic > Failed to construct kafka consumer > [Error Id: 73f896a7-09d4-425b-8cd5-f269c3a6e69a ] > at > org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633) > ~[drill-common-1.14.0.jar:1.14.0] > at > org.apache.drill.exec.store.kafka.KafkaGroupScan.init(KafkaGroupScan.java:198) > [drill-storage-kafka-1.14.0.jar:1.14.0] > at > org.apache.drill.exec.store.kafka.KafkaGroupScan.<init>(KafkaGroupScan.java:98) > [drill-storage-kafka-1.14.0.jar:1.14.0] > at > org.apache.drill.exec.store.kafka.KafkaStoragePlugin.getPhysicalScan(KafkaStoragePlugin.java:83) > [drill-storage-kafka-1.14.0.jar:1.14.0] > at > org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:111) > [drill-java-exec-1.14.0.jar:1.14.0] > at > org.apache.drill.exec.planner.logical.DrillTable.getGroupScan(DrillTable.java:99) > [drill-java-exec-1.14.0.jar:1.14.0] > at > org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:89) > [drill-java-exec-1.14.0.jar:1.14.0] > at > org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:69) > [drill-java-exec-1.14.0.jar:1.14.0] > at > org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:62) > [drill-java-exec-1.14.0.jar:1.14.0] > at > org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(DrillScanRule.java:38) > [drill-java-exec-1.14.0.jar:1.14.0] > at > 
org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212) > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6] > at > org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:652) > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6] > at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:368) > [calcite-core-1.16.0-drill-r6.jar:1.16.0-drill-r6] > at > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:429) > [drill-java-exec-1.14.0.jar:1.14.0] > at > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:369) > [drill-java-exec-1.14.0.jar:1.14.0] > at > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToRawDrel(DefaultSqlHandler.java:255) > [drill-java-exec-1.14.0.jar:1.14.0] > at > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:318) > [drill-java-exec-1.14.0.jar:1.14.0] > at > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:180)
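One way to confirm the descriptor growth described above is to sample the process's open-descriptor count between query iterations rather than checking it once. A minimal Java sketch, assuming a Linux /proc filesystem (the class and method names are illustrative):

```java
import java.io.File;

public class FdCount {

    // Count the open file descriptors of the current process.
    // Linux-only: each open descriptor appears as an entry in /proc/self/fd.
    static int openFdCount() {
        String[] fds = new File("/proc/self/fd").list();
        return fds == null ? -1 : fds.length; // -1 if /proc is unavailable
    }

    public static void main(String[] args) {
        // A count that grows steadily across query iterations points to a
        // leak, e.g. Kafka consumers that are created but never closed.
        System.out.println("open fds: " + openFdCount());
    }
}
```

Running this before and after a batch of queries gives a much clearer signal than `ulimit -a`, which reports the limit rather than current usage.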
[jira] [Comment Edited] (DRILL-7388) Apache Drill Kafka Storage module fails to return results for partitions containing single offset record
[ https://issues.apache.org/jira/browse/DRILL-7388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16958935#comment-16958935 ] Arina Ielchiieva edited comment on DRILL-7388 at 10/24/19 2:39 PM: --- This is also related to the following issue: While Kafka storage plugin works with stream/topics with multiple messages, it returns "No result found" if a topic contains only one message for query like {code:java} select * from kafka.`/s1:t1` {code} [~atlassdan] are you interested in submitting the PR with suggested fix? was (Author: arina): This is also related to the following issue: While Kafka storage plugin works with stream/topics with multiple messages, it returns "No result found" if a topic contains only one message for query like {code:java} select * from kafka.`/s1:t1` {code} [~atlassdan] are you interested in submitting the PR with suggested fix? > Apache Drill Kafka Storage module fails to return results for partitions > containing single offset record > > > Key: DRILL-7388 > URL: https://issues.apache.org/jira/browse/DRILL-7388 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.16.0 >Reporter: daniel kelly >Priority: Major > > If a partition only contains one record - e.g. > [topicName=myTopic, partitionId=117, startOffset=0, endOffset=1] > no data is returned. > I fixed this locally with the following code change in contrib/storage-kafka > :- > {code:java} > git diff > src/main/java/org/apache/drill/exec/store/kafka/KafkaRecordReader.java > @@ -109,7 +109,7 @@ public class KafkaRecordReader extends > AbstractRecordReader { > currentMessageCount = 0; > > try { > - while (currentOffset < subScanSpec.getEndOffset() - 1 && > msgItr.hasNext()) { > + while (currentOffset < subScanSpec.getEndOffset() && msgItr.hasNext()) > { > ConsumerRecord consumerRecord = msgItr.next(); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
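The one-character change matters because Kafka end offsets are exclusive: with startOffset=0 and endOffset=1, the original condition currentOffset < endOffset - 1 is already false before the single record is read. A standalone Java sketch of the loop over simulated offsets (not the actual reader class):

```java
import java.util.Iterator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class OffsetLoopDemo {

    // Simulate KafkaRecordReader's consume loop over records whose offsets
    // run from startOffset (inclusive) to endOffset (exclusive).
    static int consumed(long startOffset, long endOffset, boolean buggy) {
        List<Long> offsets = LongStream.range(startOffset, endOffset)
                .boxed().collect(Collectors.toList());
        Iterator<Long> msgItr = offsets.iterator();
        long currentOffset = startOffset;
        long bound = buggy ? endOffset - 1 : endOffset; // the one-line fix
        int count = 0;
        while (currentOffset < bound && msgItr.hasNext()) {
            currentOffset = msgItr.next() + 1; // offset of the next record
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Single-record partition: [startOffset=0, endOffset=1]
        System.out.println(consumed(0, 1, true));  // 0: buggy condition skips the only record
        System.out.println(consumed(0, 1, false)); // 1: fixed condition reads it
        // Multi-record partition: the buggy loop always drops the last record.
        System.out.println(consumed(0, 3, true));  // 2
        System.out.println(consumed(0, 3, false)); // 3
    }
}
```

As the sketch shows, the bug is not limited to single-record partitions: the old condition drops the last record of every partition, which is just most visible when that record is the only one.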
[jira] [Commented] (DRILL-7388) Apache Drill Kafka Storage module fails to return results for partitions containing single offset record
[ https://issues.apache.org/jira/browse/DRILL-7388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16958935#comment-16958935 ] Arina Ielchiieva commented on DRILL-7388: - This is also related to the following issue: While Kafka storage plugin works with stream/topics with multiple messages, it returns "No result found" if a topic contains only one message for query like {code:java} select * from kafka.`/s1:t1` {code} [~atlassdan] are you interested in submitting the PR with suggested fix? > Apache Drill Kafka Storage module fails to return results for partitions > containing single offset record > > > Key: DRILL-7388 > URL: https://issues.apache.org/jira/browse/DRILL-7388 > Project: Apache Drill > Issue Type: Bug >Reporter: daniel kelly >Priority: Blocker > > If a partition only contains one record - e.g. > [topicName=myTopic, partitionId=117, startOffset=0, endOffset=1] > no data is returned. > I fixed this locally with the following code change in contrib/storage-kafka > :- > {code:java} > git diff > src/main/java/org/apache/drill/exec/store/kafka/KafkaRecordReader.java > @@ -109,7 +109,7 @@ public class KafkaRecordReader extends > AbstractRecordReader { > currentMessageCount = 0; > > try { > - while (currentOffset < subScanSpec.getEndOffset() - 1 && > msgItr.hasNext()) { > + while (currentOffset < subScanSpec.getEndOffset() && msgItr.hasNext()) > { > ConsumerRecord consumerRecord = msgItr.next(); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (DRILL-7388) Apache Drill Kafka Storage module fails to return results for partitions containing single offset record
[ https://issues.apache.org/jira/browse/DRILL-7388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-7388: Affects Version/s: 1.16.0 > Apache Drill Kafka Storage module fails to return results for partitions > containing single offset record > > > Key: DRILL-7388 > URL: https://issues.apache.org/jira/browse/DRILL-7388 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.16.0 >Reporter: daniel kelly >Priority: Blocker > > If a partition only contains one record - e.g. > [topicName=myTopic, partitionId=117, startOffset=0, endOffset=1] > no data is returned. > I fixed this locally with the following code change in contrib/storage-kafka > :- > {code:java} > git diff > src/main/java/org/apache/drill/exec/store/kafka/KafkaRecordReader.java > @@ -109,7 +109,7 @@ public class KafkaRecordReader extends > AbstractRecordReader { > currentMessageCount = 0; > > try { > - while (currentOffset < subScanSpec.getEndOffset() - 1 && > msgItr.hasNext()) { > + while (currentOffset < subScanSpec.getEndOffset() && msgItr.hasNext()) > { > ConsumerRecord consumerRecord = msgItr.next(); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (DRILL-7388) Apache Drill Kafka Storage module fails to return results for partitions containing single offset record
[ https://issues.apache.org/jira/browse/DRILL-7388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-7388: Priority: Major (was: Blocker) > Apache Drill Kafka Storage module fails to return results for partitions > containing single offset record > > > Key: DRILL-7388 > URL: https://issues.apache.org/jira/browse/DRILL-7388 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.16.0 >Reporter: daniel kelly >Priority: Major > > If a partition only contains one record - e.g. > [topicName=myTopic, partitionId=117, startOffset=0, endOffset=1] > no data is returned. > I fixed this locally with the following code change in contrib/storage-kafka > :- > {code:java} > git diff > src/main/java/org/apache/drill/exec/store/kafka/KafkaRecordReader.java > @@ -109,7 +109,7 @@ public class KafkaRecordReader extends > AbstractRecordReader { > currentMessageCount = 0; > > try { > - while (currentOffset < subScanSpec.getEndOffset() - 1 && > msgItr.hasNext()) { > + while (currentOffset < subScanSpec.getEndOffset() && msgItr.hasNext()) > { > ConsumerRecord consumerRecord = msgItr.next(); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (DRILL-1709) desc => describe command
[ https://issues.apache.org/jira/browse/DRILL-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-1709: -- Labels: ready-to-commit (was: ) > desc => describe command > > > Key: DRILL-1709 > URL: https://issues.apache.org/jira/browse/DRILL-1709 > Project: Apache Drill > Issue Type: Improvement >Affects Versions: 0.6.0 > Environment: MapR 4.0.1 >Reporter: Hari Sekhon >Assignee: Arina Ielchiieva >Priority: Minor > Labels: ready-to-commit > Fix For: 1.17.0 > > > There is no desc command, can you please add that shortcut to describe. > Regards, > Hari Sekhon > http://www.linkedin.com/in/harisekhon > DESCRIBE statements in Drill that should support desc alias: > 1. describe schema for table dfs.tmp.`test_table`; > 2. describe schema dfs.tmp; > 3. describe information_schema.`catalogs`; > 4. describe table information_schema.`catalogs`; -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (DRILL-1709) desc => describe command
[ https://issues.apache.org/jira/browse/DRILL-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16958890#comment-16958890 ] ASF GitHub Bot commented on DRILL-1709: --- arina-ielchiieva commented on pull request #1881: DRILL-1709: Add desc alias for describe command URL: https://github.com/apache/drill/pull/1881 The `describe [table]` command already supports the `desc` alias. This adds `desc` alias support for `describe schema [for table]` for consistency. Jira - [DRILL-1709](https://issues.apache.org/jira/browse/DRILL-1709). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > desc => describe command > > > Key: DRILL-1709 > URL: https://issues.apache.org/jira/browse/DRILL-1709 > Project: Apache Drill > Issue Type: Improvement >Affects Versions: 0.6.0 > Environment: MapR 4.0.1 >Reporter: Hari Sekhon >Assignee: Arina Ielchiieva >Priority: Minor > Fix For: 1.17.0 > > > There is no desc command, can you please add that shortcut to describe. > Regards, > Hari Sekhon > http://www.linkedin.com/in/harisekhon > DESCRIBE statements in Drill that should support desc alias: > 1. describe schema for table dfs.tmp.`test_table`; > 2. describe schema dfs.tmp; > 3. describe information_schema.`catalogs`; > 4. describe table information_schema.`catalogs`; -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (DRILL-1709) desc => describe command
[ https://issues.apache.org/jira/browse/DRILL-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-1709: Description: There is no desc command, can you please add that shortcut to describe. Regards, Hari Sekhon http://www.linkedin.com/in/harisekhon DESCRIBE statements in Drill: 1. describe schema for table dfs.tmp.`test_table`; 2. describe schema dfs.tmp; 3. describe information_schema.`catalogs`; 4. describe table information_schema.`catalogs`; was: There is no desc command, can you please add that shortcut to describe. Regards, Hari Sekhon http://www.linkedin.com/in/harisekhon > desc => describe command > > > Key: DRILL-1709 > URL: https://issues.apache.org/jira/browse/DRILL-1709 > Project: Apache Drill > Issue Type: Improvement >Affects Versions: 0.6.0 > Environment: MapR 4.0.1 >Reporter: Hari Sekhon >Assignee: Arina Ielchiieva >Priority: Minor > Fix For: 1.17.0 > > > There is no desc command, can you please add that shortcut to describe. > Regards, > Hari Sekhon > http://www.linkedin.com/in/harisekhon > DESCRIBE statements in Drill: > 1. describe schema for table dfs.tmp.`test_table`; > 2. describe schema dfs.tmp; > 3. describe information_schema.`catalogs`; > 4. describe table information_schema.`catalogs`; -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (DRILL-1709) desc => describe command
[ https://issues.apache.org/jira/browse/DRILL-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-1709: Description: There is no desc command, can you please add that shortcut to describe. Regards, Hari Sekhon http://www.linkedin.com/in/harisekhon DESCRIBE statements in Drill that should support desc alias: 1. describe schema for table dfs.tmp.`test_table`; 2. describe schema dfs.tmp; 3. describe information_schema.`catalogs`; 4. describe table information_schema.`catalogs`; was: There is no desc command, can you please add that shortcut to describe. Regards, Hari Sekhon http://www.linkedin.com/in/harisekhon DESCRIBE statements in Drill: 1. describe schema for table dfs.tmp.`test_table`; 2. describe schema dfs.tmp; 3. describe information_schema.`catalogs`; 4. describe table information_schema.`catalogs`; > desc => describe command > > > Key: DRILL-1709 > URL: https://issues.apache.org/jira/browse/DRILL-1709 > Project: Apache Drill > Issue Type: Improvement >Affects Versions: 0.6.0 > Environment: MapR 4.0.1 >Reporter: Hari Sekhon >Assignee: Arina Ielchiieva >Priority: Minor > Fix For: 1.17.0 > > > There is no desc command, can you please add that shortcut to describe. > Regards, > Hari Sekhon > http://www.linkedin.com/in/harisekhon > DESCRIBE statements in Drill that should support desc alias: > 1. describe schema for table dfs.tmp.`test_table`; > 2. describe schema dfs.tmp; > 3. describe information_schema.`catalogs`; > 4. describe table information_schema.`catalogs`; -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (DRILL-1709) desc => describe command
[ https://issues.apache.org/jira/browse/DRILL-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-1709: Reviewer: Vova Vysotskyi > desc => describe command > > > Key: DRILL-1709 > URL: https://issues.apache.org/jira/browse/DRILL-1709 > Project: Apache Drill > Issue Type: Improvement >Affects Versions: 0.6.0 > Environment: MapR 4.0.1 >Reporter: Hari Sekhon >Assignee: Arina Ielchiieva >Priority: Minor > Fix For: 1.17.0 > > > There is no desc command, can you please add that shortcut to describe. > Regards, > Hari Sekhon > http://www.linkedin.com/in/harisekhon -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (DRILL-7413) Scan operator does not set the container record count
[ https://issues.apache.org/jira/browse/DRILL-7413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16958844#comment-16958844 ] ASF GitHub Bot commented on DRILL-7413: --- arina-ielchiieva commented on pull request #1877: DRILL-7413: Test and fix scan operator vectors URL: https://github.com/apache/drill/pull/1877 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Scan operator does not set the container record count > - > > Key: DRILL-7413 > URL: https://issues.apache.org/jira/browse/DRILL-7413 > Project: Apache Drill > Issue Type: Bug >Reporter: Paul Rogers >Assignee: Paul Rogers >Priority: Minor > Labels: ready-to-commit > Fix For: 1.17.0 > > > Enable the vector checking provided in DRILL-7403. Enable just for the JSON > reader. You will get the following error: > {noformat} > 12:36:57.399 [22549a3d-a937-df51-2e13-4b032ba143f9:frag:0:0] ERROR > o.a.d.e.p.i.validate.BatchValidator - Found one or more vector errors from > ScanBatch > ScanBatch: Container record count not set > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (DRILL-1709) desc => describe command
[ https://issues.apache.org/jira/browse/DRILL-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-1709: Fix Version/s: (was: Future) 1.17.0 > desc => describe command > > > Key: DRILL-1709 > URL: https://issues.apache.org/jira/browse/DRILL-1709 > Project: Apache Drill > Issue Type: Improvement >Affects Versions: 0.6.0 > Environment: MapR 4.0.1 >Reporter: Hari Sekhon >Assignee: Arina Ielchiieva >Priority: Minor > Fix For: 1.17.0 > > > There is no desc command, can you please add that shortcut to describe. > Regards, > Hari Sekhon > http://www.linkedin.com/in/harisekhon -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (DRILL-1709) desc => describe command
[ https://issues.apache.org/jira/browse/DRILL-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva reassigned DRILL-1709: --- Assignee: Arina Ielchiieva > desc => describe command > > > Key: DRILL-1709 > URL: https://issues.apache.org/jira/browse/DRILL-1709 > Project: Apache Drill > Issue Type: Improvement >Affects Versions: 0.6.0 > Environment: MapR 4.0.1 >Reporter: Hari Sekhon >Assignee: Arina Ielchiieva >Priority: Minor > Fix For: Future > > > There is no desc command, can you please add that shortcut to describe. > Regards, > Hari Sekhon > http://www.linkedin.com/in/harisekhon -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (DRILL-7417) Add user logged in/out event in info level logs
[ https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16958808#comment-16958808 ] ASF GitHub Bot commented on DRILL-7417: --- arina-ielchiieva commented on pull request #1880: DRILL-7417: Add user logged in/out event in info level logs URL: https://github.com/apache/drill/pull/1880#discussion_r338507748 ## File path: exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/auth/DrillSpnegoLoginService.java ## @@ -122,7 +122,8 @@ private UserIdentity spnegoLogin(Object credentials) { // Get the client user short name final String userShortName = new HadoopKerberosName(clientName).getShortName(); - + logger.info("WebUser {} logged in from {}:{}", userShortName, request.getRemoteHost(), Review comment: ```suggestion logger.info("WebUser {} logged in from {}: {}", userShortName, request.getRemoteHost(), ``` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Add user logged in/out event in info level logs > --- > > Key: DRILL-7417 > URL: https://issues.apache.org/jira/browse/DRILL-7417 > Project: Apache Drill > Issue Type: Improvement > Components: Security >Reporter: Sorabh Hamirwasia >Assignee: Sorabh Hamirwasia >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (DRILL-7417) Add user logged in/out event in info level logs
[ https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16958810#comment-16958810 ] ASF GitHub Bot commented on DRILL-7417: --- arina-ielchiieva commented on pull request #1880: DRILL-7417: Add user logged in/out event in info level logs URL: https://github.com/apache/drill/pull/1880#discussion_r338507480 ## File path: exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/LogInLogOutResources.java ## @@ -51,6 +52,8 @@ @PermitAll public class LogInLogOutResources { + private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(LogInLogOutResources.class); Review comment: Please use imports instead of the full path. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Add user logged in/out event in info level logs > --- > > Key: DRILL-7417 > URL: https://issues.apache.org/jira/browse/DRILL-7417 > Project: Apache Drill > Issue Type: Improvement > Components: Security >Reporter: Sorabh Hamirwasia >Assignee: Sorabh Hamirwasia >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (DRILL-7417) Add user logged in/out event in info level logs
[ https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16958807#comment-16958807 ] ASF GitHub Bot commented on DRILL-7417: --- arina-ielchiieva commented on pull request #1880: DRILL-7417: Add user logged in/out event in info level logs URL: https://github.com/apache/drill/pull/1880#discussion_r338507530 ## File path: exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/LogInLogOutResources.java ## @@ -120,6 +123,12 @@ public Viewable getLoginPageAfterValidationError() { public void logout(@Context HttpServletRequest req, @Context HttpServletResponse resp) throws Exception { final HttpSession session = req.getSession(); if (session != null) { + final Object authCreds = session.getAttribute(SessionAuthentication.__J_AUTHENTICATED); + if (authCreds != null) { +final SessionAuthentication sessionAuth = (SessionAuthentication) authCreds; +logger.info("WebUser {} logged out from {}:{}", sessionAuth.getUserIdentity().getUserPrincipal().getName(), req Review comment: ```suggestion logger.info("WebUser {} logged out from {}: {}", sessionAuth.getUserIdentity().getUserPrincipal().getName(), req ``` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Add user logged in/out event in info level logs > --- > > Key: DRILL-7417 > URL: https://issues.apache.org/jira/browse/DRILL-7417 > Project: Apache Drill > Issue Type: Improvement > Components: Security >Reporter: Sorabh Hamirwasia >Assignee: Sorabh Hamirwasia >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (DRILL-7417) Add user logged in/out event in info level logs
[ https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16958809#comment-16958809 ] ASF GitHub Bot commented on DRILL-7417: --- arina-ielchiieva commented on pull request #1880: DRILL-7417: Add user logged in/out event in info level logs URL: https://github.com/apache/drill/pull/1880#discussion_r338507582 ## File path: exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/auth/DrillRestLoginService.java ## @@ -78,7 +78,7 @@ public UserIdentity login(String username, Object credentials, ServletRequest re // Authenticate the user with configured Authenticator userAuthenticator.authenticate(username, credentials.toString()); - logger.debug("WebUser {} is successfully authenticated", username); + logger.info("WebUser {} logged in from {}:{}", username, request.getRemoteHost(), request.getRemotePort()); Review comment: ```suggestion logger.info("WebUser {} logged in from {}: {}", username, request.getRemoteHost(), request.getRemotePort()); ``` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Add user logged in/out event in info level logs > --- > > Key: DRILL-7417 > URL: https://issues.apache.org/jira/browse/DRILL-7417 > Project: Apache Drill > Issue Type: Improvement > Components: Security >Reporter: Sorabh Hamirwasia >Assignee: Sorabh Hamirwasia >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
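The review suggestions above only change the message template from "{}:{}" to "{}: {}". To show the resulting message shapes without pulling in SLF4J, here is a minimal stand-in for its "{}" placeholder substitution (illustrative only; the actual code should keep using org.slf4j.Logger):

```java
public class LogFormatDemo {

    // Minimal stand-in for SLF4J's "{}" placeholder substitution:
    // each "{}" in the template is replaced by the next argument in order.
    static String format(String template, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0, from = 0, at;
        while ((at = template.indexOf("{}", from)) >= 0 && argIdx < args.length) {
            sb.append(template, from, at).append(args[argIdx++]);
            from = at + 2;
        }
        return sb.append(template.substring(from)).toString();
    }

    public static void main(String[] args) {
        // Original template: host and port read as one address.
        System.out.println(format("WebUser {} logged in from {}:{}",
                "alice", "172.30.8.49", 60558));
        // Suggested template: a space separates the host from the port.
        System.out.println(format("WebUser {} logged in from {}: {}",
                "alice", "172.30.8.49", 60558));
    }
}
```

Note that the sample logs in DRILL-7417's description use the "{}:{}" shape (e.g. 172.30.8.49:60558), so adopting the suggestion would change the documented output format.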