[GitHub] drill issue #938: DRILL-5694: Handle HashAgg OOM by spill and retry, plus pe...

2017-09-21 Thread Ben-Zvi
Github user Ben-Zvi commented on the issue:

https://github.com/apache/drill/pull/938
  
  These tests passed for me (embedded, on the Mac) after squashing and 
rebasing yesterday.

Were these failures seen on the cluster?

The test that "failed" was supposed to hit a failure internally (user error 
- not enough memory).







---


[GitHub] drill issue #938: DRILL-5694: Handle HashAgg OOM by spill and retry, plus pe...

2017-09-21 Thread paul-rogers
Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/938
  
@Ben-Zvi, unit tests failed with these errors:
```
Failed tests: 
  TestHashAggrSpill.testHashAggrFailWithFallbackDisabed:165 null

Tests in error: 
  TestHashAggrSpill.testNoPredictHashAggrSpill:135->testSpill:110 » IllegalState
  TestHashAggrSpill.testHashAggrSecondaryTertiarySpill:147->testSpill:110 » IllegalState
  TestHashAggrSpill.testSimpleHashAggrSpill:123->testSpill:110 » IllegalState Cl...
  TestHashAggrSpill.testHashAggrSuccessWithFallbackEnabled:179->testSpill:110 » IllegalState
```
Is this expected at this stage? Are these due to the test framework issue 
we discussed recently?


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140356738
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/ssl/SSLConfig.java ---
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.ssl;
+
+import com.google.common.base.Preconditions;
+import io.netty.handler.ssl.SslContext;
+import io.netty.handler.ssl.SslProvider;
+import io.netty.handler.ssl.util.InsecureTrustManagerFactory;
+import org.apache.drill.common.config.DrillConfig;
+import org.apache.drill.common.exceptions.DrillException;
+import org.apache.drill.exec.ExecConstants;
+import org.apache.drill.exec.memory.BufferAllocator;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.ssl.SSLFactory;
+
+import javax.net.ssl.KeyManagerFactory;
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.SSLEngine;
+import javax.net.ssl.TrustManagerFactory;
+import java.io.FileInputStream;
+import java.io.InputStream;
+import java.security.KeyStore;
+import java.text.MessageFormat;
+
+public abstract class SSLConfig {
+
+  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(SSLConfig.class);
+
+  public static final String DEFAULT_SSL_PROVIDER = "JDK"; // JDK or 
OPENSSL
+  public static final String DEFAULT_SSL_PROTOCOL = "TLSv1.2";
+  public static final int DEFAULT_SSL_HANDSHAKE_TIMEOUT_MS = 10 * 1000; // 
10 seconds
+
+  protected final boolean httpsEnabled;
+  protected final DrillConfig config;
+  protected final Configuration hadoopConfig;
+
+  // Either the Netty SSL context or the JDK SSL context will be 
initialized
+  // The JDK SSL context is use iff the useSystemTrustStore setting is 
enabled.
+  protected SslContext nettySslContext;
+  protected SSLContext jdkSSlContext;
+
+  private static final boolean isWindows = 
System.getProperty("os.name").toLowerCase().indexOf("win") >= 0;
+  private static final boolean isMacOs = 
System.getProperty("os.name").toLowerCase().indexOf("mac") >= 0;
+
+  public static final String HADOOP_SSL_CONF_TPL_KEY = 
"hadoop.ssl.{0}.conf";
+  public static final String HADOOP_SSL_KEYSTORE_LOCATION_TPL_KEY = 
"ssl.{0}.keystore.location";
+  public static final String HADOOP_SSL_KEYSTORE_PASSWORD_TPL_KEY = 
"ssl.{0}.keystore.password";
+  public static final String HADOOP_SSL_KEYSTORE_TYPE_TPL_KEY = 
"ssl.{0}.keystore.type";
+  public static final String HADOOP_SSL_KEYSTORE_KEYPASSWORD_TPL_KEY =
+  "ssl.{0}.keystore.keypassword";
+  public static final String HADOOP_SSL_TRUSTSTORE_LOCATION_TPL_KEY = 
"ssl.{0}.truststore.location";
+  public static final String HADOOP_SSL_TRUSTSTORE_PASSWORD_TPL_KEY = 
"ssl.{0}.truststore.password";
+  public static final String HADOOP_SSL_TRUSTSTORE_TYPE_TPL_KEY = 
"ssl.{0}.truststore.type";
+
+  public SSLConfig(DrillConfig config, Configuration hadoopConfig, 
SSLFactory.Mode mode)
+  throws DrillException {
+
+this.config = config;
+httpsEnabled =
+config.hasPath(ExecConstants.HTTP_ENABLE_SSL) && 
config.getBoolean(ExecConstants.HTTP_ENABLE_SSL);
+// For testing we will mock up a hadoop configuration, however for 
regular use, we find the actual hadoop config.
+boolean enableHadoopConfig = 
config.getBoolean(ExecConstants.SSL_USE_HADOOP_CONF);
+if (enableHadoopConfig && this instanceof SSLConfigServer) {
+  if (hadoopConfig == null) {
+this.hadoopConfig = new Configuration(); // get hadoop 
configuration
+  } else {
+this.hadoopConfig = hadoopConfig;
+  }
+  String hadoopSSLConfigFile =
+  
this.hadoopConfig.get(resolveHadoopPropertyName(HADOOP_SSL_CONF_TPL_KEY, mode));
+  logger.debug("Using Hadoop configuration for SSL");
+  logger.debug("Hadoop SSL configuration file: {}", 
hadoopSSLConfigFile);
+  


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140134229
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/BasicClient.java 
---
@@ -120,6 +125,25 @@ protected void initChannel(SocketChannel ch) throws 
Exception {
 // }
   }
 
+  // Adds a SSL handler if enabled. Required only for client and server 
communications, so
+  // a real implementation is only available for UserClient
+  protected void setupSSL(ChannelPipeline pipe, 
ConnectionMultiListener.SSLHandshakeListener sslHandshakeListener) {
+// Do nothing
--- End diff --

We can throw an `UnsupportedOperationException` here to catch any error in flows 
where this is called in the context of the Data/Control server.


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140132632
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/rpc/user/UserClient.java ---
@@ -102,19 +115,78 @@
   // these are used for authentication
   private volatile List serverAuthMechanisms = null;
   private volatile boolean authComplete = true;
+  private SSLConfig sslConfig;
+  private Channel sslChannel;
+  private DrillbitEndpoint endpoint;
 
   public UserClient(String clientName, DrillConfig config, boolean 
supportComplexTypes,
-  BufferAllocator allocator, EventLoopGroup eventLoopGroup, Executor 
eventExecutor) {
-super(
-UserRpcConfig.getMapping(config, eventExecutor),
-allocator.getAsByteBufAllocator(),
-eventLoopGroup,
-RpcType.HANDSHAKE,
-BitToUserHandshake.class,
-BitToUserHandshake.PARSER);
+  BufferAllocator allocator, EventLoopGroup eventLoopGroup, Executor 
eventExecutor,
+  DrillbitEndpoint endpoint) throws NonTransientRpcException {
+super(UserRpcConfig.getMapping(config, eventExecutor), 
allocator.getAsByteBufAllocator(),
+eventLoopGroup, RpcType.HANDSHAKE, BitToUserHandshake.class, 
BitToUserHandshake.PARSER);
+this.endpoint = endpoint; // save the endpoint; it might be needed by 
SSL init.
 this.clientName = clientName;
 this.allocator = allocator;
 this.supportComplexTypes = supportComplexTypes;
+this.sslChannel = null;
+try {
+  this.sslConfig = new 
SSLConfigBuilder().config(config).mode(SSLFactory.Mode.CLIENT)
+  .initializeSSLContext(true).validateKeyStore(false).build();
+} catch (DrillException e) {
--- End diff --

So, based on the comment on the previous commit: if we don't pass the _info_ 
object (which contains the connection URL parameters) inside DrillConfig, keeping 
the two separate, that will work well here as well. We can do the following:

1) Add a method in SSLConfigBuilder to accept a Properties-type config as well, 
not just DrillConfig.
2) For SSLFactory.Mode.CLIENT we always pass an instance of Properties, whereas 
for SSLFactory.Mode.SERVER we always pass an instance of DrillConfig. This check 
can be enforced inside builder.build().
3) Inside the build() method, when the mode is CLIENT, we currently reference the 
DrillProperties inside DrillConfig, which is actually part of the Properties 
object passed to the connect call. With the above change this becomes consistent.


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140398730
  
--- Diff: contrib/native/client/readme.linux ---
@@ -84,6 +84,21 @@ OR
 ln -svf libboost_filesystem.a libboost_filesystem-mt.a
 ln -svf libboost_date_time.a libboost_date_time-mt.a
 
+5) Install or  Cyrus SASL 
--- End diff --

Install or **build** ?


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140395265
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/rpc/user/UserServer.java ---
@@ -70,22 +78,80 @@
   private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(UserServer.class);
   private static final String SERVER_NAME = "Apache Drill Server";
 
+  private final BootStrapContext bootStrapContext;
+  private final BufferAllocator allocator;
--- End diff --

No need to make these member variables, since they are already available via the 
_UserConnectionConfig_ object.


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140357506
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/ssl/SSLConfig.java ---
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.ssl;
+
+import com.google.common.base.Preconditions;
+import io.netty.handler.ssl.SslContext;
+import io.netty.handler.ssl.SslProvider;
+import io.netty.handler.ssl.util.InsecureTrustManagerFactory;
+import org.apache.drill.common.config.DrillConfig;
+import org.apache.drill.common.exceptions.DrillException;
+import org.apache.drill.exec.ExecConstants;
+import org.apache.drill.exec.memory.BufferAllocator;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.ssl.SSLFactory;
+
+import javax.net.ssl.KeyManagerFactory;
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.SSLEngine;
+import javax.net.ssl.TrustManagerFactory;
+import java.io.FileInputStream;
+import java.io.InputStream;
+import java.security.KeyStore;
+import java.text.MessageFormat;
+
+public abstract class SSLConfig {
+
+  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(SSLConfig.class);
+
+  public static final String DEFAULT_SSL_PROVIDER = "JDK"; // JDK or 
OPENSSL
+  public static final String DEFAULT_SSL_PROTOCOL = "TLSv1.2";
+  public static final int DEFAULT_SSL_HANDSHAKE_TIMEOUT_MS = 10 * 1000; // 
10 seconds
+
+  protected final boolean httpsEnabled;
+  protected final DrillConfig config;
+  protected final Configuration hadoopConfig;
+
+  // Either the Netty SSL context or the JDK SSL context will be 
initialized
+  // The JDK SSL context is use iff the useSystemTrustStore setting is 
enabled.
+  protected SslContext nettySslContext;
+  protected SSLContext jdkSSlContext;
+
+  private static final boolean isWindows = 
System.getProperty("os.name").toLowerCase().indexOf("win") >= 0;
+  private static final boolean isMacOs = 
System.getProperty("os.name").toLowerCase().indexOf("mac") >= 0;
+
+  public static final String HADOOP_SSL_CONF_TPL_KEY = 
"hadoop.ssl.{0}.conf";
+  public static final String HADOOP_SSL_KEYSTORE_LOCATION_TPL_KEY = 
"ssl.{0}.keystore.location";
+  public static final String HADOOP_SSL_KEYSTORE_PASSWORD_TPL_KEY = 
"ssl.{0}.keystore.password";
+  public static final String HADOOP_SSL_KEYSTORE_TYPE_TPL_KEY = 
"ssl.{0}.keystore.type";
+  public static final String HADOOP_SSL_KEYSTORE_KEYPASSWORD_TPL_KEY =
+  "ssl.{0}.keystore.keypassword";
+  public static final String HADOOP_SSL_TRUSTSTORE_LOCATION_TPL_KEY = 
"ssl.{0}.truststore.location";
+  public static final String HADOOP_SSL_TRUSTSTORE_PASSWORD_TPL_KEY = 
"ssl.{0}.truststore.password";
+  public static final String HADOOP_SSL_TRUSTSTORE_TYPE_TPL_KEY = 
"ssl.{0}.truststore.type";
+
+  public SSLConfig(DrillConfig config, Configuration hadoopConfig, 
SSLFactory.Mode mode)
+  throws DrillException {
--- End diff --

It doesn't throw an exception in any case, so we can remove `throws DrillException` from the signature.


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140130105
  
--- Diff: 
exec/jdbc/src/main/java/org/apache/drill/jdbc/impl/DrillConnectionImpl.java ---
@@ -140,12 +140,12 @@ protected DrillConnectionImpl(DriverImpl driver, 
AvaticaFactory factory,
 
 this.client = new DrillClient(dConfig, set.getCoordinator());
   } else if(config.isDirect()) {
-final DrillConfig dConfig = DrillConfig.forClient();
+final DrillConfig dConfig = DrillConfig.forClient(info);
--- End diff --

This _info_ is available in the connect call.


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140396437
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/ssl/SSLConfig.java ---
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.ssl;
+
+import com.google.common.base.Preconditions;
+import io.netty.handler.ssl.SslContext;
+import io.netty.handler.ssl.SslProvider;
+import io.netty.handler.ssl.util.InsecureTrustManagerFactory;
+import org.apache.drill.common.config.DrillConfig;
+import org.apache.drill.common.exceptions.DrillException;
+import org.apache.drill.exec.ExecConstants;
+import org.apache.drill.exec.memory.BufferAllocator;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.ssl.SSLFactory;
+
+import javax.net.ssl.KeyManagerFactory;
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.SSLEngine;
+import javax.net.ssl.TrustManagerFactory;
+import java.io.FileInputStream;
+import java.io.InputStream;
+import java.security.KeyStore;
+import java.text.MessageFormat;
+
+public abstract class SSLConfig {
+
+  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(SSLConfig.class);
+
+  public static final String DEFAULT_SSL_PROVIDER = "JDK"; // JDK or 
OPENSSL
+  public static final String DEFAULT_SSL_PROTOCOL = "TLSv1.2";
+  public static final int DEFAULT_SSL_HANDSHAKE_TIMEOUT_MS = 10 * 1000; // 
10 seconds
+
+  protected final boolean httpsEnabled;
+  protected final DrillConfig config;
+  protected final Configuration hadoopConfig;
+
+  // Either the Netty SSL context or the JDK SSL context will be 
initialized
+  // The JDK SSL context is use iff the useSystemTrustStore setting is 
enabled.
+  protected SslContext nettySslContext;
+  protected SSLContext jdkSSlContext;
+
+  private static final boolean isWindows = 
System.getProperty("os.name").toLowerCase().indexOf("win") >= 0;
+  private static final boolean isMacOs = 
System.getProperty("os.name").toLowerCase().indexOf("mac") >= 0;
+
+  public static final String HADOOP_SSL_CONF_TPL_KEY = 
"hadoop.ssl.{0}.conf";
+  public static final String HADOOP_SSL_KEYSTORE_LOCATION_TPL_KEY = 
"ssl.{0}.keystore.location";
+  public static final String HADOOP_SSL_KEYSTORE_PASSWORD_TPL_KEY = 
"ssl.{0}.keystore.password";
+  public static final String HADOOP_SSL_KEYSTORE_TYPE_TPL_KEY = 
"ssl.{0}.keystore.type";
+  public static final String HADOOP_SSL_KEYSTORE_KEYPASSWORD_TPL_KEY =
+  "ssl.{0}.keystore.keypassword";
+  public static final String HADOOP_SSL_TRUSTSTORE_LOCATION_TPL_KEY = 
"ssl.{0}.truststore.location";
+  public static final String HADOOP_SSL_TRUSTSTORE_PASSWORD_TPL_KEY = 
"ssl.{0}.truststore.password";
+  public static final String HADOOP_SSL_TRUSTSTORE_TYPE_TPL_KEY = 
"ssl.{0}.truststore.type";
+
+  public SSLConfig(DrillConfig config, Configuration hadoopConfig, 
SSLFactory.Mode mode)
+  throws DrillException {
+
+this.config = config;
+httpsEnabled =
+config.hasPath(ExecConstants.HTTP_ENABLE_SSL) && 
config.getBoolean(ExecConstants.HTTP_ENABLE_SSL);
+// For testing we will mock up a hadoop configuration, however for 
regular use, we find the actual hadoop config.
+boolean enableHadoopConfig = 
config.getBoolean(ExecConstants.SSL_USE_HADOOP_CONF);
+if (enableHadoopConfig && this instanceof SSLConfigServer) {
+  if (hadoopConfig == null) {
+this.hadoopConfig = new Configuration(); // get hadoop 
configuration
+  } else {
+this.hadoopConfig = hadoopConfig;
+  }
+  String hadoopSSLConfigFile =
+  
this.hadoopConfig.get(resolveHadoopPropertyName(HADOOP_SSL_CONF_TPL_KEY, mode));
+  logger.debug("Using Hadoop configuration for SSL");
+  logger.debug("Hadoop SSL configuration file: {}", 
hadoopSSLConfigFile);
+  


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140395374
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/rpc/user/UserServer.java ---
@@ -70,22 +78,80 @@
   private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(UserServer.class);
   private static final String SERVER_NAME = "Apache Drill Server";
 
+  private final BootStrapContext bootStrapContext;
+  private final BufferAllocator allocator;
   private final UserConnectionConfig config;
+  private final SSLConfig sslConfig;
+  private Channel sslChannel;
   private final UserWorker userWorker;
 
   public UserServer(BootStrapContext context, BufferAllocator allocator, 
EventLoopGroup eventLoopGroup,
 UserWorker worker) throws DrillbitStartupException {
 super(UserRpcConfig.getMapping(context.getConfig(), 
context.getExecutor()),
 allocator.getAsByteBufAllocator(),
 eventLoopGroup);
+this.bootStrapContext = context;
+this.allocator = allocator;
 this.config = new UserConnectionConfig(allocator, context, new 
UserServerRequestHandler(worker));
+this.sslChannel = null;
+try {
+  this.sslConfig = new SSLConfigBuilder()
+  .config(bootStrapContext.getConfig())
+  .mode(SSLFactory.Mode.SERVER)
+  .initializeSSLContext(true)
+  .validateKeyStore(true)
+  .build();
+} catch (DrillException e) {
+  throw new DrillbitStartupException(e.getMessage(), e.getCause());
+}
 this.userWorker = worker;
 
 // Initialize Singleton instance of UserRpcMetrics.
 
((UserRpcMetrics)UserRpcMetrics.getInstance()).initialize(config.isEncryptionEnabled(),
 allocator);
   }
 
   @Override
+  protected void setupSSL(ChannelPipeline pipe) {
+if (sslConfig.isUserSslEnabled()) {
+
+  SSLEngine sslEngine = sslConfig.createSSLEngine(allocator, null, 0);
--- End diff --

Replace `allocator` with `config.getAllocator()`.


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140400536
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/BasicServer.java 
---
@@ -82,6 +83,9 @@ protected void initChannel(SocketChannel ch) throws 
Exception {
 ch.closeFuture().addListener(getCloseHandler(ch, connection));
 
 final ChannelPipeline pipe = ch.pipeline();
+// Make sure that the SSL handler is the first handler in the 
pipeline so everything is encrypted
+setupSSL(pipe);
--- End diff --

How about checking `isSslEnabled()` and then calling `setupSSL(..)` here and in 
`BasicClient`, instead of checking it inside the `setupSSL` function? In cases 
where SSL is disabled, that avoids the function call.


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140397986
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/rpc/user/UserServer.java ---
@@ -70,22 +78,80 @@
   private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(UserServer.class);
   private static final String SERVER_NAME = "Apache Drill Server";
 
+  private final BootStrapContext bootStrapContext;
+  private final BufferAllocator allocator;
   private final UserConnectionConfig config;
+  private final SSLConfig sslConfig;
+  private Channel sslChannel;
   private final UserWorker userWorker;
 
   public UserServer(BootStrapContext context, BufferAllocator allocator, 
EventLoopGroup eventLoopGroup,
 UserWorker worker) throws DrillbitStartupException {
 super(UserRpcConfig.getMapping(context.getConfig(), 
context.getExecutor()),
 allocator.getAsByteBufAllocator(),
 eventLoopGroup);
+this.bootStrapContext = context;
+this.allocator = allocator;
 this.config = new UserConnectionConfig(allocator, context, new 
UserServerRequestHandler(worker));
+this.sslChannel = null;
+try {
+  this.sslConfig = new SSLConfigBuilder()
+  .config(bootStrapContext.getConfig())
+  .mode(SSLFactory.Mode.SERVER)
+  .initializeSSLContext(true)
+  .validateKeyStore(true)
+  .build();
+} catch (DrillException e) {
+  throw new DrillbitStartupException(e.getMessage(), e.getCause());
+}
 this.userWorker = worker;
 
 // Initialize Singleton instance of UserRpcMetrics.
 
((UserRpcMetrics)UserRpcMetrics.getInstance()).initialize(config.isEncryptionEnabled(),
 allocator);
   }
 
   @Override
+  protected void setupSSL(ChannelPipeline pipe) {
+if (sslConfig.isUserSslEnabled()) {
+
+  SSLEngine sslEngine = sslConfig.createSSLEngine(allocator, null, 0);
+  sslEngine.setUseClientMode(false);
+
+  // No need for client side authentication (HTTPS like behaviour)
+  sslEngine.setNeedClientAuth(false);
+
+  // set Security property jdk.certpath.disabledAlgorithms  to disable 
specific ssl algorithms
+  sslEngine.setEnabledProtocols(sslEngine.getEnabledProtocols());
+
+  // set Security property jdk.tls.disabledAlgorithms to disable 
specific cipher suites
+  sslEngine.setEnabledCipherSuites(sslEngine.getEnabledCipherSuites());
+  sslEngine.setEnableSessionCreation(true);
+
--- End diff --

All of this sslEngine setup can be moved to 
`SSLConfigServer::createSSLEngine(..)`, and likewise the client-side setupSSL 
logic can be moved to `SSLConfigClient::createSSLEngine(..)`.
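For illustration, a sketch of centralizing the engine configuration in one factory method, using only the JDK's `javax.net.ssl` API (the `ServerEngineFactory` class name is hypothetical; in the PR this logic would live in `SSLConfigServer`):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import java.security.NoSuchAlgorithmException;

// Sketch: server-side SSLEngine configuration centralized in one factory
// method, as suggested for SSLConfigServer::createSSLEngine(..).
class ServerEngineFactory {
  SSLEngine createSSLEngine() {
    try {
      SSLEngine engine = SSLContext.getDefault().createSSLEngine();
      engine.setUseClientMode(false);   // server side of the handshake
      engine.setNeedClientAuth(false);  // HTTPS-like: no client certificates
      // Protocols/ciphers can still be restricted globally via the
      // jdk.certpath.disabledAlgorithms / jdk.tls.disabledAlgorithms
      // security properties; here we keep whatever the JDK enables.
      engine.setEnabledProtocols(engine.getEnabledProtocols());
      engine.setEnabledCipherSuites(engine.getEnabledCipherSuites());
      engine.setEnableSessionCreation(true);
      return engine;
    } catch (NoSuchAlgorithmException e) {
      throw new IllegalStateException("no default SSLContext available", e);
    }
  }
}
```

The server's `setupSSL` then only wires the returned engine into the pipeline, and the client side can mirror this with a factory that calls `setUseClientMode(true)`.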


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140394414
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/rpc/user/UserClient.java ---
@@ -176,31 +256,28 @@ public void connect(final DrillbitEndpoint endpoint, 
final DrillProperties prope
   }
 
   private CheckedFuture connect(final 
UserToBitHandshake handshake,
-final DrillbitEndpoint 
endpoint) {
+  final DrillbitEndpoint endpoint) {
 final SettableFuture connectionSettable = 
SettableFuture.create();
 final CheckedFuture connectionFuture =
 new AbstractCheckedFuture(connectionSettable) {
-  @Override
-  protected RpcException mapException(Exception e) {
+  @Override protected RpcException mapException(Exception e) {
--- End diff --

Please undo these formatting changes above and below


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140133705
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/BasicClient.java 
---
@@ -100,6 +103,8 @@ protected void initChannel(SocketChannel ch) throws 
Exception {
 ch.closeFuture().addListener(getCloseHandler(ch, connection));
 
 final ChannelPipeline pipe = ch.pipeline();
+// Make sure that the SSL handler is the first handler in the 
pipeline so everything is encrypted
+setupSSL(pipe, sslHandshakeListener);
--- End diff --

This will be called all the time, even when SSL is not enabled, and then later we 
have a check inside setupSSL where all the setup happens inside that if 
condition. How about doing that check here instead, and then calling the setupSSL 
method based on it? That way we know setupSSL does some setup and will be called 
only when SSL is enabled.
```
if (isSslEnabled()) {
  sslHandshakeListener = new ConnectionMultiListener.SSLHandshakeListener();
  setupSSL(pipe, sslHandshakeListener);
}
```
We can then remove that check from inside the setupSSL method.


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140130050
  
--- Diff: 
common/src/main/java/org/apache/drill/common/config/DrillConfig.java ---
@@ -110,8 +110,8 @@ public static DrillConfig create() {
*
* @return {@link DrillConfig} instance
*/
-  public static DrillConfig forClient() {
-return create(null, false);
+  public static DrillConfig forClient(Properties props) {
+return create(null, props, false);
--- End diff --

This change is not required, since it would duplicate the parameters passed by 
the JDBC application to DrillClient. We are already passing that information 
using _info_ (a Properties-type object) in _DrillConnectionImpl_.

---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140395315
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/rpc/user/UserServer.java ---
@@ -70,22 +78,80 @@
   private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(UserServer.class);
   private static final String SERVER_NAME = "Apache Drill Server";
 
+  private final BootStrapContext bootStrapContext;
+  private final BufferAllocator allocator;
   private final UserConnectionConfig config;
+  private final SSLConfig sslConfig;
+  private Channel sslChannel;
   private final UserWorker userWorker;
 
   public UserServer(BootStrapContext context, BufferAllocator allocator, 
EventLoopGroup eventLoopGroup,
 UserWorker worker) throws DrillbitStartupException {
 super(UserRpcConfig.getMapping(context.getConfig(), 
context.getExecutor()),
 allocator.getAsByteBufAllocator(),
 eventLoopGroup);
+this.bootStrapContext = context;
+this.allocator = allocator;
 this.config = new UserConnectionConfig(allocator, context, new 
UserServerRequestHandler(worker));
+this.sslChannel = null;
+try {
+  this.sslConfig = new SSLConfigBuilder()
+  .config(bootStrapContext.getConfig())
--- End diff --

Use `context.getConfig()` instead.


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140400292
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/BasicServer.java 
---
@@ -105,6 +109,25 @@ protected void initChannel(SocketChannel ch) throws 
Exception {
 // }
   }
 
+  // Adds a SSL handler if enabled. Required only for client and server 
communications, so
+  // a real implementation is only available for UserServer
+  protected void setupSSL(ChannelPipeline pipe) {
+// Do nothing
+  }
+
+  protected boolean isSslEnabled() {
+return false;
+  }
+
+  // Save the SslChannel after the SSL handshake so it can be closed later
+  public void setSslChannel(Channel c) {
+return;
+  }
+
+  protected void closeSSL() {
+return;
+  }
--- End diff --

Redundant `return` statement in both `closeSSL` and `setSslChannel`.


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140398781
  
--- Diff: contrib/native/client/readme.macos ---
@@ -44,12 +44,23 @@ Install Prerequisites
 2.2) Install zookeeper
   $> brew install zookeeper
 
-2.3) Install boost
+2.3) Install or  Cyrus SASL 
--- End diff --

Same as readme.linux


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140121083
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/ExecConstants.java ---
@@ -110,6 +110,18 @@
   String HASHAGG_FALLBACK_ENABLED_KEY = 
"drill.exec.hashagg.fallback.enabled";
   BooleanValidator HASHAGG_FALLBACK_ENABLED_VALIDATOR = new 
BooleanValidator(HASHAGG_FALLBACK_ENABLED_KEY);
 
+  String SSL_PROVIDER = "drill.exec.ssl.provider"; // valid values are 
"JDK", "OPENSSL" // default JDK
+  String SSL_PROTOCOL = "drill.exec.ssl.protocol"; // valid values are 
SSL, SSLV2, SSLV3, TLS, TLSV1, TLSv1.1, TLSv1.2(default)
+  String SSL_KEYSTORE_TYPE = "drill.exec.ssl.keyStoreType";
+  String SSL_KEYSTORE_PATH = "drill.exec.ssl.keyStorePath"; // path to 
keystore. default : $JRE_HOME/lib/security/keystore.jks
+  String SSL_KEYSTORE_PASSWORD = "drill.exec.ssl.keyStorePassword"; // 
default: changeit
+  String SSL_KEY_PASSWORD = "drill.exec.ssl.keyPassword"; //
+  String SSL_TRUSTSTORE_TYPE = "drill.exec.ssl.trustStoreType"; // valid 
values are jks(default), jceks, pkcs12
+  String SSL_TRUSTSTORE_PATH = "drill.exec.ssl.trustStorePath"; // path to 
keystore. default : $JRE_HOME/lib/security/cacerts.jks
+  String SSL_TRUSTSTORE_PASSWORD = "drill.exec.ssl.trustStorePassword"; // 
default: changeit
+  String SSL_USE_HADOOP_CONF = "drill.exec.ssl.useHadoopConfig"; // 
Initialize ssl params from hadoop if not provided by drill. default: true
+  String SSL_HANDSHAKE_TIMEOUT = 
"drill.exec.security.user.encryption.ssl.handshakeTimeout"; // Default 10 
seconds
--- End diff --

The general convention for naming config properties appears to be lowercase. 
I would suggest we stick to that, to make configuring the cluster easier for admins.


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140392979
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/ssl/SSLConfig.java ---
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.ssl;
+
+import com.google.common.base.Preconditions;
+import io.netty.handler.ssl.SslContext;
+import io.netty.handler.ssl.SslProvider;
+import io.netty.handler.ssl.util.InsecureTrustManagerFactory;
+import org.apache.drill.common.config.DrillConfig;
+import org.apache.drill.common.exceptions.DrillException;
+import org.apache.drill.exec.ExecConstants;
+import org.apache.drill.exec.memory.BufferAllocator;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.ssl.SSLFactory;
+
+import javax.net.ssl.KeyManagerFactory;
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.SSLEngine;
+import javax.net.ssl.TrustManagerFactory;
+import java.io.FileInputStream;
+import java.io.InputStream;
+import java.security.KeyStore;
+import java.text.MessageFormat;
+
+public abstract class SSLConfig {
+
+  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(SSLConfig.class);
+
+  public static final String DEFAULT_SSL_PROVIDER = "JDK"; // JDK or 
OPENSSL
+  public static final String DEFAULT_SSL_PROTOCOL = "TLSv1.2";
+  public static final int DEFAULT_SSL_HANDSHAKE_TIMEOUT_MS = 10 * 1000; // 
10 seconds
+
+  protected final boolean httpsEnabled;
+  protected final DrillConfig config;
+  protected final Configuration hadoopConfig;
+
+  // Either the Netty SSL context or the JDK SSL context will be 
initialized
+  // The JDK SSL context is used iff the useSystemTrustStore setting is 
enabled.
+  protected SslContext nettySslContext;
+  protected SSLContext jdkSSlContext;
+
+  private static final boolean isWindows = 
System.getProperty("os.name").toLowerCase().indexOf("win") >= 0;
+  private static final boolean isMacOs = 
System.getProperty("os.name").toLowerCase().indexOf("mac") >= 0;
+
+  public static final String HADOOP_SSL_CONF_TPL_KEY = 
"hadoop.ssl.{0}.conf";
+  public static final String HADOOP_SSL_KEYSTORE_LOCATION_TPL_KEY = 
"ssl.{0}.keystore.location";
+  public static final String HADOOP_SSL_KEYSTORE_PASSWORD_TPL_KEY = 
"ssl.{0}.keystore.password";
+  public static final String HADOOP_SSL_KEYSTORE_TYPE_TPL_KEY = 
"ssl.{0}.keystore.type";
+  public static final String HADOOP_SSL_KEYSTORE_KEYPASSWORD_TPL_KEY =
+  "ssl.{0}.keystore.keypassword";
+  public static final String HADOOP_SSL_TRUSTSTORE_LOCATION_TPL_KEY = 
"ssl.{0}.truststore.location";
+  public static final String HADOOP_SSL_TRUSTSTORE_PASSWORD_TPL_KEY = 
"ssl.{0}.truststore.password";
+  public static final String HADOOP_SSL_TRUSTSTORE_TYPE_TPL_KEY = 
"ssl.{0}.truststore.type";
+
+  public SSLConfig(DrillConfig config, Configuration hadoopConfig, 
SSLFactory.Mode mode)
+  throws DrillException {
+
+this.config = config;
+httpsEnabled =
+config.hasPath(ExecConstants.HTTP_ENABLE_SSL) && 
config.getBoolean(ExecConstants.HTTP_ENABLE_SSL);
+// For testing we will mock up a hadoop configuration, however for 
regular use, we find the actual hadoop config.
+boolean enableHadoopConfig = 
config.getBoolean(ExecConstants.SSL_USE_HADOOP_CONF);
+if (enableHadoopConfig && this instanceof SSLConfigServer) {
+  if (hadoopConfig == null) {
+this.hadoopConfig = new Configuration(); // get hadoop 
configuration
+  } else {
+this.hadoopConfig = hadoopConfig;
+  }
+  String hadoopSSLConfigFile =
+  
this.hadoopConfig.get(resolveHadoopPropertyName(HADOOP_SSL_CONF_TPL_KEY, mode));
+  logger.debug("Using Hadoop configuration for SSL");
+  logger.debug("Hadoop SSL configuration file: {}", 
hadoopSSLConfigFile);
+  
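The `ssl.{0}.…` template keys quoted above are `MessageFormat` patterns; the `{0}` placeholder is filled with the connection mode. A minimal illustration (the helper name mirrors the diff's `resolveHadoopPropertyName`; passing the mode as a plain lowercase string is an assumption here, mirroring Hadoop's `SSLFactory.Mode` names):

```java
import java.text.MessageFormat;

// Illustration only: resolves a Hadoop SSL template key for a given mode.
public class HadoopKeyResolver {
  static String resolve(String template, String mode) {
    return MessageFormat.format(template, mode);
  }

  public static void main(String[] args) {
    // -> ssl.server.keystore.location
    System.out.println(resolve("ssl.{0}.keystore.location", "server"));
  }
}
```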

[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140333837
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/rpc/user/UserClient.java ---
@@ -102,19 +115,78 @@
   // these are used for authentication
  private volatile List<String> serverAuthMechanisms = null;
   private volatile boolean authComplete = true;
+  private SSLConfig sslConfig;
+  private Channel sslChannel;
+  private DrillbitEndpoint endpoint;
 
   public UserClient(String clientName, DrillConfig config, boolean 
supportComplexTypes,
-  BufferAllocator allocator, EventLoopGroup eventLoopGroup, Executor 
eventExecutor) {
-super(
-UserRpcConfig.getMapping(config, eventExecutor),
-allocator.getAsByteBufAllocator(),
-eventLoopGroup,
-RpcType.HANDSHAKE,
-BitToUserHandshake.class,
-BitToUserHandshake.PARSER);
+  BufferAllocator allocator, EventLoopGroup eventLoopGroup, Executor 
eventExecutor,
+  DrillbitEndpoint endpoint) throws NonTransientRpcException {
+super(UserRpcConfig.getMapping(config, eventExecutor), 
allocator.getAsByteBufAllocator(),
+eventLoopGroup, RpcType.HANDSHAKE, BitToUserHandshake.class, 
BitToUserHandshake.PARSER);
+this.endpoint = endpoint; // save the endpoint; it might be needed by 
SSL init.
 this.clientName = clientName;
 this.allocator = allocator;
 this.supportComplexTypes = supportComplexTypes;
+this.sslChannel = null;
+try {
+  this.sslConfig = new 
SSLConfigBuilder().config(config).mode(SSLFactory.Mode.CLIENT)
+  .initializeSSLContext(true).validateKeyStore(false).build();
+} catch (DrillException e) {
+  throw new NonTransientRpcException(e.getMessage());
--- End diff --

This exception is thrown when there is an issue with the SSL parameters passed 
in, so it would be better to change it to InvalidConnectionInfoException.


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140357288
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/ssl/SSLConfigClient.java ---
@@ -0,0 +1,229 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.ssl;
+
+import io.netty.handler.ssl.SslContext;
+import io.netty.handler.ssl.SslContextBuilder;
+import io.netty.handler.ssl.SslProvider;
+import org.apache.drill.common.config.DrillConfig;
+import org.apache.drill.common.config.DrillProperties;
+import org.apache.drill.common.exceptions.DrillException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.ssl.SSLFactory;
+
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.TrustManagerFactory;
+
+public class SSLConfigClient extends SSLConfig {
+
+  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(SSLConfigClient.class);
+
+  private final SSLFactory.Mode mode; // Let's reuse Hadoop's 
SSLFactory.Mode to distinguish client/server
--- End diff --

_SSLFactory.Mode mode_ can be moved to base class


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140366914
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/ssl/SSLConfig.java ---
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.ssl;
+
+import com.google.common.base.Preconditions;
+import io.netty.handler.ssl.SslContext;
+import io.netty.handler.ssl.SslProvider;
+import io.netty.handler.ssl.util.InsecureTrustManagerFactory;
+import org.apache.drill.common.config.DrillConfig;
+import org.apache.drill.common.exceptions.DrillException;
+import org.apache.drill.exec.ExecConstants;
+import org.apache.drill.exec.memory.BufferAllocator;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.ssl.SSLFactory;
+
+import javax.net.ssl.KeyManagerFactory;
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.SSLEngine;
+import javax.net.ssl.TrustManagerFactory;
+import java.io.FileInputStream;
+import java.io.InputStream;
+import java.security.KeyStore;
+import java.text.MessageFormat;
+
+public abstract class SSLConfig {
+
+  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(SSLConfig.class);
+
+  public static final String DEFAULT_SSL_PROVIDER = "JDK"; // JDK or 
OPENSSL
+  public static final String DEFAULT_SSL_PROTOCOL = "TLSv1.2";
+  public static final int DEFAULT_SSL_HANDSHAKE_TIMEOUT_MS = 10 * 1000; // 
10 seconds
+
+  protected final boolean httpsEnabled;
+  protected final DrillConfig config;
+  protected final Configuration hadoopConfig;
+
+  // Either the Netty SSL context or the JDK SSL context will be 
initialized
+  // The JDK SSL context is used iff the useSystemTrustStore setting is 
enabled.
+  protected SslContext nettySslContext;
+  protected SSLContext jdkSSlContext;
+
+  private static final boolean isWindows = 
System.getProperty("os.name").toLowerCase().indexOf("win") >= 0;
+  private static final boolean isMacOs = 
System.getProperty("os.name").toLowerCase().indexOf("mac") >= 0;
+
+  public static final String HADOOP_SSL_CONF_TPL_KEY = 
"hadoop.ssl.{0}.conf";
+  public static final String HADOOP_SSL_KEYSTORE_LOCATION_TPL_KEY = 
"ssl.{0}.keystore.location";
+  public static final String HADOOP_SSL_KEYSTORE_PASSWORD_TPL_KEY = 
"ssl.{0}.keystore.password";
+  public static final String HADOOP_SSL_KEYSTORE_TYPE_TPL_KEY = 
"ssl.{0}.keystore.type";
+  public static final String HADOOP_SSL_KEYSTORE_KEYPASSWORD_TPL_KEY =
+  "ssl.{0}.keystore.keypassword";
+  public static final String HADOOP_SSL_TRUSTSTORE_LOCATION_TPL_KEY = 
"ssl.{0}.truststore.location";
+  public static final String HADOOP_SSL_TRUSTSTORE_PASSWORD_TPL_KEY = 
"ssl.{0}.truststore.password";
+  public static final String HADOOP_SSL_TRUSTSTORE_TYPE_TPL_KEY = 
"ssl.{0}.truststore.type";
+
+  public SSLConfig(DrillConfig config, Configuration hadoopConfig, 
SSLFactory.Mode mode)
+  throws DrillException {
+
+this.config = config;
+httpsEnabled =
+config.hasPath(ExecConstants.HTTP_ENABLE_SSL) && 
config.getBoolean(ExecConstants.HTTP_ENABLE_SSL);
+// For testing we will mock up a hadoop configuration, however for 
regular use, we find the actual hadoop config.
+boolean enableHadoopConfig = 
config.getBoolean(ExecConstants.SSL_USE_HADOOP_CONF);
+if (enableHadoopConfig && this instanceof SSLConfigServer) {
+  if (hadoopConfig == null) {
+this.hadoopConfig = new Configuration(); // get hadoop 
configuration
+  } else {
+this.hadoopConfig = hadoopConfig;
+  }
+  String hadoopSSLConfigFile =
+  
this.hadoopConfig.get(resolveHadoopPropertyName(HADOOP_SSL_CONF_TPL_KEY, mode));
+  logger.debug("Using Hadoop configuration for SSL");
+  logger.debug("Hadoop SSL configuration file: {}", 
hadoopSSLConfigFile);
+  

[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140124213
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/ExecConstants.java ---
@@ -126,10 +138,10 @@
   String HTTP_SESSION_MEMORY_RESERVATION = 
"drill.exec.http.session.memory.reservation";
   String HTTP_SESSION_MEMORY_MAXIMUM = 
"drill.exec.http.session.memory.maximum";
   String HTTP_SESSION_MAX_IDLE_SECS = 
"drill.exec.http.session_max_idle_secs";
-  String HTTP_KEYSTORE_PATH = "drill.exec.ssl.keyStorePath";
-  String HTTP_KEYSTORE_PASSWORD = "drill.exec.ssl.keyStorePassword";
-  String HTTP_TRUSTSTORE_PATH = "drill.exec.ssl.trustStorePath";
-  String HTTP_TRUSTSTORE_PASSWORD = "drill.exec.ssl.trustStorePassword";
+  String HTTP_KEYSTORE_PATH = SSL_KEYSTORE_PATH;
+  String HTTP_KEYSTORE_PASSWORD = SSL_KEYSTORE_PASSWORD;
+  String HTTP_TRUSTSTORE_PATH = SSL_TRUSTSTORE_PATH;
+  String HTTP_TRUSTSTORE_PASSWORD = SSL_TRUSTSTORE_PASSWORD;
--- End diff --

Why not just keep the _SSL_KEYSTORE_ and _SSL_TRUSTSTORE_ constants and remove 
the HTTP ones?


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140122588
  
--- Diff: distribution/src/resources/drill-override-example.conf ---
@@ -222,7 +222,35 @@ drill.exec: {
   # Full workspace name should be indicated (including schema and 
workspace separated by dot).
   # Workspace MUST be file-based and writable. Workspace name is 
case-sensitive.
   default_temporary_workspace: "dfs.tmp"
+
+  # Enable and provide additional parameters for Client-Server 
communication over SSL
+  # see also the javax.net.ssl parameters below
+  security.user.encryption.ssl: {
+#Set this to true to enable all client server communication to occur 
over SSL.
+enabled: false,
+#key password is optional if it is the same as the keystore password
+keyPassword: "key_passwd",
#Optional handshakeTimeout in milliseconds. Default is 10000 ms (10 
seconds)
handshakeTimeout: 10000,
+#protocol is optional. Drill will default to TLSv1.2
+protocol: "TLSv1.2"
+  }
+}
+
+# The SSL parameters below need to be set for custom transport layer 
settings. These are used by
+# both the WebServer (for HTTPS) and for Client-Server communication over 
SSL.
+javax.net.ssl {
--- End diff --

These are Java system properties, and the recommendation should be not to set 
them in the configuration file; instead they should be passed as command-line 
arguments. When config resolution happens later, it will take care of merging 
the system property and Drill's configuration property values. Same for the 
options below.


---


[GitHub] drill pull request #950: DRILL-5431: SSL Support

2017-09-21 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/950#discussion_r140115903
  
--- Diff: 
exec/memory/base/src/main/java/io/netty/buffer/UnsafeDirectLittleEndian.java ---
@@ -69,20 +74,20 @@ private UnsafeDirectLittleEndian(AbstractByteBuf buf, 
boolean fake, AtomicLong b
 this.memoryAddress = buf.memoryAddress();
   }
 
-  private long addr(int index) {
-return memoryAddress + index;
-  }
+private long addr(int index) {
+return memoryAddress + index;
--- End diff --

Please fix indentation here and below.


---


[GitHub] drill issue #914: DRILL-5657: Size-aware vector writer structure

2017-09-21 Thread paul-rogers
Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/914
  
Rebased onto master to resolve conflict in `exec/jdbc-all/pom.xml`.


---


Re: Added "spinner" code to allow debugging of failure cause

2017-09-21 Thread Timothy Farkas
Hi Boaz,

Would it be possible to implement this as a System option, so that there can be 
a uniform way for toggling these features?

Thanks,
Tim


From: Boaz Ben-Zvi 
Sent: Wednesday, September 20, 2017 5:23:43 PM
To: dev@drill.apache.org
Subject: Added "spinner" code to allow debugging of failure cause

  FYI and for feedback:

  As part of Pull Request #938 I added a “spinner” code in the build() method 
of the UserException class, such that when this method is called (i.e., before 
reporting of a failure to the user), that code can go into a looping spin 
(instead of continuing to termination).

This can be useful when investigating the original failure, allowing to attach 
a debugger, or use jstack to see the stacks at this point of execution, or 
check some external things (like condition of the spill files at that point), 
etc.

To turn this feature ON, one needs to create an (empty) flag file named 
/tmp/drill/spin on every node where the spinning should take place 
(e.g., use “clush –a touch /tmp/drill/spin” to set it all across the cluster).
Once a thread hits this code, it checks for the existence of this spin file, 
and if exists, the thread creates a temp file named something like: 
/tmp/drill/spin4148663301172491613.tmp  which contains its process ID (e.g., to 
allow jstack) and the error message, like:

~ 5 > cat /tmp/drill/spin5273075865809469794.tmp
Spinning process: 16966@BBenZvi-E754-MBP13.local
Error cause: SYSTEM ERROR: CannotPlanException: Node 
[rel#232:Subset#10.PHYSICAL.SINGLETON([]).[]] could not be implemented; planner 
state:

Root: rel#232:Subset#10.PHYSICAL.SINGLETON([]).[]
. . . . . . .

~ 6 > jstack 16966
Picked up JAVA_TOOL_OPTIONS: -ea
2017-09-20 17:15:21
Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.101-b13 mixed mode):

"Attach Listener" #91 daemon prio=9 os_prio=31 tid=0x7fdd8830b000 
nid=0x4f07 waiting on condition [0x]
   java.lang.Thread.State: RUNNABLE

"263cfbd5-329d-b9fb-d96e-392e4fe0be4d:foreman" #53 daemon prio=10 os_prio=31 
tid=0x7fdd8823a000 nid=0x7203 waiting on condition [0x72224000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
 at java.lang.Thread.sleep(Native Method)
 at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:570)
. . . . . . . .

The spinning thread then loops – sleeps for a second and then rechecks that 
flag file. To turn this feature OFF and release the spinning threads one need 
to delete that empty spin files (e.g., use “clush –a rm /tmp/drill/spin”). This 
will also clean the relevant temp files.
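A minimal sketch of such a spin loop (hypothetical class and method names, not the actual code in `UserException.Builder.build()`):

```java
import java.io.IOException;
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of the spin mechanism described above. Returns true if the flag file
// was present and the thread spun; false if the feature is off.
public class SpinSketch {
  static boolean spinWhileFlagged(Path flag, String errorCause) {
    if (!Files.exists(flag)) {
      return false;  // feature off: just report the error normally
    }
    try {
      // Record our PID and the error cause so an operator can run jstack, etc.
      Path marker = Files.createTempFile(flag.getParent(), "spin", ".tmp");
      String pid = ManagementFactory.getRuntimeMXBean().getName();
      Files.write(marker,
          ("Spinning process: " + pid + "\nError cause: " + errorCause + "\n").getBytes());
      while (Files.exists(flag)) {  // released by deleting the flag file
        Thread.sleep(1000);
      }
      Files.deleteIfExists(marker);
    } catch (IOException | InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return true;
  }

  public static void main(String[] args) {
    // No flag file at this path, so the call returns immediately.
    System.out.println(spinWhileFlagged(Paths.get("/tmp/no-such-drill-spin-flag"), "demo"));
  }
}
```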

   Hope this is useful, and welcome any feedback or suggestions.

  Boaz



Re: Backwards Compatibility Policy

2017-09-21 Thread Paul Rogers
Hi Tim,

Unfortunately, Drill has no version compatibility guidelines. We break 
compatibility all the time — often by accident. As Drill matures, we should 
consider defining such a policy.

Experience with other systems suggests that provide an automatic way to handle 
the inevitable evolution of APIs, data formats and the like. This is done for 
user convenience, and (in commercial products) to avoid support calls.

Would have been great if we had a version number in ZK. Would solve problems 
not just with options, but with storage plugins when we change their classes 
(and hence the JSON stored in ZK.) But, we don’t…

Vitalii recently added versioning for the Parquet metadata file to avoid the 
need for users to delete all the metadata each time we make a change. Would be 
great if we could follow that example for other areas.

In the meantime, perhaps you can implement a way to read the old format ZK but 
write the new format.
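One way to sketch that "read old, write new" idea (with entirely hypothetical field names — not Drill's actual OptionValue serialization): detect an old-format entry on read and migrate it to the new shape in memory, so that anything written back is already in the new format.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical migration: suppose old ZK entries stored the scope under "type"
// and carried no "version" field, while the new format uses "scope" + "version".
public class OptionMigration {
  static Map<String, String> readOption(Map<String, String> stored) {
    Map<String, String> v = new HashMap<>(stored);
    if (!v.containsKey("version")) {        // old-format entry detected
      v.put("version", "2");
      if (v.containsKey("type")) {
        v.put("scope", v.remove("type"));   // migrate the renamed field
      }
    }
    return v;                               // always the new shape in memory
  }
}
```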

I believe you also changed the layout of the system table for options. These 
were long-needed improvements. Still, I wonder if anyone has a script that 
depends on the old format? Do we need a way to support the old format, while 
offering a new table with the new format? Jyothsna recently did that as part of 
her work in options; I wonder if something like that is needed here also? In 
fact, you may be able to simply alter the table that she added: it hasn’t see 
the light of a Drill release yet.

Thanks,

- Paul

> On Sep 21, 2017, at 4:16 PM, Timothy Farkas  wrote:
> 
> Makes sense Abhishek, I'll work on making it backwards compatible then.
> 
> Thanks,
> Tim
> 
> 
> From: Abhishek Girish 
> Sent: Thursday, September 21, 2017 3:58:55 PM
> To: dev@drill.apache.org
> Subject: Re: Backwards Compatibility Policy
> 
> Hey Tim,
> 
> Requiring users to purge Drill's ZK data is not advisable and we might not
> want to go that route. We need to have a seamless upgrade path - for
> instance modifying values found to be in an older format to the new one,
> without explicit user interaction.
> 
> Regards,
> Abhishek
> 
> On Thu, Sep 21, 2017 at 3:46 PM, Timothy Farkas  wrote:
> 
>> Hi All,
>> 
>> I recently made a change to the option system which impacted the fields
>> contained in OptionValues and hence the format of option information we are
>> storing in zookeeper. So it is currently not backward compatible with old
>> system options stored in zookeeper. Two ways to resolve the issue are to
>> require old data to be purged from zookeeper when upgrading the cluster or
>> to attempt to allow backward compatibility by modifying the deserializer
>> for OptionValue. So my question is what is our stance on backwards
>> compatibility?
>> 
>> Thanks,
>> Tim
>> 



[jira] [Resolved] (DRILL-5259) Allow listing a user-defined number of profiles

2017-09-21 Thread Kunal Khatua (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua resolved DRILL-5259.
-
Resolution: Fixed

> Allow listing a user-defined number of profiles 
> 
>
> Key: DRILL-5259
> URL: https://issues.apache.org/jira/browse/DRILL-5259
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.9.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Trivial
> Fix For: 1.12.0, 1.10.0
>
>
> Currently, the web UI only lists the last 100 profiles. 
> This count is currently hard coded. The proposed change would be to create an 
> option in drill-override.conf to provide a flexible default value, and also 
> an option within the UI (via optional parameter in the path). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Backwards Compatibility Policy

2017-09-21 Thread Timothy Farkas
Makes sense Abhishek, I'll work on making it backwards compatible then.

Thanks,
Tim


From: Abhishek Girish 
Sent: Thursday, September 21, 2017 3:58:55 PM
To: dev@drill.apache.org
Subject: Re: Backwards Compatibility Policy

Hey Tim,

Requiring users to purge Drill's ZK data is not advisable and we might not
want to go that route. We need to have a seamless upgrade path - for
instance modifying values found to be in an older format to the new one,
without explicit user interaction.

Regards,
Abhishek

On Thu, Sep 21, 2017 at 3:46 PM, Timothy Farkas  wrote:

> Hi All,
>
> I recently made a change to the option system which impacted the fields
> contained in OptionValues and hence the format of option information we are
> storing in zookeeper. So it is currently not backward compatible with old
> system options stored in zookeeper. Two ways to resolve the issue are to
> require old data to be purged from zookeeper when upgrading the cluster or
> to attempt to allow backward compatibility by modifying the deserializer
> for OptionValue. So my question is what is our stance on backwards
> compatibility?
>
> Thanks,
> Tim
>


[GitHub] drill pull request #953: DRILL-5259: Allow listing a user-defined number of ...

2017-09-21 Thread kkhatua
GitHub user kkhatua opened a pull request:

https://github.com/apache/drill/pull/953

DRILL-5259: Allow listing a user-defined number of profiles

Added an additional field in the UI allowing a user to specify the max 
number of profiles to display

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kkhatua/drill DRILL-5259

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/953.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #953


commit 276f8cb72412d5bcb0b88c0616e65a04dc7f1b52
Author: Kunal Khatua 
Date:   2017-09-21T23:03:41Z

DRILL-5259: Allow listing a user-defined number of profiles 

Added an additional field in the UI allowing a user to specify the max 
number of profiles to display




---


Re: Backwards Compatibility Policy

2017-09-21 Thread Abhishek Girish
Hey Tim,

Requiring users to purge Drill's ZK data is not advisable and we might not
want to go that route. We need to have a seamless upgrade path - for
instance modifying values found to be in an older format to the new one,
without explicit user interaction.

Regards,
Abhishek

On Thu, Sep 21, 2017 at 3:46 PM, Timothy Farkas  wrote:

> Hi All,
>
> I recently made a change to the option system which impacted the fields
> contained in OptionValues and hence the format of option information we are
> storing in zookeeper. So it is currently not backward compatible with old
> system options stored in zookeeper. Two ways to resolve the issue are to
> require old data to be purged from zookeeper when upgrading the cluster or
> to attempt to allow backward compatibility by modifying the deserializer
> for OptionValue. So my question is what is our stance on backwards
> compatibility?
>
> Thanks,
> Tim
>


[jira] [Created] (DRILL-5810) Create instructions to create RPMs and DEBs for Apache Drill.

2017-09-21 Thread Arina Ielchiieva (JIRA)
Arina Ielchiieva created DRILL-5810:
---

 Summary: Create instructions to create RPMs and DEBs for Apache 
Drill. 
 Key: DRILL-5810
 URL: https://issues.apache.org/jira/browse/DRILL-5810
 Project: Apache Drill
  Issue Type: Task
  Components: Tools, Build & Test
Reporter: Arina Ielchiieva


Create instructions to create RPMs and DEBs for Apache Drill. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Backwards Compatibility Policy

2017-09-21 Thread Timothy Farkas
Hi All,

I recently made a change to the option system which impacted the fields 
contained in OptionValues and hence the format of option information we are 
storing in zookeeper. So it is currently not backward compatible with old 
system options stored in zookeeper. Two ways to resolve the issue are to 
require old data to be purged from zookeeper when upgrading the cluster or to 
attempt to allow backward compatibility by modifying the deserializer for 
OptionValue. So my question is what is our stance on backwards compatibility?

Thanks,
Tim


[GitHub] drill issue #914: DRILL-5657: Size-aware vector writer structure

2017-09-21 Thread paul-rogers
Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/914
  
This next commit reintroduces a projection feature. With this change, a 
client can:

* Define the set of columns to project
* Define the schema of the data source (table, file, etc.)
* Write columns according to the schema
* Harvest only the projected columns.

### Example

Here is a simple example adapted from `TestResultSetLoaderProjection`. 
First, declare the projection

```
List<SchemaPath> selection = Lists.newArrayList(
SchemaPath.getSimplePath("c"),
SchemaPath.getSimplePath("b"),
SchemaPath.getSimplePath("e"));
```

Then, declare the schema. (Here, we declare the schema up-front. Projection 
also works if the schema is defined as columns are discovered while creating a 
batch.)

```
TupleMetadata schema = new SchemaBuilder()
.add("a", MinorType.INT)
.add("b", MinorType.INT)
.add("c", MinorType.INT)
.add("d", MinorType.INT)
.buildSchema();
 ```

Then, use the options mechanisms to pass the information to the result set 
loader:

```
   ResultSetOptions options = new OptionBuilder()
.setProjection(selection)
.setSchema(schema)
.build();
ResultSetLoader rsLoader = new ResultSetLoaderImpl(fixture.allocator(), 
options);
```

Now, we can write the four columns in the data source:

```
RowSetLoader rootWriter = rsLoader.writer();
rsLoader.startBatch();
…
rootWriter.start();
rootWriter.scalar(“a”).setInt(10);
rootWriter.scalar(“b”).setInt(20);
rootWriter.scalar(“c”).setInt(30);
rootWriter.scalar(“d”).setInt(40);
rootWriter.save();

```

But, when we harvest the results, we get only the projected columns. Notice 
that “e” is projected, but does not exist in the table, and so is not 
projected to the output. A higher level of code will handle this case.

```
#: b, c
0: 20, 30
```

### Maps

Although the above example does not show the feature, the mechanism also 
handles maps and arrays of maps. The rules are:

* If the projection list includes specific map members (such as “m.b”), 
then project only those map members.
* If the projection list includes just the map name (such as “m”), then 
project all map members (such as “m.a” and “m.b”.)
* If the projection list does not include the map at all, then project 
neither the map nor any of its members.
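The three rules above can be expressed as a tiny predicate (a hypothetical helper for illustration, not Drill's `ProjectionSetImpl`), where a projection entry is either a map name like "m" or a dotted member like "m.b":

```java
import java.util.Set;

// Illustration of the map-projection rules: whole-map entries project every
// member; dotted entries project only the named member; otherwise nothing.
public class MapProjectionRules {
  static boolean isProjected(Set<String> projection, String map, String member) {
    if (projection.contains(map)) {
      return true;  // "m" in the list projects all of m's members
    }
    return projection.contains(map + "." + member);  // "m.b" projects only b
  }
}
```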

### Implementation

The implementation builds on previous commits. The idea is that we create a 
“dummy” column and writer, but we do not create the underlying value 
vector. This allows the client to be blissfully ignorant of whether the column 
is projected or not. On the other hand, if the client wants to know if a column 
is projected (perhaps to optimize away certain operations), then the projection 
status is available in the column metadata.
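
The "dummy writer" idea reduces to a null-object pattern; a minimal sketch 
follows. The names here are illustrative stand-ins, not Drill's actual writer 
classes, and an `int[]` stands in for a value vector:

```java
// Null-object sketch of a projected vs. unprojected column writer.
interface IntColumnWriter {
  void setInt(int v);
  boolean isProjected();
}

// Real writer: appends values to backing storage.
final class VectorIntWriter implements IntColumnWriter {
  final int[] values;
  int index;
  VectorIntWriter(int capacity) { values = new int[capacity]; }
  @Override public void setInt(int v) { values[index++] = v; }
  @Override public boolean isProjected() { return true; }
}

// Dummy writer: no backing vector is ever allocated; writes are
// silently discarded, so the client can stay "blissfully ignorant"
// of whether the column is projected.
final class DummyIntWriter implements IntColumnWriter {
  @Override public void setInt(int v) { /* ignore */ }
  @Override public boolean isProjected() { return false; }
}
```

A client that does care (say, to skip an expensive conversion) can check 
`isProjected()` before doing the work, mirroring the metadata check described 
above.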

#### Projection Set

Projection starts with a `ProjectionSet` abstraction. Each tuple (row, map) 
has a projection set. The projection set can be a set of names 
(`ProjectionSetImpl`) or a default (`NullProjectionSet`).
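
The two flavors can be sketched as follows; this is a simplified, hypothetical 
rendering of the idea, not the real `ProjectionSetImpl`/`NullProjectionSet` 
code:

```java
// Sketch: each tuple (row or map) carries its own projection set.
interface ProjSet {
  boolean isProjected(String name);
}

// Name-based set: project only the listed columns.
final class NameProjSet implements ProjSet {
  private final java.util.Set<String> names;
  NameProjSet(java.util.Collection<String> names) {
    this.names = new java.util.HashSet<>(names);
  }
  @Override public boolean isProjected(String name) {
    return names.contains(name);
  }
}

// Default set: project everything (e.g. a wildcard query), or nothing
// (e.g. the members of a map that is itself unprojected).
final class DefaultProjSet implements ProjSet {
  private final boolean projectAll;
  DefaultProjSet(boolean projectAll) { this.projectAll = projectAll; }
  @Override public boolean isProjected(String name) {
    return projectAll;
  }
}
```

The result set loader can then consult the tuple's `ProjSet` once per column at 
schema-definition time, rather than on every write.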

#### Result Set Loader

The result set loader is modified to check if a column is projected. If so, 
the code flow is the same as previously. If not, then the code will create the 
dummy vector state and dummy writers described above.

Adding support for non-projected columns involved the usual amount of 
refactoring and moving bits of code around to get a simple solution.

#### Accessor Factories

Prior versions had a `ColumnAccessorFactory` class that created both 
readers and writers. This commit splits the class into separate reader and 
writer factories. The writer factory now creates dummy writers if asked to 
create a writer when the backing vector is null. To make this easier, factory 
code that previously appeared in each writer has moved into the writer factory. 
(Note that readers don’t support projection: there is no need.)

#### Dummy Writers

The accessor layer is modified to create a set of dummy writers. Scalar 
writers have a wide (internal) interface. Dummy scalar writers simply ignore 
the unsupported operations. Dummy array and tuple writers are also provided.

#### Unit Test

The new `TestResultSetLoaderProjection` test exercises the new code. The 
new `DummyWriterTest` exercises the dummy writers.


---


[GitHub] drill issue #951: DRILL-5727: Update release profile to generate SHA-512 che...

2017-09-21 Thread arina-ielchiieva
Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/951
  
+1, LGTM.


---


[GitHub] drill issue #949: DRILL-5795: Parquet Filter push down at rowgroup level

2017-09-21 Thread dprofeta
Github user dprofeta commented on the issue:

https://github.com/apache/drill/pull/949
  
I have updated the patch with a unit test and fixed an issue when 
everything is filtered.


---


[jira] [Created] (DRILL-5809) Queries failed with UnrecognizedPropertyException

2017-09-21 Thread Dechang Gu (JIRA)
Dechang Gu created DRILL-5809:
-

 Summary: Queries failed with UnrecognizedPropertyException
 Key: DRILL-5809
 URL: https://issues.apache.org/jira/browse/DRILL-5809
 Project: Apache Drill
  Issue Type: Bug
  Components: Functions - Drill
Affects Versions: 1.12.0
 Environment: RHEL 6, 2.6.32-358.el6.x86_64, Apache Drill gitid 
4c99f0cdd98b1a0e789c5d11b353f88a2f6d54aa
Reporter: Dechang Gu
 Fix For: 1.12.0


Ran TPC-H queries; all failed due to the following error:
{code}
java.sql.SQLException: SYSTEM ERROR: UnrecognizedPropertyException: 
Unrecognized field "type" (class 
org.apache.drill.exec.server.options.OptionValue), not marked as ignorable (8 
known properties: "string_val", "kind", "accessibleScopes", "num_val", "name", 
"bool_val", "float_val", "scope"])
 at [Source: [B@4ec364a3; line: 6, column: 2] (through reference chain: 
org.apache.drill.exec.server.options.OptionValue["type"])


[Error Id: 9bad34d9-fe21-47fd-ade1-fbee3d5111e5 on ucs-node7.perf.lab:31010]
at 
org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:489)
at 
org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:561)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:1895)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:61)
at 
org.apache.calcite.avatica.AvaticaConnection$1.execute(AvaticaConnection.java:473)
at 
org.apache.drill.jdbc.impl.DrillMetaImpl.prepareAndExecute(DrillMetaImpl.java:1100)
at 
org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:477)
at 
org.apache.drill.jdbc.impl.DrillConnectionImpl.prepareAndExecuteInternal(DrillConnectionImpl.java:181)
at 
org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:109)
at 
org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:130)
at 
org.apache.drill.jdbc.impl.DrillStatementImpl.executeQuery(DrillStatementImpl.java:112)
at PipSQueak.executeQuery(PipSQueak.java:289)
at PipSQueak.runTest(PipSQueak.java:104)
at PipSQueak.main(PipSQueak.java:477)
Caused by: org.apache.drill.common.exceptions.UserRemoteException: SYSTEM 
ERROR: UnrecognizedPropertyException: Unrecognized field "type" (class 
org.apache.drill.exec.server.options.OptionValue), not marked as ignorable (8 
known properties: "string_val", "kind", "accessibleScopes", "num_val", "name", 
"bool_val", "float_val", "scope"])
 at [Source: [B@4ec364a3; line: 6, column: 2] (through reference chain: 
org.apache.drill.exec.server.options.OptionValue["type"])
{code}

Looks like the issue was introduced in commit: 
6adeb986016a769755fd5e8fc66244ee1e8d18e1 
DRILL-5723: Added System Internal Options That can be Modified at Run…

The commit prior to this one did not show the issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] drill pull request #951: DRILL-5727: Update release profile to generate SHA-...

2017-09-21 Thread parthchandra
Github user parthchandra commented on a diff in the pull request:

https://github.com/apache/drill/pull/951#discussion_r140309349
  
--- Diff: pom.xml ---
@@ -977,6 +977,7 @@
   
 MD5
 SHA-1
+SHA-512
--- End diff --

Done


---


[GitHub] drill issue #942: DRILL-5781: Fix unit test failures to use tests config eve...

2017-09-21 Thread arina-ielchiieva
Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/942
  
+1, LGTM.


---


[GitHub] drill pull request #942: DRILL-5781: Fix unit test failures to use tests con...

2017-09-21 Thread vvysotskyi
Github user vvysotskyi commented on a diff in the pull request:

https://github.com/apache/drill/pull/942#discussion_r140213645
  
--- Diff: contrib/storage-hbase/src/test/resources/hbase-site.xml ---
@@ -66,15 +66,13 @@
 Default is 10.
 
   

[GitHub] drill pull request #942: DRILL-5781: Fix unit test failures to use tests con...

2017-09-21 Thread vvysotskyi
Github user vvysotskyi commented on a diff in the pull request:

https://github.com/apache/drill/pull/942#discussion_r140215549
  
--- Diff: exec/java-exec/src/test/java/org/apache/drill/exec/ExecTest.java 
---
@@ -100,6 +101,14 @@ public void run() {
 return dir.getAbsolutePath() + File.separator + dirName;
   }
 
+  /**
+   * Sets zookeeper server and client SASL test config properties.
+   */
+  public static void setZookeeperSaslTestConfigProps() {
+System.setProperty(ZooKeeperSaslServer.LOGIN_CONTEXT_NAME_KEY, 
"Test_server");
--- End diff --

Thanks, replaced.


---


[GitHub] drill pull request #942: DRILL-5781: Fix unit test failures to use tests con...

2017-09-21 Thread vvysotskyi
Github user vvysotskyi commented on a diff in the pull request:

https://github.com/apache/drill/pull/942#discussion_r140209669
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/coord/zk/PathUtils.java ---
@@ -70,4 +72,14 @@ public static final String normalize(final String path) {
 return builder.toString();
   }
 
+  /**
+   * Creates and returns path with the protocol at the beginning from 
specified {@code url}.
+   */
--- End diff --

Thanks, done.


---


[GitHub] drill pull request #942: DRILL-5781: Fix unit test failures to use tests con...

2017-09-21 Thread vvysotskyi
Github user vvysotskyi commented on a diff in the pull request:

https://github.com/apache/drill/pull/942#discussion_r140214189
  
--- Diff: exec/java-exec/src/test/java/org/apache/drill/exec/ExecTest.java 
---
@@ -100,6 +101,14 @@ public void run() {
 return dir.getAbsolutePath() + File.separator + dirName;
   }
 
+  /**
+   * Sets zookeeper server and client SASL test config properties.
+   */
+  public static void setZookeeperSaslTestConfigProps() {
--- End diff --

Thanks, it looks better with these changes


---


[GitHub] drill issue #952: DRILL-5438: Amazon S3 bucket can't be queried directly at ...

2017-09-21 Thread xhochy
Github user xhochy commented on the issue:

https://github.com/apache/drill/pull/952
  
It would be nice if someone could point me to the location where I should 
add a unit test for this.


---


[GitHub] drill pull request #952: DRILL-5438: Amazon S3 bucket can't be queried direc...

2017-09-21 Thread xhochy
GitHub user xhochy opened a pull request:

https://github.com/apache/drill/pull/952

DRILL-5438: Amazon S3 bucket can't be queried directly at root



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xhochy/drill DRILL-5438

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/952.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #952


commit 8e43de6970a74fa001eabfa20f88da97feedad5b
Author: Korn, Uwe 
Date:   2017-09-21T12:52:58Z

DRILL-5438: Amazon S3 bucket can't be queried directly at root




---


[GitHub] drill issue #946: DRILL-5799: native-client: Support alternative build direc...

2017-09-21 Thread xhochy
Github user xhochy commented on the issue:

https://github.com/apache/drill/pull/946
  
@paul-rogers Thanks, rebase fixed the build.


---


[GitHub] drill pull request #889: DRILL-5691: enhance scalar sub queries checking for...

2017-09-21 Thread amansinha100
Github user amansinha100 commented on a diff in the pull request:

https://github.com/apache/drill/pull/889#discussion_r140160556
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/cost/DrillRelMdMaxRowCount.java
 ---
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.cost;
+
+import org.apache.calcite.rel.core.TableScan;
+import org.apache.calcite.rel.metadata.ReflectiveRelMetadataProvider;
+import org.apache.calcite.rel.metadata.RelMdMaxRowCount;
+import org.apache.calcite.rel.metadata.RelMetadataProvider;
+import org.apache.calcite.rel.metadata.RelMetadataQuery;
+import org.apache.calcite.util.BuiltInMethod;
+import org.apache.drill.exec.planner.logical.DrillLimitRel;
+
+public class DrillRelMdMaxRowCount extends RelMdMaxRowCount {
+
+  private static final DrillRelMdMaxRowCount INSTANCE = new 
DrillRelMdMaxRowCount();
+
+  public static final RelMetadataProvider SOURCE = 
ReflectiveRelMetadataProvider.reflectiveSource(BuiltInMethod.MAX_ROW_COUNT.method,
 INSTANCE);
+
+  public Double getMaxRowCount(DrillLimitRel rel, RelMetadataQuery mq) {
+return rel.getRows();
+  }
+
+  @Override
+  public Double getMaxRowCount(TableScan rel, RelMetadataQuery mq) {
+return rel.getRows();
--- End diff --

Sorry, I had been meaning to reply sooner.  I think overloading 
getMaxRowCount() to return rel.getRows() can create potential issues, because 
getMaxRowCount() should always return whatever is the *maximum* row count 
possible for that RelNode.  Here, if you return TableScan.getRows(), the value 
is an *estimate*, which means in reality it could be higher.  The caller might 
make an incorrect decision based on this value.

I am thinking about your original motivation for the changes.  Are you 
materializing the results into a single-row table?  It sounds like you want a 
special table scan whose max row count is 1.  Is materializing the only 
option?  (The reason I am curious is that it is odd to materialize very small 
data sets, such as 1 row.)



---


[GitHub] drill pull request #889: DRILL-5691: enhance scalar sub queries checking for...

2017-09-21 Thread amansinha100
Github user amansinha100 commented on a diff in the pull request:

https://github.com/apache/drill/pull/889#discussion_r140161176
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/join/JoinUtils.java
 ---
@@ -203,32 +203,23 @@ public static void 
addLeastRestrictiveCasts(LogicalExpression[] leftExpressions,
   }
 
   /**
-   * Utility method to check if a subquery (represented by its root 
RelNode) is provably scalar. Currently
-   * only aggregates with no group-by are considered scalar. In the 
future, this method should be generalized
-   * to include more cases and reconciled with Calcite's notion of scalar.
+   * Utility method to check if a subquery (represented by its root 
RelNode) is provably scalar.
* @param root The root RelNode to be examined
* @return True if the root rel or its descendant is scalar, False 
otherwise
*/
   public static boolean isScalarSubquery(RelNode root) {
-DrillAggregateRel agg = null;
-RelNode currentrel = root;
-while (agg == null && currentrel != null) {
-  if (currentrel instanceof DrillAggregateRel) {
-agg = (DrillAggregateRel)currentrel;
-  } else if (currentrel instanceof RelSubset) {
-currentrel = ((RelSubset)currentrel).getBest() ;
-  } else if (currentrel.getInputs().size() == 1) {
-// If the rel is not an aggregate or RelSubset, but is a 
single-input rel (could be Project,
-// Filter, Sort etc.), check its input
-currentrel = currentrel.getInput(0);
-  } else {
-break;
+RelMetadataQuery relMetadataQuery = RelMetadataQuery.instance();
+RelNode currentRel = root;
+while (currentRel != null) {
+  if (currentRel instanceof RelSubset) {
+currentRel = ((RelSubset) currentRel).getBest();
+continue;
   }
-}
-
-if (agg != null) {
-  if (agg.getGroupSet().isEmpty()) {
+  Double rowCount = relMetadataQuery.getMaxRowCount(currentRel);
+  if (rowCount != null && rowCount.equals(1.0)) {
--- End diff --

For scalar, the value could be <= 1.0  (0 also qualifies). 


---