[jira] [Created] (ZEPPELIN-4136) Class Cast Exception with Spark Implementations that Backported SparkR Security Fix

2019-04-29 Thread Russell Spitzer (JIRA)
Russell Spitzer created ZEPPELIN-4136:
-

 Summary: Class Cast Exception with Spark Implementations that 
Backported SparkR Security Fix
 Key: ZEPPELIN-4136
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-4136
 Project: Zeppelin
  Issue Type: Bug
  Components: security, spark
Affects Versions: 0.8.1
Reporter: Russell Spitzer


Zeppelin uses a version check to determine the return type of the SparkR
channel:

https://github.com/apache/zeppelin/blob/8e6974fdc33e834bc01a5ee594e2cfca4ff3045f/spark/interpreter/src/main/java/org/apache/zeppelin/spark/SparkVersion.java#L92-L97

and

https://github.com/apache/zeppelin/blob/735064fdc57ae958fabae85b399bb5af3cb79144/spark/interpreter/src/main/scala/org/apache/spark/SparkRBackend.scala#L34-L44

The DataStax Enterprise build of Spark includes this security fix as of 2.2.2.X, but
since Zeppelin has no knowledge of this (for obvious reasons) it attempts
to connect without the secret. While I know this isn't an issue for everyone, I
think we could fix it by matching on the return type instead, which would let us
remove the version-check portion of the code. This may end up looking a
bit cleaner too, although that may just be my opinion.
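To illustrate the proposal, here is a minimal sketch of matching on the return type of the backend's init() method rather than comparing Spark version numbers. The OldRBackend/NewRBackend/PortAndSecret classes below are mocks standing in for the real org.apache.spark.api.r.RBackend variants (pre- and post-security-fix), so this is only an assumption-laden illustration, not Zeppelin code:

```java
import java.lang.reflect.Method;

// Mock of the pre-fix backend: init() returns only the port.
class OldRBackend {
    public int init() { return 12345; }
}

// Stand-in for scala.Tuple2[Int, String] returned by the post-fix backend.
class PortAndSecret {
    final int port; final String secret;
    PortAndSecret(int port, String secret) { this.port = port; this.secret = secret; }
}

// Mock of the post-fix backend: init() returns the port plus an auth secret.
class NewRBackend {
    public PortAndSecret init() { return new PortAndSecret(12345, "s3cret"); }
}

public class SparkRBackendProbe {
    /** True if this backend's init() hands back an auth secret,
     *  decided by inspecting the return type instead of the Spark version. */
    static boolean initReturnsSecret(Class<?> backend) {
        try {
            Method init = backend.getMethod("init");
            return init.getReturnType() != int.class;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(initReturnsSecret(OldRBackend.class));  // false
        System.out.println(initReturnsSecret(NewRBackend.class));  // true
    }
}
```

This way a distribution that backported the fix into an "old" version string would still be handled correctly, because the decision follows the actual shape of the API.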



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ZEPPELIN-2416) Make Spark Properties Optional in Spark Interpreter

2017-04-17 Thread Russell Spitzer (JIRA)
Russell Spitzer created ZEPPELIN-2416:
-

 Summary: Make Spark Properties Optional in Spark Interpreter
 Key: ZEPPELIN-2416
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-2416
 Project: Zeppelin
  Issue Type: Improvement
  Components: Interpreters
Affects Versions: 0.7.1
Reporter: Russell Spitzer
Priority: Trivial


I was doing some experimentation connecting Zeppelin to DataStax Enterprise and 
I noticed that the Spark Interpreter requires the "master" interpreter 
parameter to be present in order to work. There are several locations which do 
null-unsafe operations against the return value of getProperty("master").

See
https://github.com/apache/zeppelin/blob/v0.7.1/spark/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java#L310

Then the value of this property is set in the SparkConf even if a value is 
already set there:
https://github.com/apache/zeppelin/blob/v0.7.1/spark/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java#L335
and
https://github.com/apache/zeppelin/blob/v0.7.1/spark/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java#L449

For this and other parameters it might make sense to use values that are 
already set rather than overwriting them. This would allow Zeppelin to work 
better with "spark-defaults" and with vendors that preset these parameters in 
the environment.

I would propose that all interpreter properties be optional. Operationally this 
means that "master" may be left unset when it can be inherited automatically 
from the environment via the SparkConf. Other parameters would simply need a 
null check, so that a property is skipped when its value is null (unset).
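The proposed behavior could be sketched as follows. MockConf is a stand-in for org.apache.spark.SparkConf and the helper name setIfAbsent is purely illustrative; the point is only to show the null check plus the don't-overwrite rule in one place:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for org.apache.spark.SparkConf (illustrative only).
class MockConf {
    private final Map<String, String> settings = new HashMap<>();
    boolean contains(String key)        { return settings.containsKey(key); }
    void set(String key, String value)  { settings.put(key, value); }
    String get(String key, String dflt) { return settings.getOrDefault(key, dflt); }
}

public class OptionalPropertySketch {
    /** Copy an interpreter property into the conf only when it is actually
     *  set, and never clobber a value the conf already carries
     *  (e.g. one inherited from spark-defaults.conf). */
    static void setIfAbsent(MockConf conf, String key, String value) {
        if (value != null && !conf.contains(key)) {
            conf.set(key, value);
        }
    }

    public static void main(String[] args) {
        MockConf conf = new MockConf();
        conf.set("spark.master", "dse://node0");      // preset by the vendor

        setIfAbsent(conf, "spark.master", null);       // unset in Zeppelin: no NPE, no change
        setIfAbsent(conf, "spark.app.name", "Zeppelin");

        System.out.println(conf.get("spark.master", "local[*]"));  // dse://node0 survives
        System.out.println(conf.get("spark.app.name", "?"));       // Zeppelin
    }
}
```

For the don't-overwrite half, the real SparkConf already offers setIfMissing, which could cover that part directly.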



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)