Hello Marcus

Maybe it has something to do with

https://stackoverflow.com/questions/13008792/how-to-import-class-using-fully-qualified-name


I have implemented user-defined functions in Spark and used them in my code, 
with the jar loaded on the classpath, and I didn't have any issues with imports.
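For reference, this is roughly how I make a jar visible to both the driver and the executors with spark-submit (the class name and paths below are placeholders, not your actual setup):

```shell
# Pass the dependency jar explicitly so it is on the classpath of
# the driver and shipped to the executors. Paths are placeholders.
spark-submit \
  --class com.example.Main \
  --jars /path/to/datavec-api-0.9.1.jar \
  /path/to/my-app.jar
```

In Zeppelin, the equivalent is adding the artifact as a dependency of the Spark interpreter (or via %spark.dep, as you do below), so I'm curious which route you took.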


Can you give me an idea of how you are loading this datavec-api jar so that 
Zeppelin or spark-submit can access it?


Best

Karan

________________________________
From: Marcus <marcus.hun...@gmail.com>
Sent: Saturday, March 10, 2018 10:43:25 AM
To: users@zeppelin.apache.org
Subject: Spark Interpreter error: 'not found: type'

Hi,

I am new to Zeppelin and have encountered some strange behavior. When I copied 
my working Scala code into a notebook, I got errors from the Spark interpreter 
saying it could not find some types. Strangely, the code worked when I used the 
fully qualified class name (FQCN) instead of the simple name.
But since I want to create a workflow where I write Scala in my IDE and then 
transfer it to a notebook, I'd prefer not to be forced to use FQCNs.

Here's an example:


| %spark.dep
| z.reset()
| z.load("org.deeplearning4j:deeplearning4j-core:0.9.1")
| z.load("org.nd4j:nd4j-native-platform:0.9.1")

res0: org.apache.zeppelin.dep.Dependency = 
org.apache.zeppelin.dep.Dependency@2e10d1e4

| import org.datavec.api.records.reader.impl.FileRecordReader
|
| class Test extends FileRecordReader {
| }
|
| val t = new Test()

import org.datavec.api.records.reader.impl.FileRecordReader
<console>:12: error: not found: type FileRecordReader
class Test extends FileRecordReader {

Thanks, Marcus
