Github user rawkintrevo commented on the issue:

    https://github.com/apache/zeppelin/pull/928
  
    @Bzz, re: the three PySpark-related tests in the Zeppelin Spark cluster suite: similar scenarios have always worked on my machine, and I am unable to reproduce the test failures locally (and have been unable to the whole time). With the exception of the bump to 0.7.0, I have successfully run `mvn clean verify` locally on each revision of the code.
    
    Since updating to `0.7.0`, running `mvn clean package -Ppyspark` yields the following errors:
    
    ```
    Tests in error: 
      HeliumApplicationFactoryTest.testUnloadOnInterpreterUnbind:227 » ClassCast jav...
      HeliumApplicationFactoryTest.testLoadRunUnloadApplication:145 » ClassCast java...
      HeliumApplicationFactoryTest.testUnloadOnInterpreterRestart:272 » ClassCast ja...
      HeliumApplicationFactoryTest.testUnloadOnParagraphRemove:189 » ClassCast java....
    ```
    
    **tog** was having similar issues (he attached his error log in the original email [thread](https://lists.apache.org/thread.html/e71088130d5e71058890f09fa91df5c1c5111ecb401c63f27a9762a7@%3Cdev.zeppelin.apache.org%3E)).  I tried building with 
    - `mvn clean package`
    - `mvn clean install`
    - `mvn clean package -Pbuild-dist`
    
    All of these yield the same errors during testing. I can, however, successfully build and manually check the cases covered by the failing PySpark tests noted above (though, even before this, I was unable to reproduce those test failures locally).
    
    So the several pushes I made yesterday stem from the fact that I am basically flying *quasi* blind locally (I can still run `mvn clean package` in the `zeppelin/mahout` directory).
    
    For brevity I will not copy and paste it again, but here is my reasoning:
    - I never touched anything in PySpark or the Zeppelin server.
    - Functionality still works fully when actually running Zeppelin.
    - Therefore, this is an issue of test configuration.
    
    Upon reviewing the failed tests, I noticed that the `note_id` they use (which was originally designated for the Spark tests) is the same one being used by the Mahout tests, even though they are in different interpreter groups.
    
    So I attempted to change the note IDs, which is when I got the `null pointer` errors in the last test run.
    
    That issue I was able to reproduce locally; it had to do with not setting the `Resource Pool` when creating the context in the MahoutTest. I've fixed that locally and have just pushed. The fact that this seemingly small oversight was causing failures leads me to believe that the *note* which the Spark test and the Mahout test both use is also somehow being used by the Zeppelin server, and that this mash-up is the root of the problem.
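    To illustrate the failure mode (a minimal, hypothetical sketch only; `TestContext` and `ResourcePool` here are stand-ins, not Zeppelin's actual classes or constructors): a test that builds a context without providing a resource pool gets an NPE as soon as anything dereferences the pool, while passing a pool at construction time fixes it.

    ```java
    // Hypothetical stand-ins for the real classes, to show the shape of the bug.
    class ResourcePool {
        Object get(String name) { return null; } // trivial placeholder lookup
    }

    class TestContext {
        private final ResourcePool pool; // stays null if the test never sets it

        TestContext(ResourcePool pool) { this.pool = pool; }

        Object lookup(String name) {
            return pool.get(name); // NPE here when no pool was provided
        }
    }

    public class ResourcePoolDemo {
        public static void main(String[] args) {
            TestContext broken = new TestContext(null); // mirrors the failing test setup
            try {
                broken.lookup("x");
            } catch (NullPointerException e) {
                System.out.println("NPE: resource pool was never set");
            }

            // The fix: supply a resource pool when creating the context.
            TestContext fixed = new TestContext(new ResourcePool());
            fixed.lookup("x");
            System.out.println("ok");
        }
    }
    ```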
    
    


