[ https://issues.apache.org/jira/browse/GEODE-194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16037209#comment-16037209 ]

ASF GitHub Bot commented on GEODE-194:
--------------------------------------

GitHub user metatype opened a pull request:

    https://github.com/apache/geode/pull/558

    GEODE-194: Remove spark connector

    Remove the Spark connector code until it can be updated
    for the current Spark release. We should also integrate
    the build lifecycle and consider how to extract this into
    a separate repo.
    
    Thank you for submitting a contribution to Apache Geode.
    
    In order to streamline the review of the contribution we ask you
    to ensure the following steps have been taken:
    
    ### For all changes:
    - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
    
    - [X] Has your PR been rebased against the latest commit within the target branch (typically `develop`)?
    
    - [X] Is your initial contribution a single, squashed commit?
    
    - [X] Does `gradlew build` run cleanly?
    
    - [ ] Have you written or updated unit tests to verify your changes?
    
    - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
    
    ### Note:
    Please ensure that once the PR is submitted, you check Travis CI for build issues and submit an update to your PR as soon as possible. If you need help, please send an email to dev@geode.apache.org.


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/metatype/incubator-geode remove-spark

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/geode/pull/558.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #558
    
----
commit d95993b944d706baeb0c5e6c13a7d00754159123
Author: Anthony Baker <aba...@apache.org>
Date:   2017-05-06T00:00:02Z

    GEODE-194: Remove spark connector
    
    Remove the Spark connector code until it can be updated
    for the current Spark release. We should also integrate
    the build lifecycle and consider how to extract this into
    a separate repo.

----


> Geode Spark Connector does not support Spark 2.0
> ------------------------------------------------
>
>                 Key: GEODE-194
>                 URL: https://issues.apache.org/jira/browse/GEODE-194
>             Project: Geode
>          Issue Type: Bug
>          Components: extensions
>            Reporter: Jianxia Chen
>              Labels: experimental, gsoc2016
>
> The BasicIntegrationTest fails when using Spark 1.4, e.g.:
> [info] - GemFire OQL query with more complex UDT: Partitioned Region *** FAILED ***
> [info]   org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 24.0 failed 1 times, most recent failure: Lost task 0.0 in stage 24.0 (TID 48, localhost): scala.MatchError:
> [info]        Portfolio [id=3 status=active type=type3
> [info]                AOL:Position [secId=AOL qty=978.0 mktValue=40.373],
> [info]                MSFT:Position [secId=MSFT qty=98327.0 mktValue=23.32]] (of class ittest.io.pivotal.gemfire.spark.connector.Portfolio)
> [info]        at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$4.apply(CatalystTypeConverters.scala:178)
> [info]        at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:62)
> [info]        at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:59)
> [info]        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> [info]        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> [info]        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> [info]        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> [info]        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
> [info]        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
> [info]        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
> [info]        at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
> [info]        at scala.collection.AbstractIterator.to(Iterator.scala:1157)
> [info]        at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
> [info]        at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
> [info]        at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
> [info]        at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
> [info]        at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:885)
> [info]        at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:885)
> [info]        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
> [info]        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
> [info]        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
> [info]        at org.apache.spark.scheduler.Task.run(Task.scala:70)
> [info]        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
> [info]        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> [info]        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> [info]        at java.lang.Thread.run(Thread.java:745)
> [info] 
> [info] Driver stacktrace:
> [info]   at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1266)
> [info]   at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1257)
> [info]   at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1256)
> [info]   at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> [info]   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> [info]   at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1256)
> [info]   at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
> [info]   at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
> [info]   at scala.Option.foreach(Option.scala:236)
> [info]   at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
> [info]   ...
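
The scala.MatchError above comes from Catalyst's value conversion: in Spark 1.4, the converter that CatalystTypeConverters.createToCatalystConverter builds for a struct column pattern-matches the incoming value against a Row or a Product (case class), so a plain domain object such as Portfolio matches no case. The following is a minimal sketch of that failure mode, not the connector's actual test code; the class and column names are hypothetical and a Spark 1.4 dependency is assumed:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.{Row, SQLContext}
    import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

    // Plain domain class: not a case class, so not a scala.Product,
    // and Catalyst has no conversion case for it.
    class Portfolio(val id: Int, val status: String)

    object MatchErrorSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("sketch").setMaster("local[1]"))
        val sqlContext = new SQLContext(sc)

        // The schema declares a struct column, but each Row carries a raw
        // Portfolio object instead of a nested Row or case class.
        val portfolioType = StructType(Seq(
          StructField("id", IntegerType),
          StructField("status", StringType)))
        val schema = StructType(Seq(StructField("portfolio", portfolioType)))
        val rows = sc.parallelize(Seq(Row(new Portfolio(3, "active"))))

        // collect() forces RDDConversions.rowToRowRdd to apply the Catalyst
        // converters, which throw scala.MatchError (of class Portfolio),
        // mirroring the stack trace above.
        sqlContext.createDataFrame(rows, schema).collect()

        sc.stop()
      }
    }

The connector's OQL path exercises the same converters when it exposes query results as a DataFrame, which is why the code is being removed until it can be reworked against the current Spark release rather than patched test by test.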



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
