Github user coderfi commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61449633
@liancheng PR #3072 created (all of one line! :) ).
---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well.
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61437220
@coderfi This is a good catch, would you mind filing a JIRA ticket for this? A PR would be even better :)
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61438681
Yes, this should be 0.13.1a.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61442885
Yes - we just need to change that to 0.13.1a
On Sun, Nov 2, 2014 at 8:05 PM, wangfei notificati...@github.com wrote:
Yes, this should be 0.13.1a.
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61443613
And note: besides changing ```hive.version``` to 0.13.1a, ```hive.version.short``` should be 0.13.1.
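For reference, the two values discussed here would end up as POM properties along these lines; this is a sketch, and the surrounding file layout is assumed:
```xml
<properties>
  <!-- assumed layout; the property names come from the discussion above -->
  <hive.version>0.13.1a</hive.version>
  <!-- the short version drops the "a" re-publish suffix -->
  <hive.version.short>0.13.1</hive.version.short>
</properties>
```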
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-6147
it is not possible to mutate them after the fact. Let's stick with 0.13.1a
- I will fully release it now. But don't remove the extra repository (we can
remove it later)
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2685#discussion_r19653250
--- Diff: dev/run-tests ---
@@ -142,17 +142,24 @@ CURRENT_BLOCK=$BLOCK_BUILD
# We always build with Hive because the PySpark Spark SQL tests need it.
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2685#discussion_r19653264
--- Diff: dev/run-tests ---
@@ -142,17 +142,24 @@ CURRENT_BLOCK=$BLOCK_BUILD
# We always build with Hive because the PySpark Spark SQL tests need it.
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61222998
whew, finally.
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/2685#discussion_r19653726
--- Diff: dev/run-tests ---
@@ -142,17 +142,24 @@ CURRENT_BLOCK=$BLOCK_BUILD
# We always build with Hive because the PySpark Spark SQL tests need it.
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/2685#discussion_r19653776
--- Diff: dev/run-tests ---
@@ -142,17 +142,24 @@ CURRENT_BLOCK=$BLOCK_BUILD
# We always build with Hive because the PySpark Spark SQL tests need it.
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/2685#discussion_r19653777
--- Diff: dev/run-tests ---
@@ -142,17 +142,24 @@ CURRENT_BLOCK=$BLOCK_BUILD
# We always build with Hive because the PySpark Spark SQL tests need it.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61224765
[Test build #22602 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22602/consoleFull)
for PR 2685 at commit
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2685#discussion_r19654494
--- Diff: dev/run-tests ---
@@ -142,17 +142,24 @@ CURRENT_BLOCK=$BLOCK_BUILD
# We always build with Hive because the PySpark Spark SQL tests need it.
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61227318
@zhzhan Had a glance at the Kryo issue you pointed out; it should be related to the POM inconsistency problem I mentioned above, but I'm not sure whether they are
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61231536
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61231533
[Test build #22602 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22602/consoleFull)
for PR 2685 at commit
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61306597
Thanks guys for all your hard work on this! Merging to master.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2685
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61050366
@scwf Hmm, you mean dev/run-tests does not run PySpark? I ran dev/run-tests locally today and months ago and didn't hit a PySpark error. How can I invoke the PySpark tests?
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61051413
The problem here is that Hive 0.13 upgrades the Kryo version from 2.21 to 2.22. Spark depends on Kryo 2.21 via Chill. In Kryo 2.22 they made a build change
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61051681
@scwf I checked dev/run-tests, and it does invoke python/run-tests. Didn't you also run it locally and succeed, or am I missing something?
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61051691
Just to make it more intuitive, I made a dependency graph to illustrate the issue:
![dependency-hell](http://tinyurl.com/q5opqe2)
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61051870
Based on the most recent failures, it seems like somehow the test classpath
is still using kryo 2.22.
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61051898
@pwendell, Spark depends on Kryo 2.21, which does not shade Objenesis, while Hive 0.13 depends on Kryo 2.22, which shades it. So excluding will not fix the problem
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61051983
Actually, in the most recent failures it is using Kryo 2.21
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052028
```com.esotericsoftware.shaded.org.objenesis.strategy.InstantiatorStrategy```
is in kryo 2.22
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052007
@scwf the hive classes only link against kryo... they don't link against
objenesis directly. As long as kryo did not make a binary-incompatible change
between 2.21 and
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052076
Another thing to notice is that Kryo 2.21 is a really weird release. [Kryo
2.21
POM](https://repo1.maven.org/maven2/com/esotericsoftware/kryo/kryo/2.21/kryo-2.21.pom)
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052124
@pwendell com.esotericsoftware is already shaded in Hive. Will it work if we keep it in hive-exec.jar? Please advise.
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052281
@pwendell, right, in Hive 0.13.1 it uses the shaded
```com.esotericsoftware.shaded.org.objenesis.strategy.InstantiatorStrategy```
from Kryo 2.22.
So if we exclude it, we
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052515
Okay, I think the issue is pretty tough. Unfortunately Hive is directly using the shaded Objenesis classes. However, Spark needs Kryo 2.21, which depends on the original
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052493
@pwendell @scwf What I mean is that com.esotericsoftware is again shaded in
hive as org.apache.hive.com.esotericsoftware. I think that's the reason why the
original hive
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052558
Unfortunately the most recent version of Chill still sticks with Kryo 2.21 :(
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052630
@liancheng yeah - I just noticed that :(
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052641
@pwendell link:
https://github.com/twitter/chill/commit/3869b0122660c908e189ff08b615bd7221956224
Chill reverted the Kryo upgrade for an unknown reason.
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052655
Please refer to the following hive-exec directory; as we can see, the esotericsoftware classes are all under org.apache.hive.
HW11188:tmp1 zzhang$ ls
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052650
Actually @scwf found Chill had once tried to upgrade to Kryo 2.22, but
reverted it.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052753
Another idea - what if we upgrade Kryo to 2.22 explicitly in our core POM? If 2.22 is binary compatible with 2.21, it could work. If Chill directly uses Objenesis, we
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052820
Upgrading Kryo in core will cause a compile error
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052784
Hm, since Kryo 2.21 refers to the non-shaded version of Objenesis, while Kryo 2.22 refers to the shaded version, it should be OK to let them coexist in Spark, right?
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052933
Yeah, we can try using Kryo 2.22 and the original Objenesis in core
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61052977
@pwendell @liancheng @scwf Folks, why can't the shaded com.esotericsoftware in Hive coexist with com.esotericsoftware in Spark?
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61053147
@zhzhan they can both exist. The issue is that Spark uses a library, Chill, that requires Kryo 2.21. If 2.21 and 2.22 are not binary compatible, this will break it and
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61053748
I am testing as follows, is it OK?
```
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -125,10 +125,32 @@
     <dependency>
       <groupId>com.twitter</groupId>
```
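For reference, the kind of change being tried here usually looks like the following POM sketch; the exclusion and the pinned version are assumptions for illustration, not the final patch:
```xml
<!-- Hypothetical sketch: exclude Kryo from chill and pin it explicitly -->
<dependency>
  <groupId>com.twitter</groupId>
  <artifactId>chill_${scala.binary.version}</artifactId>
  <exclusions>
    <exclusion>
      <groupId>com.esotericsoftware.kryo</groupId>
      <artifactId>kryo</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>com.esotericsoftware.kryo</groupId>
  <artifactId>kryo</artifactId>
  <version>2.22</version>
</dependency>
```
Whether this helps still hinges on Kryo 2.21/2.22 binary compatibility, as discussed above.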
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61053806
Yeah this seems reasonable to try. Whether it will work depends on whether
kryo 2.22 and 2.21 are compatible.
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61054034
@pwendell Kryo 2.22 in Hive is already shaded by Hive itself. My understanding is that shading actually makes it private to Hive, and it is invisible to other
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61054248
@zhzhan we use a special hive-exec jar that doesn't shade any dependencies.
The original hive-exec jar includes a bunch of other stuff that we don't want.
However, it
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61054467
@pwendell Thanks for the clarification. That's what I meant: shade com.esotericsoftware in spark-project:hive-exec.
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61054698
Actually, before we merged #2241 we didn't even test core with Hive 0.13, so this issue shows up here :)
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61055173
@scwf I tested it locally, but with the Hive uber jar. Later, it seems that the spark-project jar for 0.13.1 is not available
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61055622
Still failed...
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61055765
@zhzhan, you can use the spark-project jar to test and it will fail, based on #2241
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61056033
Local test result:
1.
```
diff --git a/core/pom.xml b/core/pom.xml
index 5cd21e1..c87f661 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -131,6
```
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61056389
@scwf Yea, same here.
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61134677
Did a bunch of local tests, but didn't have any luck. I'd suggest re-shading the org.spark-project.hive:hive-exec:0.13.1 jar and including the shaded Kryo 2.22 in it.
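Re-shading of this kind is typically done with the maven-shade-plugin's relocation support. A hypothetical sketch only, not the actual spark-project build (whose details are not in this thread); the relocation prefix matches the org.apache.hive.com.esotericsoftware packages observed earlier:
```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <!-- rewrite Kryo class references so they cannot clash with Spark's Kryo 2.21 -->
      <relocation>
        <pattern>com.esotericsoftware</pattern>
        <shadedPattern>org.apache.hive.com.esotericsoftware</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```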
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61135531
@liancheng okay - I'll see what I can do. If Kryo 2.21 and 2.22 are source
compatible it should be doable.
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61137061
If they are not source compatible, maybe we should re-shade hive to use
kryo 2.21.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61146852
I am publishing a version of Hive that relies on Kryo 2.21. We can test this patch with it. I'll update when it's ready.
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61150244
Ok, thanks for that.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61161407
Okay, I pushed a new version, but you'll need to add a repository:
https://oss.sonatype.org/content/repositories/orgspark-project-1089/
Also, you'll need to
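Wiring a staging repository like that into a Maven build looks roughly like this sketch; the `id` and `name` are placeholders, and only the URL comes from the comment above:
```xml
<repository>
  <id>spark-staging-1089</id> <!-- placeholder id -->
  <name>Spark project staging repository</name>
  <url>https://oss.sonatype.org/content/repositories/orgspark-project-1089/</url>
</repository>
```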
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61162936
OK, I am testing.
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61165478
Local test seems OK (just tested with the Thrift server and hash shuffle); updated.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61165577
cross your fingers
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61165645
Jenkins, test this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61166027
[Test build #22560 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22560/consoleFull)
for PR 2685 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61178005
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61178002
[Test build #22560 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22560/consoleFull)
for PR 2685 at commit
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61178184
Hmm, the result is the same as #3004: core and hive passed but pyspark failed.
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61179025
How can I test PySpark separately, locally?
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61180228
@scwf
: org.apache.spark.sql.execution.QueryExecutionException: FAILED:
SemanticException [Error 10072]: Database does not exist: default
Looks like we need
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61180469
The server will do a ```git clean -fdx``` before every run. Maybe you can reproduce it with this option? (Warning: this will remove any working state you have in your local
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61180636
Or in the Python tests we need to call ```reset``` on TestHiveContext. I am not good at Python, but I will have a try.
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61183571
@scwf It looks like pyspark does not use TestHiveContext. Instead, we can probably change sql.py:
hiveCtx = LocalHiveContext(sc)
+ try:
+ ...
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61183861
It is used, see
https://github.com/apache/spark/blob/master/python/pyspark/sql.py#L1439-L1442
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61184011
Can you add reset there and run a test? @zhzhan
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61184081
Yeah, but it's not used here:
https://github.com/apache/spark/blob/master/python/pyspark/sql.py#L1409
We can probably just get rid of this test. That class is
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61189157
@scwf It seems that dev/run-tests does not run the Python tests. I don't have the right environment set up for the Python tests. I think you can either add/use the default database or
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61195313
[Test build #22580 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22580/consoleFull)
for PR 2685 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61203445
[Test build #22580 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22580/consoleFull)
for PR 2685 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61203450
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61203563
retest this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61203698
I think this failed on an already-flaky test, so let's try again.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61203788
Jenkins, retest this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61204241
[Test build #22588 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22588/consoleFull)
for PR 2685 at commit
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61209877
@scwf @zhzhan Just FYI, to run PySpark tests locally, you may first export
`SPARK_HOME` and `PYTHONPATH` as follows:
```bash
export SPARK_HOME=...
```
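A slightly fuller sketch of that environment setup; the checkout path here is an assumption, and `python/run-tests` is the script the earlier comments say dev/run-tests delegates to:
```shell
# Hypothetical path; adjust to your own Spark checkout (assumption, not from the thread).
SPARK_HOME="$HOME/workspace/spark"
# Make the pyspark package importable.
PYTHONPATH="$SPARK_HOME/python:$PYTHONPATH"
export SPARK_HOME PYTHONPATH
echo "SPARK_HOME=$SPARK_HOME"
# Then the PySpark suite can be run from the checkout:
#   cd "$SPARK_HOME" && ./python/run-tests
```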
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61209998
Cool, thanks for this.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61214247
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61214243
**[Test build #22588 timed
out](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22588/consoleFull)**
for PR 2685 at commit
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61214280
@pwendell Yesterday I tried to include both Kryo 2.21 and Kryo 2.22 in the assembly jar, so that the shaded and un-shaded Objenesis could co-exist. But to my
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61214412
retest this please
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61214405
add to whitelist
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61214875
@pwendell So I think another doable plan is to make a shaded Kryo 2.21
artifact. In this artifact, we leave the binary jar file untouched, but make
the POM consistent
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61214972
[Test build #22591 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22591/consoleFull)
for PR 2685 at commit
Github user zhzhan commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61216213
@liancheng do you mean the problem mentioned here?
https://github.com/EsotericSoftware/kryo/issues/189
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61217097
@liancheng I already published a version of Hive that uses Kryo 2.21. This
should no longer require 2.22 anywhere.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61221784
[Test build #22591 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22591/consoleFull)
for PR 2685 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61221786
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61222008
Yay! It passed!
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2685#issuecomment-61222108
So will you re-publish the Hive 0.13 jar, @pwendell? Or use 0.13.1a?