Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4611#issuecomment-74408762
[Test build #27510 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27510/consoleFull)
for PR 4611 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4602#issuecomment-74400196
[Test build #27499 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27499/consoleFull)
for PR 4602 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4506#issuecomment-74405134
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4592#issuecomment-74406090
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4608#issuecomment-74398014
[Test build #27497 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27497/consoleFull)
for PR 4608 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4527#issuecomment-74410038
[Test build #27513 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27513/consoleFull)
for PR 4527 at commit
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/4607#issuecomment-74405871
Yes, please close it.
`insert` is used by `INSERT INTO/OVERWRITE` and `DataFrame.insertInto`.
---
If your project is set up for it, you can reply to this email
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4563#issuecomment-74410802
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/4506#issuecomment-74404814
@marmbrus Seems there is a bug in `TestHive`, which loads the table
during logical plan analysis. I've fixed the logic; let's see if we need to
GitHub user chenghao-intel opened a pull request:
https://github.com/apache/spark/pull/4611
[SPARK-5825] [Spark Submit] Remove the double checking when killing process
`spark-daemon.sh` confirms the process id by fuzzy-matching the class
name while stopping the service,
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4527#issuecomment-74410073
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4527#issuecomment-74410071
[Test build #27513 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27513/consoleFull)
for PR 4527 at commit
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/3920#discussion_r24719372
--- Diff: examples/src/main/python/hbase_inputformat.py ---
@@ -16,6 +16,7 @@
#
import sys
+import simplejson as json
--- End diff --
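The line under review adds a hard dependency on `simplejson`. A common way to soften that in an example script is a try/except import fallback — a sketch of the idiom, not the patch itself:

```python
# Prefer simplejson when it is installed (a faster C implementation on
# older Pythons), but fall back to the stdlib json module so the example
# still runs without the extra dependency.
try:
    import simplejson as json  # optional third-party speedup
except ImportError:
    import json  # stdlib fallback

# Both modules expose the same core API, so downstream code is unchanged.
record = json.loads('{"qualifier": "value", "count": 3}')
print(record["count"])
```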
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/4611#issuecomment-74406965
#4382 failed because the `SparkSubmit` (HiveThriftServer) process cannot be killed in
unit testing. Probably we need to manually kill those processes, or restart the
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/4583#discussion_r24718371
--- Diff: ec2/spark_ec2.py ---
@@ -931,6 +947,22 @@ def deploy_files(conn, root_dir, opts, master_nodes,
slave_nodes, modules):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4602#issuecomment-74401862
[Test build #27499 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27499/consoleFull)
for PR 4602 at commit
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/4592#discussion_r24720235
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -664,6 +664,18 @@ def _api(self):
return _api
+def df_varargs_api(f, *args):
+
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/4592#discussion_r24720234
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -664,6 +664,18 @@ def _api(self):
return _api
+def df_varargs_api(f, *args):
+
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4382#issuecomment-74402603
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4453#issuecomment-74400098
[Test build #27498 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27498/consoleFull)
for PR 4453 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4610#issuecomment-74406236
[Test build #27509 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27509/consoleFull)
for PR 4610 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4610#issuecomment-74405412
[Test build #27506 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27506/consoleFull)
for PR 4610 at commit
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4592#discussion_r24720200
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -714,28 +726,28 @@ def count(self):
[Row(age=2, count=1), Row(age=5, count=1)]
Github user yanbohappy closed the pull request at:
https://github.com/apache/spark/pull/4607
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4585#issuecomment-74411009
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4609#issuecomment-74403617
[Test build #27503 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27503/consoleFull)
for PR 4609 at commit
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/4602#issuecomment-74402072
@marmbrus any more comments on this?
---
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/4608
SPARK-5795 [STREAMING] api.java.JavaPairDStream.saveAsNewAPIHadoopFiles may
not friendly to java
Revise the JavaPairDStream API declaration of the saveAs Hadoop methods to allow
them to be called directly
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4382#issuecomment-74401576
[Test build #27501 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27501/consoleFull)
for PR 4382 at commit
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/3920#discussion_r24719439
--- Diff:
examples/src/main/scala/org/apache/spark/examples/pythonconverters/HBaseConverters.scala
---
@@ -18,20 +18,34 @@
package
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4382#issuecomment-74402600
[Test build #27501 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27501/consoleFull)
for PR 4382 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4563#issuecomment-74413558
[Test build #27517 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27517/consoleFull)
for PR 4563 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4563#issuecomment-74413560
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4613#issuecomment-74413982
[Test build #27518 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27518/consoleFull)
for PR 4613 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4611#issuecomment-74414416
I don't think we want to take out this check entirely. This was changed
recently to not just test whether the process can be killed (with `kill -0`) as
a way of testing
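The `kill -0` check mentioned here sends signal 0, which delivers nothing but fails if the process is gone — a way to probe liveness without killing. A minimal Python sketch of the same idea (illustrative only; the actual check lives in `spark-daemon.sh`):

```python
import os

def process_alive(pid):
    """Probe a pid the way `kill -0 <pid>` does: signal 0 delivers
    nothing, but the call raises if the process no longer exists."""
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:
        return False
    except PermissionError:
        # The process exists but belongs to another user.
        return True

# The current process is certainly alive.
print(process_alive(os.getpid()))
```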
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4527#issuecomment-74410504
[Test build #27514 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27514/consoleFull)
for PR 4527 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4587#issuecomment-74400194
[Test build #27500 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27500/consoleFull)
for PR 4587 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4592#issuecomment-74410972
[Test build #27511 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27511/consoleFull)
for PR 4592 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4609#issuecomment-74405139
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4453#issuecomment-74401698
[Test build #27498 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27498/consoleFull)
for PR 4453 at commit
Github user florianverhein commented on a diff in the pull request:
https://github.com/apache/spark/pull/4583#discussion_r24718645
--- Diff: ec2/spark_ec2.py ---
@@ -931,6 +947,22 @@ def deploy_files(conn, root_dir, opts, master_nodes,
slave_nodes, modules):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4610#issuecomment-74405734
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/4585#discussion_r24720745
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Column.scala ---
@@ -576,6 +578,25 @@ trait Column extends DataFrame {
override def as(alias:
GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/4613
[Minor] [SQL] Renames stringRddToDataFrame to stringRddToDataFrameHolder
for consistency
You can merge this pull request into a Git repository by running:
$ git pull
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4585#issuecomment-74411006
[Test build #27512 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27512/consoleFull)
for PR 4585 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4610#issuecomment-74406985
[Test build #27509 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27509/consoleFull)
for PR 4610 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4592#issuecomment-74412125
[Test build #27519 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27519/consoleFull)
for PR 4592 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4602#issuecomment-74401863
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4607#issuecomment-74403043
[Test build #27502 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27502/consoleFull)
for PR 4607 at commit
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/4610#discussion_r24720131
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/json/JSONRelation.scala ---
@@ -104,21 +116,35 @@ private[sql] case class JSONRelation(
override
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4453#issuecomment-74412266
@mengxr I tried removing all but the Linux 64-bit native code (it does not
package libgfortran actually, as we know from experience and
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4592#issuecomment-74404099
[Test build #27504 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27504/consoleFull)
for PR 4592 at commit
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/4609
[SPARK-5824] [SQL] add null format in ctas and set default col comment to
null
You can merge this pull request into a Git repository by running:
$ git pull
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4592#issuecomment-74410977
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/4607#issuecomment-74405239
@yanbohappy Thank you for working on it! For SPARK-5746, I think it is
better to add an analysis rule to do a check and throw an exception when you
find that users try to
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4607#issuecomment-74403044
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4602#discussion_r24718555
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -137,6 +137,11 @@ class Analyzer(catalog: Catalog,
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4610#issuecomment-74405439
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4587#issuecomment-74401911
[Test build #27500 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27500/consoleFull)
for PR 4587 at commit
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/4380#issuecomment-74400909
Yeah, got it, but that may be much later. Is it possible to let this in for
the transition, since
1 this syntax is a basic functional point in Hive QL and it is useful from
our
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/4609#issuecomment-74405404
LGTM
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4592#issuecomment-74406088
[Test build #27504 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27504/consoleFull)
for PR 4592 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4591#issuecomment-74399544
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4592#discussion_r24720203
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameImpl.scala
---
@@ -88,12 +88,24 @@ private[sql] class DataFrameImpl protected[sql](
}
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4611#issuecomment-74406895
[Test build #27510 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27510/consoleFull)
for PR 4611 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4607#issuecomment-74403026
[Test build #27502 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27502/consoleFull)
for PR 4607 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4612#issuecomment-74413661
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4565#issuecomment-74414488
@XuTingjun I'm not sure what you're responding to. I'm first suggesting
that this code change can be simpler and doesn't seem to need a new flag. But
I'm also suggesting
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/4610#discussion_r24720095
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/json/JSONRelation.scala ---
@@ -67,7 +67,6 @@ private[sql] class DefaultSource
case
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/4563#issuecomment-74410672
Squashed all commits to ease rebasing.
---
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/4613#issuecomment-74411695
cc @rxin
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/4592#discussion_r24720617
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameImpl.scala
---
@@ -88,12 +88,24 @@ private[sql] class DataFrameImpl protected[sql](
}
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/4565#issuecomment-74399664
Thanks, @srowen. I think there are users who start a long-running
SparkContext and add jars to run different cases.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4611#issuecomment-74408764
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4602#discussion_r24718498
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -137,6 +137,11 @@ class Analyzer(catalog: Catalog,
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4591#issuecomment-74399542
[Test build #27496 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27496/consoleFull)
for PR 4591 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4506#issuecomment-74404516
[Test build #27505 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27505/consoleFull)
for PR 4506 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/4592#discussion_r24720621
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -664,6 +664,18 @@ def _api(self):
return _api
+def df_varargs_api(f, *args):
+
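The `df_varargs_api` stub in this diff is cut off, but the surrounding `_api` helper suggests a factory that turns a documented stub method into one forwarding its variadic arguments to a backend call of the same name. A self-contained sketch of that pattern — `varargs_api`, `Stats`, and `FakeBackend` are hypothetical stand-ins, not the PySpark code:

```python
import functools

def varargs_api(f):
    """Turn a stub method into one that forwards its variadic arguments
    to a backend method of the same name (as df_varargs_api appears to
    do for the JVM DataFrame in this diff)."""
    name = f.__name__

    @functools.wraps(f)  # keep the stub's name and docstring
    def _api(self, *args):
        return getattr(self._backend, name)(*args)
    return _api

class Stats:
    def __init__(self, backend):
        self._backend = backend

    @varargs_api
    def mean(self, *cols):
        """Compute the mean of the given columns."""

class FakeBackend:
    def mean(self, *cols):
        return {c: 0.0 for c in cols}

# Both column names are forwarded to the backend's mean().
print(Stats(FakeBackend()).mean("age", "height"))
```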
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4609#issuecomment-74407418
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4527#issuecomment-74410182
[Test build #27514 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27514/consoleFull)
for PR 4527 at commit
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4585#discussion_r24720118
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Column.scala ---
@@ -576,6 +578,25 @@ trait Column extends DataFrame {
override def as(alias:
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/4607#issuecomment-74405659
Actually, I think we just need to throw an exception if the delete returns
false when we try to do OVERWRITE (we also need to make the change at
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4602#discussion_r24718543
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/Generate.scala ---
@@ -34,17 +36,22 @@ import
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4610#issuecomment-74405438
[Test build #27506 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27506/consoleFull)
for PR 4610 at commit
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/4591#issuecomment-74406082
It seems to me it is a corner case. We might be better off just documenting
it rather than adding an empty partition to the RDD, since conceptually I
expect an empty RDD to
Github user yanbohappy commented on the pull request:
https://github.com/apache/spark/pull/4607#issuecomment-74405767
Actually, the insert function
(https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/json/JSONRelation.scala#L107)
will not be
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4612#issuecomment-74413641
[Test build #27516 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27516/consoleFull)
for PR 4612 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4613#issuecomment-74413983
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user justinuang commented on the pull request:
https://github.com/apache/spark/pull/3173#issuecomment-74416823
Hi, this looks great! Is there a reason why sort-based join is not in Spark
core, only in Spark SQL?
---
Github user Liuchang0812 closed the pull request at:
https://github.com/apache/spark/pull/4606
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4612#discussion_r24722382
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -74,10 +74,12 @@ class FileInputDStream[K, V, F :
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/4612#discussion_r24722476
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -74,10 +74,12 @@ class FileInputDStream[K, V, F :
GitHub user azagrebin opened a pull request:
https://github.com/apache/spark/pull/4616
[SPARK-3340] Deprecate ADD_JARS and ADD_FILES
I created a patch that disables the environment variables.
The Scala or Python shell then logs a warning message to notify the user about
the
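The deprecation path described in this PR — keep honoring the old variable but warn users toward the command-line option — can be sketched as follows (`collect_extra_jars` is a hypothetical illustration; the real logic lives in the shell launch scripts):

```python
import os
import warnings

def collect_extra_jars(environ=None):
    """Still honor the deprecated ADD_JARS environment variable so
    existing setups keep working, but emit a DeprecationWarning
    steering users to the --jars option instead."""
    if environ is None:
        environ = os.environ
    jars = environ.get("ADD_JARS")
    if jars:
        warnings.warn(
            "ADD_JARS is deprecated; use the --jars command-line option instead.",
            DeprecationWarning,
        )
        return jars.split(",")
    return []

print(collect_extra_jars({"ADD_JARS": "a.jar,b.jar"}))
```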
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4614#issuecomment-74422587
[Test build #27521 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27521/consoleFull)
for PR 4614 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4614#issuecomment-74422593
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4592#issuecomment-74415009
[Test build #27519 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27519/consoleFull)
for PR 4592 at commit
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/4615
[SPARK-5827][SQL] Add missing import in the example of SqlContext
If one tries the example via copy-paste, it throws an exception.
You can merge this pull request into a Git repository by running:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4614#issuecomment-74417688
[Test build #27520 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27520/consoleFull)
for PR 4614 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4614#issuecomment-74417720
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4612#discussion_r24722873
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -74,10 +74,12 @@ class FileInputDStream[K, V, F :