Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/14507
For batch jobs running for, say, ~10 hours with a 3 sec update frequency, there
would be 18k lines from the progress bar. That sounds like a lot. In Hadoop land
they used to have 3 sec but it was made
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/14507
@srowen : Our daily jobs run remotely and users see the stdout / stderr logs in
case they want to dig into what's going on. With the current update frequency,
longer jobs' logs tend to be flooded
GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/14507
[SPARK-16919] Configurable update interval for console progress bar
## What changes were proposed in this pull request?
Currently the update interval for the console progress bar is
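For context, the change this PR proposes would be used roughly as in the sketch
below. The config key `spark.ui.consoleProgress.update.interval` is an assumption
taken from the PR title and is not confirmed by the truncated description above.
```scala
import org.apache.spark.sql.SparkSession

// Sketch: slow the console progress bar down for long-running batch jobs so
// that stdout/stderr logs are not flooded with progress lines.
val spark = SparkSession.builder()
  .appName("long-batch-job")
  .config("spark.ui.showConsoleProgress", "true")
  // assumed key: refresh every 60 s instead of every couple of seconds
  .config("spark.ui.consoleProgress.update.interval", "60000")
  .getOrCreate()
```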
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/14475
Addressed unit test failure due to `SparkEnv.get()` returning null
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/14475#discussion_r73283038
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java
---
@@ -50,7 +55,19 @@ public
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/14475#discussion_r73283047
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java
---
@@ -31,6 +34,8 @@
* of the file format
GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/14475
[SPARK-16862] Configurable buffer size in `UnsafeSorterSpillReader`
## What changes were proposed in this pull request?
Jira: https://issues.apache.org/jira/browse/SPARK-16862
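A minimal usage sketch of what a configurable spill-reader buffer looks like
from the caller's side; the config key and value below are assumptions based on
the PR title, not taken from the truncated description.
```scala
import org.apache.spark.SparkConf

// Sketch: raise the read-ahead buffer used when reading spill files back,
// trading a little memory per reader for fewer disk reads during external sorts.
val conf = new SparkConf()
  .setAppName("spill-heavy-job")
  // assumed key and value; the merged change defines the real name and default
  .set("spark.unsafe.sorter.spill.reader.buffer.size", "4m")
```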
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/14475
ok to test
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/14267
Thanks for notifying @rxin.
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/14202
@zsxwing : I am done with the change(s) you suggested.
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/14202
Jenkins test this please
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/14202#discussion_r71010365
--- Diff:
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
---
@@ -147,7 +148,10 @@ private[spark] class
Github user tejasapatil closed the pull request at:
https://github.com/apache/spark/pull/14205
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/14202
Hi @zsxwing , can you please review this diff ?
GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/14205
[SPARK-16547] [CORE] EventLoggingListener to auto create log base dir if it
does not exist
## What changes were proposed in this pull request?
When the HDFS namenode gets changed or
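A minimal sketch, assuming Hadoop's `FileSystem` API, of the auto-create
behaviour the PR title describes; the helper name and path handling are
illustrative and not the actual `EventLoggingListener` code.
```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Sketch: create the event-log base directory (including parents) instead of
// failing when it is missing, e.g. after the HDFS namenode gets changed.
def ensureLogBaseDir(baseDirUri: String): Unit = {
  val path = new Path(baseDirUri)
  val fs: FileSystem = path.getFileSystem(new Configuration())
  if (!fs.exists(path)) {
    fs.mkdirs(path) // succeeds even if another process creates it concurrently
  }
}
```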
GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/14202
[SPARK-16230] [CORE] CoarseGrainedExecutorBackend to self kill if there is
an exception while creating an Executor
## What changes were proposed in this pull request?
With the fix
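A self-contained sketch of the fail-fast pattern named in the PR title, not the
actual `CoarseGrainedExecutorBackend` code: if the executor cannot be
constructed, terminate the backend instead of leaving it alive in a
half-initialized state. The helper and exit path below are purely illustrative.
```scala
import scala.util.control.NonFatal

object FailFastSketch {
  // Try to build the executor; on failure, log and exit rather than keep a
  // backend process around that can never run tasks.
  def createExecutorOrDie[T](create: () => T): T = {
    try {
      create()
    } catch {
      case NonFatal(e) =>
        System.err.println(s"Unable to create executor: ${e.getMessage}")
        sys.exit(1) // the real backend exits through its own shutdown path
    }
  }
}
```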
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/14033#discussion_r69667347
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/generators.scala
---
@@ -94,6 +94,59 @@ case class UserDefinedGenerator
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/14033#discussion_r69667329
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/generators.scala
---
@@ -94,6 +94,59 @@ case class UserDefinedGenerator
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/14033#discussion_r69648741
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/generators.scala
---
@@ -94,6 +94,59 @@ case class UserDefinedGenerator
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13834
@srowen : I also changed the `NonFatal` to `Throwable` to account for your
comment at : https://github.com/apache/spark/pull/13834#issuecomment-230238421
The `proc` should always be terminated
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13834
`./dev/mima` passed on my box.
Jenkins re-test please
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13834#discussion_r69591839
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformation.scala
---
@@ -312,17 +312,17 @@ private class
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13834
@srowen : In case of exception, we `destroy()` the `proc` which cleans up
all the associated streams :
http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8u40-b25/java/lang
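A minimal sketch of the clean-up contract discussed in these comments: on any
error (hence `Throwable`, per the comment above about switching from
`NonFatal`), destroy the child process so its associated streams are released.
Names are illustrative, not the `ScriptTransformation` code itself.
```scala
// Sketch: run the body that feeds/reads the child process; on any failure,
// destroy() the process, which also cleans up its stdin/stdout/stderr streams.
def withProcessCleanup[T](proc: Process)(body: => T): T = {
  try {
    body
  } catch {
    case t: Throwable =>
      proc.destroy()
      throw t
  }
}
```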
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13834
ok to test
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13834
Jenkins retest this please
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13834
Looks like Jenkins is having issues. Thanks to @srowen for triggering retests.
I will try once more.
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13857
@srowen : Since there is a pluggable cluster manager (CM) support in Spark,
it is possible to use CM apart from YARN or Mesos which can follow its own
naming convention for executors. Spark
GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/13834
[TRIVIAL] [CORE] [ScriptTransform] move printing of stderr buffer before
closing the outstream
## What changes were proposed in this pull request?
Currently, if due to some failure
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13231
Will re-open when I am ready
Github user tejasapatil closed the pull request at:
https://github.com/apache/spark/pull/13231
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13563
@zsxwing : Updated the PR title, description and the Jira title. Thanks for
your review !!
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13563
@zsxwing : I am done with changes from my end. Let me know if you have any
more comments.
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13563
ok to test
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13563
ok to test
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13664
ok to test
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13563
ok to test
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13563
Jenkins retest please
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13563
ok to test
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13563#discussion_r66873430
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -734,12 +737,14 @@ abstract class RDD[T: ClassTag](
printPipeContext
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13563
ok to test
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13563
@rxin @zsxwing Thanks for confirming. I will work on the change. Just to
clarify, I will use the configuration value for interactions with all the
streams (stderr, stdout and stdin).
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13563
@sadikovi : That will not work in all cases. See :
https://github.com/apache/spark/pull/13563#discussion_r66318006
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13563#discussion_r66383126
--- Diff: core/src/main/scala/org/apache/spark/rdd/PipedRDD.scala ---
@@ -129,7 +130,7 @@ private[spark] class PipedRDD[T: ClassTag](
override
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13563#discussion_r66361928
--- Diff: core/src/main/scala/org/apache/spark/rdd/PipedRDD.scala ---
@@ -171,7 +172,7 @@ private[spark] class PipedRDD[T: ClassTag](
}.start
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13563#discussion_r66361939
--- Diff: core/src/main/scala/org/apache/spark/rdd/PipedRDD.scala ---
@@ -147,7 +148,7 @@ private[spark] class PipedRDD[T: ClassTag](
override
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13563
@zsxwing : I do like the idea of making this completely configurable by the
user. Are you ok with adding a new param to the `pipe()` function? Another
option could be to create a new spark config
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13563
ok to test
GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/13563
[SPARK-15826] [CORE] PipedRDD to strictly use UTF-8 and not rely on default
encoding
## What changes were proposed in this pull request?
Link to jira which describes the problem
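A minimal sketch of the encoding fix the PR title describes: wrap the piped
process's streams with an explicit charset instead of relying on the JVM
default. Per the review discussion above, the charset was to be made
configurable; the hard-coded UTF-8 here is only for illustration.
```scala
import java.io.{BufferedReader, InputStreamReader, OutputStreamWriter, PrintWriter}
import java.nio.charset.StandardCharsets

// Sketch: explicit UTF-8 both for reading the child's stdout and for writing
// to its stdin, instead of the platform default encoding.
def wrapStreams(proc: Process): (BufferedReader, PrintWriter) = {
  val stdout = new BufferedReader(
    new InputStreamReader(proc.getInputStream, StandardCharsets.UTF_8))
  val stdin = new PrintWriter(
    new OutputStreamWriter(proc.getOutputStream, StandardCharsets.UTF_8))
  (stdout, stdin)
}
```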
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13492#discussion_r65732323
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/rules.scala
---
@@ -78,8 +78,8 @@ private[sql] object
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13451#discussion_r65553085
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -317,17 +317,19 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13419
I guess that the caching is done over multiple nodes. If the data for a
dataset is updated physically and some of the nodes where the data was cached
go down, would the existing `cached
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13452#discussion_r65467411
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/sources/BucketedWriteSuite.scala
---
@@ -59,11 +59,18 @@ class BucketedWriteSuite extends
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13452#discussion_r65467221
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -502,8 +503,18 @@ final class DataFrameWriter private[sql](df
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13059
I initially did not know that DESC order is inherently not supported in Spark,
so I had worked on this PR. One could add that in the engine but that's not my
priority right now. I am working on
Github user tejasapatil closed the pull request at:
https://github.com/apache/spark/pull/13059
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13409#discussion_r65193655
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveQuerySuite.scala
---
@@ -1057,7 +1057,7 @@ class HiveQuerySuite extends
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13409#discussion_r65193366
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -170,6 +170,13 @@ case class InsertIntoHiveTable
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/13351#issuecomment-222583109
Jenkins, retest this please
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/13351#issuecomment-222573544
ok to test.
I ran `sbt core/test:test` locally and it worked. Not sure what failed on
Jenkins because it says `[info] Passed: Total 1832, Failed 0
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13351#discussion_r65110502
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2345,28 +2345,23 @@ private[spark] class RedirectThread(
*/
private
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13351#discussion_r65110506
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2345,28 +2345,23 @@ private[spark] class RedirectThread(
*/
private
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13351#discussion_r65110497
--- Diff: core/src/test/scala/org/apache/spark/util/UtilsSuite.scala ---
@@ -686,9 +686,22 @@ class UtilsSuite extends SparkFunSuite with
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13351#discussion_r65015447
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2345,17 +2345,19 @@ private[spark] class RedirectThread(
*/
private
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13351#discussion_r65015360
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2345,17 +2345,19 @@ private[spark] class RedirectThread(
*/
private
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13395#discussion_r65013372
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -936,7 +936,47 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13395#discussion_r65013342
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -936,7 +936,47 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/13396#issuecomment-222387582
nit: the title of the PR could be made better:
`[SPARK-15641] HistoryServer to not show invalid date for incomplete
application`
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13396#discussion_r65013295
--- Diff: core/src/main/resources/org/apache/spark/ui/static/historypage.js
---
@@ -54,7 +54,8 @@ function makeIdNumeric(id
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/13351#issuecomment-222074923
ok to test
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/12194#discussion_r64864622
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformation.scala
---
@@ -127,45 +127,78 @@ case class
GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/13351
[SPARK-15601][CORE] CircularBuffer's toString() to print only the contents
written if buffer isn't full
## What changes were proposed in this pull request?
[~sameerag] sugg
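A self-contained sketch of the behaviour the PR title asks for, not Spark's
actual `CircularBuffer`: while the buffer has not wrapped, `toString` returns
only the characters actually written; once it wraps, it returns the retained
window oldest-first.
```scala
class CircularBufferSketch(capacity: Int = 1024) {
  private val buf = new Array[Char](capacity)
  private var pos = 0          // next write position
  private var wrapped = false  // true once old data has been overwritten

  def write(s: String): Unit = s.foreach { c =>
    buf(pos) = c
    pos = (pos + 1) % capacity
    if (pos == 0) wrapped = true
  }

  override def toString: String =
    if (!wrapped) new String(buf, 0, pos) // only what was actually written
    else new String(buf, pos, capacity - pos) + new String(buf, 0, pos)
}
```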
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/12194#issuecomment-222068305
@rxin : Sorry for the delay... was caught up in some other things. I have
updated the PR now with the review comments.
Re your suggestion about removing hive
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/12194#discussion_r64861172
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformation.scala
---
@@ -127,45 +127,78 @@ case class
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/12194#discussion_r64861159
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformation.scala
---
@@ -127,45 +127,78 @@ case class
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13302#discussion_r64687912
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -288,9 +288,10 @@ case class TruncateTableCommand
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/12194#issuecomment-220848477
Can anyone please review this PR ?
GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/13231
[SPARK-15453] [SQL] SMB Join for datasource
## What changes were proposed in this pull request?
Currently for bucketed and sorted tables, SORT MERGE JOIN doesn't use this
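The PR targets sort-merge joins over bucketed and sorted datasource tables; for
context, such tables are written with the public `DataFrameWriter` API roughly
as below (table and column names are made up for illustration).
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("smb-join-example").getOrCreate()

// Write a bucketed, sorted table; a sort-merge join on user_id between two
// such tables can then avoid re-shuffling and re-sorting the data.
val events = spark.range(0, 1000).withColumnRenamed("id", "user_id")
events.write
  .bucketBy(8, "user_id")
  .sortBy("user_id")
  .saveAsTable("events_bucketed")
```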
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/13231#issuecomment-220714715
ok to test
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/13174#issuecomment-220108276
@srowen : Sorry about this. In order to not let this happen again, are we going
to do something? If trunk needs to work with Java 7 or higher, then Jenkins tests
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/12194#issuecomment-220031022
Can anyone please review this PR ?
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/12194#issuecomment-219941867
ok to test
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/12194#issuecomment-219853586
ok to test
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/13042#issuecomment-219761677
Ok. Changed TestShuffleDataContext to use the method in JavaUtils
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/13042#issuecomment-219506107
- I am fine with changing `TestShuffleDataContext` to use the method in
JavaUtils.
- For `Utils.deleteRecursively`, I am not sure if that will have an impact on
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/13042#issuecomment-219128637
There are no outstanding review comments left to address here. If
everything looks good, can someone merge this PR ? Thanks !!
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/13042#issuecomment-219128249
@vanzin : I haven't gathered numbers with `Files.walkFileTree`. Based on
its source code, it will have the same quirks wrt race conditions and speed. I
be
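For reference, a `Files.walkFileTree`-based recursive delete looks roughly like
the generic sketch below (this is not Spark's `JavaUtils.deleteRecursively`);
as the comment notes, it is subject to the same race conditions if files change
underneath it.
```scala
import java.io.IOException
import java.nio.file.{FileVisitResult, Files, Path, SimpleFileVisitor}
import java.nio.file.attribute.BasicFileAttributes

// Sketch: delete files first, then each directory on the way back up.
def deleteRecursively(root: Path): Unit = {
  Files.walkFileTree(root, new SimpleFileVisitor[Path] {
    override def visitFile(file: Path, attrs: BasicFileAttributes): FileVisitResult = {
      Files.delete(file)
      FileVisitResult.CONTINUE
    }
    override def postVisitDirectory(dir: Path, exc: IOException): FileVisitResult = {
      Files.delete(dir)
      FileVisitResult.CONTINUE
    }
  })
}
```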
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/13086#issuecomment-219080926
@rxin : Thanks for review. I have made the suggested change.
```
scala> import org.apache.spark.sql._
import org.apache.spark.sql._
sc
GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/13086
[TRIVIAL][Doc] SparkSession class doc example correction
## What changes were proposed in this pull request?
Was trying out `SparkSession` for the first time and the given class doc
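The class doc example being corrected is the standard builder pattern; it reads
roughly as in the sketch below (the config key is the placeholder the doc
itself uses).
```scala
import org.apache.spark.sql.SparkSession

// Sketch of the builder-pattern example from the SparkSession class doc.
val spark = SparkSession.builder()
  .master("local")
  .appName("Word Count")
  .config("spark.some.config.option", "some-value")
  .getOrCreate()
```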
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/13059#issuecomment-218901875
Came across https://github.com/apache/spark/pull/12759/ and realised that
DESC ordering is not supported inherently.
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/13042#issuecomment-218835975
Not sure if the build failure was related to my recent change(s). Retest
please.
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13042#discussion_r63051771
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java
---
@@ -79,14 +81,32 @@ public static String bytesToString
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/13042#issuecomment-218799080
ok to test
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13042#discussion_r63044500
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java
---
@@ -109,6 +129,40 @@ public static void
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13042#discussion_r62960246
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java
---
@@ -109,6 +128,27 @@ public static void
GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/13059
[SPARK-15275] [SQL] CatalogTable should store sort ordering for sorted
columns
## What changes were proposed in this pull request?
Jira link : https://issues.apache.org/jira/browse
Github user tejasapatil commented on the pull request:
https://github.com/apache/spark/pull/13042#issuecomment-218540631
@podwhitehawk I agree with @srowen about a single argument being passed.
Generally speaking, there is no maximum path length in Unix, but there might be
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13042#discussion_r62808325
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java
---
@@ -109,6 +128,27 @@ public static void
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13042#discussion_r62808334
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java
---
@@ -109,6 +128,27 @@ public static void
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13042#discussion_r62808363
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java
---
@@ -109,6 +128,27 @@ public static void
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13042#discussion_r62808294
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java
---
@@ -79,14 +80,32 @@ public static String bytesToString