Github user radekg commented on the issue:
https://github.com/apache/spark/pull/9608
@cherryii feel free to reach out if you need additional help. I'm sure your
issues can be sorted out.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user radekg commented on the issue:
https://github.com/apache/spark/pull/9608
It's not going to work with a bridge network. The options will work fine with
host networking; bridge is a different story.
---
Github user radekg commented on the issue:
https://github.com/apache/spark/pull/9608
Whether you run this in bridge or host mode, the best idea would be to
request a number of ports. In the case of the `HOST` network, only the
`port` needs to be requested. If you can use Marathon
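The idea above can be sketched as configuration. This is a hypothetical example; the port numbers and the Marathon fragment are made up for illustration, not taken from the PR:

```
# Marathon app fragment (hypothetical): request host ports from Mesos.
"ports": [31000, 31001, 31002]

# spark-defaults.conf: with HOST networking, pinning Spark's listening
# ports to the requested range is enough -- no address translation occurs.
spark.driver.port        31000
spark.fileserver.port    31001
spark.blockManager.port  31002
```

Under bridge networking, binding these ports alone is not enough, because peers must connect to the host-side (NATed) ports rather than the container-side ones.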
Github user radekg commented on the issue:
https://github.com/apache/spark/pull/9608
Is this running on Mesos?
---
Github user radekg commented on the issue:
https://github.com/apache/spark/pull/9608
Are you deploying with Docker? If so, what problems are you facing when
using host networking?
---
Github user radekg commented on the issue:
https://github.com/apache/spark/pull/9608
@cherryii FYI, it's possible to achieve the same setup without using bridge
networking; if you can use the host network, it's perfectly fine to set this up.
---
Github user radekg commented on the pull request:
https://github.com/apache/spark/pull/9608#issuecomment-196016688
@tnachen Keeping up with the pace of changes to the Spark code was a bit too
much.
---
Github user radekg closed the pull request at:
https://github.com/apache/spark/pull/9608
---
Github user radekg commented on a diff in the pull request:
https://github.com/apache/spark/pull/9608#discussion_r48834671
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -854,7 +854,8 @@ private[spark] object Utils extends Logging {
* Get the local
Github user radekg commented on the pull request:
https://github.com/apache/spark/pull/9608#issuecomment-168972745
@tnachen what would be the best place to publish such documentation?
Regarding the `worth nothing` comment: the whole concept behind this PR is to allow
running the Spark master
Github user radekg commented on the pull request:
https://github.com/apache/spark/pull/9608#issuecomment-168344020
I will need to verify this patch with torrent broadcast.
---
Github user radekg commented on the pull request:
https://github.com/apache/spark/pull/9608#issuecomment-167826085
I have 3 tests failing locally, but I don't think they are related to my
changes. `scalastyle` seems to be OK now. Failing tests:
```
- launch s
```
Github user radekg commented on the pull request:
https://github.com/apache/spark/pull/9608#issuecomment-167755475
I will take a look at those scalastyle errors.
---
Github user radekg commented on a diff in the pull request:
https://github.com/apache/spark/pull/9608#discussion_r48442211
--- Diff: core/src/main/scala/org/apache/spark/rpc/netty/NettyRpcEnv.scala
---
@@ -122,7 +122,8 @@ private[netty] class NettyRpcEnv(
@Nullable
Github user radekg commented on the pull request:
https://github.com/apache/spark/pull/9608#issuecomment-16247
Regarding `TorrentBroadcast`: I think some magic needs to be done
around `blockManager.port`.
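For illustration, one hedged way the block-manager side might be configured under bridge networking. Note that `spark.blockManager.advertisedPort` is an assumed property name, not one confirmed by this PR, and the port numbers are invented:

```
# spark.blockManager.port is the container-side port the block manager binds.
# An advertised counterpart (assumed name, not from the PR) would tell peers
# the host-side port that NAT maps onto it.
spark.blockManager.port            31002
spark.blockManager.advertisedPort  32002
```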
---
Github user radekg commented on the pull request:
https://github.com/apache/spark/pull/9608#issuecomment-16205
I've added the code for 1.6. It works (tasks are finishing successfully).
However, I am not 100% sure what the impact of this change is. It would be great
if som
Github user radekg commented on the pull request:
https://github.com/apache/spark/pull/9608#issuecomment-159867250
@skonto no, the NettyRpcEnv has not been tested yet; let me have a look
today or tomorrow. I'll post an update here.
---
Github user radekg commented on a diff in the pull request:
https://github.com/apache/spark/pull/9608#discussion_r45681436
--- Diff: core/src/main/scala/org/apache/spark/HttpFileServer.scala ---
@@ -42,10 +42,11 @@ private[spark] class HttpFileServer(
fileDir.mkdir
Github user radekg commented on a diff in the pull request:
https://github.com/apache/spark/pull/9608#discussion_r44563045
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -810,7 +810,7 @@ private[spark] object Utils extends Logging {
* Get the local
Github user radekg commented on a diff in the pull request:
https://github.com/apache/spark/pull/9608#discussion_r44562422
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -810,7 +810,7 @@ private[spark] object Utils extends Logging {
* Get the local
Github user radekg commented on a diff in the pull request:
https://github.com/apache/spark/pull/9608#discussion_r44533312
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -810,7 +810,7 @@ private[spark] object Utils extends Logging {
* Get the local
Github user radekg commented on a diff in the pull request:
https://github.com/apache/spark/pull/9608#discussion_r44533187
--- Diff: core/src/main/scala/org/apache/spark/HttpFileServer.scala ---
@@ -42,10 +42,11 @@ private[spark] class HttpFileServer(
fileDir.mkdir
GitHub user radekg opened a pull request:
https://github.com/apache/spark/pull/9608
[SPARK-11638] [Mesos + Docker Bridge networking]: Run Spark on Mesos, …
… in Docker with Bridge networking
Provides `spark.driver.advertisedPort`, `spark.fileserver.advertisedPort
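A hedged sketch of how the proposed properties might be used under bridge networking: Spark binds its container-side ports as usual, while the advertised ports tell peers which host-side (NATed) ports to connect to. The port numbers here are illustrative, not from the PR:

```
# Container-side ports Spark actually binds:
spark.driver.port                31000
spark.fileserver.port            31001

# Host-side (NATed) ports peers should connect to,
# via the properties proposed by this PR:
spark.driver.advertisedPort      32000
spark.fileserver.advertisedPort  32001
```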