Github user neoremind commented on the issue:
https://github.com/apache/spark/pull/18964
@zsxwing Thanks for reviewing. The project I mentioned above is for study
purposes, and I hope it will help others who are interested. I totally
agree that Spark RPC is mainly for internal use
Github user neoremind commented on a diff in the pull request:
https://github.com/apache/spark/pull/18964#discussion_r135081475
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/client/TransportClientFactory.java
---
@@ -210,6 +210,14 @@ private
Github user neoremind commented on the issue:
https://github.com/apache/spark/pull/18964
@zsxwing I did create a performance test against Spark RPC; the test
results can be found
[here](https://github.com/neoremind/kraps-rpc#4-performance-test). Note that I
created the project
Github user neoremind commented on the issue:
https://github.com/apache/spark/pull/18964
@cloud-fan would you take a look at the PR? The update is very simple.
Thanks very much!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub
Github user neoremind commented on the issue:
https://github.com/apache/spark/pull/18965
@srowen I see your concern: it is more internally oriented and may be updated
by developers as the library evolves. I will close the PR then. Thanks for reviewing!
---
Github user neoremind closed the pull request at:
https://github.com/apache/spark/pull/18965
---
Github user neoremind commented on the issue:
https://github.com/apache/spark/pull/18965
I see. Anyway, this is what I found when I dug into the wire protocol of
Spark RPC, since the wire format is a big part of understanding the message
structure. If someone thinks this is not necessary I
Github user neoremind commented on the issue:
https://github.com/apache/spark/pull/18964
Not yet, since it is OK to keep the buffer size at the system default value,
but making it configurable for users who would like to specify it makes sense.
I also notice that Spark RPC by default uses
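The default-versus-configured behavior discussed above can be sketched with plain JDK sockets (Spark itself sets these via Netty channel options, so class and method names here are illustrative only, not Spark's actual code): apply a user-specified buffer size only when one is given, otherwise leave the OS default untouched.

```java
import java.net.Socket;
import java.net.SocketException;

// Illustrative sketch, not Spark code: buffer sizes <= 0 mean
// "keep the operating system's default".
public class BufferTuning {
    public static void tune(Socket socket, int rcvBuf, int sndBuf) throws SocketException {
        if (rcvBuf > 0) {
            socket.setReceiveBufferSize(rcvBuf); // corresponds to SO_RCVBUF
        }
        if (sndBuf > 0) {
            socket.setSendBufferSize(sndBuf);    // corresponds to SO_SNDBUF
        }
    }

    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket()) {          // unconnected socket; options only
            tune(s, 64 * 1024, 64 * 1024);
            // The OS may round or double the requested sizes,
            // so we only check that the values are positive.
            System.out.println(s.getReceiveBufferSize() > 0
                && s.getSendBufferSize() > 0);
        }
    }
}
```

Note that the kernel treats these values as hints and may adjust them, which is one reason leaving the system default in place is a reasonable fallback.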
Github user neoremind commented on the issue:
https://github.com/apache/spark/pull/18965
@jerryshao please review my separate PR. Thanks!
---
Github user neoremind commented on the issue:
https://github.com/apache/spark/pull/18964
@jerryshao please review my separate PR. Thanks!
---
Github user neoremind commented on the issue:
https://github.com/apache/spark/pull/18922
@jerryshao Thanks for your review! Per your advice, I have split the
issue into separate PRs:
https://github.com/apache/spark/pull/18964
and
https://github.com/apache/spark
GitHub user neoremind opened a pull request:
https://github.com/apache/spark/pull/18965
[SPARK-21749][CORE] Add comments for MessageEncoder to explain the wire
format
## What changes were proposed in this pull request?
Spark RPC is built upon the TCP tier and leverages Netty
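As a rough illustration of why documenting the wire format helps, here is a hypothetical length-prefixed frame in the spirit of what a message encoder emits. The exact field layout is an assumption for this sketch, and `FrameSketch` is not a Spark class:

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of a length-prefixed RPC frame:
// [8-byte frame length][1-byte message type][payload].
// The length field counts itself, so a decoder can split the
// byte stream into frames before interpreting any of them.
public class FrameSketch {
    public static ByteBuffer encode(byte msgType, byte[] payload) {
        long frameLen = 8 + 1 + payload.length;
        ByteBuffer buf = ByteBuffer.allocate((int) frameLen);
        buf.putLong(frameLen);   // frame length, including this field
        buf.put(msgType);        // tag telling the decoder which message follows
        buf.put(payload);        // encoded message header + body
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer frame = encode((byte) 3, "ping".getBytes());
        System.out.println(frame.remaining()); // 8 + 1 + 4 = 13
        System.out.println(frame.getLong());   // 13
    }
}
```

A comment block in the real encoder spelling out such a layout is exactly the kind of documentation the PR proposes.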
GitHub user neoremind opened a pull request:
https://github.com/apache/spark/pull/18964
[SPARK-21701][CORE] Enable RPC client to use `SO_RCVBUF` and `SO_SNDBUF`
in SparkConf.
## What changes were proposed in this pull request?
TCP parameters like SO_RCVBUF and SO_SNDBUF
Github user neoremind commented on a diff in the pull request:
https://github.com/apache/spark/pull/18922#discussion_r133467528
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/client/TransportClientFactory.java
---
@@ -210,6 +210,18 @@ private
Github user neoremind closed the pull request at:
https://github.com/apache/spark/pull/18922
---
Github user neoremind commented on the issue:
https://github.com/apache/spark/pull/18922
@zsxwing Would you mind verifying the patch for me? I noticed you have
contributed to the RPC module in Spark. Many thanks!
---
GitHub user neoremind opened a pull request:
https://github.com/apache/spark/pull/18922
[SPARK-21701][CORE] Enable RPC client to use SO_RCVBUF, SO_SNDBUF and
SO_BACKLOG in SparkConf
## What changes were proposed in this pull request?
1. TCP parameters like SO_RCVBUF
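Unlike the two buffer options, `SO_BACKLOG` is a server-side option: it caps the queue of pending TCP connections awaiting `accept`. A minimal JDK-level sketch (illustrative only; Spark sets this via Netty's server bootstrap, and `BacklogSketch` is a made-up name):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Illustrative sketch: the backlog argument to bind() maps to SO_BACKLOG,
// limiting how many completed-but-not-yet-accepted connections may queue up.
public class BacklogSketch {
    public static ServerSocket bindWithBacklog(int port, int backlog) throws IOException {
        ServerSocket server = new ServerSocket();
        server.bind(new InetSocketAddress("127.0.0.1", port), backlog);
        return server;
    }

    public static void main(String[] args) throws IOException {
        // Port 0 asks the OS for any free port, keeping the example runnable.
        try (ServerSocket s = bindWithBacklog(0, 128)) {
            System.out.println(s.isBound());
        }
    }
}
```

Under connection bursts (e.g. many executors registering at once), a too-small backlog causes connection attempts to be refused, which is why exposing it in SparkConf can matter.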