[jira] [Updated] (SPARK-21455) RpcFailure should be called on RpcResponseCallback.onFailure

2017-07-18 Thread Xianyang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xianyang Liu updated SPARK-21455:
-
Description: 
Currently, when an `RpcFailure` needs to be sent back to the client, we call 
`RpcCallContext.sendFailure`, which in turn calls `NettyRpcCallContext.send`. 
However, as the following code snippet from the implementation class shows, the reply is always delivered through `callback.onSuccess`.

{code:java}
private[netty] class RemoteNettyRpcCallContext(
    nettyEnv: NettyRpcEnv,
    callback: RpcResponseCallback,
    senderAddress: RpcAddress)
  extends NettyRpcCallContext(senderAddress) {

  override protected def send(message: Any): Unit = {
    val reply = nettyEnv.serialize(message)
    callback.onSuccess(reply)
  }
}
{code}
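
For context, the `sendFailure` path in the abstract base class looks roughly like the sketch below (paraphrased, not a verbatim copy of any particular Spark version): `sendFailure` wraps the exception in an `RpcFailure` and hands it to `send`, which is why even a failure ends up going out through `callback.onSuccess` in the remote implementation above.

{code:java}
import org.apache.spark.rpc.{RpcAddress, RpcCallContext}

// Rough sketch of the NettyRpcCallContext base class (paraphrased).
// It lives in org.apache.spark.rpc.netty; RpcFailure is defined in the same package.
// Both reply() and sendFailure() funnel into the abstract send(), so the
// RemoteNettyRpcCallContext above serializes even failures and calls onSuccess.
private[netty] abstract class NettyRpcCallContext(override val senderAddress: RpcAddress)
  extends RpcCallContext {

  protected def send(message: Any): Unit

  override def reply(response: Any): Unit = {
    send(response)
  }

  override def sendFailure(e: Throwable): Unit = {
    send(RpcFailure(e))
  }
}
{code}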

This is unreasonable, for two reasons:
 # Sending the failure message back via `RpcResponseCallback.onSuccess` is how we get the detailed exception information (such as the `StackTrace`) in the current implementation.
 # `RpcResponseCallback.onSuccess` and `RpcResponseCallback.onFailure` could have different behavior, as in `NettyBlockTransferService#uploadBlock` (shown below, with a sketch of the proposed `onFailure` routing after it):

{code:java}
new RpcResponseCallback {
  override def onSuccess(response: ByteBuffer): Unit = {
    logTrace(s"Successfully uploaded block $blockId")
    result.success((): Unit)
  }

  override def onFailure(e: Throwable): Unit = {
    logError(s"Error while uploading block $blockId", e)
    result.failure(e)
  }
}
{code}
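
To make the second point concrete, here is a minimal, purely illustrative sketch (an assumption about the direction of the fix, not the actual Spark patch) of what routing failures through `RpcResponseCallback.onFailure` could look like. The hypothetical `sendFailure` override below bypasses `send`, so client-side handlers such as the `onFailure` branch in `NettyBlockTransferService#uploadBlock` above would actually run for `RpcFailure` replies.

{code:java}
import org.apache.spark.network.client.RpcResponseCallback
import org.apache.spark.rpc.RpcAddress

// Hypothetical sketch only -- not the actual patch for this issue.
private[netty] class RemoteNettyRpcCallContext(
    nettyEnv: NettyRpcEnv,
    callback: RpcResponseCallback,
    senderAddress: RpcAddress)
  extends NettyRpcCallContext(senderAddress) {

  override protected def send(message: Any): Unit = {
    val reply = nettyEnv.serialize(message)
    callback.onSuccess(reply)
  }

  // Hypothetical override: deliver the failure via the transport-level
  // onFailure callback instead of serializing it into an onSuccess reply.
  override def sendFailure(e: Throwable): Unit = {
    callback.onFailure(e)
  }
}
{code}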

> RpcFailure should be called on RpcResponseCallback.onFailure
> --
>
> Key: SPARK-21455
> URL: https://issues.apache.org/jira/browse/SPARK-21455
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.2.0
>Reporter: Xianyang Liu
>
> Currently, when an `RpcFailure` needs to be sent back to the client, we call 
> `RpcCallContext.sendFailure`, which in turn calls `NettyRpcCallContext.send`. 
> However, as the following code snippet from the implementation class shows, the reply is always delivered through `callback.onSuccess`.
> {code:java}
> private[netty] class RemoteNettyRpcCallContext(
>     nettyEnv: NettyRpcEnv,
>     callback: RpcResponseCallback,
>     senderAddress: RpcAddress)
>   extends NettyRpcCallContext(senderAddress) {
>
>   override protected def send(message: Any): Unit = {
>     val reply = nettyEnv.serialize(message)
>     callback.onSuccess(reply)
>   }
> }
> {code}
> This is unreasonable, for two reasons:
>  # Sending the failure message back via `RpcResponseCallback.onSuccess` is how we get the detailed exception information (such as the `StackTrace`) in the current implementation.
>  # `RpcResponseCallback.onSuccess` and `RpcResponseCallback.onFailure` could have different behavior, as in `NettyBlockTransferService#uploadBlock`:
> {code:java}
> new RpcResponseCallback {
>   override def onSuccess(response: ByteBuffer): Unit = {
>     logTrace(s"Successfully uploaded block $blockId")
>     result.success((): Unit)
>   }
>   override def onFailure(e: Throwable): Unit = {
>     logError(s"Error while uploading block $blockId", e)
>     result.failure(e)
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org


