[jira] [Updated] (SPARK-22382) Spark on mesos: doesn't support public IP setup for agent and master.

2017-10-28 Thread DUC LIEM NGUYEN (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-22382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DUC LIEM NGUYEN updated SPARK-22382:

Description: 
I've installed a system as follows:

-- Mesos master: private IP 10.x.x.2, public IP 35.x.x.6

-- Mesos slave: private IP 192.x.x.10, public IP 111.x.x.2

Now the master assigns the task to the slave successfully; however, the task 
fails. The error message is as follows:

{color:#d04437}Exception in thread "main" 17/10/11 22:38:01 ERROR 
RpcOutboxMessage: Ask timeout before connecting successfully

Caused by: org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply 
in 120 seconds. This timeout is controlled by spark.rpc.askTimeout
{color}

When I look at the environment page, spark.driver.host points to the private IP 
address of the master (10.x.x.2) instead of its public IP address (35.x.x.6). A 
Wireshark capture confirms it: there were failed TCP packets sent to the master's 
private IP address.
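
(The advertised address can also be checked programmatically instead of on the 
environment page; a small sketch, assuming an already-running SparkSession named 
{{spark}}:)

{code:scala}
// Print what the driver advertises to executors and whether a bind
// address was explicitly set; assumes an active SparkSession `spark`.
println(spark.sparkContext.getConf.get("spark.driver.host"))              // address executors connect back to
println(spark.sparkContext.getConf.getOption("spark.driver.bindAddress")) // None unless explicitly set
{code}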

Now if I set spark.driver.bindAddress on the master to its local IP address and 
spark.driver.host on the master to its public IP address, I get the following 
message:

{color:#d04437}ERROR TaskSchedulerImpl: Lost executor 1 on 
myhostname.singnet.com.sg: Unable to create executor due to Cannot assign 
requested address.{color}

From my understanding, spark.driver.bindAddress applies to both the master and the 
slave, hence the slave gets that error. Now I'm really wondering: how do I properly 
set up Spark to work in this cluster over public IPs?
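
In other words, the configuration I'm attempting is roughly the following. This is 
a minimal sketch, assuming the default Mesos master port 5050 and using concrete 
stand-in addresses (10.0.0.2 / 35.0.0.6) for the masked values above; it is not 
the literal job submission:

{code:scala}
import org.apache.spark.sql.SparkSession

// Sketch of the attempted driver setup; the addresses are hypothetical
// stand-ins for the masked 10.x.x.2 / 35.x.x.6 values above.
val spark = SparkSession.builder()
  .appName("public-ip-test")
  .master("mesos://35.0.0.6:5050")                // reach the Mesos master on its public IP
  .config("spark.driver.bindAddress", "10.0.0.2") // interface the driver can actually bind on the master host
  .config("spark.driver.host", "35.0.0.6")        // address advertised to executors for connecting back
  .getOrCreate()
{code}

With this, the driver itself binds and advertises correctly, but the executor on 
the agent then seems to pick up the same spark.driver.bindAddress and tries to 
bind 10.0.0.2, an address that only exists on the master host, hence "Cannot 
assign requested address".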


> Spark on mesos: doesn't support public IP setup for agent and master. 
> --
>
> Key: SPARK-22382
> URL: https://issues.apache.org/jira/browse/SPARK-22382
> Project: Spark
>  Issue Type: Question
>  Components: Mesos
>Affects Versions: 2.1.0
>Reporter: DUC LIEM NGUYEN
>



[jira] [Created] (SPARK-22382) Spark on mesos: doesn't support public IP setup for agent and master.

2017-10-28 Thread DUC LIEM NGUYEN (JIRA)
DUC LIEM NGUYEN created SPARK-22382:
---

 Summary: Spark on mesos: doesn't support public IP setup for agent 
and master. 
 Key: SPARK-22382
 URL: https://issues.apache.org/jira/browse/SPARK-22382
 Project: Spark
  Issue Type: Question
  Components: Mesos
Affects Versions: 2.1.1
Reporter: DUC LIEM NGUYEN









[jira] [Created] (SPARK-21053) Number overflow on agg function of Dataframe

2017-06-10 Thread DUC LIEM NGUYEN (JIRA)
DUC LIEM NGUYEN created SPARK-21053:
---

 Summary: Number overflow on agg function of Dataframe
 Key: SPARK-21053
 URL: https://issues.apache.org/jira/browse/SPARK-21053
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 2.1.0
 Environment: Databricks Community version
Reporter: DUC LIEM NGUYEN


Using the average aggregation function on a large data set returns NaN instead of 
the expected numerical value, even though the values range between 0 and 1.
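
For illustration, a minimal sketch of the kind of aggregation described, together 
with a workaround worth trying. The data and column name are hypothetical 
stand-ins, and casting to decimal before averaging is only an assumption about 
the cause (precision/overflow in the internal sum), not a confirmed fix:

{code:scala}
import org.apache.spark.sql.functions.{avg, col}

// Hypothetical stand-in for the real data: a large column of values in [0, 1].
// Assumes an active SparkSession `spark`.
val df = spark.range(0L, 100000000L).selectExpr("rand() AS score")

// The aggregation that reportedly returns NaN on the real data set.
df.agg(avg(col("score"))).show()

// Workaround sketch: average over a decimal instead of a double, so an
// overflowing internal sum surfaces as null rather than NaN.
df.agg(avg(col("score").cast("decimal(38,18)"))).show()
{code}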



