[jira] [Updated] (HDDS-3223) Improve the read efficiency of big object in s3g

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Issue Type: Improvement  (was: Bug)

> Improve the read efficiency of big object in s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-3223) Improve the read efficiency of big object in s3g

2020-03-16 Thread runzhiwang (Jira)
runzhiwang created HDDS-3223:


 Summary: Improve the read efficiency of big object in s3g
 Key: HDDS-3223
 URL: https://issues.apache.org/jira/browse/HDDS-3223
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: runzhiwang









[jira] [Updated] (HDDS-3168) Merge a lot of RPC call getContainerWithPipeline into one

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Summary: Merge a lot of RPC call getContainerWithPipeline into one  (was: 
Merge a lot of RPC call getContainerWithPipeline)

> Merge a lot of RPC call getContainerWithPipeline into one
> -
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
>  !screenshot-3.png! 






[jira] [Updated] (HDDS-3168) Merge a lot of RPC call getContainerWithPipeline into one

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Description:  !screenshot-3.png!   (was: Reading a 100MB object with 25 
chunks produces the Jaeger trace shown in the image. Ozone currently reads 
each chunk sequentially; this could be improved by reading chunks in parallel. 
The number of RPC calls is also too high; they could be merged into one RPC 
call that returns the results in a batch.  !screenshot-3.png! )

> Merge a lot of RPC call getContainerWithPipeline into one
> -
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
>  !screenshot-3.png! 






[jira] [Updated] (HDDS-3168) Merge a lot of RPC call getContainerWithPipeline

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Summary: Merge a lot of RPC call getContainerWithPipeline  (was: Merge a 
lot of RPC call getContainerWithPipeline into one)

> Merge a lot of RPC call getContainerWithPipeline
> 
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
>  !screenshot-3.png! 






[jira] [Updated] (HDDS-3168) Merge a lot of RPC call getContainerWithPipeline into one

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Summary: Merge a lot of RPC call getContainerWithPipeline into one  (was: 
Merge a lot of RPC call )

> Merge a lot of RPC call getContainerWithPipeline into one
> -
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> Reading a 100MB object with 25 chunks produces the Jaeger trace shown in the 
> image. Ozone currently reads each chunk sequentially; this could be improved 
> by reading chunks in parallel. The number of RPC calls is also too high; they 
> could be merged into one RPC call that returns the results in a batch.  !screenshot-3.png! 
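The parallel-read idea in the description above can be sketched roughly as follows. This is a hedged illustration, not Ozone's actual client API: `readChunk` is a hypothetical stand-in for the per-chunk network read, and the thread-pool size is arbitrary.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hedged sketch: fetch all chunks concurrently instead of one after another.
public class ParallelChunkRead {

  // Hypothetical stand-in for one remote chunk read; returns a tagged dummy payload.
  static byte[] readChunk(int index) {
    return new byte[] {(byte) index};
  }

  static List<byte[]> readAllChunks(int chunkCount) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(8);
    try {
      List<Future<byte[]>> futures = new ArrayList<>();
      for (int i = 0; i < chunkCount; i++) {
        final int idx = i;
        futures.add(pool.submit(() -> readChunk(idx)));
      }
      // Collect in submission order so the object is reassembled correctly,
      // even though the reads themselves overlap in time.
      List<byte[]> chunks = new ArrayList<>();
      for (Future<byte[]> f : futures) {
        chunks.add(f.get());
      }
      return chunks;
    } finally {
      pool.shutdown();
    }
  }

  public static void main(String[] args) throws Exception {
    List<byte[]> chunks = readAllChunks(25);
    System.out.println(chunks.size());
    System.out.println(chunks.get(3)[0]);
  }
}
```

With 25 chunks, the total latency approaches that of the slowest chunk rather than the sum of all 25 sequential reads.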






[jira] [Updated] (HDDS-3168) Merge a lot of RPC call

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Summary: Merge a lot of RPC call   (was: Improve the efficiency of reading 
object)

> Merge a lot of RPC call 
> 
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> Reading a 100MB object with 25 chunks produces the Jaeger trace shown in the 
> image. Ozone currently reads each chunk sequentially; this could be improved 
> by reading chunks in parallel. The number of RPC calls is also too high; they 
> could be merged into one RPC call that returns the results in a batch.  !screenshot-3.png! 
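The "merge into one" idea from this ticket's summary can be sketched as below: instead of issuing one getContainerWithPipeline RPC per container, send every container id in a single request and receive the pipelines back as a batch. The batched method name and the String pipeline stand-in are hypothetical, not Ozone's real SCM API.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hedged sketch of batching many per-container lookups into one round trip.
public class BatchedContainerLookup {

  // Simulates the server side answering all lookups in a single RPC;
  // the real call would return pipeline objects, not strings.
  static Map<Long, String> getContainersWithPipelineBatch(List<Long> containerIds) {
    Map<Long, String> result = new LinkedHashMap<>();
    for (long id : containerIds) {
      result.put(id, "pipeline-for-" + id);
    }
    return result;
  }

  public static void main(String[] args) {
    // One call for all containers of a 25-chunk object instead of 25 RPCs.
    List<Long> ids = Arrays.asList(101L, 102L, 103L);
    Map<Long, String> pipelines = getContainersWithPipelineBatch(ids);
    System.out.println(pipelines.size());
    System.out.println(pipelines.get(102L));
  }
}
```

Since each getContainerWithPipeline call reportedly costs 1-3 ms, collapsing N calls into one saves roughly (N-1) round trips per object read.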






[jira] [Comment Edited] (HDDS-1933) Datanode should use hostname in place of ip addresses to allow DN's to work when ipaddress change

2020-03-13 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17058543#comment-17058543
 ] 

runzhiwang edited comment on HDDS-1933 at 3/13/20, 9:23 AM:


I changed getIpAddress to getHostName at the following links, and it works fine.
1. 
https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java#L173
2. 
https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/ratis/RatisHelper.java#L93
 (pointed out by [~adoroszlai])

But I'm not sure which of the following two fixes is better:
1. Replace the two fields ipAddress and hostName in DatanodeDetails with a 
single field dnsName, where dnsName = if (dfs.datanode.use.datanode.hostname) 
hostName else ipAddress. This fix is thorough, but it requires changing a lot 
of code and carries a high risk.
2. Pass the parameter dfs.datanode.use.datanode.hostname to the methods that 
need to distinguish between hostName and ipAddress. This fix is simple, but if 
someone forgets to do it in a case that needs the distinction, it will 
introduce a bug.
So what do you think? [~Sammi] [~xyao] [~adoroszlai] [~swagle] [~msingh] [~elek]


was (Author: yjxxtd):
I changed getIpAddress to getHostName at the following links, and it works fine.
1. 
https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java#L173
2. 
https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/ratis/RatisHelper.java#L93
 (pointed out by [~adoroszlai])

But I'm not sure which of the following two fixes is better:
1. Replace the two fields ipAddress and hostName in DatanodeDetails with a 
single field dnsName, where dnsName = if (dfs.datanode.use.datanode.hostname) 
hostName else ipAddress. This fix is thorough, but it requires changing a lot 
of code and carries a high risk.
2. Pass the parameter dfs.datanode.use.datanode.hostname to the methods that 
need to distinguish between hostName and ipAddress. This fix is simple, but if 
someone forgets to do it in a case that needs the distinction, it will 
introduce a bug.
So what do you think? 
[~Sammi][~xyao][~adoroszlai][~swagle][~msingh][~elek]
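The first option discussed in the comment above (a single dnsName field derived once from the config flag) could be sketched as follows. The resolver class and method are hypothetical; DatanodeDetails itself is not modified here, and the real change would thread this through the Ozone codebase.

```java
// Hedged sketch of fix option 1: collapse ipAddress and hostName into one
// dnsName, chosen once from dfs.datanode.use.datanode.hostname.
public class DnsNameResolver {

  static String resolveDnsName(boolean useDatanodeHostname,
                               String hostName, String ipAddress) {
    // When the flag is set, callers always see the stable hostname, so a
    // changed pod IP after a restart no longer invalidates the address.
    return useDatanodeHostname ? hostName : ipAddress;
  }

  public static void main(String[] args) {
    System.out.println(resolveDnsName(true, "dn-0.datanode.svc", "10.0.0.12"));
    System.out.println(resolveDnsName(false, "dn-0.datanode.svc", "10.0.0.12"));
  }
}
```

Because the choice is made in one place, callers such as XceiverClientGrpc and RatisHelper could not individually forget to honor the flag, which is the risk of option 2.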

> Datanode should use hostname in place of ip addresses to allow DN's to work 
> when ipaddress change
> -
>
> Key: HDDS-1933
> URL: https://issues.apache.org/jira/browse/HDDS-1933
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Blocker
>
> This was noticed by [~elek] while deploying Ozone in a Kubernetes-based 
> environment.
> When the datanode IP address changes on restart, the Datanode details cease 
> to be correct for that datanode, and this prevents the cluster from 
> functioning after a restart.






[jira] [Comment Edited] (HDDS-1933) Datanode should use hostname in place of ip addresses to allow DN's to work when ipaddress change

2020-03-13 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17058543#comment-17058543
 ] 

runzhiwang edited comment on HDDS-1933 at 3/13/20, 9:23 AM:


I changed getIpAddress to getHostName at the following links, and it works fine.
1. 
https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java#L173
2. 
https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/ratis/RatisHelper.java#L93
 (pointed out by [~adoroszlai])

But I'm not sure which of the following two fixes is better:
1. Replace the two fields ipAddress and hostName in DatanodeDetails with a 
single field dnsName, where dnsName = if (dfs.datanode.use.datanode.hostname) 
hostName else ipAddress. This fix is thorough, but it requires changing a lot 
of code and carries a high risk.
2. Pass the parameter dfs.datanode.use.datanode.hostname to the methods that 
need to distinguish between hostName and ipAddress. This fix is simple, but if 
someone forgets to do it in a case that needs the distinction, it will 
introduce a bug.
So what do you think? 
[~Sammi][~xyao][~adoroszlai][~swagle][~msingh][~elek]


was (Author: yjxxtd):
I changed getIpAddress to getHostName at the following links, and it works fine.
1. 
https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java#L173
2. 
https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/ratis/RatisHelper.java#L93
 (pointed out by [~adoroszlai])

But I'm not sure which of the following two fixes is better:
1. Replace the two fields ipAddress and hostName in DatanodeDetails with a 
single field dnsName, where dnsName = if (dfs.datanode.use.datanode.hostname) 
hostName else ipAddress. This fix is thorough, but it requires changing a lot 
of code and carries a high risk.
2. Pass the parameter dfs.datanode.use.datanode.hostname to the methods that 
need to distinguish between hostName and ipAddress. This fix is simple, but if 
someone forgets to do it in a case that needs the distinction, it will 
introduce a bug.
So what do you think? 
[~Sammi][~xyao][~adoroszlai][~swagle][~msingh]

> Datanode should use hostname in place of ip addresses to allow DN's to work 
> when ipaddress change
> -
>
> Key: HDDS-1933
> URL: https://issues.apache.org/jira/browse/HDDS-1933
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Blocker
>
> This was noticed by [~elek] while deploying Ozone in a Kubernetes-based 
> environment.
> When the datanode IP address changes on restart, the Datanode details cease 
> to be correct for that datanode, and this prevents the cluster from 
> functioning after a restart.






[jira] [Comment Edited] (HDDS-1933) Datanode should use hostname in place of ip addresses to allow DN's to work when ipaddress change

2020-03-13 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17058543#comment-17058543
 ] 

runzhiwang edited comment on HDDS-1933 at 3/13/20, 9:18 AM:


I changed getIpAddress to getHostName at the following links, and it works fine.
1. 
https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java#L173
2. 
https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/ratis/RatisHelper.java#L93
 (pointed out by [~adoroszlai])

But I'm not sure which of the following two fixes is better:
1. Replace the two fields ipAddress and hostName in DatanodeDetails with a 
single field dnsName, where dnsName = if (dfs.datanode.use.datanode.hostname) 
hostName else ipAddress. This fix is thorough, but it requires changing a lot 
of code and carries a high risk.
2. Pass the parameter dfs.datanode.use.datanode.hostname to the methods that 
need to distinguish between hostName and ipAddress. This fix is simple, but if 
someone forgets to do it in a case that needs the distinction, it will 
introduce a bug.
So what do you think? 
[~Sammi][~xyao][~adoroszlai][~swagle][~msingh]


was (Author: yjxxtd):
I changed getIpAddress to getHostName at the following links, and it works fine.
1. 
https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java#L173
2. 
https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/ratis/RatisHelper.java#L93
 (pointed out by [~adoroszlai])

But I'm not sure which of the following two fixes is better:
1. Replace the two fields ipAddress and hostName in DatanodeDetails with a 
single field dnsName, where dnsName = if (dfs.datanode.use.datanode.hostname) 
hostName else ipAddress. This fix is thorough, but it requires changing a lot 
of code and carries a high risk.
2. Pass the parameter dfs.datanode.use.datanode.hostname to the methods that 
need to distinguish between hostName and ipAddress. This fix is simple, but if 
someone forgets to do it in a case that needs the distinction, it will 
produce a bug.
So what do you think? 
[~Sammi][~xyao][~adoroszlai][~swagle][~msingh]

> Datanode should use hostname in place of ip addresses to allow DN's to work 
> when ipaddress change
> -
>
> Key: HDDS-1933
> URL: https://issues.apache.org/jira/browse/HDDS-1933
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Blocker
>
> This was noticed by [~elek] while deploying Ozone in a Kubernetes-based 
> environment.
> When the datanode IP address changes on restart, the Datanode details cease 
> to be correct for that datanode, and this prevents the cluster from 
> functioning after a restart.






[jira] [Commented] (HDDS-1933) Datanode should use hostname in place of ip addresses to allow DN's to work when ipaddress change

2020-03-13 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17058543#comment-17058543
 ] 

runzhiwang commented on HDDS-1933:
--

I changed getIpAddress to getHostName at the following links, and it works fine.
1. 
https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java#L173
2. 
https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/ratis/RatisHelper.java#L93
 (pointed out by [~adoroszlai])

But I'm not sure which of the following two fixes is better:
1. Replace the two fields ipAddress and hostName in DatanodeDetails with a 
single field dnsName, where dnsName = if (dfs.datanode.use.datanode.hostname) 
hostName else ipAddress. This fix is thorough, but it requires changing a lot 
of code and carries a high risk.
2. Pass the parameter dfs.datanode.use.datanode.hostname to the methods that 
need to distinguish between hostName and ipAddress. This fix is simple, but if 
someone forgets to do it in a case that needs the distinction, it will 
produce a bug.
So what do you think? 
[~Sammi][~xyao][~adoroszlai][~swagle][~msingh]

> Datanode should use hostname in place of ip addresses to allow DN's to work 
> when ipaddress change
> -
>
> Key: HDDS-1933
> URL: https://issues.apache.org/jira/browse/HDDS-1933
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Blocker
>
> This was noticed by [~elek] while deploying Ozone in a Kubernetes-based 
> environment.
> When the datanode IP address changes on restart, the Datanode details cease 
> to be correct for that datanode, and this prevents the cluster from 
> functioning after a restart.






[jira] [Commented] (HDDS-1933) Datanode should use hostname in place of ip addresses to allow DN's to work when ipaddress change

2020-03-13 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17058476#comment-17058476
 ] 

runzhiwang commented on HDDS-1933:
--

[~swagle] Hi, I think the following code uses the IP address of the datanode, 
and the code [~adoroszlai] found is the root cause. I will submit a PR.
https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java#L173
  

> Datanode should use hostname in place of ip addresses to allow DN's to work 
> when ipaddress change
> -
>
> Key: HDDS-1933
> URL: https://issues.apache.org/jira/browse/HDDS-1933
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Blocker
>
> This was noticed by [~elek] while deploying Ozone in a Kubernetes-based 
> environment.
> When the datanode IP address changes on restart, the Datanode details cease 
> to be correct for that datanode, and this prevents the cluster from 
> functioning after a restart.






[jira] [Updated] (HDDS-3168) Improve the efficiency of reading object

2020-03-13 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Description: Reading a 100MB object with 25 chunks produces the Jaeger trace 
shown in the image. Ozone currently reads each chunk sequentially; this could 
be improved by reading chunks in parallel. The number of RPC calls is also too 
high; they could be merged into one RPC call that returns the results in a 
batch.  !screenshot-3.png!   (was: Reading a 100MB object with 25 chunks 
produces the Jaeger trace shown in the image. Ozone currently reads each chunk 
sequentially; this could be improved by reading chunks in parallel. The number 
of RPC calls is also too high; they could be merged into one RPC call that 
returns the results in a batch.  !screenshot-3.png! )

> Improve the efficiency of reading object
> 
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> Reading a 100MB object with 25 chunks produces the Jaeger trace shown in the 
> image. Ozone currently reads each chunk sequentially; this could be improved 
> by reading chunks in parallel. The number of RPC calls is also too high; they 
> could be merged into one RPC call that returns the results in a batch.  !screenshot-3.png! 






[jira] [Updated] (HDDS-3168) Improve the efficiency of reading object

2020-03-12 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Description: Reading a 100MB object with 25 chunks produces the Jaeger trace 
shown in the image. Ozone currently reads each chunk sequentially; this could 
be improved by reading chunks in parallel. The number of RPC calls is also too 
high; they could be merged into one RPC call that returns the results in a 
batch.  !screenshot-3.png!   (was: Reading a 100MB object with 25 chunks 
produces the Jaeger trace shown in the image. Ozone currently reads each chunk 
sequentially, but I think these chunks can be read in parallel. And the cost of 
getContainerWithPipeline is 1-3 ms, which is also too long.  !screenshot-3.png! 
)

> Improve the efficiency of reading object
> 
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> Reading a 100MB object with 25 chunks produces the Jaeger trace shown in the 
> image. Ozone currently reads each chunk sequentially; this could be improved 
> by reading chunks in parallel. The number of RPC calls is also too high; they 
> could be merged into one RPC call that returns the results in a batch.  !screenshot-3.png! 






[jira] [Updated] (HDDS-3171) Couldn't create RpcClient protocol exception in k8s.

2020-03-12 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3171:
-
Description: 
When starting Ozone on k8s, it sometimes reports a "Couldn't create RpcClient 
protocol" exception, and sometimes it does not.
 !screenshot-1.png! 

  was:
When starting Ozone on k8s, it sometimes reports 
 !screenshot-1.png! 


> Couldn't create RpcClient protocol exception in k8s.
> 
>
> Key: HDDS-3171
> URL: https://issues.apache.org/jira/browse/HDDS-3171
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> When starting Ozone on k8s, it sometimes reports a "Couldn't create RpcClient 
> protocol" exception, and sometimes it does not.
>  !screenshot-1.png! 






[jira] [Updated] (HDDS-3171) Couldn't create RpcClient protocol exception on k8s.

2020-03-12 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3171:
-
Summary: Couldn't create RpcClient protocol exception on k8s.  (was: 
Couldn't create RpcClient protocol exception in k8s.)

> Couldn't create RpcClient protocol exception on k8s.
> 
>
> Key: HDDS-3171
> URL: https://issues.apache.org/jira/browse/HDDS-3171
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> When starting Ozone on k8s, it sometimes reports a "Couldn't create RpcClient 
> protocol" exception, and sometimes it does not.
>  !screenshot-1.png! 






[jira] [Commented] (HDDS-3171) Couldn't create RpcClient protocol exception on k8s.

2020-03-12 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17058369#comment-17058369
 ] 

runzhiwang commented on HDDS-3171:
--

I'm working on it.

> Couldn't create RpcClient protocol exception on k8s.
> 
>
> Key: HDDS-3171
> URL: https://issues.apache.org/jira/browse/HDDS-3171
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> When starting Ozone on k8s, it sometimes reports a "Couldn't create RpcClient 
> protocol" exception, and sometimes it does not.
>  !screenshot-1.png! 






[jira] [Updated] (HDDS-3171) Couldn't create RpcClient protocol exception in k8s.

2020-03-12 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3171:
-
Attachment: screenshot-1.png

> Couldn't create RpcClient protocol exception in k8s.
> 
>
> Key: HDDS-3171
> URL: https://issues.apache.org/jira/browse/HDDS-3171
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>







[jira] [Assigned] (HDDS-3171) Couldn't create RpcClient protocol exception in k8s.

2020-03-12 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang reassigned HDDS-3171:


Assignee: runzhiwang

> Couldn't create RpcClient protocol exception in k8s.
> 
>
> Key: HDDS-3171
> URL: https://issues.apache.org/jira/browse/HDDS-3171
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
>  !screenshot-1.png! 






[jira] [Updated] (HDDS-3171) Couldn't create RpcClient protocol exception in k8s.

2020-03-12 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3171:
-
Description: 
When starting Ozone on k8s, it sometimes reports 
 !screenshot-1.png! 

  was: !screenshot-1.png! 


> Couldn't create RpcClient protocol exception in k8s.
> 
>
> Key: HDDS-3171
> URL: https://issues.apache.org/jira/browse/HDDS-3171
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> When starting Ozone on k8s, it sometimes reports 
>  !screenshot-1.png! 






[jira] [Updated] (HDDS-3171) Couldn't create RpcClient protocol exception in k8s.

2020-03-12 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3171:
-
Description:  !screenshot-1.png! 

> Couldn't create RpcClient protocol exception in k8s.
> 
>
> Key: HDDS-3171
> URL: https://issues.apache.org/jira/browse/HDDS-3171
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
>  !screenshot-1.png! 






[jira] [Created] (HDDS-3171) Couldn't create RpcClient protocol exception in k8s.

2020-03-12 Thread runzhiwang (Jira)
runzhiwang created HDDS-3171:


 Summary: Couldn't create RpcClient protocol exception in k8s.
 Key: HDDS-3171
 URL: https://issues.apache.org/jira/browse/HDDS-3171
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: runzhiwang









[jira] [Commented] (HDDS-1933) Datanode should use hostname in place of ip addresses to allow DN's to work when ipaddress change

2020-03-12 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17058305#comment-17058305
 ] 

runzhiwang commented on HDDS-1933:
--

[~swagle] Okay, I see. I will check the code and find out how to make it work.

> Datanode should use hostname in place of ip addresses to allow DN's to work 
> when ipaddress change
> -
>
> Key: HDDS-1933
> URL: https://issues.apache.org/jira/browse/HDDS-1933
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Blocker
>
> This was noticed by [~elek] while deploying Ozone in a Kubernetes-based 
> environment.
> When the datanode IP address changes on restart, the Datanode details cease 
> to be correct for that datanode, and this prevents the cluster from 
> functioning after a restart.






[jira] [Commented] (HDDS-1933) Datanode should use hostname in place of ip addresses to allow DN's to work when ipaddress change

2020-03-12 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17058302#comment-17058302
 ] 

runzhiwang commented on HDDS-1933:
--

[~swagle] Hi, I'm sorry, but what do you mean by "making a change so that this 
config is respected"?

> Datanode should use hostname in place of ip addresses to allow DN's to work 
> when ipaddress change
> -
>
> Key: HDDS-1933
> URL: https://issues.apache.org/jira/browse/HDDS-1933
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Blocker
>
> This was noticed by [~elek] while deploying Ozone in a Kubernetes-based 
> environment.
> When the datanode ip address changes on restart, the Datanode details cease 
> to be correct for the datanode, and this prevents the cluster from 
> functioning after a restart.






[jira] [Commented] (HDDS-1933) Datanode should use hostname in place of ip addresses to allow DN's to work when ipaddress change

2020-03-12 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17058297#comment-17058297
 ] 

runzhiwang commented on HDDS-1933:
--

[~swagle] Setting dfs.datanode.use.datanode.hostname does not work. The reason 
is just as [~adoroszlai] said; I have tested it.
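For context, this is the configuration key under discussion; a minimal fragment in the standard Hadoop XML site-configuration format (the value shown is only illustrative of what the thread is debating, not a recommendation):

```xml
<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>true</value>
  <!-- Intent: identify datanodes by hostname instead of IP, so restarts
       that change pod IPs (e.g. in Kubernetes) can still resolve peers.
       Per the comments above, this setting alone does not fix the issue. -->
</property>
```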

> Datanode should use hostname in place of ip addresses to allow DN's to work 
> when ipaddress change
> -
>
> Key: HDDS-1933
> URL: https://issues.apache.org/jira/browse/HDDS-1933
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Blocker
>
> This was noticed by [~elek] while deploying Ozone on Kubernetes based 
> environment.
> When the datanode ip address change on restart, the Datanode details cease to 
> be correct for the datanode. and this prevents the cluster from functioning 
> after a restart.






[jira] [Commented] (HDDS-3168) Improve the efficiency of reading object

2020-03-11 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17057622#comment-17057622
 ] 

runzhiwang commented on HDDS-3168:
--

I'm working on it.

> Improve the efficiency of reading object
> 
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> Reading a 100MB object with 25 chunks, the jaeger trace information is as 
> the image shows. Currently Ozone reads each chunk in sequential order, but I 
> think these chunks can be read in parallel. Also, the cost of 
> getContainerWithPipeline is 1-3 ms, which is too long.  !screenshot-3.png! 
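The sequential-vs-parallel idea above can be sketched as follows. This is a minimal illustration using a fixed thread pool; `readChunk` is a hypothetical stand-in for the per-chunk read, not the actual Ozone client code:

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelChunkRead {

    // Hypothetical stand-in for the per-chunk read; in the real client this
    // would fetch chunk `index` from a datanode.
    static byte[] readChunk(int index) {
        byte[] data = new byte[4];
        Arrays.fill(data, (byte) index);
        return data;
    }

    // Submit all chunk reads to a pool so they run concurrently, then
    // reassemble the results in their original order.
    static byte[] readObject(int chunkCount) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<byte[]>> futures = new ArrayList<>();
            for (int i = 0; i < chunkCount; i++) {
                final int idx = i;
                futures.add(pool.submit(() -> readChunk(idx)));
            }
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            for (Future<byte[]> f : futures) {
                out.write(f.get()); // iterating futures in order preserves chunk order
            }
            return out.toByteArray();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readObject(25).length); // 25 chunks of 4 bytes each
    }
}
```

The key point is that correctness only requires reassembly in submission order; the reads themselves are independent, so the per-chunk latencies overlap instead of adding up.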






[jira] [Updated] (HDDS-3168) Improve the efficiency of reading object

2020-03-11 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Description: Read a 100MB object which 25 chunks, the jaeger trace 
information as the image shows. Now ozone read each chunk in sequential order, 
but I think these chunks can be read in parallel. And the cost of 
getContainerWithPipeline is 1-3 ms, which is also too long.  !screenshot-3.png! 
  (was: Read a 100MB object, the jaeger trace information as the image shows. 
Now ozone read each chunk in sequential order, but I think these chunks can be 
read in parallel. And the cost of getContainerWithPipeline is 1-3 ms, which is 
also too long.  !screenshot-3.png! )

> Improve the efficiency of reading object
> 
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> Reading a 100MB object with 25 chunks, the jaeger trace information is as 
> the image shows. Currently Ozone reads each chunk in sequential order, but I 
> think these chunks can be read in parallel. Also, the cost of 
> getContainerWithPipeline is 1-3 ms, which is too long.  !screenshot-3.png! 






[jira] [Updated] (HDDS-3168) Improve the efficiency of reading object

2020-03-11 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Description: Read a 100MB object, the jaeger trace information as the image 
shows. Now ozone read each chunk in sequential order, but I think these chunks 
can be read in parallel. And the cost of getContainerWithPipeline is 1-3 ms, 
which is also too long.  !screenshot-3.png!   (was:  !screenshot-3.png! )

> Improve the efficiency of reading object
> 
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> Reading a 100MB object, the jaeger trace information is as the image shows. 
> Currently Ozone reads each chunk in sequential order, but I think these 
> chunks can be read in parallel. Also, the cost of getContainerWithPipeline 
> is 1-3 ms, which is too long.  !screenshot-3.png! 






[jira] [Updated] (HDDS-3168) Improve the efficiency of reading object

2020-03-11 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Description:  !screenshot-3.png!   (was:  !screenshot-2.png! )

> Improve the efficiency of reading object
> 
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
>  !screenshot-3.png! 






[jira] [Updated] (HDDS-3168) Improve the efficiency of reading object

2020-03-11 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Attachment: screenshot-3.png

> Improve the efficiency of reading object
> 
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
>  !screenshot-2.png! 






[jira] [Updated] (HDDS-3168) Improve the efficiency of reading object

2020-03-11 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Description:  !screenshot-2.png!   (was:  !screenshot-1.png! )

> Improve the efficiency of reading object
> 
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png
>
>
>  !screenshot-2.png! 






[jira] [Updated] (HDDS-3168) Improve the efficiency of reading object

2020-03-11 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Attachment: screenshot-2.png

> Improve the efficiency of reading object
> 
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png
>
>
>  !screenshot-1.png! 






[jira] [Updated] (HDDS-3168) Improve the efficiency of reading object

2020-03-11 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Description:  !screenshot-1.png! 

> Improve the efficiency of reading object
> 
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
>  !screenshot-1.png! 






[jira] [Updated] (HDDS-3168) Improve the efficiency of reading object

2020-03-11 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Attachment: screenshot-1.png

> Improve the efficiency of reading object
> 
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>







[jira] [Created] (HDDS-3168) Improve the efficiency of reading object

2020-03-11 Thread runzhiwang (Jira)
runzhiwang created HDDS-3168:


 Summary: Improve the efficiency of reading object
 Key: HDDS-3168
 URL: https://issues.apache.org/jira/browse/HDDS-3168
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: runzhiwang
Assignee: runzhiwang









[jira] [Updated] (HDDS-3158) Create client for each request in s3g cost too much time.

2020-03-11 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3158:
-
Description: S3gateway create client for each request, but create a client 
cost about 5 ms,  which is too long for reading a small object. It maybe better 
by reusing client for different requests.  (was: S3gateway create client for 
each request, but create a client cost about )

> Create client for each request in s3g cost too much time.
> -
>
> Key: HDDS-3158
> URL: https://issues.apache.org/jira/browse/HDDS-3158
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Priority: Major
>
> S3gateway creates a client for each request, but creating a client costs 
> about 5 ms, which is too long for reading a small object. It may be better 
> to reuse the client across requests.
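The reuse idea can be sketched as a lazily-initialized, cached client. `RpcClient` here is a hypothetical stand-in for the s3g client type; the sketch only shows that the ~5 ms construction cost is paid once instead of once per request:

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical stand-in for the s3g client; construction models the ~5 ms cost.
class RpcClient {
    static int created = 0;
    RpcClient() { created++; }
}

public class ClientCache {
    private static final AtomicReference<RpcClient> CACHED = new AtomicReference<>();

    // Create the client at most once and hand the same instance to every
    // request, instead of constructing a new one per request.
    static RpcClient get() {
        RpcClient c = CACHED.get();
        if (c == null) {
            CACHED.compareAndSet(null, new RpcClient());
            c = CACHED.get();
        }
        return c;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            get(); // simulate 100 requests hitting the gateway
        }
        System.out.println(RpcClient.created); // the client was built only once
    }
}
```

Under heavy concurrency the `compareAndSet` race can construct one extra, discarded instance; a real implementation would also need to handle client shutdown and reconnection on failure, which this sketch deliberately omits.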






[jira] [Updated] (HDDS-3158) Create client for each request in s3g cost too much time.

2020-03-11 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3158:
-
Affects Version/s: 0.5.0

> Create client for each request in s3g cost too much time.
> -
>
> Key: HDDS-3158
> URL: https://issues.apache.org/jira/browse/HDDS-3158
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Priority: Major
>
> S3gateway creates a client for each request, but creating a client costs 
> about 5 ms, which is too long for reading a small object. It may be better 
> to reuse the client across requests.






[jira] [Updated] (HDDS-3158) Create client for each request in s3g cost too much time.

2020-03-11 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3158:
-
Description: S3gateway create client for each request, but create a client 
cost about 

> Create client for each request in s3g cost too much time.
> -
>
> Key: HDDS-3158
> URL: https://issues.apache.org/jira/browse/HDDS-3158
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Priority: Major
>
> S3gateway create client for each request, but create a client cost about 






[jira] [Created] (HDDS-3158) Create client for each request in s3g cost too much time.

2020-03-11 Thread runzhiwang (Jira)
runzhiwang created HDDS-3158:


 Summary: Create client for each request in s3g cost too much time.
 Key: HDDS-3158
 URL: https://issues.apache.org/jira/browse/HDDS-3158
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: runzhiwang









[jira] [Updated] (HDDS-3130) Add trace span in s3gateway

2020-03-06 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3130:
-
Attachment: (was: image-2020-03-05-13-59-29-552.png)

> Add trace span in s3gateway
> ---
>
> Key: HDDS-3130
> URL: https://issues.apache.org/jira/browse/HDDS-3130
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>







[jira] [Updated] (HDDS-3130) Add jaeger trace span in s3gateway

2020-03-06 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3130:
-
Summary: Add jaeger trace span in s3gateway  (was: Add trace span in 
s3gateway)

> Add jaeger trace span in s3gateway
> --
>
> Key: HDDS-3130
> URL: https://issues.apache.org/jira/browse/HDDS-3130
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>







[jira] [Updated] (HDDS-3130) Add trace span in s3gateway

2020-03-06 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3130:
-
Summary: Add trace span in s3gateway  (was: jaeger can not work normal)

> Add trace span in s3gateway
> ---
>
> Key: HDDS-3130
> URL: https://issues.apache.org/jira/browse/HDDS-3130
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>
> Just as the image shows, the trace information of CreateKey is not enough.
>  !image-2020-03-05-13-59-29-552.png! 






[jira] [Updated] (HDDS-3130) Add trace span in s3gateway

2020-03-06 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3130:
-
Description: (was: Just as the image shows, the trace information of 
CreateKey is not enough.
 !image-2020-03-05-13-59-29-552.png! )

> Add trace span in s3gateway
> ---
>
> Key: HDDS-3130
> URL: https://issues.apache.org/jira/browse/HDDS-3130
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>







[jira] [Updated] (HDDS-3130) jaeger can not work normal

2020-03-06 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3130:
-
Issue Type: Improvement  (was: Bug)

> jaeger can not work normal
> --
>
> Key: HDDS-3130
> URL: https://issues.apache.org/jira/browse/HDDS-3130
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: image-2020-03-05-13-59-29-552.png
>
>
> Just as the image shows, the trace information of CreateKey is not enough.
>  !image-2020-03-05-13-59-29-552.png! 






[jira] [Resolved] (HDDS-3124) Time interval calculate error

2020-03-04 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang resolved HDDS-3124.
--
Resolution: Fixed

> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>  Labels: pull-request-available
> Attachments: screenshot-1.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Just as the image shows, the time interval in the log message "Unable to 
> communicate to SCM server at scm-0.scm:9861 for past " is reported as 0, 
> 3000 seconds, but it is actually 0, 300 seconds.
>  !screenshot-1.png! 






[jira] [Resolved] (HDDS-3041) Memory leak of s3g

2020-03-04 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang resolved HDDS-3041.
--
Resolution: Fixed

> Memory leak of s3g
> --
>
> Key: HDDS-3041
> URL: https://issues.apache.org/jira/browse/HDDS-3041
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2020-02-24-12-06-22-248.png, 
> image-2020-02-24-12-10-09-552.png, image-2020-02-26-17-11-31-834.png, 
> screenshot-1.png, screenshot-2.png, screenshot-3.png, screenshot-4.png, 
> screenshot-5.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>







[jira] [Commented] (HDDS-3127) docker compose should create headless principals for test users

2020-03-04 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17051829#comment-17051829
 ] 

runzhiwang commented on HDDS-3127:
--

I'm working on it.

> docker compose should create headless principals for test users
> ---
>
> Key: HDDS-3127
> URL: https://issues.apache.org/jira/browse/HDDS-3127
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Arpit Agarwal
>Assignee: runzhiwang
>Priority: Major
>
> docker-compose setup creates principals with service names e.g. 
> {{testuser/s...@example.com}}. Hence the testuser on nodes other than scm is 
> not an administrator.






[jira] [Assigned] (HDDS-3127) docker compose should create headless principals for test users

2020-03-04 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang reassigned HDDS-3127:


Assignee: runzhiwang

> docker compose should create headless principals for test users
> ---
>
> Key: HDDS-3127
> URL: https://issues.apache.org/jira/browse/HDDS-3127
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Arpit Agarwal
>Assignee: runzhiwang
>Priority: Major
>
> docker-compose setup creates principals with service names e.g. 
> {{testuser/s...@example.com}}. Hence the testuser on nodes other than scm is 
> not an administrator.






[jira] [Assigned] (HDDS-3122) Fail to destroy pipeline in k8s when redeploy DataNode

2020-03-04 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang reassigned HDDS-3122:


Assignee: runzhiwang

> Fail to destroy pipeline in k8s when redeploy DataNode
> --
>
> Key: HDDS-3122
> URL: https://issues.apache.org/jira/browse/HDDS-3122
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>
> 2020-03-02 19:35:51 ERROR SCMPipelineManager:72 - Destroy pipeline failed for 
> pipeline:Pipeline[ Id: 7cde1fee-6fe5-4fbc-a822-1c8cd06d5a28, Nodes: 
> 66c73405-356c-4e23-a85d-6cce04f03a16{ip: 10.244.1.154, host: 
> datanode-3-0.datanode-3.default.svc.cluster.local, networkLocation: 
> /default-rack, certSerialId: null}a4ec7184-8ac5-4dba-a518-83bf31c57f3d{ip: 
> 10.244.1.153, host: datanode-2-0.datanode-2.default.svc.cluster.local, 
> networkLocation: /default-rack, certSerialId: 
> null}224e464c-369a-4e87-afef-0bd957a10eed{ip: 10.244.1.151, host: 
> datanode-1-0.datanode-1.default.svc.cluster.local, networkLocation: 
> /default-rack, certSerialId: null}, Type:RATIS, Factor:THREE, State:CLOSED, 
> leaderId:null, CreationTimestamp2020-03-02T11:32:32.198Z]
> org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException: 
> PipelineID=7cde1fee-6fe5-4fbc-a822-1c8cd06d5a28 not found
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.getPipeline(PipelineStateMap.java:133)
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.removePipeline(PipelineStateMap.java:323)
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateManager.removePipeline(PipelineStateManager.java:104)
> at 
> org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.removePipeline(SCMPipelineManager.java:562)
> at 
> org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.destroyPipeline(SCMPipelineManager.java:548)
> at 
> org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.lambda$finalizeAndDestroyPipeline$0(SCMPipelineManager.java:391)
> at 
> org.apache.hadoop.hdds.utils.Scheduler.lambda$schedule$1(Scheduler.java:70)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
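The `PipelineNotFoundException` above comes from removing a pipeline id that was already removed. One way to avoid the error is to make removal idempotent; a minimal sketch with a hypothetical in-memory registry, not the actual `SCMPipelineManager`/`PipelineStateMap` code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical in-memory pipeline registry, only to illustrate idempotent
// removal; the real pipeline state lives inside SCM.
public class PipelineRegistry {
    private final Map<UUID, String> pipelines = new HashMap<>();

    public void add(UUID id, String description) {
        pipelines.put(id, description);
    }

    // Returns true if the pipeline was removed now, false if it was already
    // gone, instead of throwing when a racing destroy got there first.
    public boolean removeIfPresent(UUID id) {
        return pipelines.remove(id) != null;
    }

    public static void main(String[] args) {
        PipelineRegistry registry = new PipelineRegistry();
        UUID id = UUID.fromString("7cde1fee-6fe5-4fbc-a822-1c8cd06d5a28");
        registry.add(id, "RATIS, THREE, CLOSED");
        System.out.println(registry.removeIfPresent(id)); // true
        System.out.println(registry.removeIfPresent(id)); // false: no exception
    }
}
```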






[jira] [Commented] (HDDS-3130) jaeger can not work normal

2020-03-04 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17051828#comment-17051828
 ] 

runzhiwang commented on HDDS-3130:
--

I'm working on it.

> jaeger can not work normal
> --
>
> Key: HDDS-3130
> URL: https://issues.apache.org/jira/browse/HDDS-3130
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: image-2020-03-05-13-59-29-552.png
>
>
> Just as the image shows, the trace information of CreateKey is not enough.
>  !image-2020-03-05-13-59-29-552.png! 






[jira] [Created] (HDDS-3130) jaeger can not work normal

2020-03-04 Thread runzhiwang (Jira)
runzhiwang created HDDS-3130:


 Summary: jaeger can not work normal
 Key: HDDS-3130
 URL: https://issues.apache.org/jira/browse/HDDS-3130
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: runzhiwang
Assignee: runzhiwang
 Attachments: image-2020-03-05-13-59-29-552.png

Just as the image shows, the trace information of CreateKey is not enough.
 !image-2020-03-05-13-59-29-552.png! 






[jira] [Updated] (HDDS-3124) Time interval calculate error

2020-03-04 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3124:
-
Description: 
Just as the image shows, the time interval in the log message "Unable to 
communicate to SCM server at scm-0.scm:9861 for past " is reported as 0, 3000 
seconds, but it is actually 0, 300 seconds.
 !screenshot-1.png! 

  was:
Just as the image shows, the time interval in log "Unable to communicate to SCM 
server at scm-0.scm:9861 for past " is 0, 3000, 6000 seconds, but actually it 
is 0, 20, 40 seconds.



> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> Just as the image shows, the time interval in the log message "Unable to 
> communicate to SCM server at scm-0.scm:9861 for past " is reported as 0, 
> 3000 seconds, but it is actually 0, 300 seconds.
>  !screenshot-1.png! 
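The tenfold discrepancy (3000 reported vs. 300 actual seconds) is the kind of error produced by a wrong unit-conversion constant when a millisecond interval is turned into seconds. A hypothetical illustration of the correct conversion, not the actual Ozone code:

```java
public class HeartbeatIntervalLog {
    // Convert an elapsed interval from milliseconds to whole seconds.
    // Using any constant other than 1000 here (e.g. 100) is exactly the
    // kind of bug that makes 300 seconds get logged as 3000.
    static long elapsedSeconds(long startMillis, long nowMillis) {
        return (nowMillis - startMillis) / 1000L;
    }

    public static void main(String[] args) {
        // 5 minutes without contact with the SCM server:
        System.out.println(elapsedSeconds(0L, 300_000L)); // 300, not 3000
    }
}
```

Using `java.util.concurrent.TimeUnit.MILLISECONDS.toSeconds(...)` instead of a hand-written divisor avoids this class of bug entirely.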






[jira] [Updated] (HDDS-3124) Time interval calculate error

2020-03-04 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3124:
-
Attachment: (was: screenshot-2.png)

> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> Just as the image shows, the time interval in log "Unable to communicate to 
> SCM server at scm-0.scm:9861 for past " is 0, 3000, 6000 seconds, but 
> actually it is 0, 20, 40 seconds.






[jira] [Updated] (HDDS-3124) Time interval calculate error

2020-03-04 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3124:
-
Attachment: (was: screenshot-3.png)

> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> Just as the image shows, the time interval in the log "Unable to communicate to 
> SCM server at scm-0.scm:9861 for past " is 0, 3000, 6000 seconds, but it is 
> actually 0, 20, 40 seconds.






[jira] [Updated] (HDDS-3124) Time interval calculate error

2020-03-04 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3124:
-
Description: 
Just as the image shows, the time interval in the log "Unable to communicate to SCM 
server at scm-0.scm:9861 for past " is 0, 3000, 6000 seconds, but it is actually 
0, 20, 40 seconds.


  was:
Just as the image shows, the time interval in the log "Unable to communicate to SCM 
server at scm-0.scm:9861 for past " is 0, 3000, 6000 seconds, but it is actually 
0, 20, 40 seconds.
 !screenshot-2.png! 
 !screenshot-3.png! 
 !screenshot-4.png! 


> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> Just as the image shows, the time interval in the log "Unable to communicate to 
> SCM server at scm-0.scm:9861 for past " is 0, 3000, 6000 seconds, but it is 
> actually 0, 20, 40 seconds.






[jira] [Updated] (HDDS-3124) Time interval calculate error

2020-03-04 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3124:
-
Attachment: (was: screenshot-1.png)

> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> Just as the image shows, the time interval in the log "Unable to communicate to 
> SCM server at scm-0.scm:9861 for past " is 0, 3000, 6000 seconds, but it is 
> actually 0, 20, 40 seconds.






[jira] [Updated] (HDDS-3124) Time interval calculate error

2020-03-04 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3124:
-
Attachment: (was: screenshot-4.png)

> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> Just as the image shows, the time interval in the log "Unable to communicate to 
> SCM server at scm-0.scm:9861 for past " is 0, 3000, 6000 seconds, but it is 
> actually 0, 20, 40 seconds.






[jira] [Updated] (HDDS-3124) Time interval calculate error

2020-03-04 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3124:
-
Attachment: screenshot-1.png

> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> Just as the image shows, the time interval in the log "Unable to communicate to 
> SCM server at scm-0.scm:9861 for past " is 0, 3000, 6000 seconds, but it is 
> actually 0, 20, 40 seconds.






[jira] [Commented] (HDDS-3125) Can not make full use of all datanodes.

2020-03-03 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050718#comment-17050718
 ] 

runzhiwang commented on HDDS-3125:
--

I'm working on it.

> Can not make full use of all datanodes.
> ---
>
> Key: HDDS-3125
> URL: https://issues.apache.org/jira/browse/HDDS-3125
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> I deployed Ozone on k8s with multi-raft enabled, with datanodes on 18 machines 
> and 11 datanodes started on each machine. But as the image shows, 10 datanodes 
> use 100% CPU while the datanode with pid 29817 uses only 0.3% CPU, and this 
> happens on almost all of the 18 machines. So I think Ozone does not make full 
> use of all datanodes.
>  !screenshot-1.png! 






[jira] [Assigned] (HDDS-3125) Can not make full use of all datanodes.

2020-03-03 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang reassigned HDDS-3125:


Assignee: runzhiwang

> Can not make full use of all datanodes.
> ---
>
> Key: HDDS-3125
> URL: https://issues.apache.org/jira/browse/HDDS-3125
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> I deployed Ozone on k8s with multi-raft enabled, with datanodes on 18 machines 
> and 11 datanodes started on each machine. But as the image shows, 10 datanodes 
> use 100% CPU while the datanode with pid 29817 uses only 0.3% CPU, and this 
> happens on almost all of the 18 machines. So I think Ozone does not make full 
> use of all datanodes.
>  !screenshot-1.png! 






[jira] [Updated] (HDDS-3125) Can not make full use of all datanodes.

2020-03-03 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3125:
-
Description: 
I deployed Ozone on k8s with multi-raft enabled, with datanodes on 18 machines 
and 11 datanodes started on each machine. But as the image shows, 10 datanodes 
use 100% CPU while the datanode with pid 29817 uses only 0.3% CPU, and this 
happens on almost all of the 18 machines. So I think Ozone does not make full 
use of all datanodes.
 !screenshot-1.png! 

> Can not make full use of all datanodes.
> ---
>
> Key: HDDS-3125
> URL: https://issues.apache.org/jira/browse/HDDS-3125
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> I deployed Ozone on k8s with multi-raft enabled, with datanodes on 18 machines 
> and 11 datanodes started on each machine. But as the image shows, 10 datanodes 
> use 100% CPU while the datanode with pid 29817 uses only 0.3% CPU, and this 
> happens on almost all of the 18 machines. So I think Ozone does not make full 
> use of all datanodes.
>  !screenshot-1.png! 






[jira] [Updated] (HDDS-3125) Can not make full use of all datanodes.

2020-03-03 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3125:
-
Attachment: screenshot-1.png

> Can not make full use of all datanodes.
> ---
>
> Key: HDDS-3125
> URL: https://issues.apache.org/jira/browse/HDDS-3125
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>







[jira] [Created] (HDDS-3125) Can not make full use of all datanodes.

2020-03-03 Thread runzhiwang (Jira)
runzhiwang created HDDS-3125:


 Summary: Can not make full use of all datanodes.
 Key: HDDS-3125
 URL: https://issues.apache.org/jira/browse/HDDS-3125
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.5.0
Reporter: runzhiwang









[jira] [Assigned] (HDDS-3124) Time interval calculate error

2020-03-03 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang reassigned HDDS-3124:


Assignee: runzhiwang

> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png
>
>
> Just as the image shows, the time interval in the log "Unable to communicate to 
> SCM server at scm-0.scm:9861 for past " is 0, 3000, 6000 seconds, but it is 
> actually 0, 20, 40 seconds.
>  !screenshot-2.png! 
>  !screenshot-3.png! 
>  !screenshot-4.png! 






[jira] [Commented] (HDDS-3124) Time interval calculate error

2020-03-03 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050689#comment-17050689
 ] 

runzhiwang commented on HDDS-3124:
--

I'm working on it.
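For context on the numbers above: a logged interval of 3000 when only 20 seconds actually elapsed is the classic symptom of a unit mix-up, i.e. a value held in milliseconds being printed under a "seconds" label, or a missed-heartbeat counter multiplied by an interval in the wrong unit. A hedged, self-contained Java sketch of that bug class; the class, constant, and method names are illustrative only and are not the actual Ozone code:

```java
// Hypothetical sketch of the bug class behind HDDS-3124: an elapsed time
// kept in milliseconds is reported under a label that says "seconds".
// HEARTBEAT_INTERVAL_MS and all names here are illustrative assumptions.
public class MissedHeartbeatLog {
  static final long HEARTBEAT_INTERVAL_MS = 20_000; // assume a 20 s heartbeat

  // Buggy variant: returns milliseconds, which callers then log as "seconds".
  static long elapsedBuggy(int missedHeartbeats) {
    return missedHeartbeats * HEARTBEAT_INTERVAL_MS;
  }

  // Fixed variant: convert to seconds before logging.
  static long elapsedSeconds(int missedHeartbeats) {
    return missedHeartbeats * HEARTBEAT_INTERVAL_MS / 1000;
  }

  public static void main(String[] args) {
    for (int missed = 0; missed <= 2; missed++) {
      System.out.println("Unable to communicate to SCM server for past "
          + elapsedSeconds(missed) + " seconds");
    }
  }
}
```

The buggy variant yields 20000 for a single missed heartbeat, which a log line labeled "seconds" then reports as a huge interval; dividing by 1000 restores the real 0, 20, 40 progression.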

> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png
>
>
> Just as the image shows, the time interval in the log "Unable to communicate to 
> SCM server at scm-0.scm:9861 for past " is 0, 3000, 6000 seconds, but it is 
> actually 0, 20, 40 seconds.
>  !screenshot-2.png! 
>  !screenshot-3.png! 
>  !screenshot-4.png! 






[jira] [Updated] (HDDS-3124) Time interval calculate error

2020-03-03 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3124:
-
Description: 
Just as the image shows, the time interval in the log "Unable to communicate to SCM 
server at scm-0.scm:9861 for past " is 0, 3000, 6000 seconds, but it is actually 
0, 20, 40 seconds.
 !screenshot-2.png! 
 !screenshot-3.png! 
 !screenshot-4.png! 

  was:
Just as the image shows, the time interval in the log "Unable to communicate to SCM 
server at scm-0.scm:9861 for past " is 0, 3000, 9000 seconds, but it is actually 
0, 20, 40 seconds.
 !screenshot-2.png! 
 !screenshot-3.png! 
 !screenshot-4.png! 


> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png
>
>
> Just as the image shows, the time interval in the log "Unable to communicate to 
> SCM server at scm-0.scm:9861 for past " is 0, 3000, 6000 seconds, but it is 
> actually 0, 20, 40 seconds.
>  !screenshot-2.png! 
>  !screenshot-3.png! 
>  !screenshot-4.png! 






[jira] [Updated] (HDDS-3124) Time interval calculate error

2020-03-03 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3124:
-
Description: 
Just as the image shows, the time interval in the log "Unable to communicate to SCM 
server at scm-0.scm:9861 for past " is 0, 3000, 9000 seconds, but it is actually 
0, 20, 40 seconds.
 !screenshot-2.png! 
 !screenshot-3.png! 
 !screenshot-4.png! 

  was:
 !screenshot-2.png! 
 !screenshot-3.png! 
 !screenshot-4.png! 


> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png
>
>
> Just as the image shows, the time interval in the log "Unable to communicate to 
> SCM server at scm-0.scm:9861 for past " is 0, 3000, 9000 seconds, but it is 
> actually 0, 20, 40 seconds.
>  !screenshot-2.png! 
>  !screenshot-3.png! 
>  !screenshot-4.png! 






[jira] [Updated] (HDDS-3124) Time interval calculate error

2020-03-03 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3124:
-
Attachment: screenshot-4.png

> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png
>
>
>  !screenshot-2.png! 






[jira] [Updated] (HDDS-3124) Time interval calculate error

2020-03-03 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3124:
-
Description: 
 !screenshot-2.png! 
 !screenshot-3.png! 
 !screenshot-4.png! 

  was: !screenshot-2.png! 


> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png
>
>
>  !screenshot-2.png! 
>  !screenshot-3.png! 
>  !screenshot-4.png! 






[jira] [Updated] (HDDS-3124) Time interval calculate error

2020-03-03 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3124:
-
Attachment: screenshot-3.png

> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
>  !screenshot-2.png! 






[jira] [Updated] (HDDS-3124) Time interval calculate error

2020-03-03 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3124:
-
Attachment: screenshot-2.png

> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png
>
>
>  !screenshot-1.png! 






[jira] [Updated] (HDDS-3124) Time interval calculate error

2020-03-03 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3124:
-
Description:  !screenshot-2.png!   (was:  !screenshot-1.png! )

> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png
>
>
>  !screenshot-2.png! 






[jira] [Updated] (HDDS-3124) Time interval calculate error

2020-03-03 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3124:
-
Description:  !screenshot-1.png!   (was:  
!image-2020-03-04-09-45-36-083.png! 
 !image-2020-03-04-09-45-59-292.png! 
 !image-2020-03-04-09-46-24-923.png! )

> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png
>
>
>  !screenshot-1.png! 






[jira] [Updated] (HDDS-3124) Time interval calculate error

2020-03-03 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3124:
-
Attachment: screenshot-1.png

> Time interval calculate error 
> --
>
> Key: HDDS-3124
> URL: https://issues.apache.org/jira/browse/HDDS-3124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png
>
>
>  !image-2020-03-04-09-45-36-083.png! 
>  !image-2020-03-04-09-45-59-292.png! 
>  !image-2020-03-04-09-46-24-923.png! 






[jira] [Created] (HDDS-3124) Time interval calculate error

2020-03-03 Thread runzhiwang (Jira)
runzhiwang created HDDS-3124:


 Summary: Time interval calculate error 
 Key: HDDS-3124
 URL: https://issues.apache.org/jira/browse/HDDS-3124
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.5.0
Reporter: runzhiwang


 !image-2020-03-04-09-45-36-083.png! 
 !image-2020-03-04-09-45-59-292.png! 
 !image-2020-03-04-09-46-24-923.png! 






[jira] [Created] (HDDS-3122) Fail to destroy pipeline in k8s when redeploy DataNode

2020-03-02 Thread runzhiwang (Jira)
runzhiwang created HDDS-3122:


 Summary: Fail to destroy pipeline in k8s when redeploy DataNode
 Key: HDDS-3122
 URL: https://issues.apache.org/jira/browse/HDDS-3122
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: runzhiwang


2020-03-02 19:35:51 ERROR SCMPipelineManager:72 - Destroy pipeline failed for 
pipeline:Pipeline[ Id: 7cde1fee-6fe5-4fbc-a822-1c8cd06d5a28, Nodes: 
66c73405-356c-4e23-a85d-6cce04f03a16{ip: 10.244.1.154, host: 
datanode-3-0.datanode-3.default.svc.cluster.local, networkLocation: 
/default-rack, certSerialId: null}a4ec7184-8ac5-4dba-a518-83bf31c57f3d{ip: 
10.244.1.153, host: datanode-2-0.datanode-2.default.svc.cluster.local, 
networkLocation: /default-rack, certSerialId: 
null}224e464c-369a-4e87-afef-0bd957a10eed{ip: 10.244.1.151, host: 
datanode-1-0.datanode-1.default.svc.cluster.local, networkLocation: 
/default-rack, certSerialId: null}, Type:RATIS, Factor:THREE, State:CLOSED, 
leaderId:null, CreationTimestamp2020-03-02T11:32:32.198Z]
org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException: 
PipelineID=7cde1fee-6fe5-4fbc-a822-1c8cd06d5a28 not found
at 
org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.getPipeline(PipelineStateMap.java:133)
at 
org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.removePipeline(PipelineStateMap.java:323)
at 
org.apache.hadoop.hdds.scm.pipeline.PipelineStateManager.removePipeline(PipelineStateManager.java:104)
at 
org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.removePipeline(SCMPipelineManager.java:562)
at 
org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.destroyPipeline(SCMPipelineManager.java:548)
at 
org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.lambda$finalizeAndDestroyPipeline$0(SCMPipelineManager.java:391)
at 
org.apache.hadoop.hdds.utils.Scheduler.lambda$schedule$1(Scheduler.java:70)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)






[jira] [Commented] (HDDS-3122) Fail to destroy pipeline in k8s when redeploy DataNode

2020-03-02 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049841#comment-17049841
 ] 

runzhiwang commented on HDDS-3122:
--

I'm working on it.
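The ERROR in the report happens because the pipeline is already gone from the PipelineStateMap by the time the scheduled destroy runs. A minimal, hypothetical Java sketch of the usual defensive pattern of treating "not found" as "already removed"; the names echo the stack trace above, but the bodies are illustrative only and are not the actual Ozone implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a pipeline removal that tolerates "already
// removed", the usual fix for duplicate-destroy races like the one above.
public class PipelineMapSketch {
  static class PipelineNotFoundException extends Exception {
    PipelineNotFoundException(String msg) { super(msg); }
  }

  private final Map<String, String> pipelines = new HashMap<>();

  void addPipeline(String id, String info) { pipelines.put(id, info); }

  // Mirrors PipelineStateMap.removePipeline: throws if the id is absent.
  String removePipeline(String id) throws PipelineNotFoundException {
    String p = pipelines.remove(id);
    if (p == null) {
      throw new PipelineNotFoundException("PipelineID=" + id + " not found");
    }
    return p;
  }

  // Defensive destroy: a duplicate destroy request becomes a quiet no-op
  // instead of an ERROR with a full stack trace in the SCM log.
  boolean destroyPipeline(String id) {
    try {
      removePipeline(id);
      return true;
    } catch (PipelineNotFoundException e) {
      return false; // already removed, e.g. after a DataNode redeploy
    }
  }
}
```

With this pattern, the second destroy triggered by a redeployed DataNode simply reports "already removed" rather than failing.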

> Fail to destroy pipeline in k8s when redeploy DataNode
> --
>
> Key: HDDS-3122
> URL: https://issues.apache.org/jira/browse/HDDS-3122
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Priority: Major
>
> 2020-03-02 19:35:51 ERROR SCMPipelineManager:72 - Destroy pipeline failed for 
> pipeline:Pipeline[ Id: 7cde1fee-6fe5-4fbc-a822-1c8cd06d5a28, Nodes: 
> 66c73405-356c-4e23-a85d-6cce04f03a16{ip: 10.244.1.154, host: 
> datanode-3-0.datanode-3.default.svc.cluster.local, networkLocation: 
> /default-rack, certSerialId: null}a4ec7184-8ac5-4dba-a518-83bf31c57f3d{ip: 
> 10.244.1.153, host: datanode-2-0.datanode-2.default.svc.cluster.local, 
> networkLocation: /default-rack, certSerialId: 
> null}224e464c-369a-4e87-afef-0bd957a10eed{ip: 10.244.1.151, host: 
> datanode-1-0.datanode-1.default.svc.cluster.local, networkLocation: 
> /default-rack, certSerialId: null}, Type:RATIS, Factor:THREE, State:CLOSED, 
> leaderId:null, CreationTimestamp2020-03-02T11:32:32.198Z]
> org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException: 
> PipelineID=7cde1fee-6fe5-4fbc-a822-1c8cd06d5a28 not found
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.getPipeline(PipelineStateMap.java:133)
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.removePipeline(PipelineStateMap.java:323)
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateManager.removePipeline(PipelineStateManager.java:104)
> at 
> org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.removePipeline(SCMPipelineManager.java:562)
> at 
> org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.destroyPipeline(SCMPipelineManager.java:548)
> at 
> org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.lambda$finalizeAndDestroyPipeline$0(SCMPipelineManager.java:391)
> at 
> org.apache.hadoop.hdds.utils.Scheduler.lambda$schedule$1(Scheduler.java:70)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)






[jira] [Commented] (HDDS-1933) Datanode should use hostname in place of ip addresses to allow DN's to work when ipaddress change

2020-03-02 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049773#comment-17049773
 ] 

runzhiwang commented on HDDS-1933:
--

I'm working on it

> Datanode should use hostname in place of ip addresses to allow DN's to work 
> when ipaddress change
> -
>
> Key: HDDS-1933
> URL: https://issues.apache.org/jira/browse/HDDS-1933
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Blocker
>
> This was noticed by [~elek] while deploying Ozone in a Kubernetes-based 
> environment.
> When the datanode IP address changes on restart, the datanode details cease to 
> be correct for the datanode, and this prevents the cluster from functioning 
> after a restart.






[jira] [Commented] (HDDS-1933) Datanode should use hostname in place of ip addresses to allow DN's to work when ipaddress change

2020-03-02 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049139#comment-17049139
 ] 

runzhiwang commented on HDDS-1933:
--

[~msingh] Hi, I have set "dfs.datanode.use.datanode.hostname" to true, but it 
does not work. Have you tested it? I want to fix this bug.
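
For anyone trying to reproduce this, the setting mentioned above goes into the site configuration. A sketch only; whether the property takes effect in this version is exactly what is in question here:

```xml
<!-- hdfs-site.xml / ozone-site.xml fragment: the property discussed above.
     Shown for reproduction only; not a confirmed fix for this issue. -->
<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>true</value>
</property>
```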

> Datanode should use hostname in place of ip addresses to allow DN's to work 
> when ipaddress change
> -
>
> Key: HDDS-1933
> URL: https://issues.apache.org/jira/browse/HDDS-1933
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Blocker
>
> This was noticed by [~elek] while deploying Ozone in a Kubernetes-based 
> environment.
> When the datanode IP address changes on restart, the datanode details cease to 
> be correct for the datanode, and this prevents the cluster from functioning 
> after a restart.






[jira] [Commented] (HDDS-2821) Use datanode IP or host in log to improve readability

2020-01-09 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012479#comment-17012479
 ] 

runzhiwang commented on HDDS-2821:
--

I'm working on it.

> Use datanode IP or host in log to improve readability 
> --
>
> Key: HDDS-2821
> URL: https://issues.apache.org/jira/browse/HDDS-2821
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Sammi Chen
>Priority: Critical
>
> It's hard to remember the mapping between all the datanode UUIDs and their 
> IP/host addresses once the cluster grows bigger. 
> 2019-12-26 19:29:12,737 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 1da74a1d-f64d-4ad4-b04c-85f26687e683@group-438744F3D4AC: shutdown
> 2019-12-26 19:29:12,737 INFO org.apache.ratis.util.JmxRegister: Successfully 
> un-registered JMX Bean with object name 
> Ratis:service=RaftServer,group=group-438744F3D4AC,id=1da74a1d-f64d-4ad4-b04c-85f26687e683
> 2019-12-26 19:29:12,737 INFO org.apache.ratis.server.impl.RoleInfo: 
> 1da74a1d-f64d-4ad4-b04c-85f26687e683: shutdown LeaderState
> 2019-12-26 19:29:12,737 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> 1da74a1d-f64d-4ad4-b04c-85f26687e683@group-438744F3D4AC->ed90869c-317e-4303-8922-9fa83a3983cb-GrpcLogAppender:
>  Wait interrupted by java.lang.InterruptedException
> 2019-12-26 19:29:12,737 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> 1da74a1d-f64d-4ad4-b04c-85f26687e683@group-438744F3D4AC->b65b0b6c-b0bb-429f-a23d-467c72d4b85c-GrpcLogAppender:
>  Wait interrupted by java.lang.InterruptedException
> 2019-12-26 19:29:12,737 INFO org.apache.ratis.server.impl.PendingRequests: 
> 1da74a1d-f64d-4ad4-b04c-85f26687e683@group-438744F3D4AC-PendingRequests: 
> sendNotLeaderResponses
> 2019-12-26 19:29:12,738 INFO 
> org.apache.ratis.server.impl.StateMachineUpdater: 
> 1da74a1d-f64d-4ad4-b04c-85f26687e683@group-438744F3D4AC-StateMachineUpdater: 
> set stopIndex = 0
> 2019-12-26 19:29:12,738 INFO org.apache.ratis.grpc.server.GrpcLogAppender: 
> 1da74a1d-f64d-4ad4-b04c-85f26687e683@group-438744F3D4AC->ed90869c-317e-4303-8922-9fa83a3983cb-AppendLogResponseHandler:
>  follower responses appendEntries COMPLETED
> 2019-12-26 19:29:12,738 INFO org.apache.ratis.grpc.server.GrpcLogAppender: 
> 1da74a1d-f64d-4ad4-b04c-85f26687e683@group-438744F3D4AC->b65b0b6c-b0bb-429f-a23d-467c72d4b85c-AppendLogResponseHandler:
>  follower responses appendEntries COMPLETED
> 2019-12-26 19:29:12,739 INFO org.apache.ratis.server.impl.FollowerInfo: 
> 1da74a1d-f64d-4ad4-b04c-85f26687e683@group-438744F3D4AC->b65b0b6c-b0bb-429f-a23d-467c72d4b85c:
>  nextIndex: updateUnconditionally 1 -> 0
> 2019-12-26 19:29:12,739 INFO org.apache.ratis.server.impl.FollowerInfo: 
> 1da74a1d-f64d-4ad4-b04c-85f26687e683@group-438744F3D4AC->ed90869c-317e-4303-8922-9fa83a3983cb:
>  nextIndex: updateUnconditionally 1 -> 0
> 2019-12-26 19:29:13,519 INFO org.apache.ratis.server.impl.RaftServerProxy: 
> 1da74a1d-f64d-4ad4-b04c-85f26687e683: remove group-AAD5A4D8E6F5:null
> 2019-12-26 19:29:13,519 ERROR 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.ClosePipelineCommandHandler:
>  Can't close pipeline #id: "b30e1e45-9c7b-4912-93e6-aad5a4d8e6f5"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2819) Timeout threshold is too small to exit Safemode when security enabled

2019-12-28 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2819:
-
Description: 
When I run ozone-0.5.0-SNAPSHOT/compose/ozonesecure/test.sh on a computer with 
8 cores at 1999.971 MHz and 16G memory, it always fails and reports "Safemode 
is still on". Then I enlarged the timeout threshold in testlib.sh from 90 
seconds to 360 seconds, recorded the time to exit safemode 10 times, and the 
average time to exit safemode is about 160 seconds.

!image-2019-12-29-11-35-51-332.png!

!image-2019-12-29-11-16-41-739.png!

  was:
When I run ozone-0.5.0-SNAPSHOT/compose/ozonesecure/test.sh on my computer, it 
always fails and reports "Safemode is still on". Then I enlarged the timeout 
threshold in testlib.sh from 90 seconds to 360 seconds, recorded the time to 
exit safemode 10 times, and the average time to exit safemode is about 160 
seconds.

My computer has 8 cores at 1999.971 MHz and 16G memory.

!image-2019-12-29-11-35-51-332.png!

!image-2019-12-29-11-16-41-739.png!


> Timeout threshold is too small to exit Safemode when security enabled
> --
>
> Key: HDDS-2819
> URL: https://issues.apache.org/jira/browse/HDDS-2819
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-29-11-16-41-739.png, 
> image-2019-12-29-11-21-25-621.png, image-2019-12-29-11-35-51-332.png
>
>
> When I run ozone-0.5.0-SNAPSHOT/compose/ozonesecure/test.sh on a computer 
> with 8 cores at 1999.971 MHz and 16G memory, it always fails and reports 
> "Safemode is still on". Then I enlarged the timeout threshold in testlib.sh 
> from 90 seconds to 360 seconds, recorded the time to exit safemode 10 times, 
> and the average time to exit safemode is about 160 seconds.
> !image-2019-12-29-11-35-51-332.png!
> !image-2019-12-29-11-16-41-739.png!
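The enlarged threshold described above amounts to polling the safemode status with a bigger deadline. A minimal sketch of such a wait loop (the `wait_for` helper and the safemode command below are illustrative names under assumption, not the actual testlib.sh code):

```shell
#!/usr/bin/env sh
# wait_for: poll a command until it succeeds or a deadline (seconds) expires.
# Returns 0 on success, 1 on timeout.
wait_for() {
  timeout="$1"; shift       # deadline in seconds
  start=$(date +%s)
  until "$@"; do
    now=$(date +%s)
    if [ $((now - start)) -ge "$timeout" ]; then
      echo "Timed out after ${timeout}s waiting for: $*" >&2
      return 1
    fi
    sleep 1                 # poll interval
  done
}

# e.g. wait up to 360 seconds (instead of 90) for safemode to exit
# (the status command is hypothetical):
# wait_for 360 sh -c 'ozone scmcli safemode status | grep -q "out of safe mode"'
```

On slower machines the 90-second deadline expires before the ~160-second average exit time measured above, which is why only the deadline, not the loop, needs to change.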






[jira] [Updated] (HDDS-2819) Timeout threshold is too small to exit Safemode when security enabled

2019-12-28 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2819:
-
Description: 
When I run ozone-0.5.0-SNAPSHOT/compose/ozonesecure/test.sh on my computer, it 
always fails and reports "Safemode is still on". Then I enlarged the timeout 
threshold in testlib.sh from 90 seconds to 360 seconds, recorded the time to 
exit safemode 10 times, and the average time to exit safemode is about 160 
seconds.

My computer has 8 cores at 1999.971 MHz and 16G memory.

!image-2019-12-29-11-35-51-332.png!

!image-2019-12-29-11-16-41-739.png!

  was:
When I run ozone-0.5.0-SNAPSHOT/compose/ozonesecure/test.sh on my computer, it 
always fails and reports "Safemode is still on". Then I enlarged the timeout 
threshold in testlib.sh from 90 seconds to 360 seconds, recorded the time to 
exit safemode 10 times, and the average time to exit safemode is about 160 
seconds.

My computer has 8 cores at 1999.971 MHz and 16G memory.

!image-2019-12-29-11-21-25-621.png!

!image-2019-12-29-11-16-41-739.png!


> Timeout threshold is too small to exit Safemode when security enabled
> --
>
> Key: HDDS-2819
> URL: https://issues.apache.org/jira/browse/HDDS-2819
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-29-11-16-41-739.png, 
> image-2019-12-29-11-21-25-621.png, image-2019-12-29-11-35-51-332.png
>
>
> When I run ozone-0.5.0-SNAPSHOT/compose/ozonesecure/test.sh on my computer, 
> it always fails and reports "Safemode is still on". Then I enlarged the 
> timeout threshold in testlib.sh from 90 seconds to 360 seconds, recorded the 
> time to exit safemode 10 times, and the average time to exit safemode is 
> about 160 seconds.
> My computer has 8 cores at 1999.971 MHz and 16G memory.
> !image-2019-12-29-11-35-51-332.png!
> !image-2019-12-29-11-16-41-739.png!






[jira] [Updated] (HDDS-2819) Timeout threshold is too small to exit Safemode when security enabled

2019-12-28 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2819:
-
Attachment: image-2019-12-29-11-35-51-332.png

> Timeout threshold is too small to exit Safemode when security enabled
> --
>
> Key: HDDS-2819
> URL: https://issues.apache.org/jira/browse/HDDS-2819
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-29-11-16-41-739.png, 
> image-2019-12-29-11-21-25-621.png, image-2019-12-29-11-35-51-332.png
>
>
> When I run ozone-0.5.0-SNAPSHOT/compose/ozonesecure/test.sh on my computer, 
> it always fails and reports "Safemode is still on". Then I enlarged the 
> timeout threshold in testlib.sh from 90 seconds to 360 seconds, recorded the 
> time to exit safemode 10 times, and the average time to exit safemode is 
> about 160 seconds.
> My computer has 8 cores at 1999.971 MHz and 16G memory.
> !image-2019-12-29-11-21-25-621.png!
> !image-2019-12-29-11-16-41-739.png!






[jira] [Updated] (HDDS-2819) Timeout threshold is too small to exit Safemode when security enabled

2019-12-28 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2819:
-
Description: 
When I run ozone-0.5.0-SNAPSHOT/compose/ozonesecure/test.sh on my computer, it 
always fails and reports "Safemode is still on". Then I enlarged the timeout 
threshold in testlib.sh from 90 seconds to 360 seconds, recorded the time to 
exit safemode 10 times, and the average time to exit safemode is about 160 
seconds.

My computer has 8 cores at 1999.971 MHz and 16G memory.

!image-2019-12-29-11-21-25-621.png!

!image-2019-12-29-11-16-41-739.png!

  was:
When I run ozone-0.5.0-SNAPSHOT/compose/ozonesecure/test.sh on my computer, it 
always fails and reports "Safemode is still on". Then I enlarged the timeout 
threshold in testlib.sh from 90 seconds to 360 seconds, recorded the time to 
exit safemode 10 times, and the average time to exit safemode is about 160 
seconds.

My computer has 8 cores and 16G memory.

!image-2019-12-29-11-21-25-621.png!

!image-2019-12-29-11-16-41-739.png!


> Timeout threshold is too small to exit Safemode when security enabled
> --
>
> Key: HDDS-2819
> URL: https://issues.apache.org/jira/browse/HDDS-2819
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-29-11-16-41-739.png, 
> image-2019-12-29-11-21-25-621.png
>
>
> When I run ozone-0.5.0-SNAPSHOT/compose/ozonesecure/test.sh on my computer, 
> it always fails and reports "Safemode is still on". Then I enlarged the 
> timeout threshold in testlib.sh from 90 seconds to 360 seconds, recorded the 
> time to exit safemode 10 times, and the average time to exit safemode is 
> about 160 seconds.
> My computer has 8 cores at 1999.971 MHz and 16G memory.
> !image-2019-12-29-11-21-25-621.png!
> !image-2019-12-29-11-16-41-739.png!






[jira] [Updated] (HDDS-2819) Timeout threshold is too small to exit Safemode when security enabled

2019-12-28 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2819:
-
Description: 
When I run ozone-0.5.0-SNAPSHOT/compose/ozonesecure/test.sh on my computer, it 
always fails and reports "Safemode is still on". Then I enlarged the timeout 
threshold in testlib.sh from 90 seconds to 360 seconds, recorded the time to 
exit safemode 10 times, and the average time to exit safemode is about 160 
seconds.

My computer has 8 cores and 16G memory.

!image-2019-12-29-11-21-25-621.png!

!image-2019-12-29-11-16-41-739.png!

  was:
!image-2019-12-29-11-21-25-621.png!

!image-2019-12-29-11-16-41-739.png!


> Timeout threshold is too small to exit Safemode when security enabled
> --
>
> Key: HDDS-2819
> URL: https://issues.apache.org/jira/browse/HDDS-2819
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-29-11-16-41-739.png, 
> image-2019-12-29-11-21-25-621.png
>
>
> When I run ozone-0.5.0-SNAPSHOT/compose/ozonesecure/test.sh on my computer, 
> it always fails and reports "Safemode is still on". Then I enlarged the 
> timeout threshold in testlib.sh from 90 seconds to 360 seconds, recorded the 
> time to exit safemode 10 times, and the average time to exit safemode is 
> about 160 seconds.
> My computer has 8 cores and 16G memory.
> !image-2019-12-29-11-21-25-621.png!
> !image-2019-12-29-11-16-41-739.png!






[jira] [Updated] (HDDS-2819) Timeout threshold is too small to exit Safemode when security enabled

2019-12-28 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2819:
-
Attachment: image-2019-12-29-11-21-25-621.png

> Timeout threshold is too small to exit Safemode when security enabled
> --
>
> Key: HDDS-2819
> URL: https://issues.apache.org/jira/browse/HDDS-2819
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-29-11-16-41-739.png, 
> image-2019-12-29-11-21-25-621.png
>
>
> !image-2019-12-29-11-16-41-739.png!






[jira] [Updated] (HDDS-2819) Timeout threshold is too small to exit Safemode when security enabled

2019-12-28 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2819:
-
Description: 
!image-2019-12-29-11-21-25-621.png!

!image-2019-12-29-11-16-41-739.png!

  was:!image-2019-12-29-11-16-41-739.png!


> Timeout threshold is too small to exit Safemode when security enabled
> --
>
> Key: HDDS-2819
> URL: https://issues.apache.org/jira/browse/HDDS-2819
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-29-11-16-41-739.png, 
> image-2019-12-29-11-21-25-621.png
>
>
> !image-2019-12-29-11-21-25-621.png!
> !image-2019-12-29-11-16-41-739.png!






[jira] [Updated] (HDDS-2819) Timeout threshold is too small to exit Safemode when security enabled

2019-12-28 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2819:
-
Description: !image-2019-12-29-11-16-41-739.png!

> Timeout threshold is too small to exit Safemode when security enabled
> --
>
> Key: HDDS-2819
> URL: https://issues.apache.org/jira/browse/HDDS-2819
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-29-11-16-41-739.png
>
>
> !image-2019-12-29-11-16-41-739.png!






[jira] [Created] (HDDS-2819) Timeout threshold is too small to exit Safemode when security enabled

2019-12-28 Thread runzhiwang (Jira)
runzhiwang created HDDS-2819:


 Summary: Timeout threshold is too small to exit Safemode when 
security enabled
 Key: HDDS-2819
 URL: https://issues.apache.org/jira/browse/HDDS-2819
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: runzhiwang









[jira] [Comment Edited] (HDDS-2807) Fail Unit Test: TestMiniChaosOzoneCluster

2019-12-26 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003272#comment-17003272
 ] 

runzhiwang edited comment on HDDS-2807 at 12/27/19 7:36 AM:


I'm working on it.


was (Author: runzhiwang):
[~msingh] Could you help resolve this problem?

> Fail Unit Test: TestMiniChaosOzoneCluster
> -
>
> Key: HDDS-2807
> URL: https://issues.apache.org/jira/browse/HDDS-2807
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: runzhiwang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2019-12-25-21-16-25-372.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I run the unit tests in the docker image elek/ozone-build:20191106-1 on my 
> machine, but the unit test TestMiniChaosOzoneCluster cannot pass. The related 
> messages in 
> hadoop-ozone/fault-injection-test/mini-chaos-tests/target/surefire-reports/org.apache.hadoop.ozone.TestMiniChaosOzoneCluster-output.txt
> are as follows:
> 2019-12-25 15:38:20,747 [pool-244-thread-5] WARN io.KeyOutputStream 
> (KeyOutputStream.java:handleException(280)) - Encountered exception 
> java.io.IOException: Unexpected Storage Container Exception: 
> java.util.concurrent.CompletionException: 
> java.util.concurrent.CompletionException: 
> org.apache.ratis.protocol.AlreadyClosedException: 
> SlidingWindow$Client:client-C946713E1023->RAFT is closed. on the pipeline 
> Pipeline[ Id: 0ff487a6-5734-4ec6-babd-156a65d321dc, Nodes: 
> 4dbb8a5a-3a9a-42d5-bbf9-9c65f4703da2\{ip: 10.10.10.10, host: 10.10.10.10, 
> networkLocation: /default-rack, certSerialId: 
> null}36059332-e77c-4d4c-a133-ad28b3db004b\{ip: 10.10.10.10, host: 
> 10.10.10.10, networkLocation: /default-rack, certSerialId: 
> null}5c95288c-1710-49a2-a896-55f5568462e2\{ip: 10.10.10.10, host: 
> 10.10.10.10, networkLocation: /default-rack, certSerialId: null}, Type:RATIS, 
> Factor:THREE, State:OPEN, leaderId:36059332-e77c-4d4c-a133-ad28b3db004b ]. 
> The last committed block length is 0, uncommitted data length is 8192 retry 
> count 0
>  
> !image-2019-12-25-21-16-25-372.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Updated] (HDDS-2812) Fix low version wget cannot resolve the proxy of https

2019-12-26 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2812:
-
Description: 
 

When running compose/ozonesecure/test.sh and compose/ozonesecure-mr/test.sh on 
a computer that connects to the outer network through a proxy, it fails to 
{{wget https://github.com}} as the image shows, because the version of wget in 
{{openjdk:8u191-jdk-alpine3.9}} is too low and cannot resolve the proxy for https.

{{openjdk:8u191-jdk-alpine3.9}} was used at 
[https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/Dockerfile-krb5#L18].

!image-2019-12-26-19-27-51-010.png!

  was:
When running compose/ozone-mr/hadoop27/test.sh on my machine, which connects to 
the network through a proxy, it fails to execute sudo apk add --update py-pip 
in the docker container, because sudo resets all environment variables 
including the proxy, so it fails to download 
[http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz.] 
With sudo -E, the command runs with the user's environment, so the proxy 
remains valid. 

!image-2019-12-26-19-27-51-010.png!


> Fix low version wget cannot resolve the proxy of https
> --
>
> Key: HDDS-2812
> URL: https://issues.apache.org/jira/browse/HDDS-2812
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: runzhiwang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2019-12-26-19-27-51-010.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>  
> When running compose/ozonesecure/test.sh and compose/ozonesecure-mr/test.sh 
> on a computer that connects to the outer network through a proxy, it fails 
> to {{wget https://github.com}} as the image shows, because the version of 
> wget in {{openjdk:8u191-jdk-alpine3.9}} is too low and cannot resolve the 
> proxy for https.
> {{openjdk:8u191-jdk-alpine3.9}} was used at 
> [https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/Dockerfile-krb5#L18].
> !image-2019-12-26-19-27-51-010.png!
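A common workaround for this class of problem (a sketch, not the committed fix) is to install the full GNU wget over the limited busybox one in the Alpine-based image, or to fall back to curl; the proxy host below is a placeholder:

```shell
# Inside the Alpine-based image, the busybox wget ignores https_proxy;
# the full GNU wget from the "wget" package understands it:
#   apk add --no-cache wget ca-certificates

# Both GNU wget and curl consult these environment variables:
export http_proxy="http://proxy.example.com:8080"    # placeholder proxy
export https_proxy="http://proxy.example.com:8080"

# curl can replace the failing download step, e.g.:
#   curl -fSL -o out.zip https://github.com/...
```

Either route keeps the Dockerfile on the same base image while making https downloads proxy-aware.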






[jira] [Updated] (HDDS-2814) Fail to connect s3g in docker container with network proxy

2019-12-26 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2814:
-
Description: 
When running compose/ozones3-haproxy/test.sh on a computer that connects to the 
network through a proxy, it fails to curl 
[http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] as 
the image shows, because curl tries to resolve 
[http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
through the network proxy, which causes the failure. Actually, 
[http://scm:9876|http://scm:9876/] is the local realm name used by the docker 
container and should not be resolved by the proxy.

!image-2019-12-27-13-32-20-850.png!

  was:
When running compose/ozones3-haproxy/test.sh on my machine, which connects to 
the network through a proxy, it fails to curl 
[http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] as 
the image shows, because curl tries to resolve 
[http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
through the network proxy, which causes the failure. Actually, 
[http://scm:9876|http://scm:9876/] is the local realm name used by the docker 
container and should not be resolved by the proxy.

!image-2019-12-27-13-32-20-850.png!


> Fail to connect s3g in docker container with network proxy
> ---
>
> Key: HDDS-2814
> URL: https://issues.apache.org/jira/browse/HDDS-2814
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: runzhiwang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2019-12-27-13-32-20-850.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When running compose/ozones3-haproxy/test.sh on a computer that connects to 
> the network through a proxy, it fails to curl 
> [http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
> as the image shows, because curl tries to resolve 
> [http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
> through the network proxy, which causes the failure. Actually, 
> [http://scm:9876|http://scm:9876/] is the local realm name used by the docker 
> container and should not be resolved by the proxy.
> !image-2019-12-27-13-32-20-850.png!
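The standard way to keep such in-cluster hostnames off the proxy is the no_proxy convention. A sketch (the service names are taken from the compose setup; the actual fix in the test scripts may differ):

```shell
# no_proxy lists hosts that curl (and most tools) resolve directly,
# bypassing the configured proxy. scm and s3g are docker-compose service
# names on the container network.
export no_proxy="scm,s3g,localhost,127.0.0.1"
export NO_PROXY="$no_proxy"    # some tools only honor the uppercase form

# curl also supports a per-invocation exclusion list:
#   curl --noproxy scm http://scm:9876/
```

With the exclusion in place, curl connects to scm:9876 directly instead of asking the external proxy to resolve a name that only exists on the compose network.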






[jira] [Updated] (HDDS-2814) Fail to connect s3g in docker container with network proxy

2019-12-26 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2814:
-
Attachment: image-2019-12-27-13-32-20-850.png

> Fail to connect s3g in docker container with network proxy
> ---
>
> Key: HDDS-2814
> URL: https://issues.apache.org/jira/browse/HDDS-2814
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-27-13-32-20-850.png
>
>
> When running compose/ozones3-haproxy/test.sh on my machine, which connects 
> to the network through a proxy, it fails to curl 
> [http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
> as the image shows, because curl tries to resolve 
> [http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
> through the network proxy, which causes the failure. Actually, 
> [http://scm:9876|http://scm:9876/] is the local realm name used by the docker 
> container and should not be resolved by the proxy.
> !image-2019-12-27-11-45-39-015.png! .






[jira] [Updated] (HDDS-2814) Fail to connect s3g in docker container with network proxy

2019-12-26 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2814:
-
Description: 
When running compose/ozones3-haproxy/test.sh on my machine, which connects to 
the network through a proxy, it fails to curl 
[http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] as 
the image shows, because curl tries to resolve 
[http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
through the network proxy, which causes the failure. Actually, 
[http://scm:9876|http://scm:9876/] is the local realm name used by the docker 
container and should not be resolved by the proxy.

!image-2019-12-27-13-32-20-850.png!

  was:
When running compose/ozones3-haproxy/test.sh on my machine, which connects to 
the network through a proxy, it fails to curl 
[http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] as 
the image shows, because curl tries to resolve 
[http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
through the network proxy, which causes the failure. Actually, 
[http://scm:9876|http://scm:9876/] is the local realm name used by the docker 
container and should not be resolved by the proxy.

!image-2019-12-27-11-45-39-015.png! .


> Fail to connect s3g in docker container with network proxy
> ---
>
> Key: HDDS-2814
> URL: https://issues.apache.org/jira/browse/HDDS-2814
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-27-13-32-20-850.png
>
>
> When running compose/ozones3-haproxy/test.sh on my machine, which connects 
> to the network through a proxy, it fails to curl 
> [http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
> as the image shows, because curl tries to resolve 
> [http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
> through the network proxy, which causes the failure. Actually, 
> [http://scm:9876|http://scm:9876/] is the local realm name used by the docker 
> container and should not be resolved by the proxy.
> !image-2019-12-27-13-32-20-850.png!






[jira] [Updated] (HDDS-2814) Fail to connect s3g in docker container with network proxy

2019-12-26 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2814:
-
Attachment: (was: image-2019-12-27-11-45-39-015.png)

> Fail to connect s3g in docker container with network proxy
> ---
>
> Key: HDDS-2814
> URL: https://issues.apache.org/jira/browse/HDDS-2814
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: runzhiwang
>Priority: Major
>
> When running compose/ozones3-haproxy/test.sh on my machine, which connects 
> to the network through a proxy, it fails to curl 
> [http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
> as the image shows, because curl tries to resolve 
> [http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
> through the network proxy, which causes the failure. Actually, 
> [http://scm:9876|http://scm:9876/] is the local realm name used by the docker 
> container and should not be resolved by the proxy.
> !image-2019-12-27-11-45-39-015.png! .






[jira] [Updated] (HDDS-2814) Fail to connect s3g in docker container with network proxy

2019-12-26 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2814:
-
Summary: Fail to connect s3g in docker container with network proxy  (was: 
Can not connect to s3g in docker container with network proxy)

> Fail to connect s3g in docker container with network proxy
> ---
>
> Key: HDDS-2814
> URL: https://issues.apache.org/jira/browse/HDDS-2814
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-27-11-45-39-015.png
>
>
> When running compose/ozones3-haproxy/test.sh on my machine, which connects 
> to the network through a proxy, it fails to curl 
> [http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
> as the image shows, because curl tries to resolve 
> [http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
> through the network proxy, which causes the failure. Actually, 
> [http://scm:9876|http://scm:9876/] is the local realm name used by the docker 
> container and should not be resolved by the proxy.
> !image-2019-12-27-11-45-39-015.png! .






[jira] [Updated] (HDDS-2814) Can not connect to s3g in docker container with network proxy

2019-12-26 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2814:
-
Description: 
When running compose/ozones3-haproxy/test.sh on my machine, which connects to 
the network through a proxy, it fails to curl 
[http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] as 
the image shows, because curl tries to resolve 
[http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
through the network proxy, which causes the failure. Actually, 
[http://scm:9876|http://scm:9876/] is the local realm name used by the docker 
container and should not be resolved by the proxy.

!image-2019-12-27-11-45-39-015.png! .

  was:
When running compose/ozones3-haproxy/test.sh on my machine, which connects to 
the network through a proxy, it fails to curl 
[http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] as 
the image shows, because curl tries to resolve 
[http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
through the network proxy, which causes the failure. 

!image-2019-12-27-11-45-39-015.png! .


> Can not connect to s3g in docker container with network proxy
> -
>
> Key: HDDS-2814
> URL: https://issues.apache.org/jira/browse/HDDS-2814
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-27-11-45-39-015.png
>
>
> When running compose/ozones3-haproxy/test.sh on my machine, which connects 
> to the network through a proxy, it fails to curl 
> [http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
> as the image shows, because curl tries to resolve 
> [http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
> through the network proxy, which causes the failure. Actually, 
> [http://scm:9876|http://scm:9876/] is the local realm name used by the docker 
> container and should not be resolved by the proxy.
> !image-2019-12-27-11-45-39-015.png! .






[jira] [Commented] (HDDS-2814) Can not connect to s3g in docker container with network proxy

2019-12-26 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003881#comment-17003881
 ] 

runzhiwang commented on HDDS-2814:
--

I'm working on it.

> Can not connect to s3g in docker container with network proxy
> -
>
> Key: HDDS-2814
> URL: https://issues.apache.org/jira/browse/HDDS-2814
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-27-11-45-39-015.png
>
>
> When running compose/ozones3-haproxy/test.sh on my machine, which connects 
> to the network through a proxy, it fails to curl 
> [http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
> as the image shows, because curl tries to resolve 
> [http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
> through the network proxy, which causes the failure. 
> !image-2019-12-27-11-45-39-015.png! .






[jira] [Updated] (HDDS-2814) Can not connect to s3g in docker container with network proxy

2019-12-26 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2814:
-
Description: 
When running compose/ozones3-haproxy/test.sh on my machine, which connects to 
the network through a proxy, it fails to curl 
[http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] as 
the image shows, because curl tries to resolve 
[http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
through the network proxy, which causes the failure. 

!image-2019-12-27-11-45-39-015.png!

> Can not connect to s3g in docker container with network proxy
> -
>
> Key: HDDS-2814
> URL: https://issues.apache.org/jira/browse/HDDS-2814
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-27-11-45-39-015.png
>
>
> When running compose/ozones3-haproxy/test.sh on my machine, which connects 
> to the network through a proxy, it fails to curl 
> [http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
> as the image shows, because curl tries to resolve 
> [http://scm:9876|http://scm:9876/static/bootstrap-3.4.1/js/bootstrap.min.js] 
> through the network proxy, which causes the failure. 
> !image-2019-12-27-11-45-39-015.png!






[jira] [Updated] (HDDS-2814) Can not connect to s3g in docker container with network proxy

2019-12-26 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-2814:
-
Attachment: image-2019-12-27-11-45-39-015.png

> Can not connect to s3g in docker container with network proxy
> -
>
> Key: HDDS-2814
> URL: https://issues.apache.org/jira/browse/HDDS-2814
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-27-11-45-39-015.png
>
>






