[jira] [Commented] (HDFS-15533) Provide DFS API compatible class(ViewDistributedFileSystem), but use ViewFileSystemOverloadScheme inside

2022-06-30 Thread JiangHua Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17561289#comment-17561289
 ] 

JiangHua Zhu commented on HDFS-15533:
-

Nice to talk to you, [~umamaheswararao].
It seems that the redundant NameNode#FS_HDFS_IMPL_KEY should be removed here.
If necessary, I will create a new JIRA to fix it.
I look forward to continuing the discussion with you.

> Provide DFS API compatible class(ViewDistributedFileSystem), but use 
> ViewFileSystemOverloadScheme inside
> 
>
> Key: HDFS-15533
> URL: https://issues.apache.org/jira/browse/HDFS-15533
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: dfs, viewfs
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
>
> I have been working on an idea since last week: we wanted to provide 
> DFS-compatible APIs with mount functionality, so that existing DFS 
> applications can work without class cast issues.
> When we tested with other components like Hive and HBase, I noticed some 
> ClassCastExceptions.
> {code:java}
> HBase example:
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme cannot be cast to 
> org.apache.hadoop.hdfs.DistributedFileSystem
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme cannot be cast to 
> org.apache.hadoop.hdfs.DistributedFileSystem
>  at 
> org.apache.hadoop.hbase.util.FSUtils.getDFSHedgedReadMetrics(FSUtils.java:1748)
>  at 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerWrapperImpl.<init>(MetricsRegionServerWrapperImpl.java:146)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1594)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1001)
>  at java.lang.Thread.run(Thread.java:748){code}
> {code:java}
> Hive:
> |io.AcidUtils|: Failed to get files with ID; using regular API: Only 
> supported for DFS; got class 
> org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme{code}
> So, the implementation details are as follows:
> We extended DistributedFileSystem and created a class called 
> "ViewDistributedFileSystem".
> This vfs (ViewDistributedFileSystem) first tries to initialize 
> ViewFileSystemOverloadScheme. On success, calls are delegated to the vfs. If 
> initialization fails due to missing mount points or other errors, it simply 
> falls back to regular DFS initialization. If users do not configure any 
> mounts, the system behaves exactly like today's DFS. If there are mount 
> points, vfs functionality becomes available under DFS.
> I have a patch and will post it in some time.
>  
>  
>  
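A minimal sketch of the fallback pattern described above (an editorial illustration only, not the actual HDFS-15533 patch; as the description notes, the real class also delegates each individual FileSystem call to the mount-aware instance, which is omitted here):

{code:java}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class ViewDistributedFileSystem extends DistributedFileSystem {
  // Mount-aware file system; stays null when no mount points are configured.
  private ViewFileSystemOverloadScheme vfs;

  @Override
  public void initialize(URI uri, Configuration conf) throws IOException {
    try {
      // First try to initialize the mount-aware ViewFileSystemOverloadScheme.
      ViewFileSystemOverloadScheme candidate = new ViewFileSystemOverloadScheme();
      candidate.initialize(uri, conf);
      this.vfs = candidate;
    } catch (IOException e) {
      // No mount points (or another init error): fall back to plain DFS,
      // so the system behaves exactly like today's DistributedFileSystem.
      super.initialize(uri, conf);
    }
  }
}
{code}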



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-13522) RBF: Support observer node from Router-Based Federation

2022-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13522?focusedWorklogId=786881&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-786881
 ]

ASF GitHub Bot logged work on HDFS-13522:
-

Author: ASF GitHub Bot
Created on: 30/Jun/22 23:51
Start Date: 30/Jun/22 23:51
Worklog Time Spent: 10m 
  Work Description: simbadzina opened a new pull request, #4523:
URL: https://github.com/apache/hadoop/pull/4523

   ### Description of PR
   Allow routers to use the observer namenode without an msync on every read.
   It is layered on top of the following two PRs:
   
   - https://github.com/apache/hadoop/pull/4311
   - https://github.com/apache/hadoop/pull/4127
   
   I'm still working on cleaning up this PR to add documentation, pick better 
variable names, and remove unneeded features like "disabling observer reads 
from the client side". I will also move the IPC-related changes to the first 
PR in the series (https://github.com/apache/hadoop/pull/4311).
   
   ### How was this patch tested?
   
   New unit tests.
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




Issue Time Tracking
---

Worklog Id: (was: 786881)
Time Spent: 15.5h  (was: 15h 20m)

> RBF: Support observer node from Router-Based Federation
> ---
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522.001.patch, HDFS-13522.002.patch, 
> HDFS-13522_WIP.patch, RBF_ Observer support.pdf, Router+Observer RPC 
> clogging.png, ShortTerm-Routers+Observer.png
>
>  Time Spent: 15.5h
>  Remaining Estimate: 0h
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16646) RBF: Support an elastic RouterRpcFairnessPolicyController

2022-06-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-16646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-16646:
---
Description: 
As we all know, `StaticRouterRpcFairnessPolicyController` is very helpful for 
RBF to minimize the impact of clients connecting to healthy vs. unhealthy 
NameNodes.
But in a prod environment, the traffic of clients accessing each NS and the 
pressure on downstream NameNodes change dynamically. So if we only have one 
static permit conf, RBF cannot adapt to the changes in traffic to achieve 
optimal results.

So here I propose an elastic RouterRpcFairnessPolicyController to help RBF 
adapt to traffic changes and achieve an optimal result.

The overall idea is:
* Each name service can configure its exclusive permits, as with 
`StaticRouterRpcFairnessPolicyController`.
* TotalPermits is larger than sum(NsExclusivePermit), and TotalPermits - 
sum(NsExclusivePermit) is marked as SharedPermits.
* Each name service can preempt SharedPermits after its own exclusive permits 
are used up.
* But the maximum number of SharedPermits preempted by each nameservice should 
be limited, e.g. to 20% of SharedPermits.

Suppose we have 200 handlers and 5 name services, and each name service is 
configured with different exclusive permits, like:
| NS1 | NS2 | NS3 | NS4 | NS5 | Concurrent NS |
|-- | -- | -- | -- | -- | -- |
| 9 | 11 | 8 | 12 | 10 | 50 |

The `sum(NsExclusivePermit)` is 100, and `SharedPermits = TotalPermits(200) - 
sum(NsExclusivePermit)(100) = 100`.
Suppose we configure that each nameservice can preempt up to 20% of 
SharedPermits, marked as `elasticPercent`.

Then, from the point of view of a single NS, the permits it can use are as 
follows:
- Exclusive permits, which cannot be used by other name services.
- A limited share of SharedPermits; whether it can actually use that many 
depends on the remaining number of SharedPermits, because SharedPermits can be 
preempted by all nameservices.

If we configure `elasticPercent=100`, one nameservice can use up all 
SharedPermits.
If we configure `elasticPercent=0`, a nameservice can only use its exclusive 
permits.
If we configure `elasticPercent=20`, RBF can tolerate 5 unhealthy name 
services at the same time.

In our prod environment, we configured it as follows, and it works well:
- RBF has 3000 handlers
- Each nameservice has 10 exclusive permits
- `elasticPercent` is 30%

Of course, we need to configure reasonable parameters according to the prod 
traffic.
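
A minimal sketch of this elastic permit scheme (an editorial illustration under the assumptions above; the class and method names are made up and this is not the HDFS-16646 patch):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

/** Exclusive permits per name service plus a capped share of a common pool. */
public class ElasticPermitSketch {
  private final Map<String, Semaphore> exclusive = new ConcurrentHashMap<>();
  private final Map<String, Semaphore> sharedCap = new ConcurrentHashMap<>();
  private final Semaphore shared; // SharedPermits = TotalPermits - sum(NsExclusivePermit)

  public ElasticPermitSketch(Map<String, Integer> exclusivePermits,
                             int totalPermits, int elasticPercent) {
    int sumExclusive = exclusivePermits.values().stream().mapToInt(Integer::intValue).sum();
    int sharedPermits = totalPermits - sumExclusive;            // e.g. 200 - 100 = 100
    int perNsSharedCap = sharedPermits * elasticPercent / 100;  // e.g. 20% -> 20
    this.shared = new Semaphore(sharedPermits);
    exclusivePermits.forEach((ns, n) -> {
      exclusive.put(ns, new Semaphore(n));
      sharedCap.put(ns, new Semaphore(perNsSharedCap));
    });
  }

  /** Try the NS's exclusive permits first, then (up to its cap) the shared pool. */
  public boolean acquirePermit(String ns) {
    if (exclusive.get(ns).tryAcquire()) {
      return true;
    }
    if (sharedCap.get(ns).tryAcquire()) {
      if (shared.tryAcquire()) {
        return true;
      }
      // Shared pool already preempted by other name services.
      sharedCap.get(ns).release();
    }
    return false;
  }
  // A real controller would also track which pool each permit came from so it
  // can be released correctly; that bookkeeping is omitted here for brevity.
}
```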

  was:
### Description of PR
As we all know, `StaticRouterRpcFairnessPolicyController` is very helpful for 
RBF to minimize the impact of clients connecting to healthy vs. unhealthy 
NameNodes.
But in a prod environment, the traffic of clients accessing each NS and the 
pressure on downstream NameNodes change dynamically. So if we only have one 
static permit conf, RBF cannot adapt to the changes in traffic to achieve 
optimal results.

So here I propose an elastic RouterRpcFairnessPolicyController to help RBF 
adapt to traffic changes and achieve an optimal result.

The overall idea is:
- Each name service can configure its exclusive permits, as with 
`StaticRouterRpcFairnessPolicyController`.
- TotalPermits is larger than sum(NsExclusivePermit), and TotalPermits - 
sum(NsExclusivePermit) is marked as SharedPermits.
- Each name service can preempt SharedPermits after its own exclusive permits 
are used up.
- But the maximum number of SharedPermits preempted by each nameservice should 
be limited, e.g. to 20% of SharedPermits.

Suppose we have 200 handlers and 5 name services, and each name service is 
configured with different exclusive permits, like:
| NS1 | NS2 | NS3 | NS4 | NS5 | Concurrent NS |
|-- | -- | -- | -- | -- | -- |
| 9 | 11 | 8 | 12 | 10 | 50 |

The `sum(NsExclusivePermit)` is 100, and `SharedPermits = TotalPermits(200) - 
sum(NsExclusivePermit)(100) = 100`.
Suppose we configure that each nameservice can preempt up to 20% of 
SharedPermits, marked as `elasticPercent`.

Then, from the point of view of a single NS, the permits it can use are as 
follows:
- Exclusive permits, which cannot be used by other name services.
- A limited share of SharedPermits; whether it can actually use that many 
depends on the remaining number of SharedPermits, because SharedPermits can be 
preempted by all nameservices.

If we configure `elasticPercent=100`, one nameservice can use up all 
SharedPermits.
If we configure `elasticPercent=0`, a nameservice can only use its exclusive 
permits.
If we configure `elasticPercent=20`, RBF can tolerate 5 unhealthy name 
services at the same time.

In our prod environment, we configured it as follows, and it works well:
- RBF has 3000 handlers
- Each nameservice has 10 exclusive permits
- `elasticPercent` is 30%

Of course, we need to configure reasonable parameters according to the prod 
traffic.

[jira] [Updated] (HDFS-16646) RBF: Support an elastic RouterRpcFairnessPolicyController

2022-06-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-16646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-16646:
---
Description: 
### Description of PR
As we all know, `StaticRouterRpcFairnessPolicyController` is very helpful for 
RBF to minimize the impact of clients connecting to healthy vs. unhealthy 
NameNodes.
But in a prod environment, the traffic of clients accessing each NS and the 
pressure on downstream NameNodes change dynamically. So if we only have one 
static permit conf, RBF cannot adapt to the changes in traffic to achieve 
optimal results.

So here I propose an elastic RouterRpcFairnessPolicyController to help RBF 
adapt to traffic changes and achieve an optimal result.

The overall idea is:
- Each name service can configure its exclusive permits, as with 
`StaticRouterRpcFairnessPolicyController`.
- TotalPermits is larger than sum(NsExclusivePermit), and TotalPermits - 
sum(NsExclusivePermit) is marked as SharedPermits.
- Each name service can preempt SharedPermits after its own exclusive permits 
are used up.
- But the maximum number of SharedPermits preempted by each nameservice should 
be limited, e.g. to 20% of SharedPermits.

Suppose we have 200 handlers and 5 name services, and each name service is 
configured with different exclusive permits, like:
| NS1 | NS2 | NS3 | NS4 | NS5 | Concurrent NS |
|-- | -- | -- | -- | -- | -- |
| 9 | 11 | 8 | 12 | 10 | 50 |

The `sum(NsExclusivePermit)` is 100, and `SharedPermits = TotalPermits(200) - 
sum(NsExclusivePermit)(100) = 100`.
Suppose we configure that each nameservice can preempt up to 20% of 
SharedPermits, marked as `elasticPercent`.

Then, from the point of view of a single NS, the permits it can use are as 
follows:
- Exclusive permits, which cannot be used by other name services.
- A limited share of SharedPermits; whether it can actually use that many 
depends on the remaining number of SharedPermits, because SharedPermits can be 
preempted by all nameservices.

If we configure `elasticPercent=100`, one nameservice can use up all 
SharedPermits.
If we configure `elasticPercent=0`, a nameservice can only use its exclusive 
permits.
If we configure `elasticPercent=20`, RBF can tolerate 5 unhealthy name 
services at the same time.

In our prod environment, we configured it as follows, and it works well:
- RBF has 3000 handlers
- Each nameservice has 10 exclusive permits
- `elasticPercent` is 30%

Of course, we need to configure reasonable parameters according to the prod 
traffic.

  was:
As we all know, StaticRouterRpcFairnessPolicyController is very helpful for 
RBF to minimize the impact of clients connecting to healthy vs. unhealthy 
NameNodes.
But in a prod environment, the traffic of clients accessing each NS and the 
pressure on downstream NameNodes change dynamically. So if we only have one 
static permit conf, RBF cannot adapt to the changes in traffic to achieve 
optimal results.

So here I propose an elastic RouterRpcFairnessPolicyController to help RBF 
adapt to traffic changes and achieve an optimal result.

The overall idea is:

Each name service can configure its exclusive permits, as with 
StaticRouterRpcFairnessPolicyController.
TotalPermits is larger than sum(NsExclusivePermit), and TotalPermits - 
sum(NsExclusivePermit) is marked as SharedPermits.
Each name service can preempt SharedPermits after its own exclusive permits 
are used up.
But the maximum number of SharedPermits preempted by each nameservice should 
be limited, e.g. to 20% of SharedPermits.
Suppose we have 200 handlers and 5 name services, and each name service is 
configured with different exclusive permits, like:

NS1 NS2 NS3 NS4 NS5 Concurrent NS
9   11  8   12  10  50
The sum(NsExclusivePermit) is 100, and SharedPermits = TotalPermits(200) - 
sum(NsExclusivePermit)(100) = 100.
Suppose we configure that each nameservice can preempt up to 20% of 
SharedPermits, marked as elasticPercent.

Then, from the point of view of a single NS, the permits it can use are as 
follows:

Exclusive permits, which cannot be used by other name services.
A limited share of SharedPermits; whether it can actually use that many 
depends on the remaining number of SharedPermits, because SharedPermits can be 
preempted by all nameservices.
If we configure elasticPercent=100, one nameservice can use up all 
SharedPermits.
If we configure elasticPercent=0, a nameservice can only use its exclusive 
permits.
If we configure elasticPercent=20, RBF can tolerate 5 unhealthy name services 
at the same time.

In our prod environment, we configured it as follows, and it works well:

RBF has 3000 handlers
Each nameservice has 10 exclusive permits
elasticPercent is 30%
Of course, we need to configure reasonable parameters according to the prod 
traffic.


> RBF: Support an 

[jira] [Updated] (HDFS-16646) RBF: Support an elastic RouterRpcFairnessPolicyController

2022-06-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-16646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-16646:
---
Summary: RBF: Support an elastic RouterRpcFairnessPolicyController  (was: 
[RBF] Improved isolation for downstream name nodes. {Elastic})

> RBF: Support an elastic RouterRpcFairnessPolicyController
> -
>
> Key: HDFS-16646
> URL: https://issues.apache.org/jira/browse/HDFS-16646
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16646) RBF: Support an elastic RouterRpcFairnessPolicyController

2022-06-30 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-16646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-16646:
---
Description: 
As we all know, StaticRouterRpcFairnessPolicyController is very helpful for 
RBF to minimize the impact of clients connecting to healthy vs. unhealthy 
NameNodes.
But in a prod environment, the traffic of clients accessing each NS and the 
pressure on downstream NameNodes change dynamically. So if we only have one 
static permit conf, RBF cannot adapt to the changes in traffic to achieve 
optimal results.

So here I propose an elastic RouterRpcFairnessPolicyController to help RBF 
adapt to traffic changes and achieve an optimal result.

The overall idea is:

Each name service can configure its exclusive permits, as with 
StaticRouterRpcFairnessPolicyController.
TotalPermits is larger than sum(NsExclusivePermit), and TotalPermits - 
sum(NsExclusivePermit) is marked as SharedPermits.
Each name service can preempt SharedPermits after its own exclusive permits 
are used up.
But the maximum number of SharedPermits preempted by each nameservice should 
be limited, e.g. to 20% of SharedPermits.
Suppose we have 200 handlers and 5 name services, and each name service is 
configured with different exclusive permits, like:

NS1 NS2 NS3 NS4 NS5 Concurrent NS
9   11  8   12  10  50
The sum(NsExclusivePermit) is 100, and SharedPermits = TotalPermits(200) - 
sum(NsExclusivePermit)(100) = 100.
Suppose we configure that each nameservice can preempt up to 20% of 
SharedPermits, marked as elasticPercent.

Then, from the point of view of a single NS, the permits it can use are as 
follows:

Exclusive permits, which cannot be used by other name services.
A limited share of SharedPermits; whether it can actually use that many 
depends on the remaining number of SharedPermits, because SharedPermits can be 
preempted by all nameservices.
If we configure elasticPercent=100, one nameservice can use up all 
SharedPermits.
If we configure elasticPercent=0, a nameservice can only use its exclusive 
permits.
If we configure elasticPercent=20, RBF can tolerate 5 unhealthy name services 
at the same time.

In our prod environment, we configured it as follows, and it works well:

RBF has 3000 handlers
Each nameservice has 10 exclusive permits
elasticPercent is 30%
Of course, we need to configure reasonable parameters according to the prod 
traffic.

> RBF: Support an elastic RouterRpcFairnessPolicyController
> -
>
> Key: HDFS-16646
> URL: https://issues.apache.org/jira/browse/HDFS-16646
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As we all know, StaticRouterRpcFairnessPolicyController is very helpful for 
> RBF to minimize the impact of clients connecting to healthy vs. unhealthy 
> NameNodes.
> But in a prod environment, the traffic of clients accessing each NS and the 
> pressure on downstream NameNodes change dynamically. So if we only have one 
> static permit conf, RBF cannot adapt to the changes in traffic to achieve 
> optimal results.
> So here I propose an elastic RouterRpcFairnessPolicyController to help RBF 
> adapt to traffic changes and achieve an optimal result.
> The overall idea is:
> Each name service can configure its exclusive permits, as with 
> StaticRouterRpcFairnessPolicyController.
> TotalPermits is larger than sum(NsExclusivePermit), and TotalPermits - 
> sum(NsExclusivePermit) is marked as SharedPermits.
> Each name service can preempt SharedPermits after its own exclusive permits 
> are used up.
> But the maximum number of SharedPermits preempted by each nameservice should 
> be limited, e.g. to 20% of SharedPermits.
> Suppose we have 200 handlers and 5 name services, and each name service is 
> configured with different exclusive permits, like:
> NS1   NS2 NS3 NS4 NS5 Concurrent NS
> 9 11  8   12  10  50
> The sum(NsExclusivePermit) is 100, and SharedPermits = TotalPermits(200) - 
> sum(NsExclusivePermit)(100) = 100.
> Suppose we configure that each nameservice can preempt up to 20% of 
> SharedPermits, marked as elasticPercent.
> Then, from the point of view of a single NS, the permits it can use are as 
> follows:
> Exclusive permits, which cannot be used by other name services.
> A limited share of SharedPermits; whether it can actually use that many 
> depends on the remaining number of SharedPermits, because SharedPermits can 
> be preempted by all nameservices.
> If we configure elasticPercent=100, one nameservice can use up all 
> SharedPermits.
> If we 

[jira] [Work logged] (HDFS-16646) [RBF] Improved isolation for downstream name nodes. {Elastic}

2022-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16646?focusedWorklogId=786575&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-786575
 ]

ASF GitHub Bot logged work on HDFS-16646:
-

Author: ASF GitHub Bot
Created on: 30/Jun/22 13:12
Start Date: 30/Jun/22 13:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4519:
URL: https://github.com/apache/hadoop/pull/4519#issuecomment-1171203116

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 17s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  
hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 54 
unchanged - 1 fixed = 54 total (was 55)  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  
hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 0 new 
+ 54 unchanged - 1 fixed = 54 total (was 55)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 27s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4519/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 14 new + 1 
unchanged - 0 fixed = 15 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  spotbugs  |   1m 41s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4519/1/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf generated 2 new + 0 unchanged - 0 fixed 
= 2 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  22m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  22m 23s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4519/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 125m 26s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs-rbf |
   |  |  

[jira] [Work logged] (HDFS-16646) [RBF] Improved isolation for downstream name nodes. {Elastic}

2022-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16646?focusedWorklogId=786515&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-786515
 ]

ASF GitHub Bot logged work on HDFS-16646:
-

Author: ASF GitHub Bot
Created on: 30/Jun/22 11:05
Start Date: 30/Jun/22 11:05
Worklog Time Spent: 10m 
  Work Description: ZanderXu opened a new pull request, #4519:
URL: https://github.com/apache/hadoop/pull/4519

   ### Description of PR
   As we all know, `StaticRouterRpcFairnessPolicyController` is very helpful 
for RBF to minimize the impact of clients connecting to healthy vs. unhealthy 
NameNodes.
   But in a prod environment, the traffic of clients accessing each NS and the 
pressure on downstream NameNodes change dynamically. So if we only have one 
static permit conf, RBF cannot adapt to the changes in traffic to achieve 
optimal results.
   
   So here I propose an elastic RouterRpcFairnessPolicyController to help RBF 
adapt to traffic changes and achieve an optimal result.
   
   The overall idea is:
   - Each name service can configure its exclusive permits, as with 
`StaticRouterRpcFairnessPolicyController`.
   - TotalPermits is larger than sum(NsExclusivePermit), and TotalPermits - 
sum(NsExclusivePermit) is marked as SharedPermits.
   - Each name service can preempt SharedPermits after its own exclusive 
permits are used up.
   - But the maximum number of SharedPermits preempted by each nameservice 
should be limited, e.g. to 20% of SharedPermits.
   
   Suppose we have 200 handlers and 5 name services, and each name service is 
configured with different exclusive permits, like:
   | NS1 | NS2 | NS3 | NS4 | NS5 | Concurrent NS |
   |

Issue Time Tracking
---

Worklog Id: (was: 786515)
Remaining Estimate: 0h
Time Spent: 10m

> [RBF] Improved isolation for downstream name nodes. {Elastic}
> -
>
> Key: HDFS-16646
> URL: https://issues.apache.org/jira/browse/HDFS-16646
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16646) [RBF] Improved isolation for downstream name nodes. {Elastic}

2022-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16646:
--
Labels: pull-request-available  (was: )

> [RBF] Improved isolation for downstream name nodes. {Elastic}
> -
>
> Key: HDFS-16646
> URL: https://issues.apache.org/jira/browse/HDFS-16646
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16645) Multi inProgress segments caused "Invalid log manifest"

2022-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16645?focusedWorklogId=786514&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-786514
 ]

ASF GitHub Bot logged work on HDFS-16645:
-

Author: ASF GitHub Bot
Created on: 30/Jun/22 11:04
Start Date: 30/Jun/22 11:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4518:
URL: https://github.com/apache/hadoop/pull/4518#issuecomment-1171081128

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   1m 52s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4518/1/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 23s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4518/1/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  compile  |   0m 20s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4518/1/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | +1 :green_heart: |  checkstyle  |  23m 27s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   0m 41s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4518/1/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in trunk failed.  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  spotbugs  |   0m 21s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4518/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |  52m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 20s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4518/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 19s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4518/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javac  |   0m 19s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4518/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  compile  |   0m 20s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4518/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | -1 :x: |  javac  |   0m 20s | 

[jira] [Created] (HDFS-16646) [RBF] Improved isolation for downstream name nodes. {Elastic}

2022-06-30 Thread ZanderXu (Jira)
ZanderXu created HDFS-16646:
---

 Summary: [RBF] Improved isolation for downstream name nodes. 
{Elastic}
 Key: HDFS-16646
 URL: https://issues.apache.org/jira/browse/HDFS-16646
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: ZanderXu
Assignee: ZanderXu






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16645) Multi inProgress segments caused "Invalid log manifest"

2022-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16645?focusedWorklogId=786494&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-786494
 ]

ASF GitHub Bot logged work on HDFS-16645:
-

Author: ASF GitHub Bot
Created on: 30/Jun/22 09:38
Start Date: 30/Jun/22 09:38
Worklog Time Spent: 10m 
  Work Description: ZanderXu opened a new pull request, #4518:
URL: https://github.com/apache/hadoop/pull/4518

   ### Description of PR
   ```
   java.lang.IllegalStateException: Invalid log manifest (log [1-? 
(in-progress)] overlaps [6-? (in-progress)])[[6-? (in-progress)], [1-? 
(in-progress)]] CommittedTxId: 0 
   at 
org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest.checkState(RemoteEditLogManifest.java:62)
   at 
org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest.<init>(RemoteEditLogManifest.java:46)
   at 
org.apache.hadoop.hdfs.qjournal.server.Journal.getEditLogManifest(Journal.java:740)
   ```
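   
   For context (an editorial note, not part of this PR): the exception above is thrown by RemoteEditLogManifest.checkState when segments in the manifest overlap, and a manifest with two in-progress segments ([1-?] and [6-?]) presumably trips that check because the first open-ended segment overlaps the second. A rough, self-contained sketch of the invariant, using a simplified Segment type rather than the real Hadoop classes:
   
   ```java
   import java.util.Arrays;
   import java.util.List;
   
   class Segment {
     final long startTxId;
     final Long endTxId; // null means in-progress (end not yet known)
   
     Segment(long startTxId, Long endTxId) {
       this.startTxId = startTxId;
       this.endTxId = endTxId;
     }
   
     @Override
     public String toString() {
       return "[" + startTxId + "-" + (endTxId == null ? "? (in-progress)" : endTxId) + "]";
     }
   }
   
   public class ManifestCheckSketch {
     /** Reject manifests where a segment starts before the previous one has ended. */
     static void checkNoOverlap(List<Segment> logs) {
       Segment prev = null;
       for (Segment cur : logs) {
         if (prev != null
             && (prev.endTxId == null || cur.startTxId <= prev.endTxId)) {
           throw new IllegalStateException(
               "Invalid log manifest (log " + prev + " overlaps " + cur + ")");
         }
         prev = cur;
       }
     }
   
     public static void main(String[] args) {
       // Two in-progress segments, as in the stack trace above: always invalid.
       checkNoOverlap(Arrays.asList(new Segment(1, null), new Segment(6, null)));
     }
   }
   ```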
   




Issue Time Tracking
---

Worklog Id: (was: 786494)
Remaining Estimate: 0h
Time Spent: 10m

> Multi inProgress segments caused "Invalid log manifest"
> ---
>
> Key: HDFS-16645
> URL: https://issues.apache.org/jira/browse/HDFS-16645
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> java.lang.IllegalStateException: Invalid log manifest (log [1-? 
> (in-progress)] overlaps [6-? (in-progress)])[[6-? (in-progress)], [1-? 
> (in-progress)]] CommittedTxId: 0 
> at 
> org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest.checkState(RemoteEditLogManifest.java:62)
>   at 
> org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest.<init>(RemoteEditLogManifest.java:46)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.getEditLogManifest(Journal.java:740)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16645) Multi inProgress segments caused "Invalid log manifest"

2022-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16645:
--
Labels: pull-request-available  (was: )

> Multi inProgress segments caused "Invalid log manifest"
> ---
>
> Key: HDFS-16645
> URL: https://issues.apache.org/jira/browse/HDFS-16645
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> java.lang.IllegalStateException: Invalid log manifest (log [1-? 
> (in-progress)] overlaps [6-? (in-progress)])[[6-? (in-progress)], [1-? 
> (in-progress)]] CommittedTxId: 0 
> at 
> org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest.checkState(RemoteEditLogManifest.java:62)
>   at 
> org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest.<init>(RemoteEditLogManifest.java:46)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.getEditLogManifest(Journal.java:740)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16645) Multi inProgress segments caused "Invalid log manifest"

2022-06-30 Thread ZanderXu (Jira)
ZanderXu created HDFS-16645:
---

 Summary: Multi inProgress segments caused "Invalid log manifest"
 Key: HDFS-16645
 URL: https://issues.apache.org/jira/browse/HDFS-16645
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: ZanderXu
Assignee: ZanderXu


{code:java}
java.lang.IllegalStateException: Invalid log manifest (log [1-? (in-progress)] 
overlaps [6-? (in-progress)])[[6-? (in-progress)], [1-? (in-progress)]] 
CommittedTxId: 0 
at 
org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest.checkState(RemoteEditLogManifest.java:62)
at 
org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest.<init>(RemoteEditLogManifest.java:46)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.getEditLogManifest(Journal.java:740)
{code}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16645) Multi inProgress segments caused "Invalid log manifest"

2022-06-30 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu updated HDFS-16645:

Description: 
{code:java}
java.lang.IllegalStateException: Invalid log manifest (log [1-? (in-progress)] 
overlaps [6-? (in-progress)])[[6-? (in-progress)], [1-? (in-progress)]] 
CommittedTxId: 0 
at 
org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest.checkState(RemoteEditLogManifest.java:62)
at 
org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest.<init>(RemoteEditLogManifest.java:46)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.getEditLogManifest(Journal.java:740)
{code}


  was:
{code:java}
java.lang.IllegalStateException: Invalid log manifest (log [1-? (in-progress)] 
overlaps [6-? (in-progress)])[[6-? (in-progress)], [1-? (in-progress)]] 
CommittedTxId: 0 
at 
org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest.checkState(RemoteEditLogManifest.java:62)
at 
org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest.<init>(RemoteEditLogManifest.java:46)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.getEditLogManifest(Journal.java:740)
{code}



> Multi inProgress segments caused "Invalid log manifest"
> ---
>
> Key: HDFS-16645
> URL: https://issues.apache.org/jira/browse/HDFS-16645
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>
> {code:java}
> java.lang.IllegalStateException: Invalid log manifest (log [1-? 
> (in-progress)] overlaps [6-? (in-progress)])[[6-? (in-progress)], [1-? 
> (in-progress)]] CommittedTxId: 0 
> at 
> org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest.checkState(RemoteEditLogManifest.java:62)
>   at 
> org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest.<init>(RemoteEditLogManifest.java:46)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.getEditLogManifest(Journal.java:740)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16636) Put Chinese characters File Name Cause http header error

2022-06-30 Thread lidayu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lidayu updated HDFS-16636:
--
Description: 
*When we PUT a file whose filename has Chinese characters, like this:*

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://9.135.15.26:50070/webhdfs/v1/上游2.png?op=CREATE&user.name=hdfsadmin"
                                                       
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: 
hadoop.auth="u=hdfsadmin&p=hdfsadmin&t=simple&e=1655493969462&s=YvkwSAwETWr2BqfkRTvBHy0Yj2A=";
 Path=/; HttpOnly
Location: 
[http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE&user.name=hdfsadmin&namenoderpcaddress=9.135.15.26:9000&createflag=&createparent=true&overwrite=false|http://vm-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE&user.name=hdfsadmin&namenoderpcaddress=9.135.15.26:9000&createflag=&createparent=true&overwrite=false]
Content-Type: application/octet-stream
Content-Length: 0

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE&user.name=hdfsadmin&namenoderpcaddress=9.135.15.26:9000&createflag=&createparent=true&overwrite=false"
HTTP/1.1 100 Continue

HTTP/1.1 201 Created
*Location: hdfs://9.135.15.26:9000/*
*82.png*    -> (this causes the problem: the continuation line has no colon)
Content-Length: 0
Connection: close

 

 

*The problem is: the Location header value is split across a newline. Normally 
it would all be on one line, like this:*

*Location: hdfs://9.135.15.26:9000/上游2.png    ,*

*It causes a Knox error when validating headers, because the continuation line has no ":".*

*!image-2022-06-17-17-34-43-294.png|width=615,height=393!*

Addendum:

I have noticed this in the DataNode code path, in HttpHeaders.java:

static void encodeAscii0(CharSequence seq, ByteBuf buf) {
    int length = seq.length();
    for (int i = 0; i < length; i++) {
        // The narrowing cast turns "上" (U+4E0A) into 0x0A (decimal 10), i.e. a newline.
        buf.writeByte((byte) seq.charAt(i));
    }
}
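
A tiny self-contained check of that narrowing behaviour (an editorial illustration, not part of the report):

{code:java}
public class NarrowingCastDemo {
  public static void main(String[] args) {
    char shang = '上';            // U+4E0A
    byte narrowed = (byte) shang; // keeps only the low 8 bits: 0x0A
    System.out.printf("U+%04X -> 0x%02X (%d)%n", (int) shang, narrowed, narrowed);
    // Prints "U+4E0A -> 0x0A (10)": a line feed, which splits the Location
    // header value across two lines.
    char you = '游';              // U+6E38
    System.out.printf("U+%04X -> '%c'%n", (int) you, (char) (byte) you);
    // Prints "U+6E38 -> '8'": which is why the broken continuation line in the
    // response above starts with "8" before "2.png".
  }
}
{code}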

  was:
*When we PUT a file whose filename has Chinese characters, like this:*

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://9.135.15.26:50070/webhdfs/v1/上游2.png?op=CREATE&user.name=hdfsadmin"

HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: 
hadoop.auth="u=hdfsadmin&p=hdfsadmin&t=simple&e=1655493969462&s=YvkwSAwETWr2BqfkRTvBHy0Yj2A=";
 Path=/; HttpOnly
Location: 
[http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE&user.name=hdfsadmin&namenoderpcaddress=9.135.15.26:9000&createflag=&createparent=true&overwrite=false|http://vm-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE&user.name=hdfsadmin&namenoderpcaddress=9.135.15.26:9000&createflag=&createparent=true&overwrite=false]
Content-Type: application/octet-stream
Content-Length: 0

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE&user.name=hdfsadmin&namenoderpcaddress=9.135.15.26:9000&createflag=&createparent=true&overwrite=false"
HTTP/1.1 100 Continue

HTTP/1.1 201 Created
*Location: hdfs://9.135.15.26:9000/*
*82.png*    -> (this causes the problem: the continuation line has no colon)
Content-Length: 0
Connection: close

 

 

*The problem is: the Location header value is split across a newline. Normally 
it would all be on one line, like this:*

*Location: hdfs://9.135.15.26:9000/上游2.png*

*It causes a Knox error when validating headers, because the continuation line has no ":".*

*!image-2022-06-17-17-34-43-294.png|width=615,height=393!*

 


> Put Chinese characters File Name Cause http header error
> 
>
> Key: HDFS-16636
> URL: https://issues.apache.org/jira/browse/HDFS-16636
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lidayu
>Priority: Major
> Attachments: image-2022-06-17-17-27-17-052.png, 
> image-2022-06-17-17-34-43-294.png
>
>
> *When we PUT a file whose filename has Chinese characters, like this:*
> [hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
> "http://9.135.15.26:50070/webhdfs/v1/上游2.png?op=CREATE=hdfsadmin"   
>                                                          
> HTTP/1.1 307 TEMPORARY_REDIRECT
> Cache-Control: no-cache
> Expires: Fri, 17 Jun 2022 09:26:09 GMT
> Date: Fri, 17 Jun 2022 09:26:09 GMT
> Pragma: no-cache
> Expires: Fri, 17 Jun 2022 09:26:09 GMT
> Date: Fri, 17 Jun 2022 09:26:09 GMT
> Pragma: no-cache
> X-FRAME-OPTIONS: SAMEORIGIN
> Set-Cookie: 
> hadoop.auth="u=hdfsadmin&p=hdfsadmin&t=simple&e=1655493969462&s=YvkwSAwETWr2BqfkRTvBHy0Yj2A=";
>  Path=/; HttpOnly
> Location: 
> [http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE&user.name=hdfsadmin&namenoderpcaddress=9.135.15.26:9000&createflag=&createparent=true&overwrite=false|http://vm-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE&user.name=hdfsadmin&namenoderpcaddress=9.135.15.26:9000&createflag=&createparent=true&overwrite=false]
> Content-Type: application/octet-stream
> Content-Length: 0
> 

[jira] [Work logged] (HDFS-16453) Upgrade okhttp from 2.7.5 to 4.9.3

2022-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16453?focusedWorklogId=786385&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-786385
 ]

ASF GitHub Bot logged work on HDFS-16453:
-

Author: ASF GitHub Bot
Created on: 30/Jun/22 06:19
Start Date: 30/Jun/22 06:19
Worklog Time Spent: 10m 
  Work Description: pan3793 commented on PR #4229:
URL: https://github.com/apache/hadoop/pull/4229#issuecomment-1170813260

   okhttp 3.14.9 is the latest version which does not depend on kotlin




Issue Time Tracking
---

Worklog Id: (was: 786385)
Time Spent: 1h 10m  (was: 1h)

> Upgrade okhttp from 2.7.5 to 4.9.3
> --
>
> Key: HDFS-16453
> URL: https://issues.apache.org/jira/browse/HDFS-16453
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.3.1
>Reporter: Ivan Viaznikov
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {{org.apache.hadoop:hadoop-hdfs-client}} comes with 
> {{com.squareup.okhttp:okhttp:2.7.5}} as a dependency, which is vulnerable to 
> an information disclosure issue due to how the contents of sensitive headers, 
> such as the {{Authorization}} header, can be logged when an 
> {{IllegalArgumentException}} is thrown.
> This issue could allow an attacker or malicious user who has access to the 
> logs to obtain the sensitive contents of the affected headers which could 
> facilitate further attacks.
> Fixed in {{5.0.0-alpha3}} by 
> [this|https://github.com/square/okhttp/commit/dcc6483b7dc6d9c0b8e03ff7c30c13f3c75264a5]
>  commit. The fix was cherry-picked and backported into {{4.9.2}} with 
> [this|https://github.com/square/okhttp/commit/1fd7c0afdc2cee9ba982b07d49662af7f60e1518]
>  commit.
> Requesting clarification on whether this dependency will be updated to a 
> fixed version in upcoming releases.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org