[jira] [Updated] (HADOOP-17714) ABFS: testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests fail when triggered with default configs

2021-05-27 Thread Sneha Varma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-17714:
-
Summary: ABFS: testBlobBackCompatibility, testRandomRead & 
WasbAbfsCompatibility tests fail when triggered with default configs  (was: 
ABFS: testBlobBackCompatibility & WasbAbfsCompatibility tests fail when 
triggered with default configs)

> ABFS: testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests 
> fail when triggered with default configs
> --
>
> Key: HADOOP-17714
> URL: https://issues.apache.org/jira/browse/HADOOP-17714
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Sneha Varma
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> testBlobBackCompatibility & WasbAbfsCompatibility tests fail when triggered 
> with default configs, as HTTP is not enabled on Gen2 accounts by default.
>  
> Options to fix it:
> the tests' config should enforce HTTPS by default,
> or the tests should be modified not to execute HTTP requests
>  
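A minimal sketch of the first option, assuming the hadoop-azure key fs.azure.always.use.https and an abfss:// test URI are what the test setup would toggle so that no plain HTTP request ever reaches a Gen2 account (the account and container names below are placeholders):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class HttpsOnlyAbfsTestSetup {
      public static FileSystem createTestFileSystem() throws Exception {
        Configuration conf = new Configuration();
        // Force HTTPS for ABFS requests regardless of the URI scheme a test uses.
        conf.setBoolean("fs.azure.always.use.https", true);
        // Placeholder account/container; real tests read these from azure-auth-keys.xml.
        URI testUri = URI.create("abfss://testcontainer@testaccount.dfs.core.windows.net/");
        return FileSystem.newInstance(testUri, conf);
      }
    }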






[jira] [Updated] (HADOOP-17714) ABFS: testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests fail when triggered with default configs

2021-05-27 Thread Sneha Varma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-17714:
-
Description: 
testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests fail 
when triggered with default configs, as HTTP is not enabled on Gen2 accounts by 
default.

 

Options to fix it:

the tests' config should enforce HTTPS by default,

or the tests should be modified not to execute HTTP requests

 

  was:
testBlobBackCompatibility & WasbAbfsCompatibility tests fail when triggered 
with default configs, as HTTP is not enabled on Gen2 accounts by default.

 

Options to fix it:

the tests' config should enforce HTTPS by default,

or the tests should be modified not to execute HTTP requests

 


> ABFS: testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests 
> fail when triggered with default configs
> --
>
> Key: HADOOP-17714
> URL: https://issues.apache.org/jira/browse/HADOOP-17714
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Sneha Varma
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests fail 
> when triggered with default configs, as HTTP is not enabled on Gen2 accounts 
> by default.
>  
> Options to fix it:
> the tests' config should enforce HTTPS by default,
> or the tests should be modified not to execute HTTP requests
>  






[jira] [Updated] (HADOOP-17714) ABFS: testBlobBackCompatibility & WasbAbfsCompatibility tests fail when triggered with default configs

2021-05-20 Thread Sneha Varma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-17714:
-
Summary: ABFS: testBlobBackCompatibility & WasbAbfsCompatibility tests fail 
when triggered with default configs  (was: testBlobBackCompatibility & 
WasbAbfsCompatibility tests fail when triggered with default configs)

> ABFS: testBlobBackCompatibility & WasbAbfsCompatibility tests fail when 
> triggered with default configs
> --
>
> Key: HADOOP-17714
> URL: https://issues.apache.org/jira/browse/HADOOP-17714
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Sneha Varma
>Priority: Minor
>
> testBlobBackCompatibility & WasbAbfsCompatibility tests fail when triggered 
> with default configs, as HTTP is not enabled on Gen2 accounts by default.
>  
> Options to fix it:
> the tests' config should enforce HTTPS by default,
> or the tests should be modified not to execute HTTP requests
>  






[jira] [Updated] (HADOOP-17715) ABFS: Append blob tests with non HNS accounts fail

2021-05-20 Thread Sneha Varma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-17715:
-
Summary: ABFS: Append blob tests with non HNS accounts fail  (was: Append 
blob tests with non HNS accounts fail)

> ABFS: Append blob tests with non HNS accounts fail
> --
>
> Key: HADOOP-17715
> URL: https://issues.apache.org/jira/browse/HADOOP-17715
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Priority: Minor
>
> Append blob tests with non-HNS accounts fail.
>  # The script to run the tests should ensure that append blob tests against 
> non-HNS accounts don't execute (a sketch of such a guard follows below)
>  # The documentation should clearly state that append blob is supported 
> only for HNS accounts
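A minimal sketch of such a guard in a test setUp() method, assuming JUnit 4's Assume and a boolean test property (the key fs.azure.test.namespace.enabled below is an assumption) that says whether the configured account is HNS-enabled:

    import org.apache.hadoop.conf.Configuration;
    import org.junit.Assume;

    public class AppendBlobTestGuard {
      // Skip append blob test cases unless the configured account is HNS-enabled.
      public static void assumeAppendBlobSupported(Configuration conf) {
        boolean namespaceEnabled =
            conf.getBoolean("fs.azure.test.namespace.enabled", false);
        Assume.assumeTrue("Append blob is supported only on HNS accounts; skipping",
            namespaceEnabled);
      }
    }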






[jira] [Updated] (HADOOP-17716) ABFS: ITestAbfsStreamStatistics TestAbfsStreamOps fail with append blob on HNS account

2021-05-20 Thread Sneha Varma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-17716:
-
Summary: ABFS: ITestAbfsStreamStatistics TestAbfsStreamOps fail with append 
blob on HNS account  (was: ITestAbfsStreamStatistics TestAbfsStreamOps fail 
with append blob on HNS account)

> ABFS: ITestAbfsStreamStatistics TestAbfsStreamOps fail with append blob on 
> HNS account
> --
>
> Key: HADOOP-17716
> URL: https://issues.apache.org/jira/browse/HADOOP-17716
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Priority: Minor
>
> ITestAbfsStreamStatistics and TestAbfsStreamOps fail with append blob on HNS 
> accounts.






[jira] [Created] (HADOOP-17716) ITestAbfsStreamStatistics TestAbfsStreamOps fail with append blob on HNS account

2021-05-20 Thread Sneha Varma (Jira)
Sneha Varma created HADOOP-17716:


 Summary: ITestAbfsStreamStatistics TestAbfsStreamOps fail with 
append blob on HNS account
 Key: HADOOP-17716
 URL: https://issues.apache.org/jira/browse/HADOOP-17716
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sneha Varma


ITestAbfsStreamStatistics and TestAbfsStreamOps fail with append blob on HNS 
accounts.






[jira] [Created] (HADOOP-17715) Append blob tests with non HNS accounts fail

2021-05-20 Thread Sneha Varma (Jira)
Sneha Varma created HADOOP-17715:


 Summary: Append blob tests with non HNS accounts fail
 Key: HADOOP-17715
 URL: https://issues.apache.org/jira/browse/HADOOP-17715
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sneha Varma


Append blob tests with non-HNS accounts fail.
 # The script to run the tests should ensure that append blob tests against 
non-HNS accounts don't execute
 # The documentation should clearly state that append blob is supported only 
for HNS accounts






[jira] [Created] (HADOOP-17714) testBlobBackCompatibility & WasbAbfsCompatibility tests fail when triggered with default configs

2021-05-20 Thread Sneha Varma (Jira)
Sneha Varma created HADOOP-17714:


 Summary: testBlobBackCompatibility & WasbAbfsCompatibility tests 
fail when triggered with default configs
 Key: HADOOP-17714
 URL: https://issues.apache.org/jira/browse/HADOOP-17714
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: test
Reporter: Sneha Varma


testBlobBackCompatibility & WasbAbfsCompatibility tests fail when triggered 
with default configs, as HTTP is not enabled on Gen2 accounts by default.

 

Options to fix it:

the tests' config should enforce HTTPS by default,

or the tests should be modified not to execute HTTP requests

 






[jira] [Commented] (HADOOP-17628) ABFS: Distcp contract test testDistCpWithIterator is timing out consistently

2021-05-20 Thread Sneha Varma (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17348176#comment-17348176
 ] 

Sneha Varma commented on HADOOP-17628:
--

[~bilahari.th] Would like to add that ContractSecureDistCp is also seeing the 
same issue; it would be great if we could add the same info about it to the 
description of this issue as well.

> ABFS: Distcp contract test testDistCpWithIterator is timing out consistently 
> -
>
> Key: HADOOP-17628
> URL: https://issues.apache.org/jira/browse/HADOOP-17628
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Priority: Minor
>
> The test case testDistCpWithIterator in AbstractContractDistCpTest is 
> consistently timing out.
>  






[jira] [Updated] (HADOOP-17590) ABFS: Introduce Lease Operations with Append to provide single writer semantics

2021-03-15 Thread Sneha Varma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-17590:
-
Description: 
The lease operations will be introduced as part of Append and Flush to ensure 
single-writer semantics.

 

Details:

Acquire Lease will be introduced in Create; Auto-Renew and Acquire will be added 
to Append; and Release, Auto-Renew and Acquire will be added to Flush.

 

During the creation of the file the lease will be acquired, as part of 
appends the lease will be auto-renewed, and the lease can be released as part of 
flush.

 

By default the lease duration will be 60 seconds.

Two configs, "fs.azure.write.enforcelease" & "fs.azure.write.lease.duration", 
will be introduced.

  was:
The lease operations will be introduced as part of Append and Flush to ensure 
single-writer semantics.

 Acquire Lease will be introduced in Create; Auto-Renew and Acquire will be added 
to Append; and Release, Auto-Renew and Acquire will be added to Flush.

 By default the lease duration will be 60 seconds.

Two configs, "fs.azure.write.enforcelease" & "fs.azure.write.lease.duration", 
will be introduced.


> ABFS: Introduce Lease Operations with Append to provide single writer 
> semantics
> ---
>
> Key: HADOOP-17590
> URL: https://issues.apache.org/jira/browse/HADOOP-17590
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Priority: Major
>
> The lease operations will be introduced as part of Append and Flush to ensure 
> single-writer semantics.
>  
> Details:
> Acquire Lease will be introduced in Create; Auto-Renew and Acquire will be 
> added to Append; and Release, Auto-Renew and Acquire will be added to Flush.
>  
> During the creation of the file the lease will be acquired, as part of 
> appends the lease will be auto-renewed, and the lease can be released as part 
> of flush.
>  
> By default the lease duration will be 60 seconds.
> Two configs, "fs.azure.write.enforcelease" & "fs.azure.write.lease.duration", 
> will be introduced.
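To make the proposed flow concrete, a hedged sketch that treats the two proposed keys as plain configuration strings and uses a hypothetical LeaseClient in place of the real AbfsClient calls (the real implementation would carry the lease actions on the Create/Append/Flush requests themselves):

    import org.apache.hadoop.conf.Configuration;

    public class WriteLeaseFlowSketch {
      // Hypothetical lease client standing in for the ABFS request layer.
      interface LeaseClient {
        String acquireLease(String path, int durationSeconds);
        void renewLease(String path, String leaseId);
        void releaseLease(String path, String leaseId);
      }

      public static void writeWithLease(Configuration conf, LeaseClient client,
                                        String path, byte[][] chunks) {
        if (!conf.getBoolean("fs.azure.write.enforcelease", false)) {
          return; // lease enforcement disabled, write without a lease
        }
        int duration = conf.getInt("fs.azure.write.lease.duration", 60);
        // Create: acquire the lease so only this writer can modify the file.
        String leaseId = client.acquireLease(path, duration);
        try {
          for (byte[] chunk : chunks) {
            // Append: auto-renew so the lease does not expire mid-write.
            client.renewLease(path, leaseId);
            // ... append the chunk under the lease ...
          }
        } finally {
          // Flush/close: release the lease so other writers can proceed.
          client.releaseLease(path, leaseId);
        }
      }
    }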






[jira] [Updated] (HADOOP-17590) ABFS: Introduce Lease Operations with Append to provide single writer semantics

2021-03-15 Thread Sneha Varma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-17590:
-
Description: 
The lease operations will be introduced as part of Append and Flush to ensure 
single-writer semantics.

 Acquire Lease will be introduced in Create; Auto-Renew and Acquire will be added 
to Append; and Release, Auto-Renew and Acquire will be added to Flush.

 By default the lease duration will be 60 seconds.

Two configs, "fs.azure.write.enforcelease" & "fs.azure.write.lease.duration", 
will be introduced.

  was:
The lease operations will be introduced as part of Append and Flush to ensure 
single-writer semantics.

 

Acquire Lease will be introduced in Create; Auto-Renew and Acquire will be added 
to Append; and Release, Auto-Renew and Acquire will be added to Flush.

 

By default the lease duration will be 60 seconds.


> ABFS: Introduce Lease Operations with Append to provide single writer 
> semantics
> ---
>
> Key: HADOOP-17590
> URL: https://issues.apache.org/jira/browse/HADOOP-17590
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Priority: Major
>
> The lease operations will be introduced as part of Append and Flush to ensure 
> single-writer semantics.
>  Acquire Lease will be introduced in Create; Auto-Renew and Acquire will be 
> added to Append; and Release, Auto-Renew and Acquire will be added to Flush.
>  By default the lease duration will be 60 seconds.
> Two configs, "fs.azure.write.enforcelease" & "fs.azure.write.lease.duration", 
> will be introduced.






[jira] [Updated] (HADOOP-17590) ABFS: Introduce Lease Operations with Append to provide single writer semantics

2021-03-15 Thread Sneha Varma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-17590:
-
Description: 
The lease operations will be introduced as part of Append and Flush to ensure 
single-writer semantics.

 

Acquire Lease will be introduced in Create; Auto-Renew and Acquire will be added 
to Append; and Release, Auto-Renew and Acquire will be added to Flush.

 

By default the lease duration will be 60 seconds.

  was:
The lease operations will be introduced as part of Append and Flush to ensure 
single-writer semantics.

 

Acquire Lease will be introduced in Create; Auto-Renew and Acquire will be added 
to Append; and Release, Auto-Renew and Acquire will be added to Flush.


> ABFS: Introduce Lease Operations with Append to provide single writer 
> semantics
> ---
>
> Key: HADOOP-17590
> URL: https://issues.apache.org/jira/browse/HADOOP-17590
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Priority: Major
>
> The lease operations will be introduced as part of Append and Flush to ensure 
> single-writer semantics.
>  
> Acquire Lease will be introduced in Create; Auto-Renew and Acquire will be 
> added to Append; and Release, Auto-Renew and Acquire will be added to Flush.
>  
> By default the lease duration will be 60 seconds.






[jira] [Updated] (HADOOP-17590) ABFS: Introduce Lease Operations with Append to provide single writer semantics

2021-03-15 Thread Sneha Varma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-17590:
-
Description: 
The lease operations will be introduced as part of Append and Flush to ensure 
single-writer semantics.

 

Acquire Lease will be introduced in Create; Auto-Renew and Acquire will be added 
to Append; and Release, Auto-Renew and Acquire will be added to Flush.

> ABFS: Introduce Lease Operations with Append to provide single writer 
> semantics
> ---
>
> Key: HADOOP-17590
> URL: https://issues.apache.org/jira/browse/HADOOP-17590
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Priority: Major
>
> The lease operations will be introduced as part of Append and Flush to ensure 
> single-writer semantics.
>  
> Acquire Lease will be introduced in Create; Auto-Renew and Acquire will be 
> added to Append; and Release, Auto-Renew and Acquire will be added to Flush.






[jira] [Created] (HADOOP-17590) ABFS: Introduce Lease Operations with Append to provide single writer semantics

2021-03-15 Thread Sneha Varma (Jira)
Sneha Varma created HADOOP-17590:


 Summary: ABFS: Introduce Lease Operations with Append to provide 
single writer semantics
 Key: HADOOP-17590
 URL: https://issues.apache.org/jira/browse/HADOOP-17590
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sneha Varma









[jira] [Assigned] (HADOOP-16402) AAD MSI flow is broken

2019-07-01 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma reassigned HADOOP-16402:


Assignee: Sneha Varma  (was: Vishwajeet Dusane)

> AAD MSI flow is broken
> --
>
> Key: HADOOP-16402
> URL: https://issues.apache.org/jira/browse/HADOOP-16402
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.9.0
>Reporter: Vishwajeet Dusane
>Assignee: Sneha Varma
>Priority: Major
>
> For the AAD MSI flow to work, the ADL driver needs to initialize the 
> MsiTokenProvider class with the AAD client ID and tenant ID. With the current 
> implementation, the AAD MSI flow is broken.






[jira] [Commented] (HADOOP-15778) ABFS: Fix client side throttling for read

2018-09-20 Thread Sneha Varma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16623033#comment-16623033
 ] 

Sneha Varma commented on HADOOP-15778:
--

Patch HADOOP-15778-HADOOP-15407-002.patch

Incorporating the review comment.

ABFS test results:

Tests run: 36, Failures: 0, Errors: 0, Skipped: 0

Tests run: 269, Failures: 0, Errors: 0, Skipped: 182

Tests run: 165, Failures: 0, Errors: 0, Skipped: 15

 

 

> ABFS: Fix client side throttling for read
> -
>
> Key: HADOOP-15778
> URL: https://issues.apache.org/jira/browse/HADOOP-15778
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Assignee: Sneha Varma
>Priority: Minor
> Attachments: HADOOP-15778-HADOOP-15407-001.patch, 
> HADOOP-15778-HADOOP-15407-002.patch
>
>
> 1. The content length for ReadFile in updateMetrics of 
> AbfsClientThrottlingIntercept is incorrect for cases when the request fails.
> It is currently equal to the number of bytes that are read whereas it should 
> be equal to the number of bytes requested.
> 2. sendingRequest of AbfsClientThrottlingIntercept in AbfsRestOperation 
> should be called irrespective of whether the request has a body.
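As an illustration of point 1, a hedged sketch of the intended accounting (the analyzer interface below is a stand-in, not the actual AbfsClientThrottlingAnalyzer API): the read metrics should be fed the requested length even when the operation fails after reading fewer bytes.

    public final class ReadThrottlingMetricsSketch {
      // Stand-in for the read analyzer used by the intercept.
      interface ThrottlingAnalyzer {
        void addBytesTransferred(long bytes, boolean isFailedOperation);
      }

      static void updateReadMetrics(long bytesRequested, long bytesActuallyRead,
                                    boolean requestFailed,
                                    ThrottlingAnalyzer readAnalyzer) {
        // Charge the requested length, not bytesActuallyRead, so a failed read
        // is weighted by the work the service was asked to do.
        readAnalyzer.addBytesTransferred(bytesRequested, requestFailed);
      }
    }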






[jira] [Updated] (HADOOP-15778) ABFS: Fix client side throttling for read

2018-09-20 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-15778:
-
Attachment: HADOOP-15778-HADOOP-15407-002.patch
Status: Patch Available  (was: Open)

> ABFS: Fix client side throttling for read
> -
>
> Key: HADOOP-15778
> URL: https://issues.apache.org/jira/browse/HADOOP-15778
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Assignee: Sneha Varma
>Priority: Minor
> Attachments: HADOOP-15778-HADOOP-15407-001.patch, 
> HADOOP-15778-HADOOP-15407-002.patch
>
>
> 1. The content length for ReadFile in updateMetrics of 
> AbfsClientThrottlingIntercept is incorrect for cases when the request fails.
> It is currently equal to the number of bytes that are read whereas it should 
> be equal to the number of bytes requested.
> 2. sendingRequest of AbfsClientThrottlingIntercept in AbfsRestOperation 
> should be called irrespective of whether the request has a body.






[jira] [Updated] (HADOOP-15778) ABFS: Fix client side throttling for read

2018-09-20 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-15778:
-
Attachment: (was: HADOOP-15778-HADOOP-15407-002.patch)

> ABFS: Fix client side throttling for read
> -
>
> Key: HADOOP-15778
> URL: https://issues.apache.org/jira/browse/HADOOP-15778
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Assignee: Sneha Varma
>Priority: Minor
> Attachments: HADOOP-15778-HADOOP-15407-001.patch
>
>
> 1. The content length for ReadFile in updateMetrics of 
> AbfsClientThrottlingIntercept is incorrect for cases when the request fails.
> It is currently equal to the number of bytes that are read whereas it should 
> be equal to the number of bytes requested.
> 2. sendingRequest of AbfsClientThrottlingIntercept in AbfsRestOperation 
> should be called irrespective of whether the request has a body.






[jira] [Updated] (HADOOP-15778) ABFS: Fix client side throttling for read

2018-09-20 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-15778:
-
Status: Open  (was: Patch Available)

> ABFS: Fix client side throttling for read
> -
>
> Key: HADOOP-15778
> URL: https://issues.apache.org/jira/browse/HADOOP-15778
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Assignee: Sneha Varma
>Priority: Minor
> Attachments: HADOOP-15778-HADOOP-15407-001.patch, 
> HADOOP-15778-HADOOP-15407-002.patch
>
>
> 1. The content length for ReadFile in updateMetrics of 
> AbfsClientThrottlingIntercept is incorrect for cases when the request fails.
> It is currently equal to the number of bytes that are read whereas it should 
> be equal to the number of bytes requested.
> 2. sendingRequest of AbfsClientThrottlingIntercept in AbfsRestOperation 
> should be called irrespective of whether the request has a body.






[jira] [Updated] (HADOOP-15778) ABFS: Fix client side throttling for read

2018-09-20 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-15778:
-
Attachment: HADOOP-15778-HADOOP-15407-002.patch

> ABFS: Fix client side throttling for read
> -
>
> Key: HADOOP-15778
> URL: https://issues.apache.org/jira/browse/HADOOP-15778
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Assignee: Sneha Varma
>Priority: Minor
> Attachments: HADOOP-15778-HADOOP-15407-001.patch, 
> HADOOP-15778-HADOOP-15407-002.patch
>
>
> 1. The content length for ReadFile in updateMetrics of 
> AbfsClientThrottlingIntercept is incorrect for cases when the request fails.
> It is currently equal to the number of bytes that are read whereas it should 
> be equal to the number of bytes requested.
> 2. sendingRequest of AbfsClientThrottlingIntercept in AbfsRestOperation 
> should be called irrespective of whether the request has a body.






[jira] [Updated] (HADOOP-15778) ABFS: Fix client side throttling for read

2018-09-20 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-15778:
-
  Assignee: Sneha Varma
Attachment: HADOOP-15778-HADOOP-15407-001.patch
Status: Patch Available  (was: Open)

ABFS test results:
 Tests run: 36, Failures: 0, Errors: 0, Skipped: 0
 Tests run: 269, Failures: 0, Errors: 0, Skipped: 182
 Tests run: 165, Failures: 0, Errors: 0, Skipped: 15

> ABFS: Fix client side throttling for read
> -
>
> Key: HADOOP-15778
> URL: https://issues.apache.org/jira/browse/HADOOP-15778
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Assignee: Sneha Varma
>Priority: Minor
> Attachments: HADOOP-15778-HADOOP-15407-001.patch
>
>
> 1. The content length for ReadFile in updateMetrics of 
> AbfsClientThrottlingIntercept is incorrect for cases when the request fails.
> It is currently equal to the number of bytes that are read whereas it should 
> be equal to the number of bytes requested.
> 2. sendingRequest of AbfsClientThrottlingIntercept in AbfsRestOperation 
> should be called irrespective of whether the request has a body.






[jira] [Created] (HADOOP-15778) ABFS: Fix client side throttling for read

2018-09-20 Thread Sneha Varma (JIRA)
Sneha Varma created HADOOP-15778:


 Summary: ABFS: Fix client side throttling for read
 Key: HADOOP-15778
 URL: https://issues.apache.org/jira/browse/HADOOP-15778
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sneha Varma


1. The content length for ReadFile in updateMetrics of 
AbfsClientThrottlingIntercept is incorrect for cases when the request fails.
It is currently equal to the number of bytes that are read whereas it should be 
equal to the number of bytes requested.



2. sendingRequest of AbfsClientThrottlingIntercept in AbfsRestOperation should 
be called irrespective of whether the request has a body.






[jira] [Issue Comment Deleted] (HADOOP-15740) ABFS: Check variable names during initialization of AbfsClientThrottlingIntercept

2018-09-12 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-15740:
-
Comment: was deleted

(was: Thanks a lot Thomas & Da Zhou,

Below are the ABFS test results:



*Account without namespace support*
 Tests run: 29, Failures: 0, Errors: 0, Skipped: 0

Tests run: 268, Failures: 0, Errors: 0, Skipped: 182

Tests run: 167, Failures: 0, Errors: 0, Skipped: 27

*Account with namespace support*:

Tests run: 29, Failures: 0, Errors: 0, Skipped: 0

Tests run: 268, Failures: 0, Errors: 0, Skipped: 30

Tests run: 167, Failures: 0, Errors: 0, Skipped: 27)

> ABFS: Check variable names during initialization of 
> AbfsClientThrottlingIntercept 
> --
>
> Key: HADOOP-15740
> URL: https://issues.apache.org/jira/browse/HADOOP-15740
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Assignee: Sneha Varma
>Priority: Minor
> Attachments: HADOOP-15740-HADOOP-15407-001.patch
>
>
> In the initializeSingleton function of AbfsClientThrottlingIntercept, the 
> local variable has the same name as the global variable isAutoThrottlingEnabled, 
> because of which the global isAutoThrottlingEnabled is never set to true.
>  
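For context, a minimal sketch of the shadowing pattern being described (illustrative only; field and method names mirror the description rather than the exact ABFS source):

    public class ThrottlingInitSketch {
      private static boolean isAutoThrottlingEnabled = false;

      // Buggy variant: the local declaration shadows the static field,
      // so the field stays false and auto-throttling is never turned on.
      static void initializeSingletonBuggy(boolean enableAutoThrottling) {
        boolean isAutoThrottlingEnabled = enableAutoThrottling; // shadows the field
      }

      // Fixed variant: assign to the field directly.
      static void initializeSingletonFixed(boolean enableAutoThrottling) {
        isAutoThrottlingEnabled = enableAutoThrottling;
      }
    }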






[jira] [Commented] (HADOOP-15740) ABFS: Check variable names during initialization of AbfsClientThrottlingIntercept

2018-09-12 Thread Sneha Varma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16612539#comment-16612539
 ] 

Sneha Varma commented on HADOOP-15740:
--

Thanks a lot Thomas & Da Zhou,

Below are the ABFS test results:

*Account without namespace support*
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0

Tests run: 268, Failures: 0, Errors: 0, Skipped: 182

Tests run: 167, Failures: 0, Errors: 0, Skipped: 27

*Account with namespace support*:

Tests run: 29, Failures: 0, Errors: 0, Skipped: 0

Tests run: 268, Failures: 0, Errors: 0, Skipped: 30

Tests run: 167, Failures: 0, Errors: 0, Skipped: 27

> ABFS: Check variable names during initialization of 
> AbfsClientThrottlingIntercept 
> --
>
> Key: HADOOP-15740
> URL: https://issues.apache.org/jira/browse/HADOOP-15740
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Assignee: Sneha Varma
>Priority: Minor
> Attachments: HADOOP-15740-HADOOP-15407-001.patch
>
>
> In the initializeSingleton function of AbfsClientThrottlingIntercept, the 
> local variable has the same name as the global variable isAutoThrottlingEnabled, 
> because of which the global isAutoThrottlingEnabled is never set to true.
>  






[jira] [Comment Edited] (HADOOP-15740) ABFS: Check variable names during initialization of AbfsClientThrottlingIntercept

2018-09-12 Thread Sneha Varma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16612534#comment-16612534
 ] 

Sneha Varma edited comment on HADOOP-15740 at 9/12/18 6:01 PM:
---

Thanks a lot Thomas & Da Zhou,

Below are the ABFS test results:



*Account without namespace support*
 Tests run: 29, Failures: 0, Errors: 0, Skipped: 0

Tests run: 268, Failures: 0, Errors: 0, Skipped: 182

Tests run: 167, Failures: 0, Errors: 0, Skipped: 27

*Account with namespace support*:

Tests run: 29, Failures: 0, Errors: 0, Skipped: 0

Tests run: 268, Failures: 0, Errors: 0, Skipped: 30

Tests run: 167, Failures: 0, Errors: 0, Skipped: 27


was (Author: sneha_varma):
Thanks a lot Thomas & Da Zhou,

Below are the ABFS test results:

*Account without namespace support*
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0

Tests run: 268, Failures: 0, Errors: 0, Skipped: 182

Tests run: 167, Failures: 0, Errors: 0, Skipped: 27

*Account with namespace support*:

Tests run: 29, Failures: 0, Errors: 0, Skipped: 0

Tests run: 268, Failures: 0, Errors: 0, Skipped: 30

Tests run: 167, Failures: 0, Errors: 0, Skipped: 27

> ABFS: Check variable names during initialization of 
> AbfsClientThrottlingIntercept 
> --
>
> Key: HADOOP-15740
> URL: https://issues.apache.org/jira/browse/HADOOP-15740
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Assignee: Sneha Varma
>Priority: Minor
> Attachments: HADOOP-15740-HADOOP-15407-001.patch
>
>
> In the initializeSingleton function of AbfsClientThrottlingIntercept, the 
> local variable has the same name as the global variable isAutoThrottlingEnabled, 
> because of which the global isAutoThrottlingEnabled is never set to true.
>  






[jira] [Commented] (HADOOP-15740) ABFS: Check variable names during initialization of AbfsClientThrottlingIntercept

2018-09-12 Thread Sneha Varma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16612534#comment-16612534
 ] 

Sneha Varma commented on HADOOP-15740:
--

Thanks a lot Thomas & Da Zhou,

Below are the ABFS test results:

*Account without namespace support*
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0

Tests run: 268, Failures: 0, Errors: 0, Skipped: 182

Tests run: 167, Failures: 0, Errors: 0, Skipped: 27

*Account with namespace support*:

Tests run: 29, Failures: 0, Errors: 0, Skipped: 0

Tests run: 268, Failures: 0, Errors: 0, Skipped: 30

Tests run: 167, Failures: 0, Errors: 0, Skipped: 27

> ABFS: Check variable names during initialization of 
> AbfsClientThrottlingIntercept 
> --
>
> Key: HADOOP-15740
> URL: https://issues.apache.org/jira/browse/HADOOP-15740
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Assignee: Sneha Varma
>Priority: Minor
> Attachments: HADOOP-15740-HADOOP-15407-001.patch
>
>
> In the initializeSingleton function of AbfsClientThrottlingIntercept, the 
> local variable has the same name as the global variable isAutoThrottlingEnabled, 
> because of which the global isAutoThrottlingEnabled is never set to true.
>  






[jira] [Updated] (HADOOP-15740) ABFS: Check variable names during initialization of AbfsClientThrottlingIntercept

2018-09-10 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-15740:
-
Description: 
In the initializeSingleton function of AbfsClientThrottlingIntercept, the 
local variable has the same name as the global variable isAutoThrottlingEnabled, 
because of which the global isAutoThrottlingEnabled is never set to true.

 

> ABFS: Check variable names during initialization of 
> AbfsClientThrottlingIntercept 
> --
>
> Key: HADOOP-15740
> URL: https://issues.apache.org/jira/browse/HADOOP-15740
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Assignee: Sneha Varma
>Priority: Minor
> Attachments: HADOOP-15740-HADOOP-15407-001.patch
>
>
> In the initializeSingleton function of AbfsClientThrottlingIntercept, the 
> local variable has the same name as the global variable isAutoThrottlingEnabled, 
> because of which the global isAutoThrottlingEnabled is never set to true.
>  






[jira] [Updated] (HADOOP-15740) ABFS: Check variable names during initialization of AbfsClientThrottlingIntercept

2018-09-10 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-15740:
-
Attachment: HADOOP-15740-HADOOP-15407-001.patch
Status: Patch Available  (was: Open)

> ABFS: Check variable names during initialization of 
> AbfsClientThrottlingIntercept 
> --
>
> Key: HADOOP-15740
> URL: https://issues.apache.org/jira/browse/HADOOP-15740
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Assignee: Sneha Varma
>Priority: Minor
> Attachments: HADOOP-15740-HADOOP-15407-001.patch
>
>







[jira] [Created] (HADOOP-15740) ABFS: Check variable names during initialization of AbfsClientThrottlingIntercept

2018-09-10 Thread Sneha Varma (JIRA)
Sneha Varma created HADOOP-15740:


 Summary: ABFS: Check variable names during initialization of 
AbfsClientThrottlingIntercept 
 Key: HADOOP-15740
 URL: https://issues.apache.org/jira/browse/HADOOP-15740
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sneha Varma
Assignee: Sneha Varma









[jira] [Updated] (HADOOP-15703) ABFS - Implement client-side throttling

2018-08-29 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-15703:
-
Status: Open  (was: Patch Available)

> ABFS - Implement client-side throttling 
> 
>
> Key: HADOOP-15703
> URL: https://issues.apache.org/jira/browse/HADOOP-15703
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Priority: Major
> Attachments: HADOOP-15703-HADOOP-15407-001.patch
>
>
> Big data workloads frequently exceed the AzureBlobFS max ingress and egress 
> limits 
> (https://docs.microsoft.com/en-us/azure/storage/common/storage-scalability-targets).
>  For example, the max ingress limit for a GRS account in the United States is 
> currently 10 Gbps. When the limit is exceeded, the AzureBlobFS service fails 
> a percentage of incoming requests, and this causes the client to initiate the 
> retry policy. The retry policy delays requests by sleeping, but the sleep 
> duration is independent of the client throughput and account limit. This 
> results in low throughput, due to the high number of failed requests and 
> thrashing caused by the retry policy.
> To fix this, we introduce a client-side throttle which minimizes failed 
> requests and maximizes throughput. 
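For intuition, a heavily simplified sketch of the kind of client-side throttle being described (illustrative only; the actual ABFS analyzer uses a more elaborate sliding-window scheme): record how many bytes recently failed, and delay new operations in proportion to that failure ratio so throughput settles just under the account limit instead of thrashing through retries.

    // Simplified client-side throttle: delay new operations in proportion to the
    // recently observed failure ratio (a sketch, not the ABFS algorithm).
    final class SimpleClientThrottler {
      private long succeededBytes;
      private long failedBytes;
      private volatile long sleepMillis;

      synchronized void record(long bytes, boolean failed) {
        if (failed) { failedBytes += bytes; } else { succeededBytes += bytes; }
      }

      // Called periodically (e.g. once per second) to refresh the penalty.
      synchronized void recompute(long maxSleepMillis) {
        long total = succeededBytes + failedBytes;
        double failureRatio = total == 0 ? 0.0 : (double) failedBytes / total;
        sleepMillis = (long) (failureRatio * maxSleepMillis);
        succeededBytes = 0;
        failedBytes = 0;
      }

      void beforeRequest() throws InterruptedException {
        long delay = sleepMillis;
        if (delay > 0) {
          Thread.sleep(delay);
        }
      }
    }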






[jira] [Updated] (HADOOP-15703) ABFS - Implement client-side throttling

2018-08-28 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-15703:
-
Attachment: (was: HADOOP-15703-HADOOP-15407-001.patch)

> ABFS - Implement client-side throttling 
> 
>
> Key: HADOOP-15703
> URL: https://issues.apache.org/jira/browse/HADOOP-15703
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Priority: Major
> Attachments: HADOOP-15703-HADOOP-15407-001.patch
>
>
> Big data workloads frequently exceed the AzureBlobFS max ingress and egress 
> limits 
> (https://docs.microsoft.com/en-us/azure/storage/common/storage-scalability-targets).
>  For example, the max ingress limit for a GRS account in the United States is 
> currently 10 Gbps. When the limit is exceeded, the AzureBlobFS service fails 
> a percentage of incoming requests, and this causes the client to initiate the 
> retry policy. The retry policy delays requests by sleeping, but the sleep 
> duration is independent of the client throughput and account limit. This 
> results in low throughput, due to the high number of failed requests and 
> thrashing caused by the retry policy.
> To fix this, we introduce a client-side throttle which minimizes failed 
> requests and maximizes throughput. 






[jira] [Updated] (HADOOP-15703) ABFS - Implement client-side throttling

2018-08-28 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-15703:
-
Attachment: HADOOP-15703-HADOOP-15407-001.patch

> ABFS - Implement client-side throttling 
> 
>
> Key: HADOOP-15703
> URL: https://issues.apache.org/jira/browse/HADOOP-15703
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Priority: Major
> Attachments: HADOOP-15703-HADOOP-15407-001.patch
>
>
> Big data workloads frequently exceed the AzureBlobFS max ingress and egress 
> limits 
> (https://docs.microsoft.com/en-us/azure/storage/common/storage-scalability-targets).
>  For example, the max ingress limit for a GRS account in the United States is 
> currently 10 Gbps. When the limit is exceeded, the AzureBlobFS service fails 
> a percentage of incoming requests, and this causes the client to initiate the 
> retry policy. The retry policy delays requests by sleeping, but the sleep 
> duration is independent of the client throughput and account limit. This 
> results in low throughput, due to the high number of failed requests and 
> thrashing caused by the retry policy.
> To fix this, we introduce a client-side throttle which minimizes failed 
> requests and maximizes throughput. 






[jira] [Updated] (HADOOP-15703) ABFS - Implement client-side throttling

2018-08-28 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-15703:
-
Attachment: HADOOP-15703-HADOOP-15407-001.patch

> ABFS - Implement client-side throttling 
> 
>
> Key: HADOOP-15703
> URL: https://issues.apache.org/jira/browse/HADOOP-15703
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Priority: Major
> Attachments: HADOOP-15703-HADOOP-15407-001.patch
>
>
> Big data workloads frequently exceed the AzureBlobFS max ingress and egress 
> limits 
> (https://docs.microsoft.com/en-us/azure/storage/common/storage-scalability-targets).
>  For example, the max ingress limit for a GRS account in the United States is 
> currently 10 Gbps. When the limit is exceeded, the AzureBlobFS service fails 
> a percentage of incoming requests, and this causes the client to initiate the 
> retry policy. The retry policy delays requests by sleeping, but the sleep 
> duration is independent of the client throughput and account limit. This 
> results in low throughput, due to the high number of failed requests and 
> thrashing caused by the retry policy.
> To fix this, we introduce a client-side throttle which minimizes failed 
> requests and maximizes throughput. 






[jira] [Updated] (HADOOP-15703) ABFS - Implement client-side throttling

2018-08-28 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-15703:
-
Attachment: (was: HADOOP-15703-001.patch)

> ABFS - Implement client-side throttling 
> 
>
> Key: HADOOP-15703
> URL: https://issues.apache.org/jira/browse/HADOOP-15703
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Priority: Major
>
> Big data workloads frequently exceed the AzureBlobFS max ingress and egress 
> limits 
> (https://docs.microsoft.com/en-us/azure/storage/common/storage-scalability-targets).
>  For example, the max ingress limit for a GRS account in the United States is 
> currently 10 Gbps. When the limit is exceeded, the AzureBlobFS service fails 
> a percentage of incoming requests, and this causes the client to initiate the 
> retry policy. The retry policy delays requests by sleeping, but the sleep 
> duration is independent of the client throughput and account limit. This 
> results in low throughput, due to the high number of failed requests and 
> thrashing caused by the retry policy.
> To fix this, we introduce a client-side throttle which minimizes failed 
> requests and maximizes throughput. 






[jira] [Updated] (HADOOP-15703) ABFS - Implement client-side throttling

2018-08-28 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-15703:
-
Summary: ABFS - Implement client-side throttling   (was: AzureBlobFS - 
Implement client-side throttling )

> ABFS - Implement client-side throttling 
> 
>
> Key: HADOOP-15703
> URL: https://issues.apache.org/jira/browse/HADOOP-15703
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Priority: Major
> Attachments: HADOOP-15703-001.patch
>
>
> Big data workloads frequently exceed the AzureBlobFS max ingress and egress 
> limits 
> (https://docs.microsoft.com/en-us/azure/storage/common/storage-scalability-targets).
>  For example, the max ingress limit for a GRS account in the United States is 
> currently 10 Gbps. When the limit is exceeded, the AzureBlobFS service fails 
> a percentage of incoming requests, and this causes the client to initiate the 
> retry policy. The retry policy delays requests by sleeping, but the sleep 
> duration is independent of the client throughput and account limit. This 
> results in low throughput, due to the high number of failed requests and 
> thrashing caused by the retry policy.
> To fix this, we introduce a client-side throttle which minimizes failed 
> requests and maximizes throughput. 






[jira] [Updated] (HADOOP-15703) AzureBlobFS - Implement client-side throttling

2018-08-28 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-15703:
-
Attachment: HADOOP-15703-001.patch
Status: Patch Available  (was: Open)

> AzureBlobFS - Implement client-side throttling 
> ---
>
> Key: HADOOP-15703
> URL: https://issues.apache.org/jira/browse/HADOOP-15703
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Priority: Major
> Attachments: HADOOP-15703-001.patch
>
>
> Big data workloads frequently exceed the AzureBlobFS max ingress and egress 
> limits 
> (https://docs.microsoft.com/en-us/azure/storage/common/storage-scalability-targets).
>  For example, the max ingress limit for a GRS account in the United States is 
> currently 10 Gbps. When the limit is exceeded, the AzureBlobFS service fails 
> a percentage of incoming requests, and this causes the client to initiate the 
> retry policy. The retry policy delays requests by sleeping, but the sleep 
> duration is independent of the client throughput and account limit. This 
> results in low throughput, due to the high number of failed requests and 
> thrashing caused by the retry policy.
> To fix this, we introduce a client-side throttle which minimizes failed 
> requests and maximizes throughput. 






[jira] [Updated] (HADOOP-15703) AzureBlobFS - Implement client-side throttling

2018-08-28 Thread Sneha Varma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-15703:
-
Summary: AzureBlobFS - Implement client-side throttling   (was: AzureBlobFS 
- implement client-side throttling )

> AzureBlobFS - Implement client-side throttling 
> ---
>
> Key: HADOOP-15703
> URL: https://issues.apache.org/jira/browse/HADOOP-15703
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Priority: Major
>
> Big data workloads frequently exceed the AzureBlobFS max ingress and egress 
> limits 
> (https://docs.microsoft.com/en-us/azure/storage/common/storage-scalability-targets).
>  For example, the max ingress limit for a GRS account in the United States is 
> currently 10 Gbps. When the limit is exceeded, the AzureBlobFS service fails 
> a percentage of incoming requests, and this causes the client to initiate the 
> retry policy. The retry policy delays requests by sleeping, but the sleep 
> duration is independent of the client throughput and account limit. This 
> results in low throughput, due to the high number of failed requests and 
> thrashing caused by the retry policy.
> To fix this, we introduce a client-side throttle which minimizes failed 
> requests and maximizes throughput. 






[jira] [Created] (HADOOP-15703) AzureBlobFS - implement client-side throttling

2018-08-28 Thread Sneha Varma (JIRA)
Sneha Varma created HADOOP-15703:


 Summary: AzureBlobFS - implement client-side throttling 
 Key: HADOOP-15703
 URL: https://issues.apache.org/jira/browse/HADOOP-15703
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sneha Varma


Big data workloads frequently exceed the AzureBlobFS max ingress and egress 
limits 
(https://docs.microsoft.com/en-us/azure/storage/common/storage-scalability-targets).
 For example, the max ingress limit for a GRS account in the United States is 
currently 10 Gbps. When the limit is exceeded, the AzureBlobFS service fails a 
percentage of incoming requests, and this causes the client to initiate the 
retry policy. The retry policy delays requests by sleeping, but the sleep 
duration is independent of the client throughput and account limit. This 
results in low throughput, due to the high number of failed requests and 
thrashing caused by the retry policy.

To fix this, we introduce a client-side throttle which minimizes failed 
requests and maximizes throughput. 


