[ https://issues.apache.org/jira/browse/HADOOP-18876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17761750#comment-17761750 ]

Anmol Asrani edited comment on HADOOP-18876 at 9/4/23 9:40 AM:
---------------------------------------------------------------

Hi [~ste...@apache.org], here is a summary of our test efforts:
 # {*}Concurrency Management for FileSystem{*}:
 ** Each FileSystem instance uses a BlockingThreadPoolExecutorService to 
handle concurrency.
 ** We tune {{abfsConfiguration.getWriteMaxConcurrentRequestCount}} and 
{{abfsConfiguration.getMaxWriteRequestsToQueue}} to control the number of 
concurrent tasks.
 # {*}Default Settings{*}:
 ** When {{fs.azure.write.max.concurrent.requests}} is not specified, it 
defaults to four times the number of available processors.
 ** In our setup, with 8 processors per VM, that gives a maximum of 32 
concurrent requests.
 ** Similarly, if {{fs.azure.write.max.requests.to.queue}} is left 
unspecified, it defaults to twice 
{{abfsConfiguration.getWriteMaxConcurrentRequestCount}}, i.e. 64 in our 
configuration (see the configuration sketch after this list).
 # {*}Parallel FileSystem Instances{*}:
 ** With about 20 open FileSystem instances in our jobs, the effective cap at 
the JVM level is roughly 20 times the per-instance limit.
 # {*}OutputStream and Block Uploads{*}:
 ** The default for {{fs.azure.block.upload.active.blocks}} is 20, meaning 
each OutputStream can have up to 20 blocks queued or in flight for upload.
 # {*}Hardware and Configuration{*}:
 ** Our setup includes 40 worker nodes, each equipped with around 64 GB of RAM.
 ** We ran 10-15 Spark TPC-DS workloads with data sizes ranging from 1 TB to 
100 TB.
 ** Importantly, we did not encounter any Out-of-Memory (OOM) errors, and we 
saw significant latency improvements.
 # {*}When Disk Can Result in OOM{*}:
 ** Parameters:
 *** {{fs.azure.write.max.concurrent.requests}} (m): Controls concurrent write 
requests.
 *** {{fs.azure.write.max.requests.to.queue}} (n): Sets the maximum number of 
queued requests.
 *** {{FileSystem instances on a JVM}} (k): Concurrent filesystem objects on a 
JVM.
 *** {{fs.azure.write.request.size}} (writeSize): Default is 8MB per request.
 *** {{fs.azure.block.upload.active.blocks}} (z): Limits queued blocks per 
OutputStream.
 ** OOM issues can arise even with disk buffering when 
{{fs.azure.write.max.concurrent.requests}} (m) is set to a high value.
 ** Even when blocks are buffered on disk, a large number of concurrent 
requests (m) still drives up memory consumption.
 ** The maximum memory consumed by AbfsOutputStream buffers on a JVM is 
roughly *k * writeSize * m* (a worked example follows this list).
 ** High values of m can therefore exhaust available memory and cause OOM 
errors even when disk storage is used.
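
For reference, a minimal sketch (my own illustration, not code from this patch) of how the write-path settings above can be set on a Hadoop {{Configuration}}; the values mirror the defaults described for our 8-processor VMs rather than recommendations:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class AbfsWriteTuningSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Default when unset: 4 * available processors (32 on an 8-processor VM).
    int maxConcurrentWrites = 4 * Runtime.getRuntime().availableProcessors();
    conf.setInt("fs.azure.write.max.concurrent.requests", maxConcurrentWrites);

    // Default when unset: 2 * the concurrent-request limit (~64 in our setup).
    conf.setInt("fs.azure.write.max.requests.to.queue", 2 * maxConcurrentWrites);

    // Default: up to 20 blocks queued for upload per AbfsOutputStream.
    conf.setInt("fs.azure.block.upload.active.blocks", 20);

    System.out.println("max concurrent write requests = "
        + conf.getInt("fs.azure.write.max.concurrent.requests", -1));
  }
}
{code}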
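
And a small worked example of the memory bound mentioned above, plugging in the numbers from this thread (k = 20 FileSystem instances, writeSize = 8 MB, m = 32 concurrent requests); the arithmetic is illustrative, the formula is as stated:

{code:java}
public class AbfsMemoryEstimate {
  public static void main(String[] args) {
    long k = 20;                  // FileSystem instances on the JVM
    long writeSize = 8L << 20;    // fs.azure.write.request.size, 8 MB in bytes
    long m = 32;                  // fs.azure.write.max.concurrent.requests

    // Maximum memory held by AbfsOutputStream write buffers: k * writeSize * m.
    long maxBytes = k * writeSize * m;
    System.out.printf("max buffer memory = %d MB (~%.1f GB)%n",
        maxBytes >> 20, maxBytes / (1024.0 * 1024 * 1024));
    // With these values: 20 * 8 MB * 32 = 5120 MB, i.e. about 5 GB per JVM.
  }
}
{code}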


> ABFS: Change default from disk to bytebuffer for fs.azure.data.blocks.buffer
> ----------------------------------------------------------------------------
>
>                 Key: HADOOP-18876
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18876
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: build
>    Affects Versions: 3.3.6
>            Reporter: Anmol Asrani
>            Assignee: Anmol Asrani
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.3.6
>
>
> Change default from disk to bytebuffer for fs.azure.data.blocks.buffer.
> Data gathered from multiple workload runs shows a notable performance 
> improvement: using ByteBuffer for *read operations* was approximately 
> *64.83%* faster than disk-based reading, and using ByteBuffer for 
> *write operations* gave an efficiency gain of about {*}60.75%{*}. These 
> gains were consistent across the workload scenarios tested.
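
For completeness, the change described above amounts to flipping a single configuration value; a minimal sketch, assuming only the standard Hadoop {{Configuration}} API:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class AbfsBufferKindSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Buffer upload blocks in memory instead of on local disk; "disk" was the
    // previous default, "bytebuffer" becomes the default with this change.
    conf.set("fs.azure.data.blocks.buffer", "bytebuffer");
    System.out.println(conf.get("fs.azure.data.blocks.buffer"));
  }
}
{code}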



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
