[ https://issues.apache.org/jira/browse/SPARK-21776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131749#comment-16131749 ]

zhaP524 commented on SPARK-21776:
---------------------------------

@Kazuaki Ishizaki  I see, I have changed the issue type. This problem has 
brought our business to a standstill.

> How to use memory-mapped files in Spark?
> -----------------------------------------
>
>                 Key: SPARK-21776
>                 URL: https://issues.apache.org/jira/browse/SPARK-21776
>             Project: Spark
>          Issue Type: Improvement
>          Components: Block Manager, Documentation, Input/Output, Spark Core
>    Affects Versions: 2.1.1
>         Environment: Spark 2.1.1 
> Scala 2.11.8
>            Reporter: zhaP524
>            Priority: Blocker
>         Attachments: screenshot-1.png, screenshot-2.png
>
>
>       In production, we use Spark to do a full load of an HBase table that 
> serves as a dimension table for our business logic. Because the base table 
> is loaded in its entirety, memory pressure is very high, so I want to know 
> whether Spark can handle this with memory-mapped files. Is there such a 
> mechanism, and how is it used?
>       I also found the Spark parameter 
> spark.storage.memoryMapThreshold=2m, but it is not clear to me what this 
> parameter is used for.
>        DiskStore.scala in the Spark source code has putBytes and getBytes 
> methods; are these the memory-mapped files mentioned above? How should I 
> understand them?
>        Please let me know if you need any further information.
> Thank you!!
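
For reference, the full-table load described in the quoted report looks 
roughly like the following. This is a minimal sketch only: "dim_table" and 
the app name are placeholders, and the HBase/cluster setup is assumed.

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.Result
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    val sc = new SparkContext(new SparkConf().setAppName("hbase-full-load"))

    // Scan the whole HBase table as an RDD ("dim_table" is a placeholder).
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set(TableInputFormat.INPUT_TABLE, "dim_table")
    val rows = sc.newAPIHadoopRDD(
      hbaseConf,
      classOf[TableInputFormat],
      classOf[ImmutableBytesWritable],
      classOf[Result])

    // DISK_ONLY keeps cached blocks out of the JVM heap; reads then go
    // through DiskStore, which is where the memory-map question arises.
    rows.persist(StorageLevel.DISK_ONLY)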
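
On spark.storage.memoryMapThreshold: it is the minimum size of a block that 
Spark will memory-map when reading it back from disk; smaller blocks are 
copied into ordinary NIO buffers instead, since mapping files close to the 
OS page size has high overhead. A minimal sketch of setting it (the app name 
and the 16m value are examples, not recommendations):

    import org.apache.spark.{SparkConf, SparkContext}

    // Blocks of 16 MB or more read back from disk will be memory-mapped;
    // smaller ones are copied into heap buffers. The default is 2m.
    val conf = new SparkConf()
      .setAppName("memory-map-threshold-example")
      .set("spark.storage.memoryMapThreshold", "16m")
    val sc = new SparkContext(conf)

Note that memory-mapping does not avoid reading the data; it lets the OS 
page cache back the buffer instead of the JVM heap.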
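
Regarding putBytes and getBytes in DiskStore.scala: getBytes is where the 
threshold above is applied in Spark 2.1.x. Below is a simplified, 
self-contained sketch of that branching, not the exact Spark source; the 
names here are illustrative.

    import java.io.{File, IOException, RandomAccessFile}
    import java.nio.ByteBuffer
    import java.nio.channels.FileChannel.MapMode

    // Illustrative stand-in for the DiskStore.getBytes read path: small
    // blocks are copied into a heap ByteBuffer, large blocks are
    // memory-mapped so they are backed by the OS page cache, not the heap.
    def readBlock(file: File, memoryMapThreshold: Long): ByteBuffer = {
      val channel = new RandomAccessFile(file, "r").getChannel
      try {
        if (file.length < memoryMapThreshold) {
          val buf = ByteBuffer.allocate(file.length.toInt)
          channel.position(0)
          while (buf.remaining() > 0) {
            if (channel.read(buf) == -1) {
              throw new IOException(s"Unexpected EOF in $file")
            }
          }
          buf.flip()
          buf
        } else {
          channel.map(MapMode.READ_ONLY, 0, file.length)
        }
      } finally {
        channel.close()
      }
    }

putBytes, by contrast, simply writes the block's bytes to a file through a 
FileChannel; the memory-mapping decision is made only on the read side.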


