[ https://issues.apache.org/jira/browse/SPARK-21776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhaP524 updated SPARK-21776:
----------------------------
    Attachment: screenshot-2.png

> How to use memory-mapped files in Spark?
> -----------------------------------------
>
>                 Key: SPARK-21776
>                 URL: https://issues.apache.org/jira/browse/SPARK-21776
>             Project: Spark
>          Issue Type: Question
>          Components: Block Manager, Documentation, Input/Output
>    Affects Versions: 2.1.1
>         Environment: Spark 2.1.1 
> Scala 2.11.8
>            Reporter: zhaP524
>         Attachments: screenshot-1.png, screenshot-2.png
>
>
>       In generation, we have to use the Spark full quantity loaded HBase 
> table based on one dimension table to generate business, because the base 
> table is total quantity loaded, the memory will pressure is very big, I want 
> to see if the Spark can use this way to deal with memory mapped file?Is there 
> such a mechanism?How do you use it?
>       And I found in the Spark a parameter: 
> spark.storage.memoryMapThreshold=2m, is not very clear what this parameter is 
> used for?
>        There is a putBytes and getBytes method in DiskStore.scala with Spark 
> source code, is this the memory-mapped file mentioned above?How to 
> understand??
>        Let me know if you have any trouble..
> Wish to You!!
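For anyone landing on this issue with the same question: in Spark 2.x, spark.storage.memoryMapThreshold is the minimum block size at which DiskStore.getBytes memory-maps an on-disk block (via java.nio.channels.FileChannel.map) instead of reading it into an ordinary buffer; putBytes only writes the block's bytes to a local file and does no mapping. Below is a minimal sketch of tuning the threshold and forcing cached blocks onto disk so that reads go through DiskStore. The app name, threshold value, and dataset size are illustrative, not part of the original report.

{code:scala}
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

object MemoryMapThresholdDemo {
  def main(args: Array[String]): Unit = {
    // Blocks of at least this size are memory-mapped by DiskStore.getBytes
    // when read back from local disk; smaller blocks are read into a heap
    // buffer instead. 2m is the default; 8m here is an arbitrary example.
    val conf = new SparkConf()
      .setAppName("memory-map-threshold-demo")
      .set("spark.storage.memoryMapThreshold", "8m")

    val spark = SparkSession.builder().config(conf).getOrCreate()

    // DISK_ONLY keeps the cached partitions out of the JVM heap entirely;
    // later reads of each partition go through DiskStore, where the
    // threshold above decides between mmap and a plain read.
    val df = spark.range(0L, 200000000L).toDF("id")
    df.persist(StorageLevel.DISK_ONLY)
    println(df.count())

    spark.stop()
  }
}
{code}

Note that this mechanism only applies to blocks Spark itself stores on local disk (e.g., persisted or spilled partitions); it does not memory-map external storage such as HBase files. For a large dimension table, the usual remedies remain DISK_ONLY persistence or restructuring the job so the full table never has to sit in memory at once.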



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
