Re: [UPDATE] Apache Spark 3.1.0 Release Window

2020-10-13 Thread Michel Sumbul
I think you put Jan 2020 instead of 2021 :-)

On Tue, Oct 13, 2020 at 00:51, Xiao Li wrote:

> Thank you, Dongjoon
>
> Xiao
>
> On Mon, Oct 12, 2020 at 4:19 PM Dongjoon Hyun 
> wrote:
>
>> Hi, All.
>>
>> The Apache Spark 3.1.0 release window was adjusted today as follows.
>> Please check the latest information on the official website.
>>
>> -
>> https://github.com/apache/spark-website/commit/0cd0bdc80503882b4737db7e77cc8f9d17ec12ca
>> - https://spark.apache.org/versioning-policy.html
>>
>> Bests,
>> Dongjoon.
>>
>
>


Re: Spark3 on k8S reading encrypted data from HDFS with KMS in HA

2020-08-19 Thread Michel Sumbul
Hi Prashant,

I have the problem only on K8s; it works fine when Spark runs on top of
YARN.
I am wondering whether the delegation token gets saved. Any idea how to
check that?
Could it be because KMS is in HA and Spark requests 2 delegation tokens?

For testing, just running Spark 3 on top of any K8s cluster, reading data
from any Hadoop 3 cluster with KMS, should be enough. I'm using an HDP3
cluster, but there is probably an easier way to test.

Michel

On Wed, Aug 19, 2020 at 09:50, Prashant Sharma wrote:

> -dev
> Hi,
>
> I have used Spark with HDFS encrypted with Hadoop KMS, and it worked well.
> Somehow, I cannot recall whether I had Kubernetes in the mix. From the
> error alone, it is not clear what caused the failure. Can I reproduce
> this somehow?
>
> Thanks,
>
> On Sat, Aug 15, 2020 at 7:18 PM Michel Sumbul 
> wrote:
>
>> Hi guys,
>>
>> Does anyone have an idea about this issue, or even some tips to
>> troubleshoot it?
>> I get the impression that after the delegation token for the KMS is
>> created, the token is not sent to the executors, or maybe not saved?
>>
>> I'm sure I'm not the only one using Spark with HDFS encrypted with KMS :-)
>>
>> Thanks,
>> Michel
>>
>> On Thu, Aug 13, 2020 at 14:32, Michel Sumbul wrote:
>>
>>> Hi guys,
>>>
>>> Has anyone tried Spark 3 on K8s reading data from HDFS encrypted with
>>> KMS in HA mode (with Kerberos)?
>>>
>>> I have a wordcount job running with Spark 3 reading data on HDFS (Hadoop
>>> 3.1), everything secured with Kerberos. Everything works fine if the data
>>> folder is not encrypted (Spark on K8s). If the data is in an encrypted
>>> folder, Spark 3 on YARN works fine, but it does not work when Spark 3
>>> runs on K8s.
>>> I submit the job with the spark-submit command and provide the keytab and
>>> the principal to use.
>>> I get a Kerberos error saying that there is no TGT to authenticate to the
>>> KMS servers (Ranger KMS; full stack trace of the error at the end of the
>>> mail), but in the log I can see that Spark gets 2 delegation tokens, one
>>> for each KMS server:
>>>
>>> -- --
>>>
>>> 20/08/13 10:50:50 INFO HadoopDelegationTokenManager: Attempting to login
>>> to KDC using principal: mytestu...@paf.com
>>>
>>> 20/08/13 10:50:50 INFO HadoopDelegationTokenManager: Successfully logged
>>> into KDC.
>>>
>>> 20/08/13 10:50:52 WARN DomainSocketFactory: The short-circuit local
>>> reads feature cannot be used because libhadoop cannot be loaded.
>>>
>>> 20/08/13 10:50:52 INFO HadoopFSDelegationTokenProvider: getting token
>>> for: DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_-237056190_16, ugi=
>>> mytestu...@paf.com (auth:KERBEROS)]] with renewer testuser
>>>
>>> 20/08/13 10:50:52 INFO DFSClient: Created token for testuser:
>>> HDFS_DELEGATION_TOKEN owner= mytestu...@paf.com, renewer=testuser,
>>> realUser=, issueDate=1597315852353, maxDate=1597920652353,
>>> sequenceNumber=55185062, masterKeyId=1964 on ha-hdfs:cluster2
>>>
>>> 20/08/13 10:50:52 INFO KMSClientProvider: New token created: (Kind:
>>> kms-dt, Service: kms://ht...@server2.paf.com:9393/kms, Ident: (kms-dt
>>> owner=testuser, renewer=testuser, realUser=, issueDate=1597315852642,
>>> maxDate=1597920652642, sequenceNumber=3929883, masterKeyId=623))
>>>
>>> 20/08/13 10:50:52 INFO HadoopFSDelegationTokenProvider: getting token
>>> for: DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_-237056190_16, ugi=
>>> testu...@paf.com (auth:KERBEROS)]] with renewer testu...@paf.com
>>>
>>> 20/08/13 10:50:52 INFO DFSClient: Created token for testuser:
>>> HDFS_DELEGATION_TOKEN owner=testu...@paf.com, renewer=testuser,
>>> realUser=, issueDate=1597315852744, maxDate=1597920652744,
>>> sequenceNumber=55185063, masterKeyId=1964 on ha-hdfs:cluster2
>>>
>>> 20/08/13 10:50:52 INFO KMSClientProvider: New token created: (Kind:
>>> kms-dt, Service: kms://ht...@server.paf.com:9393/kms, Ident: (kms-dt
>>> owner=testuser, renewer=testuser, realUser=, issueDate=1597315852839,
>>> maxDate=1597920652839, sequenceNumber=3929884, masterKeyId=624))
>>>
>>> 20/08/13 10:50:52 INFO HadoopFSDelegationTokenProvider: Renewal interval
>>> is 86400104 for token HDFS_DELEGATION_TOKEN
>>>
>>> 20/08/13 10:50:52 INFO HadoopFSDelegationTokenProvider: Renewal interval
>>> is 86400108 for token kms-dt
>>>
>>
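One way to answer the troubleshooting question above (did the driver really obtain one kms-dt token per KMS server?) is to scan the driver log mechanically. This is a minimal sketch, assuming the KMSClientProvider log format quoted in this thread; the sample lines are abridged copies of the ones above:

```python
import re

def kms_token_services(log_text):
    """Extract the KMS endpoint behind each 'New token created' kms-dt log line."""
    # KMSClientProvider logs one such line per KMS delegation token; the
    # Service: field names the KMS server the token is bound to.
    pattern = re.compile(r"New token created: \(Kind:\s*kms-dt,\s*Service:\s*([^,]+),")
    return pattern.findall(log_text)

log = (
    "20/08/13 10:50:52 INFO KMSClientProvider: New token created: (Kind: kms-dt, "
    "Service: kms://ht...@server2.paf.com:9393/kms, Ident: (kms-dt owner=testuser))\n"
    "20/08/13 10:50:52 INFO KMSClientProvider: New token created: (Kind: kms-dt, "
    "Service: kms://ht...@server.paf.com:9393/kms, Ident: (kms-dt owner=testuser))\n"
)
services = kms_token_services(log)
# Two distinct services => a token was created for each KMS instance in the HA pair.
```

If only one service shows up, or the tokens never reappear in the executor logs, that narrows down whether the problem is token creation or token propagation.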

Re: Spark3 on k8S reading encrypted data from HDFS with KMS in HA

2020-08-15 Thread Michel Sumbul
Hi guys,

Does anyone have an idea about this issue, or even some tips to troubleshoot
it?
I get the impression that after the delegation token for the KMS is created,
the token is not sent to the executors, or maybe not saved?

I'm sure I'm not the only one using Spark with HDFS encrypted with KMS :-)

Thanks,
Michel

On Thu, Aug 13, 2020 at 14:32, Michel Sumbul wrote:

> Hi guys,
>
> Has anyone tried Spark 3 on K8s reading data from HDFS encrypted with KMS
> in HA mode (with Kerberos)?
>
> I have a wordcount job running with Spark 3 reading data on HDFS (Hadoop
> 3.1), everything secured with Kerberos. Everything works fine if the data
> folder is not encrypted (Spark on K8s). If the data is in an encrypted
> folder, Spark 3 on YARN works fine, but it does not work when Spark 3 runs
> on K8s.
> I submit the job with the spark-submit command and provide the keytab and
> the principal to use.
> I get a Kerberos error saying that there is no TGT to authenticate to the
> KMS servers (Ranger KMS; full stack trace of the error at the end of the
> mail), but in the log I can see that Spark gets 2 delegation tokens, one
> for each KMS server:
>
> -- --
>
> 20/08/13 10:50:50 INFO HadoopDelegationTokenManager: Attempting to login
> to KDC using principal: mytestu...@paf.com
>
> 20/08/13 10:50:50 INFO HadoopDelegationTokenManager: Successfully logged
> into KDC.
>
> 20/08/13 10:50:52 WARN DomainSocketFactory: The short-circuit local reads
> feature cannot be used because libhadoop cannot be loaded.
>
> 20/08/13 10:50:52 INFO HadoopFSDelegationTokenProvider: getting token for:
> DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_-237056190_16, ugi=
> mytestu...@paf.com (auth:KERBEROS)]] with renewer testuser
>
> 20/08/13 10:50:52 INFO DFSClient: Created token for testuser:
> HDFS_DELEGATION_TOKEN owner= mytestu...@paf.com, renewer=testuser,
> realUser=, issueDate=1597315852353, maxDate=1597920652353,
> sequenceNumber=55185062, masterKeyId=1964 on ha-hdfs:cluster2
>
> 20/08/13 10:50:52 INFO KMSClientProvider: New token created: (Kind:
> kms-dt, Service: kms://ht...@server2.paf.com:9393/kms, Ident: (kms-dt
> owner=testuser, renewer=testuser, realUser=, issueDate=1597315852642,
> maxDate=1597920652642, sequenceNumber=3929883, masterKeyId=623))
>
> 20/08/13 10:50:52 INFO HadoopFSDelegationTokenProvider: getting token for:
> DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_-237056190_16, ugi=
> testu...@paf.com (auth:KERBEROS)]] with renewer testu...@paf.com
>
> 20/08/13 10:50:52 INFO DFSClient: Created token for testuser:
> HDFS_DELEGATION_TOKEN owner=testu...@paf.com, renewer=testuser,
> realUser=, issueDate=1597315852744, maxDate=1597920652744,
> sequenceNumber=55185063, masterKeyId=1964 on ha-hdfs:cluster2
>
> 20/08/13 10:50:52 INFO KMSClientProvider: New token created: (Kind:
> kms-dt, Service: kms://ht...@server.paf.com:9393/kms, Ident: (kms-dt
> owner=testuser, renewer=testuser, realUser=, issueDate=1597315852839,
> maxDate=1597920652839, sequenceNumber=3929884, masterKeyId=624))
>
> 20/08/13 10:50:52 INFO HadoopFSDelegationTokenProvider: Renewal interval
> is 86400104 for token HDFS_DELEGATION_TOKEN
>
> 20/08/13 10:50:52 INFO HadoopFSDelegationTokenProvider: Renewal interval
> is 86400108 for token kms-dt
>
> 20/08/13 10:50:54 INFO HiveConf: Found configuration file null
>
> 20/08/13 10:50:54 INFO HadoopDelegationTokenManager: Scheduling renewal in
> 18.0 h.
>
> 20/08/13 10:50:54 INFO HadoopDelegationTokenManager: Updating delegation
> tokens.
>
> 20/08/13 10:50:54 INFO SparkHadoopUtil: Updating delegation tokens for
> current user.
>
> 20/08/13 10:50:55 INFO SparkHadoopUtil: Updating delegation tokens for
> current user.
> --- --
>
> In core-site.xml, I have the following property for the 2 KMS servers:
>
> <property>
>   <name>hadoop.security.key.provider.path</name>
>   <value>kms://ht...@server.paf.com;server2.paf.com:9393/kms</value>
> </property>
>
>
> Does anyone have an idea how to make this work, or has anyone at least
> been able to make it work?
> Does anyone know where the delegation tokens are saved during the
> execution of jobs on K8s, and how they are shared between the executors?
>
>
> Thanks,
> Michel
>
> PS: The full stack trace of the error:
>
> 
>
> Caused by: org.apache.spark.SparkException: Job aborted due to stage
> failure: Task 22 in stage 0.0 failed 4 times, most recent failure: Lost
> task 22.3 in stage 0.0 (TID 23, 10.5.5.5, executor 1): java.io.IOException:
> org.apache.hadoop.security.authentication.client.AuthenticationException:
> Error while authenticating with end

Spark3 on k8S reading encrypted data from HDFS with KMS in HA

2020-08-13 Thread Michel Sumbul
Hi guys,

Has anyone tried Spark 3 on K8s reading data from HDFS encrypted with KMS in
HA mode (with Kerberos)?

I have a wordcount job running with Spark 3 reading data on HDFS (Hadoop
3.1), everything secured with Kerberos. Everything works fine if the data
folder is not encrypted (Spark on K8s). If the data is in an encrypted
folder, Spark 3 on YARN works fine, but it does not work when Spark 3 runs
on K8s.
I submit the job with the spark-submit command and provide the keytab and
the principal to use.
I get a Kerberos error saying that there is no TGT to authenticate to the
KMS servers (Ranger KMS; full stack trace of the error at the end of the
mail), but in the log I can see that Spark gets 2 delegation tokens, one for
each KMS server:

-- --

20/08/13 10:50:50 INFO HadoopDelegationTokenManager: Attempting to login to
KDC using principal: mytestu...@paf.com

20/08/13 10:50:50 INFO HadoopDelegationTokenManager: Successfully logged
into KDC.

20/08/13 10:50:52 WARN DomainSocketFactory: The short-circuit local reads
feature cannot be used because libhadoop cannot be loaded.

20/08/13 10:50:52 INFO HadoopFSDelegationTokenProvider: getting token for:
DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_-237056190_16, ugi=
mytestu...@paf.com (auth:KERBEROS)]] with renewer testuser

20/08/13 10:50:52 INFO DFSClient: Created token for testuser:
HDFS_DELEGATION_TOKEN owner= mytestu...@paf.com, renewer=testuser,
realUser=, issueDate=1597315852353, maxDate=1597920652353,
sequenceNumber=55185062, masterKeyId=1964 on ha-hdfs:cluster2

20/08/13 10:50:52 INFO KMSClientProvider: New token created: (Kind: kms-dt,
Service: kms://ht...@server2.paf.com:9393/kms, Ident: (kms-dt
owner=testuser, renewer=testuser, realUser=, issueDate=1597315852642,
maxDate=1597920652642, sequenceNumber=3929883, masterKeyId=623))

20/08/13 10:50:52 INFO HadoopFSDelegationTokenProvider: getting token for:
DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_-237056190_16, ugi=
testu...@paf.com (auth:KERBEROS)]] with renewer testu...@paf.com

20/08/13 10:50:52 INFO DFSClient: Created token for testuser:
HDFS_DELEGATION_TOKEN owner=testu...@paf.com, renewer=testuser, realUser=,
issueDate=1597315852744, maxDate=1597920652744, sequenceNumber=55185063,
masterKeyId=1964 on ha-hdfs:cluster2

20/08/13 10:50:52 INFO KMSClientProvider: New token created: (Kind: kms-dt,
Service: kms://ht...@server.paf.com:9393/kms, Ident: (kms-dt
owner=testuser, renewer=testuser, realUser=, issueDate=1597315852839,
maxDate=1597920652839, sequenceNumber=3929884, masterKeyId=624))

20/08/13 10:50:52 INFO HadoopFSDelegationTokenProvider: Renewal interval is
86400104 for token HDFS_DELEGATION_TOKEN

20/08/13 10:50:52 INFO HadoopFSDelegationTokenProvider: Renewal interval is
86400108 for token kms-dt

20/08/13 10:50:54 INFO HiveConf: Found configuration file null

20/08/13 10:50:54 INFO HadoopDelegationTokenManager: Scheduling renewal in
18.0 h.

20/08/13 10:50:54 INFO HadoopDelegationTokenManager: Updating delegation
tokens.

20/08/13 10:50:54 INFO SparkHadoopUtil: Updating delegation tokens for
current user.

20/08/13 10:50:55 INFO SparkHadoopUtil: Updating delegation tokens for
current user.
--- --

In core-site.xml, I have the following property for the 2 KMS servers:

<property>
  <name>hadoop.security.key.provider.path</name>
  <value>kms://ht...@server.paf.com;server2.paf.com:9393/kms</value>
</property>


Does anyone have an idea how to make this work, or has anyone at least been
able to make it work?
Does anyone know where the delegation tokens are saved during the execution
of jobs on K8s, and how they are shared between the executors?
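For context on why two tokens appear: a Hadoop HA KMS URI lists all servers in one value, and the client (the LoadBalancingKMSClientProvider seen in the stack trace at the end of this mail) expands it into one endpoint per host, fetching a delegation token from each. The sketch below is an illustration of that expansion, not Hadoop's actual code, and the full URI is hypothetical since the value quoted here is truncated ("ht..."):

```python
# Hedged sketch of how an HA KMS provider URI of the form
# kms://<scheme>@host1;host2:<port>/<path> expands into one endpoint per
# server -- which is why the driver log shows one kms-dt token per KMS instance.
def expand_kms_uri(uri):
    assert uri.startswith("kms://")
    rest = uri[len("kms://"):]                     # "<scheme>@host1;host2:<port>/<path>"
    scheme, _, hostpart = rest.partition("@")
    hosts_and_port, _, path = hostpart.partition("/")
    hosts, _, port = hosts_and_port.rpartition(":")
    return [f"{scheme}://{h}:{port}/{path}" for h in hosts.split(";")]

# Hypothetical full URI (the value in this thread is truncated):
endpoints = expand_kms_uri("kms://https@server.paf.com;server2.paf.com:9393/kms")
# -> ["https://server.paf.com:9393/kms", "https://server2.paf.com:9393/kms"]
```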


Thanks,
Michel

PS: The full stack trace of the error:



Caused by: org.apache.spark.SparkException: Job aborted due to stage
failure: Task 22 in stage 0.0 failed 4 times, most recent failure: Lost
task 22.3 in stage 0.0 (TID 23, 10.5.5.5, executor 1): java.io.IOException:
org.apache.hadoop.security.authentication.client.AuthenticationException:
Error while authenticating with endpoint:
https://server.paf.com:9393/kms/v1/keyversion/dir_tmp_key%400/_eek?eek_op=decrypt

at
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:525)

at
org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:826)

at
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:351)

at
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:347)

at
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:172)

at
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:347)

at

Re: Spark 3 pod template for the driver

2020-06-29 Thread Michel Sumbul
Hello,

Adding the dev mailing list. Maybe someone here can help by showing a
valid/accepted pod template for Spark 3?

Thanks in advance,
Michel


On Fri, Jun 26, 2020 at 14:03, Michel Sumbul wrote:

> Hi Jorge,
> If I set that in the spark-submit command it works, but I want it only in
> the pod template file.
>
> Best regards,
> Michel
>
> On Fri, Jun 26, 2020 at 14:01, Jorge Machado wrote:
>
>> Try to set spark.kubernetes.container.image
>>
>> On 26. Jun 2020, at 14:58, Michel Sumbul  wrote:
>>
>> Hi guys,
>>
>> I am trying to use Spark 3 on top of Kubernetes and to specify a pod
>> template for the driver.
>>
>> Here is my pod manifest for the driver; when I do a spark-submit with
>> the option:
>> --conf
>> spark.kubernetes.driver.podTemplateFile=/data/k8s/podtemplate_driver3.yaml
>>
>> I get an error message saying that I need to specify an image, but it is
>> set in the manifest.
>> Is my manifest file wrong? What should it look like?
>>
>> Thanks for your help,
>> Michel
>>
>> 
>> The pod manifest:
>>
>> apiVersion: v1
>> kind: Pod
>> metadata:
>>   name: mySpark3App
>>   labels:
>> app: mySpark3App
>> customlabel/app-id: "1"
>> spec:
>>   securityContext:
>> runAsUser: 1000
>>   volumes:
>> - name: "test-volume"
>>   emptyDir: {}
>>   containers:
>> - name: spark3driver
>>   image: mydockerregistry.example.com/images/dev/spark3:latest
>>   instances: 1
>>   resources:
>> requests:
>>   cpu: "1000m"
>>   memory: "512Mi"
>> limits:
>>   cpu: "1000m"
>>   memory: "512Mi"
>>   volumeMounts:
>>     - name: "test-volume"
>>       mountPath: "/tmp"
>>
>>
>>
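Beyond the spark.kubernetes.container.image workaround, it may be worth linting the template itself: for instance, `instances: 1` is not a field of the Kubernetes Container spec, so a strict parser could reject or silently ignore parts of the manifest. Below is a hedged sketch of such a check, written against the template as a plain Python dict; the field whitelist is an assumption for illustration, not Spark's or Kubernetes' actual validation logic:

```python
# Hedged sketch: a structural lint for a driver pod template, assuming Spark
# reads the first entry of spec.containers. Not Spark's actual code.

VALID_CONTAINER_FIELDS = {  # illustrative subset of the Kubernetes Container spec
    "name", "image", "resources", "volumeMounts", "env", "command", "args",
}

def lint_pod_template(pod):
    problems = []
    containers = pod.get("spec", {}).get("containers", [])
    if not containers:
        return ["spec.containers is empty"]
    driver = containers[0]
    if "image" not in driver:
        problems.append("first container has no image")
    for field in driver:
        if field not in VALID_CONTAINER_FIELDS:
            problems.append(f"unknown container field: {field}")
    return problems

# The manifest from this thread, as a Python dict (abridged):
template = {
    "apiVersion": "v1",
    "kind": "Pod",
    "spec": {
        "containers": [{
            "name": "spark3driver",
            "image": "mydockerregistry.example.com/images/dev/spark3:latest",
            "instances": 1,   # not a valid Kubernetes Container field
        }],
    },
}
issues = lint_pod_template(template)
# -> ["unknown container field: instances"]
```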


Re: Spark 3 pod template for the driver

2020-06-26 Thread Michel Sumbul
Hi Jorge,
If I set that in the spark-submit command it works, but I want it only in
the pod template file.

Best regards,
Michel

On Fri, Jun 26, 2020 at 14:01, Jorge Machado wrote:

> Try to set spark.kubernetes.container.image
>
> On 26. Jun 2020, at 14:58, Michel Sumbul  wrote:
>
> Hi guys,
>
> I am trying to use Spark 3 on top of Kubernetes and to specify a pod
> template for the driver.
>
> Here is my pod manifest for the driver; when I do a spark-submit with
> the option:
> --conf
> spark.kubernetes.driver.podTemplateFile=/data/k8s/podtemplate_driver3.yaml
>
> I get an error message saying that I need to specify an image, but it is
> set in the manifest.
> Is my manifest file wrong? What should it look like?
>
> Thanks for your help,
> Michel
>
> 
> The pod manifest:
>
> apiVersion: v1
> kind: Pod
> metadata:
>   name: mySpark3App
>   labels:
> app: mySpark3App
> customlabel/app-id: "1"
> spec:
>   securityContext:
> runAsUser: 1000
>   volumes:
> - name: "test-volume"
>   emptyDir: {}
>   containers:
> - name: spark3driver
>   image: mydockerregistry.example.com/images/dev/spark3:latest
>   instances: 1
>   resources:
> requests:
>   cpu: "1000m"
>   memory: "512Mi"
> limits:
>   cpu: "1000m"
>   memory: "512Mi"
>   volumeMounts:
>     - name: "test-volume"
>       mountPath: "/tmp"
>
>
>


Spark 3 pod template for the driver

2020-06-26 Thread Michel Sumbul
Hi guys,
I am trying to use Spark 3 on top of Kubernetes and to specify a pod template
for the driver.

Here is my pod manifest for the driver; when I do a spark-submit with the
option:
--conf
spark.kubernetes.driver.podTemplateFile=/data/k8s/podtemplate_driver3.yaml

I get an error message saying that I need to specify an image, but it is set
in the manifest.

Is my manifest file wrong? What should it look like?

Thanks for your help,
Michel
The pod manifest:
apiVersion: v1
kind: Pod
metadata:
  name: mySpark3App
  labels:
    app: mySpark3App
    customlabel/app-id: "1"
spec:
  securityContext:
    runAsUser: 1000
  volumes:
    - name: "test-volume"
      emptyDir: {}
  containers:
    - name: spark3driver
      image: mydockerregistry.example.com/images/dev/spark3:latest
      instances: 1
      resources:
        requests:
          cpu: "1000m"
          memory: "512Mi"
        limits:
          cpu: "1000m"
          memory: "512Mi"
      volumeMounts:
        - name: "test-volume"
          mountPath: "/tmp"


Spark 3 pod template for the driver

2020-06-26 Thread Michel Sumbul
Hi guys,

I am trying to use Spark 3 on top of Kubernetes and to specify a pod template
for the driver.

Here is my pod manifest for the driver; when I do a spark-submit with the
option:
--conf
spark.kubernetes.driver.podTemplateFile=/data/k8s/podtemplate_driver3.yaml

I get an error message saying that I need to specify an image, but it is set
in the manifest.
Is my manifest file wrong? What should it look like?

Thanks for your help,
Michel


The pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: mySpark3App
  labels:
app: mySpark3App
customlabel/app-id: "1"
spec:
  securityContext:
runAsUser: 1000
  volumes:
- name: "test-volume"
  emptyDir: {}
  containers:
- name: spark3driver
  image: mydockerregistry.example.com/images/dev/spark3:latest
  instances: 1
  resources:
requests:
  cpu: "1000m"
  memory: "512Mi"
limits:
  cpu: "1000m"
  memory: "512Mi"
  volumeMounts:
    - name: "test-volume"
      mountPath: "/tmp"


Re: Exact meaning of spark.memory.storageFraction in spark 2.3.x [Marketing Mail] [Marketing Mail]

2020-03-20 Thread Michel Sumbul
Hi Iacovos,

Thanks for the reply, it's super clear.
Do you know if there is a way to know the max memory usage?
In the Spark UI 2.3.x, the "peak memory usage" metric is always at zero.

Thanks,
Michel


Le ven. 20 mars 2020 à 14:56, Jack Kolokasis  a
écrit :

> This is just a counter showing the size of cached RDDs. If it is zero, it
> means that no caching has occurred. Also, even when storage memory is used
> for computing, the counter will show zero.
>
> Iacovos
> On 20/3/20 4:51 μ.μ., Michel Sumbul wrote:
>
> Hi,
>
> Thanks for the very quick reply!
> If I see the metric "storage memory" always at 0, does that mean that the
> memory is used neither for caching nor for computing?
>
> Thanks,
> Michel
>
>
>
> On Fri, Mar 20, 2020 at 14:45, Jack Kolokasis wrote:
>
>> Hello Michel,
>>
>> Spark separates executor memory using an adaptive boundary between
>> storage and execution memory. If there is no caching and execution memory
>> needs more space, it will use a portion of the storage memory.
>>
>> If your program does not use caching, then you can reduce storage memory.
>>
>> Iacovos
>>
>> On 20/3/20 4:40 μ.μ., msumbul wrote:
>> > Hello,
>> >
>> > I'm asking myself about the exact meaning of the setting
>> > spark.memory.storageFraction.
>> > The documentation mentions:
>> >
>> > "Amount of storage memory immune to eviction, expressed as a fraction
>> > of the size of the region set aside by spark.memory.fraction. The higher
>> > this is, the less working memory may be available to execution and tasks
>> > may spill to disk more often"
>> >
>> > Does that mean that if there is no caching, that part of the memory will
>> > not be used at all?
>> > In the Spark UI, in the "Executors" tab, I can see that the "storage
>> > memory" is always zero. Does that mean that that part of the memory is
>> > never used at all and I can reduce it, or is it never used for storage
>> > specifically?
>> >
>> > Thanks in advance for your help,
>> > Michel
>> >
>> >
>> >
>> > --
>> > Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>> >
>> > -
>> > To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>> >
>>
>> -
>> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>
>>
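To make the storage/execution boundary concrete, the regions can be computed from the documented Spark 2.x defaults (spark.memory.fraction = 0.6, spark.memory.storageFraction = 0.5, and roughly 300 MiB of reserved heap). A small illustrative sketch; the arithmetic follows the unified memory model described above, and the numbers are examples, not measurements:

```python
RESERVED_MIB = 300  # heap Spark sets aside before splitting memory regions

def unified_memory_regions(heap_mib, memory_fraction=0.6, storage_fraction=0.5):
    """Return (unified, storage_floor, execution_share) in MiB, Spark 2.x defaults."""
    usable = heap_mib - RESERVED_MIB
    unified = usable * memory_fraction           # shared storage + execution pool
    storage_floor = unified * storage_fraction   # storage "immune to eviction"
    execution_share = unified - storage_floor    # initial execution side of the boundary
    return unified, storage_floor, execution_share

# A 4 GiB executor heap with the defaults:
unified, storage_floor, execution_share = unified_memory_regions(4096)
# unified = (4096 - 300) * 0.6 = 2277.6 MiB; each side starts at 1138.8 MiB
```

Because the boundary is adaptive, storageFraction only fixes the floor below which cached blocks cannot be evicted; with no caching at all, execution can borrow the whole unified pool, which matches the answer above.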


Re: Exact meaning of spark.memory.storageFraction in spark 2.3.x [Marketing Mail]

2020-03-20 Thread Michel Sumbul
Hi,

Thanks for the very quick reply!
If I see the metric "storage memory" always at 0, does that mean that the
memory is used neither for caching nor for computing?

Thanks,
Michel



On Fri, Mar 20, 2020 at 14:45, Jack Kolokasis wrote:

> Hello Michel,
>
> Spark separates executor memory using an adaptive boundary between
> storage and execution memory. If there is no caching and execution memory
> needs more space, it will use a portion of the storage memory.
>
> If your program does not use caching, then you can reduce storage memory.
>
> Iacovos
>
> On 20/3/20 4:40 μ.μ., msumbul wrote:
> > Hello,
> >
> > I'm asking myself about the exact meaning of the setting
> > spark.memory.storageFraction.
> > The documentation mentions:
> >
> > "Amount of storage memory immune to eviction, expressed as a fraction
> > of the size of the region set aside by spark.memory.fraction. The higher
> > this is, the less working memory may be available to execution and tasks
> > may spill to disk more often"
> >
> > Does that mean that if there is no caching, that part of the memory will
> > not be used at all?
> > In the Spark UI, in the "Executors" tab, I can see that the "storage
> > memory" is always zero. Does that mean that that part of the memory is
> > never used at all and I can reduce it, or is it never used for storage
> > specifically?
> >
> > Thanks in advance for your help,
> > Michel
> >
> >
> >
> > --
> > Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
> >
> > -
> > To unsubscribe e-mail: user-unsubscr...@spark.apache.org
> >
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>