> I've started running jobs in cluster mode, and obviously the driver is
> running on a worker, so I can't see the logs.
>
> I would like to store the logs (preferably in HDFS). Is there an easy way to do that?
>
> Thanks
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
> --
Tomasz Krol
patric...@gmail.com
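For the cluster-mode logging question above, one common setup (an assumption here, not stated in the thread) is running on YARN with log aggregation enabled, in which case the driver's logs already end up in HDFS and can be fetched with the `yarn logs` CLI. The application id below is a hypothetical placeholder:

```shell
# Assumes YARN with yarn.log-aggregation-enable=true; once the application
# finishes, driver and executor logs are aggregated into HDFS under
# yarn.nodemanager.remote-app-log-dir (default: /tmp/logs).
# The application id is a made-up example.
yarn logs -applicationId application_1558000000000_0042 > app.log
```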
>
>> Aren't step 1 and step 2 producing a copy of Table A?
>>
>>
>>
>>
>>
>> --
Tomasz Krol
patric...@gmail.com
reconciliation job smoothly enough.
>
> Others, any better input?
>
> On Wed, 29 May 2019 at 10:50 PM, Tomasz Krol wrote:
>
>> Hey Guys,
>>
>> I am wondering what your approach would be to the following scenario:
>>
>> I have two tables - one (Table A) is relatively s
TB
of data, which obviously takes some time.
I have tried different approaches, but with no luck.
What are your ideas? How can we perform this scenario
efficiently in Spark?
Cheers
Tom
--
Tomasz Krol
patric...@gmail.com
Hey Guys,
Do you know any way to refresh parquet tables that will clear the
cached metadata for all users in the Spark Thrift Server? Or can I somehow stop
caching metadata for parquet tables altogether? It seems like
spark.sql.parquet.cacheMetadata doesn't work anymore.
Thanks
Tom
--
Tomasz Krol
properly (a sort-merge join happens) with adaptive QE enabled.
Thanks
Tom
--
Tomasz Krol
patric...@gmail.com
decreased. I am wondering if any of you managed to get good results running
the Spark Thrift Server in FAIR scheduler mode?
Thanks
Tom
--
Tomasz Krol
patric...@gmail.com
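For reference, FAIR mode for the Thrift Server is driven by two settings plus an allocation file; the pool names and file path below are hypothetical:

```
# spark-defaults.conf (or --conf flags on start-thriftserver.sh)
spark.scheduler.mode              FAIR
spark.scheduler.allocation.file   /path/to/fairscheduler.xml

# Per-JDBC-session pool selection from a client:
#   SET spark.sql.thriftserver.scheduler.pool=interactive;
```

```xml
<!-- fairscheduler.xml: hypothetical pool layout -->
<allocations>
  <pool name="interactive">
    <schedulingMode>FAIR</schedulingMode>
    <weight>2</weight>
    <minShare>1</minShare>
  </pool>
  <pool name="batch">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
</allocations>
```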
Yeah, it seems that the option of making emptyDir larger is something we
need to consider.
Cheers
Tomasz Krol
On Fri, 1 Mar 2019 at 19:30, Matt Cheah wrote:
> Ah I see: We always force the local directory to use emptyDir, and it
> cannot be configured to use any other volume type. [...] Let us know if that moves
> the spills as expected?
>
>
>
> -Matt Cheah
>
>
>
> *From: *Tomasz Krol
> *Date: *Wednesday, February 27, 2019 at 3:41 AM
> *To: *"user@spark.apache.org"
> *Subject: *Spark on k8s - map persistentStorage for data spilling
>
--
Tomasz Krol
patric...@gmail.com
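As a follow-up note: although at the time of this thread the local directory was forced onto emptyDir, later Spark-on-Kubernetes releases (3.x) document mounting a PersistentVolumeClaim as local storage by giving the volume a name prefixed with `spark-local-dir-`. The claim name and mount path below are hypothetical:

```
# spark-defaults.conf / --conf flags (Spark 3.x on Kubernetes)
# Volumes named spark-local-dir-* are used for local/scratch storage,
# so shuffle spills land on the mounted PVC instead of emptyDir.
spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-spill.options.claimName  spill-pvc
spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-spill.mount.path         /spill
```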