-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
Hi there,
I read your question and I believe you are on the right path. But one thing worth
checking is: are you able to connect to the S3 bucket from your worker
nodes?
I did read that you are able to do it from your machine, but since the write
happens at the worker end, it is worth checking there as well.
I hope you are using the query object that is returned by Structured
Streaming?
The returned object contains a lot of information about each query, and tracking
the state of that object should be helpful.
I hope this helps; if not, can you please share more details with examples?
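As a sketch of what tracking that query object can look like (this assumes an active SparkSession, a streaming DataFrame `df`, and placeholder S3 paths, none of which come from the thread):

```python
# Hypothetical sketch: inspecting the StreamingQuery handle returned by start().
# Assumes an active SparkSession, a streaming DataFrame `df`, and placeholder
# bucket/checkpoint paths.
query = (df.writeStream
           .format("parquet")
           .option("path", "s3a://my-bucket/output/")            # placeholder path
           .option("checkpointLocation", "s3a://my-bucket/ckpt/")  # placeholder path
           .start())

print(query.status)        # e.g. whether a trigger is active, data available
print(query.lastProgress)  # metrics for the most recent micro-batch
print(query.exception())   # None unless the query terminated with an error
```

If the workers cannot reach S3, `query.exception()` and the driver logs are usually where the failure first shows up.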
Best,
A
---
Thank you, that’s absolutely right!
Getting the rows of `s` without matches in `p` is no longer a problem.
Have a nice day
Roland
> On 30.04.2020 at 17:36, Ryan C. Kleck wrote:
He’s saying you need to move the filters for the ‘p’ table in order to do what
you want. They need to be before your WHERE: the order of operations in SQL
applies your join-clause filters before the WHERE. The filters on your ‘s’
table need to stay in the WHERE. It’s the only time the ordering matters.
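A sketch of the rewrite being described, based on the query in the original post (the `year` predicate is hypothetical, standing in for whatever partition filter the thread elides):

```sql
select
  s.event_id as search_event_id,
  s.query_string,
  p.event_id
from s
left outer join p
  on s.event_id = p.source_event_id
 and p.year = 2020   -- hypothetical filter on p: must live in the ON clause
where
  s.year = 2020      -- filters on s may stay in the WHERE clause
```

With the `p` filter in the ON clause, rows of `s` that match nothing in `p` still survive with NULLs; with it in the WHERE, those NULL-extended rows are discarded and the plan degenerates to an inner join.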
Thanks for the quick reply.
It plans the LeftOuter as soon as the filters on the second table are
removed.
> It seems like you are asking for a left join, but your filters demand the
> behavior of an inner join.
Can you explain that?
The filters on the second table use partition pruning that w
Does it still plan an inner join if you remove a filter on both tables?
It seems like you are asking for a left join, but your filters demand the
behavior of an inner join.
Maybe you could do the filters on the tables first and then join them.
Something roughly like:
s_DF = s_DF.filter(year =
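The effect being discussed can be shown without Spark at all. This is a pure-Python sketch with toy data (all values hypothetical): a WHERE-style filter on the right-hand table applied *after* a left outer join discards the NULL-extended rows, which is exactly inner-join behavior, while filtering the right-hand table *before* the join preserves the outer rows.

```python
# Toy tables (hypothetical data):
s = [("e1", "spark"), ("e2", "hadoop")]  # (event_id, query_string)
p = [("e1", "p9")]                       # (source_event_id, event_id)

# Left outer join on s.event_id = p.source_event_id
joined = []
for s_event, query in s:
    matches = [pe for src, pe in p if src == s_event]
    if matches:
        joined.extend((s_event, query, pe) for pe in matches)
    else:
        joined.append((s_event, query, None))  # unmatched row of s survives

assert joined == [("e1", "spark", "p9"), ("e2", "hadoop", None)]

# A WHERE-style predicate on the p side drops the None row -> inner-join result
inner_like = [row for row in joined if row[2] is not None]
assert inner_like == [("e1", "spark", "p9")]

# Filtering p *before* the join keeps the outer rows of s intact
p_filtered = [(src, pe) for src, pe in p if pe == "p9"]
```

In Spark the same principle applies: filter each DataFrame first, then join, and the planner has no reason to rewrite the left outer join as an inner join.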
Hi All,
we are on vanilla Spark 2.4.4 and currently see somewhat strange
behavior of the query planner/optimizer, which gives us wrong results.
select
s.event_id as search_event_id,
s.query_string,
p.event_id
from s
left outer join p on s.event_id = p.source_event_id
where
I have 3 disks now:
disk1 - already has data
disk2 - newly added
I want to shift the data from disk1 to disk2; both disks are on datanodes.
Please suggest the steps for a hot datanode disk migration.
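One possible approach, as a sketch only: Hadoop 3.x ships an intra-datanode disk balancer (HDFS-1312) that can move blocks between disks of a running datanode. The hostname and plan-file path below are placeholders, and `dfs.disk.balancer.enabled` must be set to `true` in hdfs-site.xml first.

```shell
# Hypothetical sketch, assuming Hadoop 3.x and a datanode named dn1.example.com.
hdfs diskbalancer -plan dn1.example.com       # compute a move plan across disks
hdfs diskbalancer -execute /system/diskbalancer/<timestamp>/dn1.example.com.plan.json
hdfs diskbalancer -query dn1.example.com      # check progress of the move
```

On Hadoop 2.x, where the disk balancer is not available, the usual fallback is a cold migration: stop the datanode, move block subdirectories between the directories listed in `dfs.datanode.data.dir`, and restart it.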
On Wed, Apr 29, 2020 at 2:38 AM JB Data31 wrote:
> Use Hadoop NFSv3 gateway to mount FS.
>
> @*JB*Δ