I am not using case when; it is mostly IF. By slow, I mean 6 minutes even for
10 records with 41 levels of nested IFs.
On Jan 11, 2017 3:31 PM, "Georg Heiler" wrote:
I was using the DataFrame API, not SQL. The main problem was that too much
code was generated.
Using a UDF turned out to be quicker as well.
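[Editor's note: a hedged sketch of the "too much code was generated" point. A deeply nested conditional built with chained when(...).otherwise(...) calls wraps the entire previous expression inside each new level, so the expression (and the Java code Spark generates from it) keeps growing with depth; very deep nesting is a known way to run into code-generation limits. The plain-Python model below only mimics that growth; the column name `col` and the thresholds are illustrative, not from the thread.]

```python
# Sketch: each extra level of if/else wraps the whole previous
# expression, so the expression text keeps growing with nesting depth.
# This mimics the shape of chained when(...).otherwise(...) calls in
# the Spark DataFrame API; 'col' and the thresholds are made up.

def nested_expr(levels):
    expr = "default"
    for i in range(levels):
        # Each iteration embeds the entire previous expression.
        expr = f"if(col < {i}) then {i} else ({expr})"
    return expr

for depth in (1, 10, 41):
    print(depth, len(nested_expr(depth)))
```

The same growth happens in the generated Java source, which is why flattening the logic (or moving it into a single UDF, as suggested below) can sidestep the problem.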
Olivier Girardot wrote on Tue., 10 Jan.
2017 at 21:54:
Are you using the "case when" functions? What do you mean by slow? Can you
share a snippet?
On Tue, Jan 10, 2017 8:15 PM, Georg Heiler georg.kf.hei...@gmail.com
wrote:
Maybe you can create a UDF?
Raghavendra Pandey wrote on Tue., 10 Jan.
2017 at 20:04:
I have around 41 levels of nested if-else in Spark SQL. I have programmed
it using the DataFrame APIs, but it takes too much time.
Is there anything I can do to improve the runtime here?
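[Editor's note: Spark specifics aside, the shape of the problem in this thread can be sketched in plain Python. The idea behind the UDF suggestion is to collapse the 41 nested if/else branches into one flat function; in PySpark that function could then be wrapped with pyspark.sql.functions.udf and applied to a column instead of building a 41-level when(...).otherwise(...) expression tree. The function name, the `cuts` boundaries, and the input values are all hypothetical, not from the thread.]

```python
# Sketch: collapsing deeply nested if/else per-row logic into a single
# flat function. The bucket boundaries below are hypothetical.

def bucket(value, boundaries):
    """Return the index of the first boundary that value falls under.

    Equivalent to a chain of nested if/else checks, one per boundary,
    but expressed as a single loop so the logic stays flat no matter
    how many levels there are.
    """
    for i, bound in enumerate(boundaries):
        if value < bound:
            return i
    return len(boundaries)

# 41 hypothetical cut points -> the same shape as 41 nested ifs.
cuts = list(range(0, 410, 10))
print(bucket(5, cuts))     # -> 1
print(bucket(250, cuts))   # -> 26
print(bucket(9999, cuts))  # -> 41 (past every cut)
```

A plain Python UDF skips Catalyst code generation for this logic entirely, at the cost of serialization overhead per row; whether that trade is a win is exactly what Georg reports observing above.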