Hi Everyone,

I am pretty new to Spark (and the mailing list), so forgive me if the
answer is obvious.

I have a dataset, and each row contains a start date and end date.

I would like to explode each row so that each day between the start and end
dates becomes its own row.
e.g.
row1  2015-01-01  2015-01-03
becomes
row1   2015-01-01
row1   2015-01-02
row1   2015-01-03
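
To make that concrete, here is roughly the plain-Python version of what I am
after for a single row (dates hard-coded from the example above):

    from datetime import date, timedelta

    start, end = date(2015, 1, 1), date(2015, 1, 3)
    days = [str(start + timedelta(days=i)) for i in range((end - start).days + 1)]
    # days == ['2015-01-01', '2015-01-02', '2015-01-03']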

So, my questions are:
Is Spark a good fit for this?
I can do it in Hive, but it's a bit messy, and this seems like a good
problem for learning Spark (and Python).

If so, any pointers on which methods I should use? In particular, how do I
split one row into multiple rows?
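
From skimming the docs, my rough guess is a flatMap over an RDD of
(id, start, end) tuples, something along these lines (the tuple layout and
names are made up, and I have not actually run this):

    from datetime import date, timedelta

    def expand(row):
        # row is assumed to look like ('row1', '2015-01-01', '2015-01-03')
        row_id, start_str, end_str = row
        start = date(*map(int, start_str.split('-')))
        end = date(*map(int, end_str.split('-')))
        return [(row_id, str(start + timedelta(days=i)))
                for i in range((end - start).days + 1)]

    # rdd is assumed to already hold tuples like the one above
    exploded = rdd.flatMap(expand)

Does that look like a sensible direction?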

Lastly, I am a bit hesitant to ask, but is there a recommendation on which
version of Python to use? I am not interested in which is better; I just
want to know whether Python 2 and 3 are supported equally.

I am using Spark 1.6.1 (Hortonworks distro).

Thanks!
John

-- 

John Aherne
Big Data and SQL Developer

Cell: +1 (303) 809-9718
Email: john.ahe...@justenough.com
Skype: john.aherne.je
Web: www.justenough.com


