You could also use the Joda-Time library, which has a ton of other great
options in it.
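For instance, here is the kind of thing Joda-Time makes easy, as a minimal
Scala sketch (the dates and the pattern are just illustrative):

    // Minimal Joda-Time sketch (dates and pattern are illustrative)
    import org.joda.time.Days
    import org.joda.time.format.DateTimeFormat

    val fmt = DateTimeFormat.forPattern("yyyy-MM-dd")
    val start = fmt.parseDateTime("2014-11-01")
    val end = fmt.parseDateTime("2014-11-10")

    // Calendar-aware day arithmetic, no manual millisecond math
    val delta = Days.daysBetween(start, end).getDays // 9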
J
I have used Oozie for all our workflows with Spark apps, but you will have
to use a Java action as the workflow element. I am interested in anyone's
experience with Luigi and/or any other tools.
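For reference, the Oozie java action just invokes a main class, so the Spark
driver ends up as a plain main along these lines. A minimal sketch (the class
name, app name, and input-path argument are placeholders):

    // Sketch of a Spark driver that an Oozie java action can launch
    // (class name, app name, and input path are placeholders)
    import org.apache.spark.{SparkConf, SparkContext}

    object OozieSparkJob {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("oozie-spark-job"))

        // Stand-in job logic: count lines in the input path passed by Oozie
        val count = sc.textFile(args(0)).count()
        println("line count: " + count)

        sc.stop()
      }
    }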
On Mon, Nov 10, 2014 at 10:34 AM, Adamantios Corais
<adamantios.cor...@gmail.com> wrote:
I have some
Can you be more specific? What versions of Spark, Hive, Hadoop, etc.?
What are you trying to do? What are the issues you are seeing?
J
and over again to fit models, so it's pulled into memory once and then
basically analyzed through the algorithms... other DB systems are reading and
writing to disk repeatedly and are thus slower, such as Mahout (though it's
getting ported over to Spark as well, to compete with MLlib)...
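That reuse is essentially just cache(): pull the dataset into memory once,
then every pass of the fitting loop reads from memory instead of disk. A
minimal sketch, assuming a spark-shell sc (the HDFS path and the update step
are stand-ins for a real algorithm):

    // Minimal sketch: cache() keeps the data in memory across iterations
    // (the path and the "update" are stand-ins for a real algorithm)
    import org.apache.spark.SparkContext._ // Double RDD helpers on Spark 1.x

    val points = sc.textFile("hdfs:///data/points.txt")
      .map(_.split(",").map(_.toDouble))
      .cache() // materialized on the first action, reused after that

    var weight = 0.0
    for (i <- 1 to 10) {
      // each pass reads from memory, not from disk
      weight += points.map(p => p(0)).mean()
    }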
J
is working fine... which leads me to believe that it is a bug in the REPL
for 1.1.
Can anyone else confirm this?
then pushing them out to the cluster and pointing them at the corresponding
dependent jars.
Sorry I can't be of more help!
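One way to do that jar step programmatically is SparkContext.addJar; a
minimal sketch (the path is a placeholder):

    // Minimal sketch: ship a dependent jar out to the executors from
    // the driver (the path is a placeholder)
    sc.addJar("/path/to/app-deps.jar")

addJar makes the jar available to the executors and adds it to the classpath
used for tasks.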
J
BTW, this has always worked for me until we upgraded the cluster to
Spark 1.1.1...
J
Not sure if this is what you are after, but it's based on a moving average
within Spark... I was building an ARIMA model on top of Spark and this
helped me out a lot:
http://stackoverflow.com/questions/23402303/apache-spark-moving-average
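In the same vein, one way to sketch a simple moving average directly on an
RDD is the sliding() helper in mllib (a developer API; the series and window
size here are illustrative):

    // Minimal moving-average sketch using mllib's sliding() helper
    // (developer API; the series and window size are illustrative)
    import org.apache.spark.mllib.rdd.RDDFunctions._

    val series = sc.parallelize(Seq(1.0, 2.0, 3.0, 4.0, 5.0, 6.0))

    // sliding(n) yields overlapping windows of n consecutive elements;
    // averaging each window gives the moving average
    val movingAvg = series.sliding(3).map(w => w.sum / w.length)
    movingAvg.collect() // Array(2.0, 3.0, 4.0, 5.0)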