[INFO]
[INFO] BUILD SUCCESS
[INFO]
[INFO] Total time: 10:53 min
[INFO] Finished at: 2016-04-26T06:56:49+02:00
[INFO] Final Memory: 107M/890M
[INFO]
Thanks Reynold! I was going to ask about that one as it breaks the build for me.
[info] Compiling 1 Scala source to
/Users/jacek/dev/oss/spark/sql/hivecontext-compatibility/target/scala-2.11/classes...
[error]
Cool!! Thanks for the clarification Mike.
Thanking You
-
Praveen Devarao
Spark Technology Centre
IBM India Software Labs
-
"Courage
another project hosted on our jenkins (e-mission) needs anaconda scipy
upgraded from 0.15.1 to 0.17.0. this will also upgrade a few other
libs, which i've included at the end of this email.
i've spoken w/josh @ databricks and we don't believe that this will
impact the spark builds at all. if
Thanks for your work on this. Can we continue discussing on the JIRA?
On Sun, Apr 24, 2016 at 9:39 AM, Caique Marques
wrote:
> Hello, everyone!
>
> I'm trying to implement association rules in Python. I managed to
> implement generating associations from a frequent element; it works
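For context, a minimal plain-Python sketch of the technique being attempted: deriving association rules (subject to a confidence threshold) from frequent-itemset support counts. The function name and the toy support counts are illustrative only, not MLlib's API.

```python
from itertools import combinations

def association_rules(freq_itemsets, min_confidence):
    """Generate rules (antecedent, consequent, confidence) from frequent
    itemsets given as a dict {frozenset: support_count}."""
    rules = []
    for itemset, support in freq_itemsets.items():
        if len(itemset) < 2:
            continue
        # Try every proper, non-empty subset as the antecedent.
        for r in range(1, len(itemset)):
            for antecedent in combinations(sorted(itemset), r):
                antecedent = frozenset(antecedent)
                # confidence(A -> B) = support(A u B) / support(A)
                confidence = support / freq_itemsets[antecedent]
                if confidence >= min_confidence:
                    rules.append((antecedent, itemset - antecedent, confidence))
    return rules

# Toy support counts (illustrative only)
freq = {
    frozenset({"a"}): 4,
    frozenset({"b"}): 3,
    frozenset({"a", "b"}): 3,
}
rules = association_rules(freq, min_confidence=0.8)
```

Here only {b} -> {a} survives the threshold, since confidence({a} -> {b}) is 3/4 = 0.75.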
Interesting.
bq. details of execution for 10 and 100 scale factor input
Looks like some chart (or image) didn't go through.
FYI
On Mon, Apr 25, 2016 at 12:50 PM, Ali Tootoonchian wrote:
> Caching shuffle RDD before the sort process improves system performance.
> SQL
> planner
Caching the shuffle RDD before the sort process improves system performance. The SQL
planner could be intelligent enough to cache the join, aggregate, or sort DataFrame
before executing the next sort process.
For any sort process, Spark creates two jobs: the first is responsible
for producing range boundaries for
Spark SQL's query planner has always delayed building the RDD, so it has never
needed to eagerly calculate the range boundaries (since Spark 1.0).
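The extra job exists because sortByKey range-partitions the data, which requires sampling the keys up front to choose partition boundaries. A plain-Python sketch of that two-stage idea (illustrative only, not Spark's RangePartitioner implementation):

```python
import random

def range_boundaries(keys, num_partitions, sample_size=100, seed=42):
    """Stage 1: sample the keys and choose (num_partitions - 1) split
    points, as sortByKey's first job does before the actual sort."""
    rng = random.Random(seed)
    sample = sorted(rng.sample(keys, min(sample_size, len(keys))))
    step = len(sample) / num_partitions
    return [sample[int(step * i)] for i in range(1, num_partitions)]

def partition_for(key, boundaries):
    """Stage 2: route each key to its range partition; each partition
    can then be sorted locally to yield a totally ordered result."""
    for i, bound in enumerate(boundaries):
        if key < bound:
            return i
    return len(boundaries)

keys = list(range(1000))
bounds = range_boundaries(keys, num_partitions=4)
```

Because stage 1 must see (a sample of) the data before stage 2 can shuffle it, an RDD sortByKey has to run an eager job, whereas a planner that delays RDD construction can fold this in lazily.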
On Mon, Apr 25, 2016 at 2:04 AM, Praveen Devarao
wrote:
> Thanks, Reynold, for the explanation of why sortByKey invokes a job
>
>
Hey all,
Just wondering if anyone has had issues with this, or if it is expected that
the semantics around memory management are different here.
Thanks
-Pat
On Tue, Apr 19, 2016 at 9:32 AM, Patrick Woody
wrote:
> Hey all,
>
> I had a question about the MemoryStore
If you want to refer back to Kafka based on offset ranges, why not use
createDirectStream?
On Fri, Apr 22, 2016 at 11:49 PM, Renyi Xiong wrote:
> Hi,
>
> Is it possible for the Kafka-receiver-generated WriteAheadLogBackedBlockRDD to
> hold the corresponding Kafka offset range so
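With the direct approach, each batch's RDD carries its Kafka offset ranges, so they can be recorded and later used to re-read exactly the same data. A plain-Python sketch of that bookkeeping (the type and helper names are illustrative, not the Spark/Kafka API):

```python
from collections import namedtuple

# Mirrors the shape of Kafka offset-range metadata
# (topic, partition, from_offset, until_offset); names are illustrative.
OffsetRange = namedtuple(
    "OffsetRange", ["topic", "partition", "from_offset", "until_offset"]
)

def record_batch(offset_log, ranges):
    """Persist the offset ranges that made up one batch."""
    offset_log.append(list(ranges))

def replay_request(offset_log, batch_index):
    """Return the exact ranges needed to re-read that batch from Kafka."""
    return offset_log[batch_index]

offset_log = []
record_batch(offset_log, [OffsetRange("events", 0, 0, 100),
                          OffsetRange("events", 1, 0, 80)])
record_batch(offset_log, [OffsetRange("events", 0, 100, 210)])
```

Because offsets are deterministic positions in the Kafka log, replaying a recorded range yields the same records, which is the property the receiver-based write-ahead-log path does not give you directly.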
Thanks, Reynold, for the explanation of why sortByKey invokes a job.
When you say "DataFrame/Dataset does not have this issue", is it right to
assume you are referring to Spark 2.0, or does the Spark 1.6 DataFrame
already have this built in?
Thanking You