Hi,
I'm wondering how far off base I am with this question:
Is a LogicalPlan in #SparkSQL similar to an RDD in #ApacheSpark Core, in
that both are essentially metadata describing a computation that is
eventually executed to produce records?
What am I missing, if anything? And how imprecise am I in comparing the two?
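For instance (a minimal sketch against the Spark 1.6-era SparkR API, since
that is the version discussed in this thread; the toy data frame and column
name are made up), you can see that a DataFrame carries its plan around as
metadata until an action forces execution:

library(SparkR)
sc <- sparkR.init()
sqlContext <- sparkRSQL.init(sc)
df <- createDataFrame(sqlContext, data.frame(x = 1:3))
df2 <- filter(df, "x > 1")
# Nothing has run yet: df2 only carries a description of the computation.
# explain() prints the logical plan (the "metadata") and the physical plan
# it is compiled into.
explain(df2, extended = TRUE)
# An action such as collect() is what actually executes the plan and
# produces records.
collect(df2)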
I think you can also pass in a zip file using the --files option
(http://spark.apache.org/docs/latest/running-on-yarn.html has some
examples). The files should then be present in the current working
directory of the driver R process.
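For example, something along these lines might work (a minimal sketch; the
zip name, script name, and paths are hypothetical, and it assumes the zip
contains the already-installed package directory):

# Submitted with something like (hypothetical):
#   spark-submit --master yarn --files /local/path/BreakoutDetection.zip my_script.R
# --files ships the zip to the container's working directory but does not
# extract it, so unzip it in R before loading:
unzip("BreakoutDetection.zip", exdir = "rlibs")
library("BreakoutDetection", lib.loc = "rlibs")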
Thanks
Shivaram
On Wed, Aug 17, 2016 at 4:16 AM, Felix Cheung wrote:
Yea. Please create a jira. Thanks!
On Tue, Aug 16, 2016 at 11:06 PM, Jacek Laskowski wrote:
> On Tue, Aug 16, 2016 at 10:51 PM, Yin Huai wrote:
>
> > Do you want to try it?
>
> Yes, indeed! I'd be more than happy. Guide me if you don't mind. Thanks.
>
>
Hi All, we are using Spark 1.6 with the R library. Below is our code, which
loads the third-party library:

library("BreakoutDetection", lib.loc = "hdfs://xx/BreakoutDetection/")

When I try to execute the code, the call fails with:

library("BreakoutDetection", lib.loc = "//xx/BreakoutDetection/") :
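One workaround we could also try (a sketch; the local path is hypothetical
and "xx" stands in for the real HDFS host/path as above): base R's library()
expects lib.loc to be a local filesystem directory, so an hdfs:// URI likely
cannot be read directly. Copying the installed package onto local disk
first, e.g.:

# Pull the installed package directory out of HDFS onto local disk
# (shelling out to the hdfs CLI), then load it from there:
dir.create("/tmp/rlibs", showWarnings = FALSE)
system("hdfs dfs -get hdfs://xx/BreakoutDetection /tmp/rlibs/BreakoutDetection")
library("BreakoutDetection", lib.loc = "/tmp/rlibs")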
On Tue, Aug 16, 2016 at 10:51 PM, Yin Huai wrote:
> Do you want to try it?
Yes, indeed! I'd be more than happy. Guide me if you don't mind. Thanks.
Should I create a JIRA for this?
Jacek