,"pricing")
# merging the sales and pricing files
merg_sales_pricing <- SparkR::sql(hiveContext, "select .")
head(merg_sales_pricing)
Thanks,
Vipul
On 23 November 2015 at 14:52, Jeff Zhang <zjf...@gmail.com> wrote:
> If possible, could you share y
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/SparkR-DataFrame-Out-of-memory-exception-for-very-small-file-tp25435.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
original sales DataFrame.
>
> Yes, a DataFrame is immutable, and every mutation of a DataFrame will
> produce a new DataFrame.
>
>
>
> On Mon, Nov 23, 2015 at 4:44 PM, Vipul Rai <vipulrai8...@gmail.com> wrote:
>
>> Hello Rui,
>>
>> Sorry, what I meant was th
Hi Nikhil,
It seems you have a Kerberos-enabled cluster and it is unable to authenticate
using the ticket.
Please check the Kerberos settings; it could also be due to a Kerberos
version mismatch across nodes.
Thanks,
Vipul
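A quick way to run the checks above on each node is sketched below; `user@EXAMPLE.COM` and the package names are placeholders, not details from this thread:

```shell
#!/bin/sh
# Sketch: Kerberos sanity checks to run on each node of the cluster.
if command -v klist >/dev/null 2>&1; then
  # Inspect the ticket cache; an expired or missing TGT breaks authentication.
  klist || echo "no valid Kerberos ticket cache on this node"
else
  echo "klist not installed on this node"
fi
# Obtain a fresh ticket (interactive; user@EXAMPLE.COM is a placeholder):
#   kinit user@EXAMPLE.COM
# Compare Kerberos versions across nodes (package name varies by distro):
#   rpm -q krb5-libs        # RHEL/CentOS
#   dpkg -l libkrb5-3       # Debian/Ubuntu
```

If the versions differ between nodes, aligning them (and re-running kinit) is usually the first thing to try.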
On Tue 17 Nov, 2015 07:31 Nikhil Gs wrote:
>
Hi Nick/Igor,
Any solution for this?
I am having the same issue, and copying the jar to each executor is not
feasible if we use a lot of jars.
Thanks,
Vipul
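One way to avoid copying jars to each executor is to pass them once via spark-submit's --jars flag, which ships them to the executors automatically. A minimal sketch follows; all paths and the class name are placeholders, and the command is only printed since no cluster is assumed here:

```shell
#!/bin/sh
# Sketch: list dependency jars once with --jars instead of copying them
# to every executor by hand. Paths and class name are placeholders.
JARS="/opt/app/libs/parser.jar,/opt/app/libs/commons.jar"

# Print the command rather than running it, since no cluster is assumed:
echo spark-submit \
  --class com.example.LogApp \
  --master yarn \
  --jars "$JARS" \
  /opt/app/log-app.jar
```

For many jars, listing Maven coordinates with --packages or setting spark.jars in spark-defaults.conf keeps the command line manageable.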
Hi All,
I have a Spark app written in Java, which parses the incoming logs using
headers that are in .xml files. (There are many headers, and the logs come
from 15-20 devices in various formats with different separators.)
I am able to run it in local mode after specifying all the resources and
passing it as