Hi,
Can we run Spark on YARN without installing HDFS?
If yes, where would HADOOP_CONF_DIR point to?
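Yes, YARN itself does not require HDFS, only client-side Hadoop configuration files. A minimal sketch of the setup (the paths and the S3 bucket below are illustrative, not from this thread):

```shell
# HADOOP_CONF_DIR points at the directory holding the *client* configs
# (core-site.xml, yarn-site.xml) -- no NameNode/DataNode needs to run.
export HADOOP_CONF_DIR=/etc/hadoop/conf   # illustrative path

# Without HDFS, Spark still needs some shared filesystem for its staging
# directory: set fs.defaultFS in core-site.xml to e.g. s3a://, or pass it
# at submit time (spark.yarn.stagingDir is a real Spark conf key):
#   spark-submit --master yarn --conf spark.yarn.stagingDir=s3a://bucket/tmp app.py

echo "$HADOOP_CONF_DIR"
```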
Regards,
Hi All,
I have the same issue with one compressed .tgz file of around 3 GB. I increased
the number of nodes without any effect on performance.
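That is expected: a single .gz/.tgz file is not splittable, so Spark reads it into one partition no matter how many nodes you add. A small pure-Python illustration of why (the PySpark calls in the trailing comments are the usual remedy; the file name and partition count there are illustrative):

```python
import gzip
import io
import zlib

# gzip is one sequential stream: decompression must start at the header,
# so a 3 GB .gz/.tgz cannot be split across tasks -- it becomes 1 partition.
data = b"hello world\n" * 1000
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as f:
    f.write(data)
compressed = buf.getvalue()

assert gzip.decompress(compressed) == data   # reading from the start works

try:
    # Starting mid-stream fails: there is no valid gzip header there.
    gzip.decompress(compressed[len(compressed) // 2:])
    splittable = True
except (OSError, EOFError, zlib.error):
    splittable = False
print(splittable)  # False

# The usual PySpark remedy: for a plain .gz, redistribute right after reading,
# e.g. sc.textFile("big.txt.gz").repartition(64). A .tgz (tar inside gzip)
# generally has to be extracted first -- Spark has no tar input format.
```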
Best Regards,
Mostafa Alaa Mohamed,
Technical Expert Big Data,
M: +971506450787
Email: mohamedamost...@etisalat.ae
-Original Message-
From: balaji9058 [mailto:kssb...@gmail.com]
Sent: Wednesday, December 14, 2016 08:32 AM
To: user@spark.apache.org
Subject: Re: Graphx triplet
we specify the rejection directory?
If one is not available, do you recommend opening a JIRA issue?
Hi All,
I have a dataframe containing some data that I need to insert into a Hive table. My
questions:
1- Where will Spark save the rejected rows from the insert statements?
2- Can Spark fail if some rows are rejected?
3- How can I specify the rejection directory?
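As far as I know, open-source Spark has no built-in rejection directory for Hive inserts: a row that breaks the insert fails the task (and usually the job) rather than being set aside. (Databricks' `badRecordsPath` option applies to file *sources*, not inserts.) The usual pattern is to validate first and write the rejects yourself. A sketch of the split in plain Python, with the PySpark equivalent in the trailing comments (the rule, column names, and paths are all made up for illustration):

```python
# Split rows into accepted vs rejected *before* inserting (hypothetical rule).
rows = [
    {"id": 1, "qty": 5},
    {"id": 2, "qty": None},   # would be rejected by the insert
    {"id": 3, "qty": 7},
]

def is_valid(row):
    return row["qty"] is not None

good = [r for r in rows if is_valid(r)]
bad = [r for r in rows if not is_valid(r)]
print(len(good), len(bad))  # 2 1

# PySpark sketch (table and path names are illustrative):
#   good = df.filter("qty IS NOT NULL")
#   bad  = df.subtract(good)
#   good.write.insertInto("target_table")
#   bad.write.parquet("/tmp/rejects")   # your own "rejection directory"
```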
Regards,
that it will generate a few partitions.
However, I can ONLY see 1 partition.
I cached the CassandraRDD and in the UI storage tab it shows ONLY 1
partition.
Any idea, why I am getting 1 partition?
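One common cause (a guess on my part, not something stated in this thread): the spark-cassandra-connector creates roughly one Spark partition per `spark.cassandra.input.split.size_in_mb` of *estimated* table size (default 64 MB in the 1.x connector), so a small table collapses into a single partition. A hedged sketch, with an illustrative value:

```shell
# Smaller split size => more partitions for a small Cassandra table.
# The conf key is from the 1.x spark-cassandra-connector docs; 16 is arbitrary.
SPLIT_MB=16
echo "spark-submit --conf spark.cassandra.input.split.size_in_mb=$SPLIT_MB app.py"
# Alternatively, redistribute after loading: rdd.repartition(32)
```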
Thanks,
Alaa
Thanks Ankur,
But I grabbed some keys from the Spark results and ran "nodetool -h <host>
getendpoints <keyspace> <table> <key>", and it showed the data coming from at least 2 nodes?
Regards,
Alaa
On Thu, Sep 3, 2015 at 12:06 PM, Ankur Srivastava <
ankur.srivast...@gmail.com> wrote:
> Hi Alaa,
to use and I'll dig up the rest.
Regards,
Alaa Ali
Thanks Alex! I'm actually working with views from HBase because I will
never edit the HBase table from Phoenix and I'd hate to accidentally drop
it. I'll have to work out how to create the view with the additional ID
column.
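In case it helps, the Phoenix DDL for a read-only view over an existing HBase table looks roughly like this (the table, column family, and qualifier names below are placeholders, not from this thread):

```sql
-- Quoted names must match the HBase table/family/qualifier case exactly.
CREATE VIEW "my_hbase_table" (
    pk VARCHAR PRIMARY KEY,      -- maps the HBase rowkey
    "cf"."id" UNSIGNED_LONG,     -- the additional ID column
    "cf"."value" VARCHAR
);
```

Dropping the view leaves the underlying HBase table untouched, which is exactly the safety property wanted here.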
Regards,
Alaa Ali
On Fri, Nov 21, 2014 at 5:26 PM, Alex Kamil alex.ka
))
But this doesn't work because the sql expression that the JdbcRDD expects
has to have two ?s to represent the lower and upper bound.
How can I run my query through the JdbcRDD?
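One workaround I have seen for queries with no natural bound column (treat it as a sketch, not the definitive answer): append a dummy predicate that consumes both `?`s, then construct the JdbcRDD with lowerBound = 1, upperBound = 1, and numPartitions = 1 so the predicate is always true. Building the query string (the base query is hypothetical):

```python
# JdbcRDD substitutes per-partition lower/upper bounds into the two '?'s.
# A no-op predicate satisfies that contract without changing the result set.
base = "SELECT * FROM events WHERE category = 'sensor'"   # hypothetical query
sql = base + " AND ? <= 1 AND 1 <= ?"   # with bounds (1, 1) this is always true
print(sql)
assert sql.count("?") == 2   # exactly what JdbcRDD expects
```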
Regards,
Alaa Ali
question, I still haven't tried this out, but I'll actually be
using this with PySpark, so I'm guessing the PhoenixPigConfiguration and
newHadoopRDD can be defined in PySpark as well?
Regards,
Alaa Ali
On Fri, Nov 21, 2014 at 4:34 PM, Josh Mahonin jmaho...@interset.com wrote:
Hi Alaa Ali
Hey freedafeng, I'm exactly where you are. I want the output to show the
rowkey and all column qualifiers that correspond to it. How did you write
HBaseResultToStringConverter to do what you wanted it to do?