Yinghau,
Last week you mentioned that you are running your cluster on EC2 and that you
shut the instance(s) down over the weekend.
Quote:
Since I shut down the EC2 instances every night, I thought that using
'master', 'slave1', 'slave2' would save typing after the full hostnames change
with each reboot.
Hi,
I have a MapReduce job which reads from HBase tables. I have configured the
cluster in secure mode, including secure HBase.
I am running the job (a classical MR job) from a custom client under the user
subroto.
The mentioned user has a valid principal in
Hi,
You should consider using the Amazon Virtual Private Cloud (VPC) service to
avoid re-assigning IP addresses every time you restart your instances.
Keep in mind the limitation of 5 Elastic IPs per VPC.
Regards,
Samir
On 14 November 2012 15:34, Kartashov, Andy andy.kartas...@mpac.ca wrote:
Hi,
I am trying to insert into a table in Hive, and I am getting a strange error.
Here is what I do
insert overwrite table hivetable
select
  struct(lpad(ch, 20, ' '), lpad(start, 10, '0'), lpad(strand, 10, ' '),
         lpad(ref, 3, ' ')),
  struct(X, mmm, c_count, t_count, mm)
from atable;
and here is what I get. Any and
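For what it's worth, Hive's lpad(str, len, pad) left-pads str to exactly len characters (and truncates longer strings to len). A quick Python sketch of the same behavior, purely for illustration -- the column values below are made up, only the widths come from the query:

```python
def lpad(s, length, pad):
    """Left-pad s with pad to exactly `length` characters, truncating if
    longer (mirrors Hive's lpad semantics)."""
    s = str(s)
    if len(s) >= length:
        return s[:length]
    # Repeat the pad string and cut it down to the missing width.
    fill = (pad * length)[: length - len(s)]
    return fill + s

print(lpad("chr1", 20, " "))  # right-aligned in a 20-char field
print(lpad(12345, 10, "0"))   # -> '0000012345'
```

Note that because lpad pads with a string, the third argument in the query should be '0' (a string), not the integer 0.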
[Moving thread to cdh-u...@cloudera.org as it seems to be CDH related +
bcc:user@hadoop.a.o]
Hi,
The underlying issue is that Sqoop depends upon HSQLDB 1.x whereas the
default jar bundled with CDH 4 is HSQLDB 2.x. Since this is bundled with
Hadoop for example purposes only, you should be able to
On Wed, Nov 14, 2012 at 4:35 AM, mailinglist
mailingl...@datenvandalismus.org wrote:
Does anyone know if it is possible to set up an active-active NameNode in Hadoop
1.0? Or how can I provide an HA NameNode?
HA is not present in Hadoop 1.0. You'll have to upgrade to a release
on branch 2.0 or
Adding to Andy's points:
To clarify: I think 0.23 does not claim the HA feature.
Also, Hadoop-2 HA follows an Active-Standby model.
Regards,
Uma
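As a rough illustration of the Active-Standby model, HDFS HA in Hadoop 2 is configured around a logical nameservice with two NameNode IDs behind it. A minimal hdfs-site.xml sketch -- the nameservice name, NameNode IDs, and hostnames below are made up for illustration:

```xml
<!-- Logical nameservice clients address instead of a single NameNode host -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<!-- Two NameNodes: one active, one standby -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
```

Only one of the two NameNodes is active at a time; the standby takes over on failover, so there is no active-active mode.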
From: Andy Isaacson [a...@cloudera.com]
Sent: Thursday, November 15, 2012 8:19 AM
To: user@hadoop.apache.org
Subject:
Hi Manoj
For an edge node, you need to include the Hadoop jars and configuration files
on that box like on any other node (use the same version your cluster has), but
there is no need to start any Hadoop daemons.
You need to ensure that this node is able to connect to all machines in the
cluster.
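That last connectivity check can be scripted. A minimal sketch in Python -- the hostnames and ports below are placeholders, not from the thread; substitute your own NameNode/JobTracker/DataNode addresses:

```python
import socket

def reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical cluster endpoints -- replace with your own hosts and ports.
endpoints = [("namenode.example.com", 8020),
             ("datanode1.example.com", 50010)]
for host, port in endpoints:
    status = "OK" if reachable(host, port) else "UNREACHABLE"
    print(f"{host}:{port} {status}")
```

Running this from the edge node quickly shows which cluster machines it cannot reach before you start debugging job submission itself.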