Re: EC2 Hadoop Cluster VS Amazon EMR

2016-03-11 Thread Jonathan Aquilina
When I was testing EMR I only spent around 17 USD, and that was with a
decent-sized EMR cluster.

On 2016-03-11 12:31, José Luis Larroque wrote:

> Hi Jonathan!
> A while ago I was trying to decide which of those options to use. For now I'm
> using Amazon EMR, because it's easier: some things come already configured.
> 
> On the other hand, a benefit of EC2 is that you can use the Free Tier and save
> some money while testing your stuff. Using EC2 instead of EMR is probably also
> cheaper, but I'm not 100% sure of this.
> 
> Bye!
> Jose
> 
> 2016-03-07 6:17 GMT-03:00 Jonathan Aquilina :
> 
>> Good Morning, 
>> 
>> Just some food for thought: as of late I'm noticing people using EC2 to set
>> up their own Hadoop cluster. What is the advantage of using EC2 over Amazon's
>> EMR Hadoop cluster?
>> 
>> Regards,
>> 
>> Jonathan
 

Re: EC2 Hadoop Cluster VS Amazon EMR

2016-03-11 Thread José Luis Larroque
Hi Jonathan!

A while ago I was trying to decide which of those options to use. For now I'm
using Amazon EMR, because it's easier: some things come already configured.

On the other hand, a benefit of EC2 is that you can use the Free Tier and save
some money while testing your stuff. Using EC2 instead of EMR is probably also
cheaper, but I'm not 100% sure of this.
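
For illustration (assuming the AWS CLI is already set up), launching a small
preconfigured Hadoop cluster on EMR can be roughly a single command like this;
the cluster name, release label and instance size/count below are just
placeholders:

# sketch only; needs the default EMR roles (aws emr create-default-roles)
aws emr create-cluster \
  --name "hadoop-test" \
  --release-label emr-4.3.0 \
  --applications Name=Hadoop \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --use-default-roles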

Bye!
Jose



2016-03-07 6:17 GMT-03:00 Jonathan Aquilina :

> Good Morning,
>
> Just some food for thought: as of late I'm noticing people using EC2 to
> set up their own Hadoop cluster. What is the advantage of using EC2 over
> Amazon's EMR Hadoop cluster?
>
>
>
> Regards,
>
> Jonathan
>
>


[Query:] Table creation with column families in Phoenix

2016-03-11 Thread Divya Gehlot
Hi,
I created a table in Phoenix with three column families and inserted the
values as shown below.

Syntax:

> CREATE TABLE TESTCF (MYKEY VARCHAR NOT NULL PRIMARY KEY, CF1.COL1 VARCHAR,
> CF2.COL2 VARCHAR, CF3.COL3 VARCHAR);
> UPSERT INTO TESTCF (MYKEY, CF1.COL1, CF2.COL2, CF3.COL3) VALUES
> ('Key1', 'CF1', 'CF2', 'CF3');
> UPSERT INTO TESTCF (MYKEY, CF1.COL1, CF2.COL2, CF3.COL3) VALUES
> ('Key2', 'CF12', 'CF22', 'CF32');
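
For reference, the rows can also be read back from the Phoenix side with a
plain SELECT in sqlline (output omitted here):

SELECT * FROM TESTCF;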


When I scan the same table in HBase:
hbase(main):010:0> scan "TESTCF"

> ROW   COLUMN+CELL
>  Key1 column=CF1:COL1, timestamp=1457682385805, value=CF1
>  Key1 column=CF1:_0, timestamp=1457682385805, value=
>  Key1 column=CF2:COL2, timestamp=1457682385805, value=CF2
>  Key1 column=CF3:COL3, timestamp=1457682385805, value=CF3
>  Key2 column=CF1:COL1, timestamp=1457682426396, value=CF12
>  Key2 column=CF1:_0, timestamp=1457682426396, value=
>  Key2 column=CF2:COL2, timestamp=1457682426396, value=CF22
>  Key2 column=CF3:COL3, timestamp=1457682426396, value=CF32
> 2 row(s) in 0.0260 seconds


My question is: why am I getting one extra column, CF1:_0, with no value in
each row?

Can anybody explain this to me?
I would really appreciate the help.

Thanks,
Divya


RE: NameNode files are created in a different directory

2016-03-11 Thread Brahma Reddy Battula
Hi Vinodh,

Since the property names are wrong, the default values are being used. Correct
them as follows:

fs.namenode.name.dir  should be dfs.namenode.name.dir
fs.datanode.data.dir  should be dfs.datanode.data.dir
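
With the paths from your mail, hdfs-site.xml would then look roughly like this
(a sketch; keep your own directories):

<configuration>
  <property>
    <!-- local directory where the NameNode stores its metadata (fsimage/edits) -->
    <name>dfs.namenode.name.dir</name>
    <value>C:\name</value>
  </property>
  <property>
    <!-- local directory where the DataNode stores block data -->
    <name>dfs.datanode.data.dir</name>
    <value>C:\data</value>
  </property>
</configuration>

After correcting the names, re-run "hdfs namenode -format" so the new name
directory gets initialized.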


Reference:
https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

From: Vinodh Nagaraj [mailto:vinodh.db...@gmail.com]
Sent: 11 March 2016 13:54
To: user@hadoop.apache.org
Subject: NameNode files are created in a different directory

Hi,

I configured Hadoop 2.7.1 on Windows 7 (32-bit) on the C drive.

I tried to format using "hdfs namenode -format", but the NameNode files are
created at C:\tmp\hadoop-user\dfs\name, even though the property in
hdfs-site.xml is set to "C:\name".

hdfs namenode -format

-10.219.149.100-1457674982841
16/03/11 11:13:03 INFO common.Storage: Storage directory 
\tmp\hadoop-487174\dfs\name has been successfully formatted.

core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost/</value>
  </property>
</configuration>

hdfs-site.xml

<configuration>
  <property>
    <name>fs.namenode.name.dir</name>
    <value>C:\name</value>
  </property>
  <property>
    <name>fs.datanode.data.dir</name>
    <value>C:\data</value>
  </property>
</configuration>

What is wrong here? Please help me.

Thanks & Regards,
Vinodh.N