RE: S3/EMR Hive: Load contents of a single file

2013-03-27 Thread Tony Burton

No problem Keith - it was a worthwhile exercise for me to go back and double 
check everything was working as expected.




-----Original Message-----
From: Keith Wiley [mailto:kwi...@keithwiley.com] 
Sent: 27 March 2013 17:03
To: user@hive.apache.org
Subject: Re: S3/EMR Hive: Load contents of a single file

Okay, I also saw your previous response which analyzed queries into two tables 
built around two files in the same directory.  I guess I was simply wrong in my 
understanding that a Hive table is fundamentally associated with a directory 
instead of a file.  Turns out, it can be either one.  A directory table uses 
all files in the directory while a file table uses one specific file and 
properly avoids sibling files.  My bad.

Thanks for the careful analysis and clarification.  TIL!

Cheers!

On Mar 27, 2013, at 02:58 , Tony Burton wrote:

> A bit more info - do an extended description of the table:
>  
> $ desc extended gsrc1;
>  
> And the "location" field is "location:s3://mybucket/path/to/data/src1.txt"
>  
> Do the same on a table created with a location pointing at the directory and 
> the same info gives (not surprisingly) "location:s3://mybucket/path/to/data/"
> 


Keith Wiley   kwi...@keithwiley.com   keithwiley.com   music.keithwiley.com

"I used to be with it, but then they changed what it was.  Now, what I'm with 
isn't it, and what's it seems weird and scary to me."
   --  Abe (Grandpa) Simpson 




Inbound Email has been scanned for viruses and SPAM 

**
Please consider the environment before printing this email or attachments

This email and any attachments are confidential, protected by copyright and may 
be legally privileged.  If you are not the intended recipient, then the 
dissemination or copying of this email is prohibited. If you have received this 
in error, please notify the sender by replying by email and then delete the 
email completely from your system.  Neither Sporting Index nor the sender 
accepts responsibility for any virus, or any other defect which might affect 
any computer or IT system into which the email is received and/or opened.  It 
is the responsibility of the recipient to scan the email and no responsibility 
is accepted for any loss or damage arising in any way from receipt or use of 
this email.  Sporting Index Ltd is a company registered in England and Wales 
with company number 2636842, whose registered office is at Gateway House, 
Milverton Street, London, SE11 4AP.  Sporting Index Ltd is authorised and 
regulated by the UK Financial Services Authority (reg. no. 150404) and Gambling 
Commission (reg. no. 000-027343-R-308898-001).  Any financial promotion 
contained herein has been issued
and approved by Sporting Index Ltd.

Outbound email has been scanned for viruses and SPAM

Re: S3/EMR Hive: Load contents of a single file

2013-03-27 Thread Keith Wiley
Okay, I also saw your previous response which analyzed queries into two tables 
built around two files in the same directory.  I guess I was simply wrong in my 
understanding that a Hive table is fundamentally associated with a directory 
instead of a file.  Turns out, it can be either one.  A directory table uses 
all files in the directory while a file table uses one specific file and 
properly avoids sibling files.  My bad.

Thanks for the careful analysis and clarification.  TIL!

Cheers!

On Mar 27, 2013, at 02:58 , Tony Burton wrote:

> A bit more info - do an extended description of the table:
>  
> $ desc extended gsrc1;
>  
> And the “location” field is “location:s3://mybucket/path/to/data/src1.txt”
>  
> Do the same on a table created with a location pointing at the directory and 
> the same info gives (not surprisingly) “location:s3://mybucket/path/to/data/”
> 


Keith Wiley   kwi...@keithwiley.com   keithwiley.com   music.keithwiley.com

"I used to be with it, but then they changed what it was.  Now, what I'm with
isn't it, and what's it seems weird and scary to me."
   --  Abe (Grandpa) Simpson




RE: S3/EMR Hive: Load contents of a single file

2013-03-27 Thread Tony Burton
A bit more info - do an extended description of the table:

$ desc extended gsrc1;

And the "location" field is "location:s3://mybucket/path/to/data/src1.txt"

Do the same on a table created with a location pointing at the directory and 
the same info gives (not surprisingly) "location:s3://mybucket/path/to/data/"





From: Tony Burton [mailto:tbur...@sportingindex.com]
Sent: 27 March 2013 08:46
To: 'user@hive.apache.org'
Subject: RE: S3/EMR Hive: Load contents of a single file

Thanks for the reply Keith.

> you could have dispensed with the additional "alter table" business and 
> simply created the original table around the directory in the first place

Yep, but I have multiple files in that directory and wanted to create a table 
based upon one file per table.

> Do you know for certain that it isn't using other files also in that 
> directory as part of the same table
> or if it is currently empty, that if you add a new file to the directory 
> after creating the table in your
> described fashion, it doesn't immediately become visible as part of the table?

I've got two files in my s3://mybucket/path/to/data/ directory, 
s3://mybucket/path/to/data/src1.txt and s3://mybucket/path/to/data/src2.txt - 
both contain lists of ~-separated date/count pairs, eg 20130101~12345. Both 
contain data for just the month of February this year.
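[Editor's note: the "~"-separated layout described above (one date/count pair per line, e.g. 20130101~12345) can be mimicked with a short Python sketch for local experimentation; the function and field names are illustrative, not part of the original thread.]

```python
# Parse "~"-separated date/count rows like those in src1.txt/src2.txt,
# e.g. "20130101~12345". Names here are illustrative only.

def parse_rows(lines):
    rows = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        gdate, c = line.split("~")
        rows.append((gdate, int(c)))
    return rows

sample = ["20130201~5153", "20130228~7051"]
print(parse_rows(sample))  # [('20130201', 5153), ('20130228', 7051)]
```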

Create two tables:

$ create external table gsrc1 (gdate string, c int) row format delimited fields 
terminated by '~' stored as textfile;
$ alter table gsrc1 set location 's3://spinmetrics/global/src1.txt';
$ create external table gsrc2 (gdate string, c int) row format delimited fields 
terminated by '~' stored as textfile;
$ alter table gsrc2 set location 's3://spinmetrics/global/src2.txt';

Count(*) on each table:

$ select count(*) from gsrc1;
28
$ select count(*) from gsrc2;
28

Ok, but both tables could be pointing at the same data. Check max, min and 
first/last entry from both tables:

$ select min(c), max(c) from gsrc1;
2935 23130
$ select min(c), max(c) from gsrc2;
865953 2768868

$ select * from gsrc1 where gdate="20130201"
20130201 5153
$ select * from gsrc1 where gdate="20130228"
20130228 7051
$ select * from gsrc2 where gdate="20130201"
20130201 1472017
$ select * from gsrc2 where gdate="20130228"
20130228 1323241

And without copying in the whole data set I am 100% confident that these values 
match the contents of the individual files in s3. Maybe other readers could try 
a similar exercise and present their results? Are there other tests I could try 
to further verify my findings?

Tony
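[Editor's note: one further test in the spirit Tony asks for is to replay the same checks (count(*), min/max, per-date lookup) outside Hive, straight over a file's raw lines, and compare results. A minimal Python sketch follows; the sample rows are invented for illustration, not the real src1.txt data.]

```python
# Recompute the Hive verification queries directly over "~"-separated
# lines so the two sets of results can be compared. Sample data invented.

def file_stats(lines):
    rows = [tuple(l.strip().split("~")) for l in lines if l.strip()]
    counts = [int(c) for _, c in rows]
    return {
        "count": len(rows),                       # select count(*)
        "min": min(counts),                       # select min(c)
        "max": max(counts),                       # select max(c)
        "by_date": {d: int(c) for d, c in rows},  # where gdate = ...
    }

feb = ["20130201~5153", "20130215~2935", "20130228~7051"]
s = file_stats(feb)
print(s["count"], s["min"], s["max"], s["by_date"]["20130228"])  # 3 2935 7051 7051
```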





-----Original Message-----
From: Keith Wiley [mailto:kwi...@keithwiley.com]
Sent: 26 March 2013 19:40
To: user@hive.apache.org
Subject: Re: S3/EMR Hive: Load contents of a single file

Are you sure this is doing what you think it's doing? Since Hive associates 
tables with directories (well external tables at least, I'm not very familiar 
with internal tables), my suspicion is that even if your approach described 
below works, what Hive actually did was use s3://mybucket/path/to/data/ as the 
table location...in which case you could have dispensed with the additional 
"alter table" business and simply created the original table around the 
directory in the first place...or I could be completely wrong. Do you know for 
certain that it isn't using other files also in that directory as part of the 
same table...or if it is currently empty, that if you add a new file to the 
directory after creating the table in your described fashion, it doesn't 
immediately become visible as part of the table? I eagerly await clarification.

On Mar 26, 2013, at 10:39 , Tony Burton wrote:

>
> Thanks for the quick reply Sanjay.
>
> ALTER TABLE is the key, but slightly different to your suggestion. I create 
> the table as before, but don't specify location:
>
> $ create external table myData (str1 string, str2 string, count1 int)
> partitioned by  row format  stored as textfile;
>
> Then use ALTER TABLE like this:
>
> $ ALTER TABLE myData SET LOCATION '
> s3://mybucket/path/to/data/src1.txt ';
>
> Bingo, I can now run queries with myData in the same way I can when the 
> LOCATION is a directory. Cool!
>
> Tony
>
>
>
>
>
>
>
> From: Sanjay Subramanian [mailto:sanjay.subraman...@wizecommerce.com]
> Sent: 26 March 2013 17:22
> To: user@hive.apache.org
> Subject: Re: S3/EMR Hive: Load contents of a single file
>
> Hi Tony
>
> Can u create the table without any location.
>
> After that you could do an ALTER TABLE add location and partition
>
> ALTER TABLE myData ADD PARTITION (partitionColumn1='$value1&

RE: S3/EMR Hive: Load contents of a single file

2013-03-27 Thread Tony Burton
Thanks for the reply Keith.

> you could have dispensed with the additional "alter table" business and 
> simply created the original table around the directory in the first place

Yep, but I have multiple files in that directory and wanted to create a table 
based upon one file per table.

> Do you know for certain that it isn't using other files also in that 
> directory as part of the same table
> or if it is currently empty, that if you add a new file to the directory 
> after creating the table in your
> described fashion, it doesn't immediately become visible as part of the table?

I've got two files in my s3://mybucket/path/to/data/ directory, 
s3://mybucket/path/to/data/src1.txt and s3://mybucket/path/to/data/src2.txt - 
both contain lists of ~-separated date/count pairs, eg 20130101~12345. Both 
contain data for just the month of February this year.

Create two tables: 

$ create external table gsrc1 (gdate string, c int) row format delimited fields 
terminated by '~' stored as textfile;
$ alter table gsrc1 set location 's3://spinmetrics/global/src1.txt';
$ create external table gsrc2 (gdate string, c int) row format delimited fields 
terminated by '~' stored as textfile;
$ alter table gsrc2 set location 's3://spinmetrics/global/src2.txt';

Count(*) on each table:

$ select count(*) from gsrc1;
28
$ select count(*) from gsrc2;
28

Ok, but both tables could be pointing at the same data. Check max, min and 
first/last entry from both tables:

$ select min(c), max(c) from gsrc1;
2935  23130
$ select min(c), max(c) from gsrc2;
865953  2768868

$ select * from gsrc1 where gdate="20130201"
20130201  5153
$ select * from gsrc1 where gdate="20130228"
20130228  7051
$ select * from gsrc2 where gdate="20130201"
20130201  1472017
$ select * from gsrc2 where gdate="20130228"
20130228  1323241

And without copying in the whole data set I am 100% confident that these values 
match the contents of the individual files in s3. Maybe other readers could try 
a similar exercise and present their results? Are there other tests I could try 
to further verify my findings?

Tony





-----Original Message-----
From: Keith Wiley [mailto:kwi...@keithwiley.com] 
Sent: 26 March 2013 19:40
To: user@hive.apache.org
Subject: Re: S3/EMR Hive: Load contents of a single file

Are you sure this is doing what you think it's doing?  Since Hive associates 
tables with directories (well external tables at least, I'm not very familiar 
with internal tables), my suspicion is that even if your approach described 
below works, what Hive actually did was use s3://mybucket/path/to/data/ as the 
table location...in which case you could have dispensed with the additional 
"alter table" business and simply created the original table around the 
directory in the first place...or I could be completely wrong.  Do you know for 
certain that it isn't using other files also in that directory as part of the 
same table...or if it is currently empty, that if you add a new file to the 
directory after creating the table in your described fashion, it doesn't 
immediately become visible as part of the table?  I eagerly await clarification.

On Mar 26, 2013, at 10:39 , Tony Burton wrote:

>  
> Thanks for the quick reply Sanjay.
>  
> ALTER TABLE is the key, but slightly different to your suggestion. I create 
> the table as before, but don't specify location:
>  
> $ create external table myData (str1 string, str2 string, count1 int) 
> partitioned by  row format  stored as textfile;
>  
> Then use ALTER TABLE like this:
>  
> $ ALTER TABLE myData SET LOCATION ' 
> s3://mybucket/path/to/data/src1.txt ';
>  
> Bingo, I can now run queries with myData in the same way I can when the 
> LOCATION is a directory. Cool!
>  
> Tony
>  
>  
>  
>  
>  
>  
>  
> From: Sanjay Subramanian [mailto:sanjay.subraman...@wizecommerce.com]
> Sent: 26 March 2013 17:22
> To: user@hive.apache.org
> Subject: Re: S3/EMR Hive: Load contents of a single file
>  
> Hi Tony
>  
> Can u create the table without any location. 
>  
> After that you could do an ALTER TABLE add location and partition
>  
> ALTER TABLE myData ADD PARTITION (partitionColumn1='$value1' , 
> partitionColumn2='$value2') LOCATION '/path/to/your/directory/in/hdfs';"
> 
> 
> An example Without Partitions
> -
> ALTER TABLE myData SET LOCATION 
> 'hdfs://10.48.97.97:9000/path/to/your/data/directory/in/hdfs';"
> 
> 
> While specifying location, you have to point to a directory. You cannot point 
> to a file (IMHO).
>  
> Hope that helps
>  
> sanjay
>  
> From: Tony Burton 
> Repl

Re: S3/EMR Hive: Load contents of a single file

2013-03-26 Thread Keith Wiley
Are you sure this is doing what you think it's doing?  Since Hive associates 
tables with directories (well external tables at least, I'm not very familiar 
with internal tables), my suspicion is that even if your approach described 
below works, what Hive actually did was use s3://mybucket/path/to/data/ as the 
table location...in which case you could have dispensed with the additional 
"alter table" business and simply created the original table around the 
directory in the first place...or I could be completely wrong.  Do you know for 
certain that it isn't using other files also in that directory as part of the 
same table...or if it is currently empty, that if you add a new file to the 
directory after creating the table in your described fashion, it doesn't 
immediately become visible as part of the table?  I eagerly await clarification.

On Mar 26, 2013, at 10:39 , Tony Burton wrote:

>  
> Thanks for the quick reply Sanjay.
>  
> ALTER TABLE is the key, but slightly different to your suggestion. I create 
> the table as before, but don’t specify location:
>  
> $ create external table myData (str1 string, str2 string, count1 int) 
> partitioned by  row format  stored as textfile;
>  
> Then use ALTER TABLE like this:
>  
> $ ALTER TABLE myData SET LOCATION ' s3://mybucket/path/to/data/src1.txt ';
>  
> Bingo, I can now run queries with myData in the same way I can when the 
> LOCATION is a directory. Cool!
>  
> Tony
>  
>  
>  
>  
>  
>  
>  
> From: Sanjay Subramanian [mailto:sanjay.subraman...@wizecommerce.com] 
> Sent: 26 March 2013 17:22
> To: user@hive.apache.org
> Subject: Re: S3/EMR Hive: Load contents of a single file
>  
> Hi Tony 
>  
> Can u create the table without any location. 
>  
> After that you could do an ALTER TABLE add location and partition
>  
> ALTER TABLE myData ADD PARTITION (partitionColumn1='$value1' , 
> partitionColumn2='$value2') LOCATION '/path/to/your/directory/in/hdfs';"
> 
> 
> An example Without Partitions
> -
> ALTER TABLE myData SET LOCATION 
> 'hdfs://10.48.97.97:9000/path/to/your/data/directory/in/hdfs';"
> 
> 
> While specifying location, you have to point to a directory. You cannot point 
> to a file (IMHO).
>  
> Hope that helps
>  
> sanjay
>  
> From: Tony Burton 
> Reply-To: "user@hive.apache.org" 
> Date: Tuesday, March 26, 2013 10:11 AM
> To: "user@hive.apache.org" 
> Subject: S3/EMR Hive: Load contents of a single file
>  
> Hi list,
>  
> I've been using hive to perform queries on data hosted on AWS S3, and my 
> tables point at data by specifying the directory in which the data is stored, 
> eg
>  
> $ create external table myData (str1 string, str2 string, count1 int) 
> partitioned by  row format  stored as textfile location 
> 's3://mybucket/path/to/data';
>  
> where s3://mybucket/path/to/data is the "directory" that contains the files 
> I'm interested in. My use case now is to create a table with data pointing to 
> a specific file in a directory:
>  
> $ create external table myData (str1 string, str2 string, count1 int) 
> partitioned by  row format  stored as textfile location 
> 's3://mybucket/path/to/data/src1.txt';
>
> and I get the error: "FAILED: Error in metadata: MetaException(message:Got 
> exception: java.io.IOException Can't make directory for path 
> 's3://spinmetrics/global/counter_Fixture.txt' since it is a file.)". Ok, let's 
> try to create the table without specifying the data source:
>  
> $ create external table myData (str1 string, str2 string, count1 int) 
> partitioned by  row format  stored as textfile
>  
> Ok, no problem. Now let's load the data
>  
> $ LOAD DATA INPATH 's3://mybucket/path/to/data/src1.txt' INTO TABLE myData;
>  
> (referring to https://cwiki.apache.org/Hive/languagemanual-dml.html - 
> "...filepath can refer to a file (in which case hive will move the file into 
> the table)")
>  
> Error message is: " FAILED: Error in semantic analysis: Line 1:17 Path is not 
> legal ''s3://mybucket/path/to/data/src1.txt": Move from: s3:// 
> mybucket/path/to/data/src1.txt 
> to:hdfs://10.48.97.97:9000/mnt/hive_081/warehouse/gfix is not valid. Please 
> check that values for params "default.fs.name" and 
> "hive.metastore.warehouse.dir" do not conflict."
>  
> So I check my default.fs.name and hive.metastore.warehouse.dir (which have 
> never caused problems before):
>  
> $ set fs.default.name;
> fs.default.name=hdfs://10.48.97.9

Re: S3/EMR Hive: Load contents of a single file

2013-03-26 Thread Sanjay Subramanian
Awesome ! Keep the bees happy ! Happy Hiving !

From: Tony Burton <tbur...@sportingindex.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Tuesday, March 26, 2013 10:39 AM
To: "user@hive.apache.org" <user@hive.apache.org>
Subject: RE: S3/EMR Hive: Load contents of a single file


Thanks for the quick reply Sanjay.

ALTER TABLE is the key, but slightly different to your suggestion. I create the 
table as before, but don’t specify location:

$ create external table myData (str1 string, str2 string, count1 int) 
partitioned by  row format  stored as textfile;

Then use ALTER TABLE like this:

$ ALTER TABLE myData SET LOCATION ' s3://mybucket/path/to/data/src1.txt ';

Bingo, I can now run queries with myData in the same way I can when the 
LOCATION is a directory. Cool!

Tony







From: Sanjay Subramanian [mailto:sanjay.subraman...@wizecommerce.com]
Sent: 26 March 2013 17:22
To: user@hive.apache.org
Subject: Re: S3/EMR Hive: Load contents of a single file

Hi Tony

Can u create the table without any location.

After that you could do an ALTER TABLE add location and partition

ALTER TABLE myData ADD PARTITION (partitionColumn1='$value1' , 
partitionColumn2='$value2') LOCATION '/path/to/your/directory/in/hdfs';"


An example Without Partitions
-
ALTER TABLE myData SET LOCATION 
'hdfs://10.48.97.97:9000/path/to/your/data/directory/in/hdfs';"


While specifying location, you have to point to a directory. You cannot point 
to a file (IMHO).

Hope that helps

sanjay

From: Tony Burton <tbur...@sportingindex.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Tuesday, March 26, 2013 10:11 AM
To: "user@hive.apache.org" <user@hive.apache.org>
Subject: S3/EMR Hive: Load contents of a single file

Hi list,

I've been using hive to perform queries on data hosted on AWS S3, and my tables 
point at data by specifying the directory in which the data is stored, eg

$ create external table myData (str1 string, str2 string, count1 int) 
partitioned by  row format  stored as textfile location 
's3://mybucket/path/to/data';

where s3://mybucket/path/to/data is the "directory" that contains the files I'm 
interested in. My use case now is to create a table with data pointing to a 
specific file in a directory:

$ create external table myData (str1 string, str2 string, count1 int) 
partitioned by  row format  stored as textfile location 
's3://mybucket/path/to/data/src1.txt';

and I get the error: "FAILED: Error in metadata: MetaException(message:Got 
exception: java.io.IOException Can't make directory for path 
's3://spinmetrics/global/counter_Fixture.txt' since it is a file.)". Ok, let's 
try to create the table without specifying the data source:

$ create external table myData (str1 string, str2 string, count1 int) 
partitioned by  row format  stored as textfile

Ok, no problem. Now let's load the data

$ LOAD DATA INPATH 's3://mybucket/path/to/data/src1.txt' INTO TABLE myData;

(referring to https://cwiki.apache.org/Hive/languagemanual-dml.html - 
"...filepath can refer to a file (in which case hive will move the file into 
the table)")

Error message is: " FAILED: Error in semantic analysis: Line 1:17 Path is not 
legal ''s3://mybucket/path/to/data/src1.txt": Move from: s3:// 
mybucket/path/to/data/src1.txt to: 
hdfs://10.48.97.97:9000/mnt/hive_081/warehouse/gfix is not valid. Please check 
that values for params "default.fs.name" and "hive.metastore.warehouse.dir" do 
not conflict."

So I check my default.fs.name and hive.metastore.warehouse.dir (which have 
never caused problems before):

$ set fs.default.name;
fs.default.name=hdfs://10.48.97.97:9000
$ set hive.metastore.warehouse.dir;
hive.metastore.warehouse.dir=/mnt/hive_081/warehouse

Clearly different, but which is correct? Is there an easier way to load a 
single file into a hive table? Or should I just put each file in a directory 
and proceed as before?
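
[Editor's note: the "each file in a directory" fallback asked about here can be scripted. Below is a hedged Python sketch that stages each data file into its own subdirectory on a local filesystem, as a stand-in for the real s3://mybucket/path/to/data/ layout; the helper and directory names are illustrative.]

```python
# Stage each data file into its own directory so a per-file external
# table can point at a directory in the usual way. Local stand-in for
# the S3 layout in the thread; all names are illustrative.
import os
import shutil
import tempfile

def stage_per_file(src_dir, dest_root):
    """Copy each file in src_dir into its own dest_root/<stem>/ directory."""
    staged = {}
    for name in sorted(os.listdir(src_dir)):
        subdir = os.path.join(dest_root, os.path.splitext(name)[0])
        os.makedirs(subdir, exist_ok=True)
        shutil.copy(os.path.join(src_dir, name), subdir)
        staged[name] = subdir
    return staged

src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
for fname in ("src1.txt", "src2.txt"):
    with open(os.path.join(src, fname), "w") as f:
        f.write("20130201~5153\n")

staged = stage_per_file(src, dst)
print(sorted(staged))  # ['src1.txt', 'src2.txt']
```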

Thanks!

Tony







Tony Burton
Senior Software Engineer
e: tbur...@sportingindex.com




Re: S3/EMR Hive: Load contents of a single file

2013-03-26 Thread Ramki Palle
First of all, you cannot point a table to a file. Each table has a
corresponding directory. If you want the table to contain the data from
only one file, simply copy that one file into the directory. The table does
not need to know the name of the file; it only matters whether the
structure of the data in the file matches the table structure.

When you query the table, it reads data from whatever files are in the
corresponding directory.
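
[Editor's note: the directory-backed model described here, where the table is the union of every file in its directory regardless of file names, can be sketched in a few lines of Python as a local-filesystem stand-in, not Hive itself. Data below is invented, using the thread's "~"-separated format.]

```python
# Local-filesystem sketch: a directory-backed "table" reads every file
# in the directory, whatever the files are named. Data is invented.
import os
import tempfile

def read_table(directory):
    rows = []
    for name in sorted(os.listdir(directory)):
        with open(os.path.join(directory, name)) as f:
            for line in f:
                if line.strip():
                    gdate, c = line.strip().split("~")
                    rows.append((gdate, int(c)))
    return rows

d = tempfile.mkdtemp()
for name, line in [("src1.txt", "20130201~5153\n"),
                   ("src2.txt", "20130201~1472017\n")]:
    with open(os.path.join(d, name), "w") as f:
        f.write(line)

rows = read_table(d)
print(len(rows))  # 2: one row contributed by each file in the directory
```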

Regards,
Ramki.


On Tue, Mar 26, 2013 at 10:11 AM, Tony Burton wrote:

> Hi list,
>
> I've been using hive to perform queries on data hosted on AWS S3, and my
> tables point at data by specifying the directory in which the data is
> stored, eg
>
> $ create external table myData (str1 string, str2 string, count1 int)
> partitioned by  row format  stored as textfile location
> 's3://mybucket/path/to/data';
>
> where s3://mybucket/path/to/data is the "directory" that contains the
> files I'm interested in. My use case now is to create a table with data
> pointing to a specific file in a directory:
>
> $ create external table myData (str1 string, str2 string, count1 int)
> partitioned by  row format  stored as textfile location
> 's3://mybucket/path/to/data/src1.txt';
>
> and I get the error: "FAILED: Error in metadata: MetaException(message:Got
> exception: java.io.IOException Can't make directory for path
> 's3://spinmetrics/global/counter_Fixture.txt' since it is a file.)". Ok,
> let's try to create the table without specifying the data source:
>
> $ create external table myData (str1 string, str2 string, count1 int)
> partitioned by  row format  stored as textfile
>
> Ok, no problem. Now let's load the data
>
> $ LOAD DATA INPATH 's3://mybucket/path/to/data/src1.txt' INTO TABLE myData;
>
> (referring to https://cwiki.apache.org/Hive/languagemanual-dml.html -
> "...filepath can refer to a file (in which case hive will move the file
> into the table)")
>
> Error message is: "FAILED: Error in semantic analysis: Line 1:17 Path is
> not legal ''s3://mybucket/path/to/data/src1.txt": Move from:
> s3://mybucket/path/to/data/src1.txt to:
> hdfs://10.48.97.97:9000/mnt/hive_081/warehouse/gfix is not valid. Please
> check that values for params "default.fs.name" and
> "hive.metastore.warehouse.dir" do not conflict."
>
> So I check my default.fs.name and hive.metastore.warehouse.dir (which
> have never caused problems before):
>
> $ set fs.default.name;
> fs.default.name=hdfs://10.48.97.97:9000
> $ set hive.metastore.warehouse.dir;
> hive.metastore.warehouse.dir=/mnt/hive_081/warehouse
>
> Clearly different, but which is correct? Is there an easier way to load a
> single file into a hive table? Or should I just put each file in a
> directory and proceed as before?
>
> Thanks!
>
> Tony
>
> Tony Burton
> Senior Software Engineer
> e: tbur...@sportingindex.com
>

RE: S3/EMR Hive: Load contents of a single file

2013-03-26 Thread Tony Burton

Thanks for the quick reply Sanjay.

ALTER TABLE is the key, but slightly different to your suggestion. I create the 
table as before, but don't specify location:

$ create external table myData (str1 string, str2 string, count1 int) 
partitioned by  row format  stored as textfile;

Then use ALTER TABLE like this:

$ ALTER TABLE myData SET LOCATION ' s3://mybucket/path/to/data/src1.txt ';

Bingo, I can now run queries with myData in the same way I can when the 
LOCATION is a directory. Cool!

Tony







From: Sanjay Subramanian [mailto:sanjay.subraman...@wizecommerce.com]
Sent: 26 March 2013 17:22
To: user@hive.apache.org
Subject: Re: S3/EMR Hive: Load contents of a single file

Hi Tony

Can u create the table without any location.

After that you could do an ALTER TABLE add location and partition

ALTER TABLE myData ADD PARTITION (partitionColumn1='$value1' , 
partitionColumn2='$value2') LOCATION '/path/to/your/directory/in/hdfs';"


An example Without Partitions
-
ALTER TABLE myData SET LOCATION 
'hdfs://10.48.97.97:9000/path/to/your/data/directory/in/hdfs';"


While specifying location, you have to point to a directory. You cannot point 
to a file (IMHO).

Hope that helps

sanjay

From: Tony Burton <tbur...@sportingindex.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Tuesday, March 26, 2013 10:11 AM
To: "user@hive.apache.org" <user@hive.apache.org>
Subject: S3/EMR Hive: Load contents of a single file

Hi list,

I've been using hive to perform queries on data hosted on AWS S3, and my tables 
point at data by specifying the directory in which the data is stored, eg

$ create external table myData (str1 string, str2 string, count1 int) 
partitioned by  row format  stored as textfile location 
's3://mybucket/path/to/data';

where s3://mybucket/path/to/data is the "directory" that contains the files I'm 
interested in. My use case now is to create a table with data pointing to a 
specific file in a directory:

$ create external table myData (str1 string, str2 string, count1 int) 
partitioned by  row format  stored as textfile location 
's3://mybucket/path/to/data/src1.txt';

and I get the error: "FAILED: Error in metadata: MetaException(message:Got 
exception: java.io.IOException Can't make directory for path 
's3://spinmetrics/global/counter_Fixture.txt' since it is a file.)". Ok, let's 
try to create the table without specifying the data source:

$ create external table myData (str1 string, str2 string, count1 int) 
partitioned by  row format  stored as textfile

Ok, no problem. Now let's load the data

$ LOAD DATA INPATH 's3://mybucket/path/to/data/src1.txt' INTO TABLE myData;

(referring to https://cwiki.apache.org/Hive/languagemanual-dml.html - 
"...filepath can refer to a file (in which case hive will move the file into 
the table)")

Error message is: " FAILED: Error in semantic analysis: Line 1:17 Path is not 
legal ''s3://mybucket/path/to/data/src1.txt": Move from: s3:// 
mybucket/path/to/data/src1.txt to: 
hdfs://10.48.97.97:9000/mnt/hive_081/warehouse/gfix is not valid. Please check 
that values for params "default.fs.name" and "hive.metastore.warehouse.dir" do 
not conflict."

So I check my default.fs.name and hive.metastore.warehouse.dir (which have 
never caused problems before):

$ set fs.default.name;
fs.default.name=hdfs://10.48.97.97:9000
$ set hive.metastore.warehouse.dir;
hive.metastore.warehouse.dir=/mnt/hive_081/warehouse

Clearly different, but which is correct? Is there an easier way to load a 
single file into a hive table? Or should I just put each file in a directory 
and proceed as before?

Thanks!

Tony







Tony Burton
Senior Software Engineer
e: tbur...@sportingindex.com




Re: S3/EMR Hive: Load contents of a single file

2013-03-26 Thread Sanjay Subramanian
Hi Tony

Can u create the table without any location.

After that you could do an ALTER TABLE add location and partition

ALTER TABLE myData ADD PARTITION (partitionColumn1='$value1' , 
partitionColumn2='$value2') LOCATION '/path/to/your/directory/in/hdfs';

An example without partitions
-
ALTER TABLE myData SET LOCATION 
'hdfs://10.48.97.97:9000/path/to/your/data/directory/in/hdfs';

While specifying location, you have to point to a directory. You cannot point 
to a file (IMHO).
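Putting Sanjay's suggestion together end to end might look like the sketch below (column names reused from Tony's example; the delimited row format and the HDFS path are illustrative assumptions):

```sql
-- Create the external table with no LOCATION clause
CREATE EXTERNAL TABLE myData (str1 STRING, str2 STRING, count1 INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;

-- Then point it at a directory (not a file) after the fact
ALTER TABLE myData SET LOCATION
'hdfs://10.48.97.97:9000/path/to/your/data/directory/in/hdfs';
```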

Hope that helps

sanjay

From: Tony Burton <tbur...@sportingindex.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Tuesday, March 26, 2013 10:11 AM
To: "user@hive.apache.org" <user@hive.apache.org>
Subject: S3/EMR Hive: Load contents of a single file

Hi list,

I've been using hive to perform queries on data hosted on AWS S3, and my tables 
point at data by specifying the directory in which the data is stored, eg

$ create external table myData (str1 string, str2 string, count1 int) 
partitioned by  row format  stored as textfile location 
's3://mybucket/path/to/data';

where s3://mybucket/path/to/data is the "directory" that contains the files I'm 
interested in. My use case now is to create a table with data pointing to a 
specific file in a directory:

$ create external table myData (str1 string, str2 string, count1 int) 
partitioned by  row format  stored as textfile location 
's3://mybucket/path/to/data/src1.txt';

and I get the error: "FAILED: Error in metadata: MetaException(message:Got 
exception: java.io.IOException Can't make directory for path 
's3://spinmetrics/global/counter_Fixture.txt' since it is a file.)". OK, let's 
try to create the table without specifying the data source:

$ create external table myData (str1 string, str2 string, count1 int) 
partitioned by  row format  stored as textfile

OK, no problem. Now let's load the data:

$ LOAD DATA INPATH 's3://mybucket/path/to/data/src1.txt' INTO TABLE myData;

(referring to https://cwiki.apache.org/Hive/languagemanual-dml.html - 
"...filepath can refer to a file (in which case hive will move the file into 
the table)")

Error message is: "FAILED: Error in semantic analysis: Line 1:17 Path is not 
legal ''s3://mybucket/path/to/data/src1.txt": Move from: 
s3://mybucket/path/to/data/src1.txt to: 
hdfs://10.48.97.97:9000/mnt/hive_081/warehouse/gfix is not valid. Please check 
that values for params "default.fs.name" and "hive.metastore.warehouse.dir" do 
not conflict."

So I check my fs.default.name (which the error message calls 
"default.fs.name") and hive.metastore.warehouse.dir (which have 
never caused problems before):

$ set fs.default.name;
fs.default.name=hdfs://10.48.97.97:9000
$ set hive.metastore.warehouse.dir;
hive.metastore.warehouse.dir=/mnt/hive_081/warehouse

Clearly different, but which is correct? Is there an easier way to load a 
single file into a hive table? Or should I just put each file in a directory 
and proceed as before?

Thanks!

Tony







Tony Burton
Senior Software Engineer
e: tbur...@sportingindex.com





