[ANN] Hadoop Summit Call for Papers Now Open

2012-02-07 Thread Joe McGonnell
Just a quick note to let everyone know that the Call for Papers for this year's 
Hadoop Summit is currently open. Please visit www.hadoopsummit.org for more 
information. The submission deadline for abstracts is February 22nd.

This year's Hadoop Summit will be even bigger and better than in past years. The 
conference has been expanded to two days (June 13th and 14th) and moved to the 
San Jose Convention Center. There are now six tracks: Future of Hadoop; 
Deployment and Operations; Enterprise Data Architecture; Applications and Data 
Science; Analytics and Business Intelligence; and Hadoop in Action. 

We are assembling a community-focused content selection committee that will 
include representatives from more than 20 companies involved in the Apache 
Hadoop space. The committee will be looking for presentations that involve 
compelling use cases and success stories, best practices, and technical insights 
that help to advance the adoption of Apache Hadoop and related sub-projects.

We look forward to your submissions. Remember that each presenter receives a 
free all-access pass to Hadoop Summit.

Thanks,
The Hadoop Summit team

Re: What's the best practice of loading logs into hdfs while using hive to do log analytic?

2012-02-07 Thread alo alt
Yes. 
You can use partitioned tables in Hive to append new data as additional partitions 
without moving it. With Flume you can define small sinks, but you're right: the file 
in HDFS is only closed and complete once Flume sends the close. Please note that the 
gzip codec has no internal sync markers, so you have to wait until Flume has closed 
the file in HDFS before you can process it. Snappy would be a good fit, but I have 
no long-term experience with it in a production environment. 

Regarding block sizing you're right, but I think you can get past that. 
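
As a rough illustration of the partitioned-table approach (just a sketch; the 
table, column, and path names are made up), an external table can simply point 
new partitions at the directories Flume writes:

CREATE EXTERNAL TABLE logs_example (
  line STRING
)
PARTITIONED BY (dt STRING)
LOCATION '/flume/logs_example';

-- register the directory Flume wrote for a given hour as a new partition
-- (path and partition value below are placeholders)
ALTER TABLE logs_example ADD PARTITION (dt='2012-02-07-14')
LOCATION '/flume/logs_example/2012-02-07-14';

No data is moved; the new partition simply becomes visible to Hive queries.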

--
Alexander Lorenz
http://mapredit.blogspot.com

On Feb 7, 2012, at 3:09 PM, Xiaobin She wrote:

> hi Bejoy and Alex,
> 
> thank you for your advice.
> 
> Actually I have look at Scribe first, and I have heard of Flume.
> 
> I look at flume's user guide just now, and flume seems promising, as Bejoy 
> said , the flume collector can dump data into hdfs when the collector buffer 
> reaches a particular size of after a particular time interval, this is good 
> and I think it can solve the problem of data delivery latency.
> 
> But what about compress?
> 
> from the user's guide of flume, I see that flum supports compression  of log 
> files, but if flume did not wait until the collector has collect one hour of 
> log and then compress it and send it to hdfs, then it will  send part of the 
> one hour log to hdfs, am I right?
> 
> so if I want to use thest data in hive (assume I have an external table in 
> hive), I have to specify at least two partiton key while creating table, one 
> for day-month-hour, and one for some other time interval like ten miniutes, 
> then I add hive partition to the existed external table with specified 
> partition key.
> 
> Is the above process right ?
> 
> If this right, then there could be some other problem, like the ten miniute 
> logs after compress is not big enough to fit the block size of hdfs which may 
> couse lots of small files ( for some of our log id, this may come true), or 
> if I set the time interval to be half an hour, then at the end of hour, it 
> may still cause the data delivery latency problem.
> 
> this seems not a very good solution, am I making some mistakes or 
> misunderstanding here?
> 
> thank you very much!
> 
> 
> 
> 
> 
> 2012/2/7 alo alt 
> Hi,
> 
> a first start with flume:
> http://mapredit.blogspot.com/2011/10/centralized-logfile-management-across.html
> 
> Facebook's scribe could also be work for you.
> 
> - Alex
> 
> --
> Alexander Lorenz
> http://mapredit.blogspot.com
> 
> On Feb 7, 2012, at 11:03 AM, Xiaobin She wrote:
> 
> > Hi all,
> >
> > Sorry if it is not appropriate to send one thread into two maillist.
> > **
> > I'm tring to use hadoop and hive to do some log analytic jobs.
> >
> > Our system generate lots of logs every day, for example, it produce about
> > 370GB logs(including lots of log files) yesterday, and every day the logs
> > increases.
> >
> > And we want to use hadoop and hive to replace our old log analysic system.
> >
> > We distinguish our logs with logid, we have an log collector which will
> > collect logs from clients and then generate log files.
> >
> > for every logid, there will be one log file every hour, for some logid,
> > this hourly log file can be 1~2GB
> >
> > I have set up an test cluster with hadoop and hive, and I have run some
> > test which seems good for us.
> >
> > For reference, we will create one table in hive for every logid which will
> > be partitoned by hour.
> >
> > Now I have a question, what's the best practice for loading logs files into
> > hdfs or hive warehouse dir ?
> >
> > My first thought is,  at the begining of every hour,  compress the log file
> > of the last hour of every logid and then use the hive cmd tool to load
> > these compressed log files into hdfs.
> >
> > using  commands like "LOAD DATA LOCAL inpath '$logname' OVERWRITE  INTO
> > TABLE $tablename PARTITION (dt='$h') "
> >
> > I think this can work, and I have run some test on our 3-nodes test
> > clusters.
> >
> > But the problem is, there are lots of logid which means there are lots of
> > log files,  so every hour we will have to load lots of files into hdfs
> > and there is another problem,  we will run hourly analysis job on these
> > hourly collected log files,
> > which inroduces the problem, because there are lots of log files, if we
> > load these log files at the same time at the begining of every hour, I
> > think  there will some network flows and there will be data delivery
> > latency problem.
> >
> > For data delivery latency problem, I mean it will take some time for the
> > log files to be copyed into hdfs,  and this will cause our hourly log
> > analysis job to start later.
> >
> > So I want to figure out if we can write or append logs into an compressed
> > file which is already located in hdfs, and I have posted an thread in the
> > mailist, and from what I have learned, this is not possible.
> >
> >
> > So, what's the best practice of loading logs into hdfs while using hive to
> > do log ana

Re: What's the best practice of loading logs into hdfs while using hive to do log analytic?

2012-02-07 Thread Xiaobin She
hi Bejoy and Alex,

thank you for your advice.

Actually I looked at Scribe first, and I have heard of Flume.

I just looked at Flume's user guide, and Flume seems promising. As Bejoy
said, the Flume collector can dump data into HDFS when the collector
buffer reaches a particular size or after a particular time interval. This
is good, and I think it can solve the problem of data delivery latency.

But what about compression?

From Flume's user guide I see that Flume supports compression of log
files. But if Flume does not wait until the collector has gathered a full
hour of logs before compressing and sending them to HDFS, then it will
send only part of the hourly log to HDFS, am I right?

So if I want to use these data in Hive (assuming I have an external table
in Hive), I have to specify at least two partition keys when creating the
table: one for day-month-hour, and one for some finer time interval like
ten minutes. Then I add Hive partitions to the existing external table
with the specified partition keys.

Is the above process right?
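
To make this concrete, the kind of table I have in mind is roughly the
following (just a sketch; the table, column, partition, and path names are
made up):

CREATE EXTERNAL TABLE log_example (
  line STRING
)
PARTITIONED BY (dt STRING, ti STRING)
LOCATION '/logs/log_example';

-- register each delivered chunk, e.g. hour 14 of the day, ten-minute slot '10'
-- (all values here are placeholders)
ALTER TABLE log_example ADD PARTITION (dt='2012-02-07-14', ti='10')
LOCATION '/logs/log_example/2012-02-07-14/10';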

If it is, there could be other problems: the ten-minute logs may not be
big enough after compression to fill an HDFS block, which may cause lots
of small files (for some of our log ids this is likely to happen), and if
I set the time interval to half an hour instead, then at the end of the
hour it may still cause the data delivery latency problem.

This does not seem like a very good solution; am I making some mistake or
misunderstanding something here?

thank you very much!





2012/2/7 alo alt 

> [quoted text trimmed]


Get race condition in hive with script_pipe.q ( the same as HIVE-1491)

2012-02-07 Thread Bing Li
Hi all,
I got the same error message described in [HIVE-1491] (
https://issues.apache.org/jira/browse/HIVE-1491) when running script_pipe.q on
OpenJDK.
Does Hive plan to resolve this race condition?

Details:
=
[junit] Begin query: script_pipe.q
[junit] Ended Job = job_local_0001 with errors
[junit] Error during job, obtaining debugging information...
[junit] Exception: Client Execution failed with error code = 9
[junit] See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get more logs.
[junit] junit.framework.AssertionFailedError: Client Execution failed with error code = 9
[junit] See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get more logs.
[junit]     at junit.framework.Assert.fail(Assert.java:50)
[junit]     at org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_script_pipe(TestCliDriver.java:27636)
[junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
[junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
[junit]     at java.lang.reflect.Method.invoke(Method.java:611)
[junit]     at junit.framework.TestCase.runTest(TestCase.java:168)
[junit]     at junit.framework.TestCase.runBare(TestCase.java:134)
[junit]     at junit.framework.TestResult$1.protect(TestResult.java:110)
[junit]     at junit.framework.TestResult.runProtected(TestResult.java:128)
[junit]     at junit.framework.TestResult.run(TestResult.java:113)
[junit]     at junit.framework.TestCase.run(TestCase.java:124)
[junit]     at junit.framework.TestSuite.runTest(TestSuite.java:243)
[junit]     at junit.framework.TestSuite.run(TestSuite.java:238)
[junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
[junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
[junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)


I tried to reproduce it:
1. Create a table src with two columns, key and value (a DDL sketch follows the queries below).
2. Run the following queries:

SELECT TRANSFORM(key, value) USING 'head -n 1' FROM src;                                      -- PASS
SELECT TRANSFORM(key, value) USING 'head -n 1' AS a,b,c,d FROM src;                           -- PASS

SELECT TRANSFORM(key, value, key, value) USING 'head -n 1' FROM src;                          -- PASS
SELECT TRANSFORM(key, value, key, value) USING 'head -n 1' AS a,b,c,d FROM src;               -- PASS

SELECT TRANSFORM(key, value, key, value, key, value) USING 'head -n 1' FROM src;              -- FAILED
SELECT TRANSFORM(key, value, key, value, key, value) USING 'head -n 1' AS a,b,c,d FROM src;   -- FAILED
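
For reference, the src table in step 1 is just a small two-column test table;
assuming the standard schema from the Hive test setup, the DDL is simply:

-- assumed schema; both columns treated as strings
CREATE TABLE src (key STRING, value STRING);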


Thanks,
-- Bing


Re: Error when Creating an UDF

2012-02-07 Thread alo alt
Btw, welcome to the list, JC!

:)

--
Alexander Lorenz
http://mapredit.blogspot.com

On Feb 7, 2012, at 10:57 AM, Jean-Charles Thomas wrote:

> [quoted text trimmed]



Re: What's the best practice of loading logs into hdfs while using hive to do log analytic?

2012-02-07 Thread alo alt
Hi,

A first start with Flume:
http://mapredit.blogspot.com/2011/10/centralized-logfile-management-across.html

Facebook's Scribe could also work for you.

- Alex

--
Alexander Lorenz
http://mapredit.blogspot.com

On Feb 7, 2012, at 11:03 AM, Xiaobin She wrote:

> [quoted text trimmed]



What's the best practice of loading logs into hdfs while using hive to do log analytic?

2012-02-07 Thread Xiaobin She
Hi all,

Sorry if it is not appropriate to send one thread to two mailing lists.

I'm trying to use Hadoop and Hive to do some log analytics jobs.

Our system generates lots of logs every day; for example, it produced about
370GB of logs (spread across many log files) yesterday, and the volume
increases every day.

We want to use Hadoop and Hive to replace our old log analysis system.

We distinguish our logs by logid. We have a log collector which collects
logs from clients and then generates log files.

For every logid there is one log file per hour, and for some logids this
hourly log file can be 1~2GB.

I have set up a test cluster with Hadoop and Hive, and I have run some
tests whose results look good for us.

For reference, we will create one table in Hive for every logid, and each
table will be partitioned by hour.

Now I have a question: what's the best practice for loading log files into
HDFS or the Hive warehouse dir?

My first thought is, at the beginning of every hour, to compress the last
hour's log file for every logid and then use the Hive command-line tool to
load these compressed log files into HDFS,

using commands like "LOAD DATA LOCAL INPATH '$logname' OVERWRITE INTO
TABLE $tablename PARTITION (dt='$h')"
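
To be explicit about what I tested, the table and load step look roughly
like this (a sketch; the table name, file path, and partition value are
made up):

-- one table per logid, partitioned by hour; names below are placeholders
CREATE TABLE log_12345 (
  line STRING
)
PARTITIONED BY (dt STRING);

LOAD DATA LOCAL INPATH '/data/logs/12345/2012020714.log.gz'
OVERWRITE INTO TABLE log_12345 PARTITION (dt='2012020714');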

I think this can work, and I have run some tests on our 3-node test
cluster.

But the problem is that there are lots of logids, which means there are
lots of log files, so every hour we would have to load lots of files into
HDFS. And there is another problem: we will run hourly analysis jobs on
these hourly collected log files, and because there are so many log files,
if we load them all at the same time at the beginning of every hour, I
think there will be a lot of network traffic and a data delivery latency
problem.

By the data delivery latency problem, I mean that it will take some time
for the log files to be copied into HDFS, and this will cause our hourly
log analysis jobs to start later.

So I wanted to figure out whether we can write or append logs to a
compressed file which is already located in HDFS. I have posted a thread
about this on the mailing list, and from what I have learned, this is not
possible.


So, what's the best practice for loading logs into HDFS while using Hive
to do log analytics?

Or what are the common methods for handling the problem I have described
above?

Can anyone give me some advice?

Thank you very much for your help!


AW: Error when Creating an UDF

2012-02-07 Thread Jean-Charles Thomas
Hi,

Thanks for the replies. I got it to work now; the jar file was not built 
correctly.
Now it works like a charm.
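
In case it is useful to anyone else, the registered function can then be used
in a query like this (a sketch; the table and column names are placeholders):

add jar /scripts/hive/udf/Md5.jar;
CREATE TEMPORARY FUNCTION mymd5 AS 'com.autoscout24.hive.udf.Md5';
-- some_table and some_column are placeholder names
SELECT mymd5(some_column) FROM some_table LIMIT 10;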

Regards,

Jean-Charles



From: bejoy...@yahoo.com [mailto:bejoy...@yahoo.com]
Sent: Monday, February 6, 2012 19:48
To: user@hive.apache.org
Subject: Re: Error when Creating an UDF

Hi,
One of your jars is not available, and that may be the one that has the 
required UDF or related methods.

Hive was not able to locate your first jar:

'/scripts/hiveMd5.jar does not exist'

Just fix this with the correct location and everything should work fine.
Regards
Bejoy K S

From handheld, please excuse typos.

From: Jean-Charles Thomas 
Date: Mon, 6 Feb 2012 16:51:58 +0100
To: user@hive.apache.org
ReplyTo: user@hive.apache.org
Subject: Error when Creating an UDF

Hi everybody,

I am trying to create a UDF following the example in the Hive Wiki.
Everything is fine except for the CREATE statement (see below), where an error occurs:

hive> add jar /scripts/hiveMd5.jar;
/scripts/hiveMd5.jar does not exist
hive> add jar /scripts/hive/udf/Md5.jar;
Added /scripts/hive/udf/Md5.jar to class path
Added resource: /scripts/hive/udf/Md5.jar
hive> CREATE TEMPORARY FUNCTION mymd5 AS 'com.autoscout24.hive.udf.Md5';
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.FunctionTask
hive>

In the Hive log there is not much more:

2012-02-06 16:16:36,096 ERROR ql.Driver (SessionState.java:printError(343)) - 
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.FunctionTask

Any help is welcome.

Thanks a lot for the help,

Jean-Charles