Re: [ADV] Blatant marketing of the book Pro Hadoop. In honor of the 09 summit here is a 50% off coupon corrected code is LUCKYOU

2009-06-17 Thread zhang jianfeng
Hi Jason,

I still cannot visit the links you provided; I am in China, so it may be a
network problem.

Could you send me the alpha chapters of your book? That would be
appreciated.


Thank you

Jeff Zhang



2009/6/17 jason hadoop 

> You can purchase the ebook from www.apress.com; the final copy is now
> available. There is a 50% off coupon, LUCKYOU, good for a few more days.
>
> You can try prohadoop.ning.com or www.prohadoop.com as alternatives to
> www.prohadoopbook.com.
>
> What error do you receive when you try to visit www.prohadoopbook.com?
>
> 2009/6/17 zjffdu 
>
> > Hi Jason,
> >
> > Where can I download your book's alpha chapters? I am very interested in
> > your book about Hadoop.
> >
> > Also, I cannot visit the link www.prohadoopbook.com
> >
> >
> >
> > -----Original Message-----
> > From: jason hadoop [mailto:jason.had...@gmail.com]
> > Sent: June 9, 2009 20:47
> > To: core-user@hadoop.apache.org
> > Subject: [ADV] Blatant marketing of the book Pro Hadoop. In honor of
> > the 09 summit here is a 50% off coupon corrected code is LUCKYOU
> >
> > http://eBookshop.apress.com CODE LUCKYOU
> >
> > --
> > Alpha Chapters of my book on Hadoop are available
> > http://www.apress.com/book/view/9781430219422
> > www.prohadoopbook.com a community for Hadoop Professionals
> >
> >
>
>
> --
> Pro Hadoop, a book to guide you from beginner to hadoop mastery,
> http://www.amazon.com/dp/1430219424?tag=jewlerymall
> www.prohadoopbook.com a community for Hadoop Professionals
>


Re: Is there any way to debug the hadoop job in eclipse

2009-06-06 Thread zhang jianfeng
Is there any resource on the internet that I can get as soon as possible?



On Fri, Jun 5, 2009 at 6:43 PM, jason hadoop  wrote:

> Chapter 7 of my book goes into the details of how to debug with Eclipse.
>
> On Fri, Jun 5, 2009 at 3:40 AM, zhang jianfeng  wrote:
>
> > Hi all,
> >
> > Some jobs I submit to Hadoop failed, but I cannot see what the problem
> > is. So is there any way to debug the Hadoop job in Eclipse, such as
> > remote debugging?
> >
> > Or are there other ways to find out why a job failed? I did not find
> > enough information in the job tracker.
> >
> > Thank you.
> >
> > Jeff Zhang
> >
>
>
>
> --
> Alpha Chapters of my book on Hadoop are available
> http://www.apress.com/book/view/9781430219422
> www.prohadoopbook.com a community for Hadoop Professionals
>
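
For readers who find this thread later: the simplest way to get Eclipse
breakpoints working is to run the job under the local job runner, which
keeps the client, mapper, and reducer in a single JVM. The sketch below is
not from the book; it assumes the Hadoop 0.19 JobConf API, and the class
name and input/output paths are hypothetical.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class LocalDebugDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(LocalDebugDriver.class);
    // Force the local job runner: the whole job executes inside this
    // JVM, so ordinary Eclipse breakpoints in map() and reduce() fire.
    conf.set("mapred.job.tracker", "local");
    conf.set("fs.default.name", "file:///");
    // Set your real mapper/reducer classes here; the old-API defaults
    // are the identity classes, so this runs as-is on text input.
    FileInputFormat.setInputPaths(conf, new Path("input"));
    FileOutputFormat.setOutputPath(conf, new Path("output"));
    JobClient.runJob(conf);
  }
}

Launch the class with Debug As > Java Application. To debug a task on a
real cluster instead, one option (again a sketch, same version caveats) is
to start the child JVMs with a JDWP agent by setting mapred.child.java.opts
to "-Xdebug -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=y"
and attaching an Eclipse Remote Java Application configuration to port
8000; with suspend=y this is only practical when a single task runs at a
time.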


Is there any way to debug the hadoop job in eclipse

2009-06-05 Thread zhang jianfeng
Hi all,

Some jobs I submit to Hadoop failed, but I cannot see what the problem is.
So is there any way to debug the Hadoop job in Eclipse, such as remote
debugging?

Or are there other ways to find out why a job failed? I did not find enough
information in the job tracker.

Thank you.

Jeff Zhang


Re: Job pending there

2009-05-30 Thread zhang jianfeng
This is the error message in the task tracker log (does anyone have any ideas?):

2009-05-31 09:49:16,165 ERROR org.apache.hadoop.mapred.TaskTracker: Caught
exception: java.io.IOException: Call to localhost/127.0.0.1:9001 failed on
local exception: An existing connection was forcibly closed by the remote
host
    at org.apache.hadoop.ipc.Client.call(Client.java:699)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
    at org.apache.hadoop.mapred.$Proxy4.getBuildVersion(Unknown Source)
    at org.apache.hadoop.mapred.TaskTracker.offerService(TaskTracker.java:974)
    at org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:1678)
    at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:2698)
Caused by: java.io.IOException: An existing connection was forcibly closed
by the remote host
    at sun.nio.ch.SocketDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:25)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:233)
    at sun.nio.ch.IOUtil.read(IOUtil.java:206)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:140)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:150)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:123)
    at java.io.FilterInputStream.read(FilterInputStream.java:116)
    at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:271)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.io.DataInputStream.readInt(DataInputStream.java:370)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:493)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:438)

2009-05-31 09:49:18,118 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:9001. Already tried 0 time(s).
2009-05-31 09:49:20,040 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:9001. Already tried 1 time(s).
2009-05-31 09:49:21,946 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:9001. Already tried 2 time(s).
2009-05-31 09:49:23,853 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:9001. Already tried 3 time(s).
2009-05-31 09:49:25,774 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:9001. Already tried 4 time(s).




On Sun, May 31, 2009 at 10:03 AM, Zhong Wang wrote:

> You should read the logs to find out what happened.
>
> On Sun, May 31, 2009 at 9:48 AM, zhang jianfeng  wrote:
> > I also find that the tasktracker log keeps growing; it seems the task
> > tracker is working, but it will exhaust my disk space.
> >
> >
> >
> > On Sun, May 31, 2009 at 9:45 AM, zhang jianfeng  wrote:
> >
> >> Hi all,
> >>
> >> I followed the Hadoop tutorial and ran it in local pseudo-distributed
> >> mode. But every time I run
> >> bin/hadoop jar hadoop-0.19.0-examples.jar grep input output 'dfs[a-z.]+',
> >> the job always stays pending, and I don't know the reason.
> >>
> >> PS: my platform is Windows XP, and I run it in Cygwin.
> >>
> >>
> >> Thank you
> >>
> >> Jeff Zhang
> >>
> >
>
>
>
> --
> Zhong Wang
>
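
A note on the log above: the TaskTracker is repeatedly failing to reach the
JobTracker at localhost:9001, so the first things to check are that
bin/start-all.sh actually brought the JobTracker up (its own log will say)
and that the daemon addresses in hadoop-site.xml match what the clients
expect. A minimal pseudo-distributed hadoop-site.xml of that era looked
roughly like this (a sketch following the 0.19 quickstart; the ports are
the tutorial's conventions, not requirements):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

On Windows under Cygwin, "an existing connection was forcibly closed" on
127.0.0.1 is also a classic symptom of a local firewall interfering with
loopback traffic, which is worth ruling out.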


Re: Job pending there

2009-05-30 Thread zhang jianfeng
I also find that the tasktracker log keeps growing; it seems the task
tracker is working, but it will exhaust my disk space.



On Sun, May 31, 2009 at 9:45 AM, zhang jianfeng  wrote:

> Hi all,
>
> I followed the Hadoop tutorial and ran it in local pseudo-distributed
> mode. But every time I run
> bin/hadoop jar hadoop-0.19.0-examples.jar grep input output 'dfs[a-z.]+',
> the job always stays pending, and I don't know the reason.
>
> PS: my platform is Windows XP, and I run it in Cygwin.
>
>
> Thank you
>
> Jeff Zhang
>


Job pending there

2009-05-30 Thread zhang jianfeng
Hi all,

I followed the Hadoop tutorial and ran it in local pseudo-distributed
mode. But every time I run
bin/hadoop jar hadoop-0.19.0-examples.jar grep input output 'dfs[a-z.]+',
the job always stays pending, and I don't know the reason.

PS: my platform is Windows XP, and I run it in Cygwin.


Thank you

Jeff Zhang
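
For completeness, the quickstart sequence around that command was roughly
the following (a sketch reconstructed from the 0.19-era tutorial; a job
that pends forever usually means one of the earlier steps, especially
starting the daemons, did not complete cleanly):

bin/hadoop namenode -format                 # one-time: format HDFS
bin/start-all.sh                            # start namenode, datanode,
                                            # jobtracker, tasktracker
bin/hadoop fs -put conf input               # copy some files into HDFS
bin/hadoop jar hadoop-0.19.0-examples.jar grep input output 'dfs[a-z.]+'
bin/hadoop fs -cat output/*                 # inspect the results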


Re: Can I run the testcase in local

2009-05-10 Thread zhang jianfeng
PS: I run it on a Windows machine.

On Sun, May 10, 2009 at 4:11 PM, zjffdu  wrote:

> Hi all,
>
> I'd like to know more about Hadoop, so I want to debug the test cases
> locally.
>
> But I ran into the errors below; can anyone help solve this problem?
> Thank you very much.
>
> ###
>
> 2009-05-10 16:00:51,483 ERROR namenode.FSNamesystem
> (FSNamesystem.java:<init>(291)) - FSNamesystem initialization failed.
> java.io.IOException: Problem starting http server
>     at org.apache.hadoop.http.HttpServer.start(HttpServer.java:369)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:289)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:162)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:209)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:197)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:822)
>     at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:275)
>     at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:119)
>     at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
>     at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
>     at junit.framework.TestCase.runBare(TestCase.java:125)
>     at junit.framework.TestResult$1.protect(TestResult.java:106)
>     at junit.framework.TestResult.runProtected(TestResult.java:124)
>     at junit.framework.TestResult.run(TestResult.java:109)
>     at junit.framework.TestCase.run(TestCase.java:118)
>     at junit.framework.TestSuite.runTest(TestSuite.java:208)
>     at junit.framework.TestSuite.run(TestSuite.java:203)
>     at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130)
>     at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>     at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
>     at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
>     at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
>     at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
> Caused by: org.mortbay.util.MultiException[java.lang.ClassNotFoundException:
> org.apache.hadoop.hdfs.server.namenode.dfshealth_jsp,
> java.lang.ClassNotFoundException:
> org.apache.hadoop.hdfs.server.namenode.nn_005fbrowsedfscontent_jsp]
>     at org.mortbay.http.HttpServer.doStart(HttpServer.java:731)
>     at org.mortbay.util.Container.start(Container.java:72)
>     at org.apache.hadoop.http.HttpServer.start(HttpServer.java:347)
>     ... 23 more
>
> 2009-05-10 16:00:51,483 INFO namenode.FSNamesystem
> (FSEditLog.java:printStatistics(940)) - Number of transactions: 0 Total
> time for transactions(ms): 0 Number of syncs: 0 SyncTimes(ms): 0 0
>
> 2009-05-10 16:00:51,483 WARN namenode.FSNamesystem
> (FSNamesystem.java:run(2217)) - ReplicationMonitor thread received
> InterruptedException. java.lang.InterruptedException: sleep interrupted
>
> 2009-05-10 16:00:51,655 INFO ipc.Server (Server.java:stop(1033)) -
> Stopping server on 4233
>
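
The root cause in traces like this is usually the two classes named in the
ClassNotFoundException: dfshealth_jsp and nn_005fbrowsedfscontent_jsp are
servlets generated from JSPs by the Ant build, so running the tests
directly from src/ in Eclipse fails until "ant compile" has run and the
generated classes under build/ are on the test classpath. Once they are, a
minimal smoke test against the in-process mini cluster looks roughly like
this (a sketch assuming the 0.19-era test support classes; the test name is
hypothetical):

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.ClusterMapReduceTestCase;

public class MiniClusterSmokeTest extends ClusterMapReduceTestCase {
  // setUp() in the base class starts a MiniDFSCluster and a
  // MiniMRCluster; this test only verifies the mini HDFS is usable.
  public void testClusterComesUp() throws Exception {
    FileSystem fs = getFileSystem();
    Path probe = new Path("/probe");
    assertTrue(fs.mkdirs(probe));
    assertTrue(fs.exists(probe));
  }
}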


Re: Amazon Elastic MapReduce

2009-04-02 Thread zhang jianfeng
It seems like I would have to pay additional money, so why not configure a
Hadoop cluster on EC2 myself? That has already been automated with scripts.





On Thu, Apr 2, 2009 at 4:09 PM, Miles Osborne  wrote:

> ... and only in the US
>
> Miles
>
> 2009/4/2 zhang jianfeng :
> > Does it support Pig?
> >
> >
> > On Thu, Apr 2, 2009 at 3:47 PM, Chris K Wensel  wrote:
> >
> >>
> >> FYI
> >>
> >> Amazon's new Hadoop offering:
> >> http://aws.amazon.com/elasticmapreduce/
> >>
> >> And Cascading 1.0 supports it:
> >> http://www.cascading.org/2009/04/amazon-elastic-mapreduce.html
> >>
> >> cheers,
> >> ckw
> >>
> >> --
> >> Chris K Wensel
> >> ch...@wensel.net
> >> http://www.cascading.org/
> >> http://www.scaleunlimited.com/
> >>
> >>
> >
>
>
>
> --
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
>
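
The scripts referred to here are presumably the src/contrib/ec2 helpers
that shipped with Hadoop at the time; a typical session looked roughly like
this (a sketch; it assumes a 0.19-era checkout with your AWS credentials
filled into hadoop-ec2-env.sh, and "my-cluster" is a hypothetical name):

bin/hadoop-ec2 launch-cluster my-cluster 10   # master plus 10 slaves
bin/hadoop-ec2 login my-cluster               # ssh to the master node
bin/hadoop-ec2 terminate-cluster my-cluster   # shut the instances down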


Re: Amazon Elastic MapReduce

2009-04-02 Thread zhang jianfeng
Does it support Pig?


On Thu, Apr 2, 2009 at 3:47 PM, Chris K Wensel  wrote:

>
> FYI
>
> Amazon's new Hadoop offering:
> http://aws.amazon.com/elasticmapreduce/
>
> And Cascading 1.0 supports it:
> http://www.cascading.org/2009/04/amazon-elastic-mapreduce.html
>
> cheers,
> ckw
>
> --
> Chris K Wensel
> ch...@wensel.net
> http://www.cascading.org/
> http://www.scaleunlimited.com/
>
>