Hello all,
This may be unrelated and/or belong on a different mailing list, but the issue I'm
having is evident only with M/R. I've been playing with Hadoop ports and
iptables, and after a recent change I noticed the following messages every
time I execute a job:
12/08/09 10:02:06 INFO
I'm getting bounced by the email provider so I'm reposting..
From: ARTEM ERVITS are9...@nyp.org
Reply-To: mapreduce-user@hadoop.apache.org
To: mapreduce-user@hadoop.apache.org
Date: Thursday, August
Hi,
I'm just a beginner in Hadoop and I hope my question isn't too silly...
I've tried to load the Apache log into HDFS, and I'm using Hive to select
from it.
select count(*) doesn't return anything, and I can't understand what's written
in the logfile.
I would be grateful if someone could assist me.
That's the best answer so far, great Mike!
*Fabio Pitzolu*
Consultant - BI Infrastructure
Mob. +39 3356033776
Telefono 02 87157239
Fax. 02 93664786
*Gruppo Consulenza Innovazione - http://www.gr-ci.com*
2012/8/9 Mike Lyon mike.l...@gmail.com
I'm taking a wild guess, but I think they are
Yes, those who want to unsubscribe do not have an unsubscribe option on
the website. At least now they will provide that option, so that anyone
who wants to unsubscribe can do so.
-Original Message-
From: Fabio Pitzolu [mailto:fabio.pitz...@gr-ci.com]
Sent: 09 August 2012 12:59
To:
Or they could just follow the directions and unsubscribe via email.
But I guess that's too difficult...
-mike
Sent from my iPhone
On Aug 9, 2012, at 0:38, sathyavageeswaran sat...@morisonmenon.com wrote:
Yes, those who want to unsubscribe do not have an unsubscribe option on
the website.
On 08/09/2012 01:24 AM, Mike Lyon wrote:
I'm taking a wild guess, but I think they are legitimately trying to
unsubscribe but are clueless and don't RTFM.
The scariest part is that these are supposedly people with enough
technical chops to set up and program to a Hadoop cluster.
I suddenly do
The file loaded last could be corrupted. Try to decompress the file and see if
you get any errors.
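If the file came in gzipped, a quick integrity check along these lines (the path here is just a placeholder; point it at the file you loaded last) will tell you whether it decompresses cleanly without extracting it:

```shell
# Placeholder path; substitute the file that was loaded last.
f=/tmp/last_loaded.gz

# gzip -t verifies the compressed stream end-to-end without writing output.
if gzip -t "$f" 2>/dev/null; then
  echo "OK: $f decompresses cleanly"
else
  echo "CORRUPT: $f failed the integrity check"
fi
```

The same idea works with bzip2 -t or unzip -t for other formats.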
Thanks,
Ranjith
On Aug 9, 2012, at 8:07 PM, rei125 yling.meteorgar...@gmail.com wrote:
hi,
I also ran into this problem before; in the end we gave up on SPNEGO mode and
overrode PseudoAuthenticationHandler.java:

    @Override
    public AuthenticationToken authenticate(HttpServletRequest request,
            HttpServletResponse response)
            throws IOException, AuthenticationException {
Hi,
You need the file.out and file.out.index files if you want the
intermediate map-to-reduce output, so try a pattern that matches these
and you should have it.
The X kind of files are what MR produces on HDFS as regular
outputs - these aren't intermediate.
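As a sketch, something like this find pattern should surface them on a task-tracker node (LOCAL_DIR is an assumed stand-in; substitute your configured mapred.local.dir):

```shell
# Assumed location of the task-tracker's local dir; use your mapred.local.dir.
LOCAL_DIR=${LOCAL_DIR:-/tmp/mapred/local}

# Match only the intermediate map-output files and their index files.
if [ -d "$LOCAL_DIR" ]; then
  find "$LOCAL_DIR" -type f \( -name 'file.out' -o -name 'file.out.index' \)
fi
```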
On Fri, Aug 10, 2012 at 8:52 AM,
With Flume you could use batch mode: Flume waits until a given count of
events has been delivered (let's say 100) and then bulk-writes them into HDFS
(as an example). On top of that you can set a timeout, meaning if you don't
hit batch=x within sec=x, it writes out anyway. That is useful for very small
files (Avro
maybe),
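For reference, the relevant HDFS-sink knobs look roughly like this (the agent/sink names a1/k1 and the path are made up for illustration; hdfs.batchSize and hdfs.rollInterval are the batch count and timeout described above):

```properties
# flume.conf fragment (illustrative agent a1, sink k1)
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode/flume/events
# flush to HDFS once this many events have accumulated
a1.sinks.k1.hdfs.batchSize = 100
# roll the file after this many seconds even if the batch isn't full
a1.sinks.k1.hdfs.rollInterval = 30
```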
Hi all,
I found a warning in the log, and it may be a bug in HDFS. Can anyone
confirm that?
I made a simple test case; the output is attached.
I tested it on hadoop-dist-2.0.0-alpha.
Best Regards
--
Zhanwei Wang
2012-08-09 17:17:27,370 WARN hdfs.DFSClient
Hi,
I ran this test as superuser.
The reason for this warning is that the HDFS client buffers the block
information, and then the block is appended. For example, the generation stamp
buffered on the client is 1000; after the append, the real generation stamp of
the block is 1001.
When the client ask
This was a version dependency issue. The class is not in 0.20.203.0.
From: Artem Ervits [mailto:are9...@nyp.org]
Sent: Wednesday, August 08, 2012 2:34 PM
To: user@hadoop.apache.org
Subject: Setting up HTTP authentication
Hello all,
I followed the 1.0.3 docs to setup http simple authentication.
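For anyone following along, the settings those docs describe amount to a core-site.xml fragment along these lines (values shown are illustrative, not a known-good config):

```xml
<!-- core-site.xml fragment: HTTP simple authentication (Hadoop 1.0.3) -->
<property>
  <name>hadoop.http.filter.initializers</name>
  <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
</property>
<property>
  <name>hadoop.http.authentication.type</name>
  <value>simple</value>
</property>
<property>
  <name>hadoop.http.authentication.simple.anonymous.allowed</name>
  <value>false</value>
</property>
```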
Hi again,
this is a direct response to my previous posting with the title "Logs
cannot be created", where logs could not be created (Spill failed). I
got the hint that I should check privileges, but that was not the
problem, because I own the folders that were used for this.
I finally found
Hi Rahul,
Better to start a new thread than hijack others. :) It helps to keep
the mailing list archives clean.
To learn Java, you need to get some Java books and start off.
If you just want to run wordcount example just follow the steps in below url
http://wiki.apache.org/hadoop/WordCount
Thanks for the detailed explanation on this. I have been fighting with
this off and on in 0.20.203.
Cheers,
Andrew
On Aug 9, 2012 1:55 PM, John Armstrong j...@ccri.com wrote:
On 08/09/2012 01:52 PM, Justin Woody wrote:
Just to close the thread, the problem was that the user and group were
Unsubscribe
I am out of the office until 08/13/2012.
I am out of office.
For HAMSTER related things, you can contact Jason(Deng Peng Zhou/China/IBM)
For CFM related things, you can contact Daniel(Liang SH Su/China/Contr/IBM)
For TMB related things, you can contact Flora(Jun Ying Li/China/IBM)
For TWB
Hello Stephen,
You can use the VERSION file to verify that.
Regards,
Mohammad Tariq
On Fri, Aug 10, 2012 at 4:34 AM, Stephen Boesch java...@gmail.com wrote:
Hi, what's your take? I was thinking of checking whether a certain
always-present file exists via hadoop dfs -ls file. Other
Stephen,
A NameNode can be considered 'formatted' if it's up (it does not start
otherwise).
If you mean to check whether the NameNode has been _freshly_ formatted,
then checking that the count from hadoop fs -ls / is non-zero may help,
because by default the NameNode carries no files.
$ hadoop fs -ls / |
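The parsing behind that check can be sketched like this; only the commented lines assume a live cluster with the hadoop CLI on PATH:

```shell
# Count the entry lines in `hadoop fs -ls /` output. Entry lines start with a
# permission string (d or -), which distinguishes them from the
# "Found N items" header line.
count_entries() {
  grep -c '^[d-]'
}

# On a real cluster (assumes the hadoop CLI is available):
#   n=$(hadoop fs -ls / | count_entries)
#   [ "$n" -eq 0 ] && echo "looks freshly formatted" || echo "$n entries under /"
```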
Thanks Harsh. But there is no firewall there, the two clusters are on the
same networks. I cannot telnet to the port even on the same machine.
On Thu, Aug 9, 2012 at 6:00 PM, Harsh J ha...@cloudera.com wrote:
Hi Jian,
HFTP is always-on by default. Can you check and make sure that the
Jian,
From your NN, can you get us the output of netstat -anp | grep 50070?
On Fri, Aug 10, 2012 at 9:29 AM, Jian Fang
jian.fang.subscr...@gmail.com wrote:
Thanks Harsh. But there is no firewall there, the two clusters are on the
same networks. I cannot telnet to the port even on the same