check namenode logs for any issues -- in which case
> your backup would be essential for recovery!).
>
> P.s. Hold on for a bit for a possible comment from another user before
> getting into action. I've added extra directories this way, but I do
> not know if
This should be a straightforward question, but better safe than sorry.
I wanted to add a second name node directory (on an NFS as a backup), so now
my hdfs-site.xml contains:
  <property>
    <name>dfs.name.dir</name>
    <value>/mnt/hadoop/name</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/public/hadoop/name</value>
  </property>
When I go to start DFS I'm ge
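(For reference: a single dfs.name.dir property normally carries a comma-separated list of directories, and the namenode then writes its image and edit log to every directory in that list. A minimal sketch, reusing the two paths from the question above:)

    <property>
      <name>dfs.name.dir</name>
      <!-- comma-separated list; the name table is replicated to each directory -->
      <value>/mnt/hadoop/name,/public/hadoop/name</value>
    </property>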
-- Forwarded message --
From: mike anderson
Date: Thu, Feb 10, 2011 at 11:57 AM
Subject: multiple namenode directories
To: core-u...@hadoop.apache.org
This should be a straightforward question, but better safe than sorry.
I wanted to add a second name node directory (on an NFS
fsck / -delete"
>
> Brian
>
> On Jan 21, 2011, at 2:12 PM, mike anderson wrote:
>
> > Also, here's the output of dfsadmin -report. What seems weird is that it's
> > not reporting any missing blocks. BTW, I tried doing fsck / -delete, but I
Decommission Status : Normal
Configured Capacity: 472054276096 (439.63 GB)
DFS Used: 130888634368 (121.9 GB)
Non DFS Used: 151224688640 (140.84 GB)
DFS Remaining: 189940953088(176.9 GB)
DFS Used%: 27.73%
DFS Remaining%: 40.24%
Last contact: Fri Jan 21 15:10:46 EST 2011
On Fri, Jan 21, 2011 at 3:03 PM, mike ande
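(The commands being discussed in this thread, written out in full; a sketch assuming they are run as the HDFS superuser on a 0.20-era cluster:)

    hadoop dfsadmin -report    # per-datanode capacity summary like the excerpt above
    hadoop fsck /              # scan the namespace for corrupt or missing blocks
    hadoop fsck / -delete      # delete corrupt files; irreversible, only if the data is expendable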
> Thanks
> -Todd
>
> On Thu, Mar 4, 2010 at 11:37 AM, mike anderson wrote:
>
>> Removing edits.new and starting worked, though it didn't seem that
>> happy about it. It started up nonetheless, in safe mode, saying that
>> "The ratio of reported bl
recommended - after you get your system
> back up and running, I would strongly suggest running with at least two,
> preferably with one on a separate server via NFS.
>
> Thanks
> -Todd
>
> On Thu, Mar 4, 2010 at 9:05 AM, mike anderson wrote:
>
> > We have a single df
Is your namenode configured with multiple dfs.name.dir settings?
>
> If so, can you please reply with "ls -l" from each dfs.name.dir?
>
> Thanks
> -Todd
>
> On Thu, Mar 4, 2010 at 8:57 AM, mike anderson wrote:
>
> > Our hadoop cluster went down last night
Our hadoop cluster went down last night when the namenode ran out of hard
drive space. Trying to restart fails with this exception (see below).
Since I don't really care that much about losing a day's worth of data or so
I'm fine with blowing away the edits file if that's what it takes (we don't
ha
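(The fix that another message in this thread reports working, removing edits.new, sketched as shell steps; the /mnt/hadoop/name path is only an assumption carried over from the configuration excerpt above, and copying the directory first matters because removing edits.new throws away the most recent namespace edits:)

    cp -a /mnt/hadoop/name /mnt/hadoop/name.bak   # back up the whole name directory first
    rm /mnt/hadoop/name/current/edits.new         # drop the partially written edit log
    start-dfs.sh                                  # restart HDFS and watch the namenode log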
oot.logger}, EventCounter, Socket
On Thu, Aug 20, 2009 at 11:16 AM, Edward Capriolo wrote:
> On Thu, Aug 20, 2009 at 10:49 AM, mike anderson wrote:
> > Yeah, that is interesting, Edward. I don't need syslog-ng for any particular
> > reason, other than that I'm fa
log4j.appender.SYSLOG.facility=local0
> > log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
> > log4j.appender.SYSLOG.layout.ConversionPattern=%p %c{2}: %m%n
> > log4j.appender.SYSLOG.SyslogHost=red
> > log4j.appender.SYSLOG.threshold=ERROR
> > log4j.appender.SYSLOG.Header=true
> > log4j.appender.S
Has anybody had any luck setting up the log4j.properties file to send logs
to a syslog-ng server?
My log4j.properties excerpt:
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.syslogHost=10.0.20.164
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.app
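(For comparison, a complete minimal SyslogAppender setup; a sketch assuming log4j 1.2, the syslog-ng host from the excerpt above, and that the appender still has to be attached to a logger before anything is emitted:)

    log4j.rootLogger=INFO, SYSLOG
    log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
    log4j.appender.SYSLOG.SyslogHost=10.0.20.164
    log4j.appender.SYSLOG.Facility=local0
    log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
    log4j.appender.SYSLOG.layout.ConversionPattern=%p %c{2}: %m%n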