up on which hard disk.
Thanks!
JS
--
Todd Lipcon
Software Engineer, Cloudera
...@yahoo.com
Sent: Thursday, May 10, 2012 3:57 AM
Subject: Re: High load on datanode startup
On Thu, May 10, 2012 at 9:33 AM, Todd Lipcon t...@cloudera.com wrote:
That's really weird...
If you can reproduce this after a reboot, I'd recommend letting the DN
run for a minute
.
:
Hi Harsh J,
It seems that the 20% performance loss is not that bad; at least some
smart people are still working to improve it. I will keep an eye on this
interesting trend.
Shi
good (recent) war stories on the migration between the
two ?
It's interesting to me that Cloudera and Amazon are that difficult to swap
in/out in the cloud.
is a little low. Just wondering if
there's a better chat channel for Hadoop other than the official one
(#hadoop on freenode)?
In any case... I'm on there :) come say hi.
--
Jay Vyas
MMSB/UCHC
/addendum/
Synopsis: It slowed performance on the 10KB and 1KB tests and still failed
the 100 byte and 10 byte tests with *Concurrent mode failure*
- Doug
talk2had...@gmail.com wrote:
Hello All,
If I change the rack ID for some nodes and restart the namenode, will data be
rearranged accordingly? Do I need to run the rebalancer?
Any information on this would be appreciated.
Thanks and Regards
Ravi
<value>10</value>
</property>
Any thoughts of a feature like this?
--
--- Get your facts first, then you can distort them as you please.--
~lucid
-r 95a824e4005b2a94fe1c11f1ef9db4c672ba43cb
Compiled by root on Thu Oct 13 21:52:18 PDT 2011
From source with checksum 644e5db6c59d45bca96cec7f220dda51
Thanks, Guy
On Thu 15 Dec 2011 11:39:26 AM IST, Todd Lipcon wrote:
Hi Guy,
Several questions come to mind here:
- What was the exact
don't think anyone's currently working on this, but if you wanted to
contribute I can point you in the right direction. If you happen to be
at the SF HUG tonight, grab me and I'll give you the rundown on what
would be needed.
-Todd
: Wednesday, December 07, 2011 12:40 PM
To: common-user@hadoop.apache.org
Subject: HDFS Backup nodes
Does Hadoop 0.20.205 support configuring HDFS backup nodes?
Thanks,
Praveenesh
--
Joseph Echeverria
Cloudera, Inc.
443.305.9434
that hundreds of other
companies have found: HDFS is stable and reliable and has no such GC
of death problems when used as intended.
-Todd
shouldn't it know where the files are in
HDFS?
Thanks,
Chris
and not any kind
of fork as suggested above.
Back to work on 0.23 for me!
-Todd
if you are adventurous you
could consider building your own copy of Hadoop with this patch
applied. I've tested it on a cluster and am fairly confident it is safe.
Thanks
-Todd
On Fri, Oct 7, 2011 at 2:15 PM, Todd Lipcon t...@cloudera.com wrote:
Hi Chris,
You may be hitting HDFS-2379.
Can you grep
on 100 MB
Searching here or online doesn't show a lot about what this error
means
and
how to fix it.
We are running 0.20.2, r911707
Any suggestions?
Thanks,
Chris
. But is it
something you should really lose sleep over? Do you understand that there are
risks and there are improbable risks?
http://www.datacenterknowledge.com/archives/2007/05/07/averting-disaster-with-the-epo-button/
-Todd
Tel: +30 210 772 2546
Mobile: +30 6939354121
Fax: +30 210 772 2569
Email: gkous...@mail.ntua.gr
Site: http://users.ntua.gr/gkousiou/
National Technical University of Athens
9 Heroon Polytechniou str., 157 73 Zografou, Athens, Greece
,
-Shrinivas
, 2011 11:07:57 AM
Subject: Append to Existing File
Hi All,
Is append to an existing file now supported in Hadoop for production
clusters?
If yes, please let me know which version and how.
Thanks
Jagaran
--
Eric
the current (up-to-date) copy
of the file system too.
I do not understand what the use case would be (in a production environment)
in which someone would prefer a Checkpoint node over a Backup node. Or I should
ask: which of the two do people generally prefer, and why?
in the US (it could be better of course) and we do not need
innovation like wikileaks offers to stay free, like open source the US is
always changing and innovating.
Wikileaks represents irresponsible use of technology that should be avoided.
for innovation than other nominees...
(ie cutting out the mention of iPad/wikileaks?)
If not, I will change it tomorrow.
-Todd
at hadoop/192.168.217.134
/
--
Eduardo Dario Ricci
Cel: 14-81354813
MSN: thenigma...@hotmail.com
ideas
please speak your mind).
Also, any development resources to get me started are welcomed.
[1] http://james.apache.org/mailbox/
[2] https://issues.apache.org/jira/browse/MAILBOX-44
Regards,
--
Ioan Eugen Stan
, this is on Apache distro 0.21.0.
-Shrinivas
of hadoop
yet.
I am getting the error when using methods in hadoop.util.NativeCodeLoader.
Can somebody help me build the native libraries?
I am using the 64-bit version of Ubuntu on an AMD processor.
--
Sincerely,
Ajay Anandan.
MSc,Computing Science, University of Alberta.
On Wed, Feb 23, 2011 at 11:21 AM, Todd Lipcon t...@cloudera.com wrote:
Hi Ajay,
Hadoop should ship with built artifacts for amd64 in
the lib/native/Linux-amd64-64/ subdirectory of your tarball. You just need
to put this directory on your java.library.path system property.
-Todd
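Todd's instructions above can be sketched as a small shell fragment; the install location below is a placeholder of my own, not something from the thread:

```shell
# Placeholder install location -- adjust to wherever the tarball was unpacked
HADOOP_HOME=/opt/hadoop-0.20.2
NATIVE_DIR="$HADOOP_HOME/lib/native/Linux-amd64-64"

# Pass the directory to the JVM via the java.library.path system property
echo java -Djava.library.path="$NATIVE_DIR" -cp "$HADOOP_HOME/hadoop-core.jar" MyApp
```

With the property set, NativeCodeLoader should load the native-hadoop library instead of falling back to the pure-Java implementations.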
You need
-no-authority-tp30813534p30813534.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
)
2011-01-07 04:14:52,673 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
--
Regards
Shuja-ur-Rehman Baig
http://pk.linkedin.com/in/shujamughal
-and-getLocalPathForWrite-tp2199517p2199517.html
.nabble.com/LocalDirAllocator-and-getLocalPathForWrite-tp2199517p2202221.html
into in order to fix this problem?
Thanks in advance.
-Jon
documented somewhere? especially that part
where
the output of mappers is partitioned, sorted and spilled to the disk. I
tried to
understand it, but it's rather complex. Is there any document that can help
me
understand it?
Thanks,
Da
NNBench it could be a general contribution?
-Todd
On Dec 27, 2010, at 8:21 PM, Todd Lipcon wrote:
Hi Stefan,
Sounds interesting.
Maybe you're looking for o.a.h.ipc.Server$Responder?
-Todd
On Mon, Dec 27, 2010 at 8:07 PM, Stefan Groschupf s...@101tec.com wrote:
Hi All
ideas?
Thanks,
Stefan
file with more columns runs 10X slower than the large file, which is only
2X slower.
Anybody have any idea why file input is slower under Hadoop?
make sure our capacity
calculations based on # of files and # of directories are correct. We're
using: 0.20.2, r911707
Thanks,
Chris
in a vote on
https://issues.cloudera.org/browse/DISTRO-64.
The details about the fix are here:
https://issues.apache.org/jira/browse/MAPREDUCE-1938
Roger
. For example, if HDFS
is mounted as a local file system, then only user hadoop has write/delete
permissions.
Can this privilege be given to another user? In other words, is this
hadoop user hard-coded, or can another be used in its stead?
Thank you,
Mark
Engineer,
Impetus Infotech (India) Private Limited,
www.impetus.com
Mob:09907074413
to succeed.
Thanks in Advance
);
everything works fine.
Am I forgetting a step needed when using MultipleOutputs, or is this a
bug/non-feature of using LZO compression in Hadoop?
Thank you!
~Ed
with the active NN.
Q1. In which situations do we need a CN?
Q2. If the NameNode machine fails, how does the required manual
intervention differ between a BN and a CN?
Thanks.
Shen
,
Martin
PD: Hbase is also running on the cluster.
.
... based upon what the client sends, not what the server knows.
Not anymore in trunk or the security branch - now it's mapped on the
server side with a configurable resolver class.
-Todd
between 0.20 and above.
-Todd
On Friday, August 06, 2010 09:05:47 am Todd Lipcon wrote:
On Thu, Aug 5, 2010 at 4:52 PM, Bobby Dennett bdenn...@gmail.com
wrote:
Hi Josh,
No real pain points... just trying to investigate/research the best
way to create the necessary libraries and jar
-at-twitter-part-1-splittable-lzo-compression/
Thanks in advance,
-Bobby
data.
http://www.roadtofailure.com -- The Fringes of Scalability, Social
Media, and Computer Science
Sarathy
of the above. See the packages
org.apache.hadoop.{hdfs,mapred}.{protocol,server.protocol}
Regards,
Ahmad Shahzad
different? Is there anything wrong of my understanding? Does anybody have
any experience on this? Badly need your help, thanks.
Best Regards,
Carp
--
Best Regards
Jeff Zhang
that I can change
to reduce the chances of the problem?
Any tips for diagnosing and troubleshooting?
Thanks!
Tan
</property>
</configuration>
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://centosxcat1</value>
</property>
</configuration>
cat conf/hdfs-site.xml
the data.
2) Open existing file, skip to any position and update the data.
Correct, neither of those are allowed.
This will be even with FUSE.
Is this correct?
Regards.
-1057 for example.
Or the file must be written and closed completely, before it becomes
available for other nodes?
(AFAIK in 0.18.3 the file appeared as 0 size until it was closed).
Regards.
you can buy these days you'll have plenty of
space.
-Todd
to the secondary location of dfs.name.dir.
Is the approach outlined below the preferred/suggested way to do this? Is
this what people mean when they say, "stick it on NFS"?
Thanks!
On May 17, 2010, at 11:14 PM, Todd Lipcon wrote:
On Mon, May 17, 2010 at 5:10 PM, jiang licht licht_ji...@yahoo.com
sum quis ego servo
Je suis ce que je protège
I am what I protect
Hey Brian,
Looks like it's not deadlocked, but rather just busy doing a lot of work:
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$HeartbeatMonitor@1c778255
daemon prio=10 tid=0x2aaafc012c00 nid=0x493 runnable
[0x413da000..0x413daa10]
java.lang.Thread.State: RUNNABLE
.
I've done a workaround to avoid issues
stemming from that, it would be unlikely to pass a backport vote.
-Todd
On Thu, May 13, 2010 at 11:50 PM, Todd Lipcon t...@cloudera.com wrote:
Hi Raghava,
Yes, that's a patch targeted at 0.20, but I'm not certain whether it
applies
on the vanilla
On Tue, May 11, 2010 at 7:33 AM, stephen mulcahy
stephen.mulc...@deri.orgwrote:
On 23/04/10 15:43, Todd Lipcon wrote:
Hi Stephen,
Can you try mounting ext4 with the nodelalloc option? I've seen the same
improvement due to delayed allocation but have been a little nervous about that
option
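For anyone who wants to try the option Todd mentions, it can be applied with a remount; the device and mountpoint below are placeholders, not from the thread:

```shell
# Placeholder device/mountpoint for a DN data disk
mount -o remount,nodelalloc /dev/sdb1 /data/1

# or persistently, via an /etc/fstab entry:
# /dev/sdb1  /data/1  ext4  defaults,noatime,nodelalloc  0 0
```

nodelalloc disables ext4's delayed allocation, trading some throughput for more predictable on-disk state after a crash.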
get through the spam filter. At a loss.
Does anyone have any idea what may trigger it? What can I do to not have it
tag me?
Thanks,
/ Oscar
. But I digress...)
Thanks in advance.
Joseph
is extremely slow, but it seems to
only affect Hadoop. Any ideas?
Thanks,
-Scott
PM, Todd Lipcon wrote:
Hey Scott,
This is indeed really strange... if you do a straight hadoop fs -put
with
dfs.replication set to 1 from one of the DNs, does it upload slow? That
would cut out the network from the equation.
-Todd
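The test Todd describes would look roughly like this (a sketch; the file and HDFS path are placeholders, and it needs a live cluster to run):

```shell
# Run from one of the datanodes; with replication 1 the write stays local,
# which takes the network out of the equation
time hadoop fs -D dfs.replication=1 -put /tmp/testfile /tmp/testfile
```

If this upload is also slow, the bottleneck is local disk or the DN itself rather than the network path.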
On Fri, Apr 16, 2010 at 5:29 PM, Scott Carey sc
On Tue, Apr 13, 2010 at 4:13 AM, stephen mulcahy
stephen.mulc...@deri.orgwrote:
Todd Lipcon wrote:
Most likely a kernel bug. In previous versions of Debian there was a buggy
forcedeth driver, for example, that caused it to drop off the network in
high load. Who knows what new bug
that and achieved 100% utilization
even with small map tasks). But I am wondering if doing so will
violate some fairness properties.
Thanks,
Abhishek
:
PacketResponder 2 for block blk_991235084167234271_101356 terminating
As they seem to precede some HBase problem, I would like to understand
what it means.
Thx for any help,
Al
as new as what you're talking about. Hadoop doesn't exercise the new
features in very recent kernels, so there's no sense accepting instability -
just go with something old that works!
-Todd
Doh, a couple more silly bugs in there. Don't use that version quite yet -
I'll put up a better patch later today. (Thanks to Kevin and Ted Yu for
pointing out the additional problems)
-Todd
On Wed, Apr 7, 2010 at 5:24 PM, Todd Lipcon t...@cloudera.com wrote:
For Dmitriy and anyone else who
OK, fixed, unit tests passing again. If anyone sees any more problems let
one of us know!
Thanks
-Todd
On Thu, Apr 8, 2010 at 10:39 AM, Todd Lipcon t...@cloudera.com wrote:
Doh, a couple more silly bugs in there. Don't use that version quite yet -
I'll put up a better patch later today
, but as this code is in HDFS, I am asking
this list)
Thx
Al
is in thread_dump files and the jstack -m result is in
java_native_frames.
The files ending with nn are the namenode results and the files ending with
dn are the datanode results.
Thanks,
Edson Ramiro
On 6 April 2010 18:19, Todd Lipcon t...@cloudera.com wrote:
Hi Edson,
Can you please run jstack
, can you see ethernet broadcast traffic at all? Do
you see anything in dmesg on the machine in question?
Thanks
-Todd
, and the output of running it on our cluster.
We are using the CDH2 distribution.
https://gist.github.com/e1bf7e4327c7aef56303
Any ideas on what could be going on?
Thanks,
-Dmitriy
On 30 March 2010 05:58, Steve Loughran ste...@apache.org wrote:
Edson Ramiro wrote:
I'm not involved with Debian community :(
I think you are now...
(Client.java:860)
at org.apache.hadoop.ipc.Client.call(Client.java:720)
... 13 more
Edson Ramiro
?
Thanks,
Gokul
version 1.6.0_17
Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)
Edson Ramiro
On 29 March 2010 17:14, Todd Lipcon t...@cloudera.com wrote:
Hi Edson,
It looks like for some reason your kernel does not have epoll enabled
available in CDH2:
http://archive.cloudera.com/cdh/2/hadoop-0.20.1+169.56.tar.gz
-Todd
:54 PM, Todd Lipcon t...@cloudera.com wrote:
Sorry, I actually meant ls -l from name.dir/current/
Having only one dfs.name.dir isn't recommended - after you get your
system
back up and running I would strongly suggest running with at least two,
preferably with one on a separate server via
On Thu, Feb 25, 2010 at 11:09 AM, Scott Carey sc...@richrelevance.comwrote:
On Feb 15, 2010, at 9:54 PM, Todd Lipcon wrote:
Hey all,
Just a note that you should avoid upgrading your clusters to 1.6.0u18.
We've seen a lot of segfaults or bus errors on the DN when running
with this JVM
Hi Neo,
See this bug:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=560044
as well as the discussion here:
http://issues.apache.org/jira/browse/HADOOP-6056
Thanks
-Todd
On Wed, Feb 24, 2010 at 9:16 AM, neo anderson
javadeveloper...@yahoo.co.uk wrote:
While running example programe
Hi Saptarshi,
Can you please ssh into the JobTracker node and check that this
directory is mounted, writable by the hadoop user, and not full?
-Todd
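Todd's three checks can be scripted; the directory below is a placeholder for the JobTracker's mapred.local.dir, not a value from the thread:

```shell
# Placeholder for mapred.local.dir on the JobTracker node
DIR=${DIR:-/tmp/mapred_local_check}
mkdir -p "$DIR"

# 1) directory present (i.e. the mount is there)?
[ -d "$DIR" ] && echo "exists"
# 2) writable by the current user?
touch "$DIR/.write_test" && rm "$DIR/.write_test" && echo "writable"
# 3) not full? show percent used on the containing filesystem
df -P "$DIR" | awk 'NR==2 {print $5 " used"}'
```

In the real case you would run this as the hadoop user, e.g. via sudo -u hadoop.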
On Fri, Feb 19, 2010 at 2:13 PM, Saptarshi Guha
saptarshi.g...@gmail.com wrote:
Hello,
Not sure if i should post this here or on Cloudera's
Hi Xiao,
Are you sure that your secondary namenode was properly running before
the dataloss event?
Did you configure multiple dfs.name.dirs for your NN?
-Todd
P.S. I moved this thread to common-user - probably a better place than -general.
On Sun, Feb 21, 2010 at 8:20 PM, xiao yang
On Fri, Feb 19, 2010 at 12:41 AM, Thomas Koch tho...@koch.ro wrote:
Hi,
yesterday I read the documentation of zookeeper and the zk contrib bookkeeper.
From what I read, I thought that BookKeeper would be the ideal enhancement
for the namenode, to make it distributed and therefore finally
Are you passing the python script to the cluster using the -file
option? eg -mapper foo.py -file foo.py
Thanks
-Todd
On Wed, Feb 17, 2010 at 7:45 PM, Dan Starr dsta...@gmail.com wrote:
Hi, I've tried posting this to Cloudera's community support site, but
the community website
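Todd's -file suggestion in context, as a sketch of a full streaming invocation (the jar path and HDFS directories are placeholders, and it needs a cluster to run):

```shell
# Placeholder jar path and HDFS directories
hadoop jar "$HADOOP_HOME"/contrib/streaming/hadoop-streaming-*.jar \
    -input  input_dir \
    -output output_dir \
    -mapper foo.py \
    -reducer /bin/cat \
    -file foo.py    # ships foo.py into each task's working directory
```

Without -file, the tasks try to exec foo.py on the worker nodes, where the script does not exist.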
Hi Song,
What version are you running? How much memory have you allocated to
the reducers in mapred.child.java.opts?
-Todd
On Tue, Feb 16, 2010 at 4:01 PM, Song Liu lamfeeli...@gmail.com wrote:
Sorry, seems no attachment is allowed, I paste it here:
Jobid Priority User Name Map %
On Mon, Feb 15, 2010 at 8:07 AM, Steve Kuo kuosen...@gmail.com wrote:
On Sun, Feb 14, 2010 at 12:46 PM, Todd Lipcon t...@cloudera.com wrote:
By the way, if all files have been indexed, DistributedLzoIndexer does not
detect that, and Hadoop throws an exception complaining that the input dir