Hi Mich!
The block size you are referring to is used only on the datanodes. The files
that the namenode writes (the fsimage and the edit log) are not chunked using
this block size.
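For illustration, a minimal Java sketch (the path and sizes below are made up): the block size is a per-file parameter the client passes at write time, and it only governs how file data is chunked on the datanodes.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
public class BlockSizeExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Block size is chosen per file at write time by the client; it shapes how
    // DataNodes store the data, not how the NameNode stores fsimage/edits.
    long blockSize = 256L * 1024 * 1024; // 256 MB, example value only
    FSDataOutputStream out = fs.create(
        new Path("/tmp/blocksize-example"), // hypothetical path
        true,       // overwrite
        4096,       // io buffer size
        (short) 3,  // replication
        blockSize);
    out.writeBytes("hello");
    out.close();
  }
}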
HTH
Ravi
On Wednesday, March 25, 2015 8:12 AM, Dr Mich Talebzadeh
m...@peridale.co.uk wrote:
Hi,
The block
Perhaps yarn.resourcemanager.max-completed-applications ?
On Tuesday, March 17, 2015 10:02 AM, hitarth trivedi t.hita...@gmail.com
wrote:
Hi,
When I submit a job to the YARN ResourceManager the job is successful, and even the
Apps Submitted, Apps Running, Apps Completed counters
Hi Dmitry!
I suspect it's because we don't want two streams from the same DFSClient to
write to the same file. The Lease.holder is a simple string which usually
corresponds to DFSClient_someid .
HTH
Ravi.
On Tuesday, February 24, 2015 12:12 AM, Dmitry Simonov
dimmobor...@gmail.com wrote:
I am not aware of an API that would let you do this. You may be able to move an
application to a queue with 0 resources to achieve the desired behavior but I'm
not entirely sure.
On Wednesday, February 18, 2015 9:24 AM, xeonmailinglist
xeonmailingl...@gmail.com wrote:
By job, I
Hi!
There is no JobTracker in YARN. There is an ApplicationMaster. And there is a
ResourceManager. Which do you mean?
You can use the ResourceManager REST API to submit new applications
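As a rough sketch only (the RM host/port and the JSON body are placeholders, not from the original message), the two REST calls look roughly like this:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
public class RmRestSubmit {
  public static void main(String[] args) throws Exception {
    String rm = "http://your-rm-host:8088"; // placeholder
    // 1. Ask the RM for a new application id.
    HttpURLConnection c1 = (HttpURLConnection)
        new URL(rm + "/ws/v1/cluster/apps/new-application").openConnection();
    c1.setRequestMethod("POST");
    System.out.println("new-application response code: " + c1.getResponseCode());
    // 2. POST an application submission context (JSON) to actually submit it.
    HttpURLConnection c2 = (HttpURLConnection)
        new URL(rm + "/ws/v1/cluster/apps").openConnection();
    c2.setRequestMethod("POST");
    c2.setRequestProperty("Content-Type", "application/json");
    c2.setDoOutput(true);
    String body = "{ \"application-id\": \"...\", \"application-name\": \"...\" }"; // sketch only
    try (OutputStream os = c2.getOutputStream()) {
      os.write(body.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("submit response code: " + c2.getResponseCode());
  }
}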
Hi Chen!
Are you running the balancer? What are you setting
dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold and
dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction to?
On Wednesday, February 11, 2015 7:44 AM, Chen Song
Hi Chen!
From my understanding, every operation on the Namenode is logged (and flushed)
to disk / QJM / shared storage. This includes the addBlock operation. So when
a client requests to write a new block, the metadata is logged by the active
NN, so even if it crashes later on, the new active
In unit tests MiniMRYarnCluster is used to do this kind of stuff.
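A minimal sketch of that (MiniMRYarnCluster lives in the MapReduce test jars; everything else here is illustrative):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster;
public class MiniClusterExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniMRYarnCluster cluster = new MiniMRYarnCluster("experiment", 1); // name, number of NMs
    cluster.init(conf);
    cluster.start();
    // Submit jobs (wordcount, pi, ...) using cluster.getConfig() as the job Configuration.
    cluster.stop();
  }
}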
On Friday, February 6, 2015 3:51 AM, Telles Nobrega
tellesnobr...@gmail.com wrote:
Hi, I'm working on an experiment and I need to do something like, start a
hadoop job (wordcount, terasort, pi) and let the application
Hi Nur!
Thanks for your report. Please feel free to open a JIRA in the YARN project for
this: https://issues.apache.org/jira/browse/YARN
A patch would be great. Look at ClientRMService.submitApplication()
Cheers
Ravi
On Wednesday, February 4, 2015 9:34 PM, Nur Kholis Majid
You can try running fsck.
On Thursday, January 29, 2015 6:34 AM, Juraj jiv fatcap@gmail.com
wrote:
Hello,
I noticed our CDH5 cluster is not balanced and the balancer role is missing. So I
added a new balancer role via Cloudera Manager and clicked on rebalance in the HDFS
menu. After
Do you have capacity on your cluster? Did you submit it to the right queue? Go
to your scheduler page:
http://YOUR-RESOURCE-MANAGER-HOSTNAME:8088/cluster/scheduler
On Thursday, January 29, 2015 4:48 AM, Frank Lanitz
frank.lan...@sql-ag.de wrote:
Hi,
Sorry for spamming the list. ;)
you mean there is a chance it's updating the clock while the job is
running?
Regards
Fabio
On 01/26/2015 08:00 PM, Ravi Prakash wrote:
Are you running NTP?
On Friday, January 23, 2015 12:42 AM, Fabio anyte...@gmail.com wrote:
Hi guys,
while analyzing SLS logs I
Are you running NTP?
On Friday, January 23, 2015 12:42 AM, Fabio anyte...@gmail.com wrote:
Hi guys,
while analyzing SLS logs I noticed some unexpected behaviors, such as
resources requests sent before the AM container gets to a RUNNING state.
For this reason I started wondering how
Hi Matt!
Take a look at the mapreduce.jobhistory.* configuration parameters here for the
delay in moving finished jobs to the HistoryServer:
https://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
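For example, a hedged sketch (the key is the one listed in mapred-default.xml; the value is only an illustration):
import org.apache.hadoop.conf.Configuration;
public class JobHistoryMoveInterval {
  public static void main(String[] args) {
    // mapreduce.jobhistory.move.interval-ms controls how often finished jobs'
    // history files are moved to the HistoryServer's done directory (default: 3 minutes).
    Configuration conf = new Configuration();
    conf.setLong("mapreduce.jobhistory.move.interval-ms", 60000L); // example: every minute
    System.out.println(conf.get("mapreduce.jobhistory.move.interval-ms"));
  }
}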
I've seen this error hadoop is not allowed
Hi Dave!
Here is the class which is used to store all the edits:
https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java#L575
HTH
Ravi
On Monday, January 26, 2015 10:32 AM, dlmar...@comcast.net
Hi Rab!
I think you have a comma in between mapreduce and framework.name where it
should be a period. You can also try looking at the job's logs to see if the
configuration for mapreduce.framework.name was indeed passed or not.
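A minimal sketch of what the job-side setting should look like (the key and value are the standard ones; the job name is made up):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
public class FrameworkNameExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Note the period: "mapreduce.framework.name", with a value of "yarn" (or "local" / "classic").
    conf.set("mapreduce.framework.name", "yarn");
    Job job = Job.getInstance(conf, "example-job");
    System.out.println(job.getConfiguration().get("mapreduce.framework.name"));
  }
}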
HTH
On Friday, January 16, 2015 9:55 AM, rab ra
Hi Kamal!
Thanks for your initiative. Please take a look at MiniDFSCluster /
MiniJournalCluster / MiniYarnCluster etc. In your unit tests you can
essentially start a cluster in a single JVM. You can look at
TestQJMWithFaults.java
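A minimal sketch of such a test (the path below is made up):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;
public class MiniDfsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Spins up a NameNode and two DataNodes inside this JVM.
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      fs.mkdirs(new Path("/test"));                  // exercise your code against fs
      System.out.println(fs.exists(new Path("/test")));
    } finally {
      cluster.shutdown();
    }
  }
}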
HTH
Ravi
On Sunday, January 4, 2015 10:09 PM, kamaldeep
Hi Venkata!
Please feel free to open a JIRA and upload a patch. You might also try
the new s3a implementation (instead of s3n), but there's a chance that
the behavior will be the same.
Cheers
Ravi
On 10/31/14 03:23, Ravuri, Venkata Puneet wrote:
Hi Chris!
When is this error caused? Which logs do you see this in? Are you sure you are
setting the ulimit for the correct user? What application are you trying to run
which is causing you to run up against this limit?
HTH
Ravi
On Saturday, August 9, 2014 6:07 AM, Chris MacKenzie
Hi Chandra!
Replication is done according to priority (e.g. a block with only 1 of 3
replicas remaining is higher priority than one with 2 of 3 remaining).
Every time a DN heartbeats into the NN, it *may* be assigned some
replication work according to
One way I can think of is decommissioning the nodes and then basically
re-imaging them however you want to. Is that not an option?
On 05/12/14 00:18, Bharath Kumar wrote:
Hi, I have a query regarding JBOD.
Is it possible to migrate from LVM to JBOD
Avinash!
That JIRA is still open and does not seem to have been fixed. There are
a lot of issues with providing regexes though. A long standing issue has
been https://issues.apache.org/jira/browse/HDFS-13 which makes it even
harder
HTH
Ravi
On
Hi Azuryy!
You have to use dot (from Graphviz) to convert it to a PNG.
On Tuesday, April 1, 2014 6:38 PM, Azuryy Yu azury...@gmail.com wrote:
Hi,
I compiled the YARN event model using Maven, but how do I open the .gv file to
view it?
Thanks.
Hi Siddharth,
The Trash feature is enabled by setting fs.trash.interval. I'm not sure
about your question on Hive. What do you mean by the trash helping with dropped
tables?
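For illustration (the value is just an example):
import org.apache.hadoop.conf.Configuration;
public class TrashIntervalExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // fs.trash.interval is in minutes; 0 (the default) disables the trash entirely.
    // With it set, "hadoop fs -rm" moves files into the user's .Trash directory
    // instead of deleting them immediately.
    conf.setLong("fs.trash.interval", 1440); // keep deleted files for one day (example)
    System.out.println(conf.get("fs.trash.interval"));
  }
}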
On Friday, October 25, 2013 3:08 AM, Siddharth Tiwari
siddharth.tiw...@live.com wrote:
How can I enable trash in
Hi!
This parameter triggers a merge (sort) of fetched map outputs on the reducer node when
the number of in-memory map outputs exceeds memToMemMergeOutputsThreshold. It is
disabled by default. I am guessing this was put in on the premise that it might
be faster to
sort a smaller number of streams even in
Hi!
Tom White's Hadoop: The Definitive Guide is probably the best source for
information on this (apart from the code itself ;-) ). Look at
MergeManagerImpl.java, by the way, in case you are so inclined.
HTH
Ravi
On Friday, October 25, 2013 2:36 PM, - commodor...@ymail.com wrote:
Hi All,
Can
Viswanathan,
What version of Hadoop are you using? What is the change?
On Wednesday, October 23, 2013 2:20 PM, Viswanathan J
jayamviswanat...@gmail.com wrote:
Hi guys,
If I update (a very small change) the hadoop-core mapred class for one of the OOME
patches and compile the jar, and if I deploy
Hi Rico!
What was the command line you used to build?
On Wednesday, October 23, 2013 11:44 PM, codepeak gcodep...@gmail.com wrote:
Hi all,
I have a problem when compiling Hadoop 2.2.0: Apache only offers a
32-bit distribution, but I need 64-bit, so I have to compile it myself.
My
Hi Prashant!
You can set yarn.resourcemanager.max-completed-applications in the RM's yarn-site.xml
to limit the maximum number of apps it keeps track of (the default is
10000). You're right that the heap may also be increased.
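A small sketch to illustrate the key (this property is read by the RM, so in practice it goes in the RM's yarn-site.xml; the value here is arbitrary):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
public class MaxCompletedAppsExample {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    // Fewer retained completed apps means less RM heap spent tracking them.
    conf.setInt("yarn.resourcemanager.max-completed-applications", 1000); // example value
    System.out.println(conf.get("yarn.resourcemanager.max-completed-applications"));
  }
}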
HTH
Ravi
On Monday, October 21, 2013 5:54 PM, Prashant Kommireddi
Sam, I would guess that the jar file you think is running is not actually the
one. I am guessing that in the task classpath there is a normal jar file
(without your changes) which is being picked up before your modified jar file.
On Thursday, October 17, 2013 10:13 PM, sam liu
Hi!
You can go to the JMX page, http://namenode:50070/jmx, to find out what the
heap memory usage is. Yes, we know that there is a problem in the scripts. I
believe it's being handled as part of
https://issues.apache.org/jira/browse/HADOOP-9902
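A minimal sketch of reading it programmatically (hostname and port are placeholders):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
public class NameNodeJmx {
  public static void main(String[] args) throws Exception {
    // Query just the JVM memory bean; drop the ?qry=... part to dump all beans.
    URL url = new URL("http://namenode:50070/jmx?qry=java.lang:type=Memory");
    try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // JSON containing HeapMemoryUsage etc.
      }
    }
  }
}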
On Friday, October 18, 2013 2:07 AM, ch
To: common-u...@hadoop.apache.org common-u...@hadoop.apache.org; Ravi
Prakash ravi...@ymail.com
Sent: Monday, October 7, 2013 5:55 AM
Subject: Re: datanode tuning
Thanks Ravi. The number of nodes isn't a lot but the size is rather large.
Each data node has about 14-16T (560-640T).
For the datanode
Please look at dfs.heartbeat.interval and
dfs.namenode.heartbeat.recheck-interval
40 datanodes is not a large cluster IMHO and the Namenode is capable of
managing 100 times more datanodes.
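A sketch of the two keys with their defaults (the dead-node formula in the comment is approximate):
import org.apache.hadoop.conf.Configuration;
public class HeartbeatTuningExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Defaults: datanodes heartbeat every 3 seconds; the namenode rechecks every 5 minutes.
    // A node is roughly declared dead after 2 * recheck-interval + 10 * heartbeat-interval
    // (about 10.5 minutes with the defaults), so lowering these shortens dead-node detection.
    conf.setInt("dfs.heartbeat.interval", 3);                       // seconds
    conf.setInt("dfs.namenode.heartbeat.recheck-interval", 300000); // milliseconds
  }
}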
From: Rita rmorgan...@gmail.com
To: common-user@hadoop.apache.org
environment)?
--
Best Regards,
Karim Ahmed Awara
On Wed, Oct 2, 2013 at 1:13 AM, Ravi Prakash ravi...@ymail.com wrote:
Karim!
You should read BUILDING.txt . I usually generate the eclipse files using
mvn eclipse:eclipse
Then I can import all the projects into eclipse as eclipse projects
Is security on? I'm not entirely sure (and I think it might be illuminating to
the rest of us when you work this out, so please email back when you do), but I
am guessing that a code change may be required. I think I remember someone
telling me that hostnames are reverse-lookup'd to verify
Karim!
Look at DFSOutputStream.java:DataStreamer
HTH
Ravi
From: Karim Awara karim.aw...@kaust.edu.sa
To: user user@hadoop.apache.org
Sent: Thursday, September 26, 2013 7:51 AM
Subject: Re: Uploading a file to HDFS
Thanks for the reply. When the client
Karim!
You should read BUILDING.txt . I usually generate the eclipse files using
mvn eclipse:eclipse
Then I can import all the projects into eclipse as eclipse projects. This is
useful for code navigation and completion etc.; however, I still build using the
command line:
mvn -Pdist
Tom! I would guess that just giving the NN JVM lots of memory (64Gb / 96Gb)
should be the easiest way.
From: Tom Brown tombrow...@gmail.com
To: user@hadoop.apache.org user@hadoop.apache.org
Sent: Wednesday, September 25, 2013 11:29 AM
Subject: Is there any
Kiran,
hadoop-0.18 is a VERY old version (Probably 5 years old). Please consider
trying out a newer version.
You can follow these steps inside a VM to get a single node cluster running:
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
HTH
Ravi.
http://lucene.472066.n3.nabble.com/Assigning-reduce-tasks-to-specific-nodes-td4022832.html
From: Mark Olimpiati markq2...@gmail.com
To: user@hadoop.apache.org
Sent: Friday, September 6, 2013 1:47 PM
Subject: assign tasks to specific nodes
Hi guys,
I'm
I believe https://issues.apache.org/jira/browse/MAPREDUCE-5399 causes
performance degradation in cases where there are a lot of reducers. I can
imagine it causing degradation if the configuration files are super big / some
other weird cases.
From: Krishna
Hi John!
If your block is going to be replicated to three nodes, then in the default
block placement policy, 2 of them will be on the same rack, and a third one
will be on a different rack. Depending on the network bandwidths available
intra-rack and inter-rack, writing with replication
What version of Hadoop are you planning on using? You will probably have to
partition the resources too. E.g. if you are using 0.23 / 2.0, the NM's available
resource memory will have to be split across all the nodes
From: Sandeep L sandeepvre...@outlook.com
To:
should send
a notification by other means.
On Sat, Jun 22, 2013 at 2:38 PM, Ravi Prakash ravi...@ymail.com wrote:
Hi Prashant,
I would tend to agree with you. Although job-end notification is only a
best-effort mechanism (i.e. we cannot always guarantee notification for
example when the AM
Hi Pavan,
I assure you this configuration works. The problem is very likely in your
configuration files. Please look them over once again. Also did you restart
your daemons after changing the configuration? Some configurations necessarily
require a restart.
Ravi.
Hi Prashant,
I would tend to agree with you. Although job-end notification is only a
best-effort mechanism (i.e. we cannot always guarantee notification for
example when the AM OOMs), I agree with you that we can do more. If you feel
strongly about this, please create a JIRA and possibly
Hi Steve,
You can use fs.permissions.umask-mode to set the appropriate umask
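A minimal sketch (the umask value is just an example):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.FsPermission;
public class UmaskExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // With a umask of 002, new files default to 664 and new directories to 775.
    conf.set("fs.permissions.umask-mode", "002");
    System.out.println(FsPermission.getUMask(conf));
  }
}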
From: Steve Lewis lordjoe2...@gmail.com
To: common-user common-user@hadoop.apache.org
Sent: Monday, May 20, 2013 9:33 AM
Subject: Default permissions for Hadoop output files
I am
This is not unexpected behavior. If there are fetch failures on the Reduce
(i.e. it's not able to get the map outputs) then a map may be rerun.
From: David Parks davidpark...@yahoo.com
To: user@hadoop.apache.org user@hadoop.apache.org
Sent: Monday, March 11,
Hi Amit,
The job history files are stored in HDFS. eg.
/mapred/history/done/2013/02/06/00
I would think that there would be some changes in the format.
Thanks
Ravi
From: sangroya sangroyaa...@gmail.com
To: hadoop-u...@lucene.apache.org
Sent: Tuesday,
D-oh!
Thanks for discovering this. Sorry for my silly mistake. Filed and patched
https://issues.apache.org/jira/browse/MAPREDUCE-4786 .
Thanks
Ravi
From: Harsh J ha...@cloudera.com
To: user@hadoop.apache.org
Sent: Friday, November 9, 2012 2:24 PM
A simple search ought to have found this for you.
http://hadoop.apache.org/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/Federation.html
From: Visioner Sadak visioner.sa...@gmail.com
To: user@hadoop.apache.org
Sent: Saturday, October 27, 2012 2:03 AM
...@gmail.com
To: common-user@hadoop.apache.org; Ravi Prakash ravi...@ymail.com
Sent: Monday, October 22, 2012 6:46 PM
Subject: Re: measuring iops
Is it possible to know how many reads and writes are occurring thru the
entire cluster in a consolidated manner -- this does not include
replication factors
Hi Rita,
SliveTest can help you measure the number of reads / writes / deletes / ls /
appends per second your NameNode can handle.
DFSIO can be used to help you measure the amount of throughput.
Both these tests are actually very flexible and have a plethora of options to
help you test
Hi Adrian,
Please use user@hadoop.apache.org for user-related questions
Which version of Hadoop are you using? Where do you want the object? In a
map/reduce task? For the currently executing job or for a different job?
In 0.23, you can use the RM webservices.
Maybe at a slight tangent, but for each write operation on HDFS (e.g. create a
file, delete a file, create a directory), the NN waits until the edit has been
*flushed* to disk. So I can imagine such a hypothetical(?) disk would
tremendously speed up the NN even as it is. Mark, can you please
Hi Amit,
In your command the iopath directory ( ~/Desktop/test_gridmix_output )
doesn't seem to be an HDFS location. I believe it needs to be HDFS.
HTH
Ravi.
On Mon, Jun 18, 2012 at 11:16 AM, sangroya sangroyaa...@gmail.com wrote:
Hello Ravi,
Thanks for your response.
I got started by
Hi Amit,
I doubt that it is a problem with some version of hadoop. Could you please
post the stack trace if it's available? Along with the output of hadoop
version? How did you set up your cluster? Did you set up mapred and hdfs
users? It seems to me more likely that the user trying to run the job,
Hi Todd,
It might be useful to try the CDH user mailing list too. I'm afraid I
haven't used CDH, so I'm not entirely certain.
The fact that after you run your Java program, the NN has created a
directory and a 0-byte file means you were able to contact and interact
with the NN just fine. I'm
:
/************************************************************
SHUTDOWN_MSG: Shutting down TaskTracker at WSUSJXLHRN13067/192.168.0.16
************************************************************/
Any clue? Thanks
Regards,
ravi
On Fri, May 18, 2012 at 12:01 AM, Ravi Prakash ravihad...@gmail.comwrote:
Ravishankar,
If you run $ jps, do you see
Ravishankar,
If you run $ jps, do you see a TaskTracker process running? Can you please
post the tasktracker logs as well?
On Thu, May 17, 2012 at 8:49 PM, Ravishankar Nair
ravishankar.n...@gmail.com wrote:
Dear experts,
Today is my tenth day working with Hadoop on installing on my windows
Shuai,
I'm afraid I don't know if there are open-sourced implementations of those
two algorithms, but there are a bunch of example programs in the
hadoop-mapreduce-examples*.jar that gets built / distributed.
Valid program names are:
aggregatewordcount: An Aggregate based map/reduce program
Hi Dave,
I'm not entirely certain, but briefly looking at code:
http://search-hadoop.com/c/Hadoop:/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java%7C%7C%252Bebs+%252Bhdfs
Line 2679:
if ((curReplicas == 0)
I would think it's a bug if at any datanode more than 5 blocks are being
moved at any time for balancing. How are you checking how many blocks are
being moved?
On Fri, May 4, 2012 at 3:19 AM, andreina andrein...@huawei.com wrote:
Hi All,
I started NN and DN1 with replication factor
Hi Pat,
20.205 is the stable version before 1.0. 1.0 is not substantially different
from 0.20. Any reason you don't want to use it?
I don't think occasional HDFS corruption is a known issue. That would be,
umm... let's just say pretty severe. Are you sure you've configured it
properly?
Your task
Hi David,
If a DN is decommissioned and returns, does the NM update its block metadata?
If it does, how does it decide when to update the metadata map?
I presume you meant the NN updating its block metadata. The answer is
yes. As soon as the NN decides that a node is dead / decommissioned and
Hi Andreina,
You are right! FSNamesystem.java:3156 (and line 3023) has a less-than
comparison, when in fact it ought to be <= . Could you please file a JIRA?
Thanks
Ravi.
if (numLive < datanodeThreshold) {
  if (!"".equals(msg)) {
    msg += "\n";
  }
  msg +=
Or maybe it's a flaky network connection? Perhaps you can do a ping and
check that the network link is reliable?
The only daemon that needs to be up is the Namenode, and unless you are taking
it down and bringing it back up often (please don't), you should not see
that message.
2012/4/25 Lukáš Kryške
Hi Pedro,
I know it's sub-optimal but you should be able to put in as many
System.out.println / log messages as you want and you should be able to see
them in stdout, and syslog files. Which version of hadoop are you using?
On Thu, Mar 29, 2012 at 10:33 AM, Pedro Costa psdc1...@gmail.com wrote:
, Ravi Prakash ravihad...@gmail.com wrote:
Hi Pedro,
I know it's sub-optimal but you should be able to put in as many
System.out.println / log messages as you want and you should be able to see
them in stdout, and syslog files. Which version of hadoop are you using?
On Thu, Mar 29, 2012
Also check out Hadoop Rumen
On Thu, Mar 29, 2012 at 10:22 AM, Tom Deutsch tdeut...@us.ibm.com wrote:
Matthieu - you are welcome to contact me off list for assistance with Jaql.
---
Sent from my Blackberry so please excuse typing and spelling errors.
Sector-Sphere
On Mon, Jan 30, 2012 at 4:24 PM, Ronald Petty ronald.pe...@gmail.comwrote:
R.V.,
Are you looking for the platforms that do distributed computation or the
larger ecosystems like programming APIs, etc.?
Here are some platforms:
C-Squared
Globus
Condor
Here are some
Take a look at distributed cache for distributing data to all nodes. I'm
not sure what you mean by messages. The MR programming paradigm is
different from MPI.
http://hadoop.apache.org/common/docs/r0.18.3/mapred_tutorial.html#DistributedCache
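A minimal old-API sketch (the HDFS path and symlink name are hypothetical):
import java.net.URI;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.mapred.JobConf;
public class CacheExample {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf();
    // Ship an HDFS file to every task's working area; tasks read it locally via the "lookup" symlink.
    DistributedCache.addCacheFile(new URI("/user/me/lookup.dat#lookup"), conf); // hypothetical path
    DistributedCache.createSymlink(conf);
    // ... set mapper/reducer, input/output paths, then submit with JobClient.runJob(conf) ...
  }
}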
On Sat, Jan 28, 2012 at 5:52 AM, Oliaei
Courtesy Kihwal and Bobby
Have you tried increasing the max heap size with -Xmx? Also make sure that
you have swap enabled.
On Wed, Jan 11, 2012 at 6:59 PM, Gaurav Bagga gbagg...@gmail.com wrote:
Hi
hadoop-0.19
I have a working hadoop cluster which has been running perfectly for
months.
From what I know, the number of containers will depend on the amount of
resources your node has. If it has 8 GB RAM and each container has 2 GB,
then there'll be a maximum of 4 containers.
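A back-of-the-envelope sketch using the memory properties (the values are just the example numbers from above):
import org.apache.hadoop.conf.Configuration;
public class ContainerCountExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // 8 GB offered by the NodeManager, 2 GB per container => at most 4 containers on that node.
    conf.setInt("yarn.nodemanager.resource.memory-mb", 8192);
    conf.setInt("yarn.scheduler.minimum-allocation-mb", 2048);
    int containers = conf.getInt("yarn.nodemanager.resource.memory-mb", 8192)
        / conf.getInt("yarn.scheduler.minimum-allocation-mb", 2048);
    System.out.println("max containers (by memory): " + containers);
  }
}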
On Tue, Jan 10, 2012 at 5:44 AM, raghavendhra rahul
raghavendhrara...@gmail.com wrote:
Hi,
Couldn't you write a simple wrapper around your binary, include the binary
using the -file option and use Streaming?
Or use the distributed cache to copy your binaries to all the compute
nodes.
On Tue, Jan 10, 2012 at 5:01 PM, Daren Hasenkamp dhasenk...@berkeley.edu wrote:
Hi,
I would like to
Hi,
Clearly the jar file containing the class
org/apache/hadoop/conf/Configuration is not available on the CLASSPATH.
Did you build hadoop properly? The way I would usually check this is:
1. I have a shell script findclass
#!/bin/sh
LOOK_FOR="$1"
if [ -z "$LOOK_FOR" ]; then
echo -e Usage:
Hi Jeremy,
Couple of questions:
1. Which version of Hadoop are you using?
2. If you write something into HDFS, can you subsequently read it?
3. Are you sure your secondarynamenode configuration is correct? It seems
like your SNN is telling your NN to roll the edit log (move the journaling
Actually I take that back. Restarting the NN might not result in loss of
data. It will probably just take longer to start up because it would read
the fsimage, then apply the fsedits (rather than the SNN doing it).
On Wed, Sep 7, 2011 at 10:46 AM, Ravi Prakash ravihad...@gmail.com wrote:
Hi
, how could it replay those?
-jeremy
On Sep 7, 2011, at 8:48 AM, Ravi Prakash wrote:
Actually I take that back. Restarting the NN might not result in loss of
data. It will probably just take longer to start up because it would read
the fsimage, then apply the fsedits (rather than the SNN
:
Things still work in hdfs but the edits file is not being updated.
Timestamp is sept 2nd.
-jeremy
On Sep 7, 2011, at 9:45 AM, Ravi Prakash ravihad...@gmail.com wrote:
If your HDFS is still working, the fsimage file won't be getting updated
but
the edits file still should. That's why I asked
Hi Avi,
If you restored the metadata from Step 1, it would have no memory of what
happened after that point. I guess you could try using the Offline Image
Viewer and the OfflineEditsViewer tools, to read the corrupted metadata and
see if you can recover the blocks from there.
Cheers
Ravi
On
In short, you MUST use privileged resources.
In long:
Here's what I did to set up a secure single-node cluster. I'm sure there are
other ways, but here's how I did it.
1. Install krb5-server
2. Set up the Kerberos configuration (files attached):
/var/kerberos/krb5kdc/kdc.conf and
: Ravi Prakash ravihad...@gmail.com
Date: Thu, Aug 18, 2011 at 4:19 PM
Subject: Help running built artifacts
To: common-user@hadoop.apache.org
Hi,
http://wiki.apache.org/hadoop/HowToContribute is a great resource detailing
the steps needed to build jars and tars from the source code. However, I am
Hi,
http://wiki.apache.org/hadoop/HowToContribute is a great resource detailing
the steps needed to build jars and tars from the source code. However, I am
still not sure what the best way to run hadoop servers (NN, SNN, DNs, JT,
TTs) using those built jars is. Could we all please reach consensus
Hi folks,
I think I remember seeing a few of changes in the starting scripts. Can
someone please point me to a twiki containing steps on how to start up
Hadoop from a source checkout? The method I used to use (export
HADOOP_HDFS_HOME, HADOOP_COMMON_HOME, and then calling hdfs/bin/start-dfs.sh
Set this? In the datanode configuration or the namenode?
It seems the default is set to 3 seconds.
On Tue, Mar 29, 2011 at 5:37 PM, Ravi Prakash ravip...@yahoo-inc.com wrote:
I set these parameters for quickly discovering live / dead nodes.
For 0.20 : heartbeat.recheck.interval
For 0.22
I set these parameters for quickly discovering live / dead nodes.
For 0.20 : heartbeat.recheck.interval
For 0.22 : dfs.namenode.heartbeat.recheck-interval dfs.heartbeat.interval
Cheers,
Ravi
On 3/29/11 10:24 AM, Michael Segel michael_se...@hotmail.com wrote:
Rita,
When the NameNode doesn't