Devi,
libhdfs is purely a client library, so you require it only on the node
where you wish to consume it. Hence, the "client node" alone is
sufficient.
On Fri, May 18, 2012 at 10:50 PM, Hadoop wrote:
> Harsh,
>
> Thanks for the response. I was able to install it.
>
> Should I just install it in
Todd-
Thanks for your reply. I went out on a limb and started digging in the
source code and figured it was the FSImage. So I saved it, copied over
the copy from my checkpoint directory, and got running again.
I ran a few jobs to test, then ran into a problem getting a new node
running. Once again
Harsh,
Thanks for the response. I was able to install it.
Should I just install it on the client node alone, or on the namenode
and datanodes too?
Thanks,
Devi
Sent from my iPhone
On May 16, 2012, at 9:41 PM, Harsh J wrote:
> Devi,
>
> [Moving question to cdh-u...@cloudera.org, bcc'd hdfs-user@
Hi Terry,
It seems like something got truncated in your FSImage... though it's
unclear how that might have happened.
If you're able to share your logs and your dfs.name.dir contents, feel
free to contact me off-list and I can take a look to diagnose the
issue and try to recover the system.
Sorry, forgot to attach the trace:
2012-05-18 09:54:45,355 INFO
org.apache.hadoop.hdfs.server.common.Storage: Number of files = 128
2012-05-18 09:54:45,379 ERROR
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
initialization failed.
java.io.EOFException
at java.io.DataInp
Running Apache Hadoop 1.0.2, ~12 datanodes.
Ran fsck / -> OK; before this, everything was running as expected.
Started trying to use a script to assign nodes to racks, which required
several stop-dfs.sh / start-dfs.sh cycles (with some stop-all.sh /
start-all.sh too, if that matters).
Got past errors in the script and
Thanks Harsh ……
Cheers
Subroto Sanyal
On May 18, 2012, at 1:52 PM, Harsh J wrote:
> Yes this is intentional, and an incompatible change, and was done via
> https://issues.apache.org/jira/browse/HADOOP-6201 to have better API
> behavior.
>
> On Fri, May 18, 2012 at 2:04 PM, Subroto wrote:
>> Hi
Yes this is intentional, and an incompatible change, and was done via
https://issues.apache.org/jira/browse/HADOOP-6201 to have better API
behavior.
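For context, HADOOP-6201 changed FileSystem#listStatus to throw a
FileNotFoundException for a path that does not exist, instead of
returning null. A minimal sketch of that exception-based contract,
using only the JDK (java.nio.file stands in for the Hadoop FileSystem
here; the directory name is made up for illustration):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ListMissingDir {
    public static void main(String[] args) throws IOException {
        Path missing = Paths.get("definitely-no-such-dir");
        // A null-returning contract forces every caller to remember the
        // null check; an exception-based contract (the post-HADOOP-6201
        // behavior) fails loudly at the point of the bad call instead.
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(missing)) {
            for (Path p : ds) {
                System.out.println(p);
            }
        } catch (NoSuchFileException e) {
            System.out.println("missing path raised: "
                    + e.getClass().getSimpleName());
        }
    }
}
```

The same caller-side pattern applies to the Hadoop API after the
change: catch FileNotFoundException rather than null-checking the
returned array.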
On Fri, May 18, 2012 at 2:04 PM, Subroto wrote:
> Hi,
>
> I was running a simple unit test for verifying the behavior of a/m API.
> The UT is some t
Hi,
I was running a simple unit test to verify the behavior of the
aforementioned API. The UT is something like this:
public void testResolve_SimpleGlob() throws IOException {
  File folder = _tempFolder.newFolder("folder");
  File file1 = createFile(folder, "2010/test1");
  File file2 =
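The truncated test above appears to exercise simple glob resolution
against paths like "2010/test1". A self-contained sketch of the same
idea using the JDK's PathMatcher rather than the Hadoop API (the glob
pattern and paths here are illustrative, not taken from the original
test):

```java
import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

public class SimpleGlobSketch {
    public static void main(String[] args) {
        // "2010/*" matches direct children of the 2010/ directory only;
        // a single "*" does not cross directory boundaries.
        PathMatcher m = FileSystems.getDefault()
                .getPathMatcher("glob:2010/*");
        System.out.println(m.matches(Paths.get("2010/test1"))); // true
        System.out.println(m.matches(Paths.get("2010/a/b")));   // false
    }
}
```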