HDFS APIs

2010-05-26 Thread Vidur Goyal
I was trying to run hdfs_test.c, which lives in the
$HADOOP_HOME/src/c++/libhdfs directory of Hadoop, but I am having trouble
running it properly. Can somebody guide me on how to run it?

vidur


how to decode the metadata file of a block

2010-06-09 Thread Vidur Goyal
Hi,

Can somebody give me some insight into how to read the contents of a
block's metadata file using the HDFS APIs, and the encoding that is used?

Thanks,
Vidur
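
For what it is worth, in the 0.20-era on-disk layout each block file
blk_<id> sits next to a blk_<id>_<genstamp>.meta file. My reading of
BlockMetadataHeader/DataChecksum (treat the exact field layout as an
assumption, not gospel) is: a 2-byte version, a 1-byte checksum type,
a 4-byte bytesPerChecksum, then one 4-byte CRC32 per chunk of block data.
A sketch of a header reader using plain java.io:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class MetaFileReader {
    // Parses the header of a block metadata (.meta) file and returns a
    // summary string. Assumed layout: short version, byte checksumType,
    // int bytesPerChecksum; the rest of the file is 4-byte CRC32 values.
    public static String readHeader(DataInputStream in) throws IOException {
        short version = in.readShort();      // metadata file format version
        byte checksumType = in.readByte();   // 0 = NULL, 1 = CRC32 (assumed)
        int bytesPerChecksum = in.readInt(); // bytes covered by each CRC
        return "version=" + version
                + " type=" + checksumType
                + " bytesPerChecksum=" + bytesPerChecksum;
    }

    public static void main(String[] args) throws IOException {
        // Fake header bytes: version 1, CRC32, 512 bytes per checksum.
        byte[] header = {0, 1, 1, 0, 0, 2, 0};
        DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(header));
        System.out.println(readHeader(in)); // version=1 type=1 bytesPerChecksum=512
    }
}
```

DataInput reads are big-endian, which is why the fake header bytes decode
the way they do; pointing the same reader at a real .meta file would be
the next step.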

-- 
This message has been scanned for viruses and
dangerous content by MailScanner, and is
believed to be clean.



Re: Appending and seeking files while writing

2010-06-11 Thread Vidur Goyal
hadooprocks,

As a project requirement I have to do the same and write a seek()
operation for FSDataOutputStream. It would be very kind of you to give me
some insight into this. I have explored the web on how to recompile Hadoop
once I change the sources; can you point me to some documents that would
help me do that as well?

Thanks,
Vidur


> Stas,
>
> I also believe that there should be a seek interface on the write path so
> that the FS API is complete. FSDataInputStream already supports seek(),
> so FSDataOutputStream should too. For file systems that do not support
> seek on the write path, seek can be a no-op.
>
> Could you open a JIRA to track this? I am willing to provide the patch if
> you do not have the time to do so.
>
> thanks
> hadooprocks
>
>
>  On Thu, Jun 10, 2010 at 5:05 AM, Stas Oskin  wrote:
>
>> Hi.
>>
>> Was the append functionality finally added to the 0.20.1 version?
>>
>> Also, is the ability to seek within a file being written, and to write
>> data at another position, also supported?
>>
>> Thanks in advance!
>>
>
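
The idea in the quoted proposal — a seek() on the output path that
degrades to a no-op where the file system cannot honor it — can be
sketched over a local file with plain java.io. None of this is Hadoop
API; SeekableOutput and localSeekable are made-up names, just an
illustration of the interface shape:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Hypothetical interface mirroring what a seekable FSDataOutputStream
// might expose on the write path.
interface SeekableOutput {
    void seek(long pos) throws IOException; // may be a no-op
    void write(byte[] b) throws IOException;
}

public class SeekDemo {
    // A local-file implementation where seek() really repositions.
    static SeekableOutput localSeekable(RandomAccessFile raf) {
        return new SeekableOutput() {
            public void seek(long pos) throws IOException { raf.seek(pos); }
            public void write(byte[] b) throws IOException { raf.write(b); }
        };
    }

    public static void main(String[] args) throws IOException {
        java.io.File f = java.io.File.createTempFile("seekdemo", ".bin");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            SeekableOutput out = localSeekable(raf);
            out.write("hello".getBytes());
            out.seek(0);               // rewind on the write path
            out.write("J".getBytes()); // overwrite the first byte
        }
        byte[] back = java.nio.file.Files.readAllBytes(f.toPath());
        System.out.println(new String(back)); // Jello
    }
}
```

A file system without write-path seek would implement seek() as `{}` (or
throw), which is exactly the no-op fallback the quoted message suggests.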





Re: Appending and seeking files while writing

2010-06-13 Thread Vidur Goyal
Append is supported in Hadoop 0.20.


> Hi.
>
> I think this really depends on the append functionality; any idea
> whether it supports that behaviour now?
>
> Regards.
>
> On Fri, Jun 11, 2010 at 10:41 AM, hadooprocks wrote:
>
>> Stas,
>>
>> I also believe that there should be a seek interface on the write path
>> so that the FS API is complete. FSDataInputStream already supports
>> seek(), so FSDataOutputStream should too. For file systems that do not
>> support seek on the write path, seek can be a no-op.
>>
>> Could you open a JIRA to track this? I am willing to provide the patch
>> if you do not have the time to do so.
>>
>> thanks
>> hadooprocks
>>
>>
>>  On Thu, Jun 10, 2010 at 5:05 AM, Stas Oskin 
>> wrote:
>>
>> > Hi.
>> >
>> > Was the append functionality finally added to the 0.20.1 version?
>> >
>> > Also, is the ability to seek within a file being written, and to
>> > write data at another position, also supported?
>> >
>> > Thanks in advance!
>> >
>>
>
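
For what the discussed append semantics mean in practice — reopen an
existing file and continue writing at its current end — here is the
local-file equivalent in plain java.io. (The HDFS counterpart in the 0.20
line is FileSystem.append(), gated by the dfs.support.append setting;
treat that detail as my recollection rather than a confirmed fact.)

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;

public class AppendDemo {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("appenddemo", ".txt");
        f.deleteOnExit();

        // First writer creates the file and closes it.
        try (FileOutputStream out = new FileOutputStream(f)) {
            out.write("abc".getBytes());
        }
        // Append mode: a later writer continues at the current end of file
        // instead of truncating.
        try (FileOutputStream out = new FileOutputStream(f, true)) {
            out.write("def".getBytes());
        }
        System.out.println(new String(Files.readAllBytes(f.toPath()))); // abcdef
    }
}
```

Note that append only ever extends the file; it does not give you the
write-path seek discussed above, which is why the two features are
separate questions in this thread.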





problem setting up development environment for hadoop

2010-06-13 Thread Vidur Goyal
Hello All,

I have been trying to set up a development environment for HDFS using this
link: http://wiki.apache.org/hadoop/EclipseEnvironment , but the project
gives errors after the build completes; certain files are missing. Please
help!

vidur




setting up hadoop 0.20.1 development environment

2010-06-14 Thread Vidur Goyal
Hi,

I am trying to set up a development environment for Hadoop 0.20.1 in
Eclipse. I checked the source out from
http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1/ and ran
the "compile", "compile-core-test", and "eclipse-files" targets with ant.
When I then build the project, I get errors in the bin/benchmarks
directory. I have followed the screencast from Cloudera:
http://www.cloudera.com/blog/2009/04/configuring-eclipse-for-hadoop-development-a-screencast/.

Thanks,
Vidur




How is a block allocated?

2010-06-27 Thread Vidur Goyal
Hi all,

If I have a LocatedBlock object and I want to link it to a file, how
should I proceed? What is the process by which a block gets linked to a
file?


Regards,
Vidur






create error

2010-07-03 Thread Vidur Goyal
Hi,

I am trying to create a file in HDFS by calling create() on an instance of
DFSClient. This is the part of the code I am using:

byte[] buf = new byte[65536];
int len;
while ((len = dis.available()) != 0) {
    if (len < buf.length) {
        break;
    } else {
        dis.read(buf, 0, buf.length);
        ds.write(buf, 0, buf.length);
    }
}

dis is a DataInputStream for the local file system from which I am copying
the file, and ds is the DataOutputStream to HDFS.

These are the errors I get:

2010-07-03 13:45:07,480 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode:
DatanodeRegistration(127.0.0.1:50010,
storageID=DS-455297472-127.0.0.1-50010-1278144155322, infoPort=50075,
ipcPort=50020):DataXceiver
java.io.EOFException: while trying to read 65557 bytes
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:265)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:309)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:373)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:525)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:357)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
        at java.lang.Thread.run(Thread.java:636)


When I run the loop a number of times that is a multiple of the block
size, the operation runs just fine. As soon as I change the buffer array
size to a non-block size, it starts giving errors. I am in the middle of a
project; any help will be appreciated.
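
For reference, the loop above trusts available() as an end-of-stream test
and assumes read() always fills the buffer; neither holds in general, so
a short final chunk is dropped or stale buffer bytes get rewritten. A copy
loop that honors the count returned by read(), sketched with plain
java.io (the HDFS streams plug into it the same way):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyLoop {
    // Copies everything from in to out, tolerating partial reads.
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[65536];
        long total = 0;
        int n;
        // read() returns the number of bytes actually read, or -1 at EOF;
        // write exactly that many bytes, never the full buffer length.
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // 70000 is deliberately not a multiple of the 65536-byte buffer,
        // the case that broke the original loop.
        byte[] data = new byte[70000];
        java.util.Arrays.fill(data, (byte) 7);
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), sink);
        System.out.println(copied + " "
                + java.util.Arrays.equals(data, sink.toByteArray()));
    }
}
```

With this shape the last partial chunk is written with its true length,
so the DataNode never receives a packet padded with leftover bytes.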

thanks
vidur
