Re: How to find EOF

2012-06-12 Thread Denny Ye
DFSInputStream.available();

2012/6/13 Mohammad Tariq:
> Hello list,
> Is it possible to find the end of a file stored in HDFS using the HDFS API? Currently I am comparing FSDataInputStream.getPos() with FileSystem.getFileStatus().getLen() to serve the purpose. Thank you.
> Regards,
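The two idioms from this thread, sketched with plain java.io stand-ins so the snippet runs without a cluster. With HDFS you would use FSDataInputStream.getPos(), FileSystem.getFileStatus(path).getLen(), and DFSInputStream.available(); the class and method names below are local-filesystem analogues, not the HDFS API.

```java
import java.io.*;

public class EofCheck {
    // Pattern 1 (Tariq's approach): current position vs. known file length.
    static boolean atEof(long pos, long fileLen) {
        return pos >= fileLen;
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("eof", ".bin");
        try (FileOutputStream out = new FileOutputStream(f)) {
            out.write(new byte[]{1, 2, 3});
        }
        try (FileInputStream in = new FileInputStream(f)) {
            in.read(new byte[3]); // drain the stream
            // Pattern 1: position has reached the length reported by metadata.
            System.out.println(atEof(in.getChannel().position(), f.length()));
            // Pattern 2 (Denny's suggestion): available() drops to 0 at EOF,
            // as with DFSInputStream.available() reporting remaining bytes.
            System.out.println(in.available() == 0);
        }
        f.delete();
    }
}
```

One caveat worth noting: InputStream.available() is in general only an estimate, so the getPos()/getLen() comparison is the more robust of the two checks.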

Does NameNode directory need fine-grained path lock?

2012-05-08 Thread Denny Ye
…could such per-application sub-directories reduce the impact applications have on each other? It could be applied to five (or fewer) top-level HDFS paths, each with an individual lock in a tree-like structure. Can anybody provide feedback or advice? Thanks. -Regards Denny Ye

Re: Nonempty files become empty after being saved to Hdfs.

2012-02-23 Thread Denny Ye
hi Peter, Is there really no operation log at all? Please enable the state-change log at debug level. I checked the code for creating a 'symlink'; it looks like a regular operation, consistent with the other NameNode interfaces. GC, the directory lock, or the shell script may be points of doubt. -Regards Denny Ye

Re: Nonempty files become empty after being saved to Hdfs.

2012-02-19 Thread Denny Ye
…s only one tip for your doubt. -Regards Denny Ye

2012/2/19 Tianqiang Peter Li:
> Hi guys, I am using scribe to write data directly to HDFS. It works well most of the time, but sporadically I find that some files (1-2 out of 6000 per day) written to HDFS become empty files, in

Re: Replication

2012-02-16 Thread Denny Ye
hi Ajay, Does the file associated with that block ID (blk_4884628009930930282_210741) still exist in HDFS? Your setting is right: it applies to files newly written to HDFS after the configuration takes effect. -Regards Denny Ye

2012/2/15 Harsh J:
> Ajay,
> Replication is a per-file propert

Re: facing issues in HDFSProxy

2012-02-16 Thread Denny Ye
Sure, it's temporary code; you can delete that line. -Regards Denny Ye

2012/2/16 Stuti Awasthi:
> Thanks Denny,
> Commenting out the below line in the HDFSProxy.java class and rebuilding it. Will this help?
> sslConf.set("pr

Re: facing issues in HDFSProxy

2012-02-16 Thread Denny Ye
In my local Hadoop version, I saw temporary code referencing a nonexistent property name. Hashtable does not accept 'null' as a normal value; it's a mistake in the unit test. -Regards Denny Ye

2012/2/16 Stuti Awasthi:
> Hi all,
> Any pointers for this?

Re: How-to use DFSClient's BlockReader from Java

2012-01-09 Thread Denny Ye
…newBlockReader(s, src, blk.getBlockId(), accessToken, blk.getGenerationStamp(), offsetIntoBlock, blk.getNumBytes() - offsetIntoBlock, buffersize, verifyChecksum, clientName); -Regards Denny Ye

Re: Could not obtain block

2011-11-11 Thread Denny Ye
…check the NameNode 50070 web interface to find the specific DataNodes. It may be caused by a DataNode failure; tracking down the final root cause is up to you. Good luck. -Regards Denny Ye

On Wed, Nov 9, 2011 at 2:00 PM, Steve Lewis wrote:
> Just recently my hadoop jobs started failing with C

Re: structured data split

2011-11-11 Thread Denny Ye
Incomplete structured records are left as-is: HDFS does nothing about this; it splits purely by byte offset. -Regards Denny Ye

On Fri, Nov 11, 2011 at 3:43 PM, 臧冬松 wrote:
> Usually a large file in HDFS is split into blocks and stored on different DataNodes. A map task is assigned to deal with one such block; I wonder what happens if the
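Since HDFS splits by byte offset only, it is the MapReduce input layer that repairs records cut at a boundary. The sketch below shows the idea behind TextInputFormat/LineRecordReader over an in-memory byte array: a reader whose split does not start at offset 0 skips the partial first line (its predecessor will have read it), and every reader keeps reading past its split end to finish its last line. The method name here is illustrative, not Hadoop's.

```java
import java.nio.charset.StandardCharsets;
import java.util.*;

public class SplitReader {
    // Read the newline-delimited records "owned" by the byte range [start, end].
    static List<String> readSplit(byte[] data, int start, int end) {
        int pos = start;
        if (start != 0) { // skip the partial first record; the previous split finishes it
            while (pos < data.length && data[pos] != '\n') pos++;
            pos++; // step past the newline
        }
        List<String> records = new ArrayList<>();
        while (pos < data.length && pos <= end) { // last record may cross 'end'
            int lineStart = pos;
            while (pos < data.length && data[pos] != '\n') pos++;
            records.add(new String(data, lineStart, pos - lineStart,
                                   StandardCharsets.UTF_8));
            pos++;
        }
        return records;
    }

    public static void main(String[] args) {
        byte[] data = "aaa\nbbbb\ncc\n".getBytes(StandardCharsets.UTF_8);
        // Split the 12 bytes in the middle of "bbbb": [0,5] and [6,11].
        System.out.println(readSplit(data, 0, 5));  // [aaa, bbbb]
        System.out.println(readSplit(data, 6, 11)); // [cc]
    }
}
```

Note that every record comes out exactly once even though the boundary falls inside "bbbb": the first split finishes it, and the second split skips it.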

Re: Fw: Problems about the job counters

2011-06-29 Thread Denny Ye
…is the total bytes spilled to disk). HDFS_BYTES_READ only represents the map input bytes read from HDFS. See my blog post (in Chinese) explaining the 'Shuffle' phase: http://langyu.iteye.com/blog/992916 --Regards Denny Ye

2011/6/15 hailong.yang1115:
> S