DFSInputStream.available();
2012/6/13 Mohammad Tariq
> Hello list,
>
> Is it possible to find the end of a file stored in HDFS using the
> HDFS API? Currently I am comparing FSDataInputStream.getPos() with
> FileSystem.getFileStatus().getLen() to serve the purpose. Thank you
>
> Regards,
>
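A minimal sketch of both checks (not from the original thread), assuming a Hadoop 1.x-style client and a hypothetical file path:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsEofCheck {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/tmp/sample.txt"); // hypothetical path

    long length = fs.getFileStatus(file).getLen();
    FSDataInputStream in = fs.open(file);
    try {
      byte[] buf = new byte[4096];
      while (in.read(buf) != -1) {
        // End of file when the stream position reaches the file length ...
        boolean atEof = in.getPos() >= length;
        // ... or when the stream reports no bytes left. Note that
        // available() returns an int, so it is capped for files > 2 GB.
        boolean atEofToo = in.available() == 0;
      }
    } finally {
      in.close();
    }
    fs.close();
  }
}

Either check works; the getPos()/getLen() comparison avoids the int cap of available() on very large files.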
such
application sub-directory to reduce the impact they have on each other? It can be
used with five (or fewer) top-level HDFS paths, each with an individual lock in a
tree-like structure.
Can anybody provide feedback or advice? Thanks
-Regards
Denny Ye
hi Peter,
Is there no operation log at all? Please use the state change log at
debug level.
I checked the code for creating a 'symlink'. It looks like a regular
operation, like the other NameNode interfaces. GC, the directory lock, or the
shell script are the points I would suspect.
-Regards
Denny Ye
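A small sketch of how the state change log could be switched to DEBUG (assuming the log4j 1.x that ships with Hadoop; editing conf/log4j.properties and restarting the daemon is the usual way):

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class EnableStateChangeLog {
  public static void main(String[] args) {
    // The NameNode writes its block/lease state transitions to the
    // "org.apache.hadoop.hdfs.StateChange" logger. Raising it to DEBUG is
    // equivalent to adding
    //   log4j.logger.org.apache.hadoop.hdfs.StateChange=DEBUG
    // to conf/log4j.properties.
    Logger.getLogger("org.apache.hadoop.hdfs.StateChange").setLevel(Level.DEBUG);
  }
}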
s only one tip for your doubt.
-Regards
Denny Ye
2012/2/19 Tianqiang Peter Li
> Hi, guys,
> I am using Scribe to write data directly to HDFS. It works well most of
> the time, but sporadically I find that some of the files (1-2 out of 6000 per day)
> written to HDFS become empty files, in
hi Ajay
Does the file related to that
blockId (blk_4884628009930930282_210741) still exist in HDFS?
Your setting is correct; it applies to new files written to HDFS after the
configuration takes effect.
-Regards
Denny Ye
2012/2/15 Harsh J
> Ajay,
>
> Replication is a per-file propert
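To illustrate the per-file point (a sketch, not from the original thread, using a hypothetical path): dfs.replication only sets the default for files created afterwards, while existing files keep their factor until it is changed explicitly.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SetReplicationExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path existing = new Path("/data/old-file.log"); // hypothetical path

    // A file created before the dfs.replication change keeps its old
    // replication factor; change it per file like this.
    fs.setReplication(existing, (short) 3);

    fs.close();
  }
}

From the shell, 'hadoop fs -setrep 3 /data/old-file.log' (with -R for a whole directory tree) does the same.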
Sure, it's the temporary code, you can delete that line.
-Regards
Denny Ye
2012/2/16 Stuti Awasthi
> Thanks Denny,
>
>
> Commenting out the line below in the HDFSProxy.java class and rebuilding it. Will
> this help?
>
>
> sslConf.set("pr
In my local Hadoop version, I saw temporary code with a nonexistent
property name.
Hashtable does not accept 'null' as a value.
It's a mistake in the unit test.
-Regards
Denny Ye
2012/2/16 Stuti Awasthi
> Hi all,
>
> Any pointers for this ?
>
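A tiny illustration of the Hashtable point above (plain Java, not the HDFSProxy code itself): java.util.Hashtable, and therefore java.util.Properties, rejects null values, so setting a property to null fails immediately.

import java.util.HashMap;
import java.util.Hashtable;

public class NullValueDemo {
  public static void main(String[] args) {
    new HashMap<String, String>().put("key", null); // fine: HashMap allows null values

    try {
      new Hashtable<String, String>().put("key", null); // throws NullPointerException
    } catch (NullPointerException expected) {
      System.out.println("Hashtable rejected the null value");
    }
  }
}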
kReader(s, src, blk.getBlockId(),
                accessToken,
                blk.getGenerationStamp(),
                offsetIntoBlock, blk.getNumBytes() - offsetIntoBlock,
                buffersize, verifyChecksum, clientName);
*******
-Regards
Denny Ye
Check the NameNode 50070 web interface to find out the
concrete DataNodes. It may be caused by the failure of a DataNode. Tracking
down the final root cause will need your own investigation. Good luck
-Regards
Denny Ye
On Wed, Nov 9, 2011 at 2:00 PM, Steve Lewis wrote:
> Just recently my hadoop jobs started failing with C
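For reference, a sketch (not from the original thread) of listing the same DataNodes that the 50070 web UI shows, assuming the client has the HDFS configuration on its classpath:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class ListDataNodes {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    DistributedFileSystem dfs = (DistributedFileSystem) fs;

    // One entry per DataNode known to the NameNode, as on the web UI.
    for (DatanodeInfo dn : dfs.getDataNodeStats()) {
      System.out.println(dn.getName());
    }
    fs.close();
  }
}

'hadoop dfsadmin -report' prints a similar report from the command line.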
Incomplete
structured data. HDFS does nothing for this mechanism.
-Regards
Denny Ye
On Fri, Nov 11, 2011 at 3:43 PM, 臧冬松 wrote:
> Usually a large file in HDFS is split into blocks and stored on different
> DataNodes.
> A map task is assigned to deal with one such block; I wonder what if the
>
is
the total number of bytes spilled to disk).
HDFS_BYTES_READ only represents the map input bytes read from HDFS.
My blog post below explains the 'Shuffle' phase (in Chinese):
http://langyu.iteye.com/blog/992916
--Regards
Denny Ye
2011/6/15 hailong.yang1115
>
> S
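As a sketch of reading these counters back from a finished job (old mapred API; job setup is elided; the group and counter names are the Hadoop 1.x defaults):

import org.apache.hadoop.mapred.Counters;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;

public class PrintJobIoCounters {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(PrintJobIoCounters.class);
    // ... mapper/reducer and input/output paths go here ...

    RunningJob job = JobClient.runJob(conf);
    Counters counters = job.getCounters();

    // HDFS_BYTES_READ: map input bytes actually read from HDFS.
    long hdfsRead =
        counters.findCounter("FileSystemCounters", "HDFS_BYTES_READ").getCounter();
    // FILE_BYTES_WRITTEN: bytes written to local disk, which includes spill output.
    long localWritten =
        counters.findCounter("FileSystemCounters", "FILE_BYTES_WRITTEN").getCounter();

    System.out.println("HDFS_BYTES_READ    = " + hdfsRead);
    System.out.println("FILE_BYTES_WRITTEN = " + localWritten);
  }
}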