[ https://issues.apache.org/jira/browse/HDFS-10369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon resolved HDFS-10369.
--------------------------------
    Resolution: Invalid

You're mallocing a buffer of 5 bytes here; it seems your C code is just broken.
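
For reference: sizeof(size+1) evaluates to the width of the expression's integer type (typically 4 or 8 bytes), not to size + 1 bytes, so the subsequent hdfsRead(fs, read_file, buffer, size) writes far past the end of the allocation. A minimal sketch of what the allocation presumably should look like, reusing the same (undeclared) size variable from the snippet quoted below:

  /* Allocate size + 1 bytes; sizeof(size + 1) only yields the width of the
     integer type, which is why the 128M read overruns the tiny buffer. */
  char* buffer = (char*)malloc((size_t)size + 1);
  if (buffer == NULL) {
      return -1;  /* allocation failed */
  }
  ret = hdfsRead(fs, read_file, (void*)buffer, size);  /* buffer now holds 'size' bytes */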

> hdfsRead crashes when the data read reaches 128M
> ------------------------------------------------
>
>                 Key: HDFS-10369
>                 URL: https://issues.apache.org/jira/browse/HDFS-10369
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: fs
>            Reporter: vince zhang
>            Priority: Major
>
> See the code below; it crashes after printf("hdfsGetDefaultBlockSize2:%d, ret:%d\n", hdfsGetDefaultBlockSize(fs), ret);
>
>   hdfsFile read_file = hdfsOpenFile(fs, "/testpath", O_RDONLY, 0, 0, 1);
>   int total = hdfsAvailable(fs, read_file);
>   printf("Total:%d\n", total);
>   char* buffer = (char*)malloc(sizeof(size+1) * sizeof(char));
>   int ret = -1;
>   int len = 0;
>   ret = hdfsSeek(fs, read_file, 134152192);
>   printf("hdfsGetDefaultBlockSize1:%d, ret:%d\n", hdfsGetDefaultBlockSize(fs), ret);
>   ret = hdfsRead(fs, read_file, (void*)buffer, size);
>   printf("hdfsGetDefaultBlockSize2:%d, ret:%d\n", hdfsGetDefaultBlockSize(fs), ret);
>   ret = hdfsRead(fs, read_file, (void*)buffer, size);
>   printf("hdfsGetDefaultBlockSize3:%d, ret:%d\n", hdfsGetDefaultBlockSize(fs), ret);
>   return 0;


