[ https://issues.apache.org/jira/browse/AVRO-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559966#comment-13559966 ]

Yin Huai commented on AVRO-1208:
--------------------------------

[~jadler] I used Hive's SerDe to serialize data to bytes, so the columns in a 
Trevni file should have type ValueType.BYTES. Thus, in InputBuffer, readBytes 
should be called instead of readString. Can you share your testing results 
with me?
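
To make that concrete, here is a minimal sketch of how a BYTES column could be 
read back through the public reader API (the column name "row" is only an 
example, and the calls are from my memory of the 1.7.x API, so please 
double-check them):

    import java.io.File;
    import java.nio.ByteBuffer;
    import org.apache.trevni.ColumnFileReader;
    import org.apache.trevni.ColumnValues;

    public class ReadBytesColumn {
      public static void main(String[] args) throws Exception {
        ColumnFileReader reader = new ColumnFileReader(new File(args[0]));
        // A column declared as ValueType.BYTES should come back as raw
        // buffers, i.e. the reader goes through InputBuffer.readBytes(),
        // not readString().
        ColumnValues<ByteBuffer> values = reader.getValues("row");
        while (values.hasNext()) {
          ByteBuffer bytes = values.next();
          // hand "bytes" to the Hive SerDe for deserialization here
        }
        reader.close();
      }
    }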

Also, when I was testing Trevni locally last year, I found that if HadoopInput 
is used with the local FS, the read method in ChecksumFSInputChecker is used, 
and that method opens the file and checks the checksum on every read call. 
This behavior can also cause poor read performance.
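
Roughly, the workaround I tried looks like the sketch below: open local files 
through java.io so reads skip ChecksumFSInputChecker, and fall back to 
HadoopInput otherwise. The helper itself is hypothetical, and the InputFile / 
HadoopInput constructors are written from memory, so verify them against your 
Hadoop and Trevni versions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.trevni.Input;
    import org.apache.trevni.InputFile;
    import org.apache.trevni.avro.HadoopInput;

    public class TrevniInputs {
      // Hypothetical helper: choose a Trevni Input implementation by scheme.
      static Input open(Path path, Configuration conf) throws Exception {
        FileSystem fs = path.getFileSystem(conf);
        if ("file".equals(fs.getUri().getScheme())) {
          // Local file: read via java.io and avoid ChecksumFSInputChecker.
          return new InputFile(new java.io.File(path.toUri().getPath()));
        }
        return new HadoopInput(path, conf);  // HDFS and other filesystems
      }
    }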
                
> Improve Trevni's performance on row-oriented data access
> --------------------------------------------------------
>
>                 Key: AVRO-1208
>                 URL: https://issues.apache.org/jira/browse/AVRO-1208
>             Project: Avro
>          Issue Type: Improvement
>    Affects Versions: 1.7.3
>            Reporter: Yin Huai
>
> Trevni uses a 64KB internal buffer to store the values of a column. When 
> accessing a column, it reads 64KB of data (ignoring compression and 
> checksums) from the storage layer. However, when the table is accessed in 
> a row-oriented fashion (an entire row needs to be handed over to the upper 
> layer), in the worst case (a full table scan where all values of the table 
> are the same size), every 64KB read can incur a seek.
> This JIRA is for discussing whether we should account for the data access 
> pattern described above and, if so, how to improve Trevni's performance. 
> Row-oriented data processing engines, e.g. Hive, can benefit from this work.
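
To put rough numbers on the access pattern described above (a back-of-envelope 
illustration only; the 1 GB figure is an assumption, not from the issue):

    public class SeekEstimate {
      public static void main(String[] args) {
        long buffer  = 64L * 1024;       // Trevni's per-column buffer
        long column  = 1L << 30;         // assume 1 GB of data in one column
        long refills = column / buffer;  // ~16,384 buffer refills
        // In a row-oriented full scan the column streams are interleaved, so
        // in the worst case nearly every refill is preceded by a seek.
        System.out.println("worst-case seeks per column ~ " + refills);
      }
    }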

