[ 
https://issues.apache.org/jira/browse/DRILL-4154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15038414#comment-15038414
 ] 

Rahul Challapalli commented on DRILL-4154:
------------------------------------------

[~parthc] I couldn't reproduce this issue either. It may well have been a user 
error on my side, and the fact that upgrading the parquet files did not update 
the directory timestamp caused some confusion. That should be fixed in the 
upgrade tool.
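
For reference, here is a minimal sketch of the kind of timestamp comparison 
that can make a cache look up to date even though its contents changed. The 
real check lives in Drill's metadata code and may differ; the cache file name 
used below is only the conventional one, not verified against this setup.

{code}
import os

def cache_looks_stale(table_dir, cache_file=".drill.parquet_metadata"):
    """Illustrative only: treat the cache as stale when the directory was
    modified after the cache file was written."""
    cache_path = os.path.join(table_dir, cache_file)
    if not os.path.exists(cache_path):
        return True
    # Rewriting the parquet files in place does not necessarily bump the
    # directory mtime, so this check can still report "fresh" right after
    # an upgrade that changed the files but not the directory.
    return os.path.getmtime(table_dir) > os.path.getmtime(cache_path)
{code}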

> Metadata Caching : Upgrading cache to v2 from v1 corrupts the cache in some 
> scenarios
> -------------------------------------------------------------------------------------
>
>                 Key: DRILL-4154
>                 URL: https://issues.apache.org/jira/browse/DRILL-4154
>             Project: Apache Drill
>          Issue Type: Bug
>            Reporter: Rahul Challapalli
>            Assignee: Parth Chandra
>            Priority: Critical
>         Attachments: broken-cache.txt, fewtypes_varcharpartition.tar.tgz, 
> old-cache.txt
>
>
> git.commit.id.abbrev=46c47a2
> I copied the data along with the cache file onto maprfs and then ran the 
> upgrade tool (https://github.com/parthchandra/drill-upgrade). Next I ran the 
> metadata_caching suite from the functional tests (concurrency 10) without the 
> datagen phase. I see 3 test failures, and when I looked at the cache file it 
> seems to contain wrong information for the varchar column. 
> Sample from the cache:
> {code}
>       {
>         "name" : [ "varchar_col" ]
>       }, {
>         "name" : [ "float_col" ],
>         "mxValue" : 68797.22,
>         "nulls" : 0
>       }
> {code}
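> A quick way to spot such entries is to scan the cache for columns that carry 
> a name but no min/max/null statistics. A rough sketch, assuming the layout 
> shown above; the field names outside that snippet ("files", "rowGroups", 
> "columns") are assumptions about the surrounding structure:
> {code}
> import json
> 
> def columns_missing_stats(cache_path):
>     """List column entries that have a name but no statistics,
>     like varchar_col in the sample above."""
>     with open(cache_path) as f:
>         cache = json.load(f)
>     missing = []
>     for parquet_file in cache.get("files", []):
>         for row_group in parquet_file.get("rowGroups", []):
>             for col in row_group.get("columns", []):
>                 if "mxValue" not in col and "nulls" not in col:
>                     missing.append(col.get("name"))
>     return missing
> {code}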
> When I followed the same steps but, instead of running the suites, executed 
> the "REFRESH TABLE METADATA" command (or any query on that folder), the cache 
> file was created properly.
> I attached the data and cache files required. Let me know if you need anything.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
