[ https://issues.apache.org/jira/browse/SOLR-6089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Miller resolved SOLR-6089.
-------------------------------

       Resolution: Fixed
    Fix Version/s: 4.10
                   5.0

> When using the HDFS block cache, a file's underlying data entries in the 
> block cache are not removed when the file is deleted, which is a problem 
> with the global block cache option.
> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SOLR-6089
>                 URL: https://issues.apache.org/jira/browse/SOLR-6089
>             Project: Solr
>          Issue Type: Bug
>          Components: hdfs
>            Reporter: Mark Miller
>            Assignee: Mark Miller
>             Fix For: 5.0, 4.10
>
>         Attachments: SOLR-6089.patch
>
>
> Patrick Hunt noticed this. Without the global block cache, a block cache 
> was not reused after its directory was closed. Now that the cache is reused 
> under the global option, leaving stale entries in place is a problem if a 
> directory with the same name is created again, because blocks from the 
> previous directory may be read. This can happen when you remove a SolrCore 
> and recreate it with the same data directory (or a collection with the same 
> name). I could only reproduce it easily using index merges (core admin) 
> with the sequence: merge index, delete collection, create collection, merge 
> index. Reads on the final merged index can look corrupt, or queries may 
> simply return no results.



