[jira] [Updated] (SOLR-6184) Replication fetchLatestIndex always fails, causing recovery errors.

2014-09-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6184:

Labels: difficulty-medium impact-medium  (was: )

 Replication fetchLatestIndex always fails, causing recovery errors.
 ---

 Key: SOLR-6184
 URL: https://issues.apache.org/jira/browse/SOLR-6184
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6, 4.6.1
 Environment: the index is larger than 70G
Reporter: Raintung Li
  Labels: difficulty-medium, impact-medium
 Attachments: Solr-6184.txt


 Copying a full 70G index usually takes at least 20 minutes at 100M 
 read/write throughput on the network or disk. If a hard commit happens 
 during those 20 minutes, the full-index snap pull fails, and the temp 
 folder is removed because the pull task failed. 
 In production, the index is updated every minute, so the redone pull task 
 always fails because the index keeps changing. 
 Constantly redoing the pull also keeps network and disk usage at a high 
 level.
 My suggestion is that fetchLatestIndex can be retried at some frequency, 
 without removing the tmp folder, copying the largest files first. A retried 
 fetchLatestIndex would then not download the same biggest files again, only 
 the files of the most recent commit, so the task will succeed easily.
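
To make the proposal concrete, here is a minimal sketch of such a resumable
pull. This is plain Java, not Solr's actual SnapPuller/IndexFetcher code, and
the names ResumableIndexPull and fetchIfMissing are hypothetical. The tmp
folder is kept across attempts, files are fetched largest first, and any file
already present with the expected length is skipped, so a retry only downloads
segments created by commits that happened mid-pull:

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Map;

public class ResumableIndexPull {

    private final Path tmpDir;

    public ResumableIndexPull(Path tmpDir) {
        this.tmpDir = tmpDir;
    }

    // files: name -> expected length, as reported by the leader for its
    // latest commit point. Sorted largest-first so the most expensive
    // downloads are secured early.
    public void pull(Map<String, Long> files, String baseUrl) throws IOException {
        Files.createDirectories(tmpDir); // reuse the tmp folder if it already exists
        files.entrySet().stream()
             .sorted((a, b) -> Long.compare(b.getValue(), a.getValue()))
             .forEach(e -> fetchIfMissing(e.getKey(), e.getValue(), baseUrl));
    }

    private void fetchIfMissing(String name, long expectedLen, String baseUrl) {
        Path local = tmpDir.resolve(name);
        try {
            // Fully downloaded on a previous attempt: skip it.
            if (Files.exists(local) && Files.size(local) == expectedLen) {
                return;
            }
            try (InputStream in = new URL(baseUrl + "/" + name).openStream()) {
                Files.copy(in, local, StandardCopyOption.REPLACE_EXISTING);
            }
        } catch (IOException e) {
            throw new RuntimeException("fetch failed for " + name, e);
        }
    }
}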



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6184) Replication fetchLatestIndex always fails, causing recovery errors.

2014-06-20 Thread Raintung Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raintung Li updated SOLR-6184:
--

Description: 
Copying a full 70G index usually takes at least 20 minutes at 100M read/write 
throughput on the network or disk. If a hard commit happens during those 20 
minutes, the full-index snap pull fails, and the temp folder is removed 
because the pull task failed. 
In production, the index is updated every minute, so the redone pull task 
always fails because the index keeps changing. 

Constantly redoing the pull also keeps network and disk usage at a high 
level.

My suggestion is that fetchLatestIndex can be retried at some frequency, 
without removing the tmp folder, copying the largest files first. A retried 
fetchLatestIndex would then not download the same biggest files again, only 
the files of the most recent commit, so the task will succeed easily.



  was:
Copying a full 70G index usually takes at least 20 minutes at 100M read/write 
throughput on the network or disk. If a hard commit happens during those 20 
minutes, the full-index snap pull fails, and the temp folder is removed 
because the pull task failed. 
In production, the index is updated every minute, so the redone pull task 
always fails because the index keeps changing. 

Constantly redoing the pull also keeps network and disk usage at a high 
level.

My suggestion is that fetchLatestIndex can be retried at some frequency, 
without removing the tmp folder, copying the largest files first. A retried 
fetchLatestIndex would then not download the same biggest files again, only 
the files of the most recent commit, so the task will succeed easily.




 Replication fetchLatestIndex always fails, causing recovery errors.
 ---

 Key: SOLR-6184
 URL: https://issues.apache.org/jira/browse/SOLR-6184
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6, 4.6.1
 Environment: the index is larger than 70G
Reporter: Raintung Li

 Copying a full 70G index usually takes at least 20 minutes at 100M 
 read/write throughput on the network or disk. If a hard commit happens 
 during those 20 minutes, the full-index snap pull fails, and the temp 
 folder is removed because the pull task failed. 
 In production, the index is updated every minute, so the redone pull task 
 always fails because the index keeps changing. 
 Constantly redoing the pull also keeps network and disk usage at a high 
 level.
 My suggestion is that fetchLatestIndex can be retried at some frequency, 
 without removing the tmp folder, copying the largest files first. A retried 
 fetchLatestIndex would then not download the same biggest files again, only 
 the files of the most recent commit, so the task will succeed easily.
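
The "retried at some frequency" part of the proposal could be a simple
scheduled loop around the resumable pull sketched earlier in the thread.
Again a hypothetical sketch, not Solr code: because the tmp folder is kept,
each attempt has strictly less left to download, so the loop converges even
while commits keep landing.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PullRetryLoop {

    // Re-runs pullAttempt every periodSeconds until one attempt finishes
    // without throwing, then stops the scheduler.
    public static void retryUntilSuccess(Runnable pullAttempt, long periodSeconds)
            throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(1);
        scheduler.scheduleWithFixedDelay(() -> {
            try {
                pullAttempt.run();   // fetches only what is still missing
                done.countDown();    // success: stop retrying
            } catch (RuntimeException e) {
                // A hard commit changed the file list mid-pull; keep the
                // tmp folder and let the next attempt resume from here.
            }
        }, 0, periodSeconds, TimeUnit.SECONDS);
        done.await();
        scheduler.shutdownNow();
    }
}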



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6184) Replication fetchLatestIndex always fails, causing recovery errors.

2014-06-20 Thread Raintung Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raintung Li updated SOLR-6184:
--

Attachment: Solr-6184.txt

 Replication fetchLatestIndex always fails, causing recovery errors.
 ---

 Key: SOLR-6184
 URL: https://issues.apache.org/jira/browse/SOLR-6184
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6, 4.6.1
 Environment: the index is larger than 70G
Reporter: Raintung Li
 Attachments: Solr-6184.txt


 Copying a full 70G index usually takes at least 20 minutes at 100M 
 read/write throughput on the network or disk. If a hard commit happens 
 during those 20 minutes, the full-index snap pull fails, and the temp 
 folder is removed because the pull task failed. 
 In production, the index is updated every minute, so the redone pull task 
 always fails because the index keeps changing. 
 Constantly redoing the pull also keeps network and disk usage at a high 
 level.
 My suggestion is that fetchLatestIndex can be retried at some frequency, 
 without removing the tmp folder, copying the largest files first. A retried 
 fetchLatestIndex would then not download the same biggest files again, only 
 the files of the most recent commit, so the task will succeed easily.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org