Re: new tlog files are not created per commit but adding into latest existing tlog file after replica reload

2021-03-04 Thread Michael Hu
Hi experts:

After I sent out my previous email, I issued a commit on that replica core and 
observed the same "ClosedChannelException"; please refer to the "issuing core 
commit" section below.

Then I issued a core reload and saw the timestamp of the latest tlog file 
change; please refer to the "files under tlog directory" section below. I am 
not sure whether this information is useful.
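
For reference, the SolrJ equivalent of what I did is roughly the following (an 
untested sketch; the client setup is assumed, and the core name is the same 
one used in the curl call below):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class CommitThenReload {
    public static void main(String[] args) throws Exception {
        try (SolrClient client =
                new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            // explicit commit on the replica core -- this is the call that
            // came back with the ClosedChannelException shown below
            client.commit("myconection_myshard_replica_t7");
            // core reload -- after this, the latest tlog file's timestamp changed
            CoreAdminRequest.reloadCore("myconection_myshard_replica_t7", client);
        }
    }
}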

Thank you!

--Michael Hu

--- beginning for issuing core commit ---

$ curl 
'http://localhost:8983/solr/myconection_myshard_replica_t7/update?commit=true'

{
  "responseHeader":{
    "status":500,
    "QTime":71},
  "error":{
    "metadata":[
      "error-class","org.apache.solr.common.SolrException",
      "root-error-class","java.nio.channels.ClosedChannelException"],
    "msg":"java.nio.channels.ClosedChannelException",
    "trace":"org.apache.solr.common.SolrException:
--- end for issuing core commit ---

--- beginning for files under tlog directory ---
before core reload:

-rw-r--r-- 1 solr solr   47527321 Mar  4 20:14 tlog.877
-rw-r--r-- 1 solr solr   42614907 Mar  4 20:14 tlog.878
-rw-r--r-- 1 solr solr   37524663 Mar  4 20:14 tlog.879
-rw-r--r-- 1 solr solr   44067997 Mar  4 20:14 tlog.880
-rw-r--r-- 1 solr solr   33209784 Mar  4 20:15 tlog.881
-rw-r--r-- 1 solr solr   55435186 Mar  4 20:15 tlog.882
-rw-r--r-- 1 solr solr 2179991713 Mar  4 20:29 tlog.883


after core reload:

-rw-r--r-- 1 solr solr   47527321 Mar  4 20:14 tlog.877
-rw-r--r-- 1 solr solr   42614907 Mar  4 20:14 tlog.878
-rw-r--r-- 1 solr solr   37524663 Mar  4 20:14 tlog.879
-rw-r--r-- 1 solr solr   44067997 Mar  4 20:14 tlog.880
-rw-r--r-- 1 solr solr   33209784 Mar  4 20:15 tlog.881
-rw-r--r-- 1 solr solr   55435186 Mar  4 20:15 tlog.882
-rw-r--r-- 1 solr solr 2179991717 Mar  4 22:23 tlog.883


--- end for files under tlog directory ---



new tlog files are not created per commit but adding into latest existing tlog file after replica reload

2021-03-04 Thread Michael Hu
Hi experts:

I need some help and suggestions about an issue I am facing.

Solr info:
 - Solr 8.7
 - SolrCloud with TLOG replicas; the replication factor is 3 for my Solr collection

Issue:
 - before issuing the collection reload, I observed that a new tlog file is 
created after every commit, and those tlog files are deleted after a while 
(maybe after index segments are merged?)
 - then I issued a collection reload on my collection at 20:15, using the 
Collections API
 - after the leader replica is reloaded, no new tlog files are created; 
instead the latest tlog file keeps growing, and no tlog file is deleted after 
the reload. The "files under tlog directory" section below is a snapshot of 
the tlog files under the tlog directory of the leader replica. Again, I issued 
the collection reload at 20:15, and after that tlog.883 keeps growing (a SolrJ 
sketch of this sequence follows the list)
 - I looked into the log file and found the error entries shown in the "log 
entries" section below; the entry repeats for every auto commit after the 
reload. I hope it provides some information about the issue.
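
To make the sequence concrete, this is roughly its shape in SolrJ (an untested 
sketch; the collection name and document are placeholders):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.common.SolrInputDocument;

public class TlogReloadSequence {
    public static void main(String[] args) throws Exception {
        try (SolrClient client =
                new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-1");
            client.add("mycollection", doc);
            // before the reload: each commit rotated to a new tlog file
            client.commit("mycollection");
            // the collection reload I issued at 20:15
            CollectionAdminRequest.reloadCollection("mycollection").process(client);
            // after the reload: commits no longer rotate the tlog;
            // tlog.883 just keeps growing
            client.add("mycollection", doc);
            client.commit("mycollection");
        }
    }
}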

Please suggest what I may be doing incorrectly. Or, if this is a known issue, 
is there a way I can fix or work around it?

Thank you so much!

--Michael Hu

--- beginning for files under tlog directory ---

-rw-r--r-- 1 solr solr   47527321 Mar  4 20:14 tlog.877
-rw-r--r-- 1 solr solr   42614907 Mar  4 20:14 tlog.878
-rw-r--r-- 1 solr solr   37524663 Mar  4 20:14 tlog.879
-rw-r--r-- 1 solr solr   44067997 Mar  4 20:14 tlog.880
-rw-r--r-- 1 solr solr   33209784 Mar  4 20:15 tlog.881
-rw-r--r-- 1 solr solr   55435186 Mar  4 20:15 tlog.882
-rw-r--r-- 1 solr solr 2179991713 Mar  4 20:29 tlog.883

--- end for files under tlog directory ---

--- beginning for log entries ---

2021-03-04 20:15:38.251 ERROR (commitScheduler-4327-thread-1) [c:mycollection s:myshard r:core_node10 x:mycolletion_myshard_replica_t7] o.a.s.u.CommitTracker auto commit error...:
org.apache.solr.common.SolrException: java.nio.channels.ClosedChannelException
        at org.apache.solr.update.TransactionLog.writeCommit(TransactionLog.java:503)
        at org.apache.solr.update.UpdateLog.postCommit(UpdateLog.java:835)
        at org.apache.solr.update.UpdateLog.preCommit(UpdateLog.java:819)
        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:673)
        at org.apache.solr.update.CommitTracker.run(CommitTracker.java:273)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.nio.channels.ClosedChannelException
        at java.base/sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:150)
        at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:266)
        at java.base/java.nio.channels.Channels.writeFullyImpl(Channels.java:74)
        at java.base/java.nio.channels.Channels.writeFully(Channels.java:97)
        at java.base/java.nio.channels.Channels$1.write(Channels.java:172)
        at org.apache.solr.common.util.FastOutputStream.flush(FastOutputStream.java:216)
        at org.apache.solr.common.util.FastOutputStream.flushBuffer(FastOutputStream.java:209)
        at org.apache.solr.common.util.FastOutputStream.flush(FastOutputStream.java:193)
        at org.apache.solr.update.TransactionLog.writeCommit(TransactionLog.java:498)
        ... 10 more

--- end for log entries ---



Need a way to get notification when a new field is added into managed schema

2018-12-06 Thread Michael Hu
Environment: Solr 7.4


I use a mutable managed schema. I need a way to get notified when a new field 
is added to the schema.


First, I tried to extend "org.apache.solr.schema.ManagedIndexSchema". 
Unfortunately, it is defined as a final class, so I am not able to extend it.


Then, I tried to implement my own IndexSchemaFactory and IndexSchema by 
extending "org.apache.solr.schema.IndexSchema", wrapping an instance of 
ManagedIndexSchema, and delegating all methods to the wrapped instance. 
However, when I tested the implementation, I found that 
"com.vmware.ops.data.solr.processor.AddSchemaFieldsUpdateProcessor" casts the 
IndexSchema to ManagedIndexSchema at 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.4.0/solr/core/src/java/org/apache/solr/update/processor/AddSchemaFieldsUpdateProcessorFactory.java#L456
 , so it does not work with the update processor. (NOTE: the cast happens in 
Solr 7.5 as well.)


Can you suggest a way for me to get notified when a new field is added to the 
schema?
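
As a possible workaround, the closest I have found so far is to poll the 
Schema API and diff the field names (a rough, untested SolrJ sketch rather 
than a true push notification; the class name is mine, and the first call 
reports every existing field as "new"):

import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.request.schema.SchemaRequest;
import org.apache.solr.client.solrj.response.schema.SchemaResponse;

public class SchemaFieldWatcher {
    private final Set<String> knownFields = new HashSet<>();

    // returns the field names that appeared since the previous call
    public Set<String> pollNewFields(SolrClient client, String collection) throws Exception {
        SchemaResponse.FieldsResponse rsp =
                new SchemaRequest.Fields().process(client, collection);
        Set<String> added = new HashSet<>();
        for (Map<String, Object> field : rsp.getFields()) {
            String name = (String) field.get("name");
            if (knownFields.add(name)) {
                added.add(name); // first time this field has been seen
            }
        }
        return added;
    }
}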


Thank you for your help!


--Michael







Re: Solr core corrupted for version 7.4.0, please help!

2018-08-22 Thread Michael Hu (CMBU)
Can someone advise me how to solve this issue, please?


Thank you so much!


--Michael




Re: Solr core corrupted for version 7.4.0, please help!

2018-08-17 Thread Michael Hu (CMBU)

Can someone advise me how to solve this issue?

Thank you!

--Michael



Solr core corrupted for version 7.4.0, please help!

2018-08-16 Thread Michael Hu (CMBU)
Environment:

  *   solr 7.4.0
  *   all cores are vanilla cores with "loadOnStartUp" set to false and 
"transient" set to true
  *   we have about 75 cores, with "transientCacheSize" set to 32


Issue: we have core corruption from time to time (2-3 core corruptions a day)


How to reproduce:

  *   Set "transientCacheSize" to 1
  *   Ingest a high load into core1 only (no issue at this time)
  *   Continue the high ingest load into core1 and simultaneously start 
ingesting into core2 (core2 is immediately corrupted; the stack trace is 
attached below, and a rough SolrJ sketch of the load pattern follows this list)
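
For reference, the load pattern in the last two steps is roughly the following 
(an untested SolrJ sketch; the host, core names, sleep time, and document 
count are placeholders):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class TransientCoreRepro {
    static void ingest(String core) {
        try (SolrClient client =
                new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            for (int i = 0; i < 1_000_000; i++) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", core + "-" + i);
                client.add(core, doc); // sustained ingest into this core
            }
        } catch (Exception e) {
            e.printStackTrace(); // with transientCacheSize=1, core2 fails almost immediately
        }
    }

    public static void main(String[] args) throws Exception {
        Thread t1 = new Thread(() -> ingest("core1"));
        t1.start();            // high load on core1 only -- no issue at this point
        Thread.sleep(60_000);  // let core1 ingest alone for a while
        Thread t2 = new Thread(() -> ingest("core2"));
        t2.start();            // simultaneous load on core2 -- corruption shows up here
        t1.join();
        t2.join();
    }
}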


Please advise how to resolve this issue.


Thank you so much!


--Michael


stack trace:


2018-08-16 23:02:31.212 ERROR (qtp225472281-4098) [   x:aggregator-core-be43376de27b1675562841f64c498] o.a.s.u.SolrIndexWriter Error closing IndexWriter
java.nio.file.NoSuchFileException: /opt/solr/volumes/data1/4cf838d4b9e4675-core-897/index/_2_Lucene50_0.pos
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) ~[?:1.8.0_162]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:1.8.0_162]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:1.8.0_162]
        at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55) ~[?:1.8.0_162]
        at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144) ~[?:1.8.0_162]
        at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99) ~[?:1.8.0_162]
        at java.nio.file.Files.readAttributes(Files.java:1737) ~[?:1.8.0_162]
        at java.nio.file.Files.size(Files.java:2332) ~[?:1.8.0_162]
        at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
        at org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:128) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
        at org.apache.lucene.index.SegmentCommitInfo.sizeInBytes(SegmentCommitInfo.java:217) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
        at org.apache.lucene.index.MergePolicy.size(MergePolicy.java:558) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
        at org.apache.lucene.index.TieredMergePolicy.getSegmentSizes(TieredMergePolicy.java:279) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
        at org.apache.lucene.index.TieredMergePolicy.findMerges(TieredMergePolicy.java:300) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
        at org.apache.lucene.index.IndexWriter.updatePendingMerges(IndexWriter.java:2199) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
        at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2162) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
        at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3571) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
        at org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:1028) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
        at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1071) ~[lucene-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:51:45]
        at org.apache.solr.update.SolrIndexWriter.close(SolrIndexWriter.java:286) [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:55:13]
        at org.apache.solr.update.DirectUpdateHandler2.closeWriter(DirectUpdateHandler2.java:917) [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:55:13]
        at org.apache.solr.update.DefaultSolrCoreState.closeIndexWriter(DefaultSolrCoreState.java:105) [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:55:13]
        at org.apache.solr.update.DefaultSolrCoreState.close(DefaultSolrCoreState.java:399) [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:55:13]
        at org.apache.solr.update.SolrCoreState.decrefSolrCoreState(SolrCoreState.java:83) [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:55:13]
        at org.apache.solr.core.SolrCore.close(SolrCore.java:1572) [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz - 2018-06-18 16:55:13]
        at 

How can I prevent adding duplicated copyfield into managed schema

2018-02-06 Thread Michael Hu
Hi Solr experts:


Question: how can I prevent multiple concurrent requests from adding the same 
duplicate copyField to the managed schema? (Note: I am using Solr 6.6.2.)


Use case: for a new field named srcField, I need to create another field named 
destField and a new copyField with srcField as the source and destField as the 
destination, then add srcField, destField, and the copyField to the schema 
using the following APIs:


  *   To add fields: IndexSchema.addFields(Collection)
  *   To add copyFields: IndexSchema.addCopyFields(Map
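
A client-side guard is one sketch of the idea (rough and untested SolrJ; the 
class and method names are mine, and note that check-then-add is still not 
atomic across concurrent requests, so it narrows the race rather than 
eliminating it):

import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.request.schema.SchemaRequest;

public class CopyFieldGuard {
    public static void addCopyFieldIfAbsent(
            SolrClient client, String collection, String src, String dest) throws Exception {
        // list the copyFields currently in the schema
        List<Map<String, Object>> existing =
                new SchemaRequest.CopyFields().process(client, collection).getCopyFields();
        for (Map<String, Object> cf : existing) {
            if (src.equals(cf.get("source")) && dest.equals(cf.get("dest"))) {
                return; // this copyField is already defined; nothing to do
            }
        }
        // not present yet -- add it
        new SchemaRequest.AddCopyField(src, Collections.singletonList(dest))
                .process(client, collection);
    }
}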

Is IndexSchema addFields and addCopyFields concurrent?

2017-05-17 Thread Michael Hu
Hi all:


I am new to Solr, and I am using Solr 6.4.2. I add fields and copyFields to 
the schema programmatically, as shown below. However, on rare occasions, when 
I add a lot of fields and copyFields at once (about 80 fields and 40 
copyFields, where one field is the source and another is the destination), a 
few fields are not added while their copyFields are. This then causes core 
initialization to fail because the fields referenced by the copyFields do not 
exist.


Can someone help me?


Thank you!


--Michael Hu


// Retry loop reconstructed around the snippet; my code retries on the two
// schema exceptions below (the surrounding loop was elided in this mail).
while (true) {
    synchronized (oldSchema.getSchemaUpdateLock()) {
        try {
            // add the new fields and copyFields against the latest schema
            IndexSchema newSchema =
                    oldSchema.addFields(newFields).addCopyFields(newCopyFields, true);
            if (null != newSchema) {
                core.setLatestSchema(newSchema);
                cmd.getReq().updateSchemaToLatest();
                latestSchemaMap.put(core.getName(), newSchema);
                log.debug("Successfully added field(s) to the schema.");
                break; // success - exit from the retry loop
            } else {
                throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
                        "Failed to add fields.");
            }
        } catch (ManagedIndexSchema.FieldExistsException e) {
            log.error("At least one field to be added already exists in the schema - retrying.");
            oldSchema = core.getLatestSchema(); // re-read the latest schema and retry
            cmd.getReq().updateSchemaToLatest();
        } catch (ManagedIndexSchema.SchemaChangedInZkException e) {
            log.debug("Schema changed while processing request - retrying.");
            oldSchema = core.getLatestSchema(); // re-read the latest schema and retry
            cmd.getReq().updateSchemaToLatest();
        }
    }
}