[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_102) - Build # 6152 - Still Unstable!

2016-10-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6152/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at __randomizedtesting.SeedInfo.seed([7AAEE12A5DEEF44:6F15DB387544FDA8]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:137)
at org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:282)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Assigned] (SOLR-7229) Allow DIH to handle attachments as separate documents

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-7229:
---

Assignee: Alexandre Rafalovitch

> Allow DIH to handle attachments as separate documents
> -
>
> Key: SOLR-7229
> URL: https://issues.apache.org/jira/browse/SOLR-7229
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tim Allison
>Assignee: Alexandre Rafalovitch
>Priority: Minor
>
> With Tika 1.7's RecursiveParserWrapper, it is possible to maintain metadata 
> of individual attachments/embedded documents.  Tika's default handling was to 
> maintain the metadata of the container document and concatenate the contents 
> of all embedded files.  With SOLR-7189, we added the legacy behavior.
> It might be handy, for example, to be able to send an MSG file through DIH 
> and treat the container email as well each attachment as separate (child?) 
> documents, or send a zip of jpeg files and correctly index the geo locations 
> for each image file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-6209) Database connections lost during data import

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-6209.
---
Resolution: Workaround

Unknown Solr version, an old version of a third-party product (the database),
and a possible workaround provided.

I am closing this issue. If this still happens with the latest Solr version, I
recommend opening a new issue with updated details and information on
product/component versions.

> Database connections lost during data import
> 
>
> Key: SOLR-6209
> URL: https://issues.apache.org/jira/browse/SOLR-6209
> Project: Solr
>  Issue Type: Bug
> Environment: OS: Windows 7
> RAM: 8 GB
>Reporter: SANKET
>  Labels: solr
>   Original Estimate: 30h
>  Remaining Estimate: 30h
>
> Follow these steps to generate the error:
> 1. Configure a large amount of data (around 4 GB, or more than 50 million
> records).
> 2. Provide a proper data-config.xml file for indexing the data from a remote
> database server.
> 3. While indexing the data into Solr from SQL SERVER 2010, halfway through
> unplug the network cable and check the status in Solr,
> e.g.
> localhost:8083/solr/core1/dataimport?command=status
> or
> localhost:8083/solr/core1/dataimport
> 4. After a few seconds, plug the cable back in.
> 5. You can clearly see that only the "Time Elapsed" parameter keeps
> increasing; "Total Rows Fetched" and "Total Documents Processed" remain the
> same indefinitely.
> 6. You can reproduce this with small data as well.
> 7. The workaround is to restart Solr (but this is not a good solution).
> Note:
> This is a very important issue because many organizations are not using this
> valuable product just because of this infinite database connection issue.
> A solution could be to forcefully abort the data indexing, or to provide a
> mechanism for forcefully aborting the indexing.
> Note that the abort command is also not working.
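The stalled state in steps 5-7 can be detected from outside by comparing two status snapshots: if "Time Elapsed" keeps growing while "Total Rows Fetched" stays flat, the import is hung. A minimal sketch of that check (the class and method names are hypothetical, not part of Solr):

```java
// Hypothetical monitoring helper (not part of Solr): decides whether a DIH
// import looks stalled, i.e. elapsed time keeps growing while the fetched
// row count stays flat between two snapshots of the status response.
public class DihStallDetector {

    // elapsed1/rows1 and elapsed2/rows2 mirror the DIH status fields
    // "Time Elapsed" (seconds) and "Total Rows Fetched" at two poll times.
    public static boolean looksStalled(long elapsed1, long rows1,
                                       long elapsed2, long rows2) {
        return elapsed2 > elapsed1 && rows2 == rows1;
    }

    public static void main(String[] args) {
        // Snapshot 1: 120s elapsed, 1,000,000 rows; snapshot 2: 180s, same rows.
        System.out.println(looksStalled(120, 1_000_000, 180, 1_000_000)); // true
        // A healthy import: rows keep increasing between snapshots.
        System.out.println(looksStalled(120, 1_000, 180, 2_000)); // false
    }
}
```

An external watchdog polling the status URL could use such a predicate to decide when to restart Solr automatically instead of waiting indefinitely.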






[jira] [Commented] (SOLR-5602) Solr DIH shows in-consistent status

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539791#comment-15539791
 ] 

Alexandre Rafalovitch commented on SOLR-5602:
-

Is this still observable in the latest Solr? I suspect the issue may have been
due to a long commit process for a big job rather than a bug.

> Solr DIH shows in-consistent status
> ---
>
> Key: SOLR-5602
> URL: https://issues.apache.org/jira/browse/SOLR-5602
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.2
>Reporter: Liu Xiang
>
> I have a DIH index job which takes about 4 hours to finish.
> The job was launched at 11:28 and completed at 15:10.
> However, in the DIH response, although "statusMessages" showed correct
> information, "status" kept showing "busy" until 16:40.
> After that, it became "idle".
> This index job is one step of our data pipeline; we use both "status" and
> "statusMessages" to decide whether the job should move to the next step. I
> would like to know the reason for the inconsistent status.
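As a pipeline-side workaround, one could treat the job as finished only when both fields agree. A hedged sketch (hypothetical helper, not a Solr API; the "completed" substring check is an assumption about the message text):

```java
// Hypothetical pipeline-side check (not a Solr API): treat a DIH job as
// finished only when "status" and "statusMessages" agree, since "status"
// was observed to lag behind after large jobs.
public class DihCompletionCheck {

    public static boolean isComplete(String status, String statusMessage) {
        boolean idle = "idle".equals(status);
        // Assumption: a finished run's message contains the word "completed".
        boolean messageDone = statusMessage != null
                && statusMessage.toLowerCase().contains("completed");
        return idle && messageDone;
    }

    public static void main(String[] args) {
        // The reported inconsistency: messages say done, status still "busy".
        System.out.println(isComplete("busy",
                "Indexing completed. Added 500 documents.")); // false
        System.out.println(isComplete("idle",
                "Indexing completed. Added 500 documents.")); // true
    }
}
```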






[jira] [Commented] (SOLR-5931) solrcore.properties is not reloaded when core is reloaded

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539789#comment-15539789
 ] 

Alexandre Rafalovitch commented on SOLR-5931:
-

This case seems to be a mix of missing features and a possible bug, and is
again against an old version of Solr (so it may have been fixed already).

I don't see any next action here. I think it may be easiest to close this
issue and open a new one if something similar comes up against the latest Solr.

> solrcore.properties is not reloaded when core is reloaded
> -
>
> Key: SOLR-5931
> URL: https://issues.apache.org/jira/browse/SOLR-5931
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.7
>Reporter: Gunnlaugur Thor Briem
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
>
> When I change solrcore.properties for a core, and then reload the core, the 
> previous values of the properties in that file are still in effect. If I 
> *unload* the core and then add it back, in the “Core Admin” section of the 
> admin UI, then the changes in solrcore.properties do take effect.
> My specific test case is a DataImportHandler where {{db-data-config.xml}} 
> uses a property to decide which DB host to talk to:
> {code:xml}
> <dataSource url="jdbc:postgresql://${dbhost}/${solr.core.name}" .../>
> {code}
> When I change that {{dbhost}} property in {{solrcore.properties}} and reload 
> the core, the next dataimport operation still connects to the previous DB 
> host. Reloading the dataimport config does not help. I have to unload the 
> core (or fully restart the whole Solr) for the properties change to take 
> effect.
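The substitution at the heart of this report can be illustrated with a small sketch: placeholders like {{${dbhost}}} are resolved from a properties map, so if that map is loaded once at core start and not refreshed on reload, the old value keeps winning. This is an illustration only, not Solr's actual property resolver:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch (not Solr's actual resolver): how ${...} placeholders
// in a config string are filled from a properties map. If the map is the one
// loaded at core start and never refreshed on core reload, the stale dbhost
// value is what ends up in the JDBC URL.
public class PropertySubstitution {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)}");

    public static String resolve(String template, Map<String, String> props) {
        Matcher m = PLACEHOLDER.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // Unknown keys are left as-is (the literal "${key}" text).
            String value = props.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of("dbhost", "db2.example.com",
                                           "solr.core.name", "core1");
        System.out.println(
            resolve("jdbc:postgresql://${dbhost}/${solr.core.name}", props));
        // jdbc:postgresql://db2.example.com/core1
    }
}
```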






[jira] [Closed] (SOLR-3366) Restart of Solr during data import causes an empty index to be generated on restart

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-3366.
---
Resolution: Won't Fix

An old possible bug related to (old) replication, Tomcat and edge-case 
activity. I am closing this. If the problem still occurs with more recent Solr, 
a new issue with updated details can be created.

> Restart of Solr during data import causes an empty index to be generated on 
> restart
> ---
>
> Key: SOLR-3366
> URL: https://issues.apache.org/jira/browse/SOLR-3366
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler, replication (java)
>Affects Versions: 3.4
>Reporter: Kevin Osborn
>
> We use the DataImportHandler and Java replication in a fairly simple setup of 
> a single master and 4 slaves. We had an operating index of about 16,000 
> documents. The DataImportHandler is pulled periodically by an external 
service using the "command=full-import&clean=false" command for a delta
> import.
> While processing one of these commands, we did a deployment which required us 
> to restart the application server (Tomcat 7). So, the import was interrupted. 
> Prior to this deployment, the full index of 16,000 documents had been 
> replicated to all slaves and was working correctly.
> Upon restart, the master restarted with an empty index and then this empty 
> index was replicated across all slaves. So, our search index was now empty.
> My expected behavior was to lose any changes in the delta import (basically 
> prior to the commit). However, I was not expecting to lose all data. Perhaps 
> this is due to the fact that I am using the full-import method, even though 
> it is really a delta, for performance reasons? Or does the data import just 
> put the index in some sort of invalid state?






[jira] [Commented] (SOLR-3689) DIH status: Create a machine readable status element

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539678#comment-15539678
 ] 

Alexandre Rafalovitch commented on SOLR-3689:
-

I believe this has been implemented long ago, as the Admin UI uses the status
API. So it should be safe to close. Any objections?

> DIH status: Create a machine readable status element
> 
>
> Key: SOLR-3689
> URL: https://issues.apache.org/jira/browse/SOLR-3689
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Affects Versions: 4.0-ALPHA
>Reporter: Sauvik Sarkar
>Priority: Minor
>  Labels: DIH, DataImportHandler, dih, impact-low
>
> Currently, when the DataImportHandler process is executed, the response does
> not include a machine-readable DIH status. The detail messages do indicate
> the DIH status, but it needs to be machine friendly. It would be nice to
> have a new DIH status element containing the status of the DIH run.






[jira] [Commented] (SOLR-2886) Out of Memory Error with DIH and TikaEntityProcessor

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539674#comment-15539674
 ] 

Alexandre Rafalovitch commented on SOLR-2886:
-

Does this happen with the latest version of Solr/Tika? If not, or if it cannot
be reproduced, I suggest closing the case.

> Out of Memory Error with DIH and TikaEntityProcessor
> 
>
> Key: SOLR-2886
> URL: https://issues.apache.org/jira/browse/SOLR-2886
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler, contrib - Solr Cell (Tika 
> extraction)
>Affects Versions: 4.0-ALPHA
>Reporter: Tricia Jenkins
>
> I've recently upgraded from apache-solr-4.0-2011-06-14_08-33-23.war to 
> apache-solr-4.0-2011-10-14_08-56-59.war and then 
> apache-solr-4.0-2011-10-30_09-00-00.war to index ~5300 pdfs, of various 
> sizes, using the TikaEntityProcessor.  My indexing would run to completion 
> and was completely successful under the June build.  The only error was 
> readability of the fulltext in highlighting.  This was fixed in Tika 0.10 
> (TIKA-611).  I chose to use the October 14 build of Solr because Tika 0.10 
> had recently been included (SOLR-2372).  
> On the same machine, without changing any memory settings, my initial problem
> was a PermGen error. Fine, I increased the PermGen space.
> I've set the "onError" parameter to "skip" for the TikaEntityProcessor. Now
> I get several (6)
> SEVERE: Exception thrown while getting data
> java.net.SocketTimeoutException: Read timed out
> SEVERE: Exception in entity :
> tika:org.apache.solr.handler.dataimport.DataImportHandlerException: Exception in invoking url  # 2975
> pairs. And after ~3881 documents, with auto-commit set unreasonably
> frequently, I consistently get an OutOfMemoryError:
> SEVERE: Exception while processing: f document : 
> null:org.apache.solr.handler.dataimport.DataImportHandlerException: 
> java.lang.OutOfMemoryError: Java heap space
> The stack trace points to 
> org.apache.pdfbox.io.RandomAccessBuffer.expandBuffer(RandomAccessBuffer.java:151)
>  and 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:718).
> The October 30 build performs identically.
> Funny thing is that monitoring via JConsole doesn't reveal any memory issues.
> Because the out of Memory error did not occur in June, this leads me to 
> believe that a bug has been introduced to the code since then.






[jira] [Closed] (SOLR-3149) Update obsolete schema.xml in example-DIH

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-3149.
---
Resolution: Resolved

Was resolved a while ago.

> Update obsolete schema.xml in example-DIH
> -
>
> Key: SOLR-3149
> URL: https://issues.apache.org/jira/browse/SOLR-3149
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis
>Affects Versions: 3.5, 4.0-ALPHA
>Reporter: Yusuke Yanbe
>Priority: Minor
>  Labels: dataimportHandler, documentaion, newbie
> Attachments: SOLR-3149.patch
>
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> The version of example/example-DIH/solr/db/conf/schema.xml is 1.1 (too old),
> whereas example/solr/conf/schema.xml is 1.4. I believe it is important to
> keep all schema.xml files up to date for newbies.
> The example/example-DIH/solr/db/conf/schema.xml will serve as the primary
> hint for newbies, because most of them will want to try importing some data
> from a preexisting DB or similar first, referring to [1]. Even though the
> DataImportHandler tutorial itself can be completed without problems, the
> obsolete schema.xml may confuse them.
> A typical difference between the new and old schema.xml is the explanation of
> the *TrieField types: the old one's default types are solr.IntField or
> solr.DateField, with no mention of this. Consequently, when newbies try range
> queries or boosting queries based on the old schema.xml, they may face
> unintentionally slow responses or errors.
> [1] http://wiki.apache.org/solr/DataImportHandler#Full_Import_Example
> [1] http://wiki.apache.org/solr/DataImportHandler#Full_Import_Example






[jira] [Assigned] (SOLR-4964) DB DIH-example not working

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-4964:
---

Assignee: Alexandre Rafalovitch  (was: Shalin Shekhar Mangar)

> DB DIH-example not working
> --
>
> Key: SOLR-4964
> URL: https://issues.apache.org/jira/browse/SOLR-4964
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.3.1
>Reporter: Shinichiro Abe
>Assignee: Alexandre Rafalovitch
>Priority: Minor
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-4964.patch
>
>
> The delta-import does not work in example-DIH, though full-import works.
> Also, the field values are not put into the "cat" field.
> I rewrote db-data-config.xml, adding deltaImportQuery/pk attributes.
> Please confirm and commit the patch.






[jira] [Closed] (SOLR-8580) Closing leaked Closeable resources

2016-10-01 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya closed SOLR-8580.
--
Resolution: Won't Fix

Sure, closed.

> Closing leaked Closeable resources
> --
>
> Key: SOLR-8580
> URL: https://issues.apache.org/jira/browse/SOLR-8580
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Priority: Trivial
> Attachments: SOLR-8580.patch
>
>
> # A ChannelFastInputStream in TransactionLog
> # A ZkStateReader in ZooKeeperInspector






[jira] [Closed] (SOLR-6675) Solr webapp deployment is very slow with <jmx/> in solrconfig.xml

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-6675.
---
Resolution: Won't Fix

We no longer support Tomcat or the WAR method of deployment.

> Solr webapp deployment is very slow with <jmx/> in solrconfig.xml
> -
>
> Key: SOLR-6675
> URL: https://issues.apache.org/jira/browse/SOLR-6675
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.7
> Environment: Linux Redhat 64bit
>Reporter: Forest Soup
>Priority: Critical
>  Labels: performance
> Attachments: 1014.zip, callstack.png
>
>
> We have a SolrCloud with Solr version 4.7 on Tomcat 7, and our Solr
> index cores are big (50~100 GB each).
> When we start up Tomcat, the Solr webapp deployment is very slow. From
> Tomcat's catalina log, it takes about 10 minutes to get deployed every time.
> After analyzing a Java core dump, we noticed it's because the loading process
> cannot finish until the MBean calculation for the large index is done.
>
> So we tried removing the <jmx/> element from solrconfig.xml; after that,
> loading the Solr webapp only took about 1 minute, so we can be sure the MBean
> calculation for the large index is the root cause.
> Could you please point me to any async way to do statistics monitoring
> without <jmx/> in solrconfig.xml, or let it do the calculation after
> deployment? Thanks!
> The callstack.png file in the attachment is the call stack of the
> long-blocking thread doing the statistics calculation.
> The catalina log of tomcat:
> INFO: Starting Servlet Engine: Apache Tomcat/7.0.54
> Oct 13, 2014 2:00:29 AM org.apache.catalina.startup.HostConfig deployWAR
> INFO: Deploying web application archive 
> /opt/ibm/solrsearch/tomcat/webapps/solr.war
> Oct 13, 2014 2:10:23 AM org.apache.catalina.startup.HostConfig deployWAR
> INFO: Deployment of web application archive 
> /opt/ibm/solrsearch/tomcat/webapps/solr.war has finished in 594,325 ms 
> < Time taken for solr app Deployment is about 10 minutes 
> ---
> Oct 13, 2014 2:10:23 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deploying web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/manager
> Oct 13, 2014 2:10:26 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deployment of web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/manager has finished in 2,035 ms
> Oct 13, 2014 2:10:26 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deploying web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/examples
> Oct 13, 2014 2:10:27 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deployment of web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/examples has finished in 1,789 ms
> Oct 13, 2014 2:10:27 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deploying web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/docs
> Oct 13, 2014 2:10:28 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deployment of web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/docs has finished in 1,037 ms
> Oct 13, 2014 2:10:28 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deploying web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/ROOT
> Oct 13, 2014 2:10:29 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deployment of web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/ROOT has finished in 948 ms
> Oct 13, 2014 2:10:29 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deploying web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/host-manager
> Oct 13, 2014 2:10:30 AM org.apache.catalina.startup.HostConfig deployDirectory
> INFO: Deployment of web application directory 
> /opt/ibm/solrsearch/tomcat/webapps/host-manager has finished in 951 ms
> Oct 13, 2014 2:10:31 AM org.apache.coyote.AbstractProtocol start
> INFO: Starting ProtocolHandler ["http-bio-8080"]
> Oct 13, 2014 2:10:31 AM org.apache.coyote.AbstractProtocol start
> INFO: Starting ProtocolHandler ["ajp-bio-8009"]
> Oct 13, 2014 2:10:31 AM org.apache.catalina.startup.Catalina start
> INFO: Server startup in 601506 ms






[jira] [Commented] (SOLR-6769) Election bug

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539639#comment-15539639
 ] 

Alexandre Rafalovitch commented on SOLR-6769:
-

There have been some fixes related to that, I believe.

Is this reproducible against the latest version of Solr? If yes, the case can
be updated with more details so it is more visible.

If not, let's close it and see if somebody runs into it again.

> Election bug
> 
>
> Key: SOLR-6769
> URL: https://issues.apache.org/jira/browse/SOLR-6769
> Project: Solr
>  Issue Type: Bug
>Reporter: Alexander S.
> Attachments: Screenshot 876.png
>
>
> Hello, I have a very simple setup: 2 shards and 2 replicas (4 nodes in
> total).
> What I did is just stop the shards, but while the first shard stopped
> immediately, the second one took about 5 minutes to stop. You can see in the
> screenshot what happened next. In short:
> 1. Shard 1 stopped normally
> 2. Replica 1 became the leader
> 3. Shard 2 was still performing some job but wasn't accepting connections
> 4. Replica 2 did not become the leader because Shard 2 was still there but
> not working
> 5. The entire cluster went down until Shard 2 stopped and Replica 2 became
> the leader
> Marked as critical because this shuts down the entire cluster. Please adjust
> if I am wrong.






[jira] [Commented] (SOLR-7319) Workaround the "Four Month Bug" causing GC pause problems

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539635#comment-15539635
 ] 

Alexandre Rafalovitch commented on SOLR-7319:
-

Was there a practical outcome of this discussion? I see the commits rolled in
and rolled back. The last entry mentions a possible utility. Should that be
spun out, with more explanation, into its own improvement JIRA and this (bug)
issue closed?

> Workaround the "Four Month Bug" causing GC pause problems
> -
>
> Key: SOLR-7319
> URL: https://issues.apache.org/jira/browse/SOLR-7319
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
> Attachments: SOLR-7319.patch, SOLR-7319.patch, SOLR-7319.patch
>
>
> A twitter engineer found a bug in the JVM that contributes to GC pause 
> problems:
> http://www.evanjones.ca/jvm-mmap-pause.html
> Problem summary (in case the blog post disappears):  The JVM calculates 
> statistics on things like garbage collection and writes them to a file in the 
> temp directory using MMAP.  If there is a lot of other MMAP write activity, 
> which is precisely how Lucene accomplishes indexing and merging, it can 
> result in a GC pause because the mmap write to the temp file is delayed.
> We should implement the workaround in the solr start scripts (disable 
> creation of the mmap statistics tempfile) and document the impact in 
> CHANGES.txt.






[jira] [Commented] (SOLR-8580) Closing leaked Closeable resources

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539630#comment-15539630
 ] 

Alexandre Rafalovitch commented on SOLR-8580:
-

Safe to close? There was an agreement to not go ahead with this patch.

> Closing leaked Closeable resources
> --
>
> Key: SOLR-8580
> URL: https://issues.apache.org/jira/browse/SOLR-8580
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Priority: Trivial
> Attachments: SOLR-8580.patch
>
>
> # A ChannelFastInputStream in TransactionLog
> # A ZkStateReader in ZooKeeperInspector






[jira] [Commented] (SOLR-9036) Solr slave is doing full replication (entire index) of index after master restart

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539623#comment-15539623
 ] 

Alexandre Rafalovitch commented on SOLR-9036:
-

The work here seems to be all done. Safe to close?

> Solr slave is doing full replication (entire index) of index after master 
> restart
> -
>
> Key: SOLR-9036
> URL: https://issues.apache.org/jira/browse/SOLR-9036
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 5.3.1, 6.0
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
>  Labels: impact-high
> Fix For: 5.5.2, 5.6, 6.0.1, 6.1, master (7.0)
>
> Attachments: SOLR-9036.patch, SOLR-9036.patch, SOLR-9036.patch
>
>
> This was first described in the following email:
> https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201604.mbox/%3ccafgnfoyn+xmpxwzwbjuzddeuz7tjqhqktek6q7u8xgstqy3...@mail.gmail.com%3E
> I tried Solr 5.3.1 and Solr 6 and I can reproduce the problem. If the master 
> comes back online before the next polling interval then the slave finds 
> itself in sync with the master but if the master is down for at least one 
> polling interval then the slave pulls the entire full index from the master 
> even if the index has not changed on the master.
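The expected behavior can be stated as a simple predicate: a fetch (and possibly a full copy) is only needed when the master's index version and generation differ from the slave's; a master restart alone should not trigger it. A simplified sketch of that check (not Solr's actual replication code; the method and parameter names are hypothetical):

```java
// Simplified sketch of the slave's decision (not Solr's actual code): a
// fetch should only be needed when the master's index version/generation
// differ from the slave's, regardless of how long the master was down.
public class ReplicationCheck {

    public static boolean needsFetch(long masterVersion, long masterGeneration,
                                     long slaveVersion, long slaveGeneration) {
        return masterVersion != slaveVersion
            || masterGeneration != slaveGeneration;
    }

    public static void main(String[] args) {
        // Master restarted but index unchanged: same version/generation, so
        // no fetch should happen (the reported bug is a full copy here anyway).
        System.out.println(needsFetch(100L, 5L, 100L, 5L)); // false
        // Master has newer commits: a fetch is genuinely needed.
        System.out.println(needsFetch(101L, 6L, 100L, 5L)); // true
    }
}
```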






[jira] [Closed] (SOLR-2017) extractingUpdateRequestHandler waits indefinitely, while simultaneous commit never finishes

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2017.
---
Resolution: Cannot Reproduce

Alpha code issues from several releases ago. If this happens again against 
master or even a stable version of Solr, a new case can be created.

> extractingUpdateRequestHandler waits indefinitely, while simultaneous commit 
> never finishes
> ---
>
> Key: SOLR-2017
> URL: https://issues.apache.org/jira/browse/SOLR-2017
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.0-ALPHA
> Environment: Windows XP, jdk 1.6.0_19
>Reporter: Karl Wright
>
> While trying to index data using extractingUpdateRequestHandler on a trunk 
> build I made last night, I managed to trigger what appears to be a merge 
> problem twice.  The symptom is that updates are blocked while the commit never 
> finishes, and CPU is utilized at 100%.  I cannot give a complete, single thread 
> dump, but I managed to get two overlapping ones, which are attached.  
> If there's a quick response to this ticket I can generate more.
> First thread dump:
> ...
> at java.lang.Object.wait(Native Method)
> - waiting on <0x29642f00> (a java.util.TaskQueue)
> at java.util.TimerThread.mainLoop(Timer.java:509)
> - locked <0x29642f00> (a java.util.TaskQueue)
> at java.util.TimerThread.run(Timer.java:462)
> "25615188@qtp-20051738-9 - Acceptor0 SocketConnector@0.0.0.0:8983" prio=6 
> tid=0x03076800 nid=0x19ac runnable [0x034df000]
>    java.lang.Thread.State: RUNNABLE
> at java.net.PlainSocketImpl.socketAccept(Native Method)
> at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
> - locked <0x29770448> (a java.net.SocksSocketImpl)
> at java.net.ServerSocket.implAccept(ServerSocket.java:453)
> at java.net.ServerSocket.accept(ServerSocket.java:421)
> at org.mortbay.jetty.bio.SocketConnector.accept(SocketConnector.java:99)
> at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:707)
> at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> "11245030@qtp-20051738-8" prio=6 tid=0x03075000 nid=0x10b4 waiting on 
> condition [0x0348e000]
>    java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x2970bad8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:747)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:877)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1197)
> at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:594)
> at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:211)
> at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:61)
> at org.apache.solr.handler.extraction.ExtractingDocumentLoader.doAdd(ExtractingDocumentLoader.java:120)
> at org.apache.solr.handler.extraction.ExtractingDocumentLoader.addDoc(ExtractingDocumentLoader.java:125)
> at org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:195)
> at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
> at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
> at org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:237)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1323)
> at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:240)
> at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
> at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
> at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
> 

[jira] [Assigned] (SOLR-3482) Cannot index emails, mistakes of configuration file data-config.xml solrconfig.xml, Cannot find tika

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-3482:
---

Assignee: Alexandre Rafalovitch

> Cannot index emails, mistakes of configuration file data-config.xml 
> solrconfig.xml, Cannot find tika 
> -
>
> Key: SOLR-3482
> URL: https://issues.apache.org/jira/browse/SOLR-3482
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.0-ALPHA
> Environment: windows
>Reporter: Emma Bo Liu
>Assignee: Alexandre Rafalovitch
>Priority: Minor
>  Labels: core, email, index, solr, tika
>
> The mail core cannot be brought up. There are mistakes in data-config.xml and 
> solrconfig.xml. The example mail core is not complete; files are missing. There 
> is also a mistake in the Solr MailEntityProcessor tutorial.
> It cannot find Tika even though the dataimporter-extra jar file is included.






[jira] [Closed] (SOLR-2683) Solr classpath is not setup correctly if core's instanceDir does not exist on startup

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2683.
---
Resolution: Won't Fix

instanceDir in solr.xml is long deprecated (gone?).

> Solr classpath is not setup correctly if core's instanceDir does not exist on 
> startup
> -
>
> Key: SOLR-2683
> URL: https://issues.apache.org/jira/browse/SOLR-2683
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.0-ALPHA
>Reporter: Yury Kats
>
> When I launch Solr and the core's instanceDir does not exist, the directory 
> is created, but none of the JARs listed in solrconfig.xml in <lib> entries 
> are added to the classpath, resulting in ClassNotFound exceptions at 
> runtime.
> <lib> entries in solrconfig.xml are relative to the core's instanceDir. It seems 
> that <lib> entries are processed before instanceDir is created and therefore 
> can't be resolved. 
> Example solr.xml:
> [XML example stripped in the mail archive]
> solrconfig.xml:
> [XML example stripped in the mail archive]






[jira] [Commented] (SOLR-2270) NPE reported from QueryComponent.mergeIds with field collapse

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539597#comment-15539597
 ] 

Alexandre Rafalovitch commented on SOLR-2270:
-

Can this be reproduced against the latest Solr, preferably with the stock example?

> NPE reported from QueryComponent.mergeIds with field collapse
> -
>
> Key: SOLR-2270
> URL: https://issues.apache.org/jira/browse/SOLR-2270
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.0-ALPHA
> Environment: 2 cores with a shard in the same solr instance 
> configured in distributed mode using zookeeper integration.
>Reporter: Massimo Schiavon
>
> This is the request:
> http://***:8983/solr/1/select/?q=apache=true=true=site
> and the exception stacktrace:
> java.lang.NullPointerException
> at 
> org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:665)
> at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:560)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:326)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1359)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:240)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> at 
> org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
> at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> at org.mortbay.jetty.Server.handle(Server.java:326)
> at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> at 
> org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
> at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> this request works correctly:
> http://***:8983/solr/1/select/?q=apache=true=true=site=true
> I give a look at the code of QueryComponent.mergeIds(ResponseBuilder, 
> ShardRequest)
> [...]
> SolrDocumentList docs = 
> (SolrDocumentList)srsp.getSolrResponse().getResponse().get("response");
> [...]
> The response for queries with field collapse is in a completely different format.






[jira] [Closed] (SOLR-2430) Swapping cores with persistent switched on should save swapped core to defaultCoreName

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2430.
---
Resolution: Won't Fix

defaultCoreName does not exist any more.

> Swapping cores with persistent switched on should save swapped core to 
> defaultCoreName
> --
>
> Key: SOLR-2430
> URL: https://issues.apache.org/jira/browse/SOLR-2430
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.0-ALPHA
> Environment: CentOS
>Reporter: bidorbuy
>  Labels: core, multicore
>
> Running the latest trunk version, with multiple cores configured, persistent 
> turned on, and a default core set. When swapping cores I would have expected 
> the default behavior to be that the swapped core name is persisted 
> as the new defaultCoreName, i.e. if switching from primary to staging, the 
> defaultCoreName should be written as "staging".
> When swapping out cores (i.e. from primary to staging) and then restarting 
> Jetty, Solr falls back to the currently configured default core (=primary) 
> instead of the previously swapped one (=staging). If this is intended, can 
> the swap command perhaps be extended to force rewriting solr.xml?
> Current config file (solr.xml, tags partially lost in the mail archive):
> <cores ... defaultCoreName="primary">
>   <core ... dataDir="../../data/primary"/>
>   <core ... dataDir="../../data/staging"/>
> </cores>






[jira] [Assigned] (SOLR-2680) NullPointerException when doing a delta-import and no pk is specified on sub-entity

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-2680:
---

Assignee: Alexandre Rafalovitch

> NullPointerException when doing a delta-import and no pk is specified on 
> sub-entity
> ---
>
> Key: SOLR-2680
> URL: https://issues.apache.org/jira/browse/SOLR-2680
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.0-ALPHA
>Reporter: Daniel Rijkhof
>Assignee: Alexandre Rafalovitch
>  Labels: dataimport, dih
>
> my working sub element (opening tag partially lost in the mail archive):
> {code:xml} 
> <entity ...
>   pk="ID"
>   query="select * from jobexperience je where je.PROFESSIONAL_ID = '${user.USERID_FT}'"
>   deltaQuery="select je.ID as ID from jobexperience je where je.LASTMODIFIEDDATE > '${dataimporter.last_index_time}'"
>   parentDeltaQuery="select je.PROFESSIONAL_ID as ID from jobexperience je where je.ID=${jobexperience.ID}" 
> />
> {code}
> my failing sub element (resulting in NullPointerException):
> {code:xml} 
> <entity ...
>   query="select * from jobexperience je where je.PROFESSIONAL_ID = '${user.USERID_FT}'"
>   deltaQuery="select je.ID as ID from jobexperience je where je.LASTMODIFIEDDATE > '${dataimporter.last_index_time}'"
>   parentDeltaQuery="select je.PROFESSIONAL_ID as ID from jobexperience je where je.ID=${jobexperience.ID}" 
> />
> {code}
> Stacktrace:
> {code}
> SEVERE: Delta Import Failed
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.dataimport.DocBuilder.findMatchingPkColumn(DocBuilder.java:830)
>   at 
> org.apache.solr.handler.dataimport.DocBuilder.collectDelta(DocBuilder.java:891)
>   at 
> org.apache.solr.handler.dataimport.DocBuilder.collectDelta(DocBuilder.java:870)
>   at 
> org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:284)
>   at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:178)
>   at 
> org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:374)
>   at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:413)
>   at 
> org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:392)
> {code}
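The NPE above can be reproduced with a stand-in. This is a hedged sketch, NOT DocBuilder's real findMatchingPkColumn: it only demonstrates how a null pk (the missing pk="..." attribute on the sub-entity) survives the exact-match lookup and then throws on the first string operation applied to it.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged stand-in for the failure, not DIH's actual code.
public class PkMatchSketch {

    static String findMatchingPkColumn(String pk, Map<String, Object> row) {
        if (row.containsKey(pk)) {
            return pk;                          // exact match found
        }
        // NPE here when the sub-entity declares no pk (pk == null):
        String suffix = "_" + pk.toLowerCase();
        for (String col : row.keySet()) {
            if (col.toLowerCase().endsWith(suffix)) {
                return col;                     // suffix match, e.g. JE_ID for "id"
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, Object> row = new HashMap<>();
        row.put("ID", 42L);
        try {
            findMatchingPkColumn(null, row);    // mirrors the delta-import call
        } catch (NullPointerException e) {
            System.out.println("Delta Import Failed: " + e);  // the SEVERE log line
        }
    }
}
```

Note that `HashMap.containsKey(null)` is legal and simply returns false, which is why the null pk gets past the first check and only fails deeper in the matching logic, as the stack trace's findMatchingPkColumn frame suggests.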






[jira] [Closed] (SOLR-2990) solr OOM issues

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2990.
---
Resolution: Cannot Reproduce

Ancient bug report that may have actually been against ancient Tika. No next 
action here based on information available.

If this problem happens again with more recent component versions, a new issue 
can be created.

> solr OOM issues
> ---
>
> Key: SOLR-2990
> URL: https://issues.apache.org/jira/browse/SOLR-2990
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - Solr Cell (Tika extraction)
>Affects Versions: 4.0-ALPHA
> Environment: CentOS 5.x/6.x
> Solr Build apache-solr-4.0-2011-11-04_09-29-42 (includes tika 1.0)
> java -server -Xms2G -Xmx2G -XX:+HeapDumpOnOutOfMemoryError 
> -XX:HeapDumpPath=/var/log/oom/solr.dump.1 -Dsolr.data.dir=/opt/solr.data 
> -Djava.util.logging.config.file=solr-logging.properties -DSTOP.PORT=8907 
> -DSTOP.KEY=STOP -jar start.jar
>Reporter: Rob Tulloh
>
> We see intermittent issues with OutOfMemory caused by tika failing to process 
> content. Here is an example:
> Dec 29, 2011 7:12:05 AM org.apache.solr.common.SolrException log
> SEVERE: java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.poi.hmef.attribute.TNEFAttribute.(TNEFAttribute.java:50)
> at 
> org.apache.poi.hmef.attribute.TNEFAttribute.create(TNEFAttribute.java:76)
> at org.apache.poi.hmef.HMEFMessage.process(HMEFMessage.java:74)
> at org.apache.poi.hmef.HMEFMessage.process(HMEFMessage.java:98)
> at org.apache.poi.hmef.HMEFMessage.process(HMEFMessage.java:98)
> at org.apache.poi.hmef.HMEFMessage.process(HMEFMessage.java:98)
> at org.apache.poi.hmef.HMEFMessage.(HMEFMessage.java:63)
> at 
> org.apache.tika.parser.microsoft.TNEFParser.parse(TNEFParser.java:79)
> at 
> org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:242)
> at 
> org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:242)
> at 
> org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:129)
> at 
> org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:195)
> at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:58)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
> at 
> org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:244)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1478)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:353)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:248)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> at 
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)






[jira] [Commented] (SOLR-3651) unable to find Instance directory names with "."

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539586#comment-15539586
 ] 

Alexandre Rafalovitch commented on SOLR-3651:
-

Is this still an issue with the new Admin UI in the latest Solr?

> unable to find Instance directory names with "."
> 
>
> Key: SOLR-3651
> URL: https://issues.apache.org/jira/browse/SOLR-3651
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 4.0-ALPHA
> Environment: MacOSX Darwin
>Reporter: Phani Vempaty
>  Labels: patch
>
> In a multicore scenario, if I have "." in an instance directory name, the core 
> cannot be found in the Solr UI and cannot even be pinged, so I am not 
> able to get the statistics for that particular index, like its last modified 
> time.
> Example:
> Try giving an instance directory name such as "vempap.public.message" - you can 
> see the core loaded in the UI, but when you click on it, it says 
> "vempap.public.message" Not Found. I spent a lot of time debugging, and 
> when I just replace "." with "_" it works fine.
> Please let me know whether this is intended or a bug. Sorry if it 
> was already reported - I couldn't find a related issue when I did a quick 
> search.






[jira] [Closed] (SOLR-3269) <defaultSearchField> name should not reject non-indexable query fields

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-3269.
---
Resolution: Won't Fix

defaultSearchField is long deprecated.

> <defaultSearchField> name should not reject 
> non-indexable query fields
> ---
>
> Key: SOLR-3269
> URL: https://issues.apache.org/jira/browse/SOLR-3269
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Affects Versions: 3.5
>Reporter: Benson Margulies
>Priority: Minor
>
> I have a custom query processing component that maps field names. So, I want 
> the default field to be a field that is indexed='false', because a lucene 
> index on it is useless. The RequestHandler I have takes that field from the 
> query and maps it to booleans on other fields.
> It would be nice if the schema check did not reject my attempt to list this 
> field as the default search field when it is not indexed.






[jira] [Commented] (SOLR-4844) DIH incorrectly processes informix TEXT type field, resulting in solr returning binary address in search results.

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539577#comment-15539577
 ] 

Alexandre Rafalovitch commented on SOLR-4844:
-

Does this still happen with the latest Solr and latest Informix? Or was this a 
one-off edge-case that we no longer need to worry about?

> DIH incorrectly processes informix TEXT type field, resulting in solr 
> returning binary address in search results.
> -
>
> Key: SOLR-4844
> URL: https://issues.apache.org/jira/browse/SOLR-4844
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Affects Versions: 3.5
> Environment: solr 3.5, centos, informix 11.x
>Reporter: mark meyer
>
> please see discussions on solr users group and solr developers list:
> http://lucene.472066.n3.nabble.com/having-trouble-storing-large-text-blob-fields-returns-binary-address-in-search-results-td4063979.html#a4064335
> http://lucene.472066.n3.nabble.com/have-developer-question-about-ClobTransformer-and-DIH-td4064256.html#a4064773






[jira] [Commented] (SOLR-2650) Empty docs array on response with grouping and result pagination

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539571#comment-15539571
 ] 

Alexandre Rafalovitch commented on SOLR-2650:
-

Can this be reproduced against the latest Solr? The code path and libraries 
involved have changed many times since.

> Empty docs array on response with grouping and result pagination
> 
>
> Key: SOLR-2650
> URL: https://issues.apache.org/jira/browse/SOLR-2650
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 3.3
>Reporter: Massimo Schiavon
> Attachments: grouping_patch.txt
>
>
> Requesting a certain number of rows and setting start parameter to a greater 
> value returns 0 results with grouping enabled.
> For example, requesting:
> http://localhost:8080/solr/web/select/?q=*:*=1=2
> (grouping and highlighting are enabled by default)
> I get this response:
> [...]
>   response: {
>   numFound: 117852
>   start: 2
>   docs: [ ]
>   }
>   highlighting: {
> 0938630598: {
>   title: [ "..." ]
>   content: [ "..." ]
> }
>   }
> [...]
> docs array is empty while the highlighted values of the document are present
> Debugging the request in
> org.apache.solr.search.Grouping.Command.createSimpleResponse() at row 534
> [...]
>  int len = Math.min(numGroups, docsGathered);
>   if (offset > len) {
> len = 0;
>   }
> [...]
> The initial vars values are:
> numGroups = 1
> docsGathered = 3
> offset = 2
> so after the execution len = 0
> I've tried commenting out the if statement, which resolves the issue but could 
> introduce some other bugs.
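The arithmetic quoted above can be reproduced on its own. `lenFor` below is an invented wrapper around the two quoted lines, not the actual Grouping.Command.createSimpleResponse() method:

```java
// Reproduces the pagination arithmetic quoted from
// Grouping.Command.createSimpleResponse(); lenFor is an invented wrapper.
public class GroupingOffsetSketch {

    static int lenFor(int numGroups, int docsGathered, int offset) {
        int len = Math.min(numGroups, docsGathered);
        if (offset > len) {
            len = 0;   // start beyond the gathered docs -> empty docs array
        }
        return len;
    }

    public static void main(String[] args) {
        // values from the report: numGroups = 1, docsGathered = 3, offset = 2
        System.out.println(lenFor(1, 3, 2));  // 0 -> the empty "docs: [ ]" in the response
    }
}
```

With the reported values, `len` starts at min(1, 3) = 1, then `offset > len` (2 > 1) zeroes it out, which matches the empty docs array alongside populated highlighting.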






[jira] [Assigned] (SOLR-2483) DIH - an uppercase problem in query parameters

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-2483:
---

Assignee: Alexandre Rafalovitch

> DIH - an uppercase problem in query parameters
> --
>
> Key: SOLR-2483
> URL: https://issues.apache.org/jira/browse/SOLR-2483
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 3.1
> Environment: Windows Vista
> Java 1.6
>Reporter: Lubo Torok
>Assignee: Alexandre Rafalovitch
>  Labels: DataImportHandler, entity, newdev, parameter, sql
>
> I have two tables called "PROBLEM" and "KOMENTAR"(means 'comment' in English) 
> in DB. One problem can have more comments. I want to index them all.
> schema.xml looks as follows
> ... some fields ...
>  
> ... some fields...
> data-config.xml (tags partially lost in the mail archive):
> <entity ... pk="problem_id">
>   ...
> </entity>
> If you write '${problem.PROBLEM_ID}' in lower case, i.e. 
> '${problem.problem_id}', SOLR will not import the inner entity. This seemed 
> strange to me and it took me some time to figure out.
> Note that primary key in "PROBLEM" is called "ID". I defined the alias 
> "problem_id" (yes,lower case) in SQL. In schema, there is this field defined 
> as "problem_id" again in lower case. But, when I run
> http://localhost:8983/solr/dataimport?command=full-import=true=on
> so I can see some debug information there is this part
> ...
> select to_char(id) as problem_id, nazov as problem_nazov, cislo as 
> problem_cislo, popis as problem_popis from problem
> 0:0:0.465
> --- row #1 ---
> test zodpovedneho
> 2533274790395945
> 201009304
> csfdewafedewfw
> ---
> select id as komentar_id, nazov as komentar_nazov, text as komentar_text from 
> komentar where to_char(fk_problem)='2533274790395945'
> ...
> where you can see that, internally, the fields of "PROBLEM" are represented 
> in upper case even though the user (me) did not define them that way. The result, 
> I guess, is that a parameter referring to the parent entity ${entity.field} 
> must always be in upper case, i.e. ${entity.FIELD}.
> Here is an example of the indexed entity as written after the full-import command 
> with debug and verbose on (surrounding XML tags lost in the mail archive):
> test zodpovedneho
> 2533274790395945
> 201009304
> csfdewafedewfw
> java.math.BigDecimal:5066549580791985
> a.TXT
> Here the field names are in lower case. I consider this a bug. Maybe I am 
> wrong and it's a feature; I have worked with SOLR for only a few days.
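The behavior described above is consistent with how many JDBC drivers (Oracle among them, given the to_char calls) report unquoted column aliases in upper case, so a case-sensitive lookup misses the lower-case key. The following is a hedged sketch of that lookup mismatch, not DIH's actual variable-resolution code:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: why '${problem.problem_id}' resolves to nothing while
// '${problem.PROBLEM_ID}' works, when the driver reports the alias as
// PROBLEM_ID. Not DIH's real resolution code.
public class CaseSensitiveLookupSketch {

    public static void main(String[] args) {
        Map<String, Object> row = new HashMap<>();
        row.put("PROBLEM_ID", 2533274790395945L);   // label as the driver reports it

        System.out.println(row.get("problem_id"));  // null -> placeholder resolves to nothing
        System.out.println(row.get("PROBLEM_ID"));  // the value -> inner entity query works
    }
}
```

A case-insensitive lookup (or normalizing row keys and placeholder names to one case) would make both spellings work, which is roughly the class of fix such reports usually prompt.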






[jira] [Closed] (SOLR-1847) Solrj doesn't know if PDF was actually parsed by Tika

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-1847.
---
Resolution: Cannot Reproduce

All components involved in this issue have been updated multiple times. If the 
problems still happens, the case can be reopened with new details or new case 
can be created.

> Solrj doesn't know if PDF was actually parsed by Tika
> -
>
> Key: SOLR-1847
> URL: https://issues.apache.org/jira/browse/SOLR-1847
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - Solr Cell (Tika extraction)
>Affects Versions: 1.5
> Environment: TOMCAT 6.0.24, SOLR 1.5Dev, Solrj1.5Dev Tika
>Reporter: elsadek
>  Labels: Solr, Solrj, Tika, Tomcat6
>
> When posting PDF files using Solrj, the only response we get from Solr is the 
> server response status; we never know whether the 
> PDF was actually parsed. Checking the log, I found that Tika wasn't able 
> to succeed with some PDF files because of the nature of their content (text in 
> images only) or because they are corrupted:
> 
>  25 mars 2010 14:54:07 org.apache.pdfbox.util.PDFStreamEngine 
> processOperator
>  INFO: unsupported/disabled operation: EI
>
>  25 mars 2010 14:54:02 org.apache.pdfbox.filter.FlateFilter decode
>  GRAVE: Stop reading corrupt stream
> The question is: how can I catch these kinds of exceptions through Solrj?






[jira] [Closed] (SOLR-2130) Empty index directory causes FileNotFoundException error when starting in-memory SOLR server (RAMDirectory)

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2130.
---
Resolution: Won't Fix

This code has been changed multiple times. If the problem still exists in the 
latest code, a new issue can be created.

> Empty index directory causes FileNotFoundException error when starting 
> in-memory SOLR server (RAMDirectory)
> ---
>
> Key: SOLR-2130
> URL: https://issues.apache.org/jira/browse/SOLR-2130
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 1.4.1
> Environment: Windows XP/Windows 7
>Reporter: Ian Rowland
> Attachments: TechSpike.zip
>
>
> When creating an in-memory Solr Server (using RAMDIrectory) if an empty index 
> directory exists when the server is created the following error occurs:
> java.lang.RuntimeException: java.io.FileNotFoundException: no segments* file 
> found in org.apache.lucene.store.RAMDirectory@177b093: files:
>   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1068)
> The code expects a segment file to be present - but as it is an in-memory 
> server there isn't one to find and the error occurs.
> The workaround is to ensure the directory is deleted before starting the 
> server, but the creation process creates another empty index folder :(






[jira] [Commented] (SOLR-7826) Permission issues when creating cores with bin/solr

2016-10-01 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539560#comment-15539560
 ] 

Shawn Heisey commented on SOLR-7826:


+1 to comments by [~hossman].  I just opened SOLR-9590 for exploration.

If "bin/solr create" IS run as root, the idea of paying attention to the owner 
of the parent directory and matching it seems like a good idea too.

> Permission issues when creating cores with bin/solr
> ---
>
> Key: SOLR-7826
> URL: https://issues.apache.org/jira/browse/SOLR-7826
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: newdev
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-7826.patch, SOLR-7826.patch
>
>
> Ran into an interesting situation on IRC today.
> Solr has been installed as a service using the shell script 
> install_solr_service.sh ... so it is running as an unprivileged user.
> User is running "bin/solr create" as root.  This causes permission problems, 
> because the script creates the core's instanceDir with root ownership, then 
> when Solr is instructed to actually create the core, it cannot create the 
> dataDir.
> Enhancement idea:  When the install script is used, leave breadcrumbs 
> somewhere so that the "create core" section of the main script can find it 
> and su to the user specified during install.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-1998) Download directories should have hash files as well as sigs

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-1998.
---
Resolution: Fixed

The hashes are present on the download sites. Must be just an ancient issue 
that's been resolved elsewhere.

> Download directories should have hash files as well as sigs
> ---
>
> Key: SOLR-1998
> URL: https://issues.apache.org/jira/browse/SOLR-1998
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 1.4.1
> Environment: http://www.apache.org/dist/lucene/solr/1.4.1/
>Reporter: Sebb
>
> The 1.4.1 release in http://www.apache.org/dist/lucene/solr/1.4.1/ does not 
> have any MD5 or SHA hash files.
> Previous releases had hash files; so should 1.4.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-2094) When using a XPathEntityProcessor nested within a SQLEntityProcessor, the xpathReader isn't reinitilized for each new document

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-2094:
---

Assignee: Alexandre Rafalovitch

> When using a XPathEntityProcessor nested within a SQLEntityProcessor, the 
> xpathReader isn't reinitilized for each new document 
> ---
>
> Key: SOLR-2094
> URL: https://issues.apache.org/jira/browse/SOLR-2094
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4.1
> Environment: Solr 1.4
>Reporter: Niall O'Connor
>Assignee: Alexandre Rafalovitch
>
> I have a DIH config with a SqlEntityProcessor that retrieves a table. I then 
> have a sub-entity of the XPathEntityProcessor type; this takes a value from 
> the table as input to parse an XML doc. 
> I find that the first document is created correctly, but the xpathReader 
> of the XPathEntityProcessor does not reinitialize for the following documents, 
> so the first document's input is reused. 
> <dataSource url="l"
> user="hivseqdb" password="hivseqdb" batchSize="1"/>
> <document>
> <entity name="Sequence" processor="SqlEntityProcessor"
> query="SELECT * FROM hivseqdb.sequenceentry where se_id != '1'">
> <entity dataSource="xmlFile"
> pk="fma-id"
> forEach="/tissue-samples"
> processor="XPathEntityProcessor"
> url="/opt/hivseqdb/solr/conf/sub_ontology_translated.xml"
> stream="true">
> <field xpath="/tissue-samples/tissue[@fma-id='${Sequence.sampleTissueCode}']/parent-path"/>
> </entity>
> </entity>
> </document>
> DocBuilder does call init on the XPathEntityProcessor, but there is a 
> conditional in the init method that checks if the xpathReader is null:
>   public void init(Context context) {
> super.init(context);
> if (xpathReader == null)
>   initXpathReader();
> pk = context.getEntityAttribute("pk");
> dataSource = context.getDataSource();
> rowIterator = null;
>   }
> So the xPathReader is used again and again. Is there a way to reinitialize the 
> xPathReader for every document? Or what is the specific design reason for 
> preserving it?
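A stdlib-only mimic of the null-guard caching pattern quoted above (all names are hypothetical, not DIH code) shows why every document after the first sees a stale reader:

```java
import java.util.function.Function;

public class CachedReaderDemo {
    private Function<String, String> xpathReader; // stands in for the real XPath reader
    private String boundInput;                    // input captured at first init

    void init(String input) {
        if (xpathReader == null) {                // the guard quoted in the report
            boundInput = input;
            xpathReader = s -> "parsed:" + boundInput; // built from the FIRST input only
        }
    }

    String read(String input) {
        init(input);                              // called for every document, as DocBuilder does
        return xpathReader.apply(input);
    }

    public static void main(String[] args) {
        System.out.println(new CachedReaderDemo().read("doc1")); // parsed:doc1
        CachedReaderDemo d = new CachedReaderDemo();
        d.read("doc1");
        System.out.println(d.read("doc2"));       // still parsed:doc1 -- the stale reader
    }
}
```

Dropping the null guard (or clearing xpathReader in destroy()) would rebuild the reader per document, at the cost of re-parsing the configuration each time.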



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9590) Service installation -- save breadcrumbs for other scripts to use

2016-10-01 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-9590:
--

 Summary: Service installation -- save breadcrumbs for other 
scripts to use
 Key: SOLR-9590
 URL: https://issues.apache.org/jira/browse/SOLR-9590
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: scripts and tools
Reporter: Shawn Heisey
Priority: Minor


When I opened SOLR-7826, I brought up the idea of installation breadcrumbs.

If we had good breadcrumb data saved in the install directory by the install 
script, a number of other scripts could use the breadcrumbs to gather relevant 
data about the *service* installation, for additional safety and more automatic 
operation.

The "bin/solr create" command could verify that it is running as the exact same 
user that installed Solr, and abort if they don't match.

What if zkcli.sh (and bin/solr zookeeper options) no longer needed to be told 
where zookeeper was, because it could find its way to 
/etc/default/.in.sh or $SOLR_HOME/solr.xml and grab zkHost from there? 
 The same thing could happen for zkHost in the idea that I filed as SOLR-9587.
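The breadcrumb idea above could be as simple as a properties file written at install time and read back by later scripts. A stdlib-only sketch, where the file name, keys, and methods are all invented for illustration:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class Breadcrumbs {
    // What the install script would record (keys are invented for illustration).
    static void write(Path file, String user, String zkHost) {
        Properties p = new Properties();
        p.setProperty("install.user", user);
        p.setProperty("zkHost", zkHost);
        try (OutputStream out = Files.newOutputStream(file)) {
            p.store(out, "hypothetical breadcrumbs from install_solr_service.sh");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // What "bin/solr create" or zkcli.sh could read back.
    static Properties read(Path file) {
        Properties p = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            p.load(in);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return p;
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("solr-install", ".properties");
        write(f, "solr", "zk1:2181,zk2:2181/solr");
        Properties p = read(f);
        // A create command could abort when install.user != the current user.
        System.out.println(p.getProperty("install.user"));
        System.out.println(p.getProperty("zkHost"));
    }
}
```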




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-1917) Possible Null Pointer Exception in highlight or debug component

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-1917.
---
Resolution: Cannot Reproduce

Ancient bug that may or may not have been triggered by some other ancient code. 
No longer relevant. If it still happens, a new issue can be opened.

> Possible Null Pointer Exception in highlight or debug component
> ---
>
> Key: SOLR-1917
> URL: https://issues.apache.org/jira/browse/SOLR-1917
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 1.4
>Reporter: David Bowen
> Attachments: SOLR-1917.patch
>
>
> This bug may only show up if you have the patch for SOLR-1143 installed, but 
> it should be fixed in any case since the existing logic is wrong.  It is 
> explicitly looking for the nulls that can cause the exception, but only after 
> the exception would have already happened.
> What happens is that there is an array of Map.Entry objects which is 
> converted into a SimpleOrderedMap, and then there is a method that iterates 
> over the SimpleOrderedMap looking for null names.  That's wrong because it is 
> the array elements themselves which can be null, so constructing the 
> SimpleOrderedMap throws an NPE.
> I will attach a patch.
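The fix direction described above (filter null entries BEFORE constructing the ordered map, since construction itself is what throws) can be sketched with stdlib types standing in for SimpleOrderedMap; all names here are illustrative:

```java
import java.util.AbstractMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class NullSafeMapBuild {
    // Build an ordered name/value map, skipping null entries up front; the bug
    // report notes the real code checked for nulls only AFTER construction.
    static LinkedHashMap<String, Object> build(Map.Entry<String, Object>[] entries) {
        LinkedHashMap<String, Object> m = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : entries) {
            if (e == null) continue;               // the element itself may be null
            m.put(e.getKey(), e.getValue());
        }
        return m;
    }

    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        Map.Entry<String, Object>[] entries = new Map.Entry[] {
            new AbstractMap.SimpleEntry<String, Object>("score", 1.0),
            null,                                   // would have caused the NPE
            new AbstractMap.SimpleEntry<String, Object>("id", "doc1")
        };
        System.out.println(build(entries));        // {score=1.0, id=doc1}
    }
}
```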



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-1807) UpdateHandler plugin is not fully supported

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-1807.
---
Resolution: Won't Fix

High-level code discussion that is long out of date. No next action.

> UpdateHandler plugin is not fully supported
> ---
>
> Key: SOLR-1807
> URL: https://issues.apache.org/jira/browse/SOLR-1807
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 1.4
>Reporter: John Wang
>
> UpdateHandler is published as a supported Plugin, but code such as the 
> following:
> if (core.getUpdateHandler() instanceof DirectUpdateHandler2) {
> ((DirectUpdateHandler2) 
> core.getUpdateHandler()).forceOpenWriter();
>   } else {
> LOG.warn("The update handler being used is not an instance or 
> sub-class of DirectUpdateHandler2. " +
> "Replicate on Startup cannot work.");
>   } 
> suggest that it is really not fully supported.
> Must all implementations of UpdateHandler be subclasses of 
> DirectUpdateHandler2 for it to work with replication?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-1775) Replication of 300MB stops indexing for 5 seconds when syncing

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-1775.
---
Resolution: Cannot Reproduce

We no longer use this replication method. If a similar problem happens with a new 
version of Solr, a new issue can be created reflecting modern specs.

> Replication of 300MB stops indexing for 5 seconds when syncing
> --
>
> Key: SOLR-1775
> URL: https://issues.apache.org/jira/browse/SOLR-1775
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 1.4
> Environment: Centos 5.3
>Reporter: Bill Bell
>
> When using Java replication in v1.4 and doing a sync from master to slave, 
> the slave delays for about 5-10 seconds. When using rsync this does not occur.
> Is there a way to thread better or lower the priority to not impact queries 
> when it is bringing over the index files from the master? Maybe a separate 
> process?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-1605) ExtractingRequestHandler does not embed original document

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-1605.
---
Resolution: Won't Fix

Solr is STILL not a CMS and has no plans to become one.

> ExtractingRequestHandler does not embed original document
> -
>
> Key: SOLR-1605
> URL: https://issues.apache.org/jira/browse/SOLR-1605
> Project: Solr
>  Issue Type: Wish
>  Components: contrib - Solr Cell (Tika extraction)
>Affects Versions: 1.4
>Reporter: Lance Norskog
>
> The ExtractingRequestHandler does not have the option to embed the original 
> document file as a saved field. 
> This would be generally useful for content management system purposes, since 
> the search index can also directly serve the content making for a much 
> simpler system architecture.
> My use case is to highlight indexed HTML. Since the raw HTML text is not 
> indexed, it is not possible to request it highlighted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-1645) Add human content-type

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-1645.
---
Resolution: Implemented

The example files in the latest (6.x) Solr demonstrate how to do such mapping 
using the Scripting URP. This should be sufficient for those interested.

> Add human content-type
> --
>
> Key: SOLR-1645
> URL: https://issues.apache.org/jira/browse/SOLR-1645
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Affects Versions: 1.4
>Reporter: Khalid Yagoubi
> Fix For: 6.0, 4.9
>
>
> The idea is to allow Solr Cell to "calculate" the human content-type from the 
> extracted content-type and map it to a field in the schema, 
> so the user can search on "media:image" or "media:video".
> Ideas:
> 1) Hardcode a hashmap somewhere in the extraction classes and derive the human 
> content-type from the extracted content-type; I'm thinking of SolrContentHandler.java.
> 2) Write an XML file where we can put a mapping, like in tika-config.xml for 
> parsers.
> 3) Use tika-config.xml to get all supported mime-types.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2203) Include jboss-web.xml to WAR distribution

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2203.
---
Resolution: Won't Fix

We no longer support deploying Solr as WAR files.

> Include jboss-web.xml to WAR distribution
> -
>
> Key: SOLR-2203
> URL: https://issues.apache.org/jira/browse/SOLR-2203
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 1.4.1
>Reporter: George Gastaldi
>Priority: Minor
> Attachments: jboss-web.xml, jboss-web.xml
>
>
> Include jboss-web.xml inside WAR distribution to allow deployments on JBoss 
> to register SOLR under Context Root "/solr".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-2313) Clear root Entity cache when entity is processed

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-2313:
---

Assignee: Alexandre Rafalovitch

> Clear root Entity cache when entity is processed
> 
>
> Key: SOLR-2313
> URL: https://issues.apache.org/jira/browse/SOLR-2313
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4.1
> Environment: Linux, JDBC, Postgres 8.4.6
>Reporter: Shane
>Assignee: Alexandre Rafalovitch
>
> The current process clears the entity caches once all root entity elements 
> have been imported.  When a config file has dozens of root entities, the 
> result is one "idle in transaction" process for each entity processed, 
> effectively eating up the database's available connections.  The simple 
> solution would be to clear a root entity's cache once that entity has been 
> processed.
> The following is a diff that I used in my instance to clear the cache when 
> the entity completed:
> --- DocBuilder.java   2011-01-12 10:05:58.0 -0700
> +++ DocBuilder.java.new   2011-01-12 10:05:31.0 -0700
> @@ -435,6 +435,9 @@
>  writer.log(SolrWriter.END_ENTITY, null, null);
>}
>entityProcessor.destroy();
> + if(entity.isDocRoot) {
> + entity.clearCache();
> + }
>  }
>}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2374) Create UpdateFileRequestHandler

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2374.
---
Resolution: Implemented

This has been implemented in an alternative way for ManagedStopFilterFactory 
and ManagedSynonymFilterFactory in SOLR-5653 and related issues.

> Create UpdateFileRequestHandler
> ---
>
> Key: SOLR-2374
> URL: https://issues.apache.org/jira/browse/SOLR-2374
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Affects Versions: 1.4.1
>Reporter: Timo Schmidt
>  Labels: config, file, patch, upload
> Fix For: 6.0, 4.9
>
> Attachments: UpdateFileRequestHandler.patch, patchV2.diff
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> It would be nice to be able to update files like synonyms.txt and 
> stopwords.txt with a separate request handler. Since I am very new to Solr 
> development, I've prepared a patch with a new UpdateFileRequestHandler. Maybe 
> it would be good to refactor the existing fileRequestHandler.
> Currently it is implemented so that you need to whitelist all files that should 
> be editable. I think this is better for security reasons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read

2016-10-01 Thread Susheel Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539501#comment-15539501
 ] 

Susheel Kumar commented on SOLR-8146:
-

Thank you, Noble. I am going thru the changes and will get back to you.

> Allowing SolrJ CloudSolrClient to have preferred replica for query/read
> ---
>
> Key: SOLR-8146
> URL: https://issues.apache.org/jira/browse/SOLR-8146
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: 5.3
>Reporter: Arcadius Ahouansou
> Attachments: SOLR-8146.patch, SOLR-8146.patch, SOLR-8146.patch, 
> SOLR-8146.patch
>
>
> h2. Background
> Currently, the CloudSolrClient randomly picks a replica to query.
> This is done by shuffling the list of live URLs to query then, picking the 
> first item from the list.
> This ticket is to allow more flexibility and control, to some extent, over which 
> URLs will be picked for queries.
> Note that this is for queries only and would not affect update/delete/admin 
> operations.
> h2. Implementation
> The current patch uses regex pattern and moves to the top of the list of URLs 
> only those matching the given regex specified by the system property 
> {code}solr.preferredQueryNodePattern{code}
> Initially, I thought it may be good to have Solr nodes tagged with a string 
> pattern (snitch?) and use that pattern for matching the URLs.
> Any comment, recommendation or feedback would be appreciated.
> h2. Use Cases
> There are many cases where the ability to choose the node where queries go 
> can be very handy:
> h3. Special node for manual user queries and analytics
> One may have a SolrCloud cluster where every node hosts the same set of 
> collections, with:  
> - multiple large SolrCloud nodes (L) used for production apps, and 
> - have 1 small node (S) in the same cluster with less ram/cpu used only for 
> manual user queries, data export and other production issue investigation.
> This ticket would allow configuring the applications using SolrJ to query 
> only the (L) nodes.
> This use case is similar to the one described in SOLR-5501 raised by [~manuel 
> lenormand]
> h3. Minimizing network traffic
>  
> For simplicity, let's say that we have a SolrCloud cluster deployed on 2 (or 
> N) separate racks: rack1 and rack2.
> On each rack, we have a set of SolrCloud VMs as well as a couple of client 
> VMs querying solr using SolrJ.
> All solr nodes are identical and have the same number of collections.
> What we would like to achieve is:
> - clients on rack1 will by preference query only SolrCloud nodes on rack1, 
> and 
> - clients on rack2 will by preference query only SolrCloud nodes on rack2.
> - Cross-rack read will happen if and only if one of the racks has no 
> available Solr node to serve a request.
> In other words, we want read operations to be local to a rack whenever 
> possible.
> Note that write/update/delete/admin operations should not be affected.
> Note that in our use case, we have a cross DC deployment. So, replace 
> rack1/rack2 by DC1/DC2
> Any comment would be very appreciated.
> Thanks.
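The shuffle-then-prefer ordering described in the implementation section can be sketched with only the JDK, taking a {code}solr.preferredQueryNodePattern{code}-style regex as input. Class and method names below are illustrative, not the patch's API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.regex.Pattern;

public class PreferredNodes {
    // Shuffle the live URLs (as CloudSolrClient already does), then stably move
    // URLs matching the preferred-node regex to the front of the list.
    static List<String> order(List<String> liveUrls, String regex) {
        List<String> urls = new ArrayList<>(liveUrls);
        Collections.shuffle(urls);
        Pattern p = Pattern.compile(regex);
        // List.sort is stable, so the shuffled order is preserved within each group.
        urls.sort(Comparator.comparingInt((String u) -> p.matcher(u).matches() ? 0 : 1));
        return urls;
    }

    public static void main(String[] args) {
        List<String> live = Arrays.asList(
                "http://rack2-a:8983/solr", "http://rack1-a:8983/solr", "http://rack2-b:8983/solr");
        // With the regex below, a rack1 node always ends up first.
        System.out.println(order(live, ".*rack1.*"));
    }
}
```

Cross-rack fallback comes for free: non-matching URLs are not removed, only demoted, so they still serve requests when no preferred node is available.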



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-1329) StatsComponent needs trie support

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-1329.
---
Resolution: Cannot Reproduce

Ancient code, many versions removed. A new issue can be created if something similar is 
seen again.

> StatsComponent needs trie support
> -
>
> Key: SOLR-1329
> URL: https://issues.apache.org/jira/browse/SOLR-1329
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 1.4
>Reporter: Yonik Seeley
>
> Currently, the stats component uses FieldCache.StringIndex - won't work for 
> trie fields.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-1509) ShowFileRequestHandler has missleading error when asked for absolute path

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-1509.
---
Resolution: Not A Bug

An ancient, confusing error message. If anybody is still confused by it, the 
issue can be reopened or a new issue created.

> ShowFileRequestHandler has missleading error when asked for absolute path
> -
>
> Key: SOLR-1509
> URL: https://issues.apache.org/jira/browse/SOLR-1509
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 1.4
>Reporter: Simon Rosenthal
>Priority: Minor
>
> When a user attempts to use the ShowFileRequestHandler (ie: /admin/file ) to 
> access a file using an absolute path (which may result from solr.xml 
> containing an absolute path for schema.xml or solrconfig.xml outside of the 
> normal conf dir) then the error message indicates that a file with the path 
> consisting of the confdir + the absolute path can't be found.  The Handler 
> should explicitly check for absolute paths (like it checks for "..") and the error 
> message should make it clear that absolute paths are not allowed.
> Example of current behavior...
> {noformat}
> schema path = /home/solrdata/rig1/conf/schema.xml
> url displayed in admin form = 
> http://host:port/solr/core1/admin/file/?file=/home/solrdata/rig1/conf/schema.xml
> error message: Can not find: schema.xml 
> [/path/to/core1/conf/directory/home/solrdata/rig1/conf/schema.xml]
> {noformat}
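The explicit check the report asks for can be sketched with plain java.io; method names and messages here are illustrative:

```java
import java.io.File;

public class FilePathCheck {
    // Validate a requested file path the way the report suggests: reject ".."
    // and absolute paths explicitly, with messages that say why.
    static String validate(String requested) {
        if (requested.contains("..")) {
            return "ERROR: relative path traversal (\"..\") is not allowed";
        }
        if (new File(requested).isAbsolute()) {
            return "ERROR: absolute paths are not allowed";
        }
        return "OK";
    }

    public static void main(String[] args) {
        System.out.println(validate("schema.xml"));                          // OK
        System.out.println(validate("../../etc/passwd"));                    // ERROR: traversal
        System.out.println(validate("/home/solrdata/rig1/conf/schema.xml")); // ERROR on Unix-like systems
    }
}
```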



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-2183) DataImportHandler treatment of case for dynamic column mapping vs explicit mapping

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-2183:
---

Assignee: Alexandre Rafalovitch

> DataImportHandler treatment of case for dynamic column mapping vs explicit 
> mapping
> --
>
> Key: SOLR-2183
> URL: https://issues.apache.org/jira/browse/SOLR-2183
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4.1
> Environment: OOB install, jetty, Win XP
>Reporter: K A
>Assignee: Alexandre Rafalovitch
>
> There is a difference to how DIH treats the case of columns when using the 
> DataImportHandler and using explicit mapping vs dynamic mapping. The exact 
> test cases I used are described below:
> -
> From http://wiki.apache.org/solr/DataImportHandler#A_shorter_data-config : 
> "It is possible to totally avoid the field entries in entities if the names 
> of the fields are same (case does not matter) as those in Solr schema" 
> I confirmed that matching the schema.xml field case to the database table is 
> needed for dynamic fields, and the wiki statement above is incorrect, or at 
> the very least confusing, possibly a bug. 
> My database is Oracle 10g and the column names have been created in all 
> uppercase in the database. 
> In Oracle: 
> Table name: wide_table 
> Column names: COLUMN_1 ... COLUMN_100 (yes, uppercase) 
> Please see following scenarios and results I found: 
> data-config.xml 
>  
>  
>  
> schema.xml 
>  multiValued="true" /> 
> Result: 
> Nothing Imported 
> = 
> data-config.xml 
>  
>  
>  
> schema.xml 
>  multiValued="true" /> 
> Result: 
> Note query column names changed to uppercase. 
> Nothing Imported 
> = 
> data-config.xml 
>  
>  
>  
> schema.xml 
>  multiValued="true" /> 
> Result: 
> Note ONLY the field entry was changed to caps 
> All records imported, with only COLUMN_100 id field. 
>  
> data-config.xml 
>  
>  
>  
> schema.xml 
>  multiValued="true" /> 
> Result: 
> Note BOTH the field entry was changed to caps in data-config.xml, and the 
> dynamicField wildcard in schema.xml 
> All records imported, with all fields specified. This is the behavior 
> desired. 
> = 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2217) Odd response format when using extractOnly option with Solr Cell

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2217.
---
Resolution: Cannot Reproduce

This is many versions behind for all components. If this issue is seen with 
more recent Solr version, a new ticket can be opened.

> Odd response format when using extractOnly option with Solr Cell
> 
>
> Key: SOLR-2217
> URL: https://issues.apache.org/jira/browse/SOLR-2217
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - Solr Cell (Tika extraction)
>Affects Versions: 1.4.1
> Environment: Ubuntu 10.4 LTS (Lucid), Java version "1.6.0_18" OpenJDK 
> Runtime Environment (IcedTea6 1.8.2) (6b18-1.8.2-4ubuntu2) OpenJDK 64-Bit 
> Server VM (build 16.0-b13, mixed mode), Tomcat 6
>Reporter: Donovan Jimenez
>Priority: Minor
>
> When using the extractOnly request parameter, the 
> oas.handler.extraction.ExtractingDocumentLoader is using stream.getName() for 
> parts of the response, but this name appears to be null because the 
> serialized response will return an unnamed string and a list named 
> "null_metadata". It seems more appropriate to use "content" (producing a 
> named string "content" and list "content_metadata") or to use whatever 
> oas.handler.extraction.SolrContentHandler is using for the content field name 
> (coded to "content", but mappable by request parameters).
> rsp.add(*stream.getName()*, writer.toString());
> writer.close();
> String[] names = metadata.names();
> NamedList metadataNL = new NamedList();
> for (int i = 0; i < names.length; i++) {
>   String[] vals = metadata.getValues(names[i]);
>   metadataNL.add(names[i], vals);
> }
> rsp.add(*stream.getName()* + "_metadata", metadataNL);
> This is mostly to avoid having to use the odd empty string and null_metadata 
> identifiers in unserialized data (like JSON, PHP, RUBY, etc)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2220) DIH: ClassCastException in MailEntityProcessor

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539491#comment-15539491
 ] 

Alexandre Rafalovitch commented on SOLR-2220:
-

We are now multiple Java versions ahead, plus some DIH changes. However, the mail 
component is not used much, so theoretically this may still be an issue.

Does this still happen with the latest Solr? 

> DIH: ClassCastException in MailEntityProcessor
> --
>
> Key: SOLR-2220
> URL: https://issues.apache.org/jira/browse/SOLR-2220
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4, 1.4.1
>Reporter: Koji Sekiguchi
>
> I hit ClassCastException in MailEntityProcessor, but it ignored due to the 
> following catch block:
> {code}
> private Map<String, Object> getDocumentFromMail(Message mail) {
>   Map<String, Object> row = new HashMap<String, Object>();
>   try {
> addPartToDocument(mail, row, true);
> return row;
>   } catch (Exception e) {
> return null;
>   }
> }
> {code}
> The exception is "com.sun.mail.imap.IMAPInputStream cannot be cast to 
> javax.mail.Multipart" in addPartToDocument() method:
> {code}
> if (part.isMimeType("multipart/*")) {
>   Multipart mp = (Multipart) part.getContent();
> :
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2222) Merge duplicates documents with uniqueKey

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2222.
---
Resolution: Won't Fix

This code path has been rewritten multiple times. If the issue still exists in 
new approach, a new ticket can be created.

> Merge duplicates documents with uniqueKey
> -
>
> Key: SOLR-2222
> URL: https://issues.apache.org/jira/browse/SOLR-2222
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 1.4.1
>Reporter: Andreas Laager
>
> When merging one core into an other one could get multiple documents for one 
> uniqueKey. As a result the facet counts are wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2238) SolrResponseBase.toString() throws unexpected NullPointer exception

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2238.
---
Resolution: Won't Fix

Ancient code path. If it happens again with the latest Solr, a new issue can be opened.

> SolrResponseBase.toString() throws unexpected NullPointer exception
> ---
>
> Key: SOLR-2238
> URL: https://issues.apache.org/jira/browse/SOLR-2238
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 1.4, 1.4.1
> Environment: SolrJ 1.4.1 running om IBM WebSphere Application Server 
> 7 on AIX.
>Reporter: Dick Larsson
>Priority: Minor
>
> SolrResponseBase.toString() method, row 55 does not check for null and can 
> throw NullPointerException.
> SolrResponseBase has a field 
> private NamedList response
> The SolrResponseBase.toString() just returns response.toString() without 
> checking if field "response" is null, causing a NullPointerException to be 
> thrown.
> You can create an instance of QueryResponse.
> But if you invoke toString a nullpointer will be thrown
> QueryResponse rsp = new QueryResponse();
> System.out.println(rsp);
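The null guard the reporter asks for could look like this stdlib-only sketch; the class mimics the shape of the report but is not the SolrJ class itself:

```java
public class NullSafeResponse {
    private Object response;                 // null until a request has been executed

    public void setResponse(Object r) { response = r; }

    @Override
    public String toString() {
        // Guard: a freshly constructed response has nothing to print yet.
        return response == null ? "{empty response}" : response.toString();
    }

    public static void main(String[] args) {
        NullSafeResponse rsp = new NullSafeResponse();
        System.out.println(rsp);             // {empty response} -- no NPE
        rsp.setResponse("responseHeader={status=0}");
        System.out.println(rsp);
    }
}
```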



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2256) CommonsHttpSolrServer.deleteById(emptyList) causes SolrException: missing_content_stream

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2256.
---
Resolution: Not A Problem

Ancient code, and the only attempt to replicate it suggested it was probably not a bug. 

> CommonsHttpSolrServer.deleteById(emptyList) causes SolrException: 
> missing_content_stream
> 
>
> Key: SOLR-2256
> URL: https://issues.apache.org/jira/browse/SOLR-2256
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 1.4.1
>Reporter: Maxim Valyanskiy
>Priority: Minor
>
> Call to deleteById method of CommonsHttpSolrServer with empty list causes 
> following exception:
> org.apache.solr.common.SolrException: missing_content_stream
> missing_content_stream
> request: http://127.0.0.1:8983/solr/update/javabin
> at 
> org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:435)
> at 
> org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:244)
> at 
> org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)
> at org.apache.solr.client.solrj.SolrServer.deleteById(SolrServer.java:106)
> at 
> ru.org.linux.spring.SearchQueueListener.reindexMessage(SearchQueueListener.java:89)
> Here is TCP stream captured by Wireshark:
> =
> POST /solr/update HTTP/1.1
> Content-Type: application/x-www-form-urlencoded; charset=UTF-8
> User-Agent: Solr[org.apache.solr.client.solrj.impl.CommonsHttpSolrServer] 1.0
> Host: 127.0.0.1:8983
> Content-Length: 20
> wt=javabin&version=1
> =
> HTTP/1.1 400 missing_content_stream
> Content-Type: text/html; charset=iso-8859-1
> Content-Length: 1401
> Server: Jetty(6.1.3)
> = [ html reply skipped ] ===
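A simple caller-side workaround is to skip the request entirely when the id list is empty, so no content stream is ever built. A plain-Java sketch (the guard method and the commented-out deleteById call are illustrative, not SolrJ's own fix):

```java
import java.util.Collections;
import java.util.List;

class DeleteGuard {
    /** true only when a delete request has actual work to send */
    static boolean shouldSendDelete(List<String> ids) {
        return ids != null && !ids.isEmpty();
    }

    public static void main(String[] args) {
        List<String> ids = Collections.emptyList();
        if (shouldSendDelete(ids)) {
            // server.deleteById(ids);  // hypothetical guarded call
        }
        System.out.println(shouldSendDelete(ids)); // prints false
    }
}
```

An empty or null list never reaches the server, avoiding the 400 missing_content_stream response shown in the capture above.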






[jira] [Closed] (SOLR-2266) java.lang.ArrayIndexOutOfBoundsException in field cache when using a tdate field in a boost function with rord()

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2266.
---
Resolution: Won't Fix

The underlying components have changed multiple times. If something like this 
is seen again, a new case can be opened.

> java.lang.ArrayIndexOutOfBoundsException in field cache when using a tdate 
> field in a boost function with rord()
> 
>
> Key: SOLR-2266
> URL: https://issues.apache.org/jira/browse/SOLR-2266
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 1.4.1
> Environment: Mac OS 10.6
> java version "1.6.0_22"
> Java(TM) SE Runtime Environment (build 1.6.0_22-b04-307-10M3261)
> Java HotSpot(TM) 64-Bit Server VM (build 17.1-b03-307, mixed mode)
>Reporter: Peter Wolanin
>
> I have been testing a switch to long and tdate instead of int and date fields 
> in the schema.xml for our Drupal integration.  This indexes fine, but search 
> fails with a 500 error.
> {code}
> INFO: [d7] webapp=/solr path=/select 
> params={spellcheck=true=true=1=1=term=map=json=10=1.2=id,entity_id,entity,bundle,bundle_name,nid,title,comment_count,type,created,changed,score,path,url,uid,name=0=true=term=recip(rord(created),4,19,19)^200.0}
>  status=500 QTime=4 
> Dec 5, 2010 11:52:28 AM org.apache.solr.common.SolrException log
> SEVERE: java.lang.ArrayIndexOutOfBoundsException: 39
> at 
> org.apache.lucene.search.FieldCacheImpl$StringIndexCache.createValue(FieldCacheImpl.java:721)
> at 
> org.apache.lucene.search.FieldCacheImpl$Cache.get(FieldCacheImpl.java:224)
> at 
> org.apache.lucene.search.FieldCacheImpl.getStringIndex(FieldCacheImpl.java:692)
> at 
> org.apache.solr.search.function.ReverseOrdFieldSource.getValues(ReverseOrdFieldSource.java:61)
> at 
> org.apache.solr.search.function.TopValueSource.getValues(TopValueSource.java:57)
> at 
> org.apache.solr.search.function.ReciprocalFloatFunction.getValues(ReciprocalFloatFunction.java:61)
> at 
> org.apache.solr.search.function.FunctionQuery$AllScorer.<init>(FunctionQuery.java:123)
> at 
> org.apache.solr.search.function.FunctionQuery$FunctionWeight.scorer(FunctionQuery.java:93)
> at 
> org.apache.lucene.search.BooleanQuery$BooleanWeight.scorer(BooleanQuery.java:297)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:250)
> at org.apache.lucene.search.Searcher.search(Searcher.java:171)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListAndSetNC(SolrIndexSearcher.java:1101)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:880)
> at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:341)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:182)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1089)
> at com.acquia.search.HmacFilter.doFilter(HmacFilter.java:62)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1089)
> at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:365)
> at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
> at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
> at 
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
> at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:211)
> at 
> org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
> at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
> at org.mortbay.jetty.Server.handle(Server.java:285)
> at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:502)
> at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:821)
> at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:513)
> at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:208)
> at 

[jira] [Closed] (SOLR-2278) PHPSerialized fails with Solr spatial

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2278.
---
Resolution: Cannot Reproduce

> PHPSerialized fails with Solr spatial
> -
>
> Key: SOLR-2278
> URL: https://issues.apache.org/jira/browse/SOLR-2278
> Project: Solr
>  Issue Type: Bug
>  Components: spatial
>Affects Versions: 1.4.1
>Reporter: Markus Jelsma
>
> Solr throws a java.lang.IllegalArgumentException: Map size must not be 
> negative when using the PHP Serialized response writer with the JTeam 
> SolrSpatial plugin in front. At first it may seem a bug in the plugin, but 
> according to some posts in the mailing list thread ( 
> http://lucene.472066.n3.nabble.com/Map-size-must-not-be-negative-with-spatial-results-php-serialized-td2039782.html
>  ) it just might be a bug in Solr.
> The only way to reproduce the issue that I know of is using LocalParams 
> to set spatial parameters and having the spatial search component activated 
> as last-components. If the query yields no results, the exception won't show 
> up.
>   
>class="nl.jteam.search.solrext.spatial.GeoDistanceComponent">
> 
>   distance
> 
>   
>   
>class="nl.jteam.search.solrext.spatial.SpatialTierQueryParserPlugin" 
> basedOn="dismax">
> 1
> 1
> 60
> ad_latitude
> ad_longitude
> _tier_
>   
> In the request handler:
> 
>   geodistance
> 
> query:
> http://localhost:8983/solr/search?q={!spatial%20lat=51.9562%20long=6.02606%20radius=432%20unit=km}auto=php






[jira] [Closed] (SOLR-2323) Solr should clean old replication temp dirs

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2323.
---
Resolution: Won't Fix

An issue against the replication method no longer used in Solr.

> Solr should clean old replication temp dirs
> ---
>
> Key: SOLR-2323
> URL: https://issues.apache.org/jira/browse/SOLR-2323
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 1.4.1
>Reporter: Markus Jelsma
>
> In a high commit rate environment (polling < 10s and commits every minute) 
> the shutdown/restart of a slave can result in old temp directories lying 
> around, filling up disk space as we go on. This happens with the following 
> scenario:
> 1. master has index version 2
> 2. slave downloads files for version 2 to index.2 temp directory
> 3. slave is shutdown
> 4. master increments to version 3
> 5. slave is started
> 6. slave downloads files for version 3 to index.3 temp directory
> The result is index.2 temp directory not getting deleted by any process. This 
> is very annoying in such an environment where nodes are restarted frequently 
> (for whatever reason). Working around the problem can be done by either 
> manually deleting the temp directories between shutdown and startup or by 
> calling the disablepoll command followed by an abortfetch command which will 
> (after a long wait) finally purge the temp directory.
> See this thread:
> http://www.mail-archive.com/solr-user@lucene.apache.org/msg45120.html
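The manual workaround described above (deleting stale temp directories between shutdown and startup) can be sketched in plain Java. The data directory path and the "index.N" naming are assumptions taken from the scenario in the report; this is not Solr's own cleanup code:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

class StaleIndexCleaner {
    /** removes "index.N" sibling temp dirs of the live "index" dir; returns count removed */
    static int cleanStale(Path dataDir) throws IOException {
        int removed = 0;
        // glob "index.*" matches index.2, index.3, ... but not the live "index"
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dataDir, "index.*")) {
            for (Path p : ds) {
                if (Files.isDirectory(p)) {
                    deleteRecursively(p);
                    removed++;
                }
            }
        }
        return removed;
    }

    static void deleteRecursively(Path p) throws IOException {
        if (Files.isDirectory(p)) {
            try (DirectoryStream<Path> ds = Files.newDirectoryStream(p)) {
                for (Path c : ds) deleteRecursively(c);
            }
        }
        Files.delete(p);
    }

    public static void main(String[] args) throws IOException {
        Path data = Files.createTempDirectory("data");
        Files.createDirectory(data.resolve("index"));   // live index: kept
        Files.createDirectory(data.resolve("index.2")); // stale temp: removed
        System.out.println(cleanStale(data));           // prints 1
    }
}
```

Run only while the slave is stopped, as in the scenario above, so the directory being deleted cannot belong to an in-progress fetch.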






[jira] [Closed] (SOLR-2478) DocumentObjectBinder.toSolrInputDocument not processing the dynamic field values.

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2478.
---
Resolution: Incomplete

A clarification question about ancient functionality. No next action.

> DocumentObjectBinder.toSolrInputDocument not processing the dynamic field 
> values.
> -
>
> Key: SOLR-2478
> URL: https://issues.apache.org/jira/browse/SOLR-2478
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 1.4.1
> Environment: solr 1.4.1
>Reporter: Nagarajan Shanmugam
>
> When I tried to add a bean using SolrServer.addBean(), it is actually not 
> processing the dynamic fields. 
> If I look at the index file using Lucene Luke, under the dynamic field name it 
> is creating the field.
> Example
>  Class Bean {
>@Field("oem_*")
>public Map oems;
> }
> schema.xml
>  multiValued="false"/>
> Bean bean = new Bean();
> bean.oems.put("oem_1","OEM1");
> solrServer.addBean(bean)
> When I call this method it adds the document to the index. 
> Using Lucene Luke I open the index and look for the field oem_1, but it is not 
> found. Instead it has created the field oem_* and added the map value 
> {oem_1:OEM1}
> Is this the way dynamic fields work?
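What the reporter expected can be sketched in plain Java: each map entry under a wildcard pattern like "oem_*" should become its own concrete field (oem_1), not a single literal field named "oem_*". The class and method here are illustrative, not the DocumentObjectBinder API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class DynamicFieldExpander {
    /** expands map entries into concrete field-name -> value pairs,
     *  validating each key against the dynamic-field pattern prefix */
    static Map<String, Object> expand(String pattern, Map<String, Object> values) {
        String prefix = pattern.substring(0, pattern.indexOf('*'));
        Map<String, Object> doc = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : values.entrySet()) {
            if (!e.getKey().startsWith(prefix))
                throw new IllegalArgumentException("key " + e.getKey()
                    + " does not match dynamic field pattern " + pattern);
            doc.put(e.getKey(), e.getValue()); // the map key is the full field name
        }
        return doc;
    }

    public static void main(String[] args) {
        Map<String, Object> oems = new LinkedHashMap<>();
        oems.put("oem_1", "OEM1");
        System.out.println(expand("oem_*", oems)); // prints {oem_1=OEM1}
    }
}
```

Under this expansion, Luke would show a field named oem_1 rather than a literal oem_* field holding the whole map.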






[jira] [Closed] (SOLR-4945) Japanese Autocomplete and Highlighter broken

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-4945.
---
Resolution: Won't Fix

The issue was fixed in a later (3.x) version of Solr.

> Japanese Autocomplete and Highlighter broken
> 
>
> Key: SOLR-4945
> URL: https://issues.apache.org/jira/browse/SOLR-4945
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 1.4.1
>Reporter: Shruthi Khatawkar
>
> Autocomplete is implemented with Highlighter functionality. This works fine 
> for most languages but breaks for Japanese.
> multiValued, termVector, termPositions and termOffset are set to true.
> Here is an example:
> Query: product classic.
> Result:
> Actual : 
> この商品の互換性の機種にproduct 1 やclassic Touch2 が記載が有りません。 USB接続ケーブルをproduct 1 やclassic 
> Touch2に付属の物を使えば利用出来ると思いますが 間違っていますか?
> With Highlighter (  tags being used):
> この商品の互換性の機種にproduct 1 やclassic Touch2 が記載が有りません。 
> USB接続ケーブルをproduct 1 やclassic Touch2に付属の物を使えば利用出来ると思いますが 間違っていますか?
> Though the query terms "product classic" are repeated twice, highlighting 
> happens only on the first instance, as shown above.
> Solr returns only the first instance's offset; the second instance is ignored.
> It is also observed that the highlighter repeats the first letter of the token 
> if there is a numeric.
> For example, for the query "product", where we have "product1", the highlighter 
> returns "pproduct1".






[jira] [Closed] (SOLR-2475) StackOverflow error from dataimporthandler

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2475.
---
Resolution: Cannot Reproduce

One-off error against the ancient DIH. Create a new issue if this ever happens 
again with the latest Solr.

> StackOverflow error from dataimporthandler
> --
>
> Key: SOLR-2475
> URL: https://issues.apache.org/jira/browse/SOLR-2475
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4.1
> Environment: windows
>Reporter: carole yang
>
> This error just showed up in the middle of a dataimport and aborted the process.  
> A StackOverflow looks more like a recursion-related problem. I am not sure if the 
> problem stems from MySQL or the recursive function buildDocument.
> This is the connection-related configuration in the data-config.xml.
>  url="jdbc:mysql://localhost:3306/cds" user="xxx" password="xxx" 
> batchSize="-1" encoding="UTF-8"/>
> Apr 21, 2011 1:01:22 AM org.apache.solr.handler.dataimport.DataImporter 
> doFullImport
> SEVERE: Full Import failed
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> java.lang.StackOverflowError
>   at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:424)
>   at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:242)
>   at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:180)
>   at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:331)
>   at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:389)
>   at 
> org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:370)
> Caused by: java.lang.StackOverflowError
>   at java.net.SocketOutputStream.socketWrite0(Native Method)
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
>   at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>   at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>   at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3294)
>   at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1940)
>   at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2109)
>   at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2642)
>   at 
> com.mysql.jdbc.StatementImpl.executeSimpleNonQuery(StatementImpl.java:1544)
>   at com.mysql.jdbc.RowDataDynamic.close(RowDataDynamic.java:198)
>   at com.mysql.jdbc.ResultSetImpl.realClose(ResultSetImpl.java:7556)
>   at com.mysql.jdbc.ResultSetImpl.close(ResultSetImpl.java:907)
>   at com.mysql.jdbc.StatementImpl.realClose(StatementImpl.java:2363)
>   at 
> com.mysql.jdbc.ConnectionImpl.closeAllOpenStatements(ConnectionImpl.java:1539)
>   at com.mysql.jdbc.ConnectionImpl.realClose(ConnectionImpl.java:4402)
>   at com.mysql.jdbc.ConnectionImpl.cleanup(ConnectionImpl.java:1315)
>   at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2675)
>   at 
> com.mysql.jdbc.StatementImpl.executeSimpleNonQuery(StatementImpl.java:1544)
>   at com.mysql.jdbc.RowDataDynamic.close(RowDataDynamic.java:198)
>   at com.mysql.jdbc.ResultSetImpl.realClose(ResultSetImpl.java:7556)
>   at com.mysql.jdbc.ResultSetImpl.close(ResultSetImpl.java:907)
>   at com.mysql.jdbc.StatementImpl.realClose(StatementImpl.java:2363)
>   at 
> com.mysql.jdbc.ConnectionImpl.closeAllOpenStatements(ConnectionImpl.java:1539)
>   at com.mysql.jdbc.ConnectionImpl.realClose(ConnectionImpl.java:4402)
>   at com.mysql.jdbc.ConnectionImpl.cleanup(ConnectionImpl.java:1315)
>   at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2675)
>   at 
> com.mysql.jdbc.StatementImpl.executeSimpleNonQuery(StatementImpl.java:1544)
>   at com.mysql.jdbc.RowDataDynamic.close(RowDataDynamic.java:198)
>   at com.mysql.jdbc.ResultSetImpl.realClose(ResultSetImpl.java:7556)
>   at com.mysql.jdbc.ResultSetImpl.close(ResultSetImpl.java:907)
>   at com.mysql.jdbc.StatementImpl.realClose(StatementImpl.java:2363)
>   at 
> com.mysql.jdbc.ConnectionImpl.closeAllOpenStatements(ConnectionImpl.java:1539)
>   at com.mysql.jdbc.ConnectionImpl.realClose(ConnectionImpl.java:4402)
>   at com.mysql.jdbc.ConnectionImpl.cleanup(ConnectionImpl.java:1315)
>   at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2675)
>   at 
> com.mysql.jdbc.StatementImpl.executeSimpleNonQuery(StatementImpl.java:1544)
>   at com.mysql.jdbc.RowDataDynamic.close(RowDataDynamic.java:198)
>   at com.mysql.jdbc.ResultSetImpl.realClose(ResultSetImpl.java:7556)
>   at 

[jira] [Closed] (SOLR-1238) exception in solrJ when authentication is used

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-1238.
---
Resolution: Won't Fix

This is a stuck-in-discussion bug that affects many-generations-old libraries. 
If something like this still happens, a new issue can be opened against the relevant 
versions.

> exception in solrJ when authentication is used
> --
>
> Key: SOLR-1238
> URL: https://issues.apache.org/jira/browse/SOLR-1238
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 1.3
>Reporter: Noble Paul
>Priority: Minor
> Attachments: SOLR-1238.patch
>
>
> see the thread http://markmail.org/thread/w36ih2fnphbubian
> {code}
> I am getting an error when I am using authentication in Solr. I
> followed the Wiki. The error does not appear when I am searching. Below is the
> code snippet and the error.
> Please note I am using a Solr 1.4 development build from SVN.
>HttpClient client=new HttpClient();
>AuthScope scope = new 
> AuthScope(AuthScope.ANY_HOST,AuthScope.ANY_PORT,null, null);
>client.getState().setCredentials(scope,new 
> UsernamePasswordCredentials("guest", "guest"));
>SolrServer server =new 
> CommonsHttpSolrServer("http://localhost:8983/solr",client);
>SolrInputDocument doc1=new SolrInputDocument();
>//Add fields to the document
>doc1.addField("employeeid", "1237");
>doc1.addField("employeename", "Ann");
>doc1.addField("employeeunit", "etc");
>doc1.addField("employeedoj", "1995-11-31T23:59:59Z");
>server.add(doc1);
> Exception in thread "main"
> org.apache.solr.client.solrj.SolrServerException:
> org.apache.commons.httpclient.ProtocolException: Unbuffered entity
> enclosing request can not be repeated.
>at 
> org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:468)
>at 
> org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:242)
>at 
> org.apache.solr.client.solrj.request.UpdateRequest.process(UpdateRequest.java:259)
>at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:63)
>at test.SolrAuthenticationTest.<init>(SolrAuthenticationTest.java:49)
>at test.SolrAuthenticationTest.main(SolrAuthenticationTest.java:113)
> Caused by: org.apache.commons.httpclient.ProtocolException: Unbuffered
> entity enclosing request can not be repeated.
>at 
> org.apache.commons.httpclient.methods.EntityEnclosingMethod.writeRequestBody(EntityEnclosingMethod.java:487)
>at 
> org.apache.commons.httpclient.HttpMethodBase.writeRequest(HttpMethodBase.java:2114)
>at 
> org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1096)
>at 
> org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
>at 
> org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
>at 
> org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
>at 
> org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
>at 
> org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:415)
>... 5 more.
> {code}
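The root cause behind "Unbuffered entity enclosing request can not be repeated" is that with non-preemptive basic auth the client sends the request, receives a 401 challenge, and must resend the body with credentials; a streamed (unbuffered) body can only be read once. A plain-Java illustration of making the payload repeatable by buffering it (this is the general idea, not the commons-httpclient API itself):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

class RepeatableBody {
    /** reads the stream fully so the bytes can be replayed any number of times */
    static byte[] buffer(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) out.write(chunk, 0, n);
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] body = buffer(new ByteArrayInputStream("<add>...</add>".getBytes()));
        // the first send and the post-401 resend each read a fresh stream:
        InputStream firstTry = new ByteArrayInputStream(body);
        InputStream resend   = new ByteArrayInputStream(body);
        System.out.println(firstTry.available() == resend.available()); // prints true
    }
}
```

Buffering trades memory for repeatability; the alternative is to avoid the challenge round-trip entirely by sending credentials preemptively.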






[jira] [Closed] (SOLR-1086) Need to rectify inconsistent behavior when people associate an analyzer with a non-TextField fieldType

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-1086.
---
Resolution: Fixed

This validation has been long fixed. Quick test throws back an error:

{noformat}
StrField (ignored) does not support specifying an analyzer
{noformat}

> Need to rectify inconsistent behavior when people associate an analyzer with 
> a non-TextField fieldType
> --
>
> Key: SOLR-1086
> URL: https://issues.apache.org/jira/browse/SOLR-1086
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Affects Versions: 1.1.0, 1.2, 1.3, 1.4
>Reporter: Hoss Man
>  Labels: newdev
>
> Currently, specifying an <analyzer> is only supported when using the 
> TextField class -- however:
>  1) no error is logged if an <analyzer> is declared for other field types
>  2) the analysis screen gives the mistaken impression that the analyzer is 
> being used...
> http://www.nabble.com/Field-tokenizer-question-to22594575.html






[jira] [Assigned] (SOLR-6520) Documentation web page is missing link to live Solr Reference Guide

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-6520:
---

Assignee: Alexandre Rafalovitch

> Documentation web page is missing link to live Solr Reference Guide
> ---
>
> Key: SOLR-6520
> URL: https://issues.apache.org/jira/browse/SOLR-6520
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 4.10
> Environment: web
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
>  Labels: documentation, website
>
> The [official document page for 
> Solr|https://lucene.apache.org/solr/documentation.html] is missing the link 
> to the live Solr Reference Guide. Only the link to the PDF is there. In fact, one 
> has to go to the wiki, it seems, to find the link. 
> It is also not linked from [the release-specific documentation 
> page|https://lucene.apache.org/solr/4_10_0/index.html] either.
> This means the search engines do not easily discover the new content and it 
> does not show up in searches when people look for information. It also 
> means people may hesitate to look at it, if they have to download the whole 
> PDF first.






[jira] [Assigned] (SOLR-6808) Create a shippable tutorial integrated with running Solr instance

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-6808:
---

Assignee: Alexandre Rafalovitch

> Create a shippable tutorial integrated with running Solr instance
> -
>
> Key: SOLR-6808
> URL: https://issues.apache.org/jira/browse/SOLR-6808
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>  Labels: beginners, documentation, usability
> Fix For: 5.0
>
>
> It would be good to have a tutorial shipping with Solr distribution that is 
> *active* and actually uses live Solr instance to demonstrate and explain 
> concepts.
> Some very rough ideas to start the conversation going:
> Start bin/solr start -e tutorial-basic
> Provide instructions as admin-extra files
> Use new REST functionality to create/modify types and admin handlers






[jira] [Assigned] (SOLR-6807) Make handleSelect=false by default

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-6807:
---

Assignee: Alexandre Rafalovitch

> Make handleSelect=false by default
> --
>
> Key: SOLR-6807
> URL: https://issues.apache.org/jira/browse/SOLR-6807
> Project: Solr
>  Issue Type: Wish
>Affects Versions: 4.10.2
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
>  Labels: solrconfig.xml
> Fix For: 5.0
>
>
> In the solrconfig.xml, we have a long explanation on the legacy 
> *handleSelect* section. Since we are cleaning up 
> legacy stuff for version 5, is it safe now to flip handleSelect's default to 
> be *false* and therefore remove both the attribute and the whole section 
> explaining it?
> Then, a section in the Reference Guide or even a blog post can explain what to do 
> for the old clients that still need it. But it does not seem to be needed 
> anymore for new users, and it possibly causes confusion now that we have 
> implicit, explicit and overlay handlers.






[jira] [Assigned] (SOLR-6960) Config reporting handler is missing initParams defaults

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-6960:
---

Assignee: Alexandre Rafalovitch

> Config reporting handler is missing initParams defaults
> ---
>
> Key: SOLR-6960
> URL: https://issues.apache.org/jira/browse/SOLR-6960
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>
> *curl http://localhost:8983/solr/techproducts/config/requestHandler* produces 
> (fragments):
> {quote}
>   "/update":{
> "name":"/update",
> "class":"org.apache.solr.handler.UpdateRequestHandler",
> "defaults":{}},
>   "/update/json/docs":{
> "name":"/update/json/docs",
> "class":"org.apache.solr.handler.UpdateRequestHandler",
> "defaults":{
>   "update.contentType":"application/json",
>   "json.command":"false"}},
> {quote}
> Where are the defaults from initParams:
> {quote}
> 
> 
>   text
> 
> 
>   
> 
>   \_src_
>   true
> 
>   
> {quote}
> Obviously, a test is missing as well to catch this.
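The merge the report expects the /config output to show can be sketched in plain Java: initParams defaults should be overlaid onto each handler's own defaults, with handler-local values winning. The class and key names here are illustrative, not Solr's implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class ConfigDefaultsMerge {
    /** initParams defaults go in first; handler-local defaults override them */
    static Map<String, String> effectiveDefaults(Map<String, String> initParams,
                                                 Map<String, String> handler) {
        Map<String, String> out = new LinkedHashMap<>(initParams);
        out.putAll(handler); // handler-local values take precedence
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> init = new LinkedHashMap<>();
        init.put("df", "text");                                // from initParams
        Map<String, String> handler = new LinkedHashMap<>();
        handler.put("update.contentType", "application/json"); // handler-local
        System.out.println(effectiveDefaults(init, handler));
        // prints {df=text, update.contentType=application/json}
    }
}
```

Under this merge, the /update entry in the report above would show "df":"text" instead of an empty defaults map.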






[jira] [Assigned] (SOLR-8723) Admin UIs' schema screen do not show the multiTerm part of the analyzer definition

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-8723:
---

Assignee: Alexandre Rafalovitch

> Admin UIs' schema screen do not show the multiTerm part of the analyzer 
> definition
> --
>
> Key: SOLR-8723
> URL: https://issues.apache.org/jira/browse/SOLR-8723
> Project: Solr
>  Issue Type: Improvement
>  Components: UI
>Affects Versions: 5.5
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
>
> If a field type is created with multiterm analyzer chain, it does not show up 
> in the Admin UI's Schema screen.
> This happens for both old and new UI implementations.






[jira] [Assigned] (SOLR-8854) Config Overlay reports phantom znodeVersion

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-8854:
---

Assignee: Alexandre Rafalovitch

> Config Overlay reports phantom znodeVersion
> ---
>
> Key: SOLR-8854
> URL: https://issues.apache.org/jira/browse/SOLR-8854
> Project: Solr
>  Issue Type: Bug
>  Components: config-api
>Affects Versions: 5.5, 6.0
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
> Fix For: 6.0
>
>
> Stock techproduct example. Calling 
> http://localhost:8983/solr/techproducts/config/overlay returns
> {noformat}
> {
>   "responseHeader":{
> "status":0,
> "QTime":4},
>   "overlay":{"znodeVersion":-1}}
> {noformat}
> This **znodeVersion** is phantom for several reasons:
> # There is no configoverlay.json file on the filesystem where all overlay 
> properties are supposed to be stored
> # The overall config endpoint does not include this property even though it 
> is supposed to be the superset of properties
> # What is this parameter doing in the non-cloud installation?






[jira] [Assigned] (SOLR-7812) Need a playground to quickly test analyzer stacks

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-7812:
---

Assignee: Alexandre Rafalovitch

> Need a playground to quickly test analyzer stacks
> -
>
> Key: SOLR-7812
> URL: https://issues.apache.org/jira/browse/SOLR-7812
> Project: Solr
>  Issue Type: Wish
>  Components: Schema and Analysis
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
>  Labels: analyzers, beginners, usability
>
> (from email by Robert Oschler)
> (Would be useful to have)... a convenient "playground" for testing index and 
> query filters?
> I'm  imagining a utility where you can select a set of index and query
> filters, and then enter  a string as a test "document" and a query string
> and see what kind of scores come back during a matching attempt.  This
> would be a big aid in crafting an indexing/query scheme to get the desired
> matching profile working.  Otherwise the only technique I can think of is
> to iteratively modify the schema file and retest with the admin panel with
> each combination of filters.






[jira] [Assigned] (SOLR-5711) Build Lucene Javadocs as a standalone artifact for Solr users

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-5711:
---

Assignee: Alexandre Rafalovitch

> Build Lucene Javadocs as a standalone artifact for Solr users
> -
>
> Key: SOLR-5711
> URL: https://issues.apache.org/jira/browse/SOLR-5711
> Project: Solr
>  Issue Type: Wish
>  Components: Build, documentation
>Affects Versions: 4.6.1
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
> Fix For: 4.9, 6.0
>
>
> Solr relies on a lot of Lucene packages. Some of the classes (e.g.
> Tokenizers) have a lot of documentation in the Javadoc.
> However, the Javadoc shipped with Solr only covers Solr-specific classes, and
> the full Lucene download is rather large (50 MB) to fetch just for the Javadocs.
> It would be useful to have a separate download of just the Lucene Javadocs for
> people who want/need to work offline and want all the relevant
> documentation.






[jira] [Assigned] (SOLR-6962) bin/solr stop -a should complain about missing parameter

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-6962:
---

Assignee: Alexandre Rafalovitch

> bin/solr stop -a should complain about missing parameter
> 
>
> Key: SOLR-6962
> URL: https://issues.apache.org/jira/browse/SOLR-6962
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.0
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
> Attachments: SOLR-6962.patch, SOLR-6962v2.patch
>
>
> *bin/solr* has a *-a* option that expects a second parameter. If one is not
> provided, it hangs. It should complain and exit, just like the *-e* option does.
> The most common way I hit this is when I mean to run *bin/solr stop \-all* and
> instead type *bin/solr stop \-a*, as I am more used to giving full-name
> options a double-dash prefix (Unix conventions, I guess).






[jira] [Assigned] (SOLR-4473) Reloading a core will not close (leak) associated DIH JDBC connection

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-4473:
---

Assignee: Alexandre Rafalovitch

> Reloading a core will not close (leak) associated DIH JDBC connection
> -
>
> Key: SOLR-4473
> URL: https://issues.apache.org/jira/browse/SOLR-4473
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 4.1
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
> Fix For: 4.9, 6.0
>
>
> I have DIH configured with Derby database. After I start Solr, I can run DIH 
> import fine. After I reload the core, DIH can no longer run with the 
> following message (excerpts): 
> ...
> SEVERE: Exception while processing: vac document : 
> SolrInputDocument[]:org.apache.solr.handler.dataimport.DataImportHandlerException:
>  Unable to execute query: select * from ALERTS Processing Document # 1
>   at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:71)
>   at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.(JdbcDataSource.java:253)
>   at 
> org.apache.solr.handler.dataimport.JdbcDataSource.getData(JdbcDataSource.java:210)
>   at 
> org.apache.solr.handler.dataimport.JdbcDataSource.getData(JdbcDataSource.java:38)
>   at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.initQuery(SqlEntityProcessor.java:59)
>   at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:73)
>   at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:243)
>   at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:465)
> Caused by: java.sql.SQLException: Another instance of Derby may have already 
> booted the database .






[jira] [Assigned] (SOLR-9005) In files example, update-script.js scripting URP fails with method signature mismatch

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-9005:
---

Assignee: Alexandre Rafalovitch  (was: Erik Hatcher)

> In files example, update-script.js scripting URP fails with method signature 
> mismatch
> -
>
> Key: SOLR-9005
> URL: https://issues.apache.org/jira/browse/SOLR-9005
> Project: Solr
>  Issue Type: Bug
>  Components: examples
>Affects Versions: 6.0
> Environment: Mac
> java version "1.8.0_31"
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
> Attachments: SOLR-9005.patch
>
>
> Following the *files* example README:
> bin/solr start
> bin/solr create -c files -d example/files/conf
> bin/post -c files docs/solr-analytics/index.html  # (just one reproducible 
> example)
> {noformat}
> Unable to invoke function processAdd in script: update-script.js: Can't 
> unambiguously select between fixed arity signatures [(java.lang.String, 
> java.io.Reader), (java.lang.String, java.lang.String)] of the method 
> org.apache.solr.analysis.TokenizerChain.tokenStream for argument types 
> [java.lang.String, null]
> {noformat}
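The ambiguity the script engine reports can be reproduced in plain Java, independent of Nashorn: {{null}} is assignable to both {{Reader}} and {{String}}, so neither overload wins. A minimal sketch (class and method names here are illustrative, not Solr or TokenizerChain code):

```java
import java.io.Reader;

public class OverloadAmbiguity {
    // Two fixed-arity overloads, mirroring the TokenizerChain.tokenStream pair:
    // null matches both Reader and String, hence the ambiguity.
    static String m(String field, Reader r) { return "reader"; }
    static String m(String field, String s) { return "string"; }

    public static void main(String[] args) {
        // m("f", null);  // would not even compile in Java: the reference is
        //                // ambiguous; Nashorn hits the same ambiguity at runtime.
        System.out.println(m("f", (String) null)); // an explicit cast disambiguates
    }
}
```

In the scripting URP the analogous fix is to avoid passing a bare {{null}} from update-script.js into an overloaded Java method.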






[jira] [Commented] (SOLR-2731) CSVResponseWriter should optionally return numfound

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539412#comment-15539412
 ] 

Alexandre Rafalovitch commented on SOLR-2731:
-

For exporting significant amounts of data, we now have the 
[/export|https://cwiki.apache.org/confluence/display/solr/Exporting+Result+Sets]
 handler. Specifically, for importing into another instance, there is [DIH with 
SolrEntityProcessor|https://wiki.apache.org/solr/DataImportHandler#SolrEntityProcessor].
 Would either of those have fulfilled the need?

The output of this writer, as proposed, would not even be able to go back 
into Solr; that would require updating a different component and an 
additional, completely new discussion.

> CSVResponseWriter should optionally return numfound
> ---
>
> Key: SOLR-2731
> URL: https://issues.apache.org/jira/browse/SOLR-2731
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers
>Affects Versions: 3.1, 3.3, 4.0-ALPHA
>Reporter: Jon Hoffman
>  Labels: patch
> Fix For: 3.1.1, 4.9, 6.0
>
> Attachments: SOLR-2731-R1.patch, SOLR-2731.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> An optional parameter "csv.numfound=true" could be added to the request, which 
> causes the first line of the response to be numFound. This would have no 
> impact on existing behavior, and those who are interested in that value can 
> simply read off the first line before passing the rest to their usual CSV parser.
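As the description suggests, a client opting in to the hypothetical csv.numfound=true flag would only need to peel off the first line before CSV parsing. A consumer-side sketch (the response string and method names are fabricated for illustration; this is not an existing Solr API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NumFoundCsv {
    // Split a hypothetical csv.numfound=true response: the first line carries
    // numFound, the remainder is the ordinary CSV payload (header + rows).
    static long readNumFound(String body, List<String> csvLinesOut) {
        String[] lines = body.split("\n");
        csvLinesOut.addAll(Arrays.asList(lines).subList(1, lines.length));
        return Long.parseLong(lines[0].trim());
    }

    public static void main(String[] args) {
        List<String> csv = new ArrayList<>();
        long numFound = readNumFound("1500\nid,name\n1,apple\n2,banana\n", csv);
        System.out.println(numFound + " matched; " + (csv.size() - 1) + " rows returned");
    }
}
```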






[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_102) - Build # 1849 - Unstable!

2016-10-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1849/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([B5475738B153FF6F:DDF8621261C9ED83]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:141)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:286)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-2731) CSVResponseWriter should optionally return numfound

2016-10-01 Thread jmlucjav (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539209#comment-15539209
 ] 

jmlucjav commented on SOLR-2731:


This would still be a nice addition for a very specific case (one that I faced 
recently): you want to export a lot of data, so you combine wt=csv with the 
cursorMark feature in order to reindex the output in another Solr instance. I 
managed to do without it, but this would have been a cleaner way.

> CSVResponseWriter should optionally return numfound
> ---
>
> Key: SOLR-2731
> URL: https://issues.apache.org/jira/browse/SOLR-2731
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers
>Affects Versions: 3.1, 3.3, 4.0-ALPHA
>Reporter: Jon Hoffman
>  Labels: patch
> Fix For: 3.1.1, 4.9, 6.0
>
> Attachments: SOLR-2731-R1.patch, SOLR-2731.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> An optional parameter "csv.numfound=true" could be added to the request, which 
> causes the first line of the response to be numFound. This would have no 
> impact on existing behavior, and those who are interested in that value can 
> simply read off the first line before passing the rest to their usual CSV parser.






[JENKINS] Lucene-Solr-Tests-6.x - Build # 459 - Still unstable

2016-10-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/459/

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'A val' for path 'response/params/x/a' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   
"response":{"znodeVersion":-1}},  from server:  
https://127.0.0.1:52606/solr/collection1_shard1_replica2

Stack Trace:
java.lang.AssertionError: Could not get expected value  'A val' for path 
'response/params/x/a' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{"znodeVersion":-1}},  from server:  
https://127.0.0.1:52606/solr/collection1_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([7073D0819238889F:F827EF5B3CC4E567]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:535)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:108)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-6677) Reduce logging during startup and shutdown

2016-10-01 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539039#comment-15539039
 ] 

Alan Woodward commented on SOLR-6677:
-

Good point [~hossman], have done just that.

> Reduce logging during startup and shutdown
> --
>
> Key: SOLR-6677
> URL: https://issues.apache.org/jira/browse/SOLR-6677
> Project: Solr
>  Issue Type: Bug
>  Components: logging
>Reporter: Noble Paul
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-6677-part-2.patch, SOLR-6677-part-4.patch, 
> SOLR-6677-part3.patch, SOLR-6677.patch, SOLR-6677.patch
>
>
> most of what is printed is neither helpful nor useful. It's just noise






[jira] [Commented] (SOLR-6677) Reduce logging during startup and shutdown

2016-10-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539036#comment-15539036
 ] 

ASF subversion and git services commented on SOLR-6677:
---

Commit a2e24d1fc55c796fd966135fe19e47a150437553 in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a2e24d1 ]

SOLR-6677: Call out logging changes in upgrading section of CHANGES


> Reduce logging during startup and shutdown
> --
>
> Key: SOLR-6677
> URL: https://issues.apache.org/jira/browse/SOLR-6677
> Project: Solr
>  Issue Type: Bug
>  Components: logging
>Reporter: Noble Paul
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-6677-part-2.patch, SOLR-6677-part-4.patch, 
> SOLR-6677-part3.patch, SOLR-6677.patch, SOLR-6677.patch
>
>
> most of what is printed is neither helpful nor useful. It's just noise






[jira] [Commented] (SOLR-6677) Reduce logging during startup and shutdown

2016-10-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539037#comment-15539037
 ] 

ASF subversion and git services commented on SOLR-6677:
---

Commit 250c9d93f39bc8d3992b0e924bcd0a7883ea0773 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=250c9d9 ]

SOLR-6677: Call out logging changes in upgrading section of CHANGES


> Reduce logging during startup and shutdown
> --
>
> Key: SOLR-6677
> URL: https://issues.apache.org/jira/browse/SOLR-6677
> Project: Solr
>  Issue Type: Bug
>  Components: logging
>Reporter: Noble Paul
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-6677-part-2.patch, SOLR-6677-part-4.patch, 
> SOLR-6677-part3.patch, SOLR-6677.patch, SOLR-6677.patch
>
>
> most of what is printed is neither helpful nor useful. It's just noise






[jira] [Commented] (SOLR-5344) SpellCheckCollatorTest.testEstimatedHitCounts fails in jenkins from time to time

2016-10-01 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15539023#comment-15539023
 ] 

Alan Woodward commented on SOLR-5344:
-

The tests are failing because the estimated hits for a collation are too high 
(12 or 14 in most cases, where the test is expecting between 6 and 10).  I've 
tried to trace through the code, and it always ends up in 
SpellCheckCollector#L152, after an EarlyTerminatingCollectorException has been 
raised.  I don't follow what the estimation calculation is actually supposed to 
be doing here, though - is this a real bug, or should we just relax the 
constraints on the test a bit?  [~hossman] [~jdyer] you were last in this code 
(albeit three years ago!) - what do you think?

> SpellCheckCollatorTest.testEstimatedHitCounts fails in jenkins from time to 
> time
> 
>
> Key: SOLR-5344
> URL: https://issues.apache.org/jira/browse/SOLR-5344
> Project: Solr
>  Issue Type: Bug
>Reporter: Robert Muir
>
> Doesn't happen very often, but maybe one I can fix. I'll look into it.






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6151 - Still unstable!

2016-10-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6151/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([7826A9D087041B0B:8F55478841ECB4ED]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1329)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11364 lines...]
   [junit4] Suite: 

[jira] [Commented] (SOLR-2012) stats component, min/max on a field with no values

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15538924#comment-15538924
 ] 

Alexandre Rafalovitch commented on SOLR-2012:
-

This code has been updated many, many times. Safe to close or still something 
to worry about?

> stats component, min/max on a field with no values
> --
>
> Key: SOLR-2012
> URL: https://issues.apache.org/jira/browse/SOLR-2012
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 1.4
>Reporter: Jonathan Rochkind
>
> : 
> : When I use the stats component on a field that has no values in the result 
> set
> : (ie, stats.missing == rowCount), I'd expect that 'min'and 'max' would be
> : blank.
> : 
> : Instead, they seem to be the smallest and largest float values or something,
> : min = 1.7976931348623157E308, max = 4.9E-324 .
> : 
> : Is this a bug?
> off the top of my head it sounds like it ... would you mind opening an 
> issue in Jira please?
> -Hoss
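Those two reported numbers are exactly Double.MAX_VALUE and Double.MIN_VALUE, which points at min/max accumulators seeded with those extremes and never updated when the field contributes no values. An illustrative sketch of that failure mode (not Solr's actual stats-component code):

```java
public class UninitializedStats {
    // min/max folded over the observed values; with zero values the seeds
    // (Double.MAX_VALUE / Double.MIN_VALUE) leak out as the "result".
    static double[] minMax(double[] values) {
        double min = Double.MAX_VALUE;
        double max = Double.MIN_VALUE;
        for (double v : values) {
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        return new double[] {min, max};
    }

    public static void main(String[] args) {
        double[] mm = minMax(new double[0]); // field has no values in the result set
        // Prints min=1.7976931348623157E308 max=4.9E-324, matching the report
        System.out.println("min=" + mm[0] + " max=" + mm[1]);
    }
}
```

The expected fix would be to report the stats as missing (blank) when the count of contributing values is zero, rather than exposing the seeds.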






[jira] [Closed] (SOLR-1752) SolrJ fails with exception when passing document ADD and DELETEs in the same request using XML request writer (but not binary request writer)

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-1752.
---
Resolution: Fixed

The XML update format gained a root tag some time ago, so the issue no longer 
seems to be present. Specifically, the following test does work:

{noformat}
curl http://127.0.0.1:8983/solr/test1/update/?commit=true -H "Content-Type: 
text/xml" --data-binary '181234'
{noformat}
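The underlying parse error ("Illegal to have multiple roots") comes from emitting {{}} and {{}} as sibling top-level elements in one request body. With a single enclosing root the payload is well-formed XML, which a sketch using only the JDK parser can demonstrate (the element content below is illustrative, not the exact bytes SolrJ emits):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class UpdateXmlRoot {
    // Two update commands wrapped in one <update> root, so a conforming
    // XML parser sees exactly one document element.
    static String wrapped() {
        return "<update>"
             + "<add><doc><field name=\"id\">id3</field></doc></add>"
             + "<delete><id>id001</id></delete>"
             + "</update>";
    }

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(wrapped().getBytes(StandardCharsets.UTF_8)));
        System.out.println(doc.getDocumentElement().getTagName()); // "update"
        // The unwrapped sibling form would fail with the same "multiple roots"
        // error quoted in the stack trace below.
    }
}
```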

> SolrJ fails with exception when passing document ADD and DELETEs in the same 
> request using XML request writer (but not binary request writer)
> -
>
> Key: SOLR-1752
> URL: https://issues.apache.org/jira/browse/SOLR-1752
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, update
>Affects Versions: 1.4
>Reporter: Jayson Minard
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-1752.patch, SOLR-1752.patch, SOLR-1752_2.patch
>
>
> Add this test to SolrExampleTests.java and it will fail when using the XML 
> Request Writer (now default), but not if you change the SolrExampleJettyTest 
> to use the BinaryRequestWriter.
> {code}
>  public void testAddDeleteInSameRequest() throws Exception {
> SolrServer server = getSolrServer();
> SolrInputDocument doc3 = new SolrInputDocument();
> doc3.addField( "id", "id3", 1.0f );
> doc3.addField( "name", "doc3", 1.0f );
> doc3.addField( "price", 10 );
> UpdateRequest up = new UpdateRequest();
> up.add( doc3 );
> up.deleteById("id001");
> up.setWaitFlush(false);
> up.setWaitSearcher(false);
> up.process( server );
>   }
> {code}
> terminates with exception:
> {code}
> Feb 3, 2010 8:55:34 AM org.apache.solr.common.SolrException log
> SEVERE: org.apache.solr.common.SolrException: Illegal to have multiple roots 
> (start tag in epilog?).
>  at [row,col {unknown-source}]: [1,125]
>   at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:72)
>   at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1089)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:365)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
>   at org.mortbay.jetty.Server.handle(Server.java:285)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:502)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:835)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:723)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:202)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:378)
>   at 
> org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:226)
>   at 
> org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442)
> Caused by: com.ctc.wstx.exc.WstxParsingException: Illegal to have multiple 
> roots (start tag in epilog?).
>  at [row,col {unknown-source}]: [1,125]
>   at 
> com.ctc.wstx.sr.StreamScanner.constructWfcException(StreamScanner.java:630)
>   at com.ctc.wstx.sr.StreamScanner.throwParseError(StreamScanner.java:461)
>   at 
> com.ctc.wstx.sr.BasicStreamReader.handleExtraRoot(BasicStreamReader.java:2155)
>   at 
> com.ctc.wstx.sr.BasicStreamReader.nextFromProlog(BasicStreamReader.java:2070)
>   at 
> com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2647)
>   at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1019)
>   at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:90)
>   at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:69)
>   ... 18 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-731) CoreDescriptor.getCoreContainer should not be public

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-731.
--
Resolution: Won't Fix

An ancient discussion that did not progress anywhere. If the concern is 
still valid, this issue can be reopened or, better, a new one opened against 
the more recent codebase.

> CoreDescriptor.getCoreContainer should not be public
> 
>
> Key: SOLR-731
> URL: https://issues.apache.org/jira/browse/SOLR-731
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 1.3
>Reporter: Henri Biestro
> Attachments: solr-731.patch
>
>
> For the very same reasons that CoreDescriptor.getCoreProperties did not need 
> to be public (aka SOLR-724)
> It also means the CoreDescriptor ctor should not need a CoreContainer
> The CoreDescriptor is only meant to be describing a "to-be created SolrCore".
> However, we need access to the CoreContainer from the SolrCore now that we 
> are guaranteed the CoreContainer always exists.
> This is also a natural consequence of SOLR-647 now that the CoreContainer is 
> not a map of CoreDescriptor but a map of SolrCore.






[jira] [Closed] (SOLR-905) wt type of json or ruby triggers error with legacy fields

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-905.
--
Resolution: Won't Fix

The replication method, the schema definitions, and pretty much everything 
else have changed since this issue was opened. If this can be reproduced 
against a more recent Solr, the issue can be reopened or a new one created.

> wt type of json or ruby triggers error with legacy fields
> -
>
> Key: SOLR-905
> URL: https://issues.apache.org/jira/browse/SOLR-905
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers
>Affects Versions: 1.3
>Reporter: Matt Mitchell
>
> Given an index/schema with a field named "word",
> the field name is then changed to "spell".
> Querying with wt=json or wt=ruby gives an error (pasted in below),
> whereas querying with wt=xml does not.
> This returns the expected results:
> q=*:*&wt=xml
> This returns the error:
> q=*:*&wt=json
> ERROR ->
> HTTP Status 400 - undefined field word
> type Status report
> message undefined field word
> description The request sent by the client was syntactically incorrect 
> (undefined field word).






[jira] [Closed] (SOLR-1464) CommonsHttpSolrServer does not conform to bean conventions

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-1464.
---
Resolution: Incomplete

The requested information was not provided for 3 years, and the 
implementation has changed multiple times since. A new issue can be opened 
if something similar is still important.

> CommonsHttpSolrServer does not conform to bean conventions
> --
>
> Key: SOLR-1464
> URL: https://issues.apache.org/jira/browse/SOLR-1464
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 1.3
>Reporter: Sean Fitzgerald
> Attachments: CommonsHttpSolrServer.java-BEAN.patch
>
>
> Several class variables (baseURL, allowCompression, maxRetries, etc) have 
> neither getters nor setters. By creating getters and setters for these 
> properties, we can allow other developers to extend CommonsHttpSolrServer 
> with additional functionality. It is also then necessary to use these methods 
> internally, as opposed to referencing the class variables directly.
> For example, by overriding a method like 
> public String getBaseURL()
> one could attach a host-monitoring or home-brewed DNS resolution service to 
> intercept requests, replicating the functionality of LBHttpSolrServer with 
> very little code.
> Attached is a basic patch (generated using Eclipse's Source tools) as a 
> minimal set of changes. I have not changed the general coding style of the 
> file, though that would be preferable. I am open to suggestions on whether 
> these methods should be public (as in the attached patch) or protected.
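The interception idea in the report can be sketched with plain Java. These classes are hypothetical stand-ins, not the real SolrJ API: when internals call the accessor instead of reading the field, a subclass can override `getBaseURL()` to rotate hosts, approximating LBHttpSolrServer.

```java
// Hypothetical stand-ins (NOT the actual SolrJ classes) illustrating the
// bean-convention request: internals go through the getter, so a subclass
// can intercept base-URL lookups.
import java.util.concurrent.atomic.AtomicInteger;

class HttpServerClient {
    private String baseURL;

    public HttpServerClient(String baseURL) {
        this.baseURL = baseURL;
    }

    // Bean-style accessors; request() calls the getter on purpose instead
    // of reading the field directly.
    public String getBaseURL() {
        return baseURL;
    }

    public void setBaseURL(String baseURL) {
        this.baseURL = baseURL;
    }

    public String request(String path) {
        return getBaseURL() + path; // a real client would issue HTTP here
    }
}

class RoundRobinClient extends HttpServerClient {
    private final String[] hosts;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinClient(String... hosts) {
        super(hosts[0]);
        this.hosts = hosts;
    }

    @Override
    public String getBaseURL() {
        // Intercept every base-URL lookup and rotate among the hosts.
        return hosts[next.getAndIncrement() % hosts.length];
    }
}
```

With this shape, `new RoundRobinClient("http://a:8983/solr", "http://b:8983/solr").request("/select")` alternates between the two hosts on successive calls; none of that is possible if other code reads the `baseURL` field directly.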






[jira] [Closed] (SOLR-732) Collation bug

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-732.
--
Resolution: Not A Problem

This was recommended to be closed 6 years ago.

> Collation bug
> -
>
> Key: SOLR-732
> URL: https://issues.apache.org/jira/browse/SOLR-732
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 1.3
>Reporter: Matthew Runo
>Priority: Minor
>
> Search term: Quicksilver. I get two suggestions: "Quicksilver" with a 
> frequency of 2, and "Quiksilver" with a frequency of 220.
> The term is reported as not correctly spelled (correctlySpelled: false),
> but the collation is built from the first suggestion, "Quicksilver", not 
> the one with the highest frequency.
> Other collations, for example 'runnning', come up with more than one 
> suggestion (cunning, running) but properly pick the 'best bet' based on 
> frequency. 






[jira] [Closed] (SOLR-2235) java.io.IOException: The specified network name is no longer available

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2235.
---
Resolution: Won't Fix

An ancient Tomcat issue. We no longer support Tomcat, on top of all the 
other reasons to close this.

> java.io.IOException: The specified network name is no longer available 
> ---
>
> Key: SOLR-2235
> URL: https://issues.apache.org/jira/browse/SOLR-2235
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 1.3, 1.4, 1.4.1
>Reporter: Reshma
>
> Using Solr 1.4 hosted with Tomcat 6 on Windows 2003
> Search becomes unavailable at times. At the time of failure, the Solr admin 
> page still loads, but when we make a search query we get the following 
> error:
> 
> HTTP Status 500 - The specified network name is no longer available 
> java.io.IOException: The specified network name is no longer 
> available at java.io.RandomAccessFile.readBytes(Native Method) at 
> java.io.RandomAccessFile.read(Unknown Source) at 
> org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.readInternal(SimpleFSDirectory.java:132)
>  at 
> org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:157)
>  at 
> org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
>  at org.apache.lucene.store.IndexInput.readVInt
> (IndexInput.java:80) at 
> org.apache.lucene.index.TermBuffer.read(TermBuffer.java:64) at 
> org.apache.lucene.index.SegmentTermEnum.next(SegmentTermEnum.java:129) at 
> org.apache.lucene.index.SegmentTermEnum.scanTo
> (SegmentTermEnum.java:160) at 
> org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:211) at 
> org.apache.lucene.index.TermInfosReader.get(TermInfosReader.java:179) at 
> org.apache.lucene.index.SegmentReader.docFreq
> (SegmentReader.java:975) at 
> org.apache.lucene.index.DirectoryReader.docFreq(DirectoryReader.java:627) at 
> org.apache.solr.search.SolrIndexReader.docFreq(SolrIndexReader.java:308) at 
> org.apache.lucene.search.IndexSearcher.docFreq
> (IndexSearcher.java:147) at 
> org.apache.lucene.search.Similarity.idfExplain(Similarity.java:765) at 
> org.apache.lucene.search.TermQuery$TermWeight.(TermQuery.java:46) at 
> org.apache.lucene.search.TermQuery.createWeight
> (TermQuery.java:146) at org.apache.lucene.search.Query.weight(Query.java:99) 
> at org.apache.lucene.search.Searcher.createWeight
> (Searcher.java:230) at 
> org.apache.lucene.search.Searcher.search(Searcher.java:171) at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1044)
>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:940)
>  at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:344) 
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:178)
>  at 
> org.apache.solr.handler.component.CollapseComponent.process(CollapseComponent.java:118)
>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195)
>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
>  at org.apache.solr.core.SolrCore.execute
> (SolrCore.java:1316) at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:336)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:239)
>  at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>  at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>  at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>  at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>  at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) 
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) 
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>  at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286) 
> at org.apache.coyote.http11.Http11AprProcessor.process
> (Http11AprProcessor.java:857) at 
> org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process
> (Http11AprProtocol.java:565) at 
> org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1509) at 
> java.lang.Thread.run
> (Unknown Source) 
> ===
> The error stops when we restart Tomcat. We are using a file server to store 
> the actual index files, which are not on the same machine as Solr/Tomcat. We 
> have checked and confirmed with the network team that there was no issue. Can 
> someone help us fix this issue?




[jira] [Closed] (SOLR-1514) Facet search results contain 0:0 entries although '0' values were not indexed.

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-1514.
---
Resolution: Cannot Reproduce

No configuration was provided to reproduce the issue.

> Facet search results contain 0:0 entries although '0' values were not indexed.
> --
>
> Key: SOLR-1514
> URL: https://issues.apache.org/jira/browse/SOLR-1514
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 1.3
> Environment: Solr is on: Linux  2.6.18-92.1.13.el5xen
>Reporter: Renata Perkowska
>
> Hi,
> in my JMeter acceptance tests I can see that, under some circumstances, 
> facet search results contain '0' both as keys and as values for the integer 
> field called 'year', although I never index zeros.
> When I do a normal search, I don't see any indexed fields with zeros. 
> When I run my facet test (using JMeter) in isolation, everything works fine. 
> It happens only when it is run after other tests (and other 
> indexing/deleting). On the other hand, other indexing should not be 
> influencing this test: at the end of each test I delete the indexed 
> documents, so before the facet test runs the index is empty.
> My facet test looks as follows:
>  1. Index group of documents
>  2. Perform search on facets
>  3. Remove documents from the index.
> The results that I'm getting for an integer field 'year':
>  1990:4
>  1995:4
>  0:0
>  1991:0
>  1992:0
>  1993:0
>  1994:0
>  1996:0
>  1997:0
>  1998:0
> I'm indexing only values 1990-1999, so there certainly shouldn't be any '0'  
> as keys in the result set.
> The index is optimized not after each document deletion, but only when the 
> index is loaded/unloaded, so optimization won't solve the problem in this 
> case.
> If facet.mincount>0 is provided, then I no longer get 0:0, but the other 
> entries with '0' values are gone as well:
> 1990:4
> 1995:4
> I'm also indexing text fields, but I don't see a similar situation in this 
> case. This bug only happens for integer fields.
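The behaviour described above corresponds to requests of roughly this shape (host, core path, and parameter values are assumptions, not taken from the report):

```
# default (facet.mincount=0): zero-count buckets, including the spurious 0:0 entry, are returned
http://localhost:8983/solr/select?q=*:*&rows=0&facet=true&facet.field=year&facet.mincount=0

# facet.mincount=1: every bucket with count 0 is suppressed, the legitimate zero-count years too
http://localhost:8983/solr/select?q=*:*&rows=0&facet=true&facet.field=year&facet.mincount=1
```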






[jira] [Closed] (SOLR-2300) snapinstaller on slave is failing

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2300.
---
Resolution: Won't Fix

This is no longer relevant to the current replication method.

> snapinstaller on slave is failing
> -
>
> Key: SOLR-2300
> URL: https://issues.apache.org/jira/browse/SOLR-2300
> Project: Solr
>  Issue Type: Bug
>  Components: replication (scripts)
>Affects Versions: 1.3
> Environment: Linux, Jboss 5.0GA, solr 1.3.0
>Reporter: sakunthala padmanabhuni
>
> Hi,
> We are using Solr on Mac OS X and it is working fine. We moved the same 
> setup to Linux, with a master/slave configuration: every 5 minutes the index 
> is replicated from master to slave and installed on the slave. But on Linux, 
> when the snapinstaller script is called on the slave, it fails with the 
> following error in the logs:
> /bin/rm: cannot remove 
> `/ngs/app/esearcht/Slave2index/data/index/.nfs000111030749': 
> Device or resource busy
> The error occurs at these lines in the snapinstaller script:
>   cp -lr ${name}/ ${data_dir}/index.tmp$$ && \
>   /bin/rm -rf ${data_dir}/index && \
>   mv -f ${data_dir}/index.tmp$$ ${data_dir}/index
> It is unable to remove the index folder, so the index.tmp files keep 
> growing in the data directory.
> Our data directory is "/ngs/app/esearcht/Slave2index/data". When checked 
> with ls -al, there are some .nfs files still in the index directory that 
> prevent it from being deleted, and these .nfs files are still in use by 
> Solr running in JBoss.
> This setup has the issue only on Linux. Is this a known bug on Linux?






[jira] [Closed] (SOLR-1623) Solr hangs (often throwing java.lang.OutOfMemoryError: PermGen space) when indexing many different field names

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-1623.
---
Resolution: Won't Fix

We are several JVM versions and virtual space management algorithms later now. 
If anything similar comes up against JRE 1.8, a new issue can be opened.

> Solr hangs (often throwing java.lang.OutOfMemoryError: PermGen space) when 
> indexing many different field names
> --
>
> Key: SOLR-1623
> URL: https://issues.apache.org/jira/browse/SOLR-1623
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 1.3, 1.4
> Environment: Apache Tomcat/6.0 (snapshot), JVM 1.6.0_13-b03 (Sun 
> Microsystems Inc.), Linux 2.6.18-164.el5, amd64
> and/or
> Apache Tomcat/6.0.18, JVM 1.6.0_12-b04 (Sun Microsystems Inc.), Windows 
> 2003 5.2, amd64
>Reporter: Laurent Chavet
>Priority: Critical
>
> With the following fields in schema.xml (the field definitions were 
> garbled; they include a stored="true" dynamic field matching the weight_* 
> names used below):
> Run the following code:
> import java.util.ArrayList;
> import java.util.List;
> import org.apache.solr.client.solrj.SolrServer;
> import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
> import org.apache.solr.common.SolrInputDocument;
> public static void main(String[] args) throws Exception {
> SolrServer server;
> try {
> server = new CommonsHttpSolrServer(args[0]);
> } catch (Exception e) {
> System.err.println("can't create server using: " + args[0] + "  
> " + e.getMessage());
> throw e;
> }
> for (int i = 0; i < 1000; i++) {
> List<SolrInputDocument> batchedDocs = new 
> ArrayList<SolrInputDocument>();
> for (int j = 0; j < 1000; j++) {
> SolrInputDocument doc = new SolrInputDocument();
> doc.addField("id", i * 1000 + j);
> // hangs after 30 to 50 batches
> 
> doc.addField("weight_aaa"
>  + Integer.toString(i) + "_" + Integer.toString(j), i * 1000 + j);
> // hangs after about 200 batches
> //doc.addField("weight_" + Integer.toString(i) + "_" + 
> Integer.toString(j), i * 1000 + j);
> batchedDocs.add(doc);
> }
> try {
> server.add(batchedDocs, true);
> System.err.println("Done with batch=" + i);
> // server.commit(); //doesn't change anything
> } catch (Exception e) {
> System.err.println("batchId=" + i + " bad batch: " + 
> e.getMessage());
> throw e;
> }
> }
> }
> Soon the client (which sometimes throws) and Solr will freeze. Sometimes 
> you can see java.lang.OutOfMemoryError: PermGen space in the server logs.






[jira] [Closed] (SOLR-668) Snapcleaner removes newest snapshots in Solaris

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-668.
--
Resolution: Won't Fix

This is no longer relevant to the current replication method.

> Snapcleaner removes newest snapshots in Solaris
> ---
>
> Key: SOLR-668
> URL: https://issues.apache.org/jira/browse/SOLR-668
> Project: Solr
>  Issue Type: Bug
>  Components: replication (scripts)
>Affects Versions: 1.2
> Environment: Solaris 10
>Reporter: Gabriel Hernandez
>Priority: Minor
>
> When running the snapcleaner script from cron with the -N option, the script 
> is removing the newest snapshots instead of the oldest snapshots.  I tweaked 
> and validated this can be corrected by making the following change in the 
> snapcleaner script:
> elif [[ -n ${num} ]]
>   then
>   logMessage cleaning up all snapshots except for the most recent 
> ${num} ones
>   unset snapshots count
> - snapshots=`ls -cd ${data_dir}/snapshot.* 2>/dev/null`
> + snapshots=`ls -crd ${data_dir}/snapshot.* 2>/dev/null` 
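The one-character fix (adding `-r`) can be sanity-checked in isolation. A minimal sketch, assuming GNU ls on Linux, where plain `-c` sorts by ctime newest-first:

```shell
# Reproduce the bug in miniature: the snapcleaner loop evidently expects the
# oldest snapshots at the head of the listing. Without -r, `ls -c` lists
# newest first, so the newest snapshots get deleted instead of the oldest.
data_dir=$(mktemp -d)
touch "${data_dir}/snapshot.20080801"
sleep 1
touch "${data_dir}/snapshot.20080802"

buggy_first=$(ls -cd "${data_dir}"/snapshot.* | head -n 1)   # newest first
fixed_first=$(ls -crd "${data_dir}"/snapshot.* | head -n 1)  # oldest first

echo "without -r: ${buggy_first##*/}"
echo "with    -r: ${fixed_first##*/}"
rm -rf "${data_dir}"
```

The first command puts snapshot.20080802 (the newest) at the head of the list; with `-r` the oldest snapshot sorts first, which is the order the cleanup loop needs.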






[jira] [Commented] (SOLR-4519) corrupt tlog causes fullCopy download index files every time reboot a node

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15538871#comment-15538871
 ] 

Alexandre Rafalovitch commented on SOLR-4519:
-

There does not seem to be a specific bug here anymore. Can it be closed?

> corrupt tlog causes fullCopy download index files every time reboot a node
> --
>
> Key: SOLR-4519
> URL: https://issues.apache.org/jira/browse/SOLR-4519
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.0
> Environment: The SolrCloud is implemented on three servers. There are 
> three Solr instances on each server. The collection has three shards, every 
> shard has three replicas, and replicas in the same shard run in Solr 
> instances on different servers.
>Reporter: Simon Scofield
>
> There are two questions:
> 1. The tlog of one replica of shard1 was damaged for some reason. We are 
> still looking for the cause; please give us a clue if you are familiar with 
> this problem.
> 2. The failing replica recovered successfully via a full-copy download of 
> the index files from the leader. Then I killed the instance and started it 
> again, and the recovery process was still a full-copy download. In my 
> opinion, after the first full-copy recovery the tlog should be fixed. Here 
> is some log: 
> 2013-02-28 15:04:58,622 INFO org.apache.solr.cloud.ZkController:757 - Core 
> needs to recover:metadata
> 2013-02-28 15:04:58,622 INFO org.apache.solr.update.DefaultSolrCoreState:214 
> - Running recovery - first canceling any ongoing recovery
> 2013-02-28 15:04:58,625 INFO org.apache.solr.cloud.RecoveryStrategy:217 - 
> Starting recovery process.  core=metadata recoveringAfterStartup=true
> 2013-02-28 15:04:58,626 INFO org.apache.solr.common.cloud.ZkStateReader:295 - 
> Updating cloud state from ZooKeeper...
> 2013-02-28 15:04:58,628 ERROR org.apache.solr.update.UpdateLog:957 - 
> Exception reading versions from log
> java.io.EOFException
> at 
> org.apache.solr.common.util.FastInputStream.readUnsignedByte(FastInputStream.java:72)
> at 
> org.apache.solr.common.util.FastInputStream.readInt(FastInputStream.java:206)
> at 
> org.apache.solr.update.TransactionLog$ReverseReader.next(TransactionLog.java:705)
> at 
> org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:906)
> at 
> org.apache.solr.update.UpdateLog$RecentUpdates.access$000(UpdateLog.java:846)
> at 
> org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:996)
> at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:256)
> at 
> org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:220)
> 2013-02-28 15:05:01,857 INFO org.apache.solr.cloud.RecoveryStrategy:399 - 
> Begin buffering updates. core=metadata
> 2013-02-28 15:05:01,857 INFO org.apache.solr.update.UpdateLog:1015 - Starting 
> to buffer updates. FSUpdateLog{state=ACTIVE, tlog=null}
> 2013-02-28 15:05:01,857 INFO org.apache.solr.cloud.RecoveryStrategy:126 - 
> Attempting to replicate from http://23.61.21.121:65201/solr/metadata/. 
> core=metadata
> 2013-02-28 15:05:02,882 INFO org.apache.solr.handler.SnapPuller:305 - 
> Master's generation: 6993
> 2013-02-28 15:05:02,882 INFO org.apache.solr.handler.SnapPuller:306 - Slave's 
> generation: 6993
> 2013-02-28 15:05:02,882 INFO org.apache.solr.handler.SnapPuller:307 - 
> Starting replication process
> 2013-02-28 15:05:02,893 INFO org.apache.solr.handler.SnapPuller:312 - Number 
> of files in latest index in master: 422
> 2013-02-28 15:05:02,897 INFO org.apache.solr.handler.SnapPuller:325 - 
> Starting download to 
> /solr/nodes/node1/bin/../solr/metadata/data/index.20130228150502893 
> fullCopy=true
> 2013-02-28 15:33:55,848 INFO org.apache.solr.handler.SnapPuller:334 - Total 
> time taken for download : 1732 secs (The size of index files is 94G)






[jira] [Closed] (SOLR-2508) Master/Slave replication can leave slave in inconsistent state of NullPointerException in solrHighligher.java 102

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2508.
---
Resolution: Won't Fix

This is no longer relevant to the current replication method.

> Master/Slave replication can leave slave in inconsistent state of  
> NullPointerException in solrHighligher.java 102
> --
>
> Key: SOLR-2508
> URL: https://issues.apache.org/jira/browse/SOLR-2508
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter, replication (java)
>Affects Versions: 4.0-ALPHA
> Environment: Centos 5.6 with Java1.7.0b137
>Reporter: Xing Li
> Attachments: schema.xml, solrconfig.xml
>
>
> Using Solr 4/Trunk snapshot build of 5/10/2011. 
> Setup:
> --
> 1) 1 Master + 4 Slaves
> 2) Multicore setup with 8 cores.
> 3) Replication Poll Interval: 00:30:20
> Summary of Issue:
> ---
> When a slave completes a replication pull from the master, the data index 
> pull itself completes, but based on the logs it appears that subsequent 
> index warming and other post-replication cleanup actions leave the core/db 
> in an inconsistent state.
> Frequency of occurrence: very high, but not 100%. I have 1 master and 4 
> slaves, and in each replication pull cycle around 50% of the slaves get 
> affected. Each slave has 8 cores, but
> the problem always affects this particular "mysolr_blogs" db/core.
> Please note the "mysolr_blogs" data index is 1.4GB and the largest of the 8 
> by a wide margin.
> Attached is the schema.xml and solrconfig.xml for the "mysolr_blogs" core.
> Temp fix:
> -
> 1) Stop and restart the solr server when this happens.
> 2) Stop using automatic replication on this core.
> Logging:
> -
> * begins automatic replication  pull
> {code}
> May 10, 2011 10:17:40 PM org.apache.solr.handler.SnapPuller fetchLatestIndex
> INFO: Slave in sync with master.
> May 10, 2011 10:17:40 PM org.apache.solr.handler.SnapPuller fetchLatestIndex
> INFO: Slave in sync with master.
> May 10, 2011 10:17:40 PM org.apache.solr.handler.SnapPuller fetchLatestIndex
> INFO: Slave in sync with master.
> May 10, 2011 10:17:40 PM org.apache.solr.handler.SnapPuller fetchLatestIndex
> INFO: Slave in sync with master.
> May 10, 2011 10:17:40 PM org.apache.solr.handler.SnapPuller fetchLatestIndex
> INFO: Slave in sync with master.
> May 10, 2011 10:17:40 PM org.apache.solr.handler.SnapPuller fetchLatestIndex
> INFO: Slave in sync with master.
> May 10, 2011 10:17:40 PM org.apache.solr.handler.SnapPuller fetchLatestIndex
> INFO: Master's version: 1302675975227, generation: 694
> May 10, 2011 10:17:40 PM org.apache.solr.handler.SnapPuller fetchLatestIndex
> INFO: Slave in sync with master.
> May 10, 2011 10:17:40 PM org.apache.solr.handler.SnapPuller fetchLatestIndex
> INFO: Slave's version: 1302675975222, generation: 692
> May 10, 2011 10:17:40 PM org.apache.solr.handler.SnapPuller fetchLatestIndex
> INFO: Starting replication process
> May 10, 2011 10:17:40 PM org.apache.solr.handler.SnapPuller fetchLatestIndex
> INFO: Number of files in latest index in master: 10
> {code}
> * 65 seconds passed, and I cut out the query logs in between. Here it's 
> pulling the 1.4GB "mysolr_blogs" index data. 
> {code}
> May 10, 2011 10:18:45 PM org.apache.solr.handler.SnapPuller downloadIndexFiles
> INFO: Skipping download for 
> /db/solr-master/multicore/mysolr_blogs/data/index/1.fnx
> May 10, 2011 10:18:45 PM org.apache.solr.handler.SnapPuller fetchLatestIndex
> INFO: Total time taken for download : 65 secs
> May 10, 2011 10:18:45 PM org.apache.solr.core.SolrCore execute
> INFO: [mysolr_users] webapp=/solr path=/select 
> params={sort==off=0=%2Buname:inlove*=and=*=pcategoryid=categoryid=languageid=json=true=51}
>  hits=0 status=0 QTime=1 
> May 10, 2011 10:18:45 PM org.apache.solr.core.SolrCore execute
> INFO: [mysolr_blogs] webapp=/solr path=/select/ params={q=solr} hits=0 
> status=0 QTime=0 
> May 10, 2011 10:18:45 PM org.apache.solr.core.SolrCore execute
> INFO: [mysolr_blogs] webapp=/solr path=/select/ params={q=solr} hits=0 
> status=0 QTime=0 
> May 10, 2011 10:18:46 PM org.apache.solr.update.DirectUpdateHandler2 commit
> INFO: start 
> commit(optimize=false,waitFlush=true,waitSearcher=true,expungeDeletes=false)
> May 10, 2011 10:18:46 PM org.apache.solr.search.SolrIndexSearcher 
> INFO: Opening Searcher@4f83f9df main
> May 10, 2011 10:18:46 PM org.apache.solr.update.DirectUpdateHandler2 commit
> INFO: end_commit_flush
> May 10, 2011 10:18:46 PM org.apache.solr.search.SolrIndexSearcher warm
> INFO: autowarming Searcher@4f83f9df main from Searcher@5f7808af main
>   
> 

[jira] [Closed] (SOLR-2165) TestReplicationHandler test failure (branch_3x)

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2165.
---
Resolution: Not A Problem

Issue lost in time (3.x branch)

> TestReplicationHandler test failure (branch_3x)
> ---
>
> Key: SOLR-2165
> URL: https://issues.apache.org/jira/browse/SOLR-2165
> Project: Solr
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 3.1, 4.0-ALPHA
> Environment: Hudson
>Reporter: Robert Muir
> Fix For: 6.0, 4.9
>
>
> TestReplicationHandler failed in hudson: looks like the root cause is an xml 
> parsing issue:
> [junit] [Fatal Error] :-1:-1: Premature end of file.
> [junit] 13 Oct 2010 23:41:57 org.apache.solr.common.SolrException log
> [junit] SEVERE: Exception during parsing file: 
> schema:org.xml.sax.SAXParseException; Premature end of file.
> Note: the failure occurred with branch_3x, but it may affect trunk too (I 
> set the versions to both aggressively).
> Here is the stacktrace:
> {noformat}
> [junit] Testsuite: org.apache.solr.handler.TestReplicationHandler
> [junit] Testcase: 
> testIndexAndConfigReplication(org.apache.solr.handler.TestReplicationHandler):
>   Caused an ERROR
> [junit] Jetty/Solr unresponsive
> [junit] java.lang.RuntimeException: Jetty/Solr unresponsive
> [junit]   at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner.waitForSolr(JettySolrRunner.java:149)
> [junit]   at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:111)
> [junit]   at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:103)
> [junit]   at 
> org.apache.solr.handler.TestReplicationHandler.createJetty(TestReplicationHandler.java:110)
> [junit]   at 
> org.apache.solr.handler.TestReplicationHandler.testIndexAndConfigReplication(TestReplicationHandler.java:260)
> [junit]   at 
> org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:693)
> [junit]   at 
> org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:666)
> [junit] Caused by: java.io.IOException: Server returned HTTP response 
> code: 500 for URL: 
> http://localhost:38047/solr/select?q={!raw+f=junit_test_query}ping
> [junit]   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1269)
> [junit]   at java.net.URL.openStream(URL.java:1029)
> [junit]   at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner.waitForSolr(JettySolrRunner.java:137)
> [junit] 
> [junit] 
> [junit] Testcase: 
> testStopPoll(org.apache.solr.handler.TestReplicationHandler):   Caused an 
> ERROR
> [junit] java.net.ConnectException: Operation timed out
> [junit] org.apache.solr.client.solrj.SolrServerException: 
> java.net.ConnectException: Operation timed out
> [junit]   at 
> org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:483)
> [junit]   at 
> org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:244)
> [junit]   at 
> org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:89)
> [junit]   at 
> org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:118)
> [junit]   at 
> org.apache.solr.handler.TestReplicationHandler.query(TestReplicationHandler.java:142)
> [junit]   at 
> org.apache.solr.handler.TestReplicationHandler.clearIndexWithReplication(TestReplicationHandler.java:85)
> [junit]   at 
> org.apache.solr.handler.TestReplicationHandler.testStopPoll(TestReplicationHandler.java:285)
> [junit]   at 
> org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:693)
> [junit]   at 
> org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:666)
> [junit] Caused by: java.net.ConnectException: Operation timed out
> [junit]   at java.net.PlainSocketImpl.socketConnect(Native Method)
> [junit]   at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:310)
> [junit]   at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:176)
> [junit]   at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:163)
> [junit]   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
> [junit]   at java.net.Socket.connect(Socket.java:546)
> [junit]   at java.net.Socket.connect(Socket.java:495)
> [junit]   at java.net.Socket.&lt;init&gt;(Socket.java:392)
> [junit]   at java.net.Socket.&lt;init&gt;(Socket.java:266)
> [junit]   at 
> org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:80)
> [junit]   at 
> 

[jira] [Closed] (SOLR-1853) ReplicationHandler reports incorrect replication failures

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-1853.
---
Resolution: Incomplete

This is no longer relevant to the current replication method.

> ReplicationHandler reports incorrect replication failures
> -
>
> Key: SOLR-1853
> URL: https://issues.apache.org/jira/browse/SOLR-1853
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 1.4
> Environment: Linux
>Reporter: Shawn Smith
>
> The ReplicationHandler "details" command reports that replication failed when 
> it didn't.  This occurs after a slave is restarted when it is already in sync 
> with the master.  This makes it difficult to write production monitors that 
> check the health of master-slave replication (no network issues, unexpected 
> slowdowns, etc).
> From the code, it looks like "SnapPuller.successfulInstall" starts out false 
> on restart.  If the slave starts out in sync with the master, then each no-op 
> replication poll leaves "successfulInstall" set to false which makes 
> SnapPuller.logReplicationTimeAndConfFiles log the poll as a failure.  
> SnapPuller.successfulInstall stays false until the first time replication 
> actually has to do something, at which point it gets set to true, and then 
> everything is OK.
> h4. Steps to reproduce
> # Setup Solr master and slave servers using Solr 1.4 Java replication.
> # Index some content on the master.  Wait for it to replicate through to the 
> slave so the master and slave are in sync.
> # Stop the slave server.
> # Restart the slave server.
> # Wait for the first slave replication poll.
> # Query the replication status using 
> "http://localhost:8983/solr/replication?command=details;
> # Until the master index changes and there's something to replicate, all 
> slave replication polls after the restart will be shown as failed in the XML 
> response.
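The flag behavior described in the report can be sketched as a tiny state machine. This is an illustrative simplification with a hypothetical class name, modeled on the description of `SnapPuller.successfulInstall` above, not the actual SnapPuller code:

```java
// Sketch of the reported bug: successfulInstall starts out false after a
// restart, and a no-op poll (slave already in sync) never sets it to true,
// so the status log reports the poll as a failure.
public class ReplicationStatusSketch {
    private boolean successfulInstall = false; // false after slave restart

    /** One replication poll; returns what the status report would show. */
    public String poll(boolean masterChanged) {
        if (masterChanged) {
            successfulInstall = true; // a real pull succeeded
        }
        // Per the report, logReplicationTimeAndConfFiles treats a poll with
        // successfulInstall == false as a failure, even when nothing was
        // supposed to happen:
        return successfulInstall ? "success" : "failure";
    }

    public static void main(String[] args) {
        ReplicationStatusSketch slave = new ReplicationStatusSketch();
        // In-sync polls right after restart are misreported as failures...
        System.out.println(slave.poll(false)); // failure
        System.out.println(slave.poll(false)); // failure
        // ...until the master actually changes once:
        System.out.println(slave.poll(true));  // success
        System.out.println(slave.poll(false)); // success from now on
    }
}
```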



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-3280) too many / sometimes stale CLOSE_WAIT connections from SnapPuller during / after replication

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-3280.
---
Resolution: Incomplete

This is no longer relevant to the current replication method.

> too many / sometimes stale CLOSE_WAIT connections from SnapPuller during / 
> after replication
> ---
>
> Key: SOLR-3280
> URL: https://issues.apache.org/jira/browse/SOLR-3280
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 3.5, 3.6, 4.0-ALPHA
>Reporter: Bernd Fehling
>Assignee: Robert Muir
>Priority: Minor
> Attachments: SOLR-3280.patch
>
>
> There are sometimes too many, and also stale, CLOSE_WAIT connections 
> left over on the SLAVE server during/after replication.
> Normally GC should clean these up, but that is not always the case.
> Also, if a CLOSE_WAIT connection hangs, the new replication won't load.
> The dirty workaround so far is to fake a TCP connection, as root, to that 
> connection and close it.
> After that the new replication loads, the old index and searcher are released, 
> and the system returns to normal operation.
> Background:
> The SnapPuller uses Apache httpclient 3.x with the 
> MultiThreadedHttpConnectionManager.
> The manager keeps a connection in CLOSE_WAIT after use, ready for further 
> requests.
> This is done by calling releaseConnection. But if a connection is stuck, it is 
> no longer available, and a new
> connection from the pool is used.
> Solution:
> After calling releaseConnection, clean up with closeIdleConnections(0).
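The proposed fix can be illustrated with a minimal pool sketch. This is a pure-Java stand-in with hypothetical names; the real code uses commons-httpclient 3.x's MultiThreadedHttpConnectionManager, whose closeIdleConnections(long) reaps connections that have merely been released back to the pool:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal stand-in for a pooled connection manager, to illustrate why reaping
// idle connections right after release avoids lingering CLOSE_WAIT sockets.
public class PoolSketch {
    static class Conn { boolean closed = false; }

    private final Deque<Conn> idle = new ArrayDeque<>();

    Conn acquire() { return idle.isEmpty() ? new Conn() : idle.pop(); }

    // Like releaseConnection(): the socket stays open, parked for reuse.
    void release(Conn c) { idle.push(c); }

    // Like closeIdleConnections(0): close everything that is merely idle.
    int closeIdleConnections() {
        int n = 0;
        while (!idle.isEmpty()) { idle.pop().closed = true; n++; }
        return n;
    }

    public static void main(String[] args) {
        PoolSketch pool = new PoolSketch();
        Conn c = pool.acquire();
        pool.release(c);                          // lingers open in the pool
        int reaped = pool.closeIdleConnections(); // the proposed cleanup step
        System.out.println(reaped + " idle connection(s) closed: " + c.closed);
    }
}
```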






[jira] [Closed] (SOLR-2329) old index files not deleted on slave

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2329.
---
Resolution: Incomplete

This is no longer relevant to the current replication method.

> old index files not deleted on slave
> 
>
> Key: SOLR-2329
> URL: https://issues.apache.org/jira/browse/SOLR-2329
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 4.0-ALPHA
> Environment: centos 5.5
> ext3 file system
>Reporter: Edwin Khodabakchian
> Attachments: solrconfig.xml
>
>
> I have set up index replication (triggered on optimize). The problem I
> am having is the old index files are not being deleted on the slave.
> After each replication, I can see the old files still hanging around
> as well as the files that have just been pulled. This causes the data
> directory size to increase by the index size every replication until
> the disk fills up.
> I am running 4.0 rev 993367 with patch SOLR-1316. Otherwise, the setup
> is pretty vanilla. I can reproduce this on multiple slaves.
> Checking the logs, I see the following error:
> SEVERE: SnapPull failed
> org.apache.solr.common.SolrException: Index fetch failed :
>at 
> org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:329)
>at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:265)
>at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
>at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
>at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>at java.lang.Thread.run(Thread.java:619)
> Caused by: org.apache.lucene.store.LockObtainFailedException: Lock
> obtain timed out:
> NativeFSLock@/var/solrhome/data/index/lucene-cdaa80c0fefe1a7dfc7aab89298c614c-write.lock
>at org.apache.lucene.store.Lock.obtain(Lock.java:84)
>at org.apache.lucene.index.IndexWriter.&lt;init&gt;(IndexWriter.java:1065)
>at org.apache.lucene.index.IndexWriter.&lt;init&gt;(IndexWriter.java:954)
>at 
> org.apache.solr.update.SolrIndexWriter.&lt;init&gt;(SolrIndexWriter.java:192)
>at 
> org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHandler.java:99)
>at 
> org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHandler2.java:173)
>at 
> org.apache.solr.update.DirectUpdateHandler2.forceOpenWriter(DirectUpdateHandler2.java:376)
>at org.apache.solr.handler.SnapPuller.doCommit(SnapPuller.java:471)
>at 
> org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:319)
>... 11 more
> lsof reveals that the file is still held open by the java process.
> Contents of the index data dir:
> master:
> -rw-rw-r-- 1 feeddo feeddo  191 Dec 14 01:06 _1lg.fnm
> -rw-rw-r-- 1 feeddo feeddo  26M Dec 14 01:07 _1lg.fdx
> -rw-rw-r-- 1 feeddo feeddo 1.9G Dec 14 01:07 _1lg.fdt
> -rw-rw-r-- 1 feeddo feeddo 474M Dec 14 01:12 _1lg.tis
> -rw-rw-r-- 1 feeddo feeddo  15M Dec 14 01:12 _1lg.tii
> -rw-rw-r-- 1 feeddo feeddo 144M Dec 14 01:12 _1lg.prx
> -rw-rw-r-- 1 feeddo feeddo 277M Dec 14 01:12 _1lg.frq
> -rw-rw-r-- 1 feeddo feeddo  311 Dec 14 01:12 segments_1ji
> -rw-rw-r-- 1 feeddo feeddo  23M Dec 14 01:12 _1lg.nrm
> -rw-rw-r-- 1 feeddo feeddo  191 Dec 18 01:11 _24e.fnm
> -rw-rw-r-- 1 feeddo feeddo  26M Dec 18 01:12 _24e.fdx
> -rw-rw-r-- 1 feeddo feeddo 1.9G Dec 18 01:12 _24e.fdt
> -rw-rw-r-- 1 feeddo feeddo 483M Dec 18 01:23 _24e.tis
> -rw-rw-r-- 1 feeddo feeddo  15M Dec 18 01:23 _24e.tii
> -rw-rw-r-- 1 feeddo feeddo 146M Dec 18 01:23 _24e.prx
> -rw-rw-r-- 1 feeddo feeddo 283M Dec 18 01:23 _24e.frq
> -rw-rw-r-- 1 feeddo feeddo  311 Dec 18 01:24 segments_1xz
> -rw-rw-r-- 1 feeddo feeddo  23M Dec 18 01:24 _24e.nrm
> -rw-rw-r-- 1 feeddo feeddo  191 Dec 18 13:15 _25z.fnm
> -rw-rw-r-- 1 feeddo feeddo  26M Dec 18 13:16 _25z.fdx
> -rw-rw-r-- 1 feeddo feeddo 1.9G Dec 18 13:16 _25z.fdt
> -rw-rw-r-- 1 feeddo feeddo 484M Dec 18 13:35 _25z.tis
> -rw-rw-r-- 1 feeddo feeddo  15M Dec 18 13:35 _25z.tii
> -rw-rw-r-- 1 feeddo feeddo 146M Dec 18 13:35 _25z.prx
> -rw-rw-r-- 1 feeddo feeddo 

[jira] [Closed] (SOLR-2661) SnapPull failed - Unable to download index

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2661.
---
Resolution: Incomplete

Ancient unfinished bug. No longer relevant.

> SnapPull failed - Unable to download index
> --
>
> Key: SOLR-2661
> URL: https://issues.apache.org/jira/browse/SOLR-2661
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 1.4.1
> Environment: Linux
>Reporter: Jayesh K Rajpurohit
>Priority: Critical
>
> Getting this exception in my application. We are now blocked, as the slaves are 
> not able to download any new indexes. FYI: the optimized index size is 5.5 GB, 
> and it grows to 11 GB (non-optimized index size). There are some questions 
> I have regarding this:
> 1) Is there any size limit on the index to replicate? Because from the exception 
> we can see it is trying to download a file of about 0.4 GB.
> 2) Is there any connection timeout setting for Solr Java replication?
> 2011-07-18 07:22:18,634 [pool-3-thread-1] ERROR 
> org.apache.solr.handler.ReplicationHandler  - SnapPull failed
> org.apache.solr.common.SolrException: Unable to download _nr.frq completely. 
> Downloaded 57671680!=404386786
> at 
> org.apache.solr.handler.SnapPuller$FileFetcher.cleanup(SnapPuller.java:1026)
> at 
> org.apache.solr.handler.SnapPuller$FileFetcher.fetchFile(SnapPuller.java:906)
> at 
> org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:541)
> at 
> org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:294)
> at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:264)
> at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Regards,
> Jayesh K Rajpurohit
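The "Unable to download ... completely" message above comes from a length check after the fetch: bytes received are compared against the expected file size. A minimal sketch of that check (hypothetical class and method names; the real logic lives in SnapPuller$FileFetcher.cleanup):

```java
// Sketch of the completeness check behind the error in the log: after
// fetching a file, compare bytes downloaded with the expected size and
// fail the pull if they differ (e.g. the connection dropped mid-transfer).
public class DownloadCheckSketch {
    static void verifyComplete(String name, long downloaded, long expected) {
        if (downloaded != expected) {
            throw new IllegalStateException("Unable to download " + name
                    + " completely. Downloaded " + downloaded + "!=" + expected);
        }
    }

    public static void main(String[] args) {
        verifyComplete("_ok.frq", 1024, 1024); // complete: passes silently
        try {
            // the sizes from the log above: the transfer stopped early
            verifyComplete("_nr.frq", 57671680L, 404386786L);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```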






[jira] [Closed] (SOLR-3618) Enable replication of master using proxy settings

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-3618.
---
Resolution: Won't Fix

This issue is about the SnapPuller, which is no longer even present in the shipped 
distribution. A new issue can be opened if this is still somehow relevant to 
a recent version.

> Enable replication of master using proxy settings
> -
>
> Key: SOLR-3618
> URL: https://issues.apache.org/jira/browse/SOLR-3618
> Project: Solr
>  Issue Type: Improvement
>  Components: replication (java)
>Affects Versions: 3.6.1
>Reporter: Gautier Koscielny
>  Labels: patch
> Attachments: SnapPuller.java.patch
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Check whether system properties http.proxyHost and http.proxyPort are set 
> to initialize the httpClient instance properly in the SnapPuller class.
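The suggested check can be sketched as follows. This is a minimal sketch reading the standard JVM proxy properties; the actual patch configures the commons-httpclient instance inside the SnapPuller class, which is only indicated in a comment here:

```java
// Minimal sketch: derive proxy settings from the standard JVM system
// properties, as the patch proposes for SnapPuller's httpClient setup.
public class ProxySettingsSketch {
    static String describeProxy() {
        String host = System.getProperty("http.proxyHost");
        String port = System.getProperty("http.proxyPort", "80"); // JVM default
        if (host == null || host.isEmpty()) {
            return "direct"; // no proxy configured, connect directly
        }
        // With commons-httpclient 3.x this would be roughly:
        //   httpClient.getHostConfiguration().setProxy(host, Integer.parseInt(port));
        return "proxy " + host + ":" + Integer.parseInt(port);
    }

    public static void main(String[] args) {
        System.setProperty("http.proxyHost", "proxy.example.com"); // hypothetical
        System.setProperty("http.proxyPort", "3128");
        System.out.println(describeProxy()); // proxy proxy.example.com:3128
    }
}
```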






[jira] [Commented] (SOLR-3449) QueryComponent.doFieldSortValues throw ArrayIndexOutOfBoundsException when has maxDoc=0 Segment

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15538838#comment-15538838
 ] 

Alexandre Rafalovitch commented on SOLR-3449:
-

I believe this functionality has now been rewritten several times. Can we close 
this issue and open a new one if somebody hits this again?

> QueryComponent.doFieldSortValues throw ArrayIndexOutOfBoundsException when 
> has maxDoc=0 Segment
> ---
>
> Key: SOLR-3449
> URL: https://issues.apache.org/jira/browse/SOLR-3449
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 3.5, 3.6
>Reporter: Linbin Chen
> Fix For: 3.6.3
>
> Attachments: SOLR-3449.patch
>
>
> have index
> {code}
> Segment name=_9, offest=[docBase=0, maxDoc=245] idx=0
> Segment name=_a, offest=[docBase=245, maxDoc=3] idx=1
> Segment name=_b, offest=[docBase=248, maxDoc=0] idx=2
> Segment name=_c, offest=[docBase=248, maxDoc=1] idx=3
> Segment name=_d, offest=[docBase=249, maxDoc=0] idx=4
> Segment name=_e, offest=[docBase=249, maxDoc=1] idx=5
> Segment name=_f, offest=[docBase=250, maxDoc=0] idx=6
> Segment name=_g, offest=[docBase=250, maxDoc=3] idx=7
> Segment name=_h, offest=[docBase=253, maxDoc=0] idx=8
> {code}
> A maxDoc=0 segment may be created by mergeIndexes. (One can make sure a 
> maxDoc=0 segment is not produced by local merges, but merged-in external 
> indexes cannot always be controlled.)
> when use fsv=true get sort values, hit docId=249 throw 
> ArrayIndexOutOfBoundsException
> {code}
> 2012-5-11 14:28:28 org.apache.solr.common.SolrException log
> ERROR: java.lang.ArrayIndexOutOfBoundsException: 0
> at 
> org.apache.lucene.search.FieldComparator$LongComparator.copy(FieldComparator.java:600)
> at 
> org.apache.solr.handler.component.QueryComponent.doFieldSortValues(QueryComponent.java:463)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:400)
> {code}
> reason:
> {code}
> //idx  012345678
> //int[] maxDocs={245,   3,   0,   1,   0,   1,   0,   3,   0};
> int[] offsets = {  0, 245, 248, 248, 249, 249, 250, 250, 253};
> org.apache.solr.search.SolrIndexReader.readerIndex(249, offsets) returns idx=4, 
> not 5.
> {code}
> The correct idx is 5.
> patch
> {code}
> Index: solr/core/src/java/org/apache/solr/search/SolrIndexReader.java
> ===
> --- solr/core/src/java/org/apache/solr/search/SolrIndexReader.java
> (revision 1337028)
> +++ solr/core/src/java/org/apache/solr/search/SolrIndexReader.java
> (working copy)
> @@ -138,6 +138,16 @@
>}
>else {
>  // exact match on the offset.
> + // skip over equal offsets (maxDoc=0 segments share a docBase)
> + for (int i = mid + 1; i <= high; i++) {
> +   if (doc == offsets[i]) {
> +     // skip offsets[i] == doc
> +     mid = i;
> +   } else {
> +     // stop skipping
> +     break;
> +   }
> + }
>  return mid;
>}
>  }
> {code}
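The lookup and the proposed skip can be reproduced as a standalone method. Class and method names are illustrative; this is simplified from org.apache.solr.search.SolrIndexReader.readerIndex, with the patch's duplicate-offset skip applied on an exact match:

```java
// Standalone re-implementation of the segment lookup from the issue: binary
// search over segment docBase offsets, skipping maxDoc=0 segments whose
// docBase equals the matched offset.
public class ReaderIndexSketch {
    /** Returns the index of the segment containing doc, given sorted docBase offsets. */
    static int readerIndex(int doc, int[] offsets) {
        int low = 0, high = offsets.length - 1;
        while (low <= high) {
            int mid = (low + high) >>> 1;
            int offset = offsets[mid];
            if (doc < offset) {
                high = mid - 1;
            } else if (doc > offset) {
                low = mid + 1;
            } else {
                // Exact match: several maxDoc=0 segments may share this
                // docBase, so advance past every later equal offset.
                while (mid + 1 < offsets.length && offsets[mid + 1] == doc) {
                    mid++;
                }
                return mid;
            }
        }
        return high; // doc lies inside the segment starting at offsets[high]
    }

    public static void main(String[] args) {
        // offsets from the segment listing in the issue description
        int[] offsets = {0, 245, 248, 248, 249, 249, 250, 250, 253};
        System.out.println(readerIndex(249, offsets)); // 5 (segment _e), not 4
        System.out.println(readerIndex(250, offsets)); // 7 (segment _g)
    }
}
```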






[jira] [Commented] (SOLR-2731) CSVResponseWriter should optionally return numfound

2016-10-01 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15538833#comment-15538833
 ] 

Alexandre Rafalovitch commented on SOLR-2731:
-

Is there still a desire to augment the CSV output, or have the JSON/export 
handlers proved sufficient?

> CSVResponseWriter should optionally return numfound
> ---
>
> Key: SOLR-2731
> URL: https://issues.apache.org/jira/browse/SOLR-2731
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers
>Affects Versions: 3.1, 3.3, 4.0-ALPHA
>Reporter: Jon Hoffman
>  Labels: patch
> Fix For: 3.1.1, 4.9, 6.0
>
> Attachments: SOLR-2731-R1.patch, SOLR-2731.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> An optional parameter "csv.numfound=true" can be added to the request, which 
> causes the first line of the response to be numFound. This would have no 
> impact on existing behavior, and those who are interested in that value can 
> simply read off the first line before sending the rest to their usual CSV parser.
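The consumer-side idea can be sketched in a few lines. This assumes the proposed (not shipped) csv.numfound=true behavior, where the first response line carries numFound and the remainder is ordinary CSV; class and method names are hypothetical:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Sketch of reading the proposed numFound prefix line off a CSV response
// before handing the remainder to a regular CSV parser.
public class CsvNumFoundSketch {
    static long readNumFound(BufferedReader in) throws IOException {
        return Long.parseLong(in.readLine().trim()); // first line = numFound
    }

    public static void main(String[] args) throws IOException {
        // hypothetical response body under csv.numfound=true
        String response = "1042\nid,name\n1,foo\n2,bar\n";
        BufferedReader in = new BufferedReader(new StringReader(response));
        long numFound = readNumFound(in);
        String header = in.readLine(); // the rest goes to the usual CSV parser
        System.out.println(numFound + " / " + header);
    }
}
```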





