Unresolved external symbol errors when linking example native c++ code that uses jcc

2010-10-21 Thread Imre András
Hi list,

I intend to use jcc to ease calling Java code from native code. I managed to
build and install it. Now I am trying to build my first test program from within
an MS VS 2010 Win32 console app project. Despite setting up the libs and
includes, I still get linker errors:

--
Link:
C:\Program Files\Microsoft Visual Studio 10.0\VC\bin\link.exe 
/ERRORREPORT:PROMPT 
/OUT:C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.exe /INCREMENTAL 
/NOLOGO python27.lib _jcc.lib jvm.lib kernel32.lib user32.lib gdi32.lib 
winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib 
uuid.lib odbc32.lib odbccp32.lib kernel32.lib user32.lib gdi32.lib winspool.lib 
comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib 
odbc32.lib odbccp32.lib /MANIFEST 
/ManifestFile:Debug\jcc.exe.intermediate.manifest 
/MANIFESTUAC:level='asInvoker' uiAccess='false' /DEBUG 
/PDB:C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.pdb 
/SUBSYSTEM:CONSOLE /TLBID:1 /DYNAMICBASE /NXCOMPAT 
/IMPLIB:C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.lib /MACHINE:X86 
Debug\jcc.exe.embed.manifest.res
Debug\jcc.obj
Debug\stdafx.obj
Debug\__wrap__.obj
Creating library C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.lib and 
object C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.exp
1>jcc.obj : error LNK2001: unresolved external symbol "unsigned long VM_ENV" (?VM_ENV@@3KA)
1>stdafx.obj : error LNK2001: unresolved external symbol "unsigned long VM_ENV" (?VM_ENV@@3KA)
1>__wrap__.obj : error LNK2001: unresolved external symbol "unsigned long VM_ENV" (?VM_ENV@@3KA)
1>jcc.obj : error LNK2001: unresolved external symbol "class JCCEnv * env" (?env@@3PAVJCCEnv@@A)
1>stdafx.obj : error LNK2001: unresolved external symbol "class JCCEnv * env" (?env@@3PAVJCCEnv@@A)
1>__wrap__.obj : error LNK2001: unresolved external symbol "class JCCEnv * env" (?env@@3PAVJCCEnv@@A)
1>C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.exe : fatal error LNK1120: 2 unresolved externals
1>Done Building Project "C:\out\app\PeldaProgram\ZipBe\bin\test\jcc\jcc.vcxproj" (rebuild target(s)) -- FAILED.

Build FAILED.

Time Elapsed 00:00:12.60
--


jni.h and Native2Java.h (which jcc generated from the Java class I intend to use
in native C++ code) are included in stdafx.h. I have no idea where the above
symbols come from, or how to resolve them.


Regards,
András



Re: Unresolved external symbol errors when linking example native c++ code that uses jcc

2010-10-21 Thread Andi Vajda


On Oct 21, 2010, at 4:39, Imre András ia...@freemail.hu wrote:


Hi list,

I intend to use jcc to ease calling Java code from native code. I managed to
build and install it. Now I try to build my first test code from within MS VS
2010 Win32 console app project. Despite setting up the libs and includes I
still get linker errors:


I'm not sure this link line makes sense. To get an idea of what the correct
link line looks like, get jcc to build a Python extension and take a close look
at the link line it generates, as an example for your C++-only case.


Andi..







RE: ant build error in trunk

2010-10-21 Thread Uwe Schindler
This is an Ant bug. The problem is that Ant's classpath generator somehow adds
each listed jar file to every directory it *may* be in, so it generates
impossible filenames. This happens when you have an ~/.ant/lib folder or pass a
-lib parameter on your ant command line. Javac prints a warning for each
classpath component that's invalid. The reason for this behavior is not clear,
but it has nothing to do with Lucene.

This is just a javac warning; the compilation does not fail, so everything
is ok. E.g. on our Hudson build we see the same with xbeans.jar.

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Lance Norskog [mailto:goks...@gmail.com]
 Sent: Thursday, October 21, 2010 12:58 AM
 To: solr-dev
 Subject: ant build error in trunk
 
 What does this mean? And should this be something in trunk?
 
 common.compile-core:
 [mkdir] Created dir: 
 /lucid/lance/open/lusolr/3.x/lucene/build/classes/java
 [javac] Compiling 422 source files to
 /lucid/lance/open/lusolr/3.x/lucene/build/classes/java
 [javac] warning: [path] bad path element
 /usr/share/ant/lib/hamcrest-core.jar: no such file or directory
 
 
 --
 Lance Norskog
 goks...@gmail.com
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org






[jira] Commented: (SOLR-2185) QueryElevationComponentTest depends on execution order

2010-10-21 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923353#action_12923353
 ] 

Simon Willnauer commented on SOLR-2185:
---

seems like this was recently removed in
[r1022735|http://svn.apache.org/viewvc/lucene/dev/trunk/solr/src/test/org/apache/solr/handler/component/QueryElevationComponentTest.java?r1=1022735&r2=1022785&diff_format=h]
 but was added
[here|http://svn.apache.org/viewvc/lucene/dev/trunk/solr/src/test/org/apache/solr/handler/component/QueryElevationComponentTest.java?r1=959441&r2=992469&diff_format=h]
 for the same reason. I will fix it, though.


 QueryElevationComponentTest depends on execution order
 --

 Key: SOLR-2185
 URL: https://issues.apache.org/jira/browse/SOLR-2185
 Project: Solr
  Issue Type: Test
  Components: SearchComponents - other
Reporter: Simon Willnauer
Priority: Minor
 Fix For: 4.0


 QueryElevationComponentTest fails if JUnit executes testSorting() before 
 testInterface() since there is already data in the index which testInterface 
 expects to be empty. 
 {noformat}
 java.lang.RuntimeException: Exception during query
   at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:369)
   at 
 org.apache.solr.handler.component.QueryElevationComponentTest.testInterface(QueryElevationComponentTest.java:100)
   at 
 org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:881)
   at 
 org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:847)
 Caused by: java.lang.RuntimeException: REQUEST FAILED:
 xpath=//*[@numFound='0']
   xml response was: <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">0</int><int name="QTime">9</int><lst name="params"><str name="q.alt">*:*</str><str name="qt">/elevate</str><str name="defType">dismax</str></lst></lst><result name="response" numFound="6" start="0"><doc><str name="id">a</str><arr name="str_s"><str>a</str></arr><str name="title">ipod</str></doc><doc><str name="id">b</str><arr name="str_s"><str>b</str></arr><str name="title">ipod ipod</str></doc><doc><str name="id">c</str><arr name="str_s"><str>c</str></arr><str name="title">ipod ipod ipod</str></doc><doc><str name="id">x</str><arr name="str_s"><str>x</str></arr><str name="title">boosted</str></doc><doc><str name="id">y</str><arr name="str_s"><str>y</str></arr><str name="title">boosted boosted</str></doc><doc><str name="id">z</str><arr name="str_s"><str>z</str></arr><str name="title">boosted boosted boosted</str></doc></result>
 </response>
   request was:q.alt=*:*&qt=/elevate&defType=dismax
   at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:362)
 NOTE: reproduce with: ant test -Dtestcase=QueryElevationComponentTest 
 -Dtestmethod=testInterface 
 -Dtests.seed=-9078717212967910902:2677111434934379417 -Dtests.multiplier=3
 {noformat}
 It is not necessarily reproducible, since it depends on JUnit and its
 execution order, which can differ. If I move testSorting() up above
 testInterface(), it fails on my machine.
 I guess a setup method would do the job:
 {code}
 @Override
 public void setUp() throws Exception {
   super.setUp();
   clearIndex();
   assertU(commit());
   assertU(optimize());
 }
 {code}
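The fix above can be illustrated outside Solr: two test methods share mutable state, one of them assumes that state starts empty, and clearing the state in a per-test setUp() restores order independence. A generic Java sketch (SharedIndex and its methods are made-up names, not Solr's API):

```java
import java.util.ArrayList;
import java.util.List;

// Generic sketch of the order-dependence bug: both "test methods" below share
// this static store, just as the two Solr tests share one index.
class SharedIndex {
    static List<String> docs = new ArrayList<>();

    // The proposed fix: reset shared state before every test method.
    static void setUp() {
        docs.clear();
    }

    // Analogue of testSorting(): indexes documents and leaves them behind.
    static void testSorting() {
        docs.add("a");
        docs.add("b");
    }

    // Analogue of testInterface(): only passes when the index starts empty.
    static boolean testInterface() {
        return docs.isEmpty();
    }
}
```

Run testSorting() first and testInterface() fails; call setUp() before each method and the execution order no longer matters.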

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.





[jira] Assigned: (SOLR-2185) QueryElevationComponentTest depends on execution order

2010-10-21 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer reassigned SOLR-2185:
-

Assignee: Simon Willnauer

 QueryElevationComponentTest depends on execution order
 --

 Key: SOLR-2185
 URL: https://issues.apache.org/jira/browse/SOLR-2185
 Project: Solr
  Issue Type: Test
  Components: SearchComponents - other
Reporter: Simon Willnauer
Assignee: Simon Willnauer
Priority: Minor
 Fix For: 4.0






Solr-trunk - Build # 1288 - Failure

2010-10-21 Thread Apache Hudson Server
Build: http://hudson.zones.apache.org/hudson/job/Solr-trunk/1288/

All tests passed

Build Log (for compile errors):
[...truncated 16293 lines...]






Lucene-Solr-tests-only-trunk - Build # 372 - Failure

2010-10-21 Thread Apache Hudson Server
Build: 
http://hudson.zones.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/372/

6 tests failed.
REGRESSION:  
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basic

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:369)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:336)
at 
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basic(TestGroupingSearch.java:47)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:881)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:847)
Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'
at org.apache.solr.core.SolrCore.getQueryPlugin(SolrCore.java:1508)
at org.apache.solr.search.QParser.getParser(QParser.java:286)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:330)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:231)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1359)
at 
org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:348)
at org.apache.solr.util.TestHarness.query(TestHarness.java:331)
at org.apache.solr.util.TestHarness.query(TestHarness.java:316)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:343)


REGRESSION:  
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basicWithGroupSortEqualToSort

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:369)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:336)
at 
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basicWithGroupSortEqualToSort(TestGroupingSearch.java:77)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:881)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:847)
Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'
at org.apache.solr.core.SolrCore.getQueryPlugin(SolrCore.java:1508)
at org.apache.solr.search.QParser.getParser(QParser.java:286)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:330)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:231)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1359)
at 
org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:348)
at org.apache.solr.util.TestHarness.query(TestHarness.java:331)
at org.apache.solr.util.TestHarness.query(TestHarness.java:316)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:343)


REGRESSION:  org.apache.solr.TestGroupingSearch.testGroupingGroupSortingName

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:369)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:336)
at 
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingName(TestGroupingSearch.java:103)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:881)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:847)
Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'
at org.apache.solr.core.SolrCore.getQueryPlugin(SolrCore.java:1508)
at org.apache.solr.search.QParser.getParser(QParser.java:286)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:330)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:231)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1359)
at 
org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:348)
at org.apache.solr.util.TestHarness.query(TestHarness.java:331)
at org.apache.solr.util.TestHarness.query(TestHarness.java:316)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:343)


REGRESSION:  org.apache.solr.TestGroupingSearch.testGroupingGroupSortingWeight

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 

[jira] Updated: (LUCENE-2618) Intermittent failure in 3.x's backwards TestThreadedOptimize

2010-10-21 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-2618:
---

Attachment: LUCENE-2618.patch

I think I found this!

After a merge completes, IW then checks with the merge policy to see if
follow-on merges are now necessary.

But this check is skipped if IW.close is pending (i.e., it has been called and
is waiting for merges to complete).

However, if that merge is an optimize, then we should in fact consult the merge
policy even when a close is pending, which we are not doing today.

Tiny patch (attached) should fix it.
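The decision described here can be condensed into a small predicate (an illustrative sketch with made-up names, not IndexWriter's actual code):

```java
// Sketch of the fixed decision: after a merge completes, should IndexWriter
// consult the merge policy for follow-on merges?
class MergeDecision {
    static boolean consultMergePolicy(boolean closePending, boolean mergeIsOptimize) {
        if (!closePending) {
            return true;  // normal case: always look for cascaded merges
        }
        // The fix: even while close() is waiting for merges, still cascade
        // merges that belong to a running optimize; skip only ordinary ones.
        return mergeIsOptimize;
    }
}
```

Before the patch, the close-pending branch effectively returned false unconditionally, so an optimize could finish without all of its cascaded merges having run.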

 Intermittent failure in 3.x's backwards TestThreadedOptimize
 

 Key: LUCENE-2618
 URL: https://issues.apache.org/jira/browse/LUCENE-2618
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Reporter: Michael McCandless
 Fix For: 3.1, 4.0

 Attachments: LUCENE-2618.patch


 Failure looks like this:
 {noformat}
 [junit] Testsuite: org.apache.lucene.index.TestThreadedOptimize
 [junit] Testcase: 
 testThreadedOptimize(org.apache.lucene.index.TestThreadedOptimize): FAILED
 [junit] null
 [junit] junit.framework.AssertionFailedError: null
 [junit]   at 
 org.apache.lucene.index.TestThreadedOptimize.runTest(TestThreadedOptimize.java:125)
 [junit]   at 
 org.apache.lucene.index.TestThreadedOptimize.testThreadedOptimize(TestThreadedOptimize.java:149)
 [junit]   at 
 org.apache.lucene.util.LuceneTestCase.runBare(LuceneTestCase.java:253)
 {noformat}
 I just committed some verbosity so next time it strikes we'll have more 
 details.




Lucene-Solr-tests-only-trunk - Build # 376 - Failure

2010-10-21 Thread Apache Hudson Server
Build: 
http://hudson.zones.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/376/

1 tests failed.
REGRESSION:  org.apache.solr.TestDistributedSearch.testDistribSearch

Error Message:
Some threads threw uncaught exceptions!

Stack Trace:
junit.framework.AssertionFailedError: Some threads threw uncaught exceptions!
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:881)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:847)
at 
org.apache.lucene.util.LuceneTestCase.tearDown(LuceneTestCase.java:440)
at org.apache.solr.SolrTestCaseJ4.tearDown(SolrTestCaseJ4.java:78)
at 
org.apache.solr.BaseDistributedSearchTestCase.tearDown(BaseDistributedSearchTestCase.java:144)




Build Log (for compile errors):
[...truncated 8727 lines...]






Re: svn commit: r1025929 - /lucene/dev/trunk/lucene/src/java/org/apache/lucene/search/MultiTermQuery.java

2010-10-21 Thread Michael McCandless
Whoa, thanks for catching this Uwe!

On Thu, Oct 21, 2010 at 8:21 AM, Uwe Schindler u...@thetaphi.de wrote:

 Why did you change this at all? I would revert or fix as described.

Because the current impl is wasteful, ie it does one too many .next()
calls.  I like your fix (move up the count/cache)... I'll commit.

BTW this count is only for debugging/transparency right?  Ie so you
can see how many terms were visited during MTQ rewrite.

Mike




[jira] Commented: (LUCENE-2618) Intermittent failure in 3.x's backwards TestThreadedOptimize

2010-10-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923426#action_12923426
 ] 

Michael McCandless commented on LUCENE-2618:


{quote}
If close(false) is called after optimize() was called, it means the app would 
like to abort merges ASAP. If so, why would we consult the MP if we're 
instructed to abort?

Are you talking about a different use case?
{quote}

Sorry, different use case.

This use case is: you call .optimize(doWait=false) and then call a normal
.close() (i.e., wait for merges). In this case we wait for all running merges
to finish, but don't start any new ones. My patch would still allow new ones to
start if the merges are due to a running optimize.

Your use case, where .close(false) is called, will in fact abort all running
merges and close quickly. I.e., we will not start new merges, even for
optimize, if you pass false to close, with this patch.





Re: svn commit: r1025978 - /lucene/dev/trunk/lucene/src/java/org/apache/lucene/search/MultiTermQuery.java

2010-10-21 Thread Michael McCandless
Thanks Uwe!

Mike

On Thu, Oct 21, 2010 at 8:51 AM,  uschind...@apache.org wrote:
 Author: uschindler
 Date: Thu Oct 21 12:51:06 2010
 New Revision: 1025978

 URL: http://svn.apache.org/viewvc?rev=1025978&view=rev
 Log:
 fix caching on Mike's last commit. Also remove counting at all.

 Modified:
    
 lucene/dev/trunk/lucene/src/java/org/apache/lucene/search/MultiTermQuery.java

 Modified: lucene/dev/trunk/lucene/src/java/org/apache/lucene/search/MultiTermQuery.java
 URL: http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/src/java/org/apache/lucene/search/MultiTermQuery.java?rev=1025978&r1=1025977&r2=1025978&view=diff
 ==============================================================================
 --- lucene/dev/trunk/lucene/src/java/org/apache/lucene/search/MultiTermQuery.java (original)
 +++ lucene/dev/trunk/lucene/src/java/org/apache/lucene/search/MultiTermQuery.java Thu Oct 21 12:51:06 2010
 @@ -247,10 +247,9 @@ public abstract class MultiTermQuery ext

   private abstract static class BooleanQueryRewrite extends RewriteMethod {

 -    protected final int collectTerms(IndexReader reader, MultiTermQuery query, TermCollector collector) throws IOException {
 +    protected final void collectTerms(IndexReader reader, MultiTermQuery query, TermCollector collector) throws IOException {
       final List<IndexReader> subReaders = new ArrayList<IndexReader>();
       ReaderUtil.gatherSubReaders(subReaders, reader);
 -      int count = 0;
       Comparator<BytesRef> lastTermComp = null;

       for (IndexReader r : subReaders) {
 @@ -281,15 +280,11 @@ public abstract class MultiTermQuery ext
         collector.setNextEnum(termsEnum);
         BytesRef bytes;
         while ((bytes = termsEnum.next()) != null) {
 -          if (collector.collect(bytes)) {
 -            termsEnum.cacheCurrentTerm();
 -            count++;
 -          } else {
 -            return count; // interrupt whole term collection, so also don't iterate other subReaders
 -          }
 +          termsEnum.cacheCurrentTerm();
 +          if (!collector.collect(bytes))
 +            return; // interrupt whole term collection, so also don't iterate other subReaders
         }
       }
 -      return count;
     }

     protected static abstract class TermCollector {
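The commit moves the caching before the collect check and drops the counter; the resulting loop shape can be sketched generically (TermScan and Collector are illustrative stand-ins, not Lucene's TermsEnum/TermCollector API):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of the new collectTerms loop: cache every visited term first, then
// let the collector decide whether to stop (made-up types, not Lucene's API).
class TermScan {
    interface Collector {
        boolean collect(String term);  // false interrupts the whole collection
    }

    final List<String> cache = new ArrayList<>();

    void collectTerms(Iterator<String> terms, Collector collector) {
        while (terms.hasNext()) {
            String t = terms.next();
            cache.add(t);              // cacheCurrentTerm() analogue, now unconditional
            if (!collector.collect(t)) {
                return;                // stop: don't visit further terms/sub-readers
            }
        }
    }
}
```

A collector that returns false at "b" leaves exactly ["a", "b"] cached and stops the scan there.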






[jira] Commented: (LUCENE-2618) Intermittent failure in 3.x's backwards TestThreadedOptimize

2010-10-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923428#action_12923428
 ] 

Robert Muir commented on LUCENE-2618:
-

thanks for tracking this down...! 

I think if we fix this one, then we are really into the long tail of random 
test fails (at least for now)






[jira] Commented: (LUCENE-2618) Intermittent failure in 3.x's backwards TestThreadedOptimize

2010-10-21 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923430#action_12923430
 ] 

Shai Erera commented on LUCENE-2618:


Ok Mike, that makes sense. You want to allow optimize() to finish all possible 
merges. Why then not let regular merges finish all the way through, even if 
we're closing? I mean, the application wants to wait for all running merges, so 
why is optimize() different than maybeMerge()?





Lucene-Solr-tests-only-trunk - Build # 380 - Failure

2010-10-21 Thread Apache Hudson Server
Build: 
http://hudson.zones.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/380/

6 tests failed.
REGRESSION:  
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basic

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:369)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:336)
at 
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basic(TestGroupingSearch.java:47)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:881)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:847)
Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'
at org.apache.solr.core.SolrCore.getQueryPlugin(SolrCore.java:1508)
at org.apache.solr.search.QParser.getParser(QParser.java:286)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:330)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:231)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1359)
at 
org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:348)
at org.apache.solr.util.TestHarness.query(TestHarness.java:331)
at org.apache.solr.util.TestHarness.query(TestHarness.java:316)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:343)


REGRESSION:  
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basicWithGroupSortEqualToSort

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:369)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:336)
at 
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basicWithGroupSortEqualToSort(TestGroupingSearch.java:77)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:881)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:847)
Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'
at org.apache.solr.core.SolrCore.getQueryPlugin(SolrCore.java:1508)
at org.apache.solr.search.QParser.getParser(QParser.java:286)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:330)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:231)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1359)
at 
org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:348)
at org.apache.solr.util.TestHarness.query(TestHarness.java:331)
at org.apache.solr.util.TestHarness.query(TestHarness.java:316)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:343)


REGRESSION:  org.apache.solr.TestGroupingSearch.testGroupingGroupSortingName

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:369)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:336)
at 
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingName(TestGroupingSearch.java:103)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:881)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:847)
Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'
at org.apache.solr.core.SolrCore.getQueryPlugin(SolrCore.java:1508)
at org.apache.solr.search.QParser.getParser(QParser.java:286)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:330)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:231)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1359)
at 
org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:348)
at org.apache.solr.util.TestHarness.query(TestHarness.java:331)
at org.apache.solr.util.TestHarness.query(TestHarness.java:316)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:343)


REGRESSION:  org.apache.solr.TestGroupingSearch.testGroupingGroupSortingWeight

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 

[jira] Commented: (SOLR-2160) Unknown query type 'func'

2010-10-21 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923445#action_12923445
 ] 

Yonik Seeley commented on SOLR-2160:


I'm looping the tests on my Ubuntu system now, trying to reproduce this (Solr 
tests now only take 2 min!)

 Unknown query type 'func'
 -

 Key: SOLR-2160
 URL: https://issues.apache.org/jira/browse/SOLR-2160
 Project: Solr
  Issue Type: Test
  Components: Build
Affects Versions: 3.1, 4.0
 Environment: Hudson
Reporter: Robert Muir
 Fix For: 3.1, 4.0


 Several test methods in TestTrie failed in hudson, with errors such as this:
 Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'




[jira] Commented: (SOLR-2010) Improvements to SpellCheckComponent Collate functionality

2010-10-21 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923446#action_12923446
 ] 

Grant Ingersoll commented on SOLR-2010:
---

James, I did the merge back to 3.x.  I don't think we will be backporting this 
to 1.4, since all future releases there are bug-fix only.  

 Improvements to SpellCheckComponent Collate functionality
 -

 Key: SOLR-2010
 URL: https://issues.apache.org/jira/browse/SOLR-2010
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, spellchecker
Affects Versions: 1.4.1
 Environment: Tested against trunk revision 966633
Reporter: James Dyer
Assignee: Grant Ingersoll
Priority: Minor
 Attachments: multiple_collations_as_an_array.patch, SOLR-2010.patch, 
 SOLR-2010.patch, SOLR-2010.patch, SOLR-2010.patch, SOLR-2010.txt, 
 SOLR-2010_141.patch, SOLR-2010_141.patch, 
 SOLR-2010_shardRecombineCollations_993538.patch, 
 SOLR-2010_shardRecombineCollations_999521.patch, 
 SOLR-2010_shardSearchHandler_993538.patch, 
 SOLR-2010_shardSearchHandler_999521.patch, solr_2010_3x.patch


 Improvements to SpellCheckComponent Collate functionality
 Our project requires a better Spell Check Collator.  I'm contributing this as 
 a patch to get suggestions for improvements and in case there is a broader 
 need for these features.
 1. Only return collations that are guaranteed to result in hits if re-queried 
 (applying original fq params also).  This is especially helpful when there is 
 more than one correction per query.  The 1.4 behavior does not verify that a 
 particular combination will actually return hits.
 2. Provide the option to get multiple collation suggestions
 3. Provide extended collation results including the # of hits re-querying 
 will return and a breakdown of each misspelled word and its correction.
 This patch is similar to what is described in SOLR-507 item #1.  Also, this 
 patch provides a viable workaround for the problem discussed in SOLR-1074.  A 
 dictionary could be created that combines the terms from the multiple fields. 
  The collator then would prune out any spurious suggestions this would cause.
 This patch adds the following spellcheck parameters:
 1. spellcheck.maxCollationTries - maximum # of collation possibilities to try 
 before giving up.  Lower values ensure better performance.  Higher values may 
 be necessary to find a collation that can return results.  Default is 0, 
 which maintains backwards-compatible behavior (do not check collations).
 2. spellcheck.maxCollations - maximum # of collations to return.  Default is 
 1, which maintains backwards-compatible behavior.
 3. spellcheck.collateExtendedResult - if true, returns an expanded response 
 format detailing collations found.  default is false, which maintains 
 backwards-compatible behavior.  When true, output is like this (in context):
 <lst name="spellcheck">
   <lst name="suggestions">
   <lst name="hopq">
   <int name="numFound">94</int>
   <int name="startOffset">7</int>
   <int name="endOffset">11</int>
   <arr name="suggestion">
   <str>hope</str>
   <str>how</str>
   <str>hope</str>
   <str>chops</str>
   <str>hoped</str>
   etc
   </arr>
   <lst name="faill">
   <int name="numFound">100</int>
   <int name="startOffset">16</int>
   <int name="endOffset">21</int>
   <arr name="suggestion">
   <str>fall</str>
   <str>fails</str>
   <str>fail</str>
   <str>fill</str>
   <str>faith</str>
   <str>all</str>
   etc
   </arr>
   </lst>
   <lst name="collation">
   <str name="collationQuery">Title:(how AND fails)</str>
   <int name="hits">2</int>
   <lst name="misspellingsAndCorrections">
   <str name="hopq">how</str>
   <str name="faill">fails</str>
   </lst>
   </lst>
   <lst name="collation">
   <str name="collationQuery">Title:(hope AND faith)</str>
   <int name="hits">2</int>
   <lst name="misspellingsAndCorrections">
   <str name="hopq">hope</str>
   <str name="faill">faith</str>
   </lst>
   </lst>
   <lst name="collation">
   str 
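A minimal, self-contained sketch of how the new parameters described above could be assembled into a request string. The parameter names come from the patch description; the handler path (/select), the query terms, and the values are illustrative assumptions only, not part of the patch.

```java
// Sketch of a request using the new spellcheck parameters. Parameter names
// follow the patch description above; path and values are assumptions.
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class SpellcheckCollateParams {
    public static void main(String[] args) {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("q", "hopq faill");                    // the misspelled query
        p.put("spellcheck", "true");
        p.put("spellcheck.maxCollationTries", "10"); // 0 = old 1.4 behavior
        p.put("spellcheck.maxCollations", "3");
        p.put("spellcheck.collateExtendedResult", "true");
        String qs = p.entrySet().stream()
                .map(e -> e.getKey() + "="
                        + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
                .collect(Collectors.joining("&"));
        System.out.println("/select?" + qs);
    }
}
```

With maxCollationTries at 0 the collator behaves as in 1.4 (no verification); raising it lets the collator re-query until it finds a collation that actually returns hits.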

[jira] Commented: (SOLR-2160) Unknown query type 'func'

2010-10-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923447#action_12923447
 ] 

Robert Muir commented on SOLR-2160:
---

I too find this test hard to reproduce.

But I think if we look at the Hudson logs, it is all caused by SolrInfoMBeanTest.

The problem is that this test now only loads up src/java things, so the concern 
would be that it's a real static (or similar) problem.

I wish we could come up with a better way than SolrInfoMBeanTest to find such 
problems, since it makes debugging *very difficult*.


 Unknown query type 'func'
 -

 Key: SOLR-2160
 URL: https://issues.apache.org/jira/browse/SOLR-2160
 Project: Solr
  Issue Type: Test
  Components: Build
Affects Versions: 3.1, 4.0
 Environment: Hudson
Reporter: Robert Muir
 Fix For: 3.1, 4.0


 Several test methods in TestTrie failed in hudson, with errors such as this:
 Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'




[jira] Resolved: (SOLR-2181) Propose adding field to the admin GUI to indicate the status of HTTP caching

2010-10-21 Thread Grant Ingersoll (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Ingersoll resolved SOLR-2181.
---

   Resolution: Fixed
Fix Version/s: 4.0
   3.1

Backported to 3.x

 Propose adding field to the admin GUI to indicate the status of HTTP caching
 

 Key: SOLR-2181
 URL: https://issues.apache.org/jira/browse/SOLR-2181
 Project: Solr
  Issue Type: New Feature
  Components: web gui
Affects Versions: 4.0
Reporter: Pradeep Singh
Priority: Trivial
 Fix For: 3.1, 4.0

 Attachments: SOLR-2181.patch, SOLR-2181.patch


 Add a field to the admin GUI to indicate the status of HTTP caching - HTTP 
 caching is ON. 




[jira] Issue Comment Edited: (SOLR-2160) Unknown query type 'func'

2010-10-21 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923449#action_12923449
 ] 

Uwe Schindler edited comment on SOLR-2160 at 10/21/10 10:12 AM:


The problem seems to be one of the Solr classes that does some static 
initialization that depends, e.g., on a SolrCore being available. If this class 
is initialized early (e.g. in SolrMBeanTest) and no SolrCore or other part is 
available, it fails. The SolrMBean test is not the bug, it's the class that 
does the static initialization! How does the func QP get enabled? Is the info 
about it cached in a static field? Something like that causes this issue.

  was (Author: thetaphi):
The problem seems to be one of the solr classes that does some static 
initialization that depends e.g. on a SolrCore be available. If this class is 
initialized before (e.g. in SolrMBeanTest) it fails. The SolrMBean test is not 
the bug its the class that does static initialization. How gets the func QP be 
enabled? is the infor about it cached in a staic var? Something like that 
causes this issue.
  
 Unknown query type 'func'
 -

 Key: SOLR-2160
 URL: https://issues.apache.org/jira/browse/SOLR-2160
 Project: Solr
  Issue Type: Test
  Components: Build
Affects Versions: 3.1, 4.0
 Environment: Hudson
Reporter: Robert Muir
 Fix For: 3.1, 4.0


 Several test methods in TestTrie failed in hudson, with errors such as this:
 Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'




[jira] Commented: (SOLR-2160) Unknown query type 'func'

2010-10-21 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923461#action_12923461
 ] 

Uwe Schindler commented on SOLR-2160:
-

It's not instantiating. SolrMBean only loads the classes! So only statics are 
initialized. And that should be possible in any order. In particular, 
optimizers in the JVM may do this (it is even allowed by the JLS). 
SolrMBean only instantiates classes with a default ctor that implement this 
interface (this is a bug, you are right). But this problem seems to be more of 
a static-initializer bug.

To track this down, we can temporarily disable the instantiation part of 
SolrMBean and only do the Class.forName() calls. If it then still fails, it's a 
real bug. If it does not fail then, the reason is SolrMBean. In all other 
cases, the static initializers run by Class.forName() use wrong assumptions!
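
The distinction between loading (which runs static initializers) and instantiating (which additionally runs a constructor) can be shown with a tiny hypothetical class; this is illustrative Java, not Solr code, and all names are made up.

```java
// Hypothetical demo (not Solr code): Class.forName() runs static
// initializers, while the constructor only runs on instantiation.
public class LoadVsNew {
    static final StringBuilder LOG = new StringBuilder();

    static class Suspect {
        static { LOG.append("static-init;"); } // runs when the class is initialized
        Suspect() { LOG.append("ctor;"); }     // runs only when instantiated
    }

    public static void main(String[] args) throws Exception {
        // Loading (with initialization) alone triggers the static block...
        Class<?> c = Class.forName(LoadVsNew.class.getName() + "$Suspect");
        System.out.println(LOG); // static-init;
        // ...instantiating additionally runs the constructor.
        c.getDeclaredConstructor().newInstance();
        System.out.println(LOG); // static-init;ctor;
    }
}
```

So if the failure survives with instantiation disabled, a static initializer that assumes a live SolrCore is the likely culprit.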

 Unknown query type 'func'
 -

 Key: SOLR-2160
 URL: https://issues.apache.org/jira/browse/SOLR-2160
 Project: Solr
  Issue Type: Test
  Components: Build
Affects Versions: 3.1, 4.0
 Environment: Hudson
Reporter: Robert Muir
 Fix For: 3.1, 4.0


 Several test methods in TestTrie failed in hudson, with errors such as this:
 Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'




[jira] Commented: (SOLR-2160) Unknown query type 'func'

2010-10-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923462#action_12923462
 ] 

Robert Muir commented on SOLR-2160:
---

bq. Its not instantiating. SolrMBean only loads the classes!

I disagree... unless I'm completely wrong about what Class.newInstance() does!

{noformat}
for( Class clazz : classes ) {
  if( SolrInfoMBean.class.isAssignableFrom( clazz ) ) {
try {
  SolrInfoMBean info = (SolrInfoMBean)clazz.newInstance();
  
  //System.out.println( info.getClass() );
  assertNotNull( info.getName() );
{noformat}

 Unknown query type 'func'
 -

 Key: SOLR-2160
 URL: https://issues.apache.org/jira/browse/SOLR-2160
 Project: Solr
  Issue Type: Test
  Components: Build
Affects Versions: 3.1, 4.0
 Environment: Hudson
Reporter: Robert Muir
 Fix For: 3.1, 4.0


 Several test methods in TestTrie failed in hudson, with errors such as this:
 Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'




[jira] Commented: (SOLR-2010) Improvements to SpellCheckComponent Collate functionality

2010-10-21 Thread James Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923463#action_12923463
 ] 

James Dyer commented on SOLR-2010:
--

Great.  Thank you.  

The 1.4 patch is mostly for my benefit, so we can use the functionality before 
the next release.  Thought I'd share it with anyone else who wants to try it 
too...


 Improvements to SpellCheckComponent Collate functionality
 -

 Key: SOLR-2010
 URL: https://issues.apache.org/jira/browse/SOLR-2010
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, spellchecker
Affects Versions: 1.4.1
 Environment: Tested against trunk revision 966633
Reporter: James Dyer
Assignee: Grant Ingersoll
Priority: Minor
 Attachments: multiple_collations_as_an_array.patch, SOLR-2010.patch, 
 SOLR-2010.patch, SOLR-2010.patch, SOLR-2010.patch, SOLR-2010.txt, 
 SOLR-2010_141.patch, SOLR-2010_141.patch, 
 SOLR-2010_shardRecombineCollations_993538.patch, 
 SOLR-2010_shardRecombineCollations_999521.patch, 
 SOLR-2010_shardSearchHandler_993538.patch, 
 SOLR-2010_shardSearchHandler_999521.patch, solr_2010_3x.patch


 Improvements to SpellCheckComponent Collate functionality
 Our project requires a better Spell Check Collator.  I'm contributing this as 
 a patch to get suggestions for improvements and in case there is a broader 
 need for these features.
 1. Only return collations that are guaranteed to result in hits if re-queried 
 (applying original fq params also).  This is especially helpful when there is 
 more than one correction per query.  The 1.4 behavior does not verify that a 
 particular combination will actually return hits.
 2. Provide the option to get multiple collation suggestions
 3. Provide extended collation results including the # of hits re-querying 
 will return and a breakdown of each misspelled word and its correction.
 This patch is similar to what is described in SOLR-507 item #1.  Also, this 
 patch provides a viable workaround for the problem discussed in SOLR-1074.  A 
 dictionary could be created that combines the terms from the multiple fields. 
  The collator then would prune out any spurious suggestions this would cause.
 This patch adds the following spellcheck parameters:
 1. spellcheck.maxCollationTries - maximum # of collation possibilities to try 
 before giving up.  Lower values ensure better performance.  Higher values may 
 be necessary to find a collation that can return results.  Default is 0, 
 which maintains backwards-compatible behavior (do not check collations).
 2. spellcheck.maxCollations - maximum # of collations to return.  Default is 
 1, which maintains backwards-compatible behavior.
 3. spellcheck.collateExtendedResult - if true, returns an expanded response 
 format detailing collations found.  default is false, which maintains 
 backwards-compatible behavior.  When true, output is like this (in context):
 <lst name="spellcheck">
   <lst name="suggestions">
   <lst name="hopq">
   <int name="numFound">94</int>
   <int name="startOffset">7</int>
   <int name="endOffset">11</int>
   <arr name="suggestion">
   <str>hope</str>
   <str>how</str>
   <str>hope</str>
   <str>chops</str>
   <str>hoped</str>
   etc
   </arr>
   <lst name="faill">
   <int name="numFound">100</int>
   <int name="startOffset">16</int>
   <int name="endOffset">21</int>
   <arr name="suggestion">
   <str>fall</str>
   <str>fails</str>
   <str>fail</str>
   <str>fill</str>
   <str>faith</str>
   <str>all</str>
   etc
   </arr>
   </lst>
   <lst name="collation">
   <str name="collationQuery">Title:(how AND fails)</str>
   <int name="hits">2</int>
   <lst name="misspellingsAndCorrections">
   <str name="hopq">how</str>
   <str name="faill">fails</str>
   </lst>
   </lst>
   <lst name="collation">
   <str name="collationQuery">Title:(hope AND faith)</str>
   <int name="hits">2</int>
   <lst name="misspellingsAndCorrections">
   <str name="hopq">hope</str>
   <str name="faill">faith</str>
   </lst>
   </lst>
   lst 

[jira] Commented: (SOLR-2160) Unknown query type 'func'

2010-10-21 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923465#action_12923465
 ] 

Uwe Schindler commented on SOLR-2160:
-

See above: I changed my original comment before you posted your reply - sorry.

It only instantiates a few classes (those which match the interface). Because 
of this I suggested at the end to disable this instantiation temporarily. If it 
then does not fail, the bug is tracked to the instantiation. If it then also 
fails, you can be 100% sure that it is caused by static initialization.

Sorry for the incomplete comment at the beginning!

 Unknown query type 'func'
 -

 Key: SOLR-2160
 URL: https://issues.apache.org/jira/browse/SOLR-2160
 Project: Solr
  Issue Type: Test
  Components: Build
Affects Versions: 3.1, 4.0
 Environment: Hudson
Reporter: Robert Muir
 Fix For: 3.1, 4.0


 Several test methods in TestTrie failed in hudson, with errors such as this:
 Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'




[jira] Commented: (SOLR-2160) Unknown query type 'func'

2010-10-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923466#action_12923466
 ] 

Robert Muir commented on SOLR-2160:
---

bq. Because of this I suggested at the end, to disable this instantiation 
temporarily.

This is a good idea for debugging. This is how I found it was meddling with 
TestReplicationHandler, by only doing Class.forName(TestReplicationHandler), 
back when SolrInfoMBeanTest meddled with test classes too.

The difference with that one was that it was easier to reproduce... this 
func() one is tricky!


 Unknown query type 'func'
 -

 Key: SOLR-2160
 URL: https://issues.apache.org/jira/browse/SOLR-2160
 Project: Solr
  Issue Type: Test
  Components: Build
Affects Versions: 3.1, 4.0
 Environment: Hudson
Reporter: Robert Muir
 Fix For: 3.1, 4.0


 Several test methods in TestTrie failed in hudson, with errors such as this:
 Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'




[jira] Commented: (SOLR-2180) Requests to Embedded Solr (often) leave old Searchers open

2010-10-21 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923491#action_12923491
 ] 

Yonik Seeley commented on SOLR-2180:


My reading of it was that only on an exception would the request not be closed 
(which is still a serious bug, though).
I was cranking on tests and overlooked that this change had a broader impact.

 Requests to Embedded Solr (often) leave old Searchers open
 --

 Key: SOLR-2180
 URL: https://issues.apache.org/jira/browse/SOLR-2180
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 1.4.2, 1.5, 3.1, 4.0
Reporter: Lance Norskog

 SolrEmbeddedServer.request() fails to call SolrQueryRequest.close() at the 
 very end of a successful request. This (in some situations) causes 
 EmbeddedSolr to leave old Searchers open until the Solr core stops (core 
 unload, the JVM restarts). This leaves old Solr and Lucene caches in place, 
 which causes a memory leak.
 A fix for this was committed on the trunk on Sunday, Oct/15/2010. 
 [Revision r1023599  to 
 SolrEmbeddedServer|http://svn.apache.org/viewvc/lucene/dev/trunk/solr/src/webapp/src/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java?diff_format=h&revision=1023599&view=markup]
 This should be backported, or the problem checked for, in 1.4.2 and 3.1. 
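
The leak pattern described above (a request released only on the error path) and the shape of the fix can be sketched with a stand-in class; FakeSearcher and the open counter are hypothetical, not the actual Solr classes.

```java
// Hypothetical stand-in (not the actual Solr classes) for the bug described:
// if the request is closed only on the exception path, every successful
// request keeps its searcher referenced. try-with-resources closes on both.
public class CloseOnSuccess {
    static class FakeSearcher implements AutoCloseable {
        static int open = 0;            // searchers still held open
        FakeSearcher() { open++; }
        @Override public void close() { open--; }
    }

    static String request() {
        // Closed even when the body returns normally, mirroring the fix.
        try (FakeSearcher s = new FakeSearcher()) {
            return "ok";
        }
    }

    public static void main(String[] args) {
        request();
        System.out.println(FakeSearcher.open); // 0: no leaked searcher
    }
}
```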




Re: Unresolved external symbol errors when linking example native c++ code that uses jcc

2010-10-21 Thread Imre András
Hi Chris,

I did it on MS VS 2008, too. class JCCEnv * env still cannot be resolved.

Regards,
  András

Christian Heimes li...@cheimes.de wrote:
On 21.10.2010 13:39, Imre András wrote:
 Hi list,
 
 I intend to use jcc to ease calling Java code from native code. I managed to 
 build and install it. Now I try to build my first test code from within MS VS 
 2010 Win32 console app project. Despite setting up the libs and includes I 
 still get linker errors:

This may not be related to your problem but MS VS 2010 is not supported
by any current Python release. You have to use VS 2008.

Christian



[jira] Commented: (SOLR-2178) Use the Velocity UI as the default tutorial example

2010-10-21 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923503#action_12923503
 ] 

Grant Ingersoll commented on SOLR-2178:
---

Just checked in some more changes along this line:

# Added toggleable tooltips (click the enable annotation link at the 
bottom) so that when you hover over the facet headings, the boost by price 
check box, etc. you will get a tooltip describing what it does.
# Added a link to the Solr admin
# Called out the various facet types more explicitly. 

 Use the Velocity UI as the default tutorial example
 ---

 Key: SOLR-2178
 URL: https://issues.apache.org/jira/browse/SOLR-2178
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
Priority: Minor

 The /browse example in solr/example is much nicer to look at and work with, 
 we should convert the tutorial over to use it so as to present a nicer view 
 of Solr's capabilities.




Re: Unresolved external symbol errors when linking example native c++ code that uses jcc

2010-10-21 Thread Imre András
JCCEnv.h has the following:

...
#ifdef _jcc_shared
_DLL_IMPORT extern JCCEnv *env;
_DLL_IMPORT extern DWORD VM_ENV;
#else
_DLL_EXPORT extern JCCEnv *env;
_DLL_EXPORT extern DWORD VM_ENV;
#endif
...

I suspect this is the root of my linker problem. Where should this *env get 
resolved?


Thanks,
  András


Imre András ia...@freemail.hu wrote:
Hi list,

I intend to use jcc to ease calling Java code from native code. I managed to 
build and install it. Now I try to build my first test code from within MS VS 
2010 Win32 console app project. Despite setting up the libs and includes I 
still get linker errors:

--
Link:
C:\Program Files\Microsoft Visual Studio 10.0\VC\bin\link.exe 
/ERRORREPORT:PROMPT 
/OUT:C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.exe /INCREMENTAL 
/NOLOGO python27.lib _jcc.lib jvm.lib kernel32.lib user32.lib gdi32.lib 
winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib 
uuid.lib odbc32.lib odbccp32.lib kernel32.lib user32.lib gdi32.lib winspool.lib 
comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib 
odbc32.lib odbccp32.lib /MANIFEST 
/ManifestFile:Debug\jcc.exe.intermediate.manifest 
/MANIFESTUAC:level='asInvoker' uiAccess='false' /DEBUG 
/PDB:C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.pdb 
/SUBSYSTEM:CONSOLE /TLBID:1 /DYNAMICBASE /NXCOMPAT 
/IMPLIB:C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.lib /MACHINE:X86 
Debug\jcc.exe.embed.manifest.res
Debug\jcc.obj
Debug\stdafx.obj
Debug\__wrap__.obj
Creating library C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.lib and 
object C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.exp
1>jcc.obj : error LNK2001: unresolved external symbol "unsigned long VM_ENV" 
(?VM_ENV@@3KA)
1>stdafx.obj : error LNK2001: unresolved external symbol "unsigned long VM_ENV" 
(?VM_ENV@@3KA)
1>__wrap__.obj : error LNK2001: unresolved external symbol "unsigned long 
VM_ENV" (?VM_ENV@@3KA)
1>jcc.obj : error LNK2001: unresolved external symbol "class JCCEnv * env" 
(?env@@3PAVJCCEnv@@A)
1>stdafx.obj : error LNK2001: unresolved external symbol "class JCCEnv * env" 
(?env@@3PAVJCCEnv@@A)
1>__wrap__.obj : error LNK2001: unresolved external symbol "class JCCEnv * env" 
(?env@@3PAVJCCEnv@@A)
1>C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.exe : fatal error 
LNK1120: 2 unresolved externals
1>Done Building Project 
"C:\out\app\PeldaProgram\ZipBe\bin\test\jcc\jcc.vcxproj" (rebuild target(s)) -- 
FAILED.

Build FAILED.

Time Elapsed 00:00:12.60
--


jni.h and Native2Java.h (jcc-generated from the Java class I intend to use in 
native C++ code) are added to stdafx.h. Now I have no idea where the above 
symbols come from, nor how I should resolve them.


Regards,
András




[jira] Resolved: (SOLR-2180) Requests to Embedded Solr (often) leave old Searchers open

2010-10-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-2180.


   Resolution: Fixed
Fix Version/s: 4.0
   3.1
   1.4.2

Backported to 3.x and 1.4.x.

It's unclear exactly what would have triggered this bug... 
SearchHandler.handleRequest normally catches exceptions and returns normally, 
so a normal query exception wouldn't have triggered it. I'm sure there were 
some ways, though.

 Requests to Embedded Solr (often) leave old Searchers open
 --

 Key: SOLR-2180
 URL: https://issues.apache.org/jira/browse/SOLR-2180
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 1.4.2, 1.5, 3.1, 4.0
Reporter: Lance Norskog
 Fix For: 1.4.2, 3.1, 4.0


 SolrEmbeddedServer.request() fails to call SolrQueryRequest.close() at the 
 very end of a successful request. This (in some situations) causes 
 EmbeddedSolr to leave old Searchers open until the Solr core stops (core 
 unload, the JVM restarts). This leaves old Solr and Lucene caches in place, 
 which causes a memory leak.
 A fix for this was committed on the trunk on Sunday, Oct/15/2010. 
 [Revision r1023599  to 
 SolrEmbeddedServer|http://svn.apache.org/viewvc/lucene/dev/trunk/solr/src/webapp/src/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java?diff_format=h&revision=1023599&view=markup]
 This should be backported, or the problem checked for, in 1.4.2 and 3.1. 




[jira] Created: (LUCENE-2716) Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so many objects

2010-10-21 Thread Uwe Schindler (JIRA)
Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so many 
objects
---

 Key: LUCENE-2716
 URL: https://issues.apache.org/jira/browse/LUCENE-2716
 Project: Lucene - Java
  Issue Type: Improvement
Affects Versions: 4.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 4.0


MinimizeOperations.minimizeHopcroft() creates a lot of objects because of 
strange arrays and useless fixed-length ArrayLists. E.g. it created 
List<List<List>>. This patch minimizes this and makes the whole method much 
more GC-friendly by using simple arrays and by avoiding empty LinkedLists 
entirely (inside the reverse array). 

minimize() is called very very often, especially in tests (MockAnalyzer).

A test for the method is prepared by Robert, we found a bug somewhere else in 
automaton, so this is pending until his issue and fix arrives.
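
As an illustration of the kind of change described (hypothetical code, not the actual patch): copying a List<List<Integer>> of buckets into plain int[] arrays removes the per-bucket list and boxed-Integer objects the garbage collector would otherwise have to track.

```java
// Hypothetical illustration (not the actual Lucene patch): flatten nested
// lists into primitive arrays; afterwards no per-bucket List objects remain.
import java.util.List;

public class FlattenBuckets {
    static int[][] flatten(List<List<Integer>> buckets) {
        int[][] out = new int[buckets.size()][];
        for (int i = 0; i < buckets.size(); i++) {
            List<Integer> b = buckets.get(i);
            int[] arr = new int[b.size()];
            for (int j = 0; j < arr.length; j++) {
                arr[j] = b.get(j); // unbox once into the primitive array
            }
            out[i] = arr;
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] flat = flatten(List.of(List.of(1, 2), List.of(3)));
        System.out.println(flat[0].length + " " + flat[1][0]); // prints: 2 3
    }
}
```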




[jira] Commented: (LUCENE-2716) Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so many objects

2010-10-21 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923550#action_12923550
 ] 

Uwe Schindler commented on LUCENE-2716:
---

It still creates lots of objects, depending on the size and states of the 
automaton, but a lot fewer!
If I look over it several more times, I may find more improvements. :-)

The Hopcroft-Policeman

 Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so 
 many objects
 ---

 Key: LUCENE-2716
 URL: https://issues.apache.org/jira/browse/LUCENE-2716
 Project: Lucene - Java
  Issue Type: Improvement
Affects Versions: 4.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 4.0

 Attachments: LUCENE-2716.patch


 MinimizeOperations.minimizeHopcroft() creates a lot of objects because of 
 strange arrays and useless fixed-length ArrayLists. E.g. it created a 
 List<List<List<...>>>. This patch minimizes that and makes the whole method 
 much more GC-friendly by using simple arrays and avoiding empty LinkedLists 
 entirely (inside the reverse array). 
 minimize() is called very very often, especially in tests (MockAnalyzer).
 A test for the method is prepared by Robert, we found a bug somewhere else in 
 automaton, so this is pending until his issue and fix arrives.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (LUCENE-2716) Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so many objects

2010-10-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923552#action_12923552
 ] 

Robert Muir commented on LUCENE-2716:
-

This thing will make your head explode, thanks for cleaning it up!

 Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so 
 many objects
 ---

 Key: LUCENE-2716
 URL: https://issues.apache.org/jira/browse/LUCENE-2716
 Project: Lucene - Java
  Issue Type: Improvement
Affects Versions: 4.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 4.0

 Attachments: LUCENE-2716.patch


 MinimizeOperations.minimizeHopcroft() creates a lot of objects because of 
 strange arrays and useless fixed-length ArrayLists. E.g. it created a 
 List<List<List<...>>>. This patch minimizes that and makes the whole method 
 much more GC-friendly by using simple arrays and avoiding empty LinkedLists 
 entirely (inside the reverse array). 
 minimize() is called very very often, especially in tests (MockAnalyzer).
 A test for the method is prepared by Robert, we found a bug somewhere else in 
 automaton, so this is pending until his issue and fix arrives.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Unresolved external symbol errors when linking example native c++ code that uses jcc

2010-10-21 Thread Andi Vajda


On Thu, 21 Oct 2010, Imre András wrote:


JCCEnv.h has the following:

...
#ifdef _jcc_shared
_DLL_IMPORT extern JCCEnv *env;
_DLL_IMPORT extern DWORD VM_ENV;
#else
_DLL_EXPORT extern JCCEnv *env;
_DLL_EXPORT extern DWORD VM_ENV;
#endif
...

I suspect this is the root of my linker problem. Where should this *env 
get resolved?


These are defined in JCCEnv.cpp and jcc.cpp, which get linked with your shared 
library either dynamically (when using --shared) or statically.


You need to link a piece of jcc itself with your code. To get an example 
link line, ask jcc to create a Python extension for you; a correct link line 
flies by in the build output.


Andi..




Thanks,
 András


Imre András ia...@freemail.hu írta:

Hi list,


I intend to use jcc to ease calling Java code from native code. I managed to build 
and install it. Now I try to build my first test code from within MS VS 2010 Win32 
console app project. Despite setting up the libs and includes I still get linker 
errors:



--
Link:
C:\Program Files\Microsoft Visual Studio 10.0\VC\bin\link.exe /ERRORREPORT:PROMPT 
/OUT:C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.exe /INCREMENTAL /NOLOGO python27.lib _jcc.lib jvm.lib 
kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib 
odbccp32.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib 
odbc32.lib odbccp32.lib /MANIFEST /ManifestFile:Debug\jcc.exe.intermediate.manifest /MANIFESTUAC:level='asInvoker' 
uiAccess='false' /DEBUG /PDB:C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.pdb /SUBSYSTEM:CONSOLE /TLBID:1 
/DYNAMICBASE /NXCOMPAT /IMPLIB:C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.lib /MACHINE:X86 
Debug\jcc.exe.embed.manifest.res
Debug\jcc.obj
Debug\stdafx.obj
Debug\__wrap__.obj
Creating library C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.lib and 
object C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.exp
1>jcc.obj : error LNK2001: unresolved external symbol "unsigned long VM_ENV" (?VM_ENV@@3KA)
1>stdafx.obj : error LNK2001: unresolved external symbol "unsigned long VM_ENV" (?VM_ENV@@3KA)
1>__wrap__.obj : error LNK2001: unresolved external symbol "unsigned long VM_ENV" (?VM_ENV@@3KA)
1>jcc.obj : error LNK2001: unresolved external symbol "class JCCEnv * env" (?env@@3PAVJCCEnv@@A)
1>stdafx.obj : error LNK2001: unresolved external symbol "class JCCEnv * env" (?env@@3PAVJCCEnv@@A)
1>__wrap__.obj : error LNK2001: unresolved external symbol "class JCCEnv * env" (?env@@3PAVJCCEnv@@A)
1>C:\out\app\PeldaProgram\ZipBe\bin\test\src\Debug\jcc.exe : fatal error LNK1120: 2 unresolved externals
1>Done Building Project "C:\out\app\PeldaProgram\ZipBe\bin\test\jcc\jcc.vcxproj" (rebuild target(s)) -- FAILED.



Build FAILED.



Time Elapsed 00:00:12.60
--




jni.h and Native2Java.h (generated by jcc from the Java class I intend to use in 
native C++ code) are included from stdafx.h. I have no idea where the above 
symbols come from, or how I should resolve them.




Regards,
András






[jira] Created: (LUCENE-2717) BasicOperations.concatenate creates invariants

2010-10-21 Thread Robert Muir (JIRA)
BasicOperations.concatenate creates invariants
--

 Key: LUCENE-2717
 URL: https://issues.apache.org/jira/browse/LUCENE-2717
 Project: Lucene - Java
  Issue Type: Bug
Affects Versions: 4.0
Reporter: Robert Muir
Assignee: Robert Muir
 Fix For: 4.0
 Attachments: LUCENE-2717.patch

I started writing a test for LUCENE-2716, and I found a problem with 
BasicOperations.concatenate(Automaton, Automaton):
it creates automata whose representation violates the invariants (which should 
never happen unless you manipulate states/transitions manually).

strangely enough the BasicOperations.concatenate(List<Automaton>) does not have 
this problem.


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Updated: (LUCENE-2717) BasicOperations.concatenate creates invariants

2010-10-21 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-2717:


Attachment: LUCENE-2717.patch

Attached is a patch with a test and a sub-optimal fix.

I'll work on a better fix.


 BasicOperations.concatenate creates invariants
 --

 Key: LUCENE-2717
 URL: https://issues.apache.org/jira/browse/LUCENE-2717
 Project: Lucene - Java
  Issue Type: Bug
Affects Versions: 4.0
Reporter: Robert Muir
Assignee: Robert Muir
 Fix For: 4.0

 Attachments: LUCENE-2717.patch


 I started writing a test for LUCENE-2716, and i found a problem with 
 BasicOperations.concatenate(Automaton, Automaton):
 it creates automata with invariant representation (which should never happen, 
 unless you manipulate states/transitions manually).
 strangely enough the BasicOperations.concatenate(List<Automaton>) does not 
 have this problem.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (LUCENE-2618) Intermittent failure in 3.x's backwards TestThreadedOptimize

2010-10-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923564#action_12923564
 ] 

Michael McCandless commented on LUCENE-2618:


We do allow all running merges to run to completion.

But, we don't allow new merges to start, unless it's part of an ongoing 
optimize (as of this patch).

I think this distinction makes sense?  Since optimize was an explicit call, it 
should run until completion.  But merging can simply pick up the next time the 
index is opened?

If an app really wants to allow all merges to run before closing (even new ones 
starting) it can call waitForMerges and then close.

 Intermittent failure in 3.x's backwards TestThreadedOptimize
 

 Key: LUCENE-2618
 URL: https://issues.apache.org/jira/browse/LUCENE-2618
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Reporter: Michael McCandless
 Fix For: 3.1, 4.0

 Attachments: LUCENE-2618.patch


 Failure looks like this:
 {noformat}
 [junit] Testsuite: org.apache.lucene.index.TestThreadedOptimize
 [junit] Testcase: 
 testThreadedOptimize(org.apache.lucene.index.TestThreadedOptimize): FAILED
 [junit] null
 [junit] junit.framework.AssertionFailedError: null
 [junit]   at 
 org.apache.lucene.index.TestThreadedOptimize.runTest(TestThreadedOptimize.java:125)
 [junit]   at 
 org.apache.lucene.index.TestThreadedOptimize.testThreadedOptimize(TestThreadedOptimize.java:149)
 [junit]   at 
 org.apache.lucene.util.LuceneTestCase.runBare(LuceneTestCase.java:253)
 {noformat}
 I just committed some verbosity so next time it strikes we'll have more 
 details.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Updated: (LUCENE-2717) BasicOperations.concatenate creates invariants

2010-10-21 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-2717:


Attachment: LUCENE-2717.patch

Here's the fix.

The problem: if you concatenate the empty automaton with any other automaton, 
the result must be empty.

So if the RHS was empty, the resulting language was still empty (correct), but 
the code would create epsilon transitions from the LHS's accept states all 
leading to dead states...

In the fix I just return makeEmpty() if either input is empty, which is a very 
quick check.

Will commit soon.
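The guard can be pictured with a toy model in which an automaton's (finite) language is just a set of strings. This is only an analogy for the real makeEmpty() short-circuit, not the actual automaton code:

```java
import java.util.*;

class ConcatSketch {
    // Toy model: concatenation of two finite languages. The analogous real fix
    // is: if either input automaton accepts nothing, short-circuit to the empty
    // automaton instead of wiring epsilon transitions into dead states.
    static Set<String> concatenate(Set<String> a, Set<String> b) {
        if (a.isEmpty() || b.isEmpty()) return Collections.emptySet(); // quick check
        Set<String> out = new HashSet<>();
        for (String x : a)
            for (String y : b)
                out.add(x + y); // every pairwise concatenation
        return out;
    }
}
```

The empty set is the absorbing element of concatenation, which is exactly why returning makeEmpty() up front is both correct and cheap.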


 BasicOperations.concatenate creates invariants
 --

 Key: LUCENE-2717
 URL: https://issues.apache.org/jira/browse/LUCENE-2717
 Project: Lucene - Java
  Issue Type: Bug
Affects Versions: 4.0
Reporter: Robert Muir
Assignee: Robert Muir
 Fix For: 4.0

 Attachments: LUCENE-2717.patch, LUCENE-2717.patch


 I started writing a test for LUCENE-2716, and i found a problem with 
 BasicOperations.concatenate(Automaton, Automaton):
 it creates automata with invariant representation (which should never happen, 
 unless you manipulate states/transitions manually).
 strangely enough the BasicOperations.concatenate(List<Automaton>) does not 
 have this problem.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Resolved: (LUCENE-2717) BasicOperations.concatenate creates invariants

2010-10-21 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-2717.
-

Resolution: Fixed

Committed revision 1026104

 BasicOperations.concatenate creates invariants
 --

 Key: LUCENE-2717
 URL: https://issues.apache.org/jira/browse/LUCENE-2717
 Project: Lucene - Java
  Issue Type: Bug
Affects Versions: 4.0
Reporter: Robert Muir
Assignee: Robert Muir
 Fix For: 4.0

 Attachments: LUCENE-2717.patch, LUCENE-2717.patch


 I started writing a test for LUCENE-2716, and i found a problem with 
 BasicOperations.concatenate(Automaton, Automaton):
 it creates automata with invariant representation (which should never happen, 
 unless you manipulate states/transitions manually).
 strangely enough the BasicOperations.concatenate(List<Automaton>) does not 
 have this problem.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (LUCENE-2716) Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so many objects

2010-10-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923572#action_12923572
 ] 

Robert Muir commented on LUCENE-2716:
-

OK, I fixed LUCENE-2717 in revision 1026104.

In that issue, I added a basic random test for minimize(); maybe we can improve 
it, but it should be pretty good at finding any bugs.

(It doesn't find any bugs with this patch.)

 Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so 
 many objects
 ---

 Key: LUCENE-2716
 URL: https://issues.apache.org/jira/browse/LUCENE-2716
 Project: Lucene - Java
  Issue Type: Improvement
Affects Versions: 4.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 4.0

 Attachments: LUCENE-2716.patch


 MinimizeOperations.minimizeHopcroft() creates a lot of objects because of 
 strange arrays and useless fixed-length ArrayLists. E.g. it created a 
 List<List<List<...>>>. This patch minimizes that and makes the whole method 
 much more GC-friendly by using simple arrays and avoiding empty LinkedLists 
 entirely (inside the reverse array). 
 minimize() is called very very often, especially in tests (MockAnalyzer).
 A test for the method is prepared by Robert, we found a bug somewhere else in 
 automaton, so this is pending until his issue and fix arrives.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Polymorphic Index

2010-10-21 Thread eks dev
Hi All, 
I am trying to figure out a way to implement following use case with 
lucene/solr. 


In order to support simple incremental updates (master) I need to index and 
store a UID field on a 300-million-document collection (my UID is a 32-byte 
sequence). But I do not need it indexed (only stored) during normal searching 
(slaves). 


The problem is that my term dictionary gets blown away by the sheer number of 
unique IDs. The number of unique terms in this collection, excluding UID, is 
less than 7 million.
I can tolerate a resource hit on the updater (big hardware, on-disk index...).

This is a master-slave setup where searchers run from a RAM disk, and having 
300M * 32 bytes (give or take prefix compression), plus pointers to postings, 
plus the postings themselves, is something I would really love to avoid, as it 
is significant compared to the really small documents I have. 


Cutting to the chase:
How can I have an indexed UID field and, when done with indexing:
1) load a searchable index into RAM from such an on-disk index, minus one 
field? 

2) create 2 indices in sync on docIDs, one containing only the indexed UID?
3) somehow transform the index by dropping the indexed UID field while 
preserving docIDs - a kind of smart index-editing tool? 

Something else already out there that I do not know about?

Preserving docIDs is crucial, as I need support for lovely incremental updates 
(like in the Solr master-slave update). Also, the stored field should remain!
I am not looking for "use an MMAPed index and let the OS deal with it" 
advice... 
I do not mind doing it with the flex branch (4.0), not being in a hurry.

Thanks in advance, 
Eks 




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Created: (LUCENE-2718) separate java code from .jj file

2010-10-21 Thread Yonik Seeley (JIRA)
separate java code from .jj file


 Key: LUCENE-2718
 URL: https://issues.apache.org/jira/browse/LUCENE-2718
 Project: Lucene - Java
  Issue Type: Improvement
  Components: QueryParser
Reporter: Yonik Seeley
Priority: Minor


It would make development easier to move most of the java code out from the .jj 
file and into a real java file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Updated: (LUCENE-2718) separate java code from .jj file

2010-10-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated LUCENE-2718:
-

Attachment: LUCENE-2718.patch

Here's a first cut at this, pulling most of the code out of QueryParser into 
QueryParserBase.
This makes it easier to do refactoring and changes in Java (instead of in a 
.jj file, which isn't well supported in IDEs), and allows more changes without 
having to recompile the grammar.

Thoughts?

 separate java code from .jj file
 

 Key: LUCENE-2718
 URL: https://issues.apache.org/jira/browse/LUCENE-2718
 Project: Lucene - Java
  Issue Type: Improvement
  Components: QueryParser
Reporter: Yonik Seeley
Priority: Minor
 Attachments: LUCENE-2718.patch


 It would make development easier to move most of the java code out from the 
 .jj file and into a real java file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (LUCENE-2718) separate java code from .jj file

2010-10-21 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923610#action_12923610
 ] 

Robert Muir commented on LUCENE-2718:
-

Just taking a quick glance, I am really for the change. 

I can't stand not having syntax/error highlighting in my IDE, and always 
having to regenerate the parser from the .jj file with 'ant javacc' only to 
find I made a stupid typo.


 separate java code from .jj file
 

 Key: LUCENE-2718
 URL: https://issues.apache.org/jira/browse/LUCENE-2718
 Project: Lucene - Java
  Issue Type: Improvement
  Components: QueryParser
Reporter: Yonik Seeley
Priority: Minor
 Attachments: LUCENE-2718.patch


 It would make development easier to move most of the java code out from the 
 .jj file and into a real java file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Lucene-Solr-tests-only-trunk - Build # 395 - Failure

2010-10-21 Thread Apache Hudson Server
Build: 
http://hudson.zones.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/395/

1 tests failed.
REGRESSION:  org.apache.solr.TestDistributedSearch.testDistribSearch

Error Message:
Some threads threw uncaught exceptions!

Stack Trace:
junit.framework.AssertionFailedError: Some threads threw uncaught exceptions!
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:881)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:847)
at 
org.apache.lucene.util.LuceneTestCase.tearDown(LuceneTestCase.java:440)
at org.apache.solr.SolrTestCaseJ4.tearDown(SolrTestCaseJ4.java:78)
at 
org.apache.solr.BaseDistributedSearchTestCase.tearDown(BaseDistributedSearchTestCase.java:144)




Build Log (for compile errors):
[...truncated 8716 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-2126) highlighting multicore searches relying on q.alt gives NPE

2010-10-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923623#action_12923623
 ] 

David Smiley commented on SOLR-2126:


Following the instructions in my description here, I see that the bug is no 
longer present.  Thanks Koji.

BTW, there seems to be a problem with the multicore example config in trunk; it 
fails to start up properly due to a lack of index segments.  Weird.  I copied 
some out of another index.

 highlighting multicore searches relying on q.alt gives NPE
 --

 Key: SOLR-2126
 URL: https://issues.apache.org/jira/browse/SOLR-2126
 Project: Solr
  Issue Type: Bug
  Components: highlighter
Affects Versions: 1.4
 Environment: I'm on a trunk release from early March, but I also just 
 verified this on LucidWorks 1.4 which I have handy.
Reporter: David Smiley
Priority: Minor

 To reproduce this, run the example multicore solr configuration.  Then index 
 each example document into each core.  Now we're going to do a distributed 
 search, with q.alt=*:* and defType=dismax.  Normally, these would be set in a 
 request handler config as defaults but we'll put them in the url to make it 
 clear they need to be set and because the default multicore example config is 
 so bare bones that it doesn't already have a dismax setup.  We're going to 
 enable highlighting.
 http://localhost:8983/solr/core0/select?hl=true&q.alt=*:*&defType=dismax&shards=localhost:8983/solr/core0,localhost:8983/solr/core1
 java.lang.NullPointerException
   at 
 org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:130)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1089)
   at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:365)
   at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
   at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
   at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:211)
   at 
 org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
   at org.mortbay.jetty.Server.handle(Server.java:285)
   at 
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:502)
   at 
 org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:821)
   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:513)
   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:208)
   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:378)
   at 
 org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:226)
   at 
 org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442)
 Since I happen to be using edismax in trunk, it was easy for me to work 
 around this problem by renaming my q.alt parameter in my request handler 
 defaults to just q since edismax understands raw lucene queries.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-2126) highlighting multicore searches relying on q.alt gives NPE

2010-10-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923625#action_12923625
 ] 

David Smiley commented on SOLR-2126:


Closed as duplicate; although arguably the duplicate is actually SOLR-2148.

 highlighting multicore searches relying on q.alt gives NPE
 --

 Key: SOLR-2126
 URL: https://issues.apache.org/jira/browse/SOLR-2126
 Project: Solr
  Issue Type: Bug
  Components: highlighter
Affects Versions: 1.4
 Environment: I'm on a trunk release from early March, but I also just 
 verified this on LucidWorks 1.4 which I have handy.
Reporter: David Smiley
Priority: Minor

 To reproduce this, run the example multicore solr configuration.  Then index 
 each example document into each core.  Now we're going to do a distributed 
 search, with q.alt=*:* and defType=dismax.  Normally, these would be set in a 
 request handler config as defaults but we'll put them in the url to make it 
 clear they need to be set and because the default multicore example config is 
 so bare bones that it doesn't already have a dismax setup.  We're going to 
 enable highlighting.
 http://localhost:8983/solr/core0/select?hl=true&q.alt=*:*&defType=dismax&shards=localhost:8983/solr/core0,localhost:8983/solr/core1
 java.lang.NullPointerException
   at 
 org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:130)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1089)
   at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:365)
   at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
   at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
   at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:211)
   at 
 org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
   at org.mortbay.jetty.Server.handle(Server.java:285)
   at 
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:502)
   at 
 org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:821)
   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:513)
   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:208)
   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:378)
   at 
 org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:226)
   at 
 org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442)
 Since I happen to be using edismax in trunk, it was easy for me to work 
 around this problem by renaming my q.alt parameter in my request handler 
 defaults to just q since edismax understands raw lucene queries.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-2178) Use the Velocity UI as the default tutorial example

2010-10-21 Thread Grant Ingersoll (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923628#action_12923628
 ] 

Grant Ingersoll commented on SOLR-2178:
---

Committed support for spatial filtering (and map display) and MoreLikeThis; 
also switched the default defType to edismax.

 Use the Velocity UI as the default tutorial example
 ---

 Key: SOLR-2178
 URL: https://issues.apache.org/jira/browse/SOLR-2178
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
Priority: Minor

 The /browse example in solr/example is much nicer to look at and work with, 
 we should convert the tutorial over to use it so as to present a nicer view 
 of Solr's capabilities.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Polymorphic Index

2010-10-21 Thread Mark Harwood
Perhaps another way of thinking about the problem:

Given a large range of IDs (e.g. your 300 million) you could constrain the 
number of unique terms using a double-hashing technique:
pick a number n for the max number of unique terms you'll tolerate, e.g. 1 
million, and store 2 terms for every primary key, each computed with a 
different hashing function:

int hashedKey1 = hashFunction1(myKey) % maxNumUniqueTerms;
int hashedKey2 = hashFunction2(myKey) % maxNumUniqueTerms;

Then queries to retrieve/delete a record use a search for hashedKey1 AND 
hashedKey2. The probability of the same collision occurring under two 
different hash functions is minimal, so this should return only the original 
record.
You would obviously still have the postings recorded, but they would be 
slightly more compact: each of your 1 million unique terms would have ~300 
gap-encoded vInt entries, as opposed to 300M postings of one full int each.
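A minimal sketch of this double-hashing scheme. The hash function choices (FNV-1a plus a scrambled polynomial hash), the field names, and the constants are illustrative assumptions, not anything prescribed in the thread:

```java
class DoubleHashedUid {
    static final int MAX_UNIQUE_TERMS = 1_000_000; // the "n" you tolerate

    // Illustrative hash #1: FNV-1a, reduced to the term space.
    static int hash1(byte[] key) {
        int h = 0x811C9DC5;
        for (byte b : key) { h ^= (b & 0xFF); h *= 0x01000193; }
        return Math.floorMod(h, MAX_UNIQUE_TERMS);
    }

    // Illustrative hash #2: polynomial hash with a golden-ratio scramble,
    // chosen only to be different from hash1.
    static int hash2(byte[] key) {
        int h = 0;
        for (byte b : key) h = 31 * h + (b & 0xFF);
        h *= 0x9E3779B9;
        return Math.floorMod(h, MAX_UNIQUE_TERMS);
    }

    // Index both terms; retrieval is a conjunction of the two, e.g.
    // +uidH1:<h1> +uidH2:<h2> (field names are made up for illustration).
    static String[] terms(byte[] key) {
        return new String[] { "uidH1:" + hash1(key), "uidH2:" + hash2(key) };
    }
}
```

Each document gets two extra indexed terms, but the total term dictionary is capped at about 2 * MAX_UNIQUE_TERMS entries regardless of collection size.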

Cheers
Mark

On 21 Oct 2010, at 20:44, eks dev wrote:

 Hi All, 
 I am trying to figure out a way to implement following use case with 
 lucene/solr. 
 
 
 In order to support simple incremental updates (master) I need to index  and 
 store UID Field on 300Mio collection. (My UID is a 32 byte  sequence). But I 
 do 
 not need indexed (only stored) it during normal  searching (slaves). 
 
 
 The problem is that my term dictionary gets blown away with sheer number  of 
 unique IDs. Number of unique terms on this collection, excluding UID  is less 
 than 7Mio.
 I can tolerate resources hit on Updater (big hardware, on disk index...).
 
 This is a master slave setup, where searchers run from RAMDisk and  having 
 300Mio * 32 (give or take prefix compression) plus pointers to  postings and 
 postings is something I would really love to avoid as this  is significant 
 compared to really small documents I have. 
 
 
 Cutting to the chase:
 How can I have an indexed UID field and, when done with indexing:
 1) load a searchable index into RAM from such an on-disk index, without that one 
 field? 
 2) create 2 indices in sync on docIDs, one containing only the indexed UID?
 3) somehow transform the index with the indexed UID by dropping the UID field, 
 preserving docIds? Kind of a smart index-editing tool. 
 
 Is there something else already out there I do not know about?
 
 Preserving docIds is crucial, as I need support for lovely incremental updates 
 (like in solr master-slave update). Also, stored fields should remain!
 I am not looking for "use an MMAPed index and let the OS deal with it" advice... 
 I do not mind doing it with the flex branch (4.0), not being in a hurry.
 
 Thanks in advance, 
 Eks 
 
 
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 





Re: Polymorphic Index

2010-10-21 Thread Paul Elschot
How about splitting the 32 byte field into for example 16 subfields of 2 bytes 
each?
Then any direct query on that field needs to be transformed into a boolean
query requiring all 16 subfield terms.
Would that work?

Regards,
Paul Elschot
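To make the suggestion concrete, here is a small sketch that derives the 16 two-byte subfield values from a 32-byte UID. The field names uid0..uid15 and the hex encoding are illustrative assumptions; each value would then be added as a required clause (a TermQuery with Occur.MUST on its subfield) of one BooleanQuery:

```java
public class UidSubfields {
    // Split a 32-byte UID into 16 two-byte, hex-encoded subfield values,
    // to be indexed as fields uid0..uid15 and ANDed together at query time.
    public static String[] split(byte[] uid32) {
        if (uid32.length != 32) throw new IllegalArgumentException("expected 32 bytes");
        String[] terms = new String[16];
        for (int i = 0; i < 16; i++) {
            terms[i] = String.format("%02x%02x",
                    uid32[2 * i] & 0xff, uid32[2 * i + 1] & 0xff);
        }
        return terms;
    }

    public static void main(String[] args) {
        byte[] uid = new byte[32];
        for (int i = 0; i < 32; i++) uid[i] = (byte) i;
        String[] terms = split(uid);
        System.out.println("uid0=" + terms[0] + " uid15=" + terms[15]);
        // -> uid0=0001 uid15=1e1f
    }
}
```

Each subfield has at most 65536 distinct terms, so the term dictionary stays small; the trade-off is that every UID lookup becomes a 16-clause conjunction.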


Op donderdag 21 oktober 2010 21:44:34 schreef eks dev:
 Hi All, 
 I am trying to figure out a way to implement following use case with 
 lucene/solr. 
 
 
 In order to support simple incremental updates (master) I need to index and 
 store a UID field on a 300Mio collection. (My UID is a 32-byte sequence.) But I 
 do not need it indexed (only stored) during normal searching (slaves). 
 
 
 The problem is that my term dictionary gets blown away by the sheer number of 
 unique IDs. The number of unique terms on this collection, excluding UID, is less 
 than 7Mio.
 I can tolerate a resource hit on the Updater (big hardware, on-disk index...).
 
 This is a master-slave setup, where searchers run from RAMDisk, and having 
 300Mio * 32 bytes (give or take prefix compression) plus pointers to postings, 
 plus the postings themselves, is something I would really love to avoid, as this 
 is significant compared to the really small documents I have. 
 
 
 Cutting to the chase:
 How can I have an indexed UID field and, when done with indexing:
 1) load a searchable index into RAM from such an on-disk index, without that one 
 field? 
 2) create 2 indices in sync on docIDs, one containing only the indexed UID?
 3) somehow transform the index with the indexed UID by dropping the UID field, 
 preserving docIds? Kind of a smart index-editing tool. 
 
 Is there something else already out there I do not know about?
 
 Preserving docIds is crucial, as I need support for lovely incremental updates 
 (like in solr master-slave update). Also, stored fields should remain!
 I am not looking for "use an MMAPed index and let the OS deal with it" advice... 
 I do not mind doing it with the flex branch (4.0), not being in a hurry.
 
 Thanks in advance, 
 Eks 
 
 
   
 
 
 
 




RE: Polymorphic Index

2010-10-21 Thread Toke Eskildsen
Mark Harwood [markharw...@yahoo.co.uk]:
 Given a large range of IDs (eg your 300 million) you could constrain
 the number of unique terms using a double-hashing technique e.g.
 Pick a number n for the max number of unique terms you'll tolerate
 e.g. 1 million and store 2 terms for every primary key using a 
 different hashing function e.g.

 int hashedKey1 = hashFunction1(myKey) % maxNumUniqueTerms;
 int hashedKey2 = hashFunction2(myKey) % maxNumUniqueTerms;

 Then queries to retrieve/delete a record use a search for hashedKey1
 AND hashedKey2. The probability of having the same collision on two
 different hashing functions is minimal and should return the original record 
 only.

I am sorry, but this won't work. It is a variation of the birthday paradox:
http://en.wikipedia.org/wiki/Birthday_problem

Assuming the two hash-functions are ideal so that there will be 1M different 
values from each after the modulo, the probability for any given pair of 
different UIDs having the same hashes is 1/(1M * 1M). That's very low. Another 
way to look at it would be to say that there are 1M * 1M possible values for 
the aggregated hash function.

Using the recipe from
http://en.wikipedia.org/wiki/Birthday_problem#Cast_as_a_collision_problem
we have
n = 300M
d = 1M * 1M
and the formula 1-((d-1)/d)^(n*(n-1)/2), which with these numbers becomes
1 - ((10^12 - 1)/10^12)^(3*10^8 * (3*10^8 - 1)/2).

We see that the probability of a collision is ... 1. Or rather, so close to 1 
that Google's calculator will not show any decimals. Turning the number of UIDs 
down to just 3M, we still get a probability of about 0.98889 for a collision. It 
does not really help to increase the number of unique hashes, as there are simply 
too many possible pairs of UIDs.
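These figures can be reproduced with the standard birthday-problem approximation 1 - e^(-n(n-1)/(2d)), which agrees with the exact formula 1-((d-1)/d)^(n(n-1)/2) to many decimals at these magnitudes:

```java
public class CollisionProbability {
    // P(at least one collision) ~= 1 - exp(-n*(n-1)/(2d)) for n keys
    // hashed uniformly into d buckets (birthday-problem approximation).
    static double collisionProbability(double n, double d) {
        return 1.0 - Math.exp(-n * (n - 1) / (2.0 * d));
    }

    public static void main(String[] args) {
        double d = 1e6 * 1e6;   // two independent hashes of 1M values each
        System.out.println(collisionProbability(300e6, d)); // ~1.0
        System.out.println(collisionProbability(3e6, d));   // ~0.9889
    }
}
```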



Re: Polymorphic Index

2010-10-21 Thread Mark Harwood
Good point, Toke. Forgot about that. Of course doubling the number of hash 
algos used to 4 increases the space massively. 

On 21 Oct 2010, at 22:51, Toke Eskildsen t...@statsbiblioteket.dk wrote:

 Mark Harwood [markharw...@yahoo.co.uk]:
 Given a large range of IDs (eg your 300 million) you could constrain
 the number of unique terms using a double-hashing technique e.g.
 Pick a number n for the max number of unique terms you'll tolerate
 e.g. 1 million and store 2 terms for every primary key using a 
 different hashing function e.g.
 
  int hashedKey1 = hashFunction1(myKey) % maxNumUniqueTerms;
  int hashedKey2 = hashFunction2(myKey) % maxNumUniqueTerms;
 
 Then queries to retrieve/delete a record use a search for hashedKey1
 AND hashedKey2. The probability of having the same collision on two
 different hashing functions is minimal and should return the original record 
 only.
 
 I am sorry, but this won't work. It is a variation of the birthday paradox:
 http://en.wikipedia.org/wiki/Birthday_problem
 
 Assuming the two hash-functions are ideal so that there will be 1M different 
 values from each after the modulo, the probability for any given pair of 
 different UIDs having the same hashes is 1/(1M * 1M). That's very low. 
 Another way to look at it would be to say that there are 1M * 1M possible 
 values for the aggregated hash function.
 
 Using the recipe from
 http://en.wikipedia.org/wiki/Birthday_problem#Cast_as_a_collision_problem
 we have
 n = 300M
 d = 1M * 1M
 and the formula 1-((d-1)/d)^(n*(n-1)/2), which with these numbers becomes
 1 - ((10^12 - 1)/10^12)^(3*10^8 * (3*10^8 - 1)/2).
 
 We see that the probability of a collision is ... 1. Or rather, so close to 1 
 that Google's calculator will not show any decimals. Turning the number of 
 UIDs down to just 3M, we still get a probability of about 0.98889 for a 
 collision. It does not really help to increase the number of unique hashes, as 
 there are simply too many possible pairs of UIDs.
 




[jira] Commented: (LUCENE-2618) Intermittent failure in 3.x's backwards TestThreadedOptimize

2010-10-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923665#action_12923665
 ] 

Michael McCandless commented on LUCENE-2618:


bq. Just want to point out that calling maybeMerge is as explicit as calling 
optimize.

But: apps don't normally call maybeMerge?  This is typically called within IW, 
eg on segment flush.

I mean, it is public so apps can call it, but I expect very few do (vs optimize, 
which apps use a lot).  It's the exception, not the rule...

I guess I feel that close should try to close quickly -- an app would not 
expect close to randomly take a long time (it's already bad enough since a 
large merge could be in process...).   So, allowing other merges to start up, 
which could easily be large merges since they are follow-on ones, would make 
that worse.

Alternatively, we could define the semantics of close as being allowed to 
prevent a running optimize from actually completing?  Then we'd have to change 
this test, eg to call .waitForMerges before close.

 Intermittent failure in 3.x's backwards TestThreadedOptimize
 

 Key: LUCENE-2618
 URL: https://issues.apache.org/jira/browse/LUCENE-2618
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Reporter: Michael McCandless
 Fix For: 3.1, 4.0

 Attachments: LUCENE-2618.patch


 Failure looks like this:
 {noformat}
 [junit] Testsuite: org.apache.lucene.index.TestThreadedOptimize
 [junit] Testcase: 
 testThreadedOptimize(org.apache.lucene.index.TestThreadedOptimize): FAILED
 [junit] null
 [junit] junit.framework.AssertionFailedError: null
 [junit]   at 
 org.apache.lucene.index.TestThreadedOptimize.runTest(TestThreadedOptimize.java:125)
 [junit]   at 
 org.apache.lucene.index.TestThreadedOptimize.testThreadedOptimize(TestThreadedOptimize.java:149)
 [junit]   at 
 org.apache.lucene.util.LuceneTestCase.runBare(LuceneTestCase.java:253)
 {noformat}
 I just committed some verbosity so next time it strikes we'll have more 
 details.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.





RE: Polymorphic Index

2010-10-21 Thread Toke Eskildsen
From: Mark Harwood [markharw...@yahoo.co.uk]
 Good point, Toke. Forgot about that. Of course doubling the number
 of hash algos used to 4 increases the space massively.

Maybe your hashing-idea could work even with collisions?

Using your original two-hash suggestion, we're just about sure to get 
collisions. However, we are still able to uniquely identify the right document, 
as the UID is also stored (search for the hashes, iterate over the results and 
get the UID for each). When an update is requested for an existing document, 
the indexer extracts the UIDs from all the documents that match the hashes. 
Then it performs a delete of the hash terms and re-indexes all the documents 
that had false collisions. As the number of unique hash values as well as the 
hash functions can be adjusted, this could be a nicely tweakable 
performance-vs-space trade-off.

This will only work if it is possible to re-create the documents from stored 
terms or by requesting the data from outside of Lucene by UID. Is this possible 
with your setup, eks dev?
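A toy in-memory simulation of that update procedure, just to show the control flow. A HashMap stands in for the index, and the two hash functions are arbitrary stand-ins: search by the hash pair, pick out the target by its stored UID, delete by the hash terms, then re-index the false collisions:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HashBucketUpdate {
    // Toy stand-in for the index: hash-pair key -> stored (UID, doc) entries.
    static final Map<String, List<String[]>> index = new HashMap<>();

    // Two stand-in hash functions reduced to a small term space.
    static String hashKey(String uid) {
        int h1 = Math.floorMod(uid.hashCode(), 1000);
        int h2 = Math.floorMod(uid.hashCode() * 31 + 7, 1000);
        return h1 + ":" + h2;
    }

    static void addDoc(String uid, String doc) {
        index.computeIfAbsent(hashKey(uid), k -> new ArrayList<>())
             .add(new String[] { uid, doc });
    }

    // "Search for the hashes, iterate over the results, get the UID for each."
    static String getDoc(String uid) {
        List<String[]> bucket = index.get(hashKey(uid));
        if (bucket == null) return null;
        for (String[] entry : bucket)
            if (entry[0].equals(uid)) return entry[1];
        return null;
    }

    static void updateDoc(String uid, String newDoc) {
        // Delete everything matching the hash terms...
        List<String[]> bucket = index.remove(hashKey(uid));
        // ...re-index the false collisions (same hashes, different stored UID)...
        if (bucket != null)
            for (String[] entry : bucket)
                if (!entry[0].equals(uid)) addDoc(entry[0], entry[1]);
        // ...and finally index the new version of the document.
        addDoc(uid, newDoc);
    }

    public static void main(String[] args) {
        addDoc("uid-A", "v1");
        updateDoc("uid-A", "v2");
        System.out.println(getDoc("uid-A")); // prints v2
    }
}
```

In a real index the re-index step is the expensive part, which is why it only works if the colliding documents can be rebuilt from stored fields or fetched from outside Lucene by UID, as noted above.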



[jira] Updated: (LUCENE-2716) Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so many objects

2010-10-21 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-2716:
--

Attachment: LUCENE-2716.patch

Final patch, will commit now.

 Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so 
 many objects
 ---

 Key: LUCENE-2716
 URL: https://issues.apache.org/jira/browse/LUCENE-2716
 Project: Lucene - Java
  Issue Type: Improvement
Affects Versions: 4.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 4.0

 Attachments: LUCENE-2716.patch, LUCENE-2716.patch


 MinimizeOperations.minimizeHopcroft() creates a lot of objects because of 
 strange arrays and useless ArrayLists with fixed length. E.g. it created 
 nested List&lt;List&lt;List&gt;&gt; structures. This patch minimizes this and makes the 
 whole method much more GC-friendly by using simple arrays or by avoiding 
 empty LinkedLists entirely (inside the reverse array). 
 minimize() is called very, very often, especially in tests (MockAnalyzer).
 A test for the method is prepared by Robert; we found a bug somewhere else in 
 automaton, so this is pending until his issue and fix arrive.






[jira] Resolved: (LUCENE-2716) Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so many objects

2010-10-21 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-2716.
---

Resolution: Fixed

Committed revision: 1026168

 Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so 
 many objects
 ---

 Key: LUCENE-2716
 URL: https://issues.apache.org/jira/browse/LUCENE-2716
 Project: Lucene - Java
  Issue Type: Improvement
Affects Versions: 4.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 4.0

 Attachments: LUCENE-2716.patch, LUCENE-2716.patch


 MinimizeOperations.minimizeHopcroft() creates a lot of objects because of 
 strange arrays and useless ArrayLists with fixed length. E.g. it created 
 nested List&lt;List&lt;List&gt;&gt; structures. This patch minimizes this and makes the 
 whole method much more GC-friendly by using simple arrays or by avoiding 
 empty LinkedLists entirely (inside the reverse array). 
 minimize() is called very, very often, especially in tests (MockAnalyzer).
 A test for the method is prepared by Robert; we found a bug somewhere else in 
 automaton, so this is pending until his issue and fix arrive.






Lucene-3.x - Build # 157 - Failure

2010-10-21 Thread Apache Hudson Server
Build: http://hudson.zones.apache.org/hudson/job/Lucene-3.x/157/

All tests passed

Build Log (for compile errors):
[...truncated 21249 lines...]






Lucene-Solr-tests-only-trunk - Build # 402 - Failure

2010-10-21 Thread Apache Hudson Server
Build: 
http://hudson.zones.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/402/

6 tests failed.
REGRESSION:  
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basic

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:369)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:336)
at 
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basic(TestGroupingSearch.java:47)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:882)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:848)
Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'
at org.apache.solr.core.SolrCore.getQueryPlugin(SolrCore.java:1508)
at org.apache.solr.search.QParser.getParser(QParser.java:286)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:330)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:231)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1359)
at 
org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:348)
at org.apache.solr.util.TestHarness.query(TestHarness.java:331)
at org.apache.solr.util.TestHarness.query(TestHarness.java:316)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:343)


REGRESSION:  
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basicWithGroupSortEqualToSort

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:369)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:336)
at 
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basicWithGroupSortEqualToSort(TestGroupingSearch.java:77)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:882)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:848)
Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'
at org.apache.solr.core.SolrCore.getQueryPlugin(SolrCore.java:1508)
at org.apache.solr.search.QParser.getParser(QParser.java:286)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:330)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:231)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1359)
at 
org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:348)
at org.apache.solr.util.TestHarness.query(TestHarness.java:331)
at org.apache.solr.util.TestHarness.query(TestHarness.java:316)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:343)


REGRESSION:  org.apache.solr.TestGroupingSearch.testGroupingGroupSortingName

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:369)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:336)
at 
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingName(TestGroupingSearch.java:103)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:882)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:848)
Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'
at org.apache.solr.core.SolrCore.getQueryPlugin(SolrCore.java:1508)
at org.apache.solr.search.QParser.getParser(QParser.java:286)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:330)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:231)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1359)
at 
org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:348)
at org.apache.solr.util.TestHarness.query(TestHarness.java:331)
at org.apache.solr.util.TestHarness.query(TestHarness.java:316)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:343)


REGRESSION:  org.apache.solr.TestGroupingSearch.testGroupingGroupSortingWeight

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 

[jira] Reopened: (LUCENE-2716) Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so many objects

2010-10-21 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reopened LUCENE-2716:
---


More stuff to optimize, no longer LinkedLists and ArrayLists :-)

Hopcroft Policeman is back!

 Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so 
 many objects
 ---

 Key: LUCENE-2716
 URL: https://issues.apache.org/jira/browse/LUCENE-2716
 Project: Lucene - Java
  Issue Type: Improvement
Affects Versions: 4.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 4.0

 Attachments: LUCENE-2716-2.patch, LUCENE-2716.patch, LUCENE-2716.patch


 MinimizeOperations.minimizeHopcroft() creates a lot of objects because of 
 strange arrays and useless ArrayLists with fixed length. E.g. it created 
 nested List&lt;List&lt;List&gt;&gt; structures. This patch minimizes this and makes the 
 whole method much more GC-friendly by using simple arrays or by avoiding 
 empty LinkedLists entirely (inside the reverse array). 
 minimize() is called very, very often, especially in tests (MockAnalyzer).
 A test for the method is prepared by Robert; we found a bug somewhere else in 
 automaton, so this is pending until his issue and fix arrive.






[jira] Updated: (LUCENE-2716) Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so many objects

2010-10-21 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-2716:
--

Attachment: LUCENE-2716-2.patch

 Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so 
 many objects
 ---

 Key: LUCENE-2716
 URL: https://issues.apache.org/jira/browse/LUCENE-2716
 Project: Lucene - Java
  Issue Type: Improvement
Affects Versions: 4.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 4.0

 Attachments: LUCENE-2716-2.patch, LUCENE-2716.patch, LUCENE-2716.patch


 MinimizeOperations.minimizeHopcroft() creates a lot of objects because of 
 strange arrays and useless ArrayLists with fixed length. E.g. it created 
 nested List&lt;List&lt;List&gt;&gt; structures. This patch minimizes this and makes the 
 whole method much more GC-friendly by using simple arrays or by avoiding 
 empty LinkedLists entirely (inside the reverse array). 
 minimize() is called very, very often, especially in tests (MockAnalyzer).
 A test for the method is prepared by Robert; we found a bug somewhere else in 
 automaton, so this is pending until his issue and fix arrive.






[jira] Updated: (LUCENE-2718) separate java code from .jj file

2010-10-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated LUCENE-2718:
-

Attachment: LUCENE-2718.patch

Here's an update that pulls out more (some of the larger code blocks that were 
embedded in the middle of the grammar).

I'll probably commit tomorrow if no one thinks it's a bad idea.

 separate java code from .jj file
 

 Key: LUCENE-2718
 URL: https://issues.apache.org/jira/browse/LUCENE-2718
 Project: Lucene - Java
  Issue Type: Improvement
  Components: QueryParser
Reporter: Yonik Seeley
Priority: Minor
 Attachments: LUCENE-2718.patch, LUCENE-2718.patch


 It would make development easier to move most of the java code out from the 
 .jj file and into a real java file.






[jira] Updated: (LUCENE-2716) Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so many objects

2010-10-21 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-2716:
--

Attachment: LUCENE-2716-2(OpenBitSet).patch

Same patch with OpenBitSet for comparison

 Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so 
 many objects
 ---

 Key: LUCENE-2716
 URL: https://issues.apache.org/jira/browse/LUCENE-2716
 Project: Lucene - Java
  Issue Type: Improvement
Affects Versions: 4.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 4.0

 Attachments: LUCENE-2716-2(OpenBitSet).patch, LUCENE-2716-2.patch, 
 LUCENE-2716.patch, LUCENE-2716.patch


 MinimizeOperations.minimizeHopcroft() creates a lot of objects because of 
 strange arrays and useless ArrayLists with fixed length. E.g. it created 
 nested List&lt;List&lt;List&gt;&gt; structures. This patch minimizes this and makes the 
 whole method much more GC-friendly by using simple arrays or by avoiding 
 empty LinkedLists entirely (inside the reverse array). 
 minimize() is called very, very often, especially in tests (MockAnalyzer).
 A test for the method is prepared by Robert; we found a bug somewhere else in 
 automaton, so this is pending until his issue and fix arrive.






[jira] Resolved: (LUCENE-2716) Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so many objects

2010-10-21 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-2716.
---

Resolution: Fixed

Committed the standard BitSet variant in revision 1026190.
(OpenBitSet is 64-bit, and set/get of single bits is inefficient on 32-bit 
machines; there is no need for bulk transformations here.)

 Improve automaton's MinimizeOperations.minimizeHopcroft() to not create so 
 many objects
 ---

 Key: LUCENE-2716
 URL: https://issues.apache.org/jira/browse/LUCENE-2716
 Project: Lucene - Java
  Issue Type: Improvement
Affects Versions: 4.0
Reporter: Uwe Schindler
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 4.0

 Attachments: LUCENE-2716-2(OpenBitSet).patch, LUCENE-2716-2.patch, 
 LUCENE-2716.patch, LUCENE-2716.patch


 MinimizeOperations.minimizeHopcroft() creates a lot of objects because of 
 strange arrays and useless ArrayLists with fixed length. E.g. it created 
 ListListList. This patch minimizes this and makes the whole method much 
 more GC friendler by using simple arrays or avoiding empty LinkedLists at all 
 (inside reverse array). 
 minimize() is called very very often, especially in tests (MockAnalyzer).
 A test for the method is prepared by Robert, we found a bug somewhere else in 
 automaton, so this is pending until his issue and fix arrives.






[jira] Updated: (SOLR-2156) Solr Replication - SnapPuller fails to clean Old Index Directories on Full Copy

2010-10-21 Thread Jayendra Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayendra Patil updated SOLR-2156:
-

Attachment: Solr-2156_SnapPuller.patch

Attached the Fix.

 Solr Replication - SnapPuller fails to clean Old Index Directories on Full 
 Copy
 ---

 Key: SOLR-2156
 URL: https://issues.apache.org/jira/browse/SOLR-2156
 Project: Solr
  Issue Type: Improvement
  Components: replication (java)
Affects Versions: 4.0
Reporter: Jayendra Patil
 Attachments: Solr-2156_SnapPuller.patch


 We are working on the Solr trunk and have a master and two slaves 
 configuration. 
 Our indexing consists of periodic full and incremental index building on the 
 master and replication on the slaves.
 When a full indexing (clean and rebuild) is performed, we always end up with an 
 extra index folder copy, which holds the complete index, and hence the size 
 just keeps growing on the slaves.
 e.g.
 drwxr-xr-x 2 tomcat tomcat 4096 2010-10-09 12:10 index
 drwxr-xr-x 2 tomcat tomcat 4096 2010-10-11 09:43 index.20101009120649
 drwxr-xr-x 2 tomcat tomcat 4096 2010-10-12 10:27 index.20101011094043
 -rw-r--r-- 1 tomcat tomcat   75 2010-10-11 09:43 index.properties
 -rw-r--r-- 1 tomcat tomcat  422 2010-10-12 10:26 replication.properties
 drwxr-xr-x 2 tomcat tomcat   68 2010-10-12 10:27 spellchecker
 Where index.20101011094043 is the active index and the other index.* 
 directories are no longer used.
 The SnapPuller deletes the temporary index directory, but does not delete the 
 old one when the switch is performed for the full copy.
 The below code should do the trick.
 boolean fetchLatestIndex(SolrCore core) throws IOException {
   ...
   } finally {
     if (deleteTmpIdxDir) {
       delTree(tmpIndexDir);
     } else {
       // Delete the old index directory, as the flag is set only after
       // the full copy is performed
       delTree(indexDir);
     }
   }
   ...
 }






Lucene-trunk - Build # 1339 - Still Failing

2010-10-21 Thread Apache Hudson Server
Build: http://hudson.zones.apache.org/hudson/job/Lucene-trunk/1339/

All tests passed

Build Log (for compile errors):
[...truncated 18162 lines...]






Lucene-Solr-tests-only-trunk - Build # 409 - Failure

2010-10-21 Thread Apache Hudson Server
Build: 
http://hudson.zones.apache.org/hudson/job/Lucene-Solr-tests-only-trunk/409/

7 tests failed.
REGRESSION:  
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basic

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:369)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:336)
at 
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basic(TestGroupingSearch.java:47)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:882)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:848)
Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'
at org.apache.solr.core.SolrCore.getQueryPlugin(SolrCore.java:1508)
at org.apache.solr.search.QParser.getParser(QParser.java:286)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:330)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:231)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1359)
at 
org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:348)
at org.apache.solr.util.TestHarness.query(TestHarness.java:331)
at org.apache.solr.util.TestHarness.query(TestHarness.java:316)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:343)


REGRESSION:  
org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basicWithGroupSortEqualToSort

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:369)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:336)
at org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basicWithGroupSortEqualToSort(TestGroupingSearch.java:77)
at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:882)
at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:848)
Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'
at org.apache.solr.core.SolrCore.getQueryPlugin(SolrCore.java:1508)
at org.apache.solr.search.QParser.getParser(QParser.java:286)
at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:330)
at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:231)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1359)
at org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:348)
at org.apache.solr.util.TestHarness.query(TestHarness.java:331)
at org.apache.solr.util.TestHarness.query(TestHarness.java:316)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:343)
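An "Unknown query type 'func'" SolrException at SolrCore.getQueryPlugin typically means the solrconfig.xml used by the test harness has no QParserPlugin registered under the name "func". As a hedged illustration (the class name below is the stock Solr function-query plugin; whether this particular test config is missing it is an assumption), registration would look like:

```
<!-- solrconfig.xml sketch (assumption: the test config lacks this entry).
     Registers the function query parser so QParser.getParser(..., "func", ...)
     resolves instead of throwing "Unknown query type 'func'". -->
<queryParser name="func" class="org.apache.solr.search.FunctionQParserPlugin"/>
```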


REGRESSION:  org.apache.solr.TestGroupingSearch.testGroupingGroupSortingName

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:369)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:336)
at org.apache.solr.TestGroupingSearch.testGroupingGroupSortingName(TestGroupingSearch.java:103)
at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:882)
at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:848)
Caused by: org.apache.solr.common.SolrException: Unknown query type 'func'
at org.apache.solr.core.SolrCore.getQueryPlugin(SolrCore.java:1508)
at org.apache.solr.search.QParser.getParser(QParser.java:286)
at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:330)
at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:231)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1359)
at org.apache.solr.util.TestHarness.queryAndResponse(TestHarness.java:348)
at org.apache.solr.util.TestHarness.query(TestHarness.java:331)
at org.apache.solr.util.TestHarness.query(TestHarness.java:316)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:343)


REGRESSION:  org.apache.solr.TestGroupingSearch.testGroupingGroupSortingWeight

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 

[jira] Commented: (LUCENE-2618) Intermittent failure in 3.x's backwards TestThreadedOptimize

2010-10-21 Thread Shai Erera (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-2618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923761#action_12923761 ]

Shai Erera commented on LUCENE-2618:


Ok - I agree maybeMerge is probably called less frequently than optimize. And 
perhaps we can look at it that way: when you call optimize, you know exactly 
what to expect - you control the number of final segments. When you call 
maybeMerge, Lucene does not guarantee the final result. Unless you know the 
exact state of all the segments in the index (which, outside of unit tests, is 
very unlikely) and exactly what your MergePolicy is doing, you cannot expect 
any guaranteed outcome from calling maybeMerge, only a best effort.

What bothered me is that even though both maybeMerge and optimize may go 
through several levels of merging following a single call, one is guaranteed 
to complete and the other isn't. But since optimize is more common in apps, 
I'm willing to make that exception. Perhaps we should then add to maybeMerge's 
docs that if you want to guarantee merges finish before close is called, you 
should wait for merges? Or should we add that to close?

I'm fine now with this fix. +1 to commit.
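The contract being discussed can be sketched as follows (a minimal illustration against the Lucene 3.x IndexWriter API; the helper method and its caller are hypothetical, not code from the patch):

```java
// Sketch: maybeMerge() is best-effort, so a caller that needs all merges
// complete before close() waits for them explicitly.
void mergeThenClose(org.apache.lucene.index.IndexWriter writer)
        throws java.io.IOException {
    writer.maybeMerge();    // asks the MergePolicy to select and start merges
    writer.waitForMerges(); // blocks until in-flight background merges finish
    writer.close();         // closes with no merges still running
}
```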

 Intermittent failure in 3.x's backwards TestThreadedOptimize
 

 Key: LUCENE-2618
 URL: https://issues.apache.org/jira/browse/LUCENE-2618
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Reporter: Michael McCandless
 Fix For: 3.1, 4.0

 Attachments: LUCENE-2618.patch


 Failure looks like this:
 {noformat}
 [junit] Testsuite: org.apache.lucene.index.TestThreadedOptimize
 [junit] Testcase: 
 testThreadedOptimize(org.apache.lucene.index.TestThreadedOptimize): FAILED
 [junit] null
 [junit] junit.framework.AssertionFailedError: null
 [junit]   at org.apache.lucene.index.TestThreadedOptimize.runTest(TestThreadedOptimize.java:125)
 [junit]   at org.apache.lucene.index.TestThreadedOptimize.testThreadedOptimize(TestThreadedOptimize.java:149)
 [junit]   at org.apache.lucene.util.LuceneTestCase.runBare(LuceneTestCase.java:253)
 {noformat}
 I just committed some verbosity so next time it strikes we'll have more 
 details.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Created: (SOLR-2186) DataImportHandler multi-threaded option throws exception

2010-10-21 Thread Lance Norskog (JIRA)
DataImportHandler multi-threaded option throws exception


 Key: SOLR-2186
 URL: https://issues.apache.org/jira/browse/SOLR-2186
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Reporter: Lance Norskog


The multi-threaded option for the DataImportHandler throws an exception and the 
entire operation fails. This is true even if only 1 thread is configured via 
*threads='1'*





[jira] Commented: (SOLR-2186) DataImportHandler multi-threaded option throws exception

2010-10-21 Thread Lance Norskog (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923764#action_12923764 ]

Lance Norskog commented on SOLR-2186:
-

This is the stack trace. The operation configures 4 threads and then does a 
full-import:

Oct 21, 2010 10:21:16 PM org.apache.solr.handler.dataimport.DocBuilder doFullDump
INFO: running multithreaded full-import
Oct 21, 2010 10:21:16 PM org.apache.solr.handler.dataimport.ThreadedEntityProcessorWrapper nextRow
INFO: arow : {fileSize=18837, fileLastModified=Wed Nov 21 08:15:23 PST 2007, fileAbsolutePath=/lucid/private_pdfs/10.pdfs/10.1.1.10.1.pdf, fileDir=/lucid/private_pdfs/10.pdfs, file=10.1.1.10.1.pdf}
Oct 21, 2010 10:21:16 PM org.apache.solr.handler.dataimport.ThreadedEntityProcessorWrapper nextRow
INFO: arow : {fileSize=289898, fileLastModified=Wed Nov 21 08:15:25 PST 2007, fileAbsolutePath=/lucid/private_pdfs/10.pdfs/10.1.1.10.10.pdf, fileDir=/lucid/private_pdfs/10.pdfs, file=10.1.1.10.10.pdf}
Oct 21, 2010 10:21:16 PM org.apache.solr.handler.dataimport.ThreadedEntityProcessorWrapper nextRow
INFO: arow : {fileSize=121847, fileLastModified=Wed Nov 21 08:15:43 PST 2007, fileAbsolutePath=/lucid/private_pdfs/10.pdfs/10.1.1.10.100.pdf, fileDir=/lucid/private_pdfs/10.pdfs, file=10.1.1.10.100.pdf}
Oct 21, 2010 10:21:16 PM org.apache.solr.handler.dataimport.ThreadedEntityProcessorWrapper nextRow
INFO: arow : {fileSize=59844, fileLastModified=Wed Nov 21 08:18:49 PST 2007, fileAbsolutePath=/lucid/private_pdfs/10.pdfs/10.1.1.10.1000.pdf, fileDir=/lucid/private_pdfs/10.pdfs, file=10.1.1.10.1000.pdf}
Oct 21, 2010 10:21:16 PM org.apache.solr.handler.dataimport.DocBuilder doFullDump
SEVERE: error in import
java.lang.NullPointerException
 at org.apache.solr.handler.dataimport.ContextImpl.getResolvedEntityAttribute(ContextImpl.java:79)
 at org.apache.solr.handler.dataimport.ThreadedContext.getResolvedEntityAttribute(ThreadedContext.java:78)
 at org.apache.solr.handler.dataimport.TikaEntityProcessor.firstInit(TikaEntityProcessor.java:67)
 at org.apache.solr.handler.dataimport.EntityProcessorBase.init(EntityProcessorBase.java:56)
 at org.apache.solr.handler.dataimport.DocBuilder$EntityRunner.initEntity(DocBuilder.java:507)
 at org.apache.solr.handler.dataimport.DocBuilder$EntityRunner.runAThread(DocBuilder.java:425)
 at org.apache.solr.handler.dataimport.DocBuilder$EntityRunner.run(DocBuilder.java:386)
 at org.apache.solr.handler.dataimport.DocBuilder$EntityRunner.runAThread(DocBuilder.java:453)
 at org.apache.solr.handler.dataimport.DocBuilder$EntityRunner.access$000(DocBuilder.java:340)
 at org.apache.solr.handler.dataimport.DocBuilder$EntityRunner$1.run(DocBuilder.java:393)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
Oct 21, 2010 10:21:16 PM org.apache.solr.handler.dataimport.DocBuilder finish
INFO: Import completed successfully
Oct 21, 2010 10:21:16 PM org.apache.solr.update.DirectUpdateHandler2 commit
INFO: start commit(optimize=true,waitFlush=false,waitSearcher=true,expungeDeletes=false)






[jira] Commented: (SOLR-2186) DataImportHandler multi-threaded option throws exception

2010-10-21 Thread Lance Norskog (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923765#action_12923765 ]

Lance Norskog commented on SOLR-2186:
-

This is the dataConfig.xml. It is very simple: it walks a directory and indexes 
every PDF file it finds.
If you change threads='4' to threads='1', it will still fail. If you remove the 
threads directive, it runs.

<dataConfig>
  <dataSource type="BinFileDataSource"/>
  <document>
    <entity name="jc" dataSource="null"
            pk="id"
            processor="FileListEntityProcessor"
            fileName="^.*\.pdf$" recursive="false"
            baseDir="/lucid/private_pdfs/10.pdfs"
            transformer="TemplateTransformer"
            threads="4">

      <field column="id" template="${jc.fileAbsolutePath}"/>

      <entity name="tika-test" processor="TikaEntityProcessor"
              url="${jc.fileAbsolutePath}"
              parser="org.apache.tika.parser.pdf.PDFParser"
              onError="skip">

        <field column="Author" name="author" meta="true"/>
        <field column="title" name="title" meta="true"/>
        <field column="text" name="text"/>
      </entity>
    </entity>
  </document>
</dataConfig>


