Re: Can't build on Mavericks (different issue)

2014-03-29 Thread Andi Vajda

 On Mar 28, 2014, at 21:37, Mike McCormick mccorm...@runbox.com wrote:
 
 Hi!
 
 It is most deterministic to set the variables controlling which version of 
 everything is used. It is also important to use the same compiler (gcc vs 
 clang) that was used to build your version of python.
 
 Got it.  I’m using the stock compiler with Xcode 5.1 (Clang) and the stock 
 Python 2.7 distribution (which I hope/assume was also compiled with Clang).  
 I might try installing GCC and roll with that.
 
 You still have a mismatch somewhere.
 But I can't see what version of the JDK you're linking against?
 
 Here is the full output, building against JDK 1.7.0 update 51.  The CFLAGS 
 and CPPFLAGS are needed because the Xcode compiler does not recognize the 
 -mno-fused-madd argument and will stop with a hard error unless 
 -Qunused-arguments is specified.

That unused argument is emitted by python and is a sign that it was compiled with 
a different compiler than the one you're using (probably gcc vs clang).
You should install the Apple command line tools (so you don't have Xcode in the 
way) and be prepared to build Python from sources (easy enough).
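
For reference, a quick way to spot that mismatch is to ask the installed Python 
which compiler and flags it was built with and compare that with the cc on your 
PATH (a minimal sketch using only the standard library, nothing JCC-specific):

from __future__ import print_function
import subprocess
import sysconfig

# Compiler and flags recorded when this Python was built
print("built with CC     =", sysconfig.get_config_var("CC"))
print("built with CFLAGS =", sysconfig.get_config_var("CFLAGS"))

# Compiler that will actually be invoked now (first cc on the PATH)
print(subprocess.check_output(["cc", "--version"]))

If the recorded CC is a gcc while cc on the PATH is clang (or the other way 
around), that is exactly the kind of mismatch described above.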

Andi..

 $ export CFLAGS=-Qunused-arguments
 $ export CPPFLAGS=-Qunused-arguments
 $ export LDFLAGS=-v
 $ python setup.py build
 
 found JAVAHOME = 
 /Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home/
 found JAVAFRAMEWORKS = 
 /Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home/include/
 Loading source files for package org.apache.jcc...
 Constructing Javadoc information...
 Standard Doclet version 1.8.0
 Building tree for all the packages and classes...
 Generating javadoc/org/apache/jcc/PythonException.html...
 Generating javadoc/org/apache/jcc/PythonVM.html...
 Generating javadoc/org/apache/jcc/package-frame.html...
 Generating javadoc/org/apache/jcc/package-summary.html...
 Generating javadoc/org/apache/jcc/package-tree.html...
 Generating javadoc/constant-values.html...
 Generating javadoc/serialized-form.html...
 Building index for all the packages and classes...
 Generating javadoc/overview-tree.html...
 Generating javadoc/index-all.html...
 Generating javadoc/deprecated-list.html...
 Building index for all classes...
 Generating javadoc/allclasses-frame.html...
 Generating javadoc/allclasses-noframe.html...
 Generating javadoc/index.html...
 Generating javadoc/help-doc.html...
 running build
 running build_py
 writing /Users/mike/Desktop/modelica/install/jcc/jcc/config.py
 copying jcc/config.py -> build/lib.macosx-10.9-intel-2.7/jcc
 copying jcc/classes/org/apache/jcc/PythonVM.class -> 
 build/lib.macosx-10.9-intel-2.7/jcc/classes/org/apache/jcc
 copying jcc/classes/org/apache/jcc/PythonException.class -> 
 build/lib.macosx-10.9-intel-2.7/jcc/classes/org/apache/jcc
 running build_ext
 building 'jcc' extension
 cc -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -Qunused-arguments 
 -Qunused-arguments -dynamiclib -D_jcc_lib -DJCC_VER=2.19 
 -I/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home//include 
 -I/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home//include/darwin
  -I_jcc -Ijcc/sources 
 -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 
 -c jcc/sources/jcc.cpp -o build/temp.macosx-10.9-intel-2.7/jcc/sources/jcc.o 
 -DPYTHON -fno-strict-aliasing -Wno-write-strings
 cc -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -Qunused-arguments 
 -Qunused-arguments -dynamiclib -D_jcc_lib -DJCC_VER=2.19 
 -I/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home//include 
 -I/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home//include/darwin
  -I_jcc -Ijcc/sources 
 -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 
 -c jcc/sources/JCCEnv.cpp -o 
 build/temp.macosx-10.9-intel-2.7/jcc/sources/JCCEnv.o -DPYTHON 
 -fno-strict-aliasing -Wno-write-strings
 c++ -Wl,-x -dynamiclib -undefined dynamic_lookup -v -Qunused-arguments 
 -Qunused-arguments build/temp.macosx-10.9-intel-2.7/jcc/sources/jcc.o 
 build/temp.macosx-10.9-intel-2.7/jcc/sources/JCCEnv.o -o 
 build/lib.macosx-10.9-intel-2.7/libjcc.dylib 
 -L/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home//jre/lib 
 -ljava 
 -L/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home//jre/lib/server
  -ljvm -Wl,-rpath 
 -Wl,/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home//jre/lib 
 -Wl,-rpath 
 -Wl,/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home//jre/lib/server
  -Wl,-S -install_name @rpath/libjcc.dylib -current_version 2.19 
 -compatibility_version 2.19
 Apple LLVM version 5.1 (clang-503.0.38) (based on LLVM 3.4svn)
 Target: x86_64-apple-darwin13.1.0
 Thread model: posix
 /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld
  -demangle -dynamic -dylib -dylib_compatibility_version 2.19 
 -dylib_current_version 2.19 -arch x86_64 -dylib_install_name 
 @rpath/libjcc.dylib -macosx_version_min 10.9.0 -undefined dynamic_lookup 
 

Re: Can't build on Mavericks (different issue)

2014-03-29 Thread Andi Vajda

 On Mar 28, 2014, at 22:57, Mike McCormick mccorm...@runbox.com wrote:
 
 Andi,
 
 I was able to compile by installing apple-gcc42 from Homebrew and applying 
 the appropriate symlinks to bypass Apple’s compiler.  It’s a stopgap, but it 
 allows me to proceed.

That's also an option. The Apple xcode command line tools (separate install) 
should also work.

Andi..

 
 Mike
 


Re: Can't build on Mavericks (different issue)

2014-03-29 Thread Mike McCormick
Andi,

Actually, it doesn’t—I have the command line tools installed.  At least on 
Mavericks, gcc actually points to llvm-gcc which is a front-end to the same 
compiler!  This appears to be a recent change.

Mike

On Mar 29, 2014, at 3:26 AM, Andi Vajda va...@apache.org wrote:

 
 On Mar 28, 2014, at 22:57, Mike McCormick mccorm...@runbox.com wrote:
 
 Andi,
 
 I was able to compile by installing apple-gcc42 from Homebrew and applying 
 the appropriate symlinks to bypass Apple’s compiler.  It’s a stopgap, but it 
 allows me to proceed.
 
 That's also an option. The Apple xcode command line tools (separate install) 
 should also work.
 
 Andi..
 
 
 Mike
 
 



Re: Can't build on Mavericks (different issue)

2014-03-29 Thread Andi Vajda

 On Mar 29, 2014, at 14:33, Mike McCormick mccorm...@runbox.com wrote:
 
 Andi,
 
 Actually, it doesn’t—I have the command line tools installed.  At least on 
 Mavericks, gcc actually points to llvm-gcc which is a front-end to the same 
 compiler!  This appears to be a recent change.

Mavericks + command line tools + oracle java 7 is the combination I use for 
development and it seems to work fine.

Andi..

 
 Mike
 
 On Mar 29, 2014, at 3:26 AM, Andi Vajda va...@apache.org wrote:
 
 
 On Mar 28, 2014, at 22:57, Mike McCormick mccorm...@runbox.com wrote:
 
 Andi,
 
 I was able to compile by installing apple-gcc42 from Homebrew and applying 
 the appropriate symlinks to bypass Apple’s compiler.  It’s a stopgap, but 
 it allows me to proceed.
 
 That's also an option. The Apple xcode command line tools (separate install) 
 should also work.
 
 Andi..
 
 
 Mike
 


Re: Can't build on Mavericks (different issue)

2014-03-29 Thread Mike McCormick
Sorry I’m so slow to grasp the issue.  It looks like the following file 
contains the flags that were used to build Python, and these same flags are 
reused by Python when it builds extension modules.


/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/_sysconfigdata.py

Interesting!  Sorry, Python newbie here.  I had no idea of the extent of the 
Python build system.

This likely explains a LOT of the other issues I’ve been having with other 
projects, such as “no symbol for x86_64” during linking.
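
For anyone following along, the values in that file are exposed through the 
standard sysconfig module, so you can print the compiler and flags that 
distutils will reuse when it compiles extension modules (a minimal sketch; 
CC, CFLAGS, etc. are just the variable names CPython records at build time):

from __future__ import print_function
import sysconfig

# Build-time values recorded in _sysconfigdata.py; distutils reuses these
# when it compiles extension modules such as JCC.
for name in ("CC", "CFLAGS", "CPPFLAGS", "LDSHARED", "LDFLAGS"):
    print(name, "=", sysconfig.get_config_var(name))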

Mike

On Mar 29, 2014, at 9:36 AM, Andi Vajda va...@apache.org wrote:

 
 On Mar 29, 2014, at 14:33, Mike McCormick mccorm...@runbox.com wrote:
 
 Andi,
 
 Actually, it doesn’t—I have the command line tools installed.  At least on 
 Mavericks, gcc actually points to llvm-gcc which is a front-end to the same 
 compiler!  This appears to be a recent change.
 
 Mavericks + command line tools + oracle java 7 is the combination I use for 
 development and it seems to work fine.
 
 Andi..
 
 
 Mike
 
 On Mar 29, 2014, at 3:26 AM, Andi Vajda va...@apache.org wrote:
 
 
 On Mar 28, 2014, at 22:57, Mike McCormick mccorm...@runbox.com wrote:
 
 Andi,
 
 I was able to compile by installing apple-gcc42 from Homebrew and applying 
 the appropriate symlinks to bypass Apple’s compiler.  It’s a stopgap, but 
 it allows me to proceed.
 
 That's also an option. The Apple xcode command line tools (separate 
 install) should also work.
 
 Andi..
 
 
 Mike
 
 



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_60-ea-b10) - Build # 9936 - Still Failing!

2014-03-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/9936/
Java: 32bit/jdk1.7.0_60-ea-b10 -server -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 51533 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:467: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:406: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:87: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/extra-targets.xml:179: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* 
./lucene/codecs/src/test/org/apache/lucene/codecs/memory/TestFSTOrdPostingsFormat.java
* 
./lucene/codecs/src/test/org/apache/lucene/codecs/memory/TestFSTOrdPulsing41PostingsFormat.java
* 
./lucene/codecs/src/test/org/apache/lucene/codecs/memory/TestFSTPostingsFormat.java
* 
./lucene/codecs/src/test/org/apache/lucene/codecs/memory/TestFSTPulsing41PostingsFormat.java

Total time: 62 minutes 56 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.7.0_60-ea-b10 -server -XX:+UseParallelGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Updated] (SOLR-5859) Harden the Overseer restart mechanism

2014-03-29 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5859:
-

Attachment: SOLR-5859.patch

I guess I have managed to eliminate the complexity from the leader election 
process.

The problem was that cancelling the election was not removing the watcher; it 
was only deleting the corresponding node in ZK.
OCP should check soon after the messages are read, because there is a huge 
wait before OCP reads messages again.

[~markrmil...@gmail.com] your review will be appreciated

 Harden the Overseer restart mechanism
 -

 Key: SOLR-5859
 URL: https://issues.apache.org/jira/browse/SOLR-5859
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5859.patch, SOLR-5859.patch


 SOLR-5476 depends on Overseer restart. The current strategy is to remove the 
 zk node for leader election, wait for STATUS_UPDATE_DELAY + 100 ms, and 
 start the new overseer.
 Though overseer ops are short running, it is not a 100% foolproof strategy 
 because if an operation takes longer than the wait period there can be a race 
 condition.






[jira] [Comment Edited] (SOLR-5859) Harden the Overseer restart mechanism

2014-03-29 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13951784#comment-13951784
 ] 

Noble Paul edited comment on SOLR-5859 at 3/29/14 7:04 AM:
---

I guess I have managed to eliminate the complexity from the leader 
prioritization process.

The problem was that cancelling the election was not removing the watcher; it 
was only deleting the corresponding node in ZK.
OCP should check soon after the messages are read, because there is a huge 
wait before OCP reads messages again.

[~markrmil...@gmail.com] your review will be appreciated


was (Author: noble.paul):
I guess I have managed to eliminate the complexity from the leader election 
process.

The problem was that cancelling the election was not removing the watcher; it 
was only deleting the corresponding node in ZK.
OCP should check soon after the messages are read, because there is a huge 
wait before OCP reads messages again.

[~markrmil...@gmail.com] your review will be appreciated

 Harden the Overseer restart mechanism
 -

 Key: SOLR-5859
 URL: https://issues.apache.org/jira/browse/SOLR-5859
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5859.patch, SOLR-5859.patch


 SOLR-5476 depends on Overseer restart. The current strategy is to remove the 
 zk node for leader election, wait for STATUS_UPDATE_DELAY + 100 ms, and 
 start the new overseer.
 Though overseer ops are short running, it is not a 100% foolproof strategy 
 because if an operation takes longer than the wait period there can be a race 
 condition.






[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.7.0_60-ea-b10) - Build # 9830 - Still Failing!

2014-03-29 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9830/
Java: 32bit/jdk1.7.0_60-ea-b10 -client -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 51534 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:467: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:406: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:87: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:179: The 
following files are missing svn:eol-style (or binary svn:mime-type):
* 
./lucene/codecs/src/test/org/apache/lucene/codecs/memory/TestFSTOrdPostingsFormat.java
* 
./lucene/codecs/src/test/org/apache/lucene/codecs/memory/TestFSTOrdPulsing41PostingsFormat.java
* 
./lucene/codecs/src/test/org/apache/lucene/codecs/memory/TestFSTPostingsFormat.java
* 
./lucene/codecs/src/test/org/apache/lucene/codecs/memory/TestFSTPulsing41PostingsFormat.java

Total time: 67 minutes 2 seconds
Build step 'Invoke Ant' marked build as failure
Description set: Java: 32bit/jdk1.7.0_60-ea-b10 -client -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Updated] (SOLR-5473) Make one state.json per collection

2014-03-29 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5473:


Attachment: SOLR-5473-74.patch

Thanks Anshum.

I've fixed all of your points except for indentation. I'm working on adding 
some tests and I'll take care of it before the next patch.

Next to come:
# Randomize AbstractFullDistribZkTestBase.useExternalCollection so that all 
cloud tests use external collection sometimes.
# Write a test which exercises SolrDispatchFilter logic with missing and 
invalid \_stateVer\_ values.

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node






Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.7.0_60-ea-b10) - Build # 9830 - Still Failing!

2014-03-29 Thread Shalin Shekhar Mangar
I committed a fix.

On Sat, Mar 29, 2014 at 12:57 PM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9830/
 Java: 32bit/jdk1.7.0_60-ea-b10 -client -XX:+UseSerialGC

 All tests passed

 Build Log:
 [...truncated 51534 lines...]
 BUILD FAILED
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:467: The following 
 error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:406: The following 
 error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:87: The 
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:179: The 
 following files are missing svn:eol-style (or binary svn:mime-type):
 * 
 ./lucene/codecs/src/test/org/apache/lucene/codecs/memory/TestFSTOrdPostingsFormat.java
 * 
 ./lucene/codecs/src/test/org/apache/lucene/codecs/memory/TestFSTOrdPulsing41PostingsFormat.java
 * 
 ./lucene/codecs/src/test/org/apache/lucene/codecs/memory/TestFSTPostingsFormat.java
 * 
 ./lucene/codecs/src/test/org/apache/lucene/codecs/memory/TestFSTPulsing41PostingsFormat.java

 Total time: 67 minutes 2 seconds
 Build step 'Invoke Ant' marked build as failure
 Description set: Java: 32bit/jdk1.7.0_60-ea-b10 -client -XX:+UseSerialGC
 Archiving artifacts
 Recording test results
 Email was triggered for: Failure
 Sending email for trigger: Failure




-- 
Regards,
Shalin Shekhar Mangar.




[jira] [Created] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-29 Thread JIRA
Rafał Kuć created SOLR-5935:
---

 Summary: SolrCloud hangs under certain conditions
 Key: SOLR-5935
 URL: https://issues.apache.org/jira/browse/SOLR-5935
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
Reporter: Rafał Kuć
Priority: Critical


As discussed on the mailing list - let's try to find the reason why, under 
certain conditions, SolrCloud can hang.

I have an issue with one of the SolrCloud deployments. Six machines, a 
collection with 6 shards with a replication factor of 3. It all runs on 6 
physical servers, each with 24 cores. We've indexed about 32 million documents 
and everything was fine until that point.

Now, during performance tests, we ran into an issue - SolrCloud hangs
when querying and indexing run at the same time. First we see a
normal load on the machines, then the load starts to drop and a thread
dump shows numerous threads like this:

Thread 12624: (state = BLOCKED)
 - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may 
be imprecise)
 - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
line=186 (Compiled frame)
 - 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() 
@bci=42, line=2043 (Compiled frame)
 - org.apache.http.pool.PoolEntryFuture.await(java.util.Date) @bci=50, line=131 
(Compiled frame)
 - org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(java.lang.Object, 
java.lang.Object, long, java.util.concurrent.TimeUnit, 
org.apache.http.pool.PoolEntryFuture) @bci=431, line=281 (Compiled frame)
 - 
org.apache.http.pool.AbstractConnPool.access$000(org.apache.http.pool.AbstractConnPool,
 java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, 
org.apache.http.pool.PoolEntryFuture) @bci=8, line=62 (Compiled frame)
 - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
java.util.concurrent.TimeUnit) @bci=15, line=176 (Compiled frame)
 - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
java.util.concurrent.TimeUnit) @bci=3, line=169 (Compiled frame)
 - org.apache.http.pool.PoolEntryFuture.get(long, 
java.util.concurrent.TimeUnit) @bci=38, line=100 (Compiled frame)
 - 
org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(java.util.concurrent.Future,
 long, java.util.concurrent.TimeUnit) @bci=4, line=212 (Compiled frame)
 - 
org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(long, 
java.util.concurrent.TimeUnit) @bci=10, line=199 (Compiled frame)
 - 
org.apache.http.impl.client.DefaultRequestDirector.execute(org.apache.http.HttpHost,
 org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=259, 
line=456 (Compiled frame)
 - 
org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.HttpHost,
 org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=344, 
line=906 (Compiled frame)
 - 
org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest,
 org.apache.http.protocol.HttpContext) @bci=21, line=805 (Compiled frame)
 - 
org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest)
 @bci=6, line=784 (Compiled frame)
 - 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest,
 org.apache.solr.client.solrj.ResponseParser) @bci=1175, line=395 (Interpreted 
frame)
 - 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest)
 @bci=17, line=199 (Compiled frame)
 - 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(org.apache.solr.client.solrj.impl.LBHttpSolrServer$Req)
 @bci=132, line=285 (Interpreted frame)
 - 
org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(org.apache.solr.client.solrj.request.QueryRequest,
 java.util.List) @bci=13, line=214 (Compiled frame)
 - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=246, 
line=161 (Compiled frame)
 - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=1, line=118 
(Interpreted frame)
 - java.util.concurrent.FutureTask$Sync.innerRun() @bci=29, line=334 
(Interpreted frame)
 - java.util.concurrent.FutureTask.run() @bci=4, line=166 (Compiled frame)
 - java.util.concurrent.Executors$RunnableAdapter.call() @bci=4, line=471 
(Interpreted frame)
 - java.util.concurrent.FutureTask$Sync.innerRun() @bci=29, line=334 
(Interpreted frame)
 - java.util.concurrent.FutureTask.run() @bci=4, line=166 (Compiled frame)
 - 
java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
 @bci=95, line=1145 (Compiled frame)
 - java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=615 
(Interpreted frame)
 - java.lang.Thread.run() @bci=11, line=724 (Interpreted frame)

I've checked I/O statistics, GC working, memory usage, networking and
all of that - those resources are not exhausted during the test.

[jira] [Updated] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rafał Kuć updated SOLR-5935:


Description: 
As discussed on the mailing list - let's try to find the reason why, under 
certain conditions, SolrCloud can hang.

I have an issue with one of the SolrCloud deployments. Six machines, a 
collection with 6 shards with a replication factor of 3. It all runs on 6 
physical servers, each with 24 cores. We've indexed about 32 million documents 
and everything was fine until that point.

Now, during performance tests, we ran into an issue - SolrCloud hangs
when querying and indexing run at the same time. First we see a
normal load on the machines, then the load starts to drop and a thread
dump shows numerous threads like this:

{noformat}
Thread 12624: (state = BLOCKED)
 - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may 
be imprecise)
 - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
line=186 (Compiled frame)
 - 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() 
@bci=42, line=2043 (Compiled frame)
 - org.apache.http.pool.PoolEntryFuture.await(java.util.Date) @bci=50, line=131 
(Compiled frame)
 - org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(java.lang.Object, 
java.lang.Object, long, java.util.concurrent.TimeUnit, 
org.apache.http.pool.PoolEntryFuture) @bci=431, line=281 (Compiled frame)
 - 
org.apache.http.pool.AbstractConnPool.access$000(org.apache.http.pool.AbstractConnPool,
 java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, 
org.apache.http.pool.PoolEntryFuture) @bci=8, line=62 (Compiled frame)
 - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
java.util.concurrent.TimeUnit) @bci=15, line=176 (Compiled frame)
 - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
java.util.concurrent.TimeUnit) @bci=3, line=169 (Compiled frame)
 - org.apache.http.pool.PoolEntryFuture.get(long, 
java.util.concurrent.TimeUnit) @bci=38, line=100 (Compiled frame)
 - 
org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(java.util.concurrent.Future,
 long, java.util.concurrent.TimeUnit) @bci=4, line=212 (Compiled frame)
 - 
org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(long, 
java.util.concurrent.TimeUnit) @bci=10, line=199 (Compiled frame)
 - 
org.apache.http.impl.client.DefaultRequestDirector.execute(org.apache.http.HttpHost,
 org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=259, 
line=456 (Compiled frame)
 - 
org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.HttpHost,
 org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=344, 
line=906 (Compiled frame)
 - 
org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest,
 org.apache.http.protocol.HttpContext) @bci=21, line=805 (Compiled frame)
 - 
org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest)
 @bci=6, line=784 (Compiled frame)
 - 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest,
 org.apache.solr.client.solrj.ResponseParser) @bci=1175, line=395 (Interpreted 
frame)
 - 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest)
 @bci=17, line=199 (Compiled frame)
 - 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(org.apache.solr.client.solrj.impl.LBHttpSolrServer$Req)
 @bci=132, line=285 (Interpreted frame)
 - 
org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(org.apache.solr.client.solrj.request.QueryRequest,
 java.util.List) @bci=13, line=214 (Compiled frame)
 - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=246, 
line=161 (Compiled frame)
 - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=1, line=118 
(Interpreted frame)
 - java.util.concurrent.FutureTask$Sync.innerRun() @bci=29, line=334 
(Interpreted frame)
 - java.util.concurrent.FutureTask.run() @bci=4, line=166 (Compiled frame)
 - java.util.concurrent.Executors$RunnableAdapter.call() @bci=4, line=471 
(Interpreted frame)
 - java.util.concurrent.FutureTask$Sync.innerRun() @bci=29, line=334 
(Interpreted frame)
 - java.util.concurrent.FutureTask.run() @bci=4, line=166 (Compiled frame)
 - 
java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker)
 @bci=95, line=1145 (Compiled frame)
 - java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=615 
(Interpreted frame)
 - java.lang.Thread.run() @bci=11, line=724 (Interpreted frame)
{noformat}

I've checked I/O statistics, GC working, memory usage, networking and
all of that - those resources are not exhausted during the test.

Hard autocommit is set to 15 seconds with openSearcher=false and
softAutocommit to 4 hours. We have a 

[jira] [Commented] (SOLR-5929) Solrj QueryResponse results not presented in proper score order

2014-03-29 Thread Chris Pilsworth (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13951808#comment-13951808
 ] 

Chris Pilsworth commented on SOLR-5929:
---

Understood.  Thanks for your input and sorry for wasting your time with this 
non-issue
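
For the archives: the suspicion was easy to reproduce outside Solr, since 
sorting the two reported scores as strings instead of numbers flips their 
order as soon as an exponent appears (a quick Python sketch, not Solr's 
actual comparator):

{code}
scores = ["0.0012368863", "6.184431E-4"]

# Lexicographic (string) sort: "6..." sorts above "0...", so the smaller
# score comes out first.
print(sorted(scores, reverse=True))             # ['6.184431E-4', '0.0012368863']

# Numeric sort: the larger score correctly comes first.
print(sorted(scores, key=float, reverse=True))  # ['0.0012368863', '6.184431E-4']
{code}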

 Solrj QueryResponse results not presented in proper score order
 ---

 Key: SOLR-5929
 URL: https://issues.apache.org/jira/browse/SOLR-5929
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.6.1
 Environment: Windows 7, Java 7
Reporter: Chris Pilsworth

 It would appear that the results collection is sorting on the score as a 
 string where there is an exponent.
 When searching for a term that returns two documents, one with a 
 significantly smaller score than the other, the results are returned 
 *correctly* from Solr directly.
 {code:json}
 {
 responseHeader: {
 status: 0,
 QTime: 69,
 params: {
 q: sausages,
 indent: true,
 fl: id, inv_text_summary, score,
 wt: json,
 debugQuery: true
 }
 },
 response: {
 numFound: 2,
 start: 0,
 maxScore: 0.0012368863,
 docs: [
 {
 inv_text_summary: Contrary to popular belief, Lorem sausages 
 sausages sausages sausagesIpsum is not simply random text. It has roots in a 
 piece of classical Latin literature from 45 BC, making it over 2000 years 
 old. Richard McClintock, a Latin professor at Hampden-Sydney College in 
 Virginia, looked up one of the ...,
 id: 
 /content/site-qa/capital/en_gb/home/test_pages/test_page_2/multiterm2,
 score: 0.0012368863
 },
 {
 inv_text_summary: Contrary to sausages belief, Lorem Ipsum 
 is not simply random text. It has roots in a piece of classical Latin 
 literature from 45 BC, making it over 2000 years old. Richard McClintock, a 
 Latin professor at Hampden-Sydney College in Virginia, looked up one of the 
 more obscure Latin words, consecte...,
 id: 
 /content/site-qa/capital/en_gb/home/test_pages/test_page_2/multiterm1,
 score: 0.0006184431
 }
 ]
 },
 debug: {
 rawquerystring: sausages,
 querystring: sausages,
 parsedquery: (+DisjunctionMaxQuery((inv_path:sausages | 
 inv_h1:sausages^8.0 | inv_text_summary:sausages^2.0 | inv_title:sausages^18.0 
 | inv_h2:sausages^6.0 | inv_h3:sausages^4.0 | inv_text:sausages)~1.0) () () 
 () () () () ())/no_coord,
 parsedquery_toString: +(inv_path:sausages | inv_h1:sausages^8.0 | 
 inv_text_summary:sausages^2.0 | inv_title:sausages^18.0 | inv_h2:sausages^6.0 
 | inv_h3:sausages^4.0 | inv_text:sausages)~1.0 () () () () () () (),
 explain: {
 
 /content/site-qa/capital/en_gb/home/test_pages/test_page_2/multiterm2:  
 0.0012368863 = (MATCH) sum of: 0.0012368863 = (MATCH) max plus 1.0 times 
 others of: 0.0012368863 = (MATCH) weight(inv_text:sausages in 0) 
 [DefaultSimilarity], result of: 0.0012368863 = score(doc=0,freq=4.0 = 
 termFreq=4.0 ), product of: 0.016643414 = queryWeight, product of: 0.5945349 
 = idf(docFreq=2, maxDocs=2) 0.027994009 = queryNorm 0.07431686 = fieldWeight 
 in 0, product of: 2.0 = tf(freq=4.0), with freq of: 4.0 = termFreq=4.0 
 0.5945349 = idf(docFreq=2, maxDocs=2) 0.0625 = fieldNorm(doc=0) ,
 
 /content/site-qa/capital/en_gb/home/test_pages/test_page_2/multiterm1:  
 6.184431E-4 = (MATCH) sum of: 6.184431E-4 = (MATCH) max plus 1.0 times others 
 of: 6.184431E-4 = (MATCH) weight(inv_text:sausages in 0) [DefaultSimilarity], 
 result of: 6.184431E-4 = score(doc=0,freq=1.0 = termFreq=1.0 ), product of: 
 0.016643414 = queryWeight, product of: 0.5945349 = idf(docFreq=2, maxDocs=2) 
 0.027994009 = queryNorm 0.03715843 = fieldWeight in 0, product of: 1.0 = 
 tf(freq=1.0), with freq of: 1.0 = termFreq=1.0 0.5945349 = idf(docFreq=2, 
 maxDocs=2) 0.0625 = fieldNorm(doc=0) 
 },
 QParser: ExtendedDismaxQParser,
 altquerystring: null,
 boost_queries: null,
 parsed_boost_queries: [
 
 ],
 boostfuncs: null,
 timing: {
 time: 69,
 prepare: {
 time: 14,
 query: {
 time: 14
 },
 facet: {
 time: 0
 },
 mlt: {
 time: 0
 },
 highlight: {
 time: 0
 },
 stats: {
 time: 0
 },
 debug: {
 time: 0
 }
 },
  

[jira] [Assigned] (SOLR-5931) solrcore.properties is not reloaded when core is reloaded

2014-03-29 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-5931:
---

Assignee: Shalin Shekhar Mangar

 solrcore.properties is not reloaded when core is reloaded
 -

 Key: SOLR-5931
 URL: https://issues.apache.org/jira/browse/SOLR-5931
 Project: Solr
  Issue Type: Bug
  Components: multicore
Affects Versions: 4.7
Reporter: Gunnlaugur Thor Briem
Assignee: Shalin Shekhar Mangar
Priority: Minor

 When I change solrcore.properties for a core, and then reload the core, the 
 previous values of the properties in that file are still in effect. If I 
 *unload* the core and then add it back, in the “Core Admin” section of the 
 admin UI, then the changes in solrcore.properties do take effect.
 My specific test case is a DataImportHandler where {{db-data-config.xml}} 
 uses a property to decide which DB host to talk to:
 {code:xml}
 <dataSource driver="org.postgresql.Driver" name="meta" 
 url="jdbc:postgresql://${dbhost}/${solr.core.name}" .../>
 {code}
 When I change that {{dbhost}} property in {{solrcore.properties}} and reload 
 the core, the next dataimport operation still connects to the previous DB 
 host. Reloading the dataimport config does not help. I have to unload the 
 core (or fully restart the whole Solr) for the properties change to take 
 effect.






[VOTE] Lucene / Solr 4.7.1 RC2

2014-03-29 Thread Steve Rowe
Please vote for the second Release Candidate for Lucene/Solr 4.7.1.

Download it here:
https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/

Smoke tester cmdline (from the lucene_solr_4_7 branch):

python3.2 -u dev-tools/scripts/smokeTestRelease.py \
https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/
 \
1582953 4.7.1 /tmp/4.7.1-smoke

The smoke tester passed for me: SUCCESS! [0:50:29.936732]

My vote: +1

Steve



[jira] [Updated] (SOLR-5473) Make one state.json per collection

2014-03-29 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5473:


Attachment: SOLR-5473-74.patch

The previous patch missed reverting baseUrl to nodeName in 
ZkController.getShardId.

I'm still seeing some test failures. For example in OverseerStatusTest, it 
looks like the OverseerCollectionProcessor is waiting for a new collection to 
be created but Overseer is waiting for new items in the queue?

{quote}
Overseer-91493854598660099-127.0.0.1:52655_o_%2Fcs-n_00 daemon 
prio=10 tid=0x7fb7a4299000 nid=0x32be waiting on condition 
[0x7fb810168000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.createCollection(OverseerCollectionProcessor.java:1999)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:449)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.run(OverseerCollectionProcessor.java:242)
at java.lang.Thread.run(Thread.java:744)

Thread-12 daemon prio=10 tid=0x7fb7a4297800 nid=0x32bd in Object.wait() 
[0x7fb810269000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on 0x0007783049c8 (a java.lang.Object)
at 
org.apache.solr.cloud.DistributedQueue$LatchChildWatcher.await(DistributedQueue.java:238)
- locked 0x0007783049c8 (a java.lang.Object)
at 
org.apache.solr.cloud.DistributedQueue.peek(DistributedQueue.java:464)
at 
org.apache.solr.cloud.DistributedQueue.peek(DistributedQueue.java:428)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.run(Overseer.java:217)
at java.lang.Thread.run(Thread.java:744)
{quote}

Digging into it now.

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node






Re: [VOTE] Lucene / Solr 4.7.1 RC2

2014-03-29 Thread Ahmet Arslan
+1
SUCCESS! [1:28:08.851424]


Ahmet


On Saturday, March 29, 2014 10:46 AM, Steve Rowe sar...@gmail.com wrote:
Please vote for the second Release Candidate for Lucene/Solr 4.7.1.

Download it here:
https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/

Smoke tester cmdline (from the lucene_solr_4_7 branch):

python3.2 -u dev-tools/scripts/smokeTestRelease.py \
https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/
 \
1582953 4.7.1 /tmp/4.7.1-smoke

The smoke tester passed for me: SUCCESS! [0:50:29.936732]

My vote: +1

Steve



Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.7.0_60-ea-b10) - Build # 9830 - Still Failing!

2014-03-29 Thread Robert Muir
thank you!

On Sat, Mar 29, 2014 at 12:52 AM, Shalin Shekhar Mangar
sha...@apache.org wrote:
 I committed a fix.

 On Sat, Mar 29, 2014 at 12:57 PM, Policeman Jenkins Server
 jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9830/
 Java: 32bit/jdk1.7.0_60-ea-b10 -client -XX:+UseSerialGC

 All tests passed

 Build Log:
 [...truncated 51534 lines...]
 BUILD FAILED
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:467: The 
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/build.xml:406: The 
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:87: The 
 following error occurred while executing this line:
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/extra-targets.xml:179: The 
 following files are missing svn:eol-style (or binary svn:mime-type):
 * 
 ./lucene/codecs/src/test/org/apache/lucene/codecs/memory/TestFSTOrdPostingsFormat.java
 * 
 ./lucene/codecs/src/test/org/apache/lucene/codecs/memory/TestFSTOrdPulsing41PostingsFormat.java
 * 
 ./lucene/codecs/src/test/org/apache/lucene/codecs/memory/TestFSTPostingsFormat.java
 * 
 ./lucene/codecs/src/test/org/apache/lucene/codecs/memory/TestFSTPulsing41PostingsFormat.java

 Total time: 67 minutes 2 seconds
 Build step 'Invoke Ant' marked build as failure
 Description set: Java: 32bit/jdk1.7.0_60-ea-b10 -client -XX:+UseSerialGC
 Archiving artifacts
 Recording test results
 Email was triggered for: Failure
 Sending email for trigger: Failure




 --
 Regards,
 Shalin Shekhar Mangar.




Compilation Error at Trunk

2014-03-29 Thread Furkan KAMACI
Hi;

Trunk fails to compile. Here is a similar conversation:
http://mail-archives.apache.org/mod_mbox/stanbol-dev/201304.mbox/%3CCAA7LAO2kxNZ=xfq-ejma5d9zyqvok2mays6vewizhrgsjtu...@mail.gmail.com%3E
I think the same kind of problem has occurred again.

Thanks;
Furkan KAMACI


Re: Compilation Error at Trunk

2014-03-29 Thread Furkan KAMACI
Never mind, this mail was sent to the wrong mailing list.


2014-03-29 16:58 GMT+02:00 Furkan KAMACI furkankam...@gmail.com:

 Hi;

 Trunk fails to compile. Here is a similar conversation:
 http://mail-archives.apache.org/mod_mbox/stanbol-dev/201304.mbox/%3CCAA7LAO2kxNZ=xfq-ejma5d9zyqvok2mays6vewizhrgsjtu...@mail.gmail.com%3E
 I think the same kind of problem has occurred again.

 Thanks;
 Furkan KAMACI



[jira] [Commented] (SOLR-5920) Distributed sort on DateField, BoolField and BCD{Int,Long,Str}Field returns string cast exception

2014-03-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954276#comment-13954276
 ] 

Erick Erickson commented on SOLR-5920:
--

Can this be marked fixed?

 Distributed sort on DateField, BoolField and BCD{Int,Long,Str}Field returns 
 string cast exception
 -

 Key: SOLR-5920
 URL: https://issues.apache.org/jira/browse/SOLR-5920
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.7
 Environment: Debian, Java JVM 1.6.0_26
Reporter: Eric Bus
Assignee: Steve Rowe
  Labels: datefield, exception, sort
 Fix For: 4.8, 5.0, 4.7.1

 Attachments: SOLR-5920.patch, SOLR-5920.patch


 After upgrading to 4.7, sorting on a date field returns the following trace:
 {quote}
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">500</int><int name="QTime">7</int></lst>
 <lst name="error"><str name="msg">java.lang.String cannot be cast to org.apache.lucene.util.BytesRef</str>
 <str name="trace">java.lang.ClassCastException: java.lang.String cannot be cast to 
 org.apache.lucene.util.BytesRef
   at 
 org.apache.lucene.search.FieldComparator$TermOrdValComparator.compareValues(FieldComparator.java:940)
   at 
 org.apache.solr.handler.component.ShardFieldSortedHitQueue$2.compare(ShardDoc.java:245)
   at 
 org.apache.solr.handler.component.ShardFieldSortedHitQueue$2.compare(ShardDoc.java:237)
   at 
 org.apache.solr.handler.component.ShardFieldSortedHitQueue.lessThan(ShardDoc.java:162)
   at 
 org.apache.solr.handler.component.ShardFieldSortedHitQueue.lessThan(ShardDoc.java:104)
   at 
 org.apache.lucene.util.PriorityQueue.insertWithOverflow(PriorityQueue.java:159)
   at 
 org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:909)
   at 
 org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:661)
   at 
 org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:640)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:321)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
   at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
   at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
   at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
   at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
   at org.eclipse.jetty.server.Server.handle(Server.java:368)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
   at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
   at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
   at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
   at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
 

[jira] [Resolved] (SOLR-5920) Distributed sort on DateField, BoolField and BCD{Int,Long,Str}Field returns string cast exception

2014-03-29 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-5920.
--

Resolution: Fixed

bq. Can this be marked fixed?

Yes, I committed to trunk, branch_4x and lucene_solr_4_7.

 Distributed sort on DateField, BoolField and BCD{Int,Long,Str}Field returns 
 string cast exception
 -

 Key: SOLR-5920
 URL: https://issues.apache.org/jira/browse/SOLR-5920
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.7
 Environment: Debian, Java JVM 1.6.0_26
Reporter: Eric Bus
Assignee: Steve Rowe
  Labels: datefield, exception, sort
 Fix For: 4.8, 5.0, 4.7.1

 Attachments: SOLR-5920.patch, SOLR-5920.patch


 After upgrading to 4.7, sorting on a date field returns the following trace:
 {quote}
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">500</int><int name="QTime">7</int></lst>
 <lst name="error"><str name="msg">java.lang.String cannot be cast to org.apache.lucene.util.BytesRef</str>
 <str name="trace">java.lang.ClassCastException: java.lang.String cannot be cast to 
 org.apache.lucene.util.BytesRef
   at 
 org.apache.lucene.search.FieldComparator$TermOrdValComparator.compareValues(FieldComparator.java:940)
   at 
 org.apache.solr.handler.component.ShardFieldSortedHitQueue$2.compare(ShardDoc.java:245)
   at 
 org.apache.solr.handler.component.ShardFieldSortedHitQueue$2.compare(ShardDoc.java:237)
   at 
 org.apache.solr.handler.component.ShardFieldSortedHitQueue.lessThan(ShardDoc.java:162)
   at 
 org.apache.solr.handler.component.ShardFieldSortedHitQueue.lessThan(ShardDoc.java:104)
   at 
 org.apache.lucene.util.PriorityQueue.insertWithOverflow(PriorityQueue.java:159)
   at 
 org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:909)
   at 
 org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:661)
   at 
 org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:640)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:321)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
   at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
   at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
   at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
   at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
   at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
   at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
   at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
   at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
   at org.eclipse.jetty.server.Server.handle(Server.java:368)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
   at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
   at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
   at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
   at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
   at 
 

RE: [VOTE] Lucene / Solr 4.7.1 RC2

2014-03-29 Thread Uwe Schindler
SUCCESS! [1:45:28.291215]

+1 to release!

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Steve Rowe [mailto:sar...@gmail.com]
 Sent: Saturday, March 29, 2014 9:46 AM
 To: lucene dev
 Subject: [VOTE] Lucene / Solr 4.7.1 RC2
 
 Please vote for the second Release Candidate for Lucene/Solr 4.7.1.
 
 Download it here:
 https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-
 rev1582953/
 
 Smoke tester cmdline (from the lucene_solr_4_7 branch):
 
 python3.2 -u dev-tools/scripts/smokeTestRelease.py \
 https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-
 rev1582953/ \
 1582953 4.7.1 /tmp/4.7.1-smoke
 
 The smoke tester passed for me: SUCCESS! [0:50:29.936732]
 
 My vote: +1
 
 Steve



Re: [VOTE] Lucene / Solr 4.7.1 RC2

2014-03-29 Thread Michael McCandless
+1

SUCCESS! [0:43:25.792795]

Mike McCandless

http://blog.mikemccandless.com


On Sat, Mar 29, 2014 at 4:46 AM, Steve Rowe sar...@gmail.com wrote:
 Please vote for the second Release Candidate for Lucene/Solr 4.7.1.

 Download it here:
 https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/

 Smoke tester cmdline (from the lucene_solr_4_7 branch):

 python3.2 -u dev-tools/scripts/smokeTestRelease.py \
 https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/
  \
 1582953 4.7.1 /tmp/4.7.1-smoke

 The smoke tester passed for me: SUCCESS! [0:50:29.936732]

 My vote: +1

 Steve



[jira] [Created] (SOLR-5936) Deprecate non-Trie-based numeric (and date) field types in 4.x and remove them from 5.0

2014-03-29 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-5936:


 Summary: Deprecate non-Trie-based numeric (and date) field types 
in 4.x and remove them from 5.0
 Key: SOLR-5936
 URL: https://issues.apache.org/jira/browse/SOLR-5936
 Project: Solr
  Issue Type: Task
  Components: Schema and Analysis
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor
 Fix For: 4.8, 5.0


We've been discouraging people from using non-Trie numeric/date field types for 
years; it's time we made it official.






[jira] [Updated] (SOLR-5936) Deprecate non-Trie-based numeric (and date) field types in 4.x and remove them from 5.0

2014-03-29 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-5936:
-

Attachment: SOLR-5936.branch_4x.patch

4.x patch with deprecations.

I'll work up a separate trunk patch.

 Deprecate non-Trie-based numeric (and date) field types in 4.x and remove 
 them from 5.0
 ---

 Key: SOLR-5936
 URL: https://issues.apache.org/jira/browse/SOLR-5936
 Project: Solr
  Issue Type: Task
  Components: Schema and Analysis
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: SOLR-5936.branch_4x.patch


 We've been discouraging people from using non-Trie numeric/date field types 
 for years; it's time we made it official.






[jira] [Updated] (SOLR-5936) Deprecate non-Trie-based numeric (and date) field types in 4.x and remove them from 5.0

2014-03-29 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-5936:
-

Attachment: SOLR-5936.branch_4x.patch

Here's an updated 4.x patch that removes these field types from the schemaless 
example (users almost certainly won't have existing indexes with the legacy 
numeric/date types), and adds comments to the main example schema about these 
types being deprecated and removed in 5.0.

The DIH example schemas are way out of date - they don't include any of the 
trie fields - I'll make a separate issue to clean them up.

 Deprecate non-Trie-based numeric (and date) field types in 4.x and remove 
 them from 5.0
 ---

 Key: SOLR-5936
 URL: https://issues.apache.org/jira/browse/SOLR-5936
 Project: Solr
  Issue Type: Task
  Components: Schema and Analysis
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: SOLR-5936.branch_4x.patch, SOLR-5936.branch_4x.patch


 We've been discouraging people from using non-Trie numeric/date field types 
 for years; it's time we made it official.






[jira] [Created] (SOLR-5937) Modernize the DIH example schemas

2014-03-29 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-5937:


 Summary: Modernize the DIH example schemas
 Key: SOLR-5937
 URL: https://issues.apache.org/jira/browse/SOLR-5937
 Project: Solr
  Issue Type: Sub-task
  Components: Schema and Analysis
Reporter: Steve Rowe
Priority: Minor


The DIH example schemas should be modified to include trie numeric/date fields 
and to add comments noting that the non-trie numeric/date fields are deprecated 
and will be removed in 5.0.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5168) ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC

2014-03-29 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954432#comment-13954432
 ] 

Dawid Weiss commented on LUCENE-5168:
-

Just a quick update -- the problem is still there, just checked with:

1) the most recent official 1.7:

Java(TM) SE Runtime Environment (build 1.7.0_51-b13); 
Java HotSpot(TM) Server VM (build 24.51-b03, mixed mode)

2) the most recent official 1.8:

Java(TM) SE Runtime Environment (build 1.8.0-b132)
Java HotSpot(TM) Server VM (build 25.0-b70, mixed mode)

and they both fail on the impossible assertion.
{code}
  [junit4] Throwable #1: java.lang.AssertionError
   [junit4]at 
__randomizedtesting.SeedInfo.seed([CF:9F36A99E987A1F00]:0)
   [junit4]at 
org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:457)
   [junit4]at 
org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
   [junit4]at 
org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
   [junit4]at 
org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
...
{code}

This reproduces for me solidly on a historic version of branch 4.x:
{code}
URL: 
https://svn.apache.org/repos/asf/lucene/dev/branches/branch_4x/lucene/highlighter
Revision: 1512807
{code}

But since hotspot is still broken somewhere I assume the same bug may result in 
other odd surprises, even if Mike spaghettified (yes, my own invention but I 
just verified and it's actually a true word 
http://en.wikipedia.org/wiki/Spaghettification) the code a bit to dodge the 
problem.


 ByteSliceReader assert trips with 32-bit oracle 1.7.0_25 + G1GC
 ---

 Key: LUCENE-5168
 URL: https://issues.apache.org/jira/browse/LUCENE-5168
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: java8-windows-4x-3075-console.txt, log.0025, log.0042, 
 log.0078, log.0086, log.0100


 This assertion trips (sometimes from different tests), if you run the 
 highlighting tests on branch_4x with r1512807.
 It reproduces about half the time, always only with 32-bit + G1GC (other 
 combinations do not seem to trip it; I didn't try looping or anything really, 
 though).
 {noformat}
 rmuir@beast:~/workspace/branch_4x$ svn up -r 1512807
 rmuir@beast:~/workspace/branch_4x$ ant clean
 rmuir@beast:~/workspace/branch_4x$ rm -rf .caches #this is important,
 otherwise master seed does not work!
 rmuir@beast:~/workspace/branch_4x/lucene/highlighter$ ant test
 -Dtests.jvms=2 -Dtests.seed=EBBFA6F4E80A7365 -Dargs=-server
 -XX:+UseG1GC
 {noformat}
 Originally showed up like this:
 {noformat}
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/6874/
 Java: 32bit/jdk1.7.0_25 -server -XX:+UseG1GC
 1 tests failed.
 REGRESSION:  
 org.apache.lucene.search.postingshighlight.TestPostingsHighlighter.testUserFailedToIndexOffsets
 Error Message:
 Stack Trace:
 java.lang.AssertionError
 at 
 __randomizedtesting.SeedInfo.seed([EBBFA6F4E80A7365:1FBF811885F2D611]:0)
 at 
 org.apache.lucene.index.ByteSliceReader.readByte(ByteSliceReader.java:73)
 at org.apache.lucene.store.DataInput.readVInt(DataInput.java:108)
 at 
 org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:453)
 at 
 org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
 at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
 at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
 at 
 org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
 at 
 org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:501)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5936) Deprecate non-Trie-based numeric (and date) field types in 4.x and remove them from 5.0

2014-03-29 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954438#comment-13954438
 ] 

Jack Krupansky commented on SOLR-5936:
--

As part of this cleanup, could somebody volunteer to create a plain-English 
summary of exactly what a trie field really is, what good it is, and why we 
can't live without them? I've read the code and, okay, there is a sequence of 
bit shifts and generation of extra terms, but in plain English, what's the 
point?

I'm not asking for a recitation of the actual algorithm(s), but some 
intuitively accessible summary. I would note that the typical examples are for 
strings with prefixes rather than binary numbers.

See:
http://en.wikipedia.org/wiki/Trie

And, is trie really the best solution for number types? Does it actually have 
real value for float and double values?

And I would really like to see some plain, easily readable explanation of 
precision step. Again, especially for real numbers.

And how should precision step be used for dates?

I mean, other than assuring sort order, why bother with trie? Or more 
specifically, why does a Solr (or Lucene) user need to know that trie is used 
for the implementation?

Specifically, for example, does it matter if a field has an evenly distributed 
range of numeric values with little repetition vs. numeric codes where there is 
a relatively small number of distinct values (e.g., 1-10, or scores of 0-100 or 
dates in years between 1970 and 2014) and relatively high cardinality? I mean, 
does trie do a uniformly great job for both of these extreme use cases, 
including for faceting?

And if trie really is the best approach for numeric fields, why not just do all 
of this under the hood instead of polluting the field type names with trie? 
IOW, rename TrieIntField to IntField, etc.

To me, trie just seems like unnecessary noise to average users.


 Deprecate non-Trie-based numeric (and date) field types in 4.x and remove 
 them from 5.0
 ---

 Key: SOLR-5936
 URL: https://issues.apache.org/jira/browse/SOLR-5936
 Project: Solr
  Issue Type: Task
  Components: Schema and Analysis
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: SOLR-5936.branch_4x.patch, SOLR-5936.branch_4x.patch


 We've been discouraging people from using non-Trie numeric/date field types 
 for years; it's time we made it official.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5914) Almost all Solr tests no longer cleanup their temp dirs on Windows

2014-03-29 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954447#comment-13954447
 ] 

Dawid Weiss commented on SOLR-5914:
---

I looked at the current state of the branch, Mark, and I strongly disagree with 
having this in TestUtil:
{code}
  public static File createTempDir(String name, File tmpDir, boolean 
ensureCleanedUp)
{code}

It's creating a gateway for bad code to proliferate. A temporary folder WILL be 
removed after a suite is done; there should be no way to opt out of this check. 
I understand it's sometimes useful to keep temp files, and that should be 
possible, but NOT AT THE CODE LEVEL (it should be the result of an explicit 
action by the developer).

My suggestion is to change SolrTestCaseJ4: instead of putting the logic to keep 
temporary folders in there, I'd rather:

- declare a global sys property that would prevent removing temporary folders 
(placed in TestUtil; but we can keep solr.test.leavetmpdir as an alias). In 
fact there already is a similar property for the runner itself:
{code}
# Don't remove temporary files under slave directories, even if
# the test passes.
ant -Dtests.leaveTemporary=true
{code}
so we could just adopt it here too.

- make an annotation called DirtyHarry (suggestions welcome) to mark suites 
which are known offenders of the default behavior. Classes annotated with 
DirtyHarry wouldn't fail if they leave undeletable garbage behind (but would 
print a warning); a rough sketch follows below.
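
A minimal sketch of that annotation idea, assuming a hypothetical DirtyHarry 
marker and an afterSuite hook in the temp-dir cleanup rule (names and placement 
are illustrative only, not committed code):
{code}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/** Hypothetical marker: suites known to leave undeletable temp files behind. */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface DirtyHarry {}

class TempDirCleanupPolicy {
  /** Annotated suites only warn on leftover files; everything else fails the suite. */
  static void afterSuite(Class<?> suiteClass, boolean cleanupSucceeded) {
    if (cleanupSucceeded) {
      return;
    }
    if (suiteClass.isAnnotationPresent(DirtyHarry.class)) {
      System.err.println("WARN: " + suiteClass.getName() + " left temp files behind");
    } else {
      throw new AssertionError(suiteClass.getName() + " left temp files behind");
    }
  }
}
{code}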

I also don't quite understand why you insist on things like this:
{code}
 rootTmpDir = TestUtil.createTempDir("solrtest-" + cname, null, ensureClosed);
 initCoreDataDir = TestUtil.createTempDir("solrtest-" + cname, rootTmpDir,
     ensureClosed);
{code}

Why the 'solrtest-*' prefix? Isn't the class name enough? And why create a 
temporary folder under a temporary folder? We know rootTmpDir will always be 
empty: even on a run where previous runs left their temporary folders behind, a 
new, empty, uniquely suffixed folder will be created.

As for this:
{code}
  // TODO: tmp files should already get cleaned up by the test framework, but
  // we still do it here as well, so that we clean up as much as we can, even
  // when a test has the SuppressTempDirCleanUp annotation
{code}
it isn't true. The rule that cleans up temporary files is fired almost at the 
very end of processing, after all the after-class rules have already been 
executed. The reason for this is that you don't want to clean up any temporary 
files for failed tests/suites, and you only know that after everything else has 
been executed.

Let me know what you think; I'll make the above changes and get them ready for 
your review.

 Almost all Solr tests no longer cleanup their temp dirs on Windows
 --

 Key: SOLR-5914
 URL: https://issues.apache.org/jira/browse/SOLR-5914
 Project: Solr
  Issue Type: Bug
  Components: Tests
Affects Versions: 4.8
Reporter: Uwe Schindler
Assignee: Dawid Weiss
Priority: Critical
 Fix For: 4.8

 Attachments: SOLR-5914 .patch, SOLR-5914 .patch, 
 branch4x-jenkins.png, build-plugin.jpg, trunk-jenkins.png


 Recently the Windows Jenkins build server has the problem of constantly 
 running out of disk space. This machine runs 2 workspaces (4.x and trunk) and 
 initially has 8 Gigabytes of free SSD disk space.
 Because of the recent, constantly failing tests, the test framework does 
 not forcefully clean up the J0 working folders after running tests. This 
 leads to the workspace being filled with tons of Solr home dirs. 
 I tried this on my local machine:
 - run ant test
 - go to build/.../test/J0 and watch folders appearing: almost every test no 
 longer cleans up after shutting down, leaving millions of files there. This 
 is approx 3 to 4 Gigabytes!!!
 In Lucene the folders are correctly removed. This has happened recently, so I 
 think we have some code like ([~erickerickson] !!!):
 {{new Properties().load(new FileInputStream(...))}} that does not close the 
 files. Because of this, the test's afterClass cannot clean up the folders 
 anymore. If you look in the test log, you see messages like {{ WARNING: 
 best effort to remove 
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\.\org.apache.solr.cloud.TestShortCircuitedRequests-1395693845226
  FAILED !}} all the time.
 So if anybody committed changes that might not close files correctly, 
 please fix! Otherwise I will have to disable testing on Windows - and I will 
 no longer run Solr tests either: my local computer also uses gigabytes of temp 
 space after running tests!



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-5914) Almost all Solr tests no longer cleanup their temp dirs on Windows

2014-03-29 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954456#comment-13954456
 ] 

Dawid Weiss commented on SOLR-5914:
---

Just looking at the diff file:
{code}
+solrHomeDirectory = createTempDir();
+FileUtils.deleteDirectory(solrHomeDirectory); // Ensure that a failed test 
didn't leave something lying around. 
{code}
This is not possible: a new temporary folder will always be empty (it is 
uniquely suffixed if a previous folder with the same prefix exists), so there 
is no need to clean it.

You reverted to timestamp-based folder creation. I don't see the point of this. 
Why bother (again, the parent will be empty anyway)?
{code}
-  File home = TestUtil.createTempDir(getClass().getName());
+  File home = new File(dataDir, 
+   getClass().getName() + "-" + 
+   System.currentTimeMillis()); 
{code}

Same here:
{code}
+File indexDir = createTempDir();
+if (indexDir.exists())  {
+  FileUtils.deleteDirectory(indexDir);
+}
+indexDir.mkdirs(); 
{code}
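
To make the point concrete, here is a self-contained sketch using 
java.nio.file.Files.createTempDirectory as a stand-in for the test framework's 
createTempDir: each call already yields a unique, empty directory, so the 
deleteDirectory/mkdirs dance above is redundant.
{code}
import java.io.File;
import java.nio.file.Files;

class TempDirSketch {
  // Stand-in for TestUtil.createTempDir(): every call returns a freshly
  // created, uniquely named, empty directory.
  static File createTempDir(String prefix) throws Exception {
    return Files.createTempDirectory(prefix).toFile();
  }

  public static void main(String[] args) throws Exception {
    // No FileUtils.deleteDirectory(...) and no mkdirs() needed afterwards.
    File solrHomeDirectory = createTempDir("solrtest");
    File indexDir = createTempDir("solrtest");
    System.out.println(solrHomeDirectory + " and " + indexDir + " start out empty.");
  }
}
{code}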


 Almost all Solr tests no longer cleanup their temp dirs on Windows
 --

 Key: SOLR-5914
 URL: https://issues.apache.org/jira/browse/SOLR-5914
 Project: Solr
  Issue Type: Bug
  Components: Tests
Affects Versions: 4.8
Reporter: Uwe Schindler
Assignee: Dawid Weiss
Priority: Critical
 Fix For: 4.8

 Attachments: SOLR-5914 .patch, SOLR-5914 .patch, 
 branch4x-jenkins.png, build-plugin.jpg, trunk-jenkins.png


 Recently the Windows Jenkins build server has the problem of constantly 
 running out of disk space. This machine runs 2 workspaces (4.x and trunk) and 
 initially has 8 Gigabytes of free SSD disk space.
 Because of the recent, constantly failing tests, the test framework does 
 not forcefully clean up the J0 working folders after running tests. This 
 leads to the workspace being filled with tons of Solr home dirs. 
 I tried this on my local machine:
 - run ant test
 - go to build/.../test/J0 and watch folders appearing: almost every test no 
 longer cleans up after shutting down, leaving millions of files there. This 
 is approx 3 to 4 Gigabytes!!!
 In Lucene the folders are correctly removed. This has happened recently, so I 
 think we have some code like ([~erickerickson] !!!):
 {{new Properties().load(new FileInputStream(...))}} that does not close the 
 files. Because of this, the test's afterClass cannot clean up the folders 
 anymore. If you look in the test log, you see messages like {{ WARNING: 
 best effort to remove 
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\.\org.apache.solr.cloud.TestShortCircuitedRequests-1395693845226
  FAILED !}} all the time.
 So if anybody committed changes that might not close files correctly, 
 please fix! Otherwise I will have to disable testing on Windows - and I will 
 no longer run Solr tests either: my local computer also uses gigabytes of temp 
 space after running tests!



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5936) Deprecate non-Trie-based numeric (and date) field types in 4.x and remove them from 5.0

2014-03-29 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954476#comment-13954476
 ] 

Uwe Schindler commented on SOLR-5936:
-

Hi Jack,

bq. And if trie really is the best approach for numeric fields, why not just do 
all of this under the hood instead of polluting the field type names with 
trie? IOW, rename TrieIntField to IntField, etc.

This goes back to the introduction of that feature in Lucene 2.9 / Solr 1.4. At 
that time everybody was using the other field types, and stuff like IntField, 
SortableIntField, ... was already taken as *names*. Because of that, it was 
introduced to Solr under a name based on the original donated code (by me). 
Shortly afterwards, Lucene renamed the field to NumericField and the query to 
NumericRangeQuery. The term trie is no longer used in Lucene; only the term 
precisionStep, a configurable setting for the number of additional terms, 
remained (in the documentation). So Trie(Int|Long|Float|Double|Date)Field is 
just there for backwards compatibility with earlier indexes (from Solr 1.4), 
and now, because the name is baked in, there is no way to change it anymore.

+1 to rename for 5.0

bq. As part of this cleanup, could somebody volunteer to create a plain-English 
summary of exactly what a trie field really is, what good it is, and why we 
can't live without them? I've read the code and, okay, there is a sequence of 
bit shifts and generation of extra terms, but in plain English, what's the 
point?

See javadocs of NumericRangeQuery.

bq. Specifically, for example, does it matter if a field has an evenly 
distributed range of numeric values with little repetition vs. numeric codes 
where there is a relatively small number of distinct values (e.g., 1-10, or 
scores of 0-100 or dates in years between 1970 and 2014) and relatively high 
cardinality?

This does not matter, because of the structure of the additional terms. The 
number of terms used for actual ranges is almost always around the approximately 
expected number (see the javadocs of NRQ). It also does not matter if it is a 
date or an int or a float. Internally, for trie, there are no floats or dates at 
all. Everything is mapped to sortable bits (meaning that if value_a < value_b, 
then also bits_of_value_a < bits_of_value_b). The size of the range also has no 
real effect: Lucene always matches approximately the same number of terms (a few 
hundred at maximum).

Simply put, you are indexing all numbers as bit strings like 10110110 (just in a 
better-compressed form), with additional terms that strip some bits from the 
right (like 10110110, 101101, 1011, 10). Ranges are then simplified so that the 
middle parts of the range are matched with shorter terms that match more 
documents. For that algorithm, the distribution of values is not that important. 
Index size grows only minimally, because the shorter terms are rarer (approx. 
12% more terms) and carry large posting lists (many docs match). But as those 
terms match many sequential docs, the posting lists are not that big (because of 
the delta encoding). So trie terms raise the index size only by a few percent, 
but make range queries extremely fast, because a range can be matched with a few 
terms that each hit many documents.
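
A toy sketch of that encoding (not Lucene's actual NumericUtils term format, 
which packs the bits differently): it shows the sign-bit flip that makes the bit 
order match the numeric order, and the extra lower-precision terms produced by 
shifting precisionStep bits off the right.
{code}
class TrieTermsSketch {
  public static void main(String[] args) {
    int value = -42;
    int precisionStep = 8;                 // illustration only
    // Flip the sign bit so that value_a < value_b implies the unsigned bit
    // patterns compare the same way.
    int sortable = value ^ 0x80000000;
    for (int shift = 0; shift < 32; shift += precisionStep) {
      // Each extra term drops `shift` low-order bits, e.g. 10110110 -> 101101 -> 1011 -> 10.
      System.out.println("shift=" + shift + " term=" + Integer.toBinaryString(sortable >>> shift));
    }
  }
}
{code}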

bq. I mean, does trie do a uniformly great job for both of these extreme use 
cases, including for faceting?

It is not used for faceting. Faceting does not use the additional terms. For 
faceting, use DocValues instead of indexed fields. If you want to use Trie 
fields and don't want to search on them with ranges, you can switch off the 
additional terms by setting precStep to 0.

One last note from my side:
I agree with hiding the impl details from the user. In my opinion the user only 
needs 2 types of numerics: one with precisionStep=4 or 8 (I think the default in 
Solr is 8, although I disagree - e.g., Elasticsearch uses the Lucene default of 
4), and another one with precisionStep=infinity (0 in Solr) for numerics that 
are only used for sorting and don't need range queries.

 Deprecate non-Trie-based numeric (and date) field types in 4.x and remove 
 them from 5.0
 ---

 Key: SOLR-5936
 URL: https://issues.apache.org/jira/browse/SOLR-5936
 Project: Solr
  Issue Type: Task
  Components: Schema and Analysis
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: SOLR-5936.branch_4x.patch, SOLR-5936.branch_4x.patch


 We've been discouraging people from using non-Trie numeric/date field types 
 for years; it's time we made it official.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-5488) Fix up test failures for Analytics Component

2014-03-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954489#comment-13954489
 ] 

Erick Erickson commented on SOLR-5488:
--

Hmmm, I'm pretty sure Vitaliy's patch incorporates yours, Steve.

So I applied Vitaliy's patch and ran the ExpressionTest test for 30,000 
iterations with no problems so far. The @Ignore and @BadApple annotations are gone.

So, I'll commit this patch in a bit (running full test suite now). If no test 
failures happen over the rest of the weekend I propose to merge it into 4x 
early next week.

 Fix up test failures for Analytics Component
 

 Key: SOLR-5488
 URL: https://issues.apache.org/jira/browse/SOLR-5488
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.7, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, eoe.errors


 The analytics component has a few test failures, perhaps 
 environment-dependent. This is just to collect the test fixes in one place 
 for convenience when we merge back into 4.x



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3862) add remove as update option for atomically removing a value from a multivalued field

2014-03-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954494#comment-13954494
 ] 

Erick Erickson commented on SOLR-3862:
--

Does anyone have any objections for committing this? I'm running precommit and 
tests this evening; if there are no objections I'll commit this early next week.

I took a pretty quick look at the code and it seems OK, but I'd love to have 
someone who knows the code better take a look.

I'll also put up a patch without the commented-out code that looks like a 
leftover.

One note: it's a bit easier if people always put up a patch with the same name, 
SOLR-3862.patch in this case. Only the most recent one will be blue; the rest 
will be gray. No big deal, just for future reference.

 add remove as update option for atomically removing a value from a 
 multivalued field
 --

 Key: SOLR-3862
 URL: https://issues.apache.org/jira/browse/SOLR-3862
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.0-BETA
Reporter: Jim Musil
Assignee: Erick Erickson
 Attachments: SOLR-3862-2.patch, SOLR-3862-3.patch, SOLR-3862-4.patch, 
 SOLR-3862.patch


 Currently you can atomically add a value to a multivalued field. It would 
 be useful to be able to remove a value from a multivalued field. 
 When you set a multivalued field to null, it destroys all values.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5936) Deprecate non-Trie-based numeric (and date) field types in 4.x and remove them from 5.0

2014-03-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954508#comment-13954508
 ] 

Erick Erickson commented on SOLR-5936:
--

Could we take the pint, plong, pfloat and all that out of the example schema 
while we're at it? Maybe in trunk only? I think that trunk, at least, won't 
have to read indexes with these in it.

 Deprecate non-Trie-based numeric (and date) field types in 4.x and remove 
 them from 5.0
 ---

 Key: SOLR-5936
 URL: https://issues.apache.org/jira/browse/SOLR-5936
 Project: Solr
  Issue Type: Task
  Components: Schema and Analysis
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: SOLR-5936.branch_4x.patch, SOLR-5936.branch_4x.patch


 We've been discouraging people from using non-Trie numericdate field types 
 for years, it's time we made it official.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-3862) add remove as update option for atomically removing a value from a multivalued field

2014-03-29 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954517#comment-13954517
 ] 

Yonik Seeley edited comment on SOLR-3862 at 3/30/14 12:01 AM:
--

bq. Does anyone have any objections for committing this?

Is it finished?  It doesn't look like there are tests for all the 
functionality.  The debug logging statements should probably go as well.
Also, it's nice to know *what* is being committed (as in, what is the API?) to 
enable feedback without having to parse the code to figure it out.



was (Author: ysee...@gmail.com):
bq. Does anyone have any objections for committing this?

Is it finished?  It doesn't look like there are tests for all the 
functionality.  The debug logging statements should probably go as well.
Also, it's nice to know *what* is being committed (as in, what is the API?) and 
give feedback without having to parse the code to figure it out.


 add remove as update option for atomically removing a value from a 
 multivalued field
 --

 Key: SOLR-3862
 URL: https://issues.apache.org/jira/browse/SOLR-3862
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.0-BETA
Reporter: Jim Musil
Assignee: Erick Erickson
 Attachments: SOLR-3862-2.patch, SOLR-3862-3.patch, SOLR-3862-4.patch, 
 SOLR-3862.patch


 Currently you can atomically add a value to a multivalued field. It would 
 be useful to be able to remove a value from a multivalued field. 
 When you set a multivalued field to null, it destroys all values.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3862) add remove as update option for atomically removing a value from a multivalued field

2014-03-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954526#comment-13954526
 ] 

Erick Erickson commented on SOLR-3862:
--

[~alaknantha] Can you address Yonik's points please?

 add remove as update option for atomically removing a value from a 
 multivalued field
 --

 Key: SOLR-3862
 URL: https://issues.apache.org/jira/browse/SOLR-3862
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.0-BETA
Reporter: Jim Musil
Assignee: Erick Erickson
 Attachments: SOLR-3862-2.patch, SOLR-3862-3.patch, SOLR-3862-4.patch, 
 SOLR-3862.patch


 Currently you can atomically add a value to a multivalued field. It would 
 be useful to be able to remove a value from a multivalued field. 
 When you set a multivalued field to null, it destroys all values.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-29 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954527#comment-13954527
 ] 

Otis Gospodnetic commented on SOLR-5935:


What happens when *only indexing* at the same rate (same number of indexing 
threads, same batch size, same everything) is happening without any querying?  
Any way to get things to lock up when just indexing?  Or were you never able to 
get things to lock up when just indexing?

And in the *indexing AND searching* scenario, does the lock up happen even if 
indexing rate is really low while query rate is high?

And vice versa: high indexing rate, but low query rate?
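
For context on the attached dumps: the blocked threads shown below are parked in 
PoolEntryFuture.await(), i.e. waiting for a free pooled HttpClient connection. A 
minimal sketch of the two HttpClient 4.2-level limits such threads wait on 
(values are placeholders, not a tuning recommendation, and Solr normally drives 
these through its shard handler configuration rather than direct HttpClient 
code):
{code}
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.conn.PoolingClientConnectionManager;

class ConnectionPoolSketch {
  public static void main(String[] args) {
    PoolingClientConnectionManager cm = new PoolingClientConnectionManager();
    cm.setMaxTotal(128);            // total connections across all hosts
    cm.setDefaultMaxPerRoute(32);   // connections per target host
    DefaultHttpClient client = new DefaultHttpClient(cm);
    // Threads block in AbstractConnPool.getPoolEntryBlocking() once both
    // limits are exhausted and nothing returns a connection to the pool.
    client.getConnectionManager().shutdown();
  }
}
{code}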



 SolrCloud hangs under certain conditions
 

 Key: SOLR-5935
 URL: https://issues.apache.org/jira/browse/SOLR-5935
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
Reporter: Rafał Kuć
Priority: Critical
 Attachments: thread dumps.zip


 As discussed on the mailing list - let's try to find the reason why, under 
 certain conditions, SolrCloud can hang.
 I have an issue with one of the SolrCloud deployments. Six machines, a 
 collection with 6 shards with a replication factor of 3. It all runs on 6 
 physical servers, each with 24 cores. We've indexed about 32 million 
 documents and everything was fine until that point.
 Now, during performance tests, we ran into an issue - SolrCloud hangs
 when querying and indexing run at the same time. First we see a
 normal load on the machines, then the load starts to drop and thread
 dumps show numerous threads like this:
 {noformat}
 Thread 12624: (state = BLOCKED)
  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
 may be imprecise)
  - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
 line=186 (Compiled frame)
  - 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() 
 @bci=42, line=2043 (Compiled frame)
  - org.apache.http.pool.PoolEntryFuture.await(java.util.Date) @bci=50, 
 line=131 (Compiled frame)
  - 
 org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(java.lang.Object, 
 java.lang.Object, long, java.util.concurrent.TimeUnit, 
 org.apache.http.pool.PoolEntryFuture) @bci=431, line=281 (Compiled frame)
  - 
 org.apache.http.pool.AbstractConnPool.access$000(org.apache.http.pool.AbstractConnPool,
  java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, 
 org.apache.http.pool.PoolEntryFuture) @bci=8, line=62 (Compiled frame)
  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
 java.util.concurrent.TimeUnit) @bci=15, line=176 (Compiled frame)
  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
 java.util.concurrent.TimeUnit) @bci=3, line=169 (Compiled frame)
  - org.apache.http.pool.PoolEntryFuture.get(long, 
 java.util.concurrent.TimeUnit) @bci=38, line=100 (Compiled frame)
  - 
 org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(java.util.concurrent.Future,
  long, java.util.concurrent.TimeUnit) @bci=4, line=212 (Compiled frame)
  - 
 org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(long,
  java.util.concurrent.TimeUnit) @bci=10, line=199 (Compiled frame)
  - 
 org.apache.http.impl.client.DefaultRequestDirector.execute(org.apache.http.HttpHost,
  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=259, 
 line=456 (Compiled frame)
  - 
 org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.HttpHost,
  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=344, 
 line=906 (Compiled frame)
  - 
 org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest,
  org.apache.http.protocol.HttpContext) @bci=21, line=805 (Compiled frame)
  - 
 org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest)
  @bci=6, line=784 (Compiled frame)
  - 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest,
  org.apache.solr.client.solrj.ResponseParser) @bci=1175, line=395 
 (Interpreted frame)
  - 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest)
  @bci=17, line=199 (Compiled frame)
  - 
 org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(org.apache.solr.client.solrj.impl.LBHttpSolrServer$Req)
  @bci=132, line=285 (Interpreted frame)
  - 
 org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(org.apache.solr.client.solrj.request.QueryRequest,
  java.util.List) @bci=13, line=214 (Compiled frame)
  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=246, 
 line=161 (Compiled frame)
  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=1, 
 line=118 (Interpreted frame)
  - 

[jira] [Comment Edited] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-29 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954527#comment-13954527
 ] 

Otis Gospodnetic edited comment on SOLR-5935 at 3/30/14 12:29 AM:
--

What happens when *only indexing* at the same rate (same number of indexing 
threads, same batch size, same everything) is happening without any querying?  
Any way to get things to lock up when just indexing?  Or were you never able to 
get things to lock up when just indexing?

And in the *indexing AND searching* scenario, does the lock up happen even if 
*indexing rate is really low while query rate is high*?

And vice versa: *high indexing rate, but low query rate*?




was (Author: otis):
What happens when *only indexing* at the same rate (same number of indexing 
threads, same batch size, same everything) is happening without any querying?  
Any way to get things to lock up when just indexing?  Or were you never able to 
get things to lock up when just indexing?

And in the *indexing AND searching* scenario, does the lock up happen even if 
indexing rate is really low while query rate is high?

And vice versa: high indexing rate, but low query rate?



 SolrCloud hangs under certain conditions
 

 Key: SOLR-5935
 URL: https://issues.apache.org/jira/browse/SOLR-5935
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
Reporter: Rafał Kuć
Priority: Critical
 Attachments: thread dumps.zip


 As discussed on the mailing list - let's try to find the reason why, under 
 certain conditions, SolrCloud can hang.
 I have an issue with one of the SolrCloud deployments. Six machines, a 
 collection with 6 shards with a replication factor of 3. It all runs on 6 
 physical servers, each with 24 cores. We've indexed about 32 million 
 documents and everything was fine until that point.
 Now, during performance tests, we ran into an issue - SolrCloud hangs
 when querying and indexing run at the same time. First we see a
 normal load on the machines, then the load starts to drop and thread
 dumps show numerous threads like this:
 {noformat}
 Thread 12624: (state = BLOCKED)
  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
 may be imprecise)
  - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
 line=186 (Compiled frame)
  - 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() 
 @bci=42, line=2043 (Compiled frame)
  - org.apache.http.pool.PoolEntryFuture.await(java.util.Date) @bci=50, 
 line=131 (Compiled frame)
  - 
 org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(java.lang.Object, 
 java.lang.Object, long, java.util.concurrent.TimeUnit, 
 org.apache.http.pool.PoolEntryFuture) @bci=431, line=281 (Compiled frame)
  - 
 org.apache.http.pool.AbstractConnPool.access$000(org.apache.http.pool.AbstractConnPool,
  java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, 
 org.apache.http.pool.PoolEntryFuture) @bci=8, line=62 (Compiled frame)
  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
 java.util.concurrent.TimeUnit) @bci=15, line=176 (Compiled frame)
  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
 java.util.concurrent.TimeUnit) @bci=3, line=169 (Compiled frame)
  - org.apache.http.pool.PoolEntryFuture.get(long, 
 java.util.concurrent.TimeUnit) @bci=38, line=100 (Compiled frame)
  - 
 org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(java.util.concurrent.Future,
  long, java.util.concurrent.TimeUnit) @bci=4, line=212 (Compiled frame)
  - 
 org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(long,
  java.util.concurrent.TimeUnit) @bci=10, line=199 (Compiled frame)
  - 
 org.apache.http.impl.client.DefaultRequestDirector.execute(org.apache.http.HttpHost,
  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=259, 
 line=456 (Compiled frame)
  - 
 org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.HttpHost,
  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=344, 
 line=906 (Compiled frame)
  - 
 org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest,
  org.apache.http.protocol.HttpContext) @bci=21, line=805 (Compiled frame)
  - 
 org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest)
  @bci=6, line=784 (Compiled frame)
  - 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest,
  org.apache.solr.client.solrj.ResponseParser) @bci=1175, line=395 
 (Interpreted frame)
  - 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest)
  @bci=17, line=199 (Compiled frame)
  - 
 

[jira] [Updated] (SOLR-5501) Ability to work with cold replicas

2014-03-29 Thread Manuel Lenormand (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manuel Lenormand updated SOLR-5501:
---

Attachment: 5501.patch
cloud_screenshot.png

An example of how coldness could be reflected in the cloud view.

 Ability to work with cold replicas
 --

 Key: SOLR-5501
 URL: https://issues.apache.org/jira/browse/SOLR-5501
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.5.1
Reporter: Manuel Lenormand
  Labels: performance
 Fix For: 4.8

 Attachments: 5501.patch, 5501.patch, cloud_screenshot.png


 Following this conversation from the mailing list:
 http://lucene.472066.n3.nabble.com/Proposal-for-new-feature-cold-replicas-brainstorming-td4097501.html
 Should give the ability to use replicas mainly as backup cores and not for 
 handling a high qps rate. 
 This way you would avoid using the caching resources (Solr and OS) used when 
 routing a query to a replica. 
 With many replicas it's harder to hit the Solr cache (the same query may hit 
 another replica), and having many replicas on the same instance would cause 
 useless competition for OS memory for caching.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5501) Ability to work with cold replicas

2014-03-29 Thread Manuel Lenormand (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manuel Lenormand updated SOLR-5501:
---

Attachment: (was: 5501.patch)

 Ability to work with cold replicas
 --

 Key: SOLR-5501
 URL: https://issues.apache.org/jira/browse/SOLR-5501
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.5.1
Reporter: Manuel Lenormand
  Labels: performance
 Fix For: 4.8

 Attachments: 5501.patch, cloud_screenshot.png


 Following this conversation from the mailing list:
 http://lucene.472066.n3.nabble.com/Proposal-for-new-feature-cold-replicas-brainstorming-td4097501.html
 Should give the ability to use replicas mainly as backup cores and not for 
 handling a high qps rate. 
 This way you would avoid using the caching resources (Solr and OS) used when 
 routing a query to a replica. 
 With many replicas it's harder to hit the Solr cache (the same query may hit 
 another replica), and having many replicas on the same instance would cause 
 useless competition for OS memory for caching.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5501) Ability to work with cold replicas

2014-03-29 Thread Manuel Lenormand (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manuel Lenormand updated SOLR-5501:
---

Attachment: 5501.patch

 Ability to work with cold replicas
 --

 Key: SOLR-5501
 URL: https://issues.apache.org/jira/browse/SOLR-5501
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.5.1
Reporter: Manuel Lenormand
  Labels: performance
 Fix For: 4.8

 Attachments: 5501.patch, cloud_screenshot.png


 Following this conversation from the mailing list:
 http://lucene.472066.n3.nabble.com/Proposal-for-new-feature-cold-replicas-brainstorming-td4097501.html
 Should give the ability to use replicas mainly as backup cores and not for 
 handling a high qps rate. 
 This way you would avoid using the caching resources (Solr and OS) used when 
 routing a query to a replica. 
 With many replicas it's harder to hit the Solr cache (the same query may hit 
 another replica), and having many replicas on the same instance would cause 
 useless competition for OS memory for caching.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5501) Ability to work with cold replicas

2014-03-29 Thread Manuel Lenormand (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manuel Lenormand updated SOLR-5501:
---

Attachment: (was: 5501.patch)

 Ability to work with cold replicas
 --

 Key: SOLR-5501
 URL: https://issues.apache.org/jira/browse/SOLR-5501
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.5.1
Reporter: Manuel Lenormand
  Labels: performance
 Fix For: 4.8

 Attachments: 5501.patch, cloud_screenshot.png


 Following this conversation from the mailing list:
 http://lucene.472066.n3.nabble.com/Proposal-for-new-feature-cold-replicas-brainstorming-td4097501.html
 Should give the ability to use replicas mainly as backup cores and not for 
 handling a high qps rate. 
 This way you would avoid using the caching resources (Solr and OS) used when 
 routing a query to a replica. 
 With many replicas it's harder to hit the Solr cache (the same query may hit 
 another replica), and having many replicas on the same instance would cause 
 useless competition for OS memory for caching.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3862) add remove as update option for atomically removing a value from a multivalued field

2014-03-29 Thread Alaknantha (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954563#comment-13954563
 ] 

Alaknantha commented on SOLR-3862:
--

Since we are only interested in the remove functionality, I removed the replace 
code segment from this patch. The JUnit test case AtomicUpdatesTest.java tests 
the newly added remove functionality as well as the existing set and add 
operations.
Could you please let me know where to update the API documentation, and I will 
do that? I am attaching the updated patch as SOLR-3862.patch
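
For reference (per Yonik's question about the API), a hedged SolrJ sketch of how 
the new operation would presumably be used, assuming remove follows the same 
Map-based convention as the existing set/add atomic updates; the URL, core and 
field names here are made up:
{code}
import java.util.Collections;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

class AtomicRemoveSketch {
  public static void main(String[] args) throws Exception {
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc1");
    // Atomic update: the map key names the operation ("set" and "add" exist
    // today; "remove" is what this patch adds).
    doc.addField("tags", Collections.singletonMap("remove", "obsolete-tag"));
    server.add(doc);
    server.commit();
    server.shutdown();
  }
}
{code}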

 add remove as update option for atomically removing a value from a 
 multivalued field
 --

 Key: SOLR-3862
 URL: https://issues.apache.org/jira/browse/SOLR-3862
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.0-BETA
Reporter: Jim Musil
Assignee: Erick Erickson
 Attachments: SOLR-3862-2.patch, SOLR-3862-3.patch, SOLR-3862-4.patch, 
 SOLR-3862.patch, SOLR-3862.patch


 Currently you can atomically add a value to a multivalued field. It would 
 be useful to be able to remove a value from a multivalued field. 
 When you set a multivalued field to null, it destroys all values.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3862) add remove as update option for atomically removing a value from a multivalued field

2014-03-29 Thread Alaknantha (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alaknantha updated SOLR-3862:
-

Attachment: SOLR-3862.patch

 add remove as update option for atomically removing a value from a 
 multivalued field
 --

 Key: SOLR-3862
 URL: https://issues.apache.org/jira/browse/SOLR-3862
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.0-BETA
Reporter: Jim Musil
Assignee: Erick Erickson
 Attachments: SOLR-3862-2.patch, SOLR-3862-3.patch, SOLR-3862-4.patch, 
 SOLR-3862.patch, SOLR-3862.patch


 Currently you can atomically add a value to a multivalued field. It would 
 be useful to be able to remove a value from a multivalued field. 
 When you set a multivalued field to null, it destroys all values.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4478) Allow cores to specify a named config set in non-SolrCloud mode

2014-03-29 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13954567#comment-13954567
 ] 

Shalin Shekhar Mangar commented on SOLR-4478:
-

Thanks Alan!

I guess the one time it succeeded for me was because I hadn't cleared my 
zookeeper directory :)

 Allow cores to specify a named config set in non-SolrCloud mode
 ---

 Key: SOLR-4478
 URL: https://issues.apache.org/jira/browse/SOLR-4478
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.2, 5.0
Reporter: Erick Erickson
Assignee: Alan Woodward
 Fix For: 4.8, 5.0

 Attachments: SOLR-4478-take2.patch, SOLR-4478-take2.patch, 
 SOLR-4478-take2.patch, SOLR-4478-take2.patch, SOLR-4478.patch, 
 SOLR-4478.patch, solr.log


 Part of moving forward to the new way, after SOLR-4196 etc... I propose an 
 additional parameter specified on the core node in solr.xml or as a 
 parameter in the discovery mode core.properties file, call it configSet, 
 where the value provided is a path to a directory, either absolute or 
 relative. Really, this is as though you copied the conf directory somewhere 
 to be used by more than one core.
 Straw-man: There will be a directory solr_home/configsets which will be the 
 default. If the configSet parameter is, say, myconf, then I'd expect a 
 directory named myconf to exist in solr_home/configsets, which would look 
 something like
 solr_home/configsets/myconf/schema.xml
   solrconfig.xml
   stopwords.txt
   velocity
   velocity/query.vm
 etc.
 If multiple cores used the same configSet, schema, solrconfig etc. would all 
 be shared (i.e. shareSchema=true would be assumed). I don't see a good 
 use-case for _not_ sharing schemas, so I don't propose to allow this to be 
 turned off. Hmmm, what if shareSchema is explicitly set to false in the 
 solr.xml or properties file? I'd guess it should be honored but maybe log a 
 warning?
 Mostly I'm putting this up for comments. I know that there are already 
 thoughts about how this all should work floating around, so before I start 
 any work on this I thought I'd at least get an idea of whether this is the 
 way people are thinking about going.
 Configset can be either a relative or absolute path; if relative, it's assumed 
 to be relative to solr_home.
 Thoughts?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org