Re: solr-user-subscribe

2019-10-25 Thread Erick Erickson
If you _are_ using SolrCloud, you can use the collections API SPLITSHARD 
command.
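For the record, SPLITSHARD is just an HTTP call to the Collections API. A minimal sketch, assuming a SolrCloud node on localhost:8983 and a hypothetical collection named mycoll with a shard named shard1; the command is only constructed and printed here, so run the echoed curl against a live cluster:

```shell
# Build the SPLITSHARD request for a hypothetical collection/shard.
SOLR="http://localhost:8983/solr"
URL="$SOLR/admin/collections?action=SPLITSHARD&collection=mycoll&shard=shard1"
# Print the command to run against a live SolrCloud node:
echo "curl \"$URL\""
```

The split produces two sub-shards covering the parent's hash range; once the split is confirmed, the now-inactive parent shard can be removed.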

> On Oct 25, 2019, at 7:37 AM, Shawn Heisey  wrote:
> 
> On 10/24/2019 11:19 PM, Hafiz Muhammad Shafiq wrote:
>> Hi,
>> I am using Solr 6.x for search. The data in one shard has now grown
>> large, so I have to create some additional shards and rebalance based
>> on the number of documents. From what I have found, Solr does not
>> provide a rebalance API. Is that correct? How can I accomplish this?
> 
> You will need to create the collection again.  If you're not already running 
> SolrCloud, you will need to change your Solr install so that it is running in 
> cloud mode.  Sharded indexes are possible without SolrCloud, but SolrCloud 
> will be a lot easier.
> 
> A wiki page that's more informative than helpful:
> 
> https://cwiki.apache.org/confluence/display/solr/HowToReindex
> 
> Thanks,
> Shawn



Re: solr-user-subscribe

2019-10-25 Thread Shawn Heisey

On 10/24/2019 11:19 PM, Hafiz Muhammad Shafiq wrote:

Hi,
I am using Solr 6.x for search. The data in one shard has now grown
large, so I have to create some additional shards and rebalance based
on the number of documents. From what I have found, Solr does not
provide a rebalance API. Is that correct? How can I accomplish this?


You will need to create the collection again.  If you're not already 
running SolrCloud, you will need to change your Solr install so that it 
is running in cloud mode.  Sharded indexes are possible without 
SolrCloud, but SolrCloud will be a lot easier.
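As a sketch of what "create the collection again" looks like in cloud mode, here is a Collections API CREATE call with an explicit shard count. The names are hypothetical (a collection called newcoll with 4 shards), and the command is only constructed and printed:

```shell
# Build a Collections API CREATE call with an explicit shard count.
SOLR="http://localhost:8983/solr"
URL="$SOLR/admin/collections?action=CREATE&name=newcoll&numShards=4&replicationFactor=2"
echo "curl \"$URL\""   # run against a live SolrCloud node, then reindex into newcoll
```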


A wiki page that's more informative than helpful:

https://cwiki.apache.org/confluence/display/solr/HowToReindex

Thanks,
Shawn


Re: solr-user-subscribe

2019-10-25 Thread Hafiz Muhammad Shafiq
Hi,
I am using Solr 6.x for search. The data in one shard has now grown
large, so I have to create some additional shards and rebalance based
on the number of documents. From what I have found, Solr does not
provide a rebalance API. Is that correct? How can I accomplish this?

On Fri, Oct 25, 2019 at 10:16 AM Hafiz Muhammad Shafiq <
hafiz.sha...@kics.edu.pk> wrote:

>
>


Re: solr-user-unsubscribe

2019-09-07 Thread Erick Erickson
Follow the instructions here: 
http://lucene.apache.org/solr/community.html#mailing-lists-irc. You must use 
the _exact_ same e-mail as you used to subscribe.

If the initial try doesn't work and following the suggestions at the "problems" 
link doesn't work for you, let us know. But note you need to show us the 
_entire_ return header to allow anyone to diagnose the problem.

Best,
Erick

> On Sep 6, 2019, at 4:22 AM, Charton, Andre 
>  wrote:
> 
> 
> 



Re: solr-user-subscribe

2017-07-17 Thread srshaik
I added a reply to the discussion. Please accept.

On Fri, Jul 14, 2017 at 11:05 PM, Naohiko Uramoto [via Lucene] <
ml+s472066n4346101...@n3.nabble.com> wrote:

> solr-user-subscribe <[hidden email]
> >
>
> --
> Naohiko Uramoto
>
>
> --
> If you reply to this email, your message will be added to the discussion
> below:
> http://lucene.472066.n3.nabble.com/solr-user-subscribe-tp4346101.html
>




--
View this message in context: 
http://lucene.472066.n3.nabble.com/solr-user-subscribe-tp4346101p4346307.html
Sent from the Solr - User mailing list archive at Nabble.com.

Re: solr-user-subscribe

2017-07-17 Thread Erick Erickson
Please follow the instructions here:
http://lucene.apache.org/solr/community.html#mailing-lists-irc. You
must use the _exact_ same e-mail as you used to subscribe.


If the initial try doesn't work and following the suggestions at the
"problems" link doesn't work for you, let us know. But note you need
to show us the _entire_ return header to allow anyone to diagnose the
problem.


Best,

Erick

On Sun, Jul 16, 2017 at 12:49 PM, Yangrui Guo  wrote:
> unsubscribe
>
> On Friday, July 14, 2017, Naohiko Uramoto  wrote:
>
>> solr-user-subscribe >
>>
>> --
>> Naohiko Uramoto
>>


Re: solr-user-subscribe

2017-07-16 Thread Yangrui Guo
unsubscribe

On Friday, July 14, 2017, Naohiko Uramoto  wrote:

> solr-user-subscribe >
>
> --
> Naohiko Uramoto
>


Re: solr-user-unsubscribe

2017-01-31 Thread Erick Erickson
Please follow the instructions here:
http://lucene.apache.org/solr/community.html#mailing-lists-irc. You
must use the _exact_ same e-mail as you used to subscribe.

If the initial try doesn't work and following the suggestions at the
"problems" link doesn't work for you, let us know. But note you need
to show us the _entire_ return header to allow anyone to diagnose the
problem.

Best,
Erick

On Tue, Jan 31, 2017 at 4:59 AM, Rowe, William - 1180 - MITLL
 wrote:
> solr-user-unsubscribe
>
>
>
> From: Rowe, William - 1180 - MITLL
> Sent: Monday, January 30, 2017 7:54 AM
> To: 'solr-user@lucene.apache.org'
> Subject: solr-user-unsubscribe
>
>
>
>
>
>
>
> Bill Rowe
>
> Senior Software Developer
>
> Technology Innovation & Integration, Information Services Department (ISD)
>
> MIT Lincoln Laboratory
>
> 244 Wood Street
>
> Lexington, MA 02420
>
> Office: 781-981-4520
>
> Mobile: 774-210-0853
>
> william.r...@ll.mit.edu
>
>


Re: solr-user-unsubscribe

2017-01-31 Thread Erick Erickson
Please follow the instructions here:
http://lucene.apache.org/solr/community.html#mailing-lists-irc. You
must use the _exact_ same e-mail as you used to subscribe.

If the initial try doesn't work and following the suggestions at the
"problems" link doesn't work for you, let us know. But note you need
to show us the _entire_ return header to allow anyone to diagnose the
problem.

Best,
Erick

On Tue, Jan 31, 2017 at 5:54 AM, Jiangenbo  wrote:
> solr-user-unsubscribe
>
>
>


Re: solr-user-unsubscribe

2017-01-30 Thread Erick Erickson
It's all an automated process.
Please follow the "unsubscribe" instructions here:
http://lucene.apache.org/solr/community.html#mailing-lists-irc

You must use the _exact_ e-mail address you used to subscribe with.
The "Problems" link provides additional information.

On Mon, Jan 30, 2017 at 4:55 AM, Syed Mudasseer  wrote:
>


Re: solr-user-unsubscribe

2017-01-30 Thread Dorian Hoxha
Come on dude. Just look at instructions. Have a little respect.

On Mon, Jan 30, 2017 at 1:55 PM, Rowe, William - 1180 - MITLL <
william.r...@ll.mit.edu> wrote:

> solr-user-unsubscribe
>
>
>


RE: solr-user-unsubscribe

2017-01-30 Thread Rowe, William - 1180 - MITLL
solr-user-unsubscribe

 





Re: solr-user-unsubscribe

2015-05-28 Thread Erick Erickson
Please follow the instructions here:
http://lucene.apache.org/solr/resources.html. Be sure to use the exact
same e-mail you used to subscribe.

Best,
Erick

On Thu, May 28, 2015 at 6:10 AM, Stefan Meise - SONIC Performance
Support stefan.me...@sonic-ps.de wrote:



Re: solr user

2014-06-03 Thread Gora Mohanty
On 3 June 2014 11:22, Manoj V manojv1...@gmail.com wrote:
 I am working on Solr and I am interested in getting added to the solr-user group.

 Can you please add me to the group?

If mail from your address is reaching this list, you are already subscribed
to it. Presumably you did that via
https://lucene.apache.org/solr/discussion.html
Or did you mean something else?

Regards,
Gora


Re: solr-user Digest of: get.100322

2014-05-21 Thread Jack Krupansky
Just to re-emphasize the point - when provisioning Solr, you need to ENSURE 
that the system has enough system memory so that the Solr index on that 
system fits entirely in the OS file system cache. No ifs, ands, or buts. If 
you fail to follow that RULE, all bets are off for performance and don't 
even bother complaining about poor performance on this mailing list!! Either 
get more memory or shard your index more heavily - again, no ifs, ands, or 
buts!!


Any questions on that rule?

Maybe somebody else can phrase this guidance more clearly, so that fewer 
people will fail to follow it.


Or, maybe we should enhance Solr to check available memory and log a stern 
warning if the index size exceeds system memory when Solr is started.


-- Jack Krupansky

-Original Message- 
From: Shawn Heisey

Sent: Tuesday, May 20, 2014 1:49 PM
To: solr-user@lucene.apache.org
Subject: Re: solr-user Digest of: get.100322

On 5/20/2014 2:01 AM, Jeongseok Son wrote:

Though it uses only a small amount of memory, I'm worried about memory
usage because I have to store so many documents (32GB RAM / 5B total
docs, summed over all cores).


If you've only got 32GB of RAM and there are five billion docs on the
system, Solr performance will be dismal no matter what you do with
docValues.  Your index will be FAR larger than the amount of available
RAM for caching.

http://wiki.apache.org/solr/SolrPerformanceProblems#RAM

With that many documents, even if you don't use RAM-hungry features like
sorting and facets, you'll need a significant heap size, which will
further reduce the amount of RAM on the system that the OS can use to
cache the index.

For good performance, Solr *relies* on the operating system caching a
significant portion of the index.

Thanks,
Shawn 



Re: solr-user Digest of: get.100322

2014-05-21 Thread Shawn Heisey
On 5/21/2014 7:28 AM, Jack Krupansky wrote:
 Just to re-emphasize the point - when provisioning Solr, you need to
 ENSURE that the system has enough system memory so that the Solr index
 on that system fits entirely in the OS file system cache. No ifs,
 ands, or buts. If you fail to follow that RULE, all bets are off for
 performance and don't even bother complaining about poor performance
 on this mailing list!! Either get more memory or shard your index more
 heavily - again, no ifs, ands, or buts!!

 Any questions on that rule?

 Maybe somebody else can phrase this guidance more clearly, so that
 fewer people will fail to follow it.

 Or, maybe we should enhance Solr to check available memory and log a
 stern warning if the index size exceeds system memory when Solr is
 started.

If the amount of free and cached RAM can be detected by Java in a
cross-platform method, it would be awesome to log a performance warning
if the total of that memory is less than 50% of the total index size. 
This is the point where I generally feel comfortable saying that lack of
memory is a likely problem.  Depending on the exact index composition
and the types of queries being run, a Solr server may run very well when
only half the index can be cached.
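A rough version of that 50% check can already be scripted outside of Java. A sketch for Linux only (it reads /proc/meminfo), assuming the index directory is passed as the first argument; it compares total RAM against index size, so a stricter check would use free plus cached memory instead:

```shell
#!/bin/sh
# Warn when less than half of a Solr index could fit in system RAM.
INDEX_DIR="${1:-.}"                 # path to the index directory (default: cwd)
index_bytes=$(du -sb "$INDEX_DIR" | cut -f1)
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
ram_bytes=$((ram_kb * 1024))
echo "index: $index_bytes bytes, ram: $ram_bytes bytes"
if [ $((index_bytes / 2)) -gt "$ram_bytes" ]; then
  echo "WARNING: less than half of the index fits in RAM"
fi
```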

I've seen some discussion of a documentation section (and supporting
scripts/data in the download) that describes how to set up a
production-ready and fault tolerant install.  That would be a good place
to put this information.  An install script on *NIX systems would be
able to easily gather memory information and display various index sizes
that the hardware is likely to handle efficiently.

If nothing else, we can beef up the SYSTEM_REQUIREMENTS.txt file.  Later
today I'll file an issue and cook up a patch for that.

Thanks,
Shawn



Re: solr-user Digest of: get.100322

2014-05-20 Thread Jeongseok Son
Thank you for your reply! I also found docValues after sending my
email, and your suggestion seems like the best solution for me.

Now I'm configuring schema.xml to use docValues and have a question
about docValuesFormat.

According to this thread(
http://lucene.472066.n3.nabble.com/Trade-offs-in-choosing-DocValuesFormat-td4114758.html
),

Solr 4.6 only holds some hash structures in memory space with the
default docValuesFormat configuration.

Though it uses only a small amount of memory, I'm worried about memory
usage because I have to store so many documents (32GB RAM / 5B total
docs, summed over all cores).

Which docValuesFormat is more appropriate in my case? (Default or
Disk?) Can I change it later without re-indexing?
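For context, the setting under discussion is chosen per field type in schema.xml in Solr 4.x. A hypothetical sketch (the type and field names here are made up), and as far as I know switching an existing field's docValuesFormat requires reindexing rather than a config-only change:

```xml
<!-- hypothetical: keep docValues data on disk instead of in heap memory -->
<fieldType name="string_dv" class="solr.StrField" docValuesFormat="Disk"/>
<field name="city" type="string_dv" indexed="true" stored="true" docValues="true"/>
```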

On Sat, May 17, 2014 at 9:45 PM,  solr-user-h...@lucene.apache.org wrote:

 solr-user Digest of: get.100322

 Topics (messages 100322 through 100322)

 Re: Sorting problem in Solr due to Lucene Field Cache
 100322 by: Joel Bernstein

 Administrivia:


 --- Administrative commands for the solr-user list ---

 I can handle administrative requests automatically. Please
 do not send them to the list address! Instead, send
 your message to the correct command address:

 To subscribe to the list, send a message to:
solr-user-subscr...@lucene.apache.org

 To remove your address from the list, send a message to:
solr-user-unsubscr...@lucene.apache.org

 Send mail to the following for info and FAQ for this list:
solr-user-i...@lucene.apache.org
solr-user-...@lucene.apache.org

 Similar addresses exist for the digest list:
solr-user-digest-subscr...@lucene.apache.org
solr-user-digest-unsubscr...@lucene.apache.org

 To get messages 123 through 145 (a maximum of 100 per request), mail:
solr-user-get.123_...@lucene.apache.org

 To get an index with subject and author for messages 123-456 , mail:
solr-user-index.123_...@lucene.apache.org

 They are always returned as sets of 100, max 2000 per request,
 so you'll actually get 100-499.

 To receive all messages with the same subject as message 12345,
 send a short message to:
solr-user-thread.12...@lucene.apache.org

 The messages should contain one line or word of text to avoid being
 treated as sp@m, but I will ignore their content.
 Only the ADDRESS you send to is important.

 You can start a subscription for an alternate address,
 for example john@host.domain, just add a hyphen and your
 address (with '=' instead of '@') after the command word:
 solr-user-subscribe-john=host.dom...@lucene.apache.org

 To stop subscription for this address, mail:
 solr-user-unsubscribe-john=host.dom...@lucene.apache.org

 In both cases, I'll send a confirmation message to that address. When
 you receive it, simply reply to it to complete your subscription.

 If despite following these instructions, you do not get the
 desired results, please contact my owner at
 solr-user-ow...@lucene.apache.org. Please be patient, my owner is a
 lot slower than I am ;-)

 --- Enclosed is a copy of the request I received.

 Return-Path: invictu...@gmail.com
 Received: (qmail 64267 invoked by uid 99); 17 May 2014 12:22:20 -
 Received: from athena.apache.org (HELO athena.apache.org) (140.211.11.136)
 by apache.org (qpsmtpd/0.29) with ESMTP; Sat, 17 May 2014 12:22:20 +
 X-ASF-Spam-Status: No, hits=-0.7 required=5.0
 tests=RCVD_IN_DNSWL_LOW,SPF_PASS
 X-Spam-Check-By: apache.org
 Received-SPF: pass (athena.apache.org: domain of invictu...@gmail.com 
 designates 209.85.128.193 as permitted sender)
 Received: from [209.85.128.193] (HELO mail-ve0-f193.google.com) 
 (209.85.128.193)
 by apache.org (qpsmtpd/0.29) with ESMTP; Sat, 17 May 2014 12:22:14 +
 Received: by mail-ve0-f193.google.com with SMTP id sa20so1075564veb.8
 for solr-user-get.100...@lucene.apache.org; Sat, 17 May 2014 
 05:21:54 -0700 (PDT)
 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=gmail.com; s=20120113;
 h=mime-version:date:message-id:subject:from:to:content-type;
 bh=QzTOKgbCPT36kZdZcCT/uV4aRZ2PlQ3OgQFPLH0SCoc=;
 b=yygC07cHEwmRg6rS0bHxGg5AaqtPRdsozFD6eO8ssVVC+YsfT32ZWUDDk9s7/2Z91Q
  aCwFsbb7Thla9nkKbtMctqonOacly29Tsple/lzQX5qOQyAFdzOsQHpim+9jB+W0B1Ac
  ZEDLqPzdMG8ZszKDa8lJ8yRadUtlb83HgB56PulZLh1XQG+WOMAuC8pBQ2zS8c/0lsib
  JVehSX/OdqU+6HAhPYcIm6pLNWP4lYPwjTAp66Bms9j2/Y5ROwZ6azwCgGIe2hsk06q6
  5BSKtoTXAfGweIvTQHEfvp6KgLEhIpgjlgo/s5r0NzNaaRM9zdkhp+qYOWM8nWuT8RAu
  ytng==
 MIME-Version: 1.0
 X-Received: by 10.220.95.204 with SMTP id e12mr2401964vcn.37.1400329314139;
  Sat, 17 May 2014 05:21:54 -0700 (PDT)
 Received: by 10.52.10.137 with HTTP; Sat, 17 May 2014 05:21:54 -0700 (PDT)
 Date: Sat, 17 May 2014 21:21:54 +0900
 Message-ID: 
 CABH_4FoTg+xYGgJ90r_c+0Nb-YBOfZYq7rRyrvXe2ybXkF=b...@mail.gmail.com
 Subject: Give me this mail
 From: Jeongseok Son invictu...@gmail.com
 To: solr-user-get.100...@lucene.apache.org
 Content-Type: text/plain; charset=UTF-8
 

Re: solr-user Digest of: get.100322

2014-05-20 Thread Shawn Heisey
On 5/20/2014 2:01 AM, Jeongseok Son wrote:
 Though it uses only a small amount of memory, I'm worried about memory
 usage because I have to store so many documents (32GB RAM / 5B total
 docs, summed over all cores).

If you've only got 32GB of RAM and there are five billion docs on the
system, Solr performance will be dismal no matter what you do with
docValues.  Your index will be FAR larger than the amount of available
RAM for caching.

http://wiki.apache.org/solr/SolrPerformanceProblems#RAM

With that many documents, even if you don't use RAM-hungry features like
sorting and facets, you'll need a significant heap size, which will
further reduce the amount of RAM on the system that the OS can use to
cache the index.

For good performance, Solr *relies* on the operating system caching a
significant portion of the index.

Thanks,
Shawn



RE: solr user group

2012-10-09 Thread David Hill

And still on the list...

David Hill

Iowa Student Loan | Lead Software Analyst / Developer | phone 515-273-7241 | 
fax 515-273-7241 | dh...@studentloan.org


-Original Message-
From: David Hill
Sent: Tuesday, September 18, 2012 6:58 AM
To: 'solr-user@lucene.apache.org'
Subject: solr user group


sorry for the broadcast, but the solr list server is just not taking the hint 
yet, I have issued the following commands on the following dates:

Sent Mon 08/27/2012 10:37 PM to 'solr-user-unsubscr...@lucene.apache.org' 
subject = unsubscribe

Sent Mon 07/16/2012 6:53 AM to 'solr-user-unsubscr...@lucene.apache.org'

Sent Mon 04/23/2012 8:01 AM to 'solr-user-unsubscr...@lucene.apache.org' 
subject = unsubscribe

David Hill

Iowa Student Loan | Lead Software Analyst / Developer | phone 515-273-7241 | 
fax 515-273-7241 | dh...@studentloan.org



This e-mail and any files transmitted with it are confidential and intended 
solely for the use of the individual or entity to whom they are addressed. If 
you have received this e-mail in error please notify the originator of the 
message. This footer also confirms that this e-mail message has been scanned 
for the presence of computer viruses. Any views expressed in this message are 
those of the individual sender, except where the sender specifies and with 
authority, states them to be the views of Iowa Student Loan.



 



RE: solr user group

2012-10-09 Thread Chris Hostetter

: And still on the list...

As Jack mentioned in his 18 Sep 2012 reply to your original email...

https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201209.mbox/%3CD8AD75DD68FD45618D83C8CE3F93803E@JackKrupansky%3E

 Did you send them from the exact same email address as the original 
 subscriptions?
 
 Did you follow all of the suggestions listed at the Problems? link on 
 the discussions page?

 https://wiki.apache.org/solr/Unsubscribing%20from%20mailing%20lists

( Linked from: https://lucene.apache.org/solr/discussion.html )

In particular: if anyone has problems subscribing/unsubscribing, the 
method to contact a human (the list moderators) for help is 
solr-user-ow...@lucene.apache.org - but there is specific information you 
should proactively provide when contacting the moderators.



-Hoss


Re: solr user group

2012-09-18 Thread Jack Krupansky
Did you send them from the exact same email address as the original 
subscriptions?


Did you follow all of the suggestions listed at the Problems? link on the 
discussions page?


See:
https://wiki.apache.org/solr/Unsubscribing%20from%20mailing%20lists

-- Jack Krupansky

-Original Message- 
From: David Hill

Sent: Tuesday, September 18, 2012 7:58 AM
To: 'solr-user@lucene.apache.org'
Subject: solr user group


sorry for the broadcast, but the solr list server is just not taking the 
hint yet, I have issued the following commands on the following dates:


Sent Mon 08/27/2012 10:37 PM to 'solr-user-unsubscr...@lucene.apache.org' 
subject = unsubscribe


Sent Mon 07/16/2012 6:53 AM to 'solr-user-unsubscr...@lucene.apache.org'

Sent Mon 04/23/2012 8:01 AM to 'solr-user-unsubscr...@lucene.apache.org' 
subject = unsubscribe


David Hill

Iowa Student Loan | Lead Software Analyst / Developer | phone 515-273-7241 | 
fax 515-273-7241 | dh...@studentloan.org




This e-mail and any files transmitted with it are confidential and intended 
solely for the use of the individual or entity to whom they are addressed. 
If you have received this e-mail in error please notify the originator of 
the message. This footer also confirms that this e-mail message has been 
scanned for the presence of computer viruses. Any views expressed in this 
message are those of the individual sender, except where the sender 
specifies and with authority, states them to be the views of Iowa Student 
Loan.







Re: solr-user-unsubscribe

2012-08-19 Thread Michael Della Bitta
Just FYI folks, this doesn't work. You need to send mail to
solr-user-unsubscr...@lucene.apache.org, not the list.

Michael Della Bitta


Appinions | 18 East 41st St., Suite 1806 | New York, NY 10017
www.appinions.com
Where Influence Isn’t a Game


On Sun, Aug 19, 2012 at 12:04 PM, Adamsky, Robert
radam...@techtarget.com wrote:
 solr-user-unsubscribe solr-user-unsubscr...@lucene.apache.org


Re: Solr User Interface

2011-07-19 Thread Yusniel Hidalgo Delgado

Hi,

You can send the wt param to Solr as follows:

wt=json or wt=phps

In the first case, the Solr response is returned in JSON format, and in
the second case in PHP serialized format.
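For example, assuming a default install with a core named collection1 (the commands are only printed here, not executed):

```shell
# Build select URLs that request different response writers via wt.
SELECT="http://localhost:8983/solr/collection1/select"
echo "curl \"$SELECT?q=*:*&wt=json\""   # JSON response
echo "curl \"$SELECT?q=*:*&wt=phps\""   # PHP serialized response
```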


Regards.

On 19/07/11 15:46, serenity keningston wrote:

Hi,

I installed Solr 3.2 and am able to search the crawled data
successfully. However, I would like to develop a UI for the HTTP or JSON
response. Can anyone point me to a tutorial or sample?
I looked at Ajax Solr but am not sure how to proceed.


Serenity



Re: solr-user

2010-10-08 Thread Lance Norskog
Please start a new thread with this topic in the subject line.

On Fri, Oct 8, 2010 at 10:37 PM, ankita shinde
ankitashinde...@gmail.com wrote:
 -- Forwarded message --
 From: ankita shinde ankitashinde...@gmail.com
 Date: Sat, Oct 9, 2010 at 8:19 AM
 Subject: solr-user
 To: solr-user@lucene.apache.org


 hello,
 Is there an API in SolrJ that calls the DataImportHandler to execute
 commands like full-import and delta-import? Please help.




-- 
Lance Norskog
goks...@gmail.com
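For reference, SolrJ has no dedicated DataImportHandler API; the handler is driven by plain HTTP requests, which a SolrJ QueryRequest (or any HTTP client) can issue. A sketch of the request shapes, assuming a core named collection1 with a /dataimport handler registered in solrconfig.xml (only printed here, not executed):

```shell
# DataImportHandler is controlled through its request handler URL.
DIH="http://localhost:8983/solr/collection1/dataimport"
echo "curl \"$DIH?command=full-import\""    # start a full import
echo "curl \"$DIH?command=delta-import\""   # start a delta import
echo "curl \"$DIH?command=status\""         # poll progress
```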


Re: solr-user

2010-10-04 Thread Erick Erickson
I suspect you're not actually including the path to those jars.
SolrException should be in your solrj jar file. You can test this
by executing jar -tf apacheBLAHBLAH.jar which will dump
all the class names in the jar file. I'm assuming that you're
really including the version for the * in the solrj jar file here

So I'd guess it's a classpath issue and you're not really including
what you think you are

HTH
Erick

On Fri, Oct 1, 2010 at 11:28 PM, ankita shinde ankitashinde...@gmail.comwrote:

 -- Forwarded message --
 From: ankita shinde ankitashinde...@gmail.com
 Date: Sat, Oct 2, 2010 at 8:54 AM
 Subject: solr-user
 To: solr-user@lucene.apache.org


 hello,

 I am trying to use SolrJ for interfacing with Solr and to run the
 SolrjTest example. I have included all of the following jar files:


   - commons-codec-1.3.jar
   - commons-fileupload-1.2.1.jar
   - commons-httpclient-3.1.jar
   - commons-io-1.4.jar
   - geronimo-stax-api_1.0_spec-1.0.1.jar
   - apache-solr-solrj-*.jar
   - wstx-asl-3.2.7.jar
   - slf4j-api-1.5.5.jar
   - slf4j-simple-1.5.5.jar




  But it's giving me the error 'NoClassDefFoundError:
 org/apache/solr/client/solrj/SolrServerException'.
 Can anyone tell me where I went wrong?



Re: solr-user

2010-10-04 Thread Allistair Crossley
I updated the SolrJ JAR requirements on the wiki page to be clearer, given how 
many of these SolrJ emails I have seen coming through since joining the list. I 
created a test Java class and removed JARs until I found the minimal set 
required.

On Oct 4, 2010, at 8:27 AM, Erick Erickson wrote:

 I suspect you're not actually including the path to those jars.
 SolrException should be in your solrj jar file. You can test this
 by executing jar -tf apacheBLAHBLAH.jar which will dump
 all the class names in the jar file. I'm assuming that you're
 really including the version for the * in the solrj jar file here
 
 So I'd guess it's a classpath issue and you're not really including
 what you think you are
 
 HTH
 Erick
 
 On Fri, Oct 1, 2010 at 11:28 PM, ankita shinde 
 ankitashinde...@gmail.comwrote:
 
 -- Forwarded message --
 From: ankita shinde ankitashinde...@gmail.com
 Date: Sat, Oct 2, 2010 at 8:54 AM
 Subject: solr-user
 To: solr-user@lucene.apache.org
 
 
 hello,
 
 I am trying to use solrj for interfacing with solr. I am trying to run the
 SolrjTest example. I have included all the following  jar files-
 
 
  - commons-codec-1.3.jar
  - commons-fileupload-1.2.1.jar
  - commons-httpclient-3.1.jar
  - commons-io-1.4.jar
  - geronimo-stax-api_1.0_spec-1.0.1.jar
  - apache-solr-solrj-*.jar
  - wstx-asl-3.2.7.jar
  - slf4j-api-1.5.5.jar
  - slf4j-simple-1.5.5.jar
 
 
 
 
 But its giving me error as 'NoClassDefFoundError:
 org/apache/solr/client/solrj/SolrServerException'.
 Can anyone tell me where did i go wrong?
 



RE: solr user

2010-09-07 Thread Dave Searle
You probably need to use the file:// moniker - if using firefox, install 
firebug and use the net panel to see if the includes load

-Original Message-
From: ankita shinde [mailto:ankitashinde...@gmail.com] 
Sent: 07 September 2010 18:22
To: solr-user@lucene.apache.org
Subject: solr user

hello all,

I am working with Ajax Solr and am trying to send a request to Solr to
retrieve all XML documents. I created a folder named source on the C
drive that contains all the .js files. I tried the following code, but
it gives the error "AjaxSolr is not defined".
Can anyone please guide me?




<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
  <title>AJAX Solr</title>

  <link rel="stylesheet" type="text/css" href="css/reuters.css"
media="screen" />


  <script type="text/javascript" src="C:/source/AbstractManager.js"></script>
  <script type="text/javascript" src="C:/source/Manager.jquery.js"></script>
  <script type="text/javascript" src="C:/source/Parameter.js"></script>
  <script type="text/javascript" src="C:/source/ParameterStore.js"></script>
  <script type="text/javascript" src="C:/source/AbstractWidget.js"></script>
  <script type="text/javascript" src="C:/source/ResultWidget.2.js"></script>

  <script type="text/javascript" src="thm.2.js"></script>

  <script type="text/javascript" src="jquery.min.js"></script>
  <script type="text/javascript" src="retuers.js"></script>

  <script type="text/javascript" src="C:/source/Core.js"></script>

</head>
<body>
  <div id="wrap">
    <div id="header">
      <h1>AJAX Solr Demonstration</h1>
      <h2>Browse Reuters business news from 1987</h2>
    </div>

    <div class="right">
      <div id="result">
        <div id="navigation">
          <ul id="pager"></ul>
          <div id="pager-header"></div>
        </div>
        <div id="docs"></div>
      </div>
    </div>

    <div class="left">
      <h2>Current Selection</h2>
      <ul id="selection"></ul>

      <h2>Search</h2>
      <span id="search_help">(press ESC to close suggestions)</span>
      <ul id="search">
        <input type="text" id="query" name="query"/>
      </ul>

      <h2>Top Topics</h2>
      <div class="tagcloud" id="topics"></div>

      <h2>Top Organisations</h2>
      <div class="tagcloud" id="organisations"></div>

      <h2>Top Exchanges</h2>
      <div class="tagcloud" id="exchanges"></div>

      <h2>By Country</h2>
      <div id="countries"></div>
      <div id="preview"></div>

      <h2>By Date</h2>
      <div id="calendar"></div>

      <div class="clear"></div>
    </div>
    <div class="clear"></div>
  </div>
</body>
</html>
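Applying that suggestion to the markup above, each bare drive path becomes a file:/// URL, or better, a path relative to the HTML page itself. Hypothetical corrected includes:

```html
<!-- absolute file URL -->
<script type="text/javascript" src="file:///C:/source/Manager.jquery.js"></script>
<!-- or, preferably, a path relative to the page's own location -->
<script type="text/javascript" src="source/Manager.jquery.js"></script>
```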


Re: solr user

2010-09-03 Thread Lance Norskog
Naming fields something_t but declaring them string will either not
work, or cause confusion.

On Thu, Sep 2, 2010 at 6:49 AM, kenf_nc ken.fos...@realestate.com wrote:

 You are querying for 'branch' and trying to place it in 'skill'.

 Also, you have Name and Column backwards, it should be:

 <field column="id" name="id"/>
 <field column="name" name="name"/>
 <field column="city" name="city_t"/>
 <field column="skill" name="skill_t"/>
 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/solr-user-tp1404814p1406343.html
 Sent from the Solr - User mailing list archive at Nabble.com.




-- 
Lance Norskog
goks...@gmail.com


Re: solr user

2010-09-02 Thread Pavan Gupta
Hi Ankita,
One reason could be that you are using area_t instead of city_t for mapping.
So the association may not be taking place in Solr. Have you tried searching
on skill? That should have worked for you.
Pavan

On Thu, Sep 2, 2010 at 12:10 PM, ankita shinde ankitashinde...@gmail.comwrote:

 Hi,
 I am able to index all the entries in my table named info. The table has
 four columns: id, name, city and skill.
 I have written a data-config file as follows:

 <dataConfig>
   <dataSource type="JdbcDataSource"
     driver="com.mysql.jdbc.Driver"
     url="jdbc:mysql://3307/dbname"
     user="user-name"
     password="password"/>
   <document>
     <entity name="id"
         query="select id,name,city,branch from info">
       <field name="id" column="id"/>
       <field name="name" column="name"/>
       <field name="city" column="city_t"/>
       <field name="skill" column="skill_t"/>
     </entity>
   </document>
 </dataConfig>




 *And the entries made in schema.xml are:*

 <field name="area_t" type="string" indexed="true" stored="true"/>
 <field name="skill_t" type="string" indexed="true" stored="true"/>


 entries for id and name are already present.


 And I have also added the DataImportHandler request handler in solrconfig.xml as follows:

 <requestHandler name="/dataimport"
     class="org.apache.solr.handler.dataimport.DataImportHandler">
   <lst name="defaults">
     <str name="config">data-config.xml</str>
   </lst>
 </requestHandler>



 Data is successfully indexed, but I can find results only for the
 column 'name'. For the other columns no results are returned.



Re: solr user

2010-09-02 Thread kenf_nc

You are querying for 'branch' and trying to place it in 'skill'.

Also, you have Name and Column backwards, it should be:

<field column="id" name="id"/>
<field column="name" name="name"/>
<field column="city" name="city_t"/>
<field column="skill" name="skill_t"/>
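Putting both fixes together (and selecting skill rather than branch in the SQL), the entity would look roughly like this; note the schema must also define city_t, since the original schema declared area_t instead:

```xml
<entity name="info"
    query="select id,name,city,skill from info">
  <field column="id" name="id"/>
  <field column="name" name="name"/>
  <field column="city" name="city_t"/>
  <field column="skill" name="skill_t"/>
</entity>
```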
-- 
View this message in context: 
http://lucene.472066.n3.nabble.com/solr-user-tp1404814p1406343.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: solr-user@lucene.apache.org

2010-03-27 Thread MitchK

Excuse me again... it seems like my mail-provider has changed something. I
hope this message won't pend again.

Thank you. 
-- 
View this message in context: 
http://n3.nabble.com/solr-user-lucene-apache-org-tp679673p679684.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: [solr-user] Upgrade from 1.2 to 1.3 gives 3x slowdown

2009-04-15 Thread Fergus McMenemie
On Apr 2, 2009, at 9:23 AM, Fergus McMenemie wrote:

 Grant,



 I should note, however, that the speed difference you are seeing may
 not be as pronounced as it appears.  If I recall during ApacheCon, I
 commented on how long it takes to shutdown your Solr instance when
 exiting it.  That time it takes is in fact Solr doing the work that
 was put off by not committing earlier and having all those deletes
 pile up.

 I am confused about work that was put off vs committing. My script
 was doing a commit right after the CVS import, and you are right
 about the massive times required to shut tomcat down. But in my tests
 the time taken to do the commit was under a second, yet I had to allow
 300secs for tomcat shutdown. Also I don't have any duplicates. So
 what sort of work was being done at shutdown that was not being done
 by a commit? Optimise!


The work being done is addressing the deletes, AIUI, but of course  
there are other things happening during shutdown, too.
There are no deletes to do. It was a clean index to begin with
and there were no duplicates.

How long is the shutdown if you do a commit first and then a shutdown?
Still very long, sometimes 300sec. My script always did a commit!

At any rate, I don't know that there is a satisfying answer to the  
larger issue due to things like the fsync stuff, which is an  
overall win for Lucene/Solr despite it being slower.  Have you  
tried running the tests on other machines (non-Mac?)
Nope. Although next week I will have a real PC running Vista, so 
I could try it there.

I think we should knock this on the head and move on. I rarely
need to index this content and I can take the performance hit,
and of course your work around provides a good speed up. 

Regards Fergus.
-- 

===
Fergus McMenemie   Email:fer...@twig.me.uk
Techmore Ltd   Phone:(UK) 07721 376021

Unix/Mac/Intranets Analyst Programmer
===


Re: [solr-user] Upgrade from 1.2 to 1.3 gives 3x slowdown

2009-04-15 Thread Ryan McKinley


The work being done is addressing the deletes, AIUI, but of course
there are other things happening during shutdown, too.

There are no deletes to do. It was a clean index to begin with
and there were no duplicates.



I have not followed this thread, so forgive me if this has already  
been suggested


If you know that there are not any duplicates, have you tried indexing  
with allowDups=true?


It will not change the fsync cost, but it may reduce some other  
checking times.


ryan


Re: [solr-user] Upgrade from 1.2 to 1.3 gives 3x slowdown

2009-04-02 Thread Fergus McMenemie
On Apr 1, 2009, at 9:39 AM, Fergus McMenemie wrote:

 Grant,

 Redoing the work with your patch applied does not seem to


 make a difference! Is this the expected result?

No, I didn't expect SOLR-1095 to fix the problem. Overwrite=false plus
1095 does, however, AFAICT from your last line, right?



 I did run it again using the full file, this time using my Imac:-
  643465  took  22min 14sec  2008-04-01
  734796        73min 58sec  2009-01-15
  758795        70min 55sec  2009-03-26
 Again using only the first 1M records with commit=false&overwrite=true:-
  643465  took  2m51.516s    2008-04-01
  734796        7m29.326s    2009-01-15
  758795        8m18.403s    2009-03-26
  SOLR-1095     7m41.699s
 this time with commit=true&overwrite=true.
  643465  took  2m49.200s    2008-04-01
  734796        8m27.414s    2009-01-15
  758795        9m32.459s    2009-03-26
  SOLR-1095     7m58.825s
 this time with commit=false&overwrite=false.
  643465  took  2m46.149s    2008-04-01
  734796        3m29.909s    2009-01-15
  758795        3m26.248s    2009-03-26
  SOLR-1095     2m49.997s

Grant,

Hmmm, the big difference is made by overwrite=false. But,
can you explain why overwrite=false makes such a difference.
I am starting off with an empty index and I have checked the
content there are no duplicates in the uniqueKey field.

I guess if overwrite=false then a few checks can be removed
from the indexing process, and if I am confident that my content
contains no duplicates then this is a good speed up. 

http://wiki.apache.org/solr/UpdateCSV says that if overwrite
is true (the default) then documents are overwritten based on the
uniqueKey. However, what will Solr/Lucene do if the uniqueKey
is not unique and overwrite=false?
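For reference, the two modes of the CSV handler differ only in a URL parameter. A minimal sketch (the localhost URL and file name are just placeholders, and the script only prints the requests rather than sending them to a server):

```shell
# Dry-run sketch of the two UpdateCSV modes; `run` just prints each
# command so nothing is sent to a real server.
SOLR="http://localhost:8983/solr"
run() { echo "$@"; }

# overwrite=true (the default): each incoming row first deletes any
# existing document with the same uniqueKey, then adds the new one.
run curl "$SOLR/update/csv?commit=false&overwrite=true" \
    --data-binary @geonames.txt -H 'Content-Type: text/plain; charset=utf-8'

# overwrite=false: the per-row delete is skipped, so rows that share a
# uniqueKey simply end up as duplicate documents in the index.
run curl "$SOLR/update/csv?commit=false&overwrite=false" \
    --data-binary @geonames.txt -H 'Content-Type: text/plain; charset=utf-8'
```

Drop the `run` wrapper to execute the requests for real.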

fergus: perl -nlaF\t -e 'print $F[2];' geonames.txt | wc -l
 100
fergus: perl -nlaF\t -e 'print $F[2];' geonames.txt | sort -u | wc -l
 100
fergus: /usr/bin/head geonames.txt
RC  UFI UNI LAT LONGDMS_LAT DMS_LONGMGRSJOG 
FC  DSG PC  CC1 ADM1ADM2POP ELEVCC2 NT  
LC  SHORT_FORM  GENERIC SORT_NAME   FULL_NAME   FULL_NAME_ND
MODIFY_DATE
1   -130782860524   12.47   -69.9   122800  -695400 
19PDP0219578323 ND19-14 T   MT  AA  00  
PALUMARGA   Palu Marga  Palu Marga  1995-03-23
1   -1307756-189172012.5-70.016667  123000  -700100 
19PCP8952982056 ND19-14 P   PPLX

PS. do you want me to do some kind of chop through the
different versions to see where the slow down happened
or are you happy you have nailed it?
-- 

===
Fergus McMenemie   Email:fer...@twig.me.uk
Techmore Ltd   Phone:(UK) 07721 376021

Unix/Mac/Intranets Analyst Programmer
===


Re: [solr-user] Upgrade from 1.2 to 1.3 gives 3x slowdown

2009-04-02 Thread Grant Ingersoll


On Apr 2, 2009, at 4:02 AM, Fergus McMenemie wrote:

Grant,

Hmmm, the big difference is made by overwrite=false. But,
can you explain why overwrite=false makes such a difference.
I am starting off with an empty index and I have checked the
content there are no duplicates in the uniqueKey field.

I guess if overwrite=false then a few checks can be removed
from the indexing process, and if I am confident that my content
contains no duplicates then this is a good speed up.

http://wiki.apache.org/solr/UpdateCSV says that if overwrite
is true (the default) then documents are overwritten based on the
uniqueKey. However, what will Solr/Lucene do if the uniqueKey
is not unique and overwrite=false?


overwrite=false means Solr does not issue deletes first, meaning if  
you have a doc w/ that id already, you will now have two docs with  
that id.   unique Id is enforced by Solr, not by Lucene.


Even if you can't guarantee uniqueness, you can still do overwrite =  
false as a workaround using the suggestion I gave you in a prior email:
1. Add a new field that is unique for your data source, but is the  
same for all records in that data source.  i.e. type = geonames.txt
2. Before updating, issue a delete by query for the value of that  
type, which will delete all records with that term

3. Do your indexing with overwrite = false
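Spelled out as commands, the three steps might look like this. A hedged sketch only: the field name `type`, the value `fergs-csv`, and the localhost URL are illustrative, and `run` prints each request instead of executing it:

```shell
# Dry-run sketch of the batch-delete + overwrite=false workaround.
SOLR="http://localhost:8983/solr"
run() { echo "$@"; }   # remove the wrapper to actually send the requests

# Step 1 is a data change, not a request: every row from this source
# carries the same marker, e.g. a constant column type=fergs-csv.

# Step 2: batch-delete everything previously loaded from this source.
run curl "$SOLR/update" -H 'Content-Type: text/xml' \
    --data-binary '<delete><query>type:fergs-csv</query></delete>'

# Step 3: re-add the whole file without per-document delete checks,
# then commit once at the end.
run curl "$SOLR/update/csv?overwrite=false&commit=false" \
    --data-binary @geonames.txt -H 'Content-Type: text/plain; charset=utf-8'
run curl "$SOLR/update" -H 'Content-Type: text/xml' --data-binary '<commit/>'
```

The net effect is one batch delete followed by one batch add, instead of an implicit delete per added document.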

I should note, however, that the speed difference you are seeing may  
not be as pronounced as it appears.  If I recall during ApacheCon, I  
commented on how long it takes to shutdown your Solr instance when  
exiting it.  That time it takes is in fact Solr doing the work that  
was put off by not committing earlier and having all those deletes  
pile up.


Thus, while it is likely that your older version is still faster due  
to the new fsync stuff in Lucene, it may not be that much faster.  I  
think you could see this by actually doing commit = true, but I'm not  
100% sure.






fergus: perl -nlaF\t -e 'print $F[2];' geonames.txt | wc -l
100
fergus: perl -nlaF\t -e 'print $F[2];' geonames.txt | sort -u |  
wc -l

100
fergus: /usr/bin/head geonames.txt
RC	UFI	UNI	LAT	LONG	DMS_LAT	DMS_LONG	MGRS	JOG	FC	DSG	PC	CC1	ADM1	 
ADM2	POP	ELEV	CC2	NT	LC	SHORT_FORM	GENERIC	SORT_NAME	FULL_NAME	 
FULL_NAME_ND	MODIFY_DATE
1	-1307828	60524	12.47	-69.9	122800	-695400	19PDP0219578323	 
ND19-14	T	MT		AA	00	PALUMARGA	Palu Marga	Palu Marga	1995-03-23
1	-1307756	-1891720	12.5	-70.016667	123000	-700100	19PCP8952982056	 
ND19-14	P	PPLX	


PS. do you want me to do some kind of chop through the
different versions to see where the slow down happened
or are you happy you have nailed it?


--
Grant Ingersoll
http://www.lucidimagination.com/

Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids)  
using Solr/Lucene:

http://www.lucidimagination.com/search



Re: [solr-user] Upgrade from 1.2 to 1.3 gives 3x slowdown

2009-04-02 Thread Grant Ingersoll


On Apr 2, 2009, at 9:23 AM, Fergus McMenemie wrote:


Grant,




I should note, however, that the speed difference you are seeing may
not be as pronounced as it appears.  If I recall during ApacheCon, I
commented on how long it takes to shutdown your Solr instance when
exiting it.  That time it takes is in fact Solr doing the work that
was put off by not committing earlier and having all those deletes
pile up.


I am confused about work that was put off vs committing. My script
was doing a commit right after the CSV import, and you are right
about the massive times required to shut tomcat down. But in my tests
the time taken to do the commit was under a second, yet I had to allow
300secs for tomcat shutdown. Also I don't have any duplicates. So
what sort of work was being done at shutdown that was not being done
by a commit? Optimise!



The work being done is addressing the deletes, AIUI, but of course  
there are other things happening during shutdown, too.


How long is the shutdown if you do a commit first and then a shutdown?

At any rate, I don't know that there is a satisfying answer to the
larger issue due to things like the fsync stuff, which is an
overall win for Lucene/Solr despite it being slower.  Have you
tried running the tests on other machines (non-Mac)?


Re: [solr-user] Upgrade from 1.2 to 1.3 gives 3x slowdown

2009-04-01 Thread Fergus McMenemie
Grant,

Redoing the work with your patch applied does not seem to 
make a difference! Is this the expected result?

I did run it again using the full file, this time using my Imac:-
 643465  took  22min 14sec  2008-04-01
 734796        73min 58sec  2009-01-15
 758795        70min 55sec  2009-03-26
Again using only the first 1M records with commit=false&overwrite=true:-
 643465  took  2m51.516s    2008-04-01
 734796        7m29.326s    2009-01-15
 758795        8m18.403s    2009-03-26
 SOLR-1095     7m41.699s
this time with commit=true&overwrite=true.
 643465  took  2m49.200s    2008-04-01
 734796        8m27.414s    2009-01-15
 758795        9m32.459s    2009-03-26
 SOLR-1095     7m58.825s
this time with commit=false&overwrite=false.
 643465  took  2m46.149s    2008-04-01
 734796        3m29.909s    2009-01-15
 758795        3m26.248s    2009-03-26
 SOLR-1095     2m49.997s


-- 

===
Fergus McMenemie   Email:fer...@twig.me.uk
Techmore Ltd   Phone:(UK) 07721 376021

Unix/Mac/Intranets Analyst Programmer
===


Re: [solr-user] Upgrade from 1.2 to 1.3 gives 3x slowdown

2009-04-01 Thread Grant Ingersoll


On Apr 1, 2009, at 9:39 AM, Fergus McMenemie wrote:


Grant,

Redoing the work with your patch applied does not seem to




make a difference! Is this the expected result?


No, I didn't expect SOLR-1095 to fix the problem. Overwrite=false plus
1095 does, however, AFAICT from your last line, right?





I did run it again using the full file, this time using my Imac:-
 643465  took  22min 14sec  2008-04-01
 734796        73min 58sec  2009-01-15
 758795        70min 55sec  2009-03-26
Again using only the first 1M records with commit=false&overwrite=true:-
 643465  took  2m51.516s    2008-04-01
 734796        7m29.326s    2009-01-15
 758795        8m18.403s    2009-03-26
 SOLR-1095     7m41.699s
this time with commit=true&overwrite=true.
 643465  took  2m49.200s    2008-04-01
 734796        8m27.414s    2009-01-15
 758795        9m32.459s    2009-03-26
 SOLR-1095     7m58.825s
this time with commit=false&overwrite=false.
 643465  took  2m46.149s    2008-04-01
 734796        3m29.909s    2009-01-15
 758795        3m26.248s    2009-03-26
 SOLR-1095     2m49.997s




--
Grant Ingersoll
http://www.lucidimagination.com/

Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids)  
using Solr/Lucene:

http://www.lucidimagination.com/search



Re: [solr-user] Upgrade from 1.2 to 1.3 gives 3x slowdown

2009-03-31 Thread Grant Ingersoll
svn co -r REV_NUM  https://svn.apache.org/repos/asf/lucene/solr/trunk  
solr-REV_NUM
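Instantiated for the revision asked about earlier in the thread, that template becomes (only echoed here, since a real checkout needs network access to the Apache repository):

```shell
# Print the concrete checkout command for rev 701485.
echo "svn co -r 701485 https://svn.apache.org/repos/asf/lucene/solr/trunk solr-701485"
```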


-Grant


On Mar 30, 2009, at 4:55 PM, Fergus McMenemie wrote:


Can you verify that rev 701485 still performs reasonably well?  This
is from October 2008 and I get similar results to the earlier rev.
Am now trying some other versions between October and when you first
reported the issue in November.


OK. Can you tell me how to get a hold of revision 701485. What is the
magic svn line?



On Mar 30, 2009, at 3:37 PM, Grant Ingersoll wrote:


Fergus,

Is rev 643465 the absolute latest you tried that still performs?
i.e. every revision after is slower?

-Grant

On Mar 30, 2009, at 12:45 PM, Grant Ingersoll wrote:


Fergus,

I think the problem may actually be due to something that was
introduced by a change to Solr's StopFilterFactory and the way it
loads the stop words set.  See https://issues.apache.org/jira/browse/SOLR-1095

I am in the process of testing it out and will let you know.

-Grant

On Mar 28, 2009, at 11:00 AM, Grant Ingersoll wrote:


Hey Fergus,

Finally got a chance to run your scripts, etc. per the thread:
http://www.lucidimagination.com/search/document/5c3de15a4e61095c/upgrade_from_1_2_to_1_3_gives_3x_slowdown_script#8324a98d8840c623

I can reproduce your slowdown.

One oddity with rev 643465 is:

On the old version, there is an exception during startup:
Mar 28, 2009 10:44:31 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.NullPointerException
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:129)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:125)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:953)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:968)
    at org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:50)
    at org.apache.solr.core.SolrCore$3.call(SolrCore.java:797)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    at java.lang.Thread.run(Thread.java:637)

I see two things in CHANGES.txt that might apply, but I'm not  
sure:

1. I think commons-csv was upgraded
2. The CSV loader stuff was refactored to share common code

I'm still investigating.

-Grant




--
Grant Ingersoll
http://www.lucidimagination.com/

Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids)  
using Solr/Lucene:

http://www.lucidimagination.com/search



Re: [solr-user] Upgrade from 1.2 to 1.3 gives 3x slowdown

2009-03-31 Thread Grant Ingersoll
Can you try adding overwrite=false and running against the latest  
version?  My current working theory is that Solr/Lucene has changed  
how deletes are handled such that work that was deferred before is now  
not deferred as often.  In fact, you are not seeing this cost paid (or  
at least not noticing it) because you are not committing, but I  
believe you do see it when you are closing down Solr, which is why it  
takes so long to exit.  I also think that Lucene adding fsync() into  
the equation may cause some slow down, but that is a penalty we are  
willing to pay as it gives us higher data integrity.


So, depending on how you have your data, I think a workaround is to:
Add a field that contains a single term identifying the data type for  
this particular CSV file, i.e. something like field: type, value:  
fergs-csv
Then, before indexing, you can issue a Delete By Query: type:fergs-csv  
and then add your CSV file using overwrite=false.  This amounts to a  
batch delete followed by a batch add, but without the add having to  
issue deletes for each add.


In the meantime, I'm trying to see if I can pinpoint down a specific  
change and see if there is anything that might help it perform better.


-Grant

On Mar 30, 2009, at 4:52 PM, Fergus McMenemie wrote:


Grant,

After all my playing about at boot camp, I gave things a rest. It
was not till months later that I got back to looking at Solr again.
So after 643465 (2008-Apr-01) the next version I tried was 694377
from (2008-Sep-11). Nothing in between. Yep, so 643465 is the latest
version I tried that still performs. Every later revision is slower.

However I need to repeat the tests using 643465, 694377 and whatever
is the latest version. On my macbook I am only seeing a 2x slowdown
of 643465 vs. today, whereas I had been seeing a 3x slowdown using
my Imac.

Fergus



Fergus,

Is rev 643465 the absolute latest you tried that still performs?   
i.e.

every revision after is slower?

-Grant

On Mar 30, 2009, at 12:45 PM, Grant Ingersoll wrote:


Fergus,

I think the problem may actually be due to something that was
introduced by a change to Solr's StopFilterFactory and the way it
loads the stop words set.  See https://issues.apache.org/jira/browse/SOLR-1095

I am in the process of testing it out and will let you know.

-Grant

On Mar 28, 2009, at 11:00 AM, Grant Ingersoll wrote:


Hey Fergus,

Finally got a chance to run your scripts, etc. per the thread:
http://www.lucidimagination.com/search/document/5c3de15a4e61095c/upgrade_from_1_2_to_1_3_gives_3x_slowdown_script#8324a98d8840c623

I can reproduce your slowdown.

One oddity with rev 643465 is:

On the old version, there is an exception during startup:
Mar 28, 2009 10:44:31 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.NullPointerException
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:129)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:125)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:953)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:968)
    at org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:50)
    at org.apache.solr.core.SolrCore$3.call(SolrCore.java:797)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    at java.lang.Thread.run(Thread.java:637)

I see two things in CHANGES.txt that might apply, but I'm not sure:
1. I think commons-csv was upgraded
2. The CSV loader stuff was refactored to share common code

I'm still investigating.

-Grant




--
Grant Ingersoll
http://www.lucidimagination.com/

Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids)  
using Solr/Lucene:

http://www.lucidimagination.com/search



Re: [solr-user] Upgrade from 1.2 to 1.3 gives 3x slowdown

2009-03-31 Thread Fergus McMenemie
Grant,

I am messing with the script, and with your tip I expect I can
make it recurse over as many releases as needed.

I did run it again using the full file, this time using my Imac:-
 643465  took  22min 14sec  2008-04-01
 734796        73min 58sec  2009-01-15
 758795        70min 55sec  2009-03-26
I then ran it again using only the first 1M records:-
 643465  took  2m51.516s    2008-04-01
 734796        7m29.326s    2009-01-15
 758795        8m18.403s    2009-03-26
this time with commit=true.
 643465  took  2m49.200s    2008-04-01
 734796        8m27.414s    2009-01-15
 758795        9m32.459s    2009-03-26
this time with commit=false&overwrite=false.
 643465  took  2m46.149s    2008-04-01
 734796        3m29.909s    2009-01-15
 758795        3m26.248s    2009-03-26

Just read your latest post. I will apply the patches and retest
the above.

Can you try adding overwrite=false and running against the latest  
version?  My current working theory is that Solr/Lucene has changed  
how deletes are handled such that work that was deferred before is now  
not deferred as often.  In fact, you are not seeing this cost paid (or  
at least not noticing it) because you are not committing, but I  
believe you do see it when you are closing down Solr, which is why it  
takes so long to exit.
It can take ages! (15min to get tomcat to quit). Also my script does
have the separate commit step, which does not take any time!

I also think that Lucene adding fsync() into  
the equation may cause some slow down, but that is a penalty we are  
willing to pay as it gives us higher data integrity.
Data integrity is always good. However if performance seems
unreasonable, user/customers tend to take things into their
own hands and kill the process or machine. This tends to be
very bad for data integrity.

So, depending on how you have your data, I think a workaround is to:
Add a field that contains a single term identifying the data type for  
this particular CSV file, i.e. something like field: type, value:  
fergs-csv
Then, before indexing, you can issue a Delete By Query: type:fergs-csv  
and then add your CSV file using overwrite=false.  This amounts to a  
batch delete followed by a batch add, but without the add having to  
issue deletes for each add.
Ok.. but... for these test cases I am starting off with an empty
index. The script does a rm -rf solr/data before tomcat is launched.
So I do not understand how the above helps. UNLESS there are duplicate
gaz entries.

In the meantime, I'm trying to see if I can pinpoint down a specific  
change and see if there is anything that might help it perform better.

-Grant


-- 

===
Fergus McMenemie   Email:fer...@twig.me.uk
Techmore Ltd   Phone:(UK) 07721 376021

Unix/Mac/Intranets Analyst Programmer
===


Re: [solr-user] Upgrade from 1.2 to 1.3 gives 3x slowdown

2009-03-30 Thread Grant Ingersoll

Fergus,

I think the problem may actually be due to something that was  
introduced by a change to Solr's StopFilterFactory and the way it  
loads the stop words set.  See https://issues.apache.org/jira/browse/SOLR-1095


I am in the process of testing it out and will let you know.

-Grant

On Mar 28, 2009, at 11:00 AM, Grant Ingersoll wrote:


Hey Fergus,

Finally got a chance to run your scripts, etc. per the thread:
http://www.lucidimagination.com/search/document/5c3de15a4e61095c/upgrade_from_1_2_to_1_3_gives_3x_slowdown_script#8324a98d8840c623

I can reproduce your slowdown.

One oddity with rev 643465 is:

On the old version, there is an exception during startup:
Mar 28, 2009 10:44:31 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.NullPointerException
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:129)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:125)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:953)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:968)
    at org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:50)
    at org.apache.solr.core.SolrCore$3.call(SolrCore.java:797)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    at java.lang.Thread.run(Thread.java:637)

I see two things in CHANGES.txt that might apply, but I'm not sure:
1. I think commons-csv was upgraded
2. The CSV loader stuff was refactored to share common code

I'm still investigating.

-Grant


--
Grant Ingersoll
http://www.lucidimagination.com/

Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids)  
using Solr/Lucene:

http://www.lucidimagination.com/search



Re: [solr-user] Upgrade from 1.2 to 1.3 gives 3x slowdown

2009-03-30 Thread Grant Ingersoll

Fergus,

Is rev 643465 the absolute latest you tried that still performs?  i.e.  
every revision after is slower?


-Grant

On Mar 30, 2009, at 12:45 PM, Grant Ingersoll wrote:


Fergus,

I think the problem may actually be due to something that was  
introduced by a change to Solr's StopFilterFactory and the way it  
loads the stop words set.  See https://issues.apache.org/jira/browse/SOLR-1095


I am in the process of testing it out and will let you know.

-Grant

On Mar 28, 2009, at 11:00 AM, Grant Ingersoll wrote:


Hey Fergus,

Finally got a chance to run your scripts, etc. per the thread:
http://www.lucidimagination.com/search/document/5c3de15a4e61095c/upgrade_from_1_2_to_1_3_gives_3x_slowdown_script#8324a98d8840c623

I can reproduce your slowdown.

One oddity with rev 643465 is:

On the old version, there is an exception during startup:
Mar 28, 2009 10:44:31 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.NullPointerException
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:129)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:125)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:953)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:968)
    at org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:50)
    at org.apache.solr.core.SolrCore$3.call(SolrCore.java:797)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    at java.lang.Thread.run(Thread.java:637)

I see two things in CHANGES.txt that might apply, but I'm not sure:
1. I think commons-csv was upgraded
2. The CSV loader stuff was refactored to share common code

I'm still investigating.

-Grant





--
Grant Ingersoll
http://www.lucidimagination.com/

Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids)  
using Solr/Lucene:

http://www.lucidimagination.com/search



Re: [solr-user] Upgrade from 1.2 to 1.3 gives 3x slowdown

2009-03-30 Thread Grant Ingersoll
Can you verify that rev 701485 still performs reasonably well?  This  
is from October 2008 and I get similar results to the earlier rev. 
Am now trying some other versions between October and when you first  
reported the issue in November.


-Grant

On Mar 30, 2009, at 3:37 PM, Grant Ingersoll wrote:


Fergus,

Is rev 643465 the absolute latest you tried that still performs?   
i.e. every revision after is slower?


-Grant

On Mar 30, 2009, at 12:45 PM, Grant Ingersoll wrote:


Fergus,

I think the problem may actually be due to something that was  
introduced by a change to Solr's StopFilterFactory and the way it  
loads the stop words set.  See https://issues.apache.org/jira/browse/SOLR-1095


I am in the process of testing it out and will let you know.

-Grant

On Mar 28, 2009, at 11:00 AM, Grant Ingersoll wrote:


Hey Fergus,

Finally got a chance to run your scripts, etc. per the thread:
http://www.lucidimagination.com/search/document/5c3de15a4e61095c/upgrade_from_1_2_to_1_3_gives_3x_slowdown_script#8324a98d8840c623

I can reproduce your slowdown.

One oddity with rev 643465 is:

On the old version, there is an exception during startup:
Mar 28, 2009 10:44:31 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.NullPointerException
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:129)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:125)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:953)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:968)
    at org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:50)
    at org.apache.solr.core.SolrCore$3.call(SolrCore.java:797)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    at java.lang.Thread.run(Thread.java:637)

I see two things in CHANGES.txt that might apply, but I'm not sure:
1. I think commons-csv was upgraded
2. The CSV loader stuff was refactored to share common code

I'm still investigating.

-Grant





--
Grant Ingersoll
http://www.lucidimagination.com/

Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids)  
using Solr/Lucene:

http://www.lucidimagination.com/search



Re: [solr-user] Upgrade from 1.2 to 1.3 gives 3x slowdown

2009-03-30 Thread Fergus McMenemie
Grant,

After all my playing about at boot camp, I gave things a rest. It
was not till months later that I got back to looking at Solr again.
So after 643465 (2008-Apr-01) the next version I tried was 694377
from (2008-Sep-11). Nothing in between. Yep, so 643465 is the latest
version I tried that still performs. Every later revision is slower.

However I need to repeat the tests using 643465, 694377 and whatever
is the latest version. On my macbook I am only seeing a 2x slowdown
of 643465 vs. today, whereas I had been seeing a 3x slowdown using
my Imac.

Fergus


Fergus,

Is rev 643465 the absolute latest you tried that still performs?  i.e.  
every revision after is slower?

-Grant

On Mar 30, 2009, at 12:45 PM, Grant Ingersoll wrote:

 Fergus,

 I think the problem may actually be due to something that was  
 introduced by a change to Solr's StopFilterFactory and the way it  
 loads the stop words set.  See 
 https://issues.apache.org/jira/browse/SOLR-1095

 I am in the process of testing it out and will let you know.

 -Grant

 On Mar 28, 2009, at 11:00 AM, Grant Ingersoll wrote:

 Hey Fergus,

 Finally got a chance to run your scripts, etc. per the thread:
 http://www.lucidimagination.com/search/document/5c3de15a4e61095c/upgrade_from_1_2_to_1_3_gives_3x_slowdown_script#8324a98d8840c623

 I can reproduce your slowdown.

 One oddity with rev 643465 is:

 On the old version, there is an exception during startup:
 Mar 28, 2009 10:44:31 AM org.apache.solr.common.SolrException log
 SEVERE: java.lang.NullPointerException
     at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:129)
     at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:125)
     at org.apache.solr.core.SolrCore.execute(SolrCore.java:953)
     at org.apache.solr.core.SolrCore.execute(SolrCore.java:968)
     at org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:50)
     at org.apache.solr.core.SolrCore$3.call(SolrCore.java:797)
     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
     at java.util.concurrent.FutureTask.run(FutureTask.java:138)
     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
     at java.lang.Thread.run(Thread.java:637)

 I see two things in CHANGES.txt that might apply, but I'm not sure:
 1. I think commons-csv was upgraded
 2. The CSV loader stuff was refactored to share common code

 I'm still investigating.

 -Grant


-- 

===
Fergus McMenemie   Email:fer...@twig.me.uk
Techmore Ltd   Phone:(UK) 07721 376021

Unix/Mac/Intranets Analyst Programmer
===


Re: [solr-user] Upgrade from 1.2 to 1.3 gives 3x slowdown

2009-03-30 Thread Fergus McMenemie
Can you verify that rev 701485 still performs reasonably well?  This  
is from October 2008 and I get similar results to the earlier rev. 
Am now trying some other versions between October and when you first  
reported the issue in November.

OK. Can you tell me how to get a hold of revision 701485. What is the
magic svn line?
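[Editor's note: the thread doesn't include the checkout line, but the usual incantation for pinning a working copy at a given revision is sketched below. The repository path is an assumption based on where Solr's trunk lived in the Apache SVN tree at the time; adjust it for your mirror.]

```shell
# Assumed repository URL -- Solr sat under the Lucene project's
# Apache SVN tree in this era; change the path if yours differs.
REPO=http://svn.apache.org/repos/asf/lucene/solr/trunk

# Fetch a fresh working copy pinned at revision 701485:
svn checkout -r 701485 "$REPO" solr-r701485

# Or, inside an existing working copy, rewind it to that revision:
# svn update -r 701485
```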


On Mar 30, 2009, at 3:37 PM, Grant Ingersoll wrote:

 Fergus,

 Is rev 643465 the absolute latest you tried that still performs?   
 i.e. every revision after is slower?

 -Grant

 On Mar 30, 2009, at 12:45 PM, Grant Ingersoll wrote:

 Fergus,

 I think the problem may actually be due to something that was  
 introduced by a change to Solr's StopFilterFactory and the way it  
 loads the stop words set.  See 
 https://issues.apache.org/jira/browse/SOLR-1095

 I am in the process of testing it out and will let you know.

 -Grant

 On Mar 28, 2009, at 11:00 AM, Grant Ingersoll wrote:

 Hey Fergus,

 Finally got a chance to run your scripts, etc. per the thread:
 http://www.lucidimagination.com/search/document/5c3de15a4e61095c/upgrade_from_1_2_to_1_3_gives_3x_slowdown_script#8324a98d8840c623

 I can reproduce your slowdown.

 One oddity with rev 643465 is:

 On the old version, there is an exception during startup:
 Mar 28, 2009 10:44:31 AM org.apache.solr.common.SolrException log
 SEVERE: java.lang.NullPointerException
  at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:129)
  at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:125)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:953)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:968)
  at org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:50)
  at org.apache.solr.core.SolrCore$3.call(SolrCore.java:797)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
  at java.lang.Thread.run(Thread.java:637)

 I see two things in CHANGES.txt that might apply, but I'm not sure:
 1. I think commons-csv was upgraded
 2. The CSV loader stuff was refactored to share common code

 I'm still investigating.

 -Grant

 --
 Grant Ingersoll
 http://www.lucidimagination.com/

 Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids)  
 using Solr/Lucene:
 http://www.lucidimagination.com/search


 --
 Grant Ingersoll
 http://www.lucidimagination.com/

 Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids)  
 using Solr/Lucene:
 http://www.lucidimagination.com/search


--
Grant Ingersoll
http://www.lucidimagination.com/

Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids)  
using Solr/Lucene:
http://www.lucidimagination.com/search

-- 

===
Fergus McMenemie   Email:fer...@twig.me.uk
Techmore Ltd   Phone:(UK) 07721 376021

Unix/Mac/Intranets Analyst Programmer
===


RE: [solr-user] Upgrade from 1.2 to 1.3 gives 3x slowdown

2009-03-28 Thread Grant Ingersoll

Hey Fergus,

Finally got a chance to run your scripts, etc. per the thread:
http://www.lucidimagination.com/search/document/5c3de15a4e61095c/upgrade_from_1_2_to_1_3_gives_3x_slowdown_script#8324a98d8840c623

I can reproduce your slowdown.

One oddity with rev 643465 is:

On the old version, there is an exception during startup:
Mar 28, 2009 10:44:31 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.NullPointerException
at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:129)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:125)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:953)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:968)
at org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:50)
at org.apache.solr.core.SolrCore$3.call(SolrCore.java:797)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
at java.lang.Thread.run(Thread.java:637)

I see two things in CHANGES.txt that might apply, but I'm not sure:
1. I think commons-csv was upgraded
2. The CSV loader stuff was refactored to share common code

I'm still investigating.

-Grant


Re: [solr-user] Correct query syntax for a multivalued string field?

2008-09-09 Thread Walter Underwood
color:red AND color:green

+color:red +color:green

Either one works.

wunder

On 9/9/08 3:47 PM, hernan [EMAIL PROTECTED] wrote:

 Hey Solr users,
 
 My schema defines a field like this:
 
  <field name="color" type="string" indexed="true" required="true"
  multiValued="true"/>
 
 If I have a document indexed that has the following values for this
 field:  ['red','green']
 ...how do I write a query that says: I want all documents that have
 color = 'red' AND color = 'green'?
 
 When I try:
 
 q=color:red+AND+green
 
 I get zero documents back.  I've searched the Solr and Lucene docs and
 I can't find the right syntax although I have a hunch that this is a
 common query pattern.
 
 Please let me know if you've encountered this before and have a solution.
 
 thanks!
 hs
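[Editor's note: a quick sketch of why the original attempt failed and how the two working forms from Walter's reply encode on the wire. Python standard library only; the field and values are the ones from this thread.]

```python
from urllib.parse import urlencode

# The two equivalent forms from Walter's reply:
q_and = "color:red AND color:green"   # boolean-operator form
q_req = "+color:red +color:green"     # required-clause form

# The failing attempt, q=color:red+AND+green, decodes to the query
# "color:red AND green": the field prefix applies only to "red", so
# "green" is matched against the default field instead of "color".
# Letting the client library encode spaces, ':' and '+' avoids this:
for q in (q_and, q_req):
    print(urlencode({"q": q}))
# -> q=color%3Ared+AND+color%3Agreen
# -> q=%2Bcolor%3Ared+%2Bcolor%3Agreen
```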



Re: [solr-user] Correct query syntax for a multivalued string field?

2008-09-09 Thread hernan
On Tue, Sep 9, 2008 at 3:50 PM, Walter Underwood [EMAIL PROTECTED] wrote:
 color:red AND color:green

 +color:red +color:green

 Either one works.

Just what I was looking for, thanks for the quick response and for
helping a new user.
hs

 wunder

 On 9/9/08 3:47 PM, hernan [EMAIL PROTECTED] wrote:

 Hey Solr users,

 My schema defines a field like this:

 <field name="color" type="string" indexed="true" required="true"
 multiValued="true"/>

 If I have a document indexed that has the following values for this
 field:  ['red','green']
 ...how do I write a query that says: I want all documents that have
 color = 'red' AND color = 'green'?

 When I try:

 q=color:red+AND+green

 I get zero documents back.  I've searched the Solr and Lucene docs and
 I can't find the right syntax although I have a hunch that this is a
 common query pattern.

 Please let me know if you've encountered this before and have a solution.

 thanks!
 hs




Re: Solr user interface

2008-07-13 Thread tarjei

Lars Kotthoff wrote:
 Hi all,
 
  I've written a user interface for Solr (Spring web application) which I'd be
 willing to donate if people are interested.
 
 You can see a demo here http://larsko.dyndns.org:8080/solr-ui/search.html, SVN
 repository is here http://larsko.dyndns.org/svn/solr-ui/. Note in particular
 http://larsko.dyndns.org/svn/solr-ui/documentation/manual.pdf for a short
 manual. Please be patient, the server this is running on doesn't have a lot of
 processing power or upstream bandwidth ;)
 
 The purpose of adding this user interface to Solr would be twofold; first, 
 serve
 as a demonstration of Solr's capabilities (running on a server linked to from
 the website, probably like the demo above), and second, give people a starting
 point/inspiration for implementing their own user interfaces.
 
 The special feature is that it supports some form of hierarchical faceting
 (explained in the manual). The data the demo searches comes from the wikipedia
 selection for schools. The subject index pages are used to build the 
 hierarchy.
 
 Let me know what you think.

Hi, I'm not a developer of Solr, but I think this is a useful addition
as it shows how to use/code some of the more advanced features in Solr.
Also, it is a quick way to prototype Solr.

Regards,
Tarjei
 
 Thanks,
 
 Lars
