Re: How to improve this solr query?

2012-07-03 Thread Chamnap Chhorn
Lance, I didn't use wildcards at all. I only use this; the difference is
whether it's quoted or not.

q2=*apartment*
q1=*apartment*
On Tue, Jul 3, 2012 at 12:06 PM, Lance Norskog goks...@gmail.com wrote:

 q2=*apartment*
 q1=*apartment*

 These are wildcards

 On Mon, Jul 2, 2012 at 8:30 PM, Chamnap Chhorn chamnapchh...@gmail.com
 wrote:
  Hi Lance,
 
  I didn't use wildcards at all. This is a normal text search only. I need
 a
  string field because it needs to be matched exactly, and the value is
  sometimes multi-word, so quoting is necessary.
 
  By the way, if I do a super plain query, it takes at least 600ms. I'm not
  sure why. On another Solr instance with a similar amount of data, it takes
  only 50ms.
 
  I see something strange on the response, there is always
 
  <str name="command">build</str>
 
  What does that mean?
 
  On Tue, Jul 3, 2012 at 10:02 AM, Lance Norskog goks...@gmail.com
 wrote:
 
  Wildcards are slow. Leading wildcards are even slower. Is there
  some way to search that data differently? If it is a string, can you
  change it to a text field and make sure 'apartment' is a separate
  word?
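 To illustrate Lance's point (a toy sketch in Python, purely for
 illustration — this is not Solr internals): a `*apartment*` wildcard
 behaves like a substring scan over each value, while a tokenized text
 field matches 'apartment' only as a whole word.

```python
# Toy illustration of substring (wildcard-like) vs. whole-token matching.
docs = [
    "Luxury apartment for rent",
    "Spacious apartments downtown",  # plural: substring hit, not exact token
]

# *apartment* behaves like a substring scan over the value:
wildcard_hits = [d for d in docs if "apartment" in d.lower()]

# A tokenized text field matches 'apartment' as a separate word:
token_hits = [d for d in docs if "apartment" in d.lower().split()]

print(len(wildcard_hits))  # 2
print(len(token_hits))     # 1
```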
 
  On Mon, Jul 2, 2012 at 10:01 AM, Chamnap Chhorn 
 chamnapchh...@gmail.com
  wrote:
   Hi Michael,
  
    Thanks for the quick response. Based on the documentation, facet.mincount
    means that Solr will only return facet values that have at least that
    count. For me, I just want to ensure my facet counts don't include
    zero values.
   
    I tried increasing it to 10, but it's still slow even for the same query.
  
    Actually, those 13 million documents are divided into 200 portals. I
    already include fq=portal_uuid:kjkjkjk inside each nested query, but
    it's still slow.
  
   On Mon, Jul 2, 2012 at 11:47 PM, Michael Della Bitta 
   michael.della.bi...@appinions.com wrote:
  
   Hi Chamnap,
  
   The first thing that jumped out at me was facet.mincount=1. Are you
   sure you need this? Increasing this number should drastically improve
   speed.
  
   Michael Della Bitta
  
   
   Appinions, Inc. -- Where Influence Isn’t a Game.
   http://www.appinions.com
  
  
   On Mon, Jul 2, 2012 at 12:35 PM, Chamnap Chhorn 
  chamnapchh...@gmail.com
   wrote:
Hi all,
   
     I'm using Solr 3.5 with nested queries on a 4-core CPU server with
     17 GB of RAM. The problem is that my query is very slow; the average
     response time is 12 seconds against 13 million documents.
   
     What I am doing is sending the quoted string (q2) to string fields and
     the non-quoted string (q1) to other fields, and combining the results.
   
   
  
 
 facet=true&sort=score+desc&q2=*apartment*&facet.mincount=1&q1=*apartment*&
 tie=0.1&q.alt=*:*&wt=json&version=2.2&rows=20&fl=uuid&facet.query=has_map:+true&
 facet.query=has_image:+true&facet.query=has_website:+true&start=0&
 q=_query_:+{!dismax+qf='.'+fq='..'+v=$q1}+OR+_query_:+{!dismax+qf='..'+fq='...'+v=$q2}&
 facet.field={!ex%3Ddt}sub_category_uuids&facet.field={!ex%3Ddt}location_uuid
   
     I have already run a Solr optimize, but it's still slow. Any idea how
     to improve the speed? Have I done anything wrong?
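 For readability, a request like the one above can also be assembled
 programmatically, which guarantees the `&` separators and URL encoding
 come out right (a sketch in Python; the qf/fq values are the placeholder
 dots from the original mail, not real field lists):

```python
from urllib.parse import urlencode

# Sketch: rebuilding the request with proper '&' separators and encoding.
# qf/fq values are placeholders ('.', '..') exactly as in the mail above.
params = [
    ("q", "_query_: {!dismax qf='.' fq='..' v=$q1} OR "
          "_query_: {!dismax qf='..' fq='...' v=$q2}"),
    ("q1", "apartment"),
    ("q2", '"apartment"'),
    ("q.alt", "*:*"),
    ("tie", "0.1"),
    ("sort", "score desc"),
    ("rows", "20"),
    ("fl", "uuid"),
    ("facet", "true"),
    ("facet.mincount", "1"),
    ("facet.field", "{!ex=dt}sub_category_uuids"),
    ("facet.field", "{!ex=dt}location_uuid"),
    ("wt", "json"),
]
query_string = urlencode(params)
print(query_string.count("facet.field"))  # 2
```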
   
--
Chhorn Chamnap
http://chamnap.github.com/
  
  
  
  
   --
   Chhorn Chamnap
   http://chamnap.github.com/
 
 
 
  --
  Lance Norskog
  goks...@gmail.com
 
 
 
 
  --
  Chhorn Chamnap
  http://chamnap.github.com/



 --
 Lance Norskog
 goks...@gmail.com




-- 
Chhorn Chamnap
http://chamnap.github.com/


Near Real Time Indexing and Searching with solr 3.6

2012-07-03 Thread thomas

Hi,

As part of my bachelor thesis I'm trying to achieve NRT with Solr 3.6.
I've come up with a basic concept and would be thrilled if I could get
some feedback.


The main idea is to use two different indexes: one persistent on disk
and one in RAM. The plan is to route every added and modified document
to the RAM index (http://imgur.com/kLfUN). After a certain period of
time, this index would get cleared and the documents would be added to
the persistent index.


Some major problems I still have with this idea are:
- deletions of documents that are already in the persistent index
- having the same unique ID in both the RAM index and the persistent
index, as a result of an updated document
- merging search results to filter out old versions of updated documents

Would such an idea be viable to pursue?

Thanks for your time
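The last point — merging results and filtering out stale versions — could
be sketched roughly like this (Python used purely as illustration;
`version` here is a hypothetical per-document update counter, not an
existing Solr field):

```python
# Sketch: merge hits from the RAM index and the persistent index,
# keeping only the newest version of each unique ID.
# 'version' is a hypothetical monotonically increasing update counter.
ram_hits = [
    {"id": "doc1", "version": 2, "title": "updated"},
]
disk_hits = [
    {"id": "doc1", "version": 1, "title": "stale"},
    {"id": "doc2", "version": 1, "title": "unchanged"},
]

merged = {}
for hit in disk_hits + ram_hits:  # RAM hits come last, so newer versions win
    current = merged.get(hit["id"])
    if current is None or hit["version"] > current["version"]:
        merged[hit["id"]] = hit

results = sorted(merged.values(), key=lambda h: h["id"])
print([h["title"] for h in results])  # ['updated', 'unchanged']
```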



Re: Problem with sorting solr docs

2012-07-03 Thread Rafał Kuć
Hello!

Your query suggests that you are sorting on the 'name' field instead
of the latlng field (sort=name+asc).

The question is, what are you trying to achieve? Do you want to sort
your documents by distance from a given geographical point? If that's
the case, you may want to look here:
http://wiki.apache.org/solr/SpatialSearch/
and look at the possibility of sorting on the distance from a given
point.
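For example, a distance sort with the spatial support described on that
wiki page might look like the following (a sketch only; 'store' is an
assumed LatLonType field and the coordinates are arbitrary):

```
http://localhost:8080/solr/select?q=*:*&sfield=store&pt=13.2,100.1&sort=geodist()+asc
```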

-- 
Regards,
 Rafał Kuć
 Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch


Hi,
 
I have 260 docs which I want to sort on a single field latlng.
<doc>
  <str name="id">1</str>
  <str name="name">Amphoe Khanom</str>
  <str name="latlng">1.0,1.0</str>
</doc>
 
My query is :
http://localhost:8080/solr/select?q=*:*&sort=name+asc
 
This query sorts all documents except those which don't have latlng, and I
can't keep any default value for this field.
My question is: how can I sort all docs on latlng?
 
Regards
Harshvardhan Ojha  | Software Developer - Technology Development
|  MakeMyTrip.com, 243 SP Infocity, Udyog Vihar Phase 1, Gurgaon, Haryana - 
122 016, India

What's new?: Inspire - Discover an inspiring new way to plan and book travel 
online.



RE: Problem with sorting solr docs

2012-07-03 Thread Harshvardhan Ojha
Hi,

Thanks for the reply.
I want to sort my docs on the name field; it works well only when all
fields are populated.
But my latlng field is optional; not every doc has this value, so those
docs are not getting sorted.

Regards
Harshvardhan Ojha




Re: Problem with sorting solr docs

2012-07-03 Thread Rafał Kuć
Hello!

But the latlng field is not taken into account when sorting with sort
defined as in your query; you sort only on the name field. You can also
define Solr's behavior when there is no value in the field by adding
sortMissingLast="true" or sortMissingFirst="true" to your type
definition in the schema.xml file.
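In schema.xml that could look like the following (a sketch; the field
type name is arbitrary):

```xml
<!-- In the <types> section: docs missing the field sort last -->
<fieldType name="string_sml" class="solr.StrField"
           sortMissingLast="true" omitNorms="true"/>

<!-- In the <fields> section -->
<field name="latlng" type="string_sml" indexed="true" stored="true"/>
```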

-- 
Regards,
 Rafał Kuć
 Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch




Re: Facet Order when the count is the same

2012-07-03 Thread maurizio1976
I found it: the default order is by ASCII code (which is not alphabetical
order).
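A quick way to see the difference (Python, purely for illustration):
bytewise/ASCII order puts every uppercase letter before every lowercase
one, while a case-insensitive alphabetical sort interleaves them.

```python
terms = ["banana", "Apple", "apple", "Banana"]

# Bytewise / ASCII order: uppercase (65-90) precedes lowercase (97-122)
ascii_order = sorted(terms)

# Case-insensitive alphabetical order
alpha_order = sorted(terms, key=str.lower)

print(ascii_order)  # ['Apple', 'Banana', 'apple', 'banana']
print(alpha_order)  # ['Apple', 'apple', 'banana', 'Banana']
```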

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Facet-Order-when-the-count-is-the-same-tp3992471p3992734.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: Problem with sorting solr docs

2012-07-03 Thread Harshvardhan Ojha
Hi,

I have added <field name="latlng" indexed="true" stored="true"
sortMissingLast="false" sortMissingFirst="false"/> to my schema.xml,
although I am searching on the name field.
It seems to be working fine. What is its default behavior?

Regards
Harshvardhan Ojha




Re: Near Real Time Indexing and Searching with solr 3.6

2012-07-03 Thread Michael McCandless
Hi,

You might want to take a look at Solr's trunk (very soon to be the
4.0-alpha release), which already has a near-real-time solution (using
Lucene's near-real-time APIs).

Lucene has NRTCachingDirectory (to use RAM for small / recently
flushed segments), but I don't think Solr uses it yet.

Mike McCandless

http://blog.mikemccandless.com




Re: How to improve this solr query?

2012-07-03 Thread Michael Della Bitta
Chamnap,

I have a hunch you can get away with not using *s.

Michael Della Bitta


Appinions, Inc. -- Where Influence Isn’t a Game.
http://www.appinions.com




RE: Problem with sorting solr docs

2012-07-03 Thread Shubham Srivastava
Just adding to the below: if there is a field (say X) which is not populated,
and in the query I am not sorting on this particular field but on another
field (say Y), the result ordering would still depend on X.

In fact, in the problem below mentioned by Harsh, making X
sortMissingLast="false" sortMissingFirst="false" solved the problem while
in the query he was sorting on Y. This seems a bit illogical.

Regards,
Shubham

From: Harshvardhan Ojha [harshvardhan.o...@makemytrip.com]
Sent: Tuesday, July 03, 2012 5:58 PM
To: solr-user@lucene.apache.org
Subject: RE: Problem with sorting solr docs

Hi,

I have added <field name="latlng" indexed="true" stored="true"
sortMissingLast="false" sortMissingFirst="false"/> to my schema.xml,
although I am searching on the name field.
It seems to be working fine. What is its default behavior?

Regards
Harshvardhan Ojha




Re: Prefix query is not analysed?

2012-07-03 Thread Erick Erickson
Right. But note two things:
1) The filters were made MultiTermAware on a "do no harm" basis.
When it comes to this kind of change, we wanted to be sure we
weren't messing things up. If you are certain that other filters would
be OK if they were MultiTermAware, let us know and we can make
them so.
2) You can define your own multiterm section of the analysis chain and
put whatever you want in there (in schema.xml). The elements you
put there do _not_ have to be MultiTermAware. But if they produce
more than one token for any input token, your results will be
screwy.
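Such a multiterm section might look like this in schema.xml (a sketch;
the field type name is arbitrary):

```xml
<fieldType name="text_general" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <!-- Applied to wildcard/prefix/fuzzy terms; keep it to components
       that emit exactly one token per input token -->
  <analyzer type="multiterm">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```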

Best
Erick


On Mon, Jul 2, 2012 at 4:50 AM, Alok Bhandari
alokomprakashbhand...@gmail.com wrote:
 Yes I am using Solr 3.6.

 Thanks for the link it is very useful.
 From the link I could make out that if the analyzer includes any one of
 the following, then they are applied, and any other elements specified
 under the analyzer are not applied, as they are not multi-term aware.

 ASCIIFoldingFilterFactory
 LowerCaseFilterFactory
 LowerCaseTokenizerFactory
 MappingCharFilterFactory
 PersianCharFilterFactory




 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/Prefix-query-is-not-analysed-tp3992435p3992463.html
 Sent from the Solr - User mailing list archive at Nabble.com.


[ANNOUNCE] Apache Solr 4.0-alpha released.

2012-07-03 Thread Robert Muir
3 July 2012, Apache Solr™ 4.0-alpha available
The Lucene PMC is pleased to announce the release of Apache Solr 4.0-alpha.

Solr is the popular, blazing fast, open source NoSQL search platform from
the Apache Lucene project. Its major features include powerful full-text
search, hit highlighting, faceted search, dynamic clustering, database
integration, rich document (e.g., Word, PDF) handling, and geospatial search.
Solr is highly scalable, providing fault tolerant distributed search
and indexing,
and powers the search and navigation features of many of the world's
largest internet sites.

Solr 4.0-alpha is available for immediate download at:
   http://lucene.apache.org/solr/mirrors-solr-latest-redir.html?ver=4.0a

See the CHANGES.txt file included with the release for a full list of
details.

Solr 4.0-alpha Release Highlights:

The largest set of features goes by the development code-name “Solr
Cloud” and involves bringing easy scalability to Solr.  See
http://wiki.apache.org/solr/SolrCloud for more details.
 * Distributed indexing designed from the ground up for near real-time
(NRT) and NoSQL features such as realtime-get, optimistic locking, and
durable updates.
 * High availability with no single points of failure.
 * Apache Zookeeper integration for distributed coordination and
cluster metadata and configuration storage.
 * Immunity to split-brain issues due to Zookeeper's Paxos distributed
consensus protocols.
 * Updates sent to any node in the cluster and are automatically
forwarded to the correct shard and replicated to multiple nodes for
redundancy.
 * Queries sent to any node automatically perform a full distributed
search across the cluster with load balancing and fail-over.

Solr 4.0-alpha includes more NoSQL features for those using Solr as a
primary data store:
 * Update durability – A transaction log ensures that even uncommitted
documents are never lost.
 * Real-time Get – The ability to quickly retrieve the latest version
of a document, without the need to commit or open a new searcher.
 * Versioning and Optimistic Locking – Combined with real-time get,
this allows read-update-write functionality that ensures no
conflicting changes were made concurrently by other clients.
 * Atomic updates – The ability to add, remove, change, and increment
fields of an existing document without having to send in the complete
document again.
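As a sketch of what an atomic update might look like against the JSON
update handler (field names here are hypothetical; check the release
documentation for the exact syntax):

```json
[
  {
    "id":    "doc1",
    "price": { "set": 99 },
    "views": { "inc": 1 },
    "tags":  { "add": "new-tag" }
  }
]
```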

There are many other features coming in Solr 4, such as
 * Pivot Faceting – Multi-level or hierarchical faceting where the top
constraints for one field are found for each top constraint of a
different field.
 * Pseudo-fields – The ability to alias fields, or to add metadata
along with returned documents, such as function query values and
results of spatial distance calculations.
 * A spell checker implementation that can work directly from the main
index instead of creating a sidecar index.
 * Pseudo-Join functionality – The ability to select a set of
documents based on their relationship to a second set of documents.
 * Function query enhancements including conditional function queries
and relevancy functions.
 * New update processors to facilitate modifying documents prior to indexing.
 * A brand new web admin interface, including support for SolrCloud.

This is an alpha release for early adopters. The guarantee for this
alpha release is that the index format will be the 4.0 index format,
supported through the 5.x series of Lucene/Solr, unless there is a
critical bug (e.g. one that would cause index corruption) that would
prevent this.

Please report any feedback to the mailing lists
(http://lucene.apache.org/solr/discussion.html)

Happy searching,

Lucene/Solr developers


Re: Trunk error in Tomcat

2012-07-03 Thread Briggs Thompson
Thanks Erik. If anyone else has any ideas about the NoSuchFieldError issue
please let me know. Thanks!

-Briggs

On Mon, Jul 2, 2012 at 6:27 PM, Erik Hatcher erik.hatc...@gmail.com wrote:

 Interestingly, I just logged the issue of it not showing the right error
 in the UI here: https://issues.apache.org/jira/browse/SOLR-3591

 As for your specific issue, not sure, but the error should at least also
 show in the admin view.

 Erik


 On Jul 2, 2012, at 18:59 , Briggs Thompson wrote:

  Hi All,
 
  I just grabbed the latest version of trunk and am having a hard time
  getting it running properly in Tomcat. It does work fine in Jetty. The
  admin screen gives the following error:
  "This interface requires that you activate the admin request handlers;
  add the following configuration to your solrconfig.xml"
 
  I am pretty certain the front end error has nothing to do with the actual
  error. I have seen some other folks on the distro with the same problem,
  but none of the threads have a solution (that I could find). Below is the
  stack trace. I also tried with different versions of Lucene but none
  worked. Note: my index is EMPTY and I am not migrating over an index
 build
  with a previous version of lucene. I think I ran into this a while ago
 with
  an earlier version of trunk, but I don't recall doing anything to fix it.
  Anyhow, if anyone has an idea with this one, please let me know.
 
  Thanks!
  Briggs Thompson
 
  SEVERE: null:java.lang.NoSuchFieldError: LUCENE_50
  at
 
 org.apache.solr.analysis.SynonymFilterFactory$1.createComponents(SynonymFilterFactory.java:83)
  at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:83)
  at
 
 org.apache.lucene.analysis.synonym.SynonymMap$Builder.analyze(SynonymMap.java:120)
  at
 
 org.apache.lucene.analysis.synonym.SolrSynonymParser.addInternal(SolrSynonymParser.java:99)
  at
 
 org.apache.lucene.analysis.synonym.SolrSynonymParser.add(SolrSynonymParser.java:70)
  at
 
 org.apache.solr.analysis.SynonymFilterFactory.loadSolrSynonyms(SynonymFilterFactory.java:131)
  at
 
 org.apache.solr.analysis.SynonymFilterFactory.inform(SynonymFilterFactory.java:93)
  at
 
 org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:584)
  at org.apache.solr.schema.IndexSchema.init(IndexSchema.java:112)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:812)
  at org.apache.solr.core.CoreContainer.load(CoreContainer.java:510)
  at org.apache.solr.core.CoreContainer.load(CoreContainer.java:333)
  at
 
 org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:282)
  at
 
 org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:101)
  at
 
 org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277)
  at
 
 org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258)
  at
 
 org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382)
  at
 
 org.apache.catalina.core.ApplicationFilterConfig.init(ApplicationFilterConfig.java:103)
  at
 
 org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4649)
  at
 
 org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5305)
  at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
  at
 
 org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:899)
  at
 org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:875)
  at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:618)
  at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:963)
  at
 
 org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1600)
  at
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
  at
 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
  at
 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
  at java.lang.Thread.run(Thread.java:680)




Re: Trunk error in Tomcat

2012-07-03 Thread Briggs Thompson
Also, I forgot to include this before, but there is a client-side error:
a failed 404 request to the URL below.

http://localhost:8983/solr/null/admin/system?wt=json

On Tue, Jul 3, 2012 at 8:45 AM, Briggs Thompson w.briggs.thomp...@gmail.com
 wrote:

 Thanks Erik. If anyone else has any ideas about the NoSuchFieldError issue
 please let me know. Thanks!

 -Briggs


 On Mon, Jul 2, 2012 at 6:27 PM, Erik Hatcher erik.hatc...@gmail.comwrote:

 Interestingly, I just logged the issue of it not showing the right error
 in the UI here: https://issues.apache.org/jira/browse/SOLR-3591

 As for your specific issue, not sure, but the error should at least also
 show in the admin view.

 Erik


 On Jul 2, 2012, at 18:59 , Briggs Thompson wrote:

  Hi All,
 
  I just grabbed the latest version of trunk and am having a hard time
  getting it running properly in tomcat. It does work fine in Jetty. The
  admin screen gives the following error:
  This interface requires that you activate the admin request handlers,
 add
  the following configuration to your  Solrconfig.xml
 
  I am pretty certain the front end error has nothing to do with the
 actual
  error. I have seen some other folks on the distro with the same problem,
  but none of the threads have a solution (that I could find). Below is
 the
  stack trace. I also tried with different versions of Lucene but none
  worked. Note: my index is EMPTY and I am not migrating over an index
 build
  with a previous version of lucene. I think I ran into this a while ago
 with
  an earlier version of trunk, but I don't recall doing anything to fix
 it.
  Anyhow, if anyone has an idea with this one, please let me know.
 
  Thanks!
  Briggs Thompson
 
  SEVERE: null:java.lang.NoSuchFieldError: LUCENE_50
  at
 
 org.apache.solr.analysis.SynonymFilterFactory$1.createComponents(SynonymFilterFactory.java:83)
  at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:83)
  at
 
 org.apache.lucene.analysis.synonym.SynonymMap$Builder.analyze(SynonymMap.java:120)
  at
 
 org.apache.lucene.analysis.synonym.SolrSynonymParser.addInternal(SolrSynonymParser.java:99)
  at
 
 org.apache.lucene.analysis.synonym.SolrSynonymParser.add(SolrSynonymParser.java:70)
  at
 
 org.apache.solr.analysis.SynonymFilterFactory.loadSolrSynonyms(SynonymFilterFactory.java:131)
  at
 
 org.apache.solr.analysis.SynonymFilterFactory.inform(SynonymFilterFactory.java:93)
  at
 
 org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:584)
  at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:112)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:812)
  at org.apache.solr.core.CoreContainer.load(CoreContainer.java:510)
  at org.apache.solr.core.CoreContainer.load(CoreContainer.java:333)
  at
 
 org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:282)
  at
 
 org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:101)
  at
 
 org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277)
  at
 
 org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258)
  at
 
 org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382)
  at
 
 org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:103)
  at
 
 org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4649)
  at
 
 org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5305)
  at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
  at
 
 org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:899)
  at
 org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:875)
  at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:618)
  at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:963)
  at
 
 org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1600)
  at
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
  at
 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
  at
 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
  at java.lang.Thread.run(Thread.java:680)





Re: Trunk error in Tomcat

2012-07-03 Thread Vadim Kisselmann
same problem here:

https://mail.google.com/mail/u/0/?ui=2&view=btop&ver=18zqbez0n5t35&q=tomcat%20v.kisselmann&qs=true&search=query&th=13615cfb9a5064bd&qt=kisselmann.1.tomcat.1.tomcat's.1.v.1&cvid=3


https://issues.apache.org/jira/browse/SOLR-3238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13230056#comment-13230056

I use an older solr-trunk version from February/March and it works. With
newer versions from trunk I get the same error: "This interface
requires that you activate the admin request handlers..."

regards
vadim




Re: Trunk error in Tomcat

2012-07-03 Thread Briggs Thompson
Wow! I didn't know 4.0 alpha was released today. I think I will just get
that going. Woo!!


Re: Filtering a query by range returning unexpected results

2012-07-03 Thread Erick Erickson
OK, this appears to be something with the currency type. It works fine for
regular float fields. I can't get the multiValued currency types to work with
range queries. Don't quite know what I was doing when I thought they
_did_ work.

One work-around, I think, if you are using a single currency (USD), might be
to copy your price to a simple float field and do your range queries on that.
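
For a single-currency (USD) index, that workaround might look like this in
schema.xml — the field name below is illustrative, not from the actual schema
in this thread:

```xml
<!-- plain float field to run range queries against -->
<field name="prices_float" type="float" indexed="true" stored="false"
       multiValued="true" />
```

and populate it with the bare numeric values ("12.99" rather than "12.99,USD")
at index time; a copyField from the currency field would carry the ",USD"
suffix, which a float type won't parse.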

I'm not at all sure that the currency type was ever intended to support
multiValued=true. I don't know enough about the internals to know if
it's even a good idea to try, but the current behavior could be improved
upon.

But it seems to me that one of two things should happen:
1) the startup should barf if a currency type is multiValued (fail early)
or
2) currency should work when multiValued.

Unfortunately, JIRA is down so I can't look to see if this is already a known
issue or enter a JIRA if it isn't. I'll try to look later if it all
comes back up.

Best
Erick

On Mon, Jul 2, 2012 at 1:53 PM, Andrew Meredith andymered...@gmail.com wrote:
 Yep, that 15.00.00 was a typo.

 Here are the relevant portions of my schema.xml:

 <types>
   <!-- CUT -->
   <fieldType name="currency" class="solr.CurrencyField" precisionStep="8"
              defaultCurrency="USD" currencyConfig="currency.xml" />
   <!-- CUT -->
 </types>

 <fields>
   <!-- CUT -->
   <field name="prices" type="currency" indexed="true" stored="true"
          multiValued="true" />
   <!-- CUT -->
 </fields>

 And here is the output of a sample query with debugQuery=on appended:

 <lst name="debug">
   <str name="rawquerystring">Furtick</str>
   <str name="querystring">Furtick</str>
   <str name="parsedquery">
   +DisjunctionMaxQuery((subtitle:furtick | frontlist_flapcopy:furtick^0.5 |
   frontlist_ean:furtick^6.0 | author:furtick^3.0 | series:furtick^1.5 |
   title:furtick^2.0)) ()
   </str>
   <str name="parsedquery_toString">
   +(subtitle:furtick | frontlist_flapcopy:furtick^0.5 |
   frontlist_ean:furtick^6.0 | author:furtick^3.0 | series:furtick^1.5 |
   title:furtick^2.0) ()
   </str>
   <lst name="explain"/>
   <str name="QParser">ExtendedDismaxQParser</str>
   <null name="altquerystring"/>
   <null name="boostfuncs"/>
   <arr name="filter_queries">
     <str>prices:[5.00 TO 21.00]</str>
     <str>forsaleinusa:true</str>
   </arr>
   <arr name="parsed_filter_queries">
     <str>ConstantScore(frange(currency(prices)):[500 TO 2100])</str>
     <str>forsaleinusa:true</str>
   </arr>
   <lst name="timing">
     <double name="time">3.0</double>
     <lst name="prepare">
       <double name="time">2.0</double>
       <lst name="org.apache.solr.handler.component.QueryComponent">
         <double name="time">2.0</double>
       </lst>
       <lst name="org.apache.solr.handler.component.FacetComponent">
         <double name="time">0.0</double>
       </lst>
       <lst name="org.apache.solr.handler.component.MoreLikeThisComponent">
         <double name="time">0.0</double>
       </lst>
       <lst name="org.apache.solr.handler.component.HighlightComponent">
         <double name="time">0.0</double>
       </lst>
       <lst name="org.apache.solr.handler.component.StatsComponent">
         <double name="time">0.0</double>
       </lst>
       <lst name="org.apache.solr.handler.component.SpellCheckComponent">
         <double name="time">0.0</double>
       </lst>
       <lst name="org.apache.solr.handler.component.DebugComponent">
         <double name="time">0.0</double>
       </lst>
     </lst>
     <lst name="process">
       <double name="time">1.0</double>
       <lst name="org.apache.solr.handler.component.QueryComponent">
         <double name="time">1.0</double>
       </lst>
       <lst name="org.apache.solr.handler.component.FacetComponent">
         <double name="time">0.0</double>
       </lst>
       <lst name="org.apache.solr.handler.component.MoreLikeThisComponent">
         <double name="time">0.0</double>
       </lst>
       <lst name="org.apache.solr.handler.component.HighlightComponent">
         <double name="time">0.0</double>
       </lst>
       <lst name="org.apache.solr.handler.component.StatsComponent">
         <double name="time">0.0</double>
       </lst>
       <lst name="org.apache.solr.handler.component.SpellCheckComponent">
         <double name="time">0.0</double>
       </lst>
       <lst name="org.apache.solr.handler.component.DebugComponent">
         <double name="time">0.0</double>
       </lst>
     </lst>
   </lst>
 </lst>


 If I run this same query with the filter, prices:[5.00 TO 99.00], then I
 get a result that includes the following field:

 <arr name="prices">
   <str>12.99,USD</str>
   <str>14.99,USD</str>
   <str>15.00,USD</str>
   <str>25.00,USD</str>
 </arr>


 I can't figure out why this is not being returned with the first query.
 I'll try re-building the index with the prices field type set to float
 and see if that changes the behaviour.

 On Sat, Jun 30, 2012 at 6:49 PM, Erick Erickson 
 erickerick...@gmail.com wrote:

 This works fine for me with 3.6, float fields and even on a currency type.

 I'm assuming a typo for 15.00.00 BTW.

 I admit I'm not all that familiar with the currency type, which I infer
 you're
 using given the USD bits. But I ran a quick test with currency types and
 it worked at least the way I ran it... But another quick look shows that
 some interesting things are being done with the currency type, so who
 knows?

 So, let's see your relevant schema bits, and the results of your query
 when you attach debugQuery=on to it.


 Best
 Erick

 On Fri, Jun 29, 2012 at 2:43 PM, Andrew Meredith andymered...@gmail.com
 wrote:
  First off, I have to say that I am working on my first project that has
  required 

Re: Trunk error in Tomcat

2012-07-03 Thread Stefan Matheis
Hey Vadim

Right now JIRA is down for maintenance, but afaik there was another comment 
asking for more information. I'll check Erik's issue today or tomorrow and see 
how we can handle (and hopefully fix) that.

Regards
Stefan







RE: how do I search the archives for solr-user

2012-07-03 Thread Petersen, Robert
This site is pretty cool also, just filter on solr-user like this:
http://markmail.org/search/?q=list%3Aorg.apache.lucene.solr-user


-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org] 
Sent: Monday, July 02, 2012 5:34 PM
To: solr-user@lucene.apache.org
Subject: Re: how do I search the archives for solr-user



http://lucene.apache.org/solr/discussion.html#mail-archives


-Hoss




Re: Broken pipe error

2012-07-03 Thread alxsss
I had the same problem with Jetty. It turned out that the broken pipe happens when 
the application disconnects from Jetty. In my case I was using a PHP client with a 
10 sec restriction on the curl request. When Solr took more than 10 sec to 
respond, curl automatically disconnected from Jetty.
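
The effect is easy to reproduce without Solr: a client read timeout fires before
the slow server answers, the client hangs up, and any later write by the server
hits a dead socket — the "broken pipe" in the log. A minimal sketch with plain
sockets (the timeouts are scaled down; no Solr or curl involved):

```python
import socket
import threading
import time

# A throwaway local server that accepts a connection but takes a long time
# to answer -- standing in for a Solr instance that is busy responding.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def slow_server():
    conn, _ = srv.accept()
    time.sleep(1.0)          # "Solr" takes longer than the client allows
    conn.close()

threading.Thread(target=slow_server, daemon=True).start()

# The client enforces a read timeout, like curl's 10 sec limit in a PHP client.
cli = socket.create_connection(srv.getsockname(), timeout=0.2)
try:
    cli.recv(4096)           # no reply arrives within 0.2 s
    timed_out = False
except socket.timeout:
    timed_out = True         # client gives up and hangs up; a later server
                             # write would then fail with a broken pipe
cli.close()
print(timed_out)             # -> True
```

Raising the client-side timeout (or avoiding long blocking operations on the
server) makes the errors disappear.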

Hope this can help.

Alex.



-Original Message-
From: Jason hialo...@gmail.com
To: solr-user solr-user@lucene.apache.org
Sent: Mon, Jul 2, 2012 7:41 pm
Subject: Broken pipe error


Hi, all

We're independently running three search servers.
One of the three servers has a bigger index size and more connected users
than the others.
Apart from that, all configurations are the same.
The problem is that this server sometimes throws a broken pipe error.
But I don't know what the problem is.
Please give me some ideas.
Thanks in advance.
Jason


error message below...
===
2012-07-03 10:42:56,753 [http-8080-exec-3677] ERROR
org.apache.solr.servlet.SolrDispatchFilter - null:ClientAbortException: 
java.io.IOException: Broken pipe
at
org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:358)
at
org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:432)
at
org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:309)
at
org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:288)
at
org.apache.catalina.connector.CoyoteOutputStream.flush(CoyoteOutputStream.java:98)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:278)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
at org.apache.solr.util.FastWriter.flush(FastWriter.java:115)
at
org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:402)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:279)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:470)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at
org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
at
org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:732)
at
org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2262)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcher.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:69)
at sun.nio.ch.IOUtil.write(IOUtil.java:40)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
at org.apache.tomcat.util.net.NioChannel.write(NioChannel.java:116)
at
org.apache.tomcat.util.net.NioBlockingSelector.write(NioBlockingSelector.java:93)
at
org.apache.tomcat.util.net.NioSelectorPool.write(NioSelectorPool.java:156)
at
org.apache.coyote.http11.InternalNioOutputBuffer.writeToSocket(InternalNioOutputBuffer.java:460)
at
org.apache.coyote.http11.InternalNioOutputBuffer.flushBuffer(InternalNioOutputBuffer.java:804)
at
org.apache.coyote.http11.InternalNioOutputBuffer.addToBB(InternalNioOutputBuffer.java:644)
at
org.apache.coyote.http11.InternalNioOutputBuffer.access$000(InternalNioOutputBuffer.java:46)
at
org.apache.coyote.http11.InternalNioOutputBuffer$SocketOutputBuffer.doWrite(InternalNioOutputBuffer.java:829)
at
org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:126)
at
org.apache.coyote.http11.InternalNioOutputBuffer.doWrite(InternalNioOutputBuffer.java:610)
at org.apache.coyote.Response.doWrite(Response.java:560)
at
org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:353)
... 25 more


RE: Broken pipe error

2012-07-03 Thread Petersen, Robert
I also had this problem on solr/tomcat and finally saw the errors were coming 
from my application side disconnecting from solr after a timeout.  This was 
happening when solr was busy doing an optimize and thus not responding quickly 
enough.  Initially when I saw this in the logs, I was quite worried until I 
realized the source of the problem.

Robi


Re: Trunk error in Tomcat

2012-07-03 Thread Vadim Kisselmann
Hi Stefan,
sorry, I overlooked your latest comment with the new issue in SOLR-3238 ;)
Should I open a new issue? I haven't tested newer trunk versions for a
couple of months, because Solr Cloud with an external ZK and Tomcat fails
too, but I can do it and post all the errors that I find in my log files.
Regards
Vadim








RE: DIH - unable to ADD individual new documents

2012-07-03 Thread Klostermeyer, Michael
Some interesting findings over the last hours, that may change the context of 
this discussion...

Due to the nature of the application, I need the ability to fire off individual 
ADDs on several different entities at basically the same time.  So, I am 
making 2-4 Solr ADD calls within 100ms of each other.  While troubleshooting 
this, I found that if I only made 1 Solr ADD call (ignoring the other 
entities), it updated the index as expected.  However, when all were fired off, 
proper indexing did not occur (at least on one of the entities) and no errors 
were logged.  I am still attempting to figure out if ALL of the 2-4 entities 
failed to ADD, or if some failed and others succeeded.

So...does this have something to do with Solr's index/message queuing (v3.5)?  
How does Solr handle these types of rapid requests, and even more important, 
how do I get the status of an individual DIH call vs simply the status of the 
latest call at /dataimport?

Mike


-Original Message-
From: Gora Mohanty [mailto:g...@mimirtech.com] 
Sent: Monday, July 02, 2012 10:02 PM
To: solr-user@lucene.apache.org
Subject: Re: DIH - unable to ADD individual new documents

On 3 July 2012 07:54, Klostermeyer, Michael mklosterme...@riskexchange.com 
wrote:
 I should add that I am using the full-import command in all cases, and 
 setting clean=false for the individual adds.

What does the data-import page report at the end of the full-import, i.e., how 
many documents were indexed?
Are there any error messages in the Solr logs? Please share with us your DIH 
configuration file, and Solr schema.xml.

Regards,
Gora


RE: DIH - unable to ADD individual new documents

2012-07-03 Thread Dyer, James
A DIH request handler can only process one run at a time.  So if DIH is still 
in process and you kick off a new DIH full-import it will silently ignore the 
new command.  To have more than one DIH run going at a time it is necessary 
to configure more than one handler instance in solrconfig.xml.  But even then 
you'll have to be careful to find one that is free before trying to use it.

Regardless, to do what you want, you'll need to poll the DIH response screen to 
be sure it isn't running before starting a new one.  It would be simplest to 
leave it with just 1 DIH handler in solrconfig.xml.  If you've got to have an 
undefined # of concurrent updates going at once you're best off to not use DIH.
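
Polling could be as simple as fetching the /dataimport status page (wt=xml) and
checking the status element before issuing command=full-import. A sketch of the
check — the response shape and the "status" field name match DIH's usual XML
output, but treat them as assumptions:

```python
import xml.etree.ElementTree as ET

def dih_is_idle(status_xml: str) -> bool:
    """Parse a /dataimport status response and report whether the handler
    is free to accept a new full-import command."""
    root = ET.fromstring(status_xml)
    status = next(
        (s.text for s in root.iter("str") if s.get("name") == "status"), None
    )
    return status == "idle"

# Abbreviated response bodies, shaped like what /dataimport returns
busy = '<response><str name="status">busy</str></response>'
idle = '<response><str name="status">idle</str></response>'
print(dih_is_idle(busy), dih_is_idle(idle))   # -> False True
```

The caller would loop on this (with a short sleep) until it returns True, then
fire the next full-import.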

Perhaps a better usage pattern for which DIH was designed for is to put the doc 
id's in an update table with a timestamp.  Have your queries join to the update 
table where timestamp > ${dih.last_index_time}.  Set up crontab or whatever
to kick off DIH every so often.  If the prior run is still in progress, it will 
just skip that run, but because we're dealing with timestamps that get written 
automatically when DIH finishes, you will only experience a delayed update, not 
a lost update.  By batching your updates like this you will also have fewer 
commits, which will be beneficial for performance all around.
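In DIH config terms, the pattern above might look roughly like this (the table and column names are invented for illustration):

```xml
<!-- data-config.xml sketch; items/item_updates and their columns are hypothetical -->
<document>
  <entity name="item"
          query="SELECT i.* FROM items i
                 JOIN item_updates u ON u.item_id = i.id
                 WHERE u.updated_at &gt; '${dih.last_index_time}'"/>
</document>
```

A crontab entry then hits `command=full-import&clean=false` every few minutes; since `last_index_time` is written only when a run finishes, a skipped run is picked up by the next one.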

Of course if you're trying to do this with the near-real-time functionality 
batching isn't your answer.  But DIH isn't designed at all to work well with 
NRT either...

James Dyer
E-Commerce Systems
Ingram Content Group
(615) 213-4311


-Original Message-
From: Klostermeyer, Michael [mailto:mklosterme...@riskexchange.com] 
Sent: Tuesday, July 03, 2012 1:55 PM
To: solr-user@lucene.apache.org
Subject: RE: DIH - unable to ADD individual new documents

Some interesting findings over the last hours, that may change the context of 
this discussion...

Due to the nature of the application, I need the ability to fire off individual 
ADDs on several different entities at basically the same time.  So, I am 
making 2-4 Solr ADD calls within 100ms of each other.  While troubleshooting 
this, I found that if I only made 1 Solr ADD call (ignoring the other 
entities), it updated the index as expected.  However, when all were fired off, 
proper indexing did not occur (at least on one of the entities) and no errors 
were logged.  I am still attempting to figure out if ALL of the 2-4 entities 
failed to ADD, or if some failed and others succeeded.

So...does this have something to do with Solr's index/message queuing (v3.5)?  
How does Solr handle these types of rapid requests, and even more important, 
how do I get the status of an individual DIH call vs simply the status of the 
latest call at /dataimport?

Mike


-Original Message-
From: Gora Mohanty [mailto:g...@mimirtech.com] 
Sent: Monday, July 02, 2012 10:02 PM
To: solr-user@lucene.apache.org
Subject: Re: DIH - unable to ADD individual new documents

On 3 July 2012 07:54, Klostermeyer, Michael mklosterme...@riskexchange.com 
wrote:
 I should add that I am using the full-import command in all cases, and 
 setting clean=false for the individual adds.

What does the data-import page report at the end of the full-import, i.e., how 
many documents were indexed?
Are there any error messages in the Solr logs? Please share with us your DIH 
configuration file, and Solr schema.xml.

Regards,
Gora


RE: DIH - unable to ADD individual new documents

2012-07-03 Thread Klostermeyer, Michael
Well that little bit of knowledge changes things for me, doesn't it?  I 
appreciate your response very much.  Without knowing that about the DIH, I 
attempted to have my DIH handler handle all circumstances, namely the batch, 
scheduled job, and immediate/NRT indexing.  Looks like I'm going to have to 
severely re-think that strategy.

Thanks again...and if anyone has any further input how I can best/most 
efficiently accomplish all 3 above, please let me know.

Mike


-Original Message-
From: Dyer, James [mailto:james.d...@ingrambook.com] 
Sent: Tuesday, July 03, 2012 1:12 PM
To: solr-user@lucene.apache.org
Subject: RE: DIH - unable to ADD individual new documents

A DIH request handler can only process one run at a time.  So if DIH is still 
in process and you kick off a new DIH full-import it will silently ignore the 
new command.  To have more than one DIH run going at a time it is necessary 
to configure more than one handler instance in solrconfig.xml.  But even then
you'll have to be careful to find one that is free before trying to use it.

Regardless, to do what you want, you'll need to poll the DIH response screen to 
be sure it isn't running before starting a new one.  It would be simplest to 
leave it with just 1 DIH handler in solrconfig.xml.  If you've got to have an 
undefined # of concurrent updates going at once you're best off to not use DIH.

Perhaps a usage pattern better suited to DIH's design is to put the doc
id's in an update table with a timestamp.  Have your queries join to the update 
table where timestamp > ${dih.last_index_time}.  Set up crontab or whatever
to kick off DIH every so often.  If the prior run is still in progress, it will 
just skip that run, but because we're dealing with timestamps that get written 
automatically when DIH finishes, you will only experience a delayed update, not 
a lost update.  By batching your updates like this you will also have fewer 
commits, which will be beneficial for performance all around.

Of course if you're trying to do this with the near-real-time functionality 
batching isn't your answer.  But DIH isn't designed at all to work well with 
NRT either...

James Dyer
E-Commerce Systems
Ingram Content Group
(615) 213-4311


-Original Message-
From: Klostermeyer, Michael [mailto:mklosterme...@riskexchange.com] 
Sent: Tuesday, July 03, 2012 1:55 PM
To: solr-user@lucene.apache.org
Subject: RE: DIH - unable to ADD individual new documents

Some interesting findings over the last hours, that may change the context of 
this discussion...

Due to the nature of the application, I need the ability to fire off individual 
ADDs on several different entities at basically the same time.  So, I am 
making 2-4 Solr ADD calls within 100ms of each other.  While troubleshooting 
this, I found that if I only made 1 Solr ADD call (ignoring the other 
entities), it updated the index as expected.  However, when all were fired off, 
proper indexing did not occur (at least on one of the entities) and no errors 
were logged.  I am still attempting to figure out if ALL of the 2-4 entities 
failed to ADD, or if some failed and others succeeded.

So...does this have something to do with Solr's index/message queuing (v3.5)?  
How does Solr handle these types of rapid requests, and even more important, 
how do I get the status of an individual DIH call vs simply the status of the 
latest call at /dataimport?

Mike


-Original Message-
From: Gora Mohanty [mailto:g...@mimirtech.com] 
Sent: Monday, July 02, 2012 10:02 PM
To: solr-user@lucene.apache.org
Subject: Re: DIH - unable to ADD individual new documents

On 3 July 2012 07:54, Klostermeyer, Michael mklosterme...@riskexchange.com 
wrote:
 I should add that I am using the full-import command in all cases, and 
 setting clean=false for the individual adds.

What does the data-import page report at the end of the full-import, i.e., how 
many documents were indexed?
Are there any error messages in the Solr logs? Please share with us your DIH 
configuration file, and Solr schema.xml.

Regards,
Gora


Re: How to space between spatial search results? (Declustering)

2012-07-03 Thread Lee Carroll
Sorry can't answer your question directly. However map scale may render
this very tricky or even redundant.

UI may be a better place for a solution rather than the data. Take a look
at  https://developers.google.com/maps/articles/toomanymarkers  for lots of
options

lee c

On 3 July 2012 03:49, mcb thestreet...@gmail.com wrote:

 I have a classic spatial search schema in solr with a lat_long field. Is
 there way to do a bounding-box type search that will pull back results that
 are uniformly distributed so there isn't the case of 10 pins being on top
 of
 each other? I.e., if I have a 100-mile box, no result will be returned within 5
 miles or so of another?

 Does anyone have any insight or experience with this?

 Thanks!

 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/How-to-space-between-spatial-search-results-Declustering-tp3992668.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: Trunk error in Tomcat

2012-07-03 Thread Stefan Matheis
On Tuesday, July 3, 2012 at 8:10 PM, Vadim Kisselmann wrote:
 sorry, i overlooked your latest comment with the new issue in SOLR-3238 ;)
 Should I open a new issue?


NP Vadim, yes a new Issue would help .. all available Information too :) 


Re: How to improve this solr query?

2012-07-03 Thread Erick Erickson
Chamnap:

I've seen various e-mail programs put the asterisk in for terms that
are in bold face.

The queries you pasted have lots of * characters in them; I suspect
they were just things you put in bold in your original, and that may be
the source of the confusion about whether you were using wildcards.

But on to your question. If your q1 and q2 are the same words,
wouldn't it just work to
specify the pf (phrase field) parameter for edismax? That
automatically takes the terms
in the query and turns them into a phrase query that's boosted higher.
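As an example, the phrase boost is just an extra edismax parameter; a small sketch in Python (the field names and boost values are made up for illustration):

```python
from urllib.parse import urlencode

# Hypothetical field names/boosts; pf re-runs the query terms as a
# boosted phrase query against the listed fields.
params = {
    "defType": "edismax",
    "q": "apartment rental",
    "qf": "name description",       # fields matched term-by-term
    "pf": "name^10 description^2",  # fields matched as a phrase, boosted
}
query_string = urlencode(params)
```

Documents where the terms appear together as a phrase in `name` then score far higher, without sending the query twice.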

And what's the use-case here? I think you might be making this more complex than
it needs to be.

Best
Erick

On Tue, Jul 3, 2012 at 8:41 AM, Michael Della Bitta
michael.della.bi...@appinions.com wrote:
 Chamnap,

 I have a hunch you can get away with not using *s.

 Michael Della Bitta

 
 Appinions, Inc. -- Where Influence Isn’t a Game.
 http://www.appinions.com


 On Tue, Jul 3, 2012 at 2:16 AM, Chamnap Chhorn chamnapchh...@gmail.com 
 wrote:
 Lance, I didn't use wildcards at all. I use only this, the difference is
 quoted or not.

 q2=*apartment*
 q1=*apartment*
 *
 *
 On Tue, Jul 3, 2012 at 12:06 PM, Lance Norskog goks...@gmail.com wrote:

 q2=*apartment*
 q1=*apartment*

 These are wildcards

 On Mon, Jul 2, 2012 at 8:30 PM, Chamnap Chhorn chamnapchh...@gmail.com
 wrote:
  Hi Lance,
 
  I didn't use wildcards at all. This is a normal text search only. I need
 a
  string field because it needs to be matched exactly, and the value is
  sometimes a multi-word, so quoted it is necessary.
 
  By the way, if I do a super plain query, it takes at least 600ms. I'm not
  sure why. On another solr instance with similar amount of data, it takes
  only 50ms.
 
  I see something strange on the response, there is always
 
  <str name="command">build</str>
 
  What does that mean?
 
  On Tue, Jul 3, 2012 at 10:02 AM, Lance Norskog goks...@gmail.com
 wrote:
 
  Wildcards are slow. Leading wildcards are even more slow. Is there
  some way to search that data differently? If it is a string, can you
  change it to a text field and make sure 'apartment' is a separate
  word?
 
  On Mon, Jul 2, 2012 at 10:01 AM, Chamnap Chhorn 
 chamnapchh...@gmail.com
  wrote:
   Hi Michael,
  
   Thanks for quick response. Based on documentation, facet.mincount
 means
   that solr will return facet fields that have at least that number. For
  me, I
   just want to ensure my facet fields count doesn't have zero value.
  
   I tried increasing it to 10, but it is still slow even for the same query.
  
   Actually, those 13 million documents are divided into 200 portals. I
   already include fq=portal_uuid: kjkjkjk inside each nested query,
 but
   it's still slow.
  
   On Mon, Jul 2, 2012 at 11:47 PM, Michael Della Bitta 
   michael.della.bi...@appinions.com wrote:
  
   Hi Chamnap,
  
   The first thing that jumped out at me was facet.mincount=1. Are you
   sure you need this? Increasing this number should drastically improve
   speed.
  
   Michael Della Bitta
  
   
   Appinions, Inc. -- Where Influence Isn’t a Game.
   http://www.appinions.com
  
  
   On Mon, Jul 2, 2012 at 12:35 PM, Chamnap Chhorn 
  chamnapchh...@gmail.com
   wrote:
Hi all,
   
I'm using solr 3.5 with nested query on the 4 core cpu server + 17
 Gb.
   The
problem is that my query is so slow; the average response time is
 12
  secs
against 13 millions documents.
   
What I am doing is to send quoted string (q2) to string fields and
non-quoted string (q1) to other fields and combine the result
  together.
   
   
  
 
 facet=true&sort=score+desc&q2=*apartment*&facet.mincount=1&q1=*apartment*
   
  
 
 tie=0.1&q.alt=*:*&wt=json&version=2.2&rows=20&fl=uuid&facet.query=has_map:+true&facet.query=has_image:+true&facet.query=has_website:+true&start=0&q=
*
   
  
 
 _query_:+{!dismax+qf='.'+fq='..'+v=$q1}+OR+_query_:+{!dismax+qf='..'+fq='...'+v=$q2}
*
   
  
 
 facet.field={!ex%3Ddt}sub_category_uuids&facet.field={!ex%3Ddt}location_uuid
   
I have done solr optimize already, but it's still slow. Any idea
 how
  to
improve the speed? Have I done anything wrong?
   
--
Chhorn Chamnap
http://chamnap.github.com/
  
  
  
  
   --
   Chhorn Chamnap
   http://chamnap.github.com/
 
 
 
  --
  Lance Norskog
  goks...@gmail.com
 
 
 
 
  --
  Chhorn Chamnap
  http://chamnap.github.com/



 --
 Lance Norskog
 goks...@gmail.com




 --
 Chhorn Chamnap
 http://chamnap.github.com/


Re: DIH - unable to ADD individual new documents

2012-07-03 Thread Erick Erickson
Mike:

Have you considered using one (or several) SolrJ clients to do your
indexing? That can give you a finer control granularity than DIH. Or
even do your NRT with SolrJ or

Here's an example program, you can take out the Tika stuff pretty easily..

Best
Erick

On Tue, Jul 3, 2012 at 3:35 PM, Klostermeyer, Michael
mklosterme...@riskexchange.com wrote:
 Well that little bit of knowledge changes things for me, doesn't it?  I 
 appreciate your response very much.  Without knowing that about the DIH, I 
 attempted to have my DIH handler handle all circumstances, namely the 
 batch, scheduled job, and immediate/NRT indexing.  Looks like I'm going to 
 have to severely re-think that strategy.

 Thanks again...and if anyone has any further input how I can best/most 
 efficiently accomplish all 3 above, please let me know.

 Mike


 -Original Message-
 From: Dyer, James [mailto:james.d...@ingrambook.com]
 Sent: Tuesday, July 03, 2012 1:12 PM
 To: solr-user@lucene.apache.org
 Subject: RE: DIH - unable to ADD individual new documents

 A DIH request handler can only process one run at a time.  So if DIH is 
 still in process and you kick off a new DIH full-import it will silently 
 ignore the new command.  To have more than one DIH run going at a time it 
 is necessary to configure more than one handler instance in solrconfig.xml.
 But even then you'll have to be careful to find one that is free before 
 trying to use it.

 Regardless, to do what you want, you'll need to poll the DIH response screen 
 to be sure it isn't running before starting a new one.  It would be simplest 
 to leave it with just 1 DIH handler in solrconfig.xml.  If you've got to have 
 an undefined # of concurrent updates going at once you're best off to not use 
 DIH.

 Perhaps a usage pattern better suited to DIH's design is to put the
 doc id's in an update table with a timestamp.  Have your queries join to the 
 update table where timestamp > ${dih.last_index_time}.  Set up crontab or
 whatever to kick off DIH every so often.  If the prior run is still in 
 progress, it will just skip that run, but because we're dealing with 
 timestamps that get written automatically when DIH finishes, you will only 
 experience a delayed update, not a lost update.  By batching your updates 
 like this you will also have fewer commits, which will be beneficial for 
 performance all around.

 Of course if you're trying to do this with the near-real-time functionality 
 batching isn't your answer.  But DIH isn't designed at all to work well with 
 NRT either...

 James Dyer
 E-Commerce Systems
 Ingram Content Group
 (615) 213-4311


 -Original Message-
 From: Klostermeyer, Michael [mailto:mklosterme...@riskexchange.com]
 Sent: Tuesday, July 03, 2012 1:55 PM
 To: solr-user@lucene.apache.org
 Subject: RE: DIH - unable to ADD individual new documents

 Some interesting findings over the last hours, that may change the context of 
 this discussion...

 Due to the nature of the application, I need the ability to fire off 
 individual ADDs on several different entities at basically the same time.  
 So, I am making 2-4 Solr ADD calls within 100ms of each other.  While 
 troubleshooting this, I found that if I only made 1 Solr ADD call (ignoring 
 the other entities), it updated the index as expected.  However, when all 
 were fired off, proper indexing did not occur (at least on one of the 
 entities) and no errors were logged.  I am still attempting to figure out if 
 ALL of the 2-4 entities failed to ADD, or if some failed and others succeeded.

 So...does this have something to do with Solr's index/message queuing (v3.5)? 
  How does Solr handle these types of rapid requests, and even more important, 
 how do I get the status of an individual DIH call vs simply the status of the 
 latest call at /dataimport?

 Mike


 -Original Message-
 From: Gora Mohanty [mailto:g...@mimirtech.com]
 Sent: Monday, July 02, 2012 10:02 PM
 To: solr-user@lucene.apache.org
 Subject: Re: DIH - unable to ADD individual new documents

 On 3 July 2012 07:54, Klostermeyer, Michael mklosterme...@riskexchange.com 
 wrote:
 I should add that I am using the full-import command in all cases, and 
 setting clean=false for the individual adds.

 What does the data-import page report at the end of the full-import, i.e., 
 how many documents were indexed?
 Are there any error messages in the Solr logs? Please share with us your DIH 
 configuration file, and Solr schema.xml.

 Regards,
 Gora


exception=Cannot create property=seed_provider for JavaBean=org.apache.cassandra.config.Config@6de1dadb

2012-07-03 Thread Venkat Veludandi
Good afternoon.
I downloaded Solandra from github and am trying to run it. I get an
exception about creating seed_provider.
I have attached cassandra.yaml for reference. Could you please let me know
what could be the problem with yaml.

Thanks
Venkat


RE: DIH - unable to ADD individual new documents

2012-07-03 Thread Klostermeyer, Michael
I haven't, but will consider those alternatives.  I think right now I'm going 
to go w/ a hybrid approach, meaning my scheduled and full updates will continue 
to use the DIH, as those seem to work really well.  My NRT indexing needs will
be handled via the JSON processor.  For individual updates this will enable me 
to utilize an existing ORM infrastructure fairly easily (famous last words, I 
know).
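That JSON path is just a POST of an array of documents to the update handler; a minimal sketch (the field names, URL, and commit choice are assumptions; in Solr 3.x the JSON endpoint is typically /update/json):

```python
import json
import urllib.request

# Hypothetical document; field names are illustrative.
doc = {"uuid": "doc-1", "name": "Example entity"}
payload = json.dumps([doc]).encode("utf-8")

def post_update(url="http://localhost:8983/solr/update/json?commit=true"):
    """POST one or more documents as a JSON array to the update handler."""
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)
```

Committing on every single-document add is expensive; batching commits (or relying on autoCommit) is usually the better trade-off.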

Thanks for the help, as always.

Mike


-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com] 
Sent: Tuesday, July 03, 2012 2:58 PM
To: solr-user@lucene.apache.org
Subject: Re: DIH - unable to ADD individual new documents

Mike:

Have you considered using one (or several) SolrJ clients to do your indexing? 
That can give you a finer control granularity than DIH. Or even do your NRT 
with SolrJ or

Here's an example program, you can take out the Tika stuff pretty easily..

Best
Erick

On Tue, Jul 3, 2012 at 3:35 PM, Klostermeyer, Michael 
mklosterme...@riskexchange.com wrote:
 Well that little bit of knowledge changes things for me, doesn't it?  I 
 appreciate your response very much.  Without knowing that about the DIH, I 
 attempted to have my DIH handler handle all circumstances, namely the 
 batch, scheduled job, and immediate/NRT indexing.  Looks like I'm going to 
 have to severely re-think that strategy.

 Thanks again...and if anyone has any further input how I can best/most 
 efficiently accomplish all 3 above, please let me know.

 Mike


 -Original Message-
 From: Dyer, James [mailto:james.d...@ingrambook.com]
 Sent: Tuesday, July 03, 2012 1:12 PM
 To: solr-user@lucene.apache.org
 Subject: RE: DIH - unable to ADD individual new documents

 A DIH request handler can only process one run at a time.  So if DIH is 
 still in process and you kick off a new DIH full-import it will silently 
 ignore the new command.  To have more than one DIH run going at a time it 
 is necessary to configure more than one handler instance in solrconfig.xml.
 But even then you'll have to be careful to find one that is free before 
 trying to use it.

 Regardless, to do what you want, you'll need to poll the DIH response screen 
 to be sure it isn't running before starting a new one.  It would be simplest 
 to leave it with just 1 DIH handler in solrconfig.xml.  If you've got to have 
 an undefined # of concurrent updates going at once you're best off to not use 
 DIH.

 Perhaps a usage pattern better suited to DIH's design is to put the
 doc id's in an update table with a timestamp.  Have your queries join to the 
 update table where timestamp > ${dih.last_index_time}.  Set up crontab or
 whatever to kick off DIH every so often.  If the prior run is still in 
 progress, it will just skip that run, but because we're dealing with 
 timestamps that get written automatically when DIH finishes, you will only 
 experience a delayed update, not a lost update.  By batching your updates 
 like this you will also have fewer commits, which will be beneficial for 
 performance all around.

 Of course if you're trying to do this with the near-real-time functionality 
 batching isn't your answer.  But DIH isn't designed at all to work well with 
 NRT either...

 James Dyer
 E-Commerce Systems
 Ingram Content Group
 (615) 213-4311


 -Original Message-
 From: Klostermeyer, Michael [mailto:mklosterme...@riskexchange.com]
 Sent: Tuesday, July 03, 2012 1:55 PM
 To: solr-user@lucene.apache.org
 Subject: RE: DIH - unable to ADD individual new documents

 Some interesting findings over the last hours, that may change the context of 
 this discussion...

 Due to the nature of the application, I need the ability to fire off 
 individual ADDs on several different entities at basically the same time.  
 So, I am making 2-4 Solr ADD calls within 100ms of each other.  While 
 troubleshooting this, I found that if I only made 1 Solr ADD call (ignoring 
 the other entities), it updated the index as expected.  However, when all 
 were fired off, proper indexing did not occur (at least on one of the 
 entities) and no errors were logged.  I am still attempting to figure out if 
 ALL of the 2-4 entities failed to ADD, or if some failed and others succeeded.

 So...does this have something to do with Solr's index/message queuing (v3.5)? 
  How does Solr handle these types of rapid requests, and even more important, 
 how do I get the status of an individual DIH call vs simply the status of the 
 latest call at /dataimport?

 Mike


 -Original Message-
 From: Gora Mohanty [mailto:g...@mimirtech.com]
 Sent: Monday, July 02, 2012 10:02 PM
 To: solr-user@lucene.apache.org
 Subject: Re: DIH - unable to ADD individual new documents

 On 3 July 2012 07:54, Klostermeyer, Michael mklosterme...@riskexchange.com 
 wrote:
 I should add that I am using the full-import command in all cases, and 
 setting clean=false for the individual adds.

 What does the data-import page report at the end of the 

Re: exception=Cannot create property=seed_provider for JavaBean=org.apache.cassandra.config.Config@6de1dadb

2012-07-03 Thread Rafał Kuć
Hello!

I think you should ask that question on Solandra mailing list as your issue is 
not connected directly to Solr, at least in my opinion :)

-- 
Regards,
 Rafał Kuć
 Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch

 Good afternoon.I downloaded Solandra from github and trying to run
 Solandra. I get an exception about creating seed_provider.
 I have attached cassandra.yaml for reference. Could you please let
 me know what could be the problem with yaml.
  

 Thanks
 Venkat



Re: leap second bug

2012-07-03 Thread Óscar Marín Miró
So, this was the solution, sorry to post it so late, just in case it helps
anyone:

/etc/init.d/ntp stop; date; date `date +%m%d%H%M%C%y.%S`; date;
/etc/init.d/ntp start

And tomcat magically switched from 100% CPU to 0.5% :)

From:

https://groups.google.com/forum/?fromgroups#!topic/elasticsearch/_I1_OfaL7QY

[from Michael McCandless help on this thread]

On Sun, Jul 1, 2012 at 6:15 PM, Jack Krupansky j...@basetechnology.comwrote:

 Interesting:

 
 The sequence of dates of the UTC second markers will be:

 2012 June 30, 23h 59m 59s
 2012 June 30, 23h 59m 60s
 2012 July 1, 0h 0m 0s
 

 See:
 http://wwp.greenwichmeantime.com/info/leap-second.htm

 So, there were two consecutive second  markers which were literally
 distinct, but numerically identical.

 What design pattern for timing did Linux violate? In other words, what
 lesson should we be learning to assure that we don't have a similar problem
 at an application level on a future leap second?

 -- Jack Krupansky

 -Original Message- From: Óscar Marín Miró
 Sent: Sunday, July 01, 2012 11:02 AM
 To: solr-user@lucene.apache.org
 Subject: Re: leap second bug


 Thanks Michael, nice information :)

 On Sun, Jul 1, 2012 at 5:29 PM, Michael McCandless 
 luc...@mikemccandless.com wrote:

  Looks like this is a low-level Linux issue ... see Shay's email to the
 ElasticSearch list about it:


 https://groups.google.com/forum/?fromgroups#!topic/elasticsearch/_I1_OfaL7QY

 Also see the comments here:

  
 http://news.ycombinator.com/item?id=4182642

 Mike McCandless

 http://blog.mikemccandless.com

 On Sun, Jul 1, 2012 at 8:08 AM, Óscar Marín Miró
 oscarmarinm...@gmail.com wrote:
  Hello Michael, thanks for the note :)
 
  I'm having a similar problem since yesterday, tomcats are wild on CPU
 [near
 100%]. Did your solr servers not reply to index/query requests?
 
  Thanks :)
 
  On Sun, Jul 1, 2012 at 1:22 PM, Michael Tsadikov 
 mich...@myheritage.com
 wrote:
 
  Our solr servers went into GC hell, and became non-responsive on date
  change today.
 
  Restarting tomcats did not help.
 
  Rebooting the machine did.
 
 
 
 http://www.wired.com/wiredenterprise/2012/07/leap-second-bug-wreaks-havoc-with-java-linux/
 
 
 
 
  --
  Whether it's science, technology, personal experience, true love,
  astrology, or gut feelings, each of us has confidence in something that
 we
  will never fully comprehend.
   --Roy H. William




 --
 Whether it's science, technology, personal experience, true love,
 astrology, or gut feelings, each of us has confidence in something that we
 will never fully comprehend.
 --Roy H. William




-- 
Whether it's science, technology, personal experience, true love,
astrology, or gut feelings, each of us has confidence in something that we
will never fully comprehend.
 --Roy H. William


Re: Something like 'bf' or 'bq' with MoreLikeThis

2012-07-03 Thread nanshi
Jack, can you please explain this in some more detail? Such as how to write
my own search component to modify request to add bq parameter and get
customized result back?

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Something-like-bf-or-bq-with-MoreLikeThis-tp3989060p3992888.html
Sent from the Solr - User mailing list archive at Nabble.com.


Nutch 1.4 with Solr 3.6 - compatible?

2012-07-03 Thread 12rad
Hi 

I am new to nutch and was trying it out using the instructions here 
http://wiki.apache.org/nutch/NutchTutorial

After changing the schema.xml of Solr to what Nutch has I keep getting this
error.. 
I am unable to start the solr server. 

 org.apache.solr.common.SolrException: undefined field text 
at
org.apache.solr.schema.IndexSchema.getDynamicFieldType(IndexSchema.java:1330) 
at
org.apache.solr.schema.IndexSchema$SolrQueryAnalyzer.getAnalyzer(IndexSchema.java:408)
 
at
org.apache.solr.schema.IndexSchema$SolrIndexAnalyzer.reusableTokenStream(IndexSchema.java:383)
 
at
org.apache.lucene.queryParser.QueryParser.getFieldQuery(QueryParser.java:574) 
at
org.apache.solr.search.SolrQueryParser.getFieldQuery(SolrQueryParser.java:206) 
at
org.apache.lucene.queryParser.QueryParser.Term(QueryParser.java:1429) 
at
org.apache.lucene.queryParser.QueryParser.Clause(QueryParser.java:1317) 
at
org.apache.lucene.queryParser.QueryParser.Query(QueryParser.java:1245) 
at
org.apache.lucene.queryParser.QueryParser.TopLevelQuery(QueryParser.java:1234) 
at
org.apache.lucene.queryParser.QueryParser.parse(QueryParser.java:206) 
at
org.apache.solr.search.LuceneQParser.parse(LuceneQParserPlugin.java:79) 
at org.apache.solr.search.QParser.getQuery(QParser.java:143) 
at
org.apache.solr.request.SimpleFacets.getFacetQueryCounts(SimpleFacets.java:233) 
at
org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:194) 
at
org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:72)
 
at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:186)
 
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
 
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1376) 
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:365) 
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:260)
 
at
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 
at
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) 
at
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) 
at
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) 
at
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) 
at
org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) 
at
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
 
at
org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114) 
at
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) 
at org.mortbay.jetty.Server.handle(Server.java:326) 

Anybody who's faced a similar issue? 
Do let me know. 
Thanks! 

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Nutch-1-4-with-Solr-3-6-compatible-tp3992891.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Nutch 1.4 with Solr 3.6 - compatible?

2012-07-03 Thread Jack Krupansky
Either add the "text" field back to the schema (consult the Solr example 
schema) or remove or change all references to "text" in solrconfig.xml to 
some other field that you do still have in the schema.
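The first option would look something like this in schema.xml (following the pattern in the Solr example schema; the exact field type name depends on your schema):

```xml
<!-- Restores the catch-all field that solrconfig.xml refers to;
     the type name may differ in your schema -->
<field name="text" type="text_general" indexed="true" stored="false"
       multiValued="true"/>
```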


-- Jack Krupansky

-Original Message- 
From: 12rad

Sent: Tuesday, July 03, 2012 7:30 PM
To: solr-user@lucene.apache.org
Subject: Nutch 1.4 with Solr 3.6 - compatible?

Hi

I am new to nutch and was trying it out using the instructions here
http://wiki.apache.org/nutch/NutchTutorial

After changing the schema.xml of Solr to what Nutch has I keep getting this
error..
I am unable to start the solr server.

org.apache.solr.common.SolrException: undefined field text
    at org.apache.solr.schema.IndexSchema.getDynamicFieldType(IndexSchema.java:1330)
    at org.apache.solr.schema.IndexSchema$SolrQueryAnalyzer.getAnalyzer(IndexSchema.java:408)
    at org.apache.solr.schema.IndexSchema$SolrIndexAnalyzer.reusableTokenStream(IndexSchema.java:383)
    at org.apache.lucene.queryParser.QueryParser.getFieldQuery(QueryParser.java:574)
    at org.apache.solr.search.SolrQueryParser.getFieldQuery(SolrQueryParser.java:206)
    at org.apache.lucene.queryParser.QueryParser.Term(QueryParser.java:1429)
    at org.apache.lucene.queryParser.QueryParser.Clause(QueryParser.java:1317)
    at org.apache.lucene.queryParser.QueryParser.Query(QueryParser.java:1245)
    at org.apache.lucene.queryParser.QueryParser.TopLevelQuery(QueryParser.java:1234)
    at org.apache.lucene.queryParser.QueryParser.parse(QueryParser.java:206)
    at org.apache.solr.search.LuceneQParser.parse(LuceneQParserPlugin.java:79)
    at org.apache.solr.search.QParser.getQuery(QParser.java:143)
    at org.apache.solr.request.SimpleFacets.getFacetQueryCounts(SimpleFacets.java:233)
    at org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:194)
    at org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:72)
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:186)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:1376)
    at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:365)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:260)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)

Has anybody faced a similar issue?
Do let me know.
Thanks!

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Nutch-1-4-with-Solr-3-6-compatible-tp3992891.html
Sent from the Solr - User mailing list archive at Nabble.com. 
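Alternatively, the error goes away if the schema keeps a catch-all "text" field for solrconfig.xml to reference. A minimal sketch of what a Nutch-derived schema.xml may be missing (field and type names are assumptions — match them to the field types your schema actually defines):

```xml
<!-- catch-all search field referenced by solrconfig.xml -->
<field name="text" type="text" indexed="true" stored="false" multiValued="true"/>
```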



[Error] Indexing with solr cell

2012-07-03 Thread savitha sundaramurthy
Hi ,

I'm using Solr Cell (via SolrJ) to index plain text files, but I am encountering
an IllegalCharsetNameException. Could you please point out whether anything
should be added to the schema.xml file? I can index the other MIME types
without problems. I gave the field type as text.

<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
            generateNumberParts="1" catenateWords="1" catenateNumbers="1"
            catenateAll="0"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.PorterStemFilterFactory" protected="protwords.txt"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
</fieldType>

Thanks a lot,
Savitha.s
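An IllegalCharsetNameException usually points at an encoding name that Java considers syntactically illegal (spaces, punctuation) rather than at a schema problem, so it is worth sanitizing the charset name before handing the file to Solr Cell. A minimal defensive sketch, assuming you detect or receive a charset name on the client side (the safeCharset helper is hypothetical, not part of Solr or SolrJ):

```java
import java.nio.charset.Charset;
import java.nio.charset.IllegalCharsetNameException;

public class CharsetCheck {

    // Return a canonical, safe charset name, falling back to UTF-8 when
    // the detected name is syntactically illegal or unsupported by this JVM.
    static String safeCharset(String detected) {
        try {
            if (detected != null && Charset.isSupported(detected)) {
                return Charset.forName(detected).name();
            }
        } catch (IllegalCharsetNameException e) {
            // Names with spaces or other illegal characters land here.
        }
        return "UTF-8";
    }

    public static void main(String[] args) {
        System.out.println(safeCharset("utf-8"));      // canonicalized
        System.out.println(safeCharset("no such !!")); // illegal -> fallback
        System.out.println(safeCharset(null));         // missing -> fallback
    }
}
```

With the name sanitized, the content type passed to Solr (e.g. "text/plain; charset=UTF-8") is guaranteed to be one the JVM can open.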


Re: Something like 'bf' or 'bq' with MoreLikeThis

2012-07-03 Thread Amit Nithian
I had a similar problem so I submitted this patch:
https://issues.apache.org/jira/browse/SOLR-2351

I haven't applied this patch to trunk in a while, but my goal was to ensure
that bf parameters were passed down and respected by the MLT handler.
Let me know whether this works for you. If there is sufficient
interest, I'll re-apply the patch to trunk and try to devise some
tests.

Thanks!
Amit

On Tue, Jul 3, 2012 at 5:08 PM, nanshi nanshi.e...@gmail.com wrote:
 Jack, can you please explain this in more detail? For example, how would I
 write my own search component to modify the request, add a bq parameter, and
 get the customized result back?

 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/Something-like-bf-or-bq-with-MoreLikeThis-tp3989060p3992888.html
 Sent from the Solr - User mailing list archive at Nabble.com.
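Until a change like SOLR-2351 is merged, one workaround is to fetch the MLT results and apply a bf-style boost on the client before presenting them. A minimal sketch in Java, assuming hypothetical Doc records holding the Solr score plus a field used for boosting (this is plain client-side code, not Solr API):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class MltRerank {

    // Hypothetical MLT result: id, Solr relevance score, and age in days.
    record Doc(String id, double score, long ageDays) {}

    // Mimic an additive bf: newer documents get a reciprocal-age bonus,
    // then everything is re-sorted by the combined score. This runs on
    // the client after fetching MLT results, not inside Solr.
    static List<Doc> rerank(List<Doc> docs) {
        List<Doc> out = new ArrayList<>(docs);
        out.sort(Comparator.comparingDouble(
                (Doc d) -> d.score() + 1.0 / (1.0 + d.ageDays())).reversed());
        return out;
    }

    public static void main(String[] args) {
        List<Doc> ranked = rerank(List.of(
                new Doc("a", 1.00, 400),   // higher score, old
                new Doc("b", 0.95, 0)));   // slightly lower score, fresh
        System.out.println(ranked.get(0).id()); // fresh doc wins the boost
    }
}
```

The boost function here (1 / (1 + ageDays)) is just a placeholder for whatever bf expression you would have used server-side.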


Re: difference between stored=false and stored=true ?

2012-07-03 Thread Amit Nithian
So, a couple of questions on this (a comment first, then a question):
1) I guess you can't have all four combinations, because
indexed=false/stored=false has no meaning?
2) If you set fewer fields to stored=true, does this reduce the memory
footprint of the document cache? Or better yet, could I store more
documents in the cache, possibly increasing my cache efficiency?

I read about the lazy loading of fields which seems like a good way to
maximize the cache and gain the advantage of storing data in Solr too.

Thanks
Amit

On Sat, Jun 30, 2012 at 11:01 AM, Giovanni Gherdovich
g.gherdov...@gmail.com wrote:
 Thank you François and Jack for those explanations.

 Cheers,
 GGhh

 2012/6/30 François Schiettecatte:
 Giovanni

 stored=true means the data is stored in the index and [...]


 2012/6/30 Jack Krupansky:
 indexed and stored are independent [...]
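
On the lazy-loading point Amit raises: stored field values live in the index on disk, and the document cache holds retrieved documents, so trimming stored=true fields mainly shrinks each cached document. Lazy field loading is switched on in the query section of solrconfig.xml; a minimal sketch (cache sizes are placeholders, tune them for your data):

```xml
<query>
  <!-- load stored fields only when a request actually asks for them -->
  <enableLazyFieldLoading>true</enableLazyFieldLoading>
  <documentCache class="solr.LRUCache" size="512"
                 initialSize="512" autowarmCount="0"/>
</query>
```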