Querying on dynamic field

2011-12-21 Thread Isan Fulia
Hi,

I have a dynamic field E_*
I want to search for E_abc*:something
Is there any way I can do this in Solr?

If it is not possible in Solr 3.4, does Solr 4.0 include wildcard queries on
dynamic fields?
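
For context, such a field is declared along these lines in schema.xml (a
sketch; the actual field type is an assumption):

  <dynamicField name="E_*" type="text" indexed="true" stored="true"/>

The question here is whether the field-name part of the query (E_abc*) can
itself be a wildcard matching several such dynamic fields.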


-- 
Thanks & Regards,
Isan Fulia.


Autocommit woes

2011-12-08 Thread Isan Fulia
Hi All,

My autocommit settings are
max docs - 1000
max time - 86 secs
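
In solrconfig.xml these settings look roughly like this (a sketch; note
that maxTime is specified in milliseconds):

  <autoCommit>
    <maxDocs>1000</maxDocs>
    <maxTime>86000</maxTime>
  </autoCommit>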

We have installed a New Relic agent to monitor our Solr performance.
There we see a continuous curve for autocommit, as if autocommits were
being fired continuously.

If an autocommit for certain documents takes some time and new documents
are added during that window, will those documents be included in the
ongoing autocommit operation, or will a new autocommit start for the newly
added documents (say the max time has elapsed) once the ongoing autocommit
is done?


-- 
Thanks & Regards,
Isan Fulia.


Re: Using solr during optimization

2011-11-15 Thread Isan Fulia
Hi Mark,

Thanks for the reply.

You are right. We first need to test with a lower mergeFactor, look at both
indexing and searching performance, and have some numbers in hand.
We should also check, after a partial optimize with the same mergeFactor,
how long the performance gain lasts (for both searching and indexing)
while continuously adding more documents.

Thanks,
Isan Fulia.

On 14 November 2011 19:41, Mark Miller markrmil...@gmail.com wrote:


 On Nov 14, 2011, at 8:27 AM, Isan Fulia wrote:

  Hi Mark,
 
  In the above case, what if the index is optimized partly, i.e. by
  specifying the max number of segments we want?
  It has been observed that after optimizing (even a partial
  optimization), both indexing and searching were faster than with an
  unoptimized index.

 Yes, this remains true - searching against fewer segments is faster than
 searching against many segments. Unless you have a really high merge
 factor, this is just generally not a big deal IMO.

 It tends to be something like, a given query is say 10-30% slower. If you
 have good performance though, this should often be something like a 50ms
 query goes to 80 or 90ms. You really have to decide/test if there is a
 practical difference to your users.

 You should also pay attention to how long that perf improvement lasts
 while you are continuously adding more documents. Is it a super high cost
 for a short perf boost?

  Decreasing the merge factor will affect performance, as it will
  increase the indexing time due to more frequent merges.

 True - it will essentially amortize the cost of reducing segments. Have
 you tested lower merge factors though? Does it really slow down indexing to
 the point where you find it unacceptable? I've been surprised in the past.
 Usually you can find a pretty nice balance.
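
 For reference, the merge factor is set in solrconfig.xml roughly like this
 (a sketch for the 1.4/3.x config layout, where the default is 10):

   <indexDefaults>
     <mergeFactor>5</mergeFactor>
   </indexDefaults>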

  So is it better to optimize partly (say once a month), rather than
  decrease the merge factor and hurt the indexing speed? Also, since we
  will be sharding, that 100 GB index will be divided across different
  shards.

 Partial optimize is a good option, and optimize is an option. They both
 exist for a reason ;) Many people pay the price because they assume they
 have to though, when they really have no practical need.

 Generally, the best way to manage the number of segments in your index is
 through the merge policy IMO - not necessarily optimize calls.

 I'm pretty sure optimize also blocks adds in previous versions of Solr as
 well - it grabs the commit lock. It won't do that in Solr 4, but that is
 another reason I wouldn't recommend it under normal circumstances.

 I look at optimize as a last option, or when creating a static index
 personally.

 
  Thanks,
  Isan Fulia.
 
 
 
  On 14 November 2011 11:28, Kalika Mishra kalika.mis...@germinait.com
 wrote:
 
  Hi Mark,
 
  Thanks for your reply.
 
   What you are saying is interesting; so are you suggesting that
   optimization should usually be done when there are not many updates?
   Also, can you please point out further under what conditions
   optimization might be beneficial?
 
  Thanks.
 
  On 11 November 2011 20:30, Mark Miller markrmil...@gmail.com wrote:
 
  I would not optimize - it's very expensive. With 11,000 updates a day,
 I
  think it makes sense to completely avoid optimizing.
 
  That should be your default move in any case. If you notice performance
  suffers more than is acceptable (good chance you won't), then I'd use a
  lower merge factor. It defaults to 10 - lower numbers will lower the
  number
  of segments in your index, and essentially amortize the cost of an
  optimize.
 
  Optimize is generally only useful when you will have a mostly static
  index.
 
  - Mark Miller
  lucidimagination.com
 
 
  On Nov 11, 2011, at 9:12 AM, Kalika Mishra wrote:
 
  Hi Mark,
 
   We are performing almost 11,000 updates a day, and we have around 50
   million docs in the index (I understand we will need to shard); the
   core segments will get fragmented over a period of time. We will need
   to optimize every few days or once a month; do you have any reason not
   to optimize the core? Please let me know.
 
  Thanks.
 
  On 11 November 2011 18:51, Mark Miller markrmil...@gmail.com wrote:
 
   Do you have something forcing you to optimize, or are you just doing
   it for the heck of it?
 
  On Nov 11, 2011, at 7:50 AM, Kalika Mishra wrote:
 
  Hi,
 
   I would like to optimize a Solr core which is in Reader/Writer mode.
   Since the Solr cores are huge in size (above 100 GB), the optimization
   takes hours to complete.
  
   When the optimization is going on, say on the Writer core, the
   application wants to continue using the indexes for both query and
   write purposes. What is the best approach to do this?
  
   I was thinking of using a temporary index (empty core) to write the
   documents and use the same Reader to read the documents. (Please note
   that the temp index and the Reader cannot be made Reader/Writer, as the
   Reader is already set up for the Writer on which

Re: Using solr during optimization

2011-11-14 Thread Isan Fulia
Hi Mark,

In the above case, what if the index is optimized partly, i.e. by
specifying the max number of segments we want?
It has been observed that after optimizing (even a partial optimization),
both indexing and searching were faster than with an unoptimized index.
Decreasing the merge factor will affect performance, as it will increase
the indexing time due to more frequent merges.
So is it better to optimize partly (say once a month) rather than decrease
the merge factor and hurt the indexing speed? Also, since we will be
sharding, that 100 GB index will be divided across different shards.
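
For reference, a partial optimize is requested by passing maxSegments to
the update handler, roughly like this (a sketch; host and core path are
hypothetical):

  http://localhost:8983/solr/update?optimize=true&maxSegments=10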

Thanks,
Isan Fulia.



On 14 November 2011 11:28, Kalika Mishra kalika.mis...@germinait.com wrote:

 Hi Mark,

 Thanks for your reply.

  What you are saying is interesting; so are you suggesting that
  optimization should usually be done when there are not many updates?
  Also, can you please point out further under what conditions
  optimization might be beneficial?

 Thanks.

 On 11 November 2011 20:30, Mark Miller markrmil...@gmail.com wrote:

  I would not optimize - it's very expensive. With 11,000 updates a day, I
  think it makes sense to completely avoid optimizing.
 
  That should be your default move in any case. If you notice performance
  suffers more than is acceptable (good chance you won't), then I'd use a
  lower merge factor. It defaults to 10 - lower numbers will lower the
 number
  of segments in your index, and essentially amortize the cost of an
 optimize.
 
  Optimize is generally only useful when you will have a mostly static
 index.
 
  - Mark Miller
  lucidimagination.com
 
 
  On Nov 11, 2011, at 9:12 AM, Kalika Mishra wrote:
 
   Hi Mark,
  
    We are performing almost 11,000 updates a day, and we have around 50
    million docs in the index (I understand we will need to shard); the
    core segments will get fragmented over a period of time. We will need
    to optimize every few days or once a month; do you have any reason
    not to optimize the core? Please let me know.
  
   Thanks.
  
   On 11 November 2011 18:51, Mark Miller markrmil...@gmail.com wrote:
  
    Do you have something forcing you to optimize, or are you just doing
    it for the heck of it?
  
   On Nov 11, 2011, at 7:50 AM, Kalika Mishra wrote:
  
   Hi,
  
    I would like to optimize a Solr core which is in Reader/Writer mode.
    Since the Solr cores are huge in size (above 100 GB), the optimization
    takes hours to complete.
   
    When the optimization is going on, say on the Writer core, the
    application wants to continue using the indexes for both query and
    write purposes. What is the best approach to do this?
   
    I was thinking of using a temporary index (empty core) to write the
    documents and use the same Reader to read the documents. (Please note
    that the temp index and the Reader cannot be made Reader/Writer, as
    the Reader is already set up for the Writer on which optimization is
    taking place.) But there could be some updates to the temp index which
    I would like to get reflected in the Reader. What's the best setup to
    support this?
  
   Thanks,
   Kalika
  
   - Mark Miller
   lucidimagination.com
  
   --
    Thanks & Regards,
   Kalika
 


 --
 Thanks & Regards,
 Kalika




-- 
Thanks & Regards,
Isan Fulia.


Re: Solr stopword problem in Query

2011-10-03 Thread Isan Fulia
Thanks Erick.

On 29 September 2011 18:31, Erick Erickson erickerick...@gmail.com wrote:

 I think your problem is that you've set

 omitTermFreqAndPositions=true

 It's not real clear from the Wiki page, but
 the tricky little phrase is:

 "Queries that rely on position that are issued
 on a field with this option will silently fail to
 find documents."

 And phrase queries rely on position information.

 Best
 Erick

 On Tue, Sep 27, 2011 at 11:00 AM, Rahul Warawdekar
 rahul.warawde...@gmail.com wrote:
  Hi Isan,
 
  The schema.xml seems OK to me.
 
  Is textForQuery the only field you are searching in?
  Are you also searching on any other non-text-based fields? If yes,
  please provide the schema description for those fields also.
  Also, provide your solrconfig.xml file.
 
 
  On Tue, Sep 27, 2011 at 1:12 AM, Isan Fulia isan.fu...@germinait.com
 wrote:
 
  Hi Rahul,
 
  I also tried searching "Coke Studio MTV" but no documents were returned.
 
  Here is the snippet of my schema file.
 
   <fieldType name="text" class="solr.TextField"
       positionIncrementGap="100" autoGeneratePhraseQueries="true">

     <analyzer type="index">
       <tokenizer class="solr.WhitespaceTokenizerFactory"/>

       <filter class="solr.StopFilterFactory"
               ignoreCase="true"
               words="stopwords_en.txt"
               enablePositionIncrements="true"
               />
       <filter class="solr.WordDelimiterFilterFactory"
               generateWordParts="1" generateNumberParts="1"
               catenateWords="1" catenateNumbers="1" catenateAll="0"
               splitOnCaseChange="1"/>

       <filter class="solr.LowerCaseFilterFactory"/>

       <filter class="solr.KeywordMarkerFilterFactory"
               protected="protwords.txt"/>

       <filter class="solr.PorterStemFilterFactory"/>
     </analyzer>

     <analyzer type="query">
       <tokenizer class="solr.WhitespaceTokenizerFactory"/>

       <filter class="solr.SynonymFilterFactory"
               synonyms="synonyms.txt" ignoreCase="true" expand="true"/>

       <filter class="solr.StopFilterFactory"
               ignoreCase="true"
               words="stopwords_en.txt"
               enablePositionIncrements="true"
               />
       <filter class="solr.WordDelimiterFilterFactory"
               generateWordParts="1" generateNumberParts="1"
               catenateWords="0" catenateNumbers="0" catenateAll="0"
               splitOnCaseChange="1"/>

       <filter class="solr.LowerCaseFilterFactory"/>

       <filter class="solr.KeywordMarkerFilterFactory"
               protected="protwords.txt"/>

       <filter class="solr.PorterStemFilterFactory"/>
     </analyzer>

   </fieldType>


   <field name="content" type="text" indexed="false" stored="true"
          multiValued="false"/>
   <field name="title" type="text" indexed="false" stored="true"
          multiValued="false"/>

   <field name="textForQuery" type="text" indexed="true" stored="false"
          multiValued="true" omitTermFreqAndPositions="true"/>

   <copyField source="content" dest="textForQuery"/>
   <copyField source="title" dest="textForQuery"/>
 
 
  Thanks,
  Isan Fulia.
 
 
  On 26 September 2011 21:19, Rahul Warawdekar 
 rahul.warawde...@gmail.com
  wrote:
 
   Hi Isan,
  
    Does your search return any documents when you remove the 'at' keyword
    and just search for "Coke studio MTV"?
   Also, can you please provide the snippet of schema.xml file where you
  have
   mentioned this field name and its type description ?
  
   On Mon, Sep 26, 2011 at 6:09 AM, Isan Fulia isan.fu...@germinait.com
   wrote:
  
Hi all,
   
 I have a text field named textForQuery.
 The following content has been indexed into Solr in the field
 textForQuery: "Coke Studio at MTV"
    
 When I fired the query
 textForQuery:(coke studio at mtv), the results showed 0 documents.
    
 After running the same query in debug mode I got the following results:
    
 <result name="response" numFound="0" start="0"/>
 <lst name="debug">
 <str name="rawquerystring">textForQuery:(coke studio at mtv)</str>
 <str name="querystring">textForQuery:(coke studio at mtv)</str>
 <str name="parsedquery">PhraseQuery(textForQuery:"coke studio ? mtv")</str>
 <str name="parsedquery_toString">textForQuery:"coke studio ? mtv"</str>
    
 Why did the query not match any document, even when there is a document
 with textForQuery value "Coke Studio at MTV"?
 Is this because of the stopword "at" present in the stopword list?
   
   
   
--
Thanks & Regards,
Isan Fulia.
   
  
  
  
   --
   Thanks and Regards
   Rahul A. Warawdekar
  
 
 
 
  --
  Thanks & Regards,
  Isan Fulia.
 
 
 
 
  --
  Thanks and Regards
  Rahul A. Warawdekar
 




-- 
Thanks & Regards,
Isan Fulia.


Re: Query failing because of omitTermFreqAndPositions

2011-10-03 Thread Isan Fulia
Hi Mike,

Thanks for the information. But why is it that once a field has omitted
positions in the past, it will always omit positions,
even if omitTermFreqAndPositions is set back to false?

Thanks,
Isan Fulia.

On 29 September 2011 17:49, Michael McCandless luc...@mikemccandless.com wrote:

 Once a given field has omitted positions in the past, even for just
 one document, it sticks and that field will forever omit positions.

 Try creating a new index, never omitting positions from that field?

 Mike McCandless

 http://blog.mikemccandless.com

 On Thu, Sep 29, 2011 at 1:14 AM, Isan Fulia isan.fu...@germinait.com
 wrote:
  Hi All,
 
  My schema consisted of the field textForQuery, which was defined as

  <field name="textForQuery" type="text" indexed="true" stored="false"
         multiValued="true"/>

  After indexing 10 lakh (1 million) documents I changed the field to

  <field name="textForQuery" type="text" indexed="true" stored="false"
         multiValued="true" omitTermFreqAndPositions="true"/>

  So documents that were indexed after that omitted the position
  information of the terms.
  As a result I was not able to search for text which relies on position
  information, e.g. "coke studio at mtv", even though it is present in
  some documents.

  So I again changed the field textForQuery back to

  <field name="textForQuery" type="text" indexed="true" stored="false"
         multiValued="true"/>

  But now, even for newly added documents, queries requiring position
  information are still failing.
  For example, I reindexed certain documents that contain "coke studio at
  mtv", but the query still returns no documents when searching for
  textForQuery:"coke studio at mtv"

  Can anyone please help me out with why this is happening?
 
 
  --
  Thanks & Regards,
  Isan Fulia.
 




-- 
Thanks & Regards,
Isan Fulia.


Query failing because of omitTermFreqAndPositions

2011-09-28 Thread Isan Fulia
Hi All,

My schema consisted of the field textForQuery, which was defined as

<field name="textForQuery" type="text" indexed="true" stored="false"
       multiValued="true"/>

After indexing 10 lakh (1 million) documents I changed the field to

<field name="textForQuery" type="text" indexed="true" stored="false"
       multiValued="true" omitTermFreqAndPositions="true"/>

So documents that were indexed after that omitted the position information
of the terms.
As a result I was not able to search for text which relies on position
information, e.g. "coke studio at mtv", even though it is present in some
documents.

So I again changed the field textForQuery back to

<field name="textForQuery" type="text" indexed="true" stored="false"
       multiValued="true"/>

But now, even for newly added documents, queries requiring position
information are still failing.
For example, I reindexed certain documents that contain "coke studio at
mtv", but the query still returns no documents when searching for
textForQuery:"coke studio at mtv"

Can anyone please help me out with why this is happening?


-- 
Thanks & Regards,
Isan Fulia.


Solr stopword problem in Query

2011-09-26 Thread Isan Fulia
Hi all,

I have a text field named textForQuery.
The following content has been indexed into Solr in the field textForQuery:
"Coke Studio at MTV"

When I fired the query
textForQuery:(coke studio at mtv), the results showed 0 documents.

After running the same query in debug mode I got the following results:

<result name="response" numFound="0" start="0"/>
<lst name="debug">
<str name="rawquerystring">textForQuery:(coke studio at mtv)</str>
<str name="querystring">textForQuery:(coke studio at mtv)</str>
<str name="parsedquery">PhraseQuery(textForQuery:"coke studio ? mtv")</str>
<str name="parsedquery_toString">textForQuery:"coke studio ? mtv"</str>

Why did the query not match any document, even when there is a document
with textForQuery value "Coke Studio at MTV"?
Is this because of the stopword "at" present in the stopword list?



-- 
Thanks & Regards,
Isan Fulia.


Re: Solr stopword problem in Query

2011-09-26 Thread Isan Fulia
Hi Rahul,

I also tried searching "Coke Studio MTV" but no documents were returned.

Here is the snippet of my schema file.

 <fieldType name="text" class="solr.TextField"
     positionIncrementGap="100" autoGeneratePhraseQueries="true">

   <analyzer type="index">
     <tokenizer class="solr.WhitespaceTokenizerFactory"/>

     <filter class="solr.StopFilterFactory"
             ignoreCase="true"
             words="stopwords_en.txt"
             enablePositionIncrements="true"
             />
     <filter class="solr.WordDelimiterFilterFactory"
             generateWordParts="1" generateNumberParts="1"
             catenateWords="1" catenateNumbers="1" catenateAll="0"
             splitOnCaseChange="1"/>

     <filter class="solr.LowerCaseFilterFactory"/>

     <filter class="solr.KeywordMarkerFilterFactory"
             protected="protwords.txt"/>

     <filter class="solr.PorterStemFilterFactory"/>
   </analyzer>

   <analyzer type="query">
     <tokenizer class="solr.WhitespaceTokenizerFactory"/>

     <filter class="solr.SynonymFilterFactory"
             synonyms="synonyms.txt" ignoreCase="true" expand="true"/>

     <filter class="solr.StopFilterFactory"
             ignoreCase="true"
             words="stopwords_en.txt"
             enablePositionIncrements="true"
             />
     <filter class="solr.WordDelimiterFilterFactory"
             generateWordParts="1" generateNumberParts="1"
             catenateWords="0" catenateNumbers="0" catenateAll="0"
             splitOnCaseChange="1"/>

     <filter class="solr.LowerCaseFilterFactory"/>

     <filter class="solr.KeywordMarkerFilterFactory"
             protected="protwords.txt"/>

     <filter class="solr.PorterStemFilterFactory"/>
   </analyzer>

 </fieldType>


 <field name="content" type="text" indexed="false" stored="true"
        multiValued="false"/>
 <field name="title" type="text" indexed="false" stored="true"
        multiValued="false"/>

 <field name="textForQuery" type="text" indexed="true" stored="false"
        multiValued="true" omitTermFreqAndPositions="true"/>

 <copyField source="content" dest="textForQuery"/>
 <copyField source="title" dest="textForQuery"/>


Thanks,
Isan Fulia.


On 26 September 2011 21:19, Rahul Warawdekar rahul.warawde...@gmail.com wrote:

 Hi Isan,

 Does your search return any documents when you remove the 'at' keyword and
 just search for "Coke studio MTV"?
 Also, can you please provide the snippet of the schema.xml file where you
 have mentioned this field name and its type description?

 On Mon, Sep 26, 2011 at 6:09 AM, Isan Fulia isan.fu...@germinait.com
 wrote:

  Hi all,
 
  I have a text field named textForQuery.
  The following content has been indexed into Solr in the field
  textForQuery: "Coke Studio at MTV"
 
  When I fired the query
  textForQuery:(coke studio at mtv), the results showed 0 documents.
 
  After running the same query in debug mode I got the following results:
 
  <result name="response" numFound="0" start="0"/>
  <lst name="debug">
  <str name="rawquerystring">textForQuery:(coke studio at mtv)</str>
  <str name="querystring">textForQuery:(coke studio at mtv)</str>
  <str name="parsedquery">PhraseQuery(textForQuery:"coke studio ? mtv")</str>
  <str name="parsedquery_toString">textForQuery:"coke studio ? mtv"</str>
 
  Why did the query not match any document, even when there is a document
  with textForQuery value "Coke Studio at MTV"?
  Is this because of the stopword "at" present in the stopword list?
 
 
 
  --
  Thanks & Regards,
  Isan Fulia.
 



 --
 Thanks and Regards
 Rahul A. Warawdekar




-- 
Thanks & Regards,
Isan Fulia.


Re: Upgrading solr from 3.3 to 3.4

2011-09-19 Thread Isan Fulia
Hi ,

Yes, we need to upgrade, but my question is whether reindexing of all cores
is required
or
we can directly use the already-indexed data folders from Solr 3.3 in Solr 3.4.

Thanks,
Isan Fulia.





On 19 September 2011 11:03, Wyhw Whon w...@microgle.com wrote:

 If you are already using Apache Lucene 3.1, 3.2 or 3.3, we strongly
 recommend you upgrade to 3.4.0 because of the index corruption bug on
 OS or computer crash or power loss (LUCENE-3418), now fixed in 3.4.0.

 2011/9/19 Isan Fulia isan.fu...@germinait.com

  Hi all,
 
  Does upgrading Solr from 3.3 to 3.4 require reindexing of all the cores,
  or can we directly copy the data folders to the new Solr?
 
 
  --
  Thanks & Regards,
  Isan Fulia.
 




-- 
Thanks & Regards,
Isan Fulia.


Re: Upgrading solr from 3.3 to 3.4

2011-09-19 Thread Isan Fulia
Thanks Erick.


On 19 September 2011 15:10, Erik Hatcher erik.hatc...@gmail.com wrote:

 Reindexing is not necessary.  Drop in 3.4 and go.

 For this sort of scenario, it's easy enough to try using a copy of your
 SOLR_HOME directory with an instance of the newest release of Solr.  If
 the release notes don't say a reindex is necessary, then it's not, but
 always a good idea to try it and run any tests you have handy.

Erik



 On Sep 19, 2011, at 00:02 , Isan Fulia wrote:

  Hi ,
 
  Yes, we need to upgrade, but my question is whether reindexing of all
  cores is required
  or
  we can directly use the already-indexed data folders from Solr 3.3 in
  Solr 3.4.
 
  Thanks,
  Isan Fulia.
 
 
 
 
 
  On 19 September 2011 11:03, Wyhw Whon w...@microgle.com wrote:
 
  If you are already using Apache Lucene 3.1, 3.2 or 3.3, we strongly
  recommend you upgrade to 3.4.0 because of the index corruption bug on
  OS or computer crash or power loss (LUCENE-3418), now fixed in 3.4.0.
 
  2011/9/19 Isan Fulia isan.fu...@germinait.com
 
  Hi all,
 
  Does upgrading Solr from 3.3 to 3.4 require reindexing of all the
  cores, or can we directly copy the data folders to the new Solr?
 
 
  --
  Thanks & Regards,
  Isan Fulia.
 
 
 
 
 
  --
  Thanks & Regards,
  Isan Fulia.




-- 
Thanks & Regards,
Isan Fulia.


Upgrading solr from 3.3 to 3.4

2011-09-18 Thread Isan Fulia
Hi all,

Does upgrading Solr from 3.3 to 3.4 require reindexing of all the cores, or
can we directly copy the data folders to
the new Solr?


-- 
Thanks & Regards,
Isan Fulia.


Re: Using lowercase as field type

2011-05-04 Thread Isan Fulia
I want multiple documents with the same unique key to overwrite each other,
but they are not overwriting because of the lowercase field type used as
the unique key.

On 4 May 2011 11:45, Markus Jelsma markus.jel...@openindex.io wrote:

 So those multiple documents overwrite each other? In that case, your data
 is not suited for a lowercased docID. I'd recommend not doing any analysis
 on the docID to prevent such headaches.

  Hi ,
 
  My schema consists of a field of type lowercase (for applying the
  lowercase filter factory) which is the unique key. But it is no longer
  behaving as a unique key: multiple documents with the same value for the
  unique key are getting indexed.
  Does anyone know why this is happening, or is it that a field of type
  lowercase cannot be a unique key?




-- 
Thanks & Regards,
Isan Fulia.


Using lowercase as field type

2011-05-03 Thread Isan Fulia
Hi ,

My schema consists of a field of type lowercase (for applying the lowercase
filter factory) which is the unique key. But it is no longer behaving as a
unique key: multiple documents with the same value for the unique key are
getting indexed.
Does anyone know why this is happening, or is it that a field of type
lowercase cannot be a unique key?
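
The field type in question is presumably declared along these lines in
schema.xml (a sketch; the exact analyzer chain is an assumption):

  <fieldType name="lowercase" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>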

-- 
Thanks & Regards,
Isan Fulia.


Migrating from solr 1.4.1 to 3.1.0

2011-04-06 Thread Isan Fulia
Hi all,

Solr 3.1.0 uses a different javabin format from 1.4.1.
So if I use the SolrJ 1.4.1 jar, then I get a javabin error while saving to
3.1.0,
and if I use the SolrJ 3.1.0 jar, then I get a javabin error while reading
documents from Solr 1.4.1.

How should I go about reindexing in this situation?

-- 
Thanks & Regards,
Isan Fulia.


Re: FW: Very very large scale Solr Deployment = how to do (Expert Question)?

2011-04-05 Thread Isan Fulia
Hi Ephraim/Jen,

Can you share that diagram with everyone? It may really help all of us.
Thanks,
Isan Fulia.

On 6 April 2011 10:15, Tirthankar Chatterjee tchatter...@commvault.com wrote:

 Hi Jen,
 Can you please forward the diagram attachment too that Ephraim sent. :-)
 Thanks,
 Tirthankar

 -Original Message-
 From: Jens Mueller [mailto:supidupi...@googlemail.com]
 Sent: Tuesday, April 05, 2011 10:30 PM
 To: solr-user@lucene.apache.org
 Subject: Re: FW: Very very large scale Solr Deployment = how to do (Expert
 Question)?

 Hello Ephraim,

 thank you so much for the great Document/Scaling-Concept!!

 First, I think you really should publish this on the Solr wiki. This
 approach is documented nowhere there and not really obvious for newbies,
 and your document is great and explains this very well!
 
 Please allow me two further questions regarding your document:
 1.) Is it correct that by DB you mean the origin data source of the data
 that is fed into the Solr cloud for searching?

 2.) Solr Aggregator: This term did not yield any Google results, but it is
 a very important aspect of your design (and this was the missing piece for
 me when thinking about Solr architectures): Is it correct that the
 aggregators are simply Tomcat instances with the Solr webapp deployed?
 These aggregators do not have their own index but only run the Solr webapp,
 and I access them via the ?shards= parameter giving the shards I want to
 query? (So in the end they aggregate the data of the shards but do not have
 their own data.) This is really an important aspect that is not documented
 well enough in the Solr documentation.
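
 For reference, a distributed query against such an aggregator looks
 roughly like this (host names are hypothetical):

 http://aggregator:8983/solr/select?q=foo&shards=shard1:8983/solr,shard2:8983/solr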

 Thank you very much!
 Jens


 2011/4/5 Ephraim Ofir ephra...@icq.com

  of course the attachment didn't get to the list, so here it is if you
  want it...
 
  Ephraim Ofir
 
 
  -Original Message-
  From: Ephraim Ofir
  Sent: Tuesday, April 05, 2011 10:20 AM
  To: 'solr-user@lucene.apache.org'
  Subject: RE: Very very large scale Solr Deployment = how to do (Expert
  Question)?
 
  I'm not sure about the scale you're aiming for, but you probably want
  to do both sharding and replication.  There's no central server which
  would be the bottleneck. The guidelines should probably be something
 like:
  1. Split your index to enough shards so it can keep up with the update
  rate.
  2. Have enough replicates of each shard master to keep up with the
  rate of queries.
  3. Have enough aggregators in front of the shard replicates so the
  aggregation doesn't become a bottleneck.
  4. Make sure you have good load balancing across your system.
 
  Attached is a diagram of the setup we have.  You might want to look
  into SolrCloud as well.
 
  Ephraim Ofir
 
 
  -Original Message-
  From: Jens Mueller [mailto:supidupi...@googlemail.com]
  Sent: Tuesday, April 05, 2011 4:25 AM
  To: solr-user@lucene.apache.org
  Subject: Very very large scale Solr Deployment = how to do (Expert
  Question)?
 
  Hello Experts,
 
 
 
   I am a Solr newbie but have read quite a lot of docs. I still do not
   understand what would be the best way to set up very large scale
   deployments:
 
 
 
   Goal (theoretical):
  
    A) Index size: 1 Petabyte (1 document is about 5 KB in size)
  
    B) Queries: 10 queries per second
  
    C) Updates: 10 updates per second
 
 
 
 
  Solr offers:
 
   1.) Replication = scales well for B), BUT A) and C) are not satisfied.
  
   2.) Sharding = scales well for A), BUT B) and C) are not satisfied.
   (As I understand the sharding approach, everything goes through a
   central server that dispatches the updates and assembles the queries
   retrieved from the different shards. But this central server also has
   some capacity limits...)
 
 
 
 
   What is the right approach to handle such large deployments? I would
   be thankful for just a rough sketch of the concepts so I can
   experiment/search further...
  
   Maybe I am missing something very trivial, as I think some of the Solr
   users/use cases on the homepage are that kind of large deployment.
   How are they implemented?
  
   Thank you very much!!!
 
  Jens
 
 **Legal Disclaimer***
 This communication may contain confidential and privileged
 material for the sole use of the intended recipient. Any
 unauthorized review, use or distribution by others is strictly
 prohibited. If you have received the message in error, please
 advise the sender by reply email and delete the message. Thank
 you.
 *




-- 
Thanks & Regards,
Isan Fulia.


Re: RamBufferSize and AutoCommit

2011-03-29 Thread Isan Fulia
Hi Erick,
I am actually getting an out-of-memory error.
As I told earlier, my ramBufferSize is the default (32 MB). What could be
the reasons for getting this error?
Can you please share your views?


On 28 March 2011 17:55, Erick Erickson erickerick...@gmail.com wrote:

 Also note that making RAMBufferSize too big isn't useful. Lucid
 recommends 128M as the point over which you hit diminishing
 returns. But unless you're having problems speed-wise with the
 default, why change it?

 And are you actually getting OOMs or is this a background question?

 Best
 Erick

 On Mon, Mar 28, 2011 at 6:23 AM, Li Li fancye...@gmail.com wrote:
  There are 3 conditions that will trigger an auto flush in Lucene:
  1. the size of the index in RAM is larger than the RAM buffer size;
  2. the number of documents in memory is larger than the number set by
     setMaxBufferedDocs;
  3. the number of deleted terms is larger than the ratio set by
     setMaxBufferedDeleteTerms.
 
  Auto flushing by time interval is added by Solr.
 
  ramBufferSize uses an estimated size, and the real memory used may be
  larger than this value. So if your Xmx is 2700m, setRAMBufferSizeMB
  should be given a value less than that. If you set the RAM buffer to
  2700m and the other conditions are not triggered, I think it will hit
  an OOM exception.
 
  2011/3/28 Isan Fulia isan.fu...@germinait.com:
  Hi all ,
 
  I would like to know whether there is any relation between autocommit
  and ramBufferSize.
  My solr config does not contain ramBufferSize, which means it is the
  default (32 MB). The autocommit settings are: after 500 docs or 80 sec,
  whichever comes first.
  Solr starts with Xmx 2700M; total RAM is 4 GB.
  Is the RAM buffer allocated outside the heap memory (2700M)?
  How is ramBufferSize related to out-of-memory errors?
  What is the optimal value for ramBufferSize?
 
  --
  Thanks & Regards,
  Isan Fulia.
 
 




-- 
Thanks & Regards,
Isan Fulia.


RamBufferSize and AutoCommit

2011-03-28 Thread Isan Fulia
Hi all ,

I would like to know whether there is any relation between autocommit and
ramBufferSize.
My solr config does not contain ramBufferSize, which means it is the
default (32 MB). The autocommit settings are: after 500 docs or 80 sec,
whichever comes first.
Solr starts with Xmx 2700M; total RAM is 4 GB.
Is the RAM buffer allocated outside the heap memory (2700M)?
How is ramBufferSize related to out-of-memory errors?
What is the optimal value for ramBufferSize?
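
For reference, these two settings live in solrconfig.xml roughly as
follows (a sketch; autocommit maxTime is in milliseconds):

  <indexDefaults>
    <ramBufferSizeMB>32</ramBufferSizeMB>
  </indexDefaults>

  <autoCommit>
    <maxDocs>500</maxDocs>
    <maxTime>80000</maxTime>
  </autoCommit>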

-- 
Thanks & Regards,
Isan Fulia.


LucidGaze Monitoring tool

2011-03-09 Thread Isan Fulia
Hi all,
Does anyone know what 'm' on the y-axis stands for in the req/sec graph for
the update handler?

-- 
Thanks & Regards,
Isan Fulia.


StreamingUpdateSolrServer

2011-03-07 Thread Isan Fulia
Hi all,
I am using StreamingUpdateSolrServer with queueSize = 5 and threadCount = 4.
The number of connections created is the same as the threadCount.
Is it that it creates a new connection for every thread?
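
For reference, a minimal SolrJ sketch of constructing such a server (the
URL is hypothetical):

  import java.net.MalformedURLException;
  import org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer;

  public class IndexClient {
      public static void main(String[] args) throws MalformedURLException {
          // queueSize = 5, threadCount = 4: four background runner threads
          // drain the queue of buffered update requests.
          StreamingUpdateSolrServer server =
              new StreamingUpdateSolrServer("http://localhost:8983/solr", 5, 4);
      }
  }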


-- 
Thanks & Regards,
Isan Fulia.


Separating Index Reader and Writer

2011-02-06 Thread Isan Fulia
Hi all,
I have set up two indexes, one for reading (R) and the other for writing
(W). Index R refers to the same data dir as W (defined in solrconfig via
dataDir).
To make sure the R index sees the documents indexed by W, I am firing an
empty commit on R.
With this, I am getting a performance improvement compared to using the
same index for reading and writing.
Can anyone help me understand why this performance improvement takes place
even though both indexes point to the same data directory?
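
For reference, pointing the reader core at the writer's index is done with
the dataDir element in the reader's solrconfig.xml (a sketch; the path is
hypothetical):

  <dataDir>/var/solr/writerCore/data</dataDir>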

-- 
Thanks & Regards,
Isan Fulia.


Re: Separating Index Reader and Writer

2011-02-06 Thread Isan Fulia
Hi Peter,
Can you elaborate a little on how the performance gain from cache warming
works? I am getting a good improvement in search time.

On 6 February 2011 23:29, Peter Sturge peter.stu...@gmail.com wrote:

 Hi,

 We use this scenario in production where we have one write-only Solr
 instance and 1 read-only, pointing to the same data.
 We do this so we can optimize caching/etc. for each instance for
 write/read. The main performance gain is in cache warming and
 associated parameters.
 For your Index W, it's worth turning off cache warming altogether, so
 commits aren't slowed down by warming.

 Peter


 On Sun, Feb 6, 2011 at 3:25 PM, Isan Fulia isan.fu...@germinait.com
 wrote:
  Hi all,
   I have set up two indexes, one for reading (R) and the other for
   writing (W). Index R refers to the same data dir as W (defined in
   solrconfig via dataDir).
   To make sure the R index sees the documents indexed by W, I am firing
   an empty commit on R.
   With this, I am getting a performance improvement compared to using the
   same index for reading and writing.
   Can anyone help me understand why this performance improvement takes
   place even though both indexes point to the same data directory?
 
  --
   Thanks & Regards,
  Isan Fulia.
 




-- 
Thanks & Regards,
Isan Fulia.


facet.mincount

2011-02-03 Thread Isan Fulia
Hi all,
Even after setting facet.mincount=1, it is showing results with count = 0.
Does anyone know why this is happening?

-- 
Thanks & Regards,
Isan Fulia.


Re: facet.mincount

2011-02-03 Thread Isan Fulia
Any query followed by

facet=on&facet.date=aUpdDt&facet.date.start=2011-01-02T08:00:00.000Z&facet.date.end=2011-02-03T08:00:00.000Z&facet.date.gap=%2B1HOUR&facet.mincount=1

On 3 February 2011 15:14, Savvas-Andreas Moysidis 
savvas.andreas.moysi...@googlemail.com wrote:

 could you post the query you are submitting to Solr?

 On 3 February 2011 09:33, Isan Fulia isan.fu...@germinait.com wrote:

  Hi all,
  Even after making facet.mincount=1 , it is showing the results with count
 =
  0.
  Does anyone know why this is happening.
 
  --
   Thanks & Regards,
  Isan Fulia.
 




-- 
Thanks & Regards,
Isan Fulia.


Re: facet.mincount

2011-02-03 Thread Isan Fulia
I am using the Solr 1.4.1 release version.
I got the following error while using facet.mincount:
java.lang.IllegalStateException: STREAM
        at org.mortbay.jetty.Response.getWriter(Response.java:571)
        at org.apache.jasper.runtime.JspWriterImpl.initOut(JspWriterImpl.java:158)
        at org.apache.jasper.runtime.JspWriterImpl.flushBuffer(JspWriterImpl.java:151)
        at org.apache.jasper.runtime.PageContextImpl.release(PageContextImpl.java:208)
        at org.apache.jasper.runtime.JspFactoryImpl.internalReleasePageContext(JspFactoryImpl.java:144)
        at org.apache.jasper.runtime.JspFactoryImpl.releasePageContext(JspFactoryImpl.java:95)
        at org.apache.jsp.admin.index_jsp._jspService(org.apache.jsp.admin.index_jsp:397)
        at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:80)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:373)
        at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:464)
        at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:358)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:367)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
        at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:268)
        at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:126)
        at org.mortbay.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:431)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1098)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:286)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1089)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:365)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:211)
        at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
        at org.mortbay.jetty.Server.handle(Server.java:285)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:502)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:821)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:513)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:208)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:378)
        at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:226)
        at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442)


On 3 February 2011 16:17, dan sutton danbsut...@gmail.com wrote:

 I don't think facet.mincount works with date faceting, see here:

 http://wiki.apache.org/solr/SimpleFacetParameters

 Dan

 On Thu, Feb 3, 2011 at 10:11 AM, Isan Fulia isan.fu...@germinait.com
 wrote:
  Any query followed by
 
 
  facet=on&facet.date=aUpdDt&facet.date.start=2011-01-02T08:00:00.000Z&facet.date.end=2011-02-03T08:00:00.000Z&facet.date.gap=%2B1HOUR&facet.mincount=1
 
  On 3 February 2011 15:14, Savvas-Andreas Moysidis 
  savvas.andreas.moysi...@googlemail.com wrote:
 
  could you post the query you are submitting to Solr?
 
  On 3 February 2011 09:33, Isan Fulia isan.fu...@germinait.com wrote:
 
   Hi all,
   Even after making facet.mincount=1 , it is showing the results with
 count
  =
   0.
   Does anyone know why this is happening.
  
   --
    Thanks & Regards,
   Isan Fulia.
  
 
 
 
 
  --
  Thanks & Regards,
  Isan Fulia.
 




-- 
Thanks & Regards,
Isan Fulia.


Re: facet.mincount

2011-02-03 Thread Isan Fulia
Thanks to all

On 3 February 2011 20:21, Grijesh pintu.grij...@gmail.com wrote:


 Hi

 facet.mincount does not work with the facet.date option, AFAIK.
 There is an issue filed for it, SOLR-343, which was resolved.
 Trying to apply the patch provided as a solution in that issue may solve
 the problem.
 The fix version for this may be 1.5.

 -
 Thanx:
 Grijesh
 http://lucidimagination.com




-- 
Thanks & Regards,
Isan Fulia.


Patch for edismax Query Parser

2011-01-31 Thread Isan Fulia
Hi all,
I want to know how to apply the patch for the extended dismax query parser
to Solr 1.4.1.


-- 
Thanks & Regards,
Isan Fulia.


Re: Patch for edismax Query Parser

2011-01-31 Thread Isan Fulia
Specifically for the edismax patch.

On 31 January 2011 18:22, Erick Erickson erickerick...@gmail.com wrote:

 Do you know how to apply patches in general? Or is this specifically
 about the edismax patch?

  Quick response for the general "how to apply a patch" question:
  1. Get the source code for Solr.
  2. Get to the point where you can run "ant clean test" successfully.
  3. Apply the source patch.
  4. Execute "ant dist".

 You should now have a war file in your solr_home/dist

 See: http://wiki.apache.org/solr/HowToContribute#Working_With_Patches
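
  Concretely, the generic workflow is something like this sketch (the
  checkout URL and patch file name are assumptions):

    svn co http://svn.apache.org/repos/asf/lucene/solr/tags/release-1.4.1 solr-1.4.1
    cd solr-1.4.1
    ant clean test
    patch -p0 < SOLR-1553.patch
    ant dist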

 NOTE: I haven't applied that specific patch to 1.4.1, so I don't know what
 gremlins
 are hanging around.

 Best
 Erick

 On Mon, Jan 31, 2011 at 7:12 AM, Isan Fulia isan.fu...@germinait.com
 wrote:

  Hi all,
  I want to know how to apply patch for extended dismax query parser on
 solr
  1.4.1.
 
 
  --
  Thanks & Regards,
  Isan Fulia.
 




-- 
Thanks & Regards,
Isan Fulia.


DismaxParser Query

2011-01-27 Thread Isan Fulia
Hi all,
The query for standard request handler is as follows
field1:(keyword1 OR keyword2) OR field2:(keyword1 OR keyword2) OR
field3:(keyword1 OR keyword2) AND field4:(keyword3 OR keyword4) AND
field5:(keyword5)


How can the same query above be written for the dismax request handler?

-- 
Thanks & Regards,
Isan Fulia.


Re: DismaxParser Query

2011-01-27 Thread Isan Fulia
but q=keyword1 keyword2 does an AND operation, not OR

On 27 January 2011 16:22, lee carroll lee.a.carr...@googlemail.com wrote:

 use dismax q for first three fields and a filter query for the 4th and 5th
 fields
 so
  q=keyword1 keyword2
  qf=field1 field2 field3
  pf=field1 field2 field3
  mm=something sensible for you
  defType=dismax
  fq=field4:(keyword3 OR keyword4) AND field5:(keyword5)

 take a look at the dismax docs for extra params
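
 Assembled into a single request, that looks roughly like this (a sketch;
 host and core are hypothetical, the parameters are shown URL-unencoded for
 readability, and mm=0 makes all query clauses optional, i.e. pure OR):

   http://localhost:8983/solr/select?defType=dismax
     &q=keyword1 keyword2
     &qf=field1 field2 field3
     &pf=field1 field2 field3
     &mm=0
     &fq=field4:(keyword3 OR keyword4) AND field5:(keyword5)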



 On 27 January 2011 08:52, Isan Fulia isan.fu...@germinait.com wrote:

  Hi all,
  The query for standard request handler is as follows
  field1:(keyword1 OR keyword2) OR field2:(keyword1 OR keyword2) OR
  field3:(keyword1 OR keyword2) AND field4:(keyword3 OR keyword4) AND
  field5:(keyword5)
 
 
   How can the same query above be written for the dismax request handler?
 
  --
   Thanks & Regards,
  Isan Fulia.
 




-- 
Thanks & Regards,
Isan Fulia.


Re: DismaxParser Query

2011-01-27 Thread Isan Fulia
It worked by making mm=0 (it acted as an OR operator),
but how do I handle this:

field1:((keyword1 AND keyword2) OR (keyword3 AND keyword4)) OR
field2:((keyword1 AND keyword2) OR (keyword3 AND keyword4)) OR
field3:((keyword1 AND keyword2) OR (keyword3 AND keyword4))




On 27 January 2011 17:06, lee carroll lee.a.carr...@googlemail.com wrote:

 sorry, ignore that - we are on dismax here - look at the mm param in the
 docs; you can set this to achieve what you need

 On 27 January 2011 11:34, lee carroll lee.a.carr...@googlemail.com
 wrote:

   the default operator can be set in your config to be OR, or on the
   query with something like q.op=OR
 
 
 
  On 27 January 2011 11:26, Isan Fulia isan.fu...@germinait.com wrote:
 
   but q=keyword1 keyword2 does an AND operation, not OR
 
  On 27 January 2011 16:22, lee carroll lee.a.carr...@googlemail.com
  wrote:
 
   use dismax q for first three fields and a filter query for the 4th and
  5th
   fields
   so
    q=keyword1 keyword2
    qf=field1 field2 field3
    pf=field1 field2 field3
    mm=something sensible for you
    defType=dismax
    fq=field4:(keyword3 OR keyword4) AND field5:(keyword5)
  
   take a look at the dismax docs for extra params
  
  
  
   On 27 January 2011 08:52, Isan Fulia isan.fu...@germinait.com
 wrote:
  
Hi all,
The query for standard request handler is as follows
field1:(keyword1 OR keyword2) OR field2:(keyword1 OR keyword2) OR
field3:(keyword1 OR keyword2) AND field4:(keyword3 OR keyword4) AND
field5:(keyword5)
   
   
 How can the same query above be written for the dismax request handler?
   
--
 Thanks & Regards,
Isan Fulia.
   
  
 
 
 
  --
  Thanks & Regards,
  Isan Fulia.
 
 
 




-- 
Thanks & Regards,
Isan Fulia.


Re: DismaxParser Query

2011-01-27 Thread Isan Fulia
Hi all,
I am currently using Solr 1.4.1. Do I need to apply a patch for the
extended dismax parser?

On 28 January 2011 03:42, Erick Erickson erickerick...@gmail.com wrote:

 In general, patches are applied to the source tree and it's re-compiled.
 See: http://wiki.apache.org/solr/HowToContribute#Working_With_Patches

 This is pretty easy, and I do know that some people have applied the
 eDismax
 patch to the 1.4 code line, but I haven't done it myself.

 Best
 Erick

 On Thu, Jan 27, 2011 at 10:27 AM, Jonathan Rochkind rochk...@jhu.edu
 wrote:

  Yes, I think nested queries are the only way to do that, and yes, nested
  queries like Daniel's example work (I've done it myself).  I haven't
 really
  tried to get into understanding/demonstrating _exactly_ how the relevance
  ends up working on the overall master query in such a situation, but it
 sort
  of works.
 
  (Just note that Daniel's example isn't quite right, I think you need
 double
  quotes for the nested _query_, just check the wiki page/blog post on
 nested
  queries).
 
  Does eDismax handle parens for order of operation too?  If so, eDismax is
  probably the best/easiest solution, especially if you're trying to parse
 an
  incoming query from some OTHER format and translate it to something that
 can
  be sent to Solr, which is what I often do.
 
  I haven't messed with eDismax myself yet.  Does anyone know if there's
 any
  easy (easy!) way to get eDismax in a Solr 1.4?  Any easy way to compile
 an
  eDismax query parser on it's own that works with Solr 1.4, and then just
  drop it into your local lib/ for use with an existing Solr 1.4?
 
  Jonathan
 
  
  From: Daniel Pötzinger [daniel.poetzin...@aoemedia.de]
  Sent: Thursday, January 27, 2011 9:26 AM
  To: solr-user@lucene.apache.org
   Subject: Re: DismaxParser Query
 
  It may also be an option to mix the query parsers?
  Something like this (not tested):
 
  q={!lucene}field1:test OR field2:test2 _query_:{!dismax qf=fields}+my
  dismax -bad
 
  So you have the benefits of lucene and dismax parser
 
   -----Original Message-----
   From: Erick Erickson [mailto:erickerick...@gmail.com]
   Sent: Thursday, 27 January 2011 15:15
   To: solr-user@lucene.apache.org
   Subject: Re: DismaxParser Query
 
  What version of Solr are you using, and could you consider either 3x or
  applying a patch to 1.4.1? Because eDismax (extended dismax) handles the
  full Lucene query language and probably works here. See the Solr
  JIRA 1553 at https://issues.apache.org/jira/browse/SOLR-1553
 
  Best
  Erick
 
  On Thu, Jan 27, 2011 at 8:32 AM, Isan Fulia isan.fu...@germinait.com
  wrote:
 
   It worked by making mm=0 (it acted as OR operator)
   but how to handle this
  
   field1:((keyword1 AND keyword2) OR (keyword3 AND keyword4)) OR
   field2:((keyword1 AND keyword2) OR (keyword3 AND keyword4)) OR
   field3:((keyword1 AND keyword2) OR (keyword3 AND keyword4))
  
  
  
  
   On 27 January 2011 17:06, lee carroll lee.a.carr...@googlemail.com
   wrote:
  
sorry ignore that - we are on dismax here - look at mm param in the
  docs
you can set this to achieve what you need
   
On 27 January 2011 11:34, lee carroll lee.a.carr...@googlemail.com
wrote:
   
  the default operator can be set in your config to be OR, or on the
  query with something like q.op=OR



 On 27 January 2011 11:26, Isan Fulia isan.fu...@germinait.com
  wrote:

  but q=keyword1 keyword2 does an AND operation, not OR

 On 27 January 2011 16:22, lee carroll 
 lee.a.carr...@googlemail.com
  
 wrote:

  use dismax q for first three fields and a filter query for the
 4th
   and
 5th
  fields
  so
   q=keyword1 keyword2
   qf=field1 field2 field3
   pf=field1 field2 field3
   mm=something sensible for you
   defType=dismax
   fq=field4:(keyword3 OR keyword4) AND field5:(keyword5)
 
  take a look at the dismax docs for extra params
 
 
 
  On 27 January 2011 08:52, Isan Fulia isan.fu...@germinait.com
wrote:
 
   Hi all,
   The query for standard request handler is as follows
   field1:(keyword1 OR keyword2) OR field2:(keyword1 OR keyword2)
  OR
   field3:(keyword1 OR keyword2) AND field4:(keyword3 OR
 keyword4)
   AND
   field5:(keyword5)
  
  
    How can the same query above be written for the dismax request
   handler?
  
   --
    Thanks & Regards,
   Isan Fulia.
  
 



 --
 Thanks & Regards,
 Isan Fulia.



   
  
  
  
   --
    Thanks & Regards,
   Isan Fulia.
  
 




-- 
Thanks & Regards,
Isan Fulia.


Re: Solr Out of Memory Error

2011-01-19 Thread Isan Fulia
Hi all,
By adding more servers, do you mean sharding of the index? And after
sharding, how will my query performance be affected?
Will the query execution time increase?

Thanks,
Isan Fulia.

On 19 January 2011 12:52, Grijesh pintu.grij...@gmail.com wrote:


 Hi Isan,

 It seems your index size of 25 GB is much more than your total RAM size
 of 4 GB.
 You have to do 2 things to avoid the Out Of Memory problem:
 1. Buy more RAM; add at least 12 GB more.
 2. Increase the memory allocated to Solr by setting the Xmx value;
    allocate at least 12 GB to Solr.
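
 For example, when starting Solr under Jetty, that looks roughly like this
 (a sketch):

   java -Xmx12g -jar start.jar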

 But if all of your index fits into the cache memory, it will give you
 better results.

 Also add more servers to load-balance, as your QPS is high.
 Your 7 lakh (700,000) documents make a 25 GB index, which looks quite
 high. Try to lower the index size.
 What are you indexing in your 25 GB index?

 -
 Thanx:
 Grijesh




-- 
Thanks & Regards,
Isan Fulia.


Solr Out of Memory Error

2011-01-18 Thread Isan Fulia
Hi all,
I got the following error on Solr, on a machine with 4 GB RAM and an Intel
Dual Core processor. Can you please help me out?

java.lang.OutOfMemoryError: Java heap space
2011-01-18 18:00:27.655:WARN::Committed before 500 OutOfMemoryError likely
caused by the Sun VM Bug described in
https://issues.apache.org/jira/browse/LUCENE-1566; try calling
FSDirectory.setReadChunkSize with a a value smaller than the current chunk
size (2147483647)

java.lang.OutOfMemoryError: OutOfMemoryError likely caused by the Sun VM Bug
described in https://issues.apache.org/jira/browse/LUCENE-1566; try calling
FSDirectory.setReadChunkSize with a a value smaller than the current chunk
size (2147483647)
        at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.readInternal(NIOFSDirectory.java:161)
        at org.apache.lucene.store.BufferedIndexInput.readBytes(BufferedIndexInput.java:139)
        at org.apache.lucene.index.CompoundFileReader$CSIndexInput.readInternal(CompoundFileReader.java:285)
        at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:160)
        at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:39)
        at org.apache.lucene.store.DataInput.readVInt(DataInput.java:86)
        at org.apache.lucene.index.FieldsReader.doc(FieldsReader.java:201)
        at org.apache.lucene.index.SegmentReader.document(SegmentReader.java:828)
        at org.apache.lucene.index.DirectoryReader.document(DirectoryReader.java:579)
        at org.apache.lucene.index.IndexReader.document(IndexReader.java:755)
        at org.apache.solr.search.SolrIndexReader.document(SolrIndexReader.java:454)
        at org.apache.solr.search.SolrIndexSearcher.doc(SolrIndexSearcher.java:431)
        at org.apache.solr.response.BinaryResponseWriter$Resolver.writeDocList(BinaryResponseWriter.java:120)
        at org.apache.solr.response.BinaryResponseWriter$Resolver.resolve(BinaryResponseWriter.java:86)
        at org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:143)
        at org.apache.solr.common.util.JavaBinCodec.writeNamedList(JavaBinCodec.java:133)
        at org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:221)
        at org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:138)
        at org.apache.solr.common.util.JavaBinCodec.marshal(JavaBinCodec.java:87)
        at org.apache.solr.response.BinaryResponseWriter.write(BinaryResponseWriter.java:46)
        at org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:321)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:253)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:326)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
        at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:938)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
        at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded

2011-01-18 18:00:27.656:WARN::/solr/ProdContentIndex/select
java.lang.IllegalStateException: Committed
        at org.mortbay.jetty.Response.resetBuffer(Response.java:1024)
        at org.mortbay.jetty.Response.sendError(Response.java:240)
        at org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:361)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:271)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at

Re: Solr Out of Memory Error

2011-01-18 Thread Isan Fulia
Hi Markus,
We don't have any Xmx memory setting as such. Our Java version is 1.6.0_19
and our Solr version is a 1.4 developer version. Can you please help us out?

Thanks,
Isan.

On 18 January 2011 19:54, Markus Jelsma markus.jel...@openindex.io wrote:

 Hi

 I haven't seen one like this before. Please provide JVM settings and Solr
 version.

 Cheers

 On Tuesday 18 January 2011 15:08:35 Isan Fulia wrote:
  Hi all,
  I got the following error on Solr, on a machine with 4 GB RAM and an
  Intel Dual Core processor. Can you please help me out?
 
  java.lang.OutOfMemoryError: Java heap space
  2011-01-18 18:00:27.655:WARN::Committed before 500 OutOfMemoryError
  likely caused by the Sun VM Bug described in
  https://issues.apache.org/jira/browse/LUCENE-1566; try calling
  FSDirectory.setReadChunkSize with a a value smaller than the current
  chunk size (2147483647)
 
  java.lang.OutOfMemoryError: OutOfMemoryError likely caused by the Sun VM
  Bug described in https://issues.apache.org/jira/browse/LUCENE-1566; try
  calling FSDirectory.setReadChunkSize with a a value smaller than the
  current chunk size (2147483647)
      at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.readInternal(NIOFSDirectory.java:161)
      at org.apache.lucene.store.BufferedIndexInput.readBytes(BufferedIndexInput.java:139)
      at org.apache.lucene.index.CompoundFileReader$CSIndexInput.readInternal(CompoundFileReader.java:285)
      at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:160)
      at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:39)
      at org.apache.lucene.store.DataInput.readVInt(DataInput.java:86)
      at org.apache.lucene.index.FieldsReader.doc(FieldsReader.java:201)
      at org.apache.lucene.index.SegmentReader.document(SegmentReader.java:828)
      at org.apache.lucene.index.DirectoryReader.document(DirectoryReader.java:579)
      at org.apache.lucene.index.IndexReader.document(IndexReader.java:755)
      at org.apache.solr.search.SolrIndexReader.document(SolrIndexReader.java:454)
      at org.apache.solr.search.SolrIndexSearcher.doc(SolrIndexSearcher.java:431)
      at org.apache.solr.response.BinaryResponseWriter$Resolver.writeDocList(BinaryResponseWriter.java:120)
      at org.apache.solr.response.BinaryResponseWriter$Resolver.resolve(BinaryResponseWriter.java:86)
      at org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:143)
      at org.apache.solr.common.util.JavaBinCodec.writeNamedList(JavaBinCodec.java:133)
      at org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:221)
      at org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:138)
      at org.apache.solr.common.util.JavaBinCodec.marshal(JavaBinCodec.java:87)
      at org.apache.solr.response.BinaryResponseWriter.write(BinaryResponseWriter.java:46)
      at org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:321)
      at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:253)
      at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
      at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
      at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
      at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
      at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
      at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)
      at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
      at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
      at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
      at org.mortbay.jetty.Server.handle(Server.java:326)
      at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
      at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:938)
      at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755)
      at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
      at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
      at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
      at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
  Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
 
  2011-01-18 18:00:27.656:WARN::/solr/ProdContentIndex/select
  java.lang.IllegalStateException: Committed
      at org.mortbay.jetty.Response.resetBuffer(Response.java:1024)
      at org.mortbay.jetty.Response.sendError(Response.java:240)
      at org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:361)
      at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:271

Re: Solr Out of Memory Error

2011-01-18 Thread Isan Fulia
Hi Grijesh, all,
We have only a single master and are using a multicore environment with
index sizes of 675 MB, 516 MB, 3 GB, and 25 GB.
The number of documents in the 3 GB index is roughly around 14 lakh (1.4
million), and in the 25 GB index roughly around 7 lakh (700,000).
Queries are fired very frequently.
ramBufferSize and indexing settings are all defaults.

Thanks,
Isan.


On 19 January 2011 10:41, Grijesh pintu.grij...@gmail.com wrote:


 On which server [master/slave] does the Out of Memory occur?
 What is your index size [GB]?
 How many documents do you have?
 What is the query rate per second?
 How are you indexing?
 What is your ramBufferSize?

 -
 Thanx:
 Grijesh




-- 
Thanks & Regards,
Isan Fulia.