Re: Replication snapshot, tar says "file changed as we read it"

2011-03-23 Thread Andrew Clegg
Sorry to re-open an old thread, but this just happened to me again,
even with a 30 second sleep between taking the snapshot and starting
to tar it up. Then, even more strangely, the snapshot was removed
again before tar completed.

Archiving snapshot.20110320113401 into
/var/www/mesh/backups/weekly.snapshot.20110320113401.tar.bz2
tar: snapshot.20110320113401/_neqv.fdt: file changed as we read it
tar: snapshot.20110320113401/_neqv.prx: File removed before we read it
tar: snapshot.20110320113401/_neqv.fnm: File removed before we read it
tar: snapshot.20110320113401: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors

Has anybody seen this before, or been able to replicate it themselves?
(no pun intended)

Or, is anyone else using replication snapshots for backup? Have I
misunderstood them? I thought the point of a snapshot was that once
taken it was immutable.

If it's important, this is on a machine configured as a replication
master, but with no slaves attached to it (it's basically a failover
and backup machine).

  
  
  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="master">
      <str name="replicateAfter">startup</str>
      <str name="replicateAfter">commit</str>
      <str name="confFiles">admin-extra.html,elevate.xml,protwords.txt,schema.xml,scripts.conf,solrconfig_slave.xml:solrconfig.xml,stopwords.txt,synonyms.txt</str>
      <str name="commitReserveDuration">00:00:10</str>
    </lst>
  </requestHandler>

Thanks,

Andrew.


On 16 January 2011 12:55, Andrew Clegg  wrote:
> PS one other point I didn't mention is that this server has a very
> fast autocommit limit (2 seconds max time).
>
> But I don't know if this is relevant -- I thought the files in the
> snapshot wouldn't be committed to again. Please correct me if this is
> a huge misunderstanding.
>
> On 16 January 2011 12:30, Andrew Clegg  wrote:
>> (Many apologies if this appears twice, I tried to send it via Nabble
>> first but it seems to have got stuck, and is fairly urgent/serious.)
>>
>> Hi,
>>
>> I'm trying to use the replication handler to take snapshots, then
>> archive them and ship them off-site.
>>
>> Just now I got a message from tar that worried me:
>>
>> tar: snapshot.20110115035710/_70b.tis: file changed as we read it
>> tar: snapshot.20110115035710: file changed as we read it
>>
>> The relevant bit of script that does it looks like this (error
>> checking removed):
>>
>> curl 'http://localhost:8983/solr/core1/replication?command=backup'
>> PREFIX=''
>> if [[ "$START_TIME" =~ 'Sun' ]]
>> then
>>        PREFIX='weekly.'
>> fi
>> cd $SOLR_DATA_DIR
>> for snapshot in `ls -d -1 snapshot.*`
>> do
>>        TARGET="${LOCAL_BACKUP_DIR}/${PREFIX}${snapshot}.tar.bz2"
>>        echo "Archiving ${snapshot} into $TARGET"
>>        tar jcf $TARGET $snapshot
>>        echo "Deleting ${snapshot}"
>>        rm -rf $snapshot
>> done
>>
>> I was under the impression that files in the snapshot were guaranteed
>> to never change, right? Otherwise what's the point of the replication
>> backup command?
>>
>> I tried putting in a 30-second sleep after the snapshot and before the
>> tar, but the error occurred again anyway.
>>
>> There was a message from Lance N. with a similar error in it, years ago:
>>
>> http://www.mail-archive.com/solr-user@lucene.apache.org/msg06104.html
>>
>> but that would be pre-replication anyway, right?
>>
>> This is on Ubuntu 10.10 using java 1.6.0_22 and Solr 1.4.0.
>>
>> Thanks,
>>
>> Andrew.
>>
>>
>> --
>>
>> :: http://biotext.org.uk/ :: http://twitter.com/andrew_clegg/ ::
>>
>
>
>
> --
>
> :: http://biotext.org.uk/ :: http://twitter.com/andrew_clegg/ ::
>



-- 

:: http://biotext.org.uk/ :: http://twitter.com/andrew_clegg/ ::


Re: Solr coding

2011-03-23 Thread satya swaroop
Hi Jayendra,
  The group field can be kept if the number of groups is
small... but if a user may belong to 1000 groups, it would be
difficult to build that query, and if a user changes groups then we have to
reindex the data again...

OK, I will try your suggestion; if it fulfills the needs then the task will be
very easy...

Regards,
satya


how to run boost query for non-dismax query parser

2011-03-23 Thread cyang2010
Hi,

I need to code some boosting logic for when some field equals some value. I
was able to get it to work using the dismax query parser. However, since the
Solr query will need to handle prefix or fuzzy queries, the dismax query
parser is not really my choice.

Therefore, I want to use the standard query parser but still have dismax's
boost query logic. For example, this query should return all the titles
regardless of the value, but boost the score of those where
genres=5237:

http://localhost:8983/solr/titles/select?indent=on&start=0&rows=10&fl=*%2Cscore&wt=standard&explainOther=&hl.fl=&qt=standard&q={!boost%20b=genres:5237^2.2}*%3A*&debugQuery=on


Here is the exception i get:
HTTP ERROR: 400

org.apache.lucene.queryParser.ParseException: Expected ',' at position 6 in
'genres:5237^2.2'


I am just following the instructions on this page, although they really
describe how to implement a boost function rather than a boost query:
http://wiki.apache.org/solr/SolrRelevancyFAQ#How_can_I_boost_the_score_of_newer_documents
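
From that page it looks like the b argument expects a function rather than a raw
query with a ^ boost, so, untested and assuming the query() function is available
in my version, I guess the intended form is something like:

http://localhost:8983/solr/titles/select?q={!boost b=query($bqq)}*:*&bqq=genres:5237&fl=*,score&debugQuery=on

but I am not sure that is the right reading of the docs.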

Thanks for your help,

cy

--
View this message in context: 
http://lucene.472066.n3.nabble.com/how-to-run-boost-query-for-non-dismax-query-parser-tp2723442p2723442.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: dismax parser, parens, what do they do exactly

2011-03-23 Thread Chris Hostetter

: It looks like Dismax query parser can somehow handle parens, used for
: applying, for instance, + or - to a group, distributing it. But I'm not
: sure what effect they have on the overall query.

parens are treated like any regular character -- they have no semantic 
meaning.

what may be confusing you is what the *analyzer* you have configured for 
your query field then does with the paren.

For instance, using the example schema on trunk, try the same query...

q = book (dog +(cat -frog))

..but using a "qf" param containing a string field (no analysis) ...

/select?defType=dismax&q=book+(dog+%2B(cat+-frog))&tie=0.01&qf=text_s&debugQuery=true

It produces the following output (i've added some whitespace)...


 +(  DisjunctionMaxQuery((  text_s:book)~0.01) 
 DisjunctionMaxQuery((  text_s:(dog)~0.01) 
+DisjunctionMaxQuery((  text_s:(cat)~0.01) 
-DisjunctionMaxQuery((  text_s:frog))  )~0.01)
  ) 
  ()


...the parens from your query are being treated literally as characters in 
your terms.  You just don't see them in the parsed query because it shows 
you what those terms look like after the analysis.

Incidentally...

: debugQuery shows:
: 
: +((DisjunctionMaxQuery((text:book)~0.01)
: +DisjunctionMaxQuery((text:dog)~0.01)
: DisjunctionMaxQuery((text:cat)~0.01)
: -DisjunctionMaxQuery((text:frog)~0.01))~2) ()

...double check that, it doesn't seem to match the query string you posted 
(it shows "dog" being mandatory.  i'm guessing you cut/paste the wrong 
example)


-Hoss


Why boost query not working?

2011-03-23 Thread cyang2010
Hi, 

This Solr query fails. It should:
1. Get every title regardless of what the title_name is.
2. Within the results, boost the ones where genre id = 56 (bq=genres:56^100).

http://localhost:8983/solr/titles/select?indent=on&version=2.2&start=0&rows=10&fl=*%2Cscore&wt=standard&defType=dismax&qf=title_name_en_US&q=*%3A*&bq=genres%3A56^100&debugQuery=on


But from the debug output I can tell it treats the boost query parameter as
part of the query string:

<lst name="debug">
  <str name="rawquerystring">*:*</str>
  <str name="querystring">*:*</str>
  <str name="parsedquery">+() () genres:56^100.0</str>
  <str name="parsedquery_toString">+() () genres:56^100.0</str>
  <lst name="explain"/>
  <str name="QParser">DisMaxQParser</str>
  <null name="altquerystring"/>
  <arr name="boost_queries">
    <str>genres:56^100</str>
  </arr>
  <arr name="parsed_boost_queries">
    <str>genres:56^100.0</str>
  </arr>




Just to note that the genres field is a multivalued field. I don't know if
the boost query has any requirement of a single value/token.

But I also tried bq with another single-valued field, with a similar
problem:

http://localhost:8983/solr/titles/select?indent=on&version=2.2&q=*%3A*&fq=&start=0&rows=10&fl=*%2Cscore&wt=standard&explainOther=&hl.fl=&defType=dismax&qf=title_name_en_US&bq=year[2000
TO *]^2.2&debugQuery=on
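
One thing I still plan to try, in case it makes a difference: leaving q off
entirely and using q.alt for the match-all query, since dismax may be treating
*:* as an empty query. Something like this (untested):

http://localhost:8983/solr/titles/select?defType=dismax&qf=title_name_en_US&q.alt=*:*&bq=genres:56^100&fl=*,score&debugQuery=on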


Thanks for your help,

cy


--
View this message in context: 
http://lucene.472066.n3.nabble.com/Why-boost-query-not-working-tp2723304p2723304.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Problem with field collapsing of patched Solr 1.4

2011-03-23 Thread Afroz Ahmad
Have you enabled the collapse component in solrconfig.xml?
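
If not, from memory of the SOLR-236 patch notes the registration looks something
like this (the class name is from memory, so double-check it against the patch
version you applied):

<searchComponent name="collapse"
  class="org.apache.solr.handler.component.CollapseComponent"/>

"collapse" then has to be listed in the components of the request handler you
query, in place of the standard query component, if I remember the patch
documentation correctly.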



Thanks
afroz


On Fri, Mar 18, 2011 at 8:14 PM, Kai Schlamp-2
wrote:

> Unfortunately I have to use Solr 1.4.x or 3.x as one of the interfaces to
> access Solr uses Sunspot (a Ruby Solr library), which doesn't seem to be
> compatible with 4.x.
>
> Kai
>
>
> Otis Gospodnetic-2 wrote:
> >
> > Kai, try SOLR-1086 with Solr trunk instead if trunk is OK for you.
> >
> > Otis
> > 
> > Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
> > Lucene ecosystem search :: http://search-lucene.com/
> >
> >
> >
> > - Original Message 
> >> From: Kai Schlamp 
> >> To: solr-user@lucene.apache.org
> >> Sent: Sun, March 13, 2011 11:58:56 PM
> >> Subject: Problem with field collapsing of patched Solr 1.4
> >>
> >> Hello.
> >>
> >> I just tried to patch Solr 1.4 with the field collapsing patch  of
> >> https://issues.apache.org/jira/browse/SOLR-236. The patching and  build
> >> process seemed to be ok (below are the steps I did), but the  field
> >> collapsing feature doesn't seem to work.
> >> When I go to `http://localhost:8982/solr/select/?q=*:*` I correctly
> >> get 10 documents  as result.
> >> When going to
> >>`
> http://localhost:8982/solr/select/?q=*:*&collapse=true&collapse.field=tag_name_ss&collapse.max=1`
> >>
> >> (tag_name_ss  is surely a field with content) I get the same 10 docs as
> >> result back. No  further information regarding the field collapsing.
> >> What do I miss? Do I have  to activate it somehow?
> >>
> >> * Downloaded
> >>[Solr](
> http://apache.lauf-forum.at//lucene/solr/1.4.1/apache-solr-1.4.1.tgz)
> >> *  Downloaded
> >>[SOLR-236-1_4_1-paging-totals-working.patch](
> https://issues.apache.org/jira/secure/attachment/12459716/SOLR-236-1_4_1-paging-totals-working.patch
> )
> >>
> >> *  Changed line 2837 of that patch to `@@ -0,0 +1,511 @@` (regarding
> >> this
> >>[comment](
> https://issues.apache.org/jira/browse/SOLR-236?focusedCommentId=12932905&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12932905
> ))
> >>
> >> *  Downloaded
> >>[SOLR-236-1_4_1-NPEfix.patch](
> https://issues.apache.org/jira/secure/attachment/12470202/SOLR-236-1_4_1-NPEfix.patch
> )
> >>
> >> *  Extracted the Solr archive
> >> * Applied both patches:
> >> ** `cd  apache-solr-1.4.1`
> >> ** `patch -p0 <  ../SOLR-236-1_4_1-paging-totals-working.patch`
> >> ** `patch -p0 <  ../SOLR-236-1_4_1-NPEfix.patch`
> >> * Build Solr
> >> ** `ant clean`
> >> ** `ant  example` ... tells me "BUILD SUCCESSFUL"
> >> * Reindexed everything (using  Sunspot Solr)
> >> * Solr info tells me correctly "Solr Specification  Version:
> >> 1.4.1.2011.03.14.04.29.20"
> >>
> >> Kai
> >>
> >
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Problem-with-field-collapsing-of-patched-Solr-1-4-tp2678850p2701061.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Re: Search failing for matched text in large field

2011-03-23 Thread Markus Jelsma
Enable TermVectors for fields that you're going to highlight. If it is
disabled, Solr will reanalyze the field, killing performance.
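
For example, a field definition along these lines in schema.xml (field and type
names are only an illustration):

<field name="text" type="text" indexed="true" stored="true"
       termVectors="true" termPositions="true" termOffsets="true"/>

You will need to reindex after changing this so the term vectors actually get
stored.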

> I looked into the search that I'm doing a little closer and it seems
> like the highlighting is slowing it down. If I do the query without
> requesting highlighting it is fast. (BTW, I also have faceting and
> pagination in my query. Faceting doesn't seem to change the response
> time much, adding &rows= and &start= does, but not prohibitively.)
> 
> The field in question needs to be stored=true, because it is needed
> for highlighting.
> 
> I'm thinking of doing this in two searches: first without highlighting
> and put a progress spinner next to each result, then do an ajax call
> to repeat the search with highlighting that can take its time to
> finish.
> 
> (I, too, have seen random really long response times that seem to be
> related to not enough RAM, but this isn't the problem because the
> results here are repeatable.)
> 
> On Wed, Mar 23, 2011 at 2:30 PM, Sascha Szott  wrote:
> > On 23.03.2011 18:52, Paul wrote:
> >> I increased maxFieldLength and reindexed a small number of documents.
> >> That worked -- I got the correct results. In 3 minutes!
> > 
> > Did you mark the field in question as stored = false?
> > 
> > -Sascha
> > 
> >> I assume that if I reindex all my documents that all searches will
> >> become even slower. Is there any way to get all the results in a way
> >> that is quick enough that my user won't get bored waiting? Is there
> >> some optimization of this coming in solr 3.0?
> >> 
> >> On Wed, Mar 23, 2011 at 12:15 PM, Sascha Szott  wrote:
> >>> Hi Paul,
> >>> 
> >>> did you increase the value of the maxFieldLength parameter in your
> >>> solrconfig.xml?
> >>> 
> >>> -Sascha
> >>> 
> >>> On 23.03.2011 17:05, Paul wrote:
>  I'm using solr 1.4.1.
>  
>  I have a document that has a pretty big field. If I search for a
>  phrase that occurs near the start of that field, it works fine. If I
>  search for a phrase that appears even a little ways into the field, it
>  doesn't find it. Is there some limit to how far into a field solr will
>  search?
>  
>  Here's the way I'm doing the search. All I'm changing is the text I'm
>  searching on to make it succeed or fail:
>  
>  
>  
>  http://localhost:8983/solr/my_core/select/?q=%22search+phrase%22&hl=on
>  &hl.fl=text
>  
>  Or, if it is not related to how large the document is, what else could
>  it possibly be related to? Could there be some character in that field
>  that is stopping the search?


Re: Search failing for matched text in large field

2011-03-23 Thread Jonathan Rochkind
Yeah, you aren't going to be able to do highlighting on a very very 
large field without terrible performance.  I believe it's just the 
nature of the algorithm used by the highlighting component. I don't know 
of any workaround, other than inventing a new algorithm for 
highlighting and writing a component for it.


Even with an AJAX call, you don't want to wait 3 minutes. Plus the load 
on your server.


On 3/23/2011 3:52 PM, Paul wrote:

I looked into the search that I'm doing a little closer and it seems
like the highlighting is slowing it down. If I do the query without
requesting highlighting it is fast. (BTW, I also have faceting and
pagination in my query. Faceting doesn't seem to change the response
time much, adding&rows= and&start= does, but not prohibitively.)

The field in question needs to be stored=true, because it is needed
for highlighting.

I'm thinking of doing this in two searches: first without highlighting
and put a progress spinner next to each result, then do an ajax call
to repeat the search with highlighting that can take its time to
finish.

(I, too, have seen random really long response times that seem to be
related to not enough RAM, but this isn't the problem because the
results here are repeatable.)

On Wed, Mar 23, 2011 at 2:30 PM, Sascha Szott  wrote:

On 23.03.2011 18:52, Paul wrote:

I increased maxFieldLength and reindexed a small number of documents.
That worked -- I got the correct results. In 3 minutes!

Did you mark the field in question as stored = false?

-Sascha


I assume that if I reindex all my documents that all searches will
become even slower. Is there any way to get all the results in a way
that is quick enough that my user won't get bored waiting? Is there
some optimization of this coming in solr 3.0?

On Wed, Mar 23, 2011 at 12:15 PM, Sascha Szottwrote:

Hi Paul,

did you increase the value of the maxFieldLength parameter in your
solrconfig.xml?

-Sascha

On 23.03.2011 17:05, Paul wrote:

I'm using solr 1.4.1.

I have a document that has a pretty big field. If I search for a
phrase that occurs near the start of that field, it works fine. If I
search for a phrase that appears even a little ways into the field, it
doesn't find it. Is there some limit to how far into a field solr will
search?

Here's the way I'm doing the search. All I'm changing is the text I'm
searching on to make it succeed or fail:



http://localhost:8983/solr/my_core/select/?q=%22search+phrase%22&hl=on&hl.fl=text

Or, if it is not related to how large the document is, what else could
it possibly be related to? Could there be some character in that field
that is stopping the search?


Re: Search failing for matched text in large field

2011-03-23 Thread Paul
I looked into the search that I'm doing a little closer and it seems
like the highlighting is slowing it down. If I do the query without
requesting highlighting it is fast. (BTW, I also have faceting and
pagination in my query. Faceting doesn't seem to change the response
time much, adding &rows= and &start= does, but not prohibitively.)

The field in question needs to be stored=true, because it is needed
for highlighting.

I'm thinking of doing this in two searches: first without highlighting
and put a progress spinner next to each result, then do an ajax call
to repeat the search with highlighting that can take its time to
finish.

(I, too, have seen random really long response times that seem to be
related to not enough RAM, but this isn't the problem because the
results here are repeatable.)

On Wed, Mar 23, 2011 at 2:30 PM, Sascha Szott  wrote:
> On 23.03.2011 18:52, Paul wrote:
>>
>> I increased maxFieldLength and reindexed a small number of documents.
>> That worked -- I got the correct results. In 3 minutes!
>
> Did you mark the field in question as stored = false?
>
> -Sascha
>
>>
>> I assume that if I reindex all my documents that all searches will
>> become even slower. Is there any way to get all the results in a way
>> that is quick enough that my user won't get bored waiting? Is there
>> some optimization of this coming in solr 3.0?
>>
>> On Wed, Mar 23, 2011 at 12:15 PM, Sascha Szott  wrote:
>>>
>>> Hi Paul,
>>>
>>> did you increase the value of the maxFieldLength parameter in your
>>> solrconfig.xml?
>>>
>>> -Sascha
>>>
>>> On 23.03.2011 17:05, Paul wrote:

 I'm using solr 1.4.1.

 I have a document that has a pretty big field. If I search for a
 phrase that occurs near the start of that field, it works fine. If I
 search for a phrase that appears even a little ways into the field, it
 doesn't find it. Is there some limit to how far into a field solr will
 search?

 Here's the way I'm doing the search. All I'm changing is the text I'm
 searching on to make it succeed or fail:



 http://localhost:8983/solr/my_core/select/?q=%22search+phrase%22&hl=on&hl.fl=text

 Or, if it is not related to how large the document is, what else could
 it possibly be related to? Could there be some character in that field
 that is stopping the search?
>>>
>


Re: Adding the suggest component

2011-03-23 Thread Brian Lamb
Thank you for the suggestion. I followed your advice and was able to get a
version up and running. Thanks again for all the help!

On Wed, Mar 23, 2011 at 1:55 PM, Ahmet Arslan  wrote:

> > I'm still confused as to why I'm
> > getting this error. To me it reads that the
> > .java file was declared incorrectly but I shouldn't need to
> > change those
> > files so where am I doing something incorrectly?
> >
>
> Brian, I think best thing to do is checkout a new clean copy from
> subversion and then do things step by step on this clean copy.
>
>
>
>


Storing Nested Fields

2011-03-23 Thread Sethi, Parampreet
Hi All,

This is regarding nested array functionality. I have the following requirements:
1. Store the category and sub-category association with a word in Solr.
2. Each word can be listed under multiple categories (and thus
sub-categories).
3. Query based on category or sub-category.

One way is to have two separate array fields in Solr and make sure that
field category[0] is the super-category of field sub-category[0].

Has anyone encountered a similar problem in Solr? Any suggestions will be
great.

Thanks
Param



Re: Solr performance issue

2011-03-23 Thread Doğacan Güney
Hello,

The problem turned out to be some sort of sharding/searching weirdness. We
modified some code in sharding but I don't think it is related. In any case,
we just added a new server that just shards (but doesn't do any searching /
doesn't contain any index) and performance is very very good.

Thanks for all the help.

On Tue, Mar 22, 2011 at 14:30, Alexey Serba  wrote:

> > Btw, I am monitoring output via jconsole with 8gb of ram and it still
> goes
> > to 8gb every 20 seconds or so,
> > gc runs, falls down to 1gb.
>
> Hmm, jvm is eating 8Gb for 20 seconds - sounds a lot.
>
> Do you return all results (ids) for your queries? Any tricky
> faceting/sorting/function queries?
>



-- 
Doğacan Güney


Re: Search failing for matched text in large field

2011-03-23 Thread Sascha Szott

On 23.03.2011 18:52, Paul wrote:

I increased maxFieldLength and reindexed a small number of documents.
That worked -- I got the correct results. In 3 minutes!

Did you mark the field in question as stored = false?

-Sascha



I assume that if I reindex all my documents that all searches will
become even slower. Is there any way to get all the results in a way
that is quick enough that my user won't get bored waiting? Is there
some optimization of this coming in solr 3.0?

On Wed, Mar 23, 2011 at 12:15 PM, Sascha Szott  wrote:

Hi Paul,

did you increase the value of the maxFieldLength parameter in your
solrconfig.xml?

-Sascha

On 23.03.2011 17:05, Paul wrote:


I'm using solr 1.4.1.

I have a document that has a pretty big field. If I search for a
phrase that occurs near the start of that field, it works fine. If I
search for a phrase that appears even a little ways into the field, it
doesn't find it. Is there some limit to how far into a field solr will
search?

Here's the way I'm doing the search. All I'm changing is the text I'm
searching on to make it succeed or fail:


http://localhost:8983/solr/my_core/select/?q=%22search+phrase%22&hl=on&hl.fl=text

Or, if it is not related to how large the document is, what else could
it possibly be related to? Could there be some character in that field
that is stopping the search?




Re: Search failing for matched text in large field

2011-03-23 Thread Jonathan Rochkind
Hmm, there's no reason it should take anywhere close to 3 minutes to get a 
result from a simple search, even with very large documents/term lists.  
Especially if you're really JUST doing a simple search, you aren't using 
facetting or statistics component or highlighting etc at this point. (If 
you ARE using highlighting, that could be the culprit).


You might need more RAM allocated to the Solr JVM.  For reasons I can't 
explain myself, I sometimes get pathologically slow search results when 
I don't have enough RAM, even though there aren't any errors in my logs 
or anything -- which adding more RAM fixes.


It's also possible (just taking random guesses, I am not familiar with 
this part of Solr internals) that if you increased the maxFieldLength 
on an existing index, but only reindexed SOME of the documents in that 
index, then Solr is getting all confused about your index. I don't know 
if Solr can handle changing the maxFieldLength on an existing index 
without re-indexing all docs.


Also, if you tell us HOW large you made maxFieldLength, someone (not me) 
might be able to say something about if it's so large it could create 
some kind of other problem.


On 3/23/2011 1:52 PM, Paul wrote:

I increased maxFieldLength and reindexed a small number of documents.
That worked -- I got the correct results. In 3 minutes!

I assume that if I reindex all my documents that all searches will
become even slower. Is there any way to get all the results in a way
that is quick enough that my user won't get bored waiting? Is there
some optimization of this coming in solr 3.0?

On Wed, Mar 23, 2011 at 12:15 PM, Sascha Szott  wrote:

Hi Paul,

did you increase the value of the maxFieldLength parameter in your
solrconfig.xml?

-Sascha

On 23.03.2011 17:05, Paul wrote:

I'm using solr 1.4.1.

I have a document that has a pretty big field. If I search for a
phrase that occurs near the start of that field, it works fine. If I
search for a phrase that appears even a little ways into the field, it
doesn't find it. Is there some limit to how far into a field solr will
search?

Here's the way I'm doing the search. All I'm changing is the text I'm
searching on to make it succeed or fail:


http://localhost:8983/solr/my_core/select/?q=%22search+phrase%22&hl=on&hl.fl=text

Or, if it is not related to how large the document is, what else could
it possibly be related to? Could there be some character in that field
that is stopping the search?


Re: Adding the suggest component

2011-03-23 Thread Ahmet Arslan
> I'm still confused as to why I'm
> getting this error. To me it reads that the
> .java file was declared incorrectly but I shouldn't need to
> change those
> files so where am I doing something incorrectly?
> 

Brian, I think the best thing to do is check out a new clean copy from subversion 
and then do things step by step on this clean copy.


  


Re: Search failing for matched text in large field

2011-03-23 Thread Paul
I increased maxFieldLength and reindexed a small number of documents.
That worked -- I got the correct results. In 3 minutes!

I assume that if I reindex all my documents that all searches will
become even slower. Is there any way to get all the results in a way
that is quick enough that my user won't get bored waiting? Is there
some optimization of this coming in solr 3.0?

On Wed, Mar 23, 2011 at 12:15 PM, Sascha Szott  wrote:
> Hi Paul,
>
> did you increase the value of the maxFieldLength parameter in your
> solrconfig.xml?
>
> -Sascha
>
> On 23.03.2011 17:05, Paul wrote:
>>
>> I'm using solr 1.4.1.
>>
>> I have a document that has a pretty big field. If I search for a
>> phrase that occurs near the start of that field, it works fine. If I
>> search for a phrase that appears even a little ways into the field, it
>> doesn't find it. Is there some limit to how far into a field solr will
>> search?
>>
>> Here's the way I'm doing the search. All I'm changing is the text I'm
>> searching on to make it succeed or fail:
>>
>>
>> http://localhost:8983/solr/my_core/select/?q=%22search+phrase%22&hl=on&hl.fl=text
>>
>> Or, if it is not related to how large the document is, what else could
>> it possibly be related to? Could there be some character in that field
>> that is stopping the search?
>


Re: Solr coding

2011-03-23 Thread Jayendra Patil
In that case, you may want to store the groups that have access to the
document in a multivalued field.
A filter query on the user's groups should then filter the results as
you expect.
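
A rough sketch of the kind of filter I mean, assuming a multivalued "groups"
field that holds the groups allowed to see each document (field name is only
illustrative):

http://localhost:8983/solr/select?q=java&fq=groups:(group1 OR group3)

The fq value would be built from the groups the current user belongs to, and
needs URL-encoding when sent over HTTP.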

You may also check Apache ManifoldCF, as suggested by Szott.

Regards,
Jayendra

On Wed, Mar 23, 2011 at 9:46 AM, satya swaroop  wrote:
> Hi Jayendra,
>                I forgot to mention the result also depends on the group of
> user too It is some wat complex so i didnt tell it.. now i explain the
> exact way..
>
>  user1, group1 -> java1, c1,sap1
>  user2 ,group2-> java2, c2,sap2
>  user3 ,group1,group3-> java3, c3,sap3
>  user4 ,group3-> java4, c4,sap4
>  user5 ,group3-> java5, c5,sap5
>
>                             user1,group1 means user1 belong to group1
>
>
> Here the filter includes the group too.., if for eg: user1 searches for
> "java" then the results should show as java1,java3 since java3 file is
> acessable to all users who are related to the group1, so i thought of to
> edit the code...
>
> Thanks,
> satya
>


Re: multifield search using dismax

2011-03-23 Thread Jonathan Rochkind
It is not.  I think it is possible in edismax, on trunk (not yet in a 
released version; not sure if it will be in the upcoming release).


Alternately, you can use Solr nested queries, although they're not 
really suitable for end-user-entry, and you might lose the behavior of 
dismax you want, depending on what behavior you want.


&defType=lucene
&q=_query_:"{!dismax qf='field1 field2'}value1" AND _query_:"{!dismax 
qf='field3 field4'}more values"
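
If it helps, a curl invocation that takes care of the escaping (it POSTs the
parameters; the field names are just the placeholders from above):

curl "http://localhost:8983/solr/select" \
  --data-urlencode "defType=lucene" \
  --data-urlencode "q=_query_:\"{!dismax qf='field1 field2'}value1\" AND _query_:\"{!dismax qf='field3 field4'}more values\""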




On 3/23/2011 12:38 PM, Gastone Penzo wrote:

Hi,
is it possible, USING DISMAX SEARCH HANDLER, to make a search like:

search value1 in field1&  value 2 in field 2&??

it's like q=field1:value1 field2:value2 in standard search, but i want to do
this in dismax

Thanx





Re: Search failing for matched text in large field

2011-03-23 Thread Jonathan Rochkind

How large?

But rather than think about if there's something in the "searching" 
that's not working, the first step might be to make sure that everything 
in the _indexing_ is working -- that your field is actually being 
indexed as you intend.


I forget the best way to view what's in your index -- the Luke request 
handler in the Solr admin maybe?


On 3/23/2011 12:05 PM, Paul wrote:

I'm using solr 1.4.1.

I have a document that has a pretty big field. If I search for a
phrase that occurs near the start of that field, it works fine. If I
search for a phrase that appears even a little ways into the field, it
doesn't find it. Is there some limit to how far into a field solr will
search?

Here's the way I'm doing the search. All I'm changing is the text I'm
searching on to make it succeed or fail:

http://localhost:8983/solr/my_core/select/?q=%22search+phrase%22&hl=on&hl.fl=text

Or, if it is not related to how large the document is, what else could
it possibly be related to? Could there be some character in that field
that is stopping the search?



multifield search using dismax

2011-03-23 Thread Gastone Penzo
Hi,
is it possible, USING DISMAX SEARCH HANDLER, to make a search like:

search value1 in field1 & value 2 in field 2 &??

it's like q=field1:value1 field2:value2 in standard search, but i want to do
this in dismax

Thanx



-- 
Gastone Penzo

*www.solr-italia.it*
*The first italian blog about Apache Solr *


Re: Search failing for matched text in large field

2011-03-23 Thread Paul
Ah, no, I'll try that now.

What is the disadvantage of setting that to a really large number?

I do want the search to work for every word I give to solr. Otherwise
I wouldn't have indexed it to begin with.

On Wed, Mar 23, 2011 at 11:15 AM, Sascha Szott  wrote:
> Hi Paul,
>
> did you increase the value of the maxFieldLength parameter in your
> solrconfig.xml?
>
> -Sascha
>
> On 23.03.2011 17:05, Paul wrote:
>>
>> I'm using solr 1.4.1.
>>
>> I have a document that has a pretty big field. If I search for a
>> phrase that occurs near the start of that field, it works fine. If I
>> search for a phrase that appears even a little ways into the field, it
>> doesn't find it. Is there some limit to how far into a field solr will
>> search?
>>
>> Here's the way I'm doing the search. All I'm changing is the text I'm
>> searching on to make it succeed or fail:
>>
>>
>> http://localhost:8983/solr/my_core/select/?q=%22search+phrase%22&hl=on&hl.fl=text
>>
>> Or, if it is not related to how large the document is, what else could
>> it possibly be related to? Could there be some character in that field
>> that is stopping the search?
>


Re: Search failing for matched text in large field

2011-03-23 Thread Sascha Szott

Hi Paul,

did you increase the value of the maxFieldLength parameter in your 
solrconfig.xml?
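
(It lives in the indexDefaults section of solrconfig.xml; for example, to
effectively remove the cap, with the value here only an illustration:

<maxFieldLength>2147483647</maxFieldLength>

and the affected documents have to be reindexed afterwards.)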


-Sascha

On 23.03.2011 17:05, Paul wrote:

I'm using solr 1.4.1.

I have a document that has a pretty big field. If I search for a
phrase that occurs near the start of that field, it works fine. If I
search for a phrase that appears even a little ways into the field, it
doesn't find it. Is there some limit to how far into a field solr will
search?

Here's the way I'm doing the search. All I'm changing is the text I'm
searching on to make it succeed or fail:

http://localhost:8983/solr/my_core/select/?q=%22search+phrase%22&hl=on&hl.fl=text

Or, if it is not related to how large the document is, what else could
it possibly be related to? Could there be some character in that field
that is stopping the search?


Search failing for matched text in large field

2011-03-23 Thread Paul
I'm using solr 1.4.1.

I have a document that has a pretty big field. If I search for a
phrase that occurs near the start of that field, it works fine. If I
search for a phrase that appears even a little ways into the field, it
doesn't find it. Is there some limit to how far into a field solr will
search?

Here's the way I'm doing the search. All I'm changing is the text I'm
searching on to make it succeed or fail:

http://localhost:8983/solr/my_core/select/?q=%22search+phrase%22&hl=on&hl.fl=text

Or, if it is not related to how large the document is, what else could
it possibly be related to? Could there be some character in that field
that is stopping the search?


Re: Unknown query type 'edismax'

2011-03-23 Thread Swapnonil Mukherjee
Hi,

It worked! Thanks a lot. At least I don't get the stacktrace on the jetty 
console and the Unknown query type error after adding this entry to the 
solrconfig.xml. 

We will have to examine the results to see if the Edismax parser is really 
kicking in.

Swapnonil Mukherjee
+91-40092712
+91-9007131999



On 23-Mar-2011, at 7:14 PM, Ahmet Arslan wrote:

>> I have downloaded apache-solr1.4.1 and then applied patch
>> SOLR-1553 to enable the EdismaxParserQueryPlugin, but
>> inspite of this I get the Unknown query type error.
> 
> Hmm, I don't know about whether it is usable/compatible with solr 1.4.1 but 
> you can try to register edismax in solrconfig.xml as follows:,
> 
> <queryParser name="edismax" class="org.apache.solr.search.ExtendedDismaxQParserPlugin"/>
> 
> 
> 
> 



Re: Adding the suggest component

2011-03-23 Thread Brian Lamb
I'm still confused as to why I'm getting this error. To me it reads as though the
.java file was declared incorrectly, but I shouldn't need to change those
files, so what am I doing incorrectly?

On Tue, Mar 22, 2011 at 3:40 PM, Brian Lamb
wrote:

> That fixed that error as well as the could not initialize Dataimport class
> error. Now I'm getting:
>
> org.apache.solr.common.SolrException: Error Instantiating Request Handler,
> org.apache.solr.handler.dataimport.DataImportHandler is not a
> org.apache.solr.request.SolrRequestHandler
>
> I can't find anything on this one. What I've added to the solrconfig.xml
> file matches whats in example-DIH so I don't quite understand what the issue
> is here. It sounds to me like it is not declared properly somewhere but I'm
> not sure where/why.
>
> Here is the relevant portion of my solrconfig.xml file:
>
> <requestHandler name="/dataimport"
>   class="org.apache.solr.handler.dataimport.DataImportHandler">
>   <lst name="defaults">
>     <str name="config">db-data-config.xml</str>
>   </lst>
> </requestHandler>
>
> Thanks for all the help so far. You all have been great.
>
> Brian Lamb
>
> On Tue, Mar 22, 2011 at 3:17 PM, Ahmet Arslan  wrote:
>
>> > java.lang.NoClassDefFoundError: Could not initialize class
>> > org.apache.solr.handler.dataimport.DataImportHandler
>> > at java.lang.Class.forName0(Native Method)
>> >
>> > java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory
>> > at
>> >
>> org.apache.solr.handler.dataimport.DataImportHandler.(DataImportHandler.java:72)
>> >
>> > Caused by: java.lang.ClassNotFoundException:
>> > org.slf4j.LoggerFactory
>> > at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
>> >
>>
>> You can find slf4j- related jars in \trunk\solr\lib, but this error is
>> weird.
>>
>>
>>
>>
>


Re: Solr coding

2011-03-23 Thread satya swaroop
Hi Jayendra,
I forgot to mention that the result also depends on the group of the
user too. It is somewhat complex so I didn't explain it before... now I explain the
exact way..

  user1, group1 -> java1, c1,sap1
  user2 ,group2-> java2, c2,sap2
  user3 ,group1,group3-> java3, c3,sap3
  user4 ,group3-> java4, c4,sap4
  user5 ,group3-> java5, c5,sap5

 user1,group1 means user1 belong to group1


Here the filter includes the group too... If, for example, user1 searches for
"java" then the results should show java1, java3, since the java3 file is
accessible to all users who are related to group1, so I thought of
editing the code...

Thanks,
satya


Re: Unknown query type 'edismax'

2011-03-23 Thread Ahmet Arslan
> I have downloaded apache-solr1.4.1 and then applied patch
> SOLR-1553 to enable the EdismaxParserQueryPlugin, but
> inspite of this I get the Unknown query type error.

Hmm, I don't know whether it is usable/compatible with Solr 1.4.1, but you can
try to register edismax in solrconfig.xml as follows:

<queryParser name="edismax" class="org.apache.solr.search.ExtendedDismaxQParserPlugin"/>



  


Re: Solr coding

2011-03-23 Thread Sascha Szott

Hi,

depending on your needs, take a look at Apache ManifoldCF. It adds 
document-level security on top of Solr.


-Sascha

On 23.03.2011 14:20, satya swaroop wrote:

Hi All,
   As for my project Requirement i need to keep privacy for search of
files so that i need to modify the code of solr,

for example if there are 5 users and each user indexes some files as
   user1 ->  java1, c1,sap1
   user2 ->  java2, c2,sap2
   user3 ->  java3, c3,sap3
   user4 ->  java4, c4,sap4
   user5 ->  java5, c5,sap5

and if a user2 searches for the keyword "java" then it should be display
only  the file java2 and not other files

so inorder to keep this filtering inside solr itself may i know where to
modify the code... i will access a database to check the user indexed files
and then filter the result... i didnt have any cores.. i indexed all files
in a single index...

Regards,
satya



Re: Solr coding

2011-03-23 Thread Jayendra Patil
Why not just add an extra field to the document in the index for the
user, so you can easily filter the results on the user field and
show only the documents submitted by the user?
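
For example, with a "user" field added at index time, the search becomes
something like:

http://localhost:8983/solr/select?q=java&fq=user:user2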

Regards,
Jayendra

On Wed, Mar 23, 2011 at 9:20 AM, satya swaroop  wrote:
> Hi All,
>          As for my project Requirement i need to keep privacy for search of
> files so that i need to modify the code of solr,
>
> for example if there are 5 users and each user indexes some files as
>  user1 -> java1, c1,sap1
>  user2 -> java2, c2,sap2
>  user3 -> java3, c3,sap3
>  user4 -> java4, c4,sap4
>  user5 -> java5, c5,sap5
>
>   and if a user2 searches for the keyword "java" then it should be display
> only  the file java2 and not other files
>
> so inorder to keep this filtering inside solr itself may i know where to
> modify the code... i will access a database to check the user indexed files
> and then filter the result... i didnt have any cores.. i indexed all files
> in a single index...
>
> Regards,
> satya
>


Re: Solr - multivalue fields - please help

2011-03-23 Thread Jayendra Patil
Just a suggestion ..
You can try using dynamic fields by appending the company name (or ID)
as prefix ... e.g.

For data -
Employee ID   Employer   FromDate   ToDate
21345         IBM        01/01/04   01/01/06
              MS         01/01/07   01/01/08
              BT         01/01/09   Present

Index data as :-
Employee ID - 21345
Employer Name - IBM MS BT (Multivalued fields)
IBM_FROM_DATE - 01/01/04 (Dynamic field)
IBM_TO_DATE - 01/01/06 (Dynamic field)

You should be able to match the results and get the from and to dates
for the companies and handle it on the UI side.
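
A rough sketch of the dynamic field declarations this assumes in schema.xml
(field and type names are only illustrative; "date" is the date type from the
example schema):

<dynamicField name="*_FROM_DATE" type="date" indexed="true" stored="true"/>
<dynamicField name="*_TO_DATE"   type="date" indexed="true" stored="true"/>

The values would then be indexed in Solr's date format, e.g. IBM_FROM_DATE =
2004-01-01T00:00:00Z.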

Regards,
Jayendra

On Wed, Mar 23, 2011 at 8:24 AM, Sandra  wrote:
> Hi everyone,
>
>        I know that Solr cannot match 1 value in a multi-valued field with
> the corresponding value in another multi-valued field. However my data set
> appears to be in that form at the moment.
>        With that in mind does anyone know of any good articles or
> discussions that have addressed this issue, specifically the alternatives
> that can be easily done/considered etc
>
> The data is of the following format:
>
>        I have an unique Employee ID field, Employer (multi-value), FromDate
> (multi-value) amd ToDate (multi-value). For a given employee ID I am trying
> to return the relevent data. For example for a ID of "21345" and emplyer
> "IMB" return the work dates from and to. Or for same id and 2 work dates
> return the company of companies that the id was associated with etc
>
>
> Employee ID Employer FromDate ToDate
> 21345 IBM 01/01/04 01/01/06
>                        MS 01/01/07 01/01/08
>                        BT 01/01/09 Present
>
>        Any suggestions/comments/ideas/articles much appreciated...
>
> Thanks,
> S.
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Solr-multivalue-fields-please-help-tp2720067p2720067.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Solr coding

2011-03-23 Thread satya swaroop
Hi All,
  As per my project requirement I need to keep the search of
files private, so I need to modify the code of Solr.

for example if there are 5 users and each user indexes some files as
  user1 -> java1, c1,sap1
  user2 -> java2, c2,sap2
  user3 -> java3, c3,sap3
  user4 -> java4, c4,sap4
  user5 -> java5, c5,sap5

   and if user2 searches for the keyword "java" then it should display
only the file java2 and not other files

So in order to keep this filtering inside Solr itself, may I know where to
modify the code... I will access a database to check which files the user indexed
and then filter the result... I don't have any cores... I indexed all files
in a single index...

Regards,
satya


Re: email - DIH

2011-03-23 Thread Matias Alonso
Hi Gora,

Also, all the emails were received after that date.

Regards,

Matias.



2011/3/23 Gora Mohanty 

> On Tue, Mar 22, 2011 at 9:38 PM, Matias Alonso 
> wrote:
> [...]
> > The problem is that I´m indexing emails throw Data import Handler using
> > Gmail with imaps; I do this for search on email list in the future. The
> > emails are indexed partiality and I can´t found the problem of why don´t
> > index all of the emails.
> [...]
> > I´ve done a full import and no errors were found, but in the status I saw
> > that was added 28 documents, and in the console, I found 35 messanges.
> [...]
>
> > INFO: Total messages : 35
> >
> > Mar 22, 2011 3:55:16 PM
> > org.apache.solr.handler.dataimport.MailEntityProcessor$MessageIterator
> > 
> >
> > INFO: Search criteria applied. Batching disabled
> [...]
>
> The above seems to indicate that the MailEntityProcessor does find
> all 35 messages, but indexes only 28. Are you sure that all 35 are
> since 2010-01-01 00:00:00? Could you try without fetchMailsSince?
>
> Regards,
> Gora
>



-- 
Matias.


Re: Unknown query type 'edismax'

2011-03-23 Thread Swapnonil Mukherjee
Hi,

I have downloaded apache-solr-1.4.1 and then applied patch SOLR-1553 to enable 
the EdismaxParserQueryPlugin, but in spite of this I get the Unknown query type 
error.

Swapnonil Mukherjee
+91-40092712
+91-9007131999



On 23-Mar-2011, at 4:36 PM, Ahmet Arslan wrote:

>> I just downloaded apache-solr-1.4.1 and am trying to run an
>> edismax query using the defType argument as edismax.
>> 
> 
> You need 3.1 or trunk for that. 
> 
> "NOTE: Solr 3.1 will include an experimental version of the Extended DisMax 
> parsing" http://wiki.apache.org/solr/DisMaxQParserPlugin
> 
> 
> 



Solr - multivalue fields - please help

2011-03-23 Thread Sandra
Hi everyone,

I know that Solr cannot match 1 value in a multi-valued field with
the corresponding value in another multi-valued field. However, my data set
appears to be in that form at the moment.
With that in mind, does anyone know of any good articles or
discussions that have addressed this issue, specifically the alternatives
that can easily be done/considered, etc.?

The data is of the following format:

I have a unique Employee ID field, Employer (multi-value), FromDate
(multi-value) and ToDate (multi-value). For a given employee ID I am trying
to return the relevant data. For example, for an ID of "21345" and employer
"IBM", return the work dates from and to. Or, for the same ID and 2 work dates,
return the company or companies that the ID was associated with, etc.


Employee ID   Employer   FromDate   ToDate
21345         IBM        01/01/04   01/01/06
              MS         01/01/07   01/01/08
              BT         01/01/09   Present

Any suggestions/comments/ideas/articles much appreciated...

Thanks,
S. 

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-multivalue-fields-please-help-tp2720067p2720067.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: email - DIH

2011-03-23 Thread Matias Alonso
Hi Gora,

I appreciate your help.

I've done what you said, but if I omit "fetchMailsSince", "full-import" doesn't
work.

This is the message on the console: ..."SEVERE: Full Import
failed:org.apache.solr.handler.dataimport.DataImportHandlerException:
Invalid value for fetchMailSince:  Processing Document # 1"...

The email account I use for this was created at the beginning of this month.


Regards,
Matias.



2011/3/23 Gora Mohanty 

> On Tue, Mar 22, 2011 at 9:38 PM, Matias Alonso 
> wrote:
> [...]
> > The problem is that I´m indexing emails throw Data import Handler using
> > Gmail with imaps; I do this for search on email list in the future. The
> > emails are indexed partiality and I can´t found the problem of why don´t
> > index all of the emails.
> [...]
> > I´ve done a full import and no errors were found, but in the status I saw
> > that was added 28 documents, and in the console, I found 35 messanges.
> [...]
>
> > INFO: Total messages : 35
> >
> > Mar 22, 2011 3:55:16 PM
> > org.apache.solr.handler.dataimport.MailEntityProcessor$MessageIterator
> > 
> >
> > INFO: Search criteria applied. Batching disabled
> [...]
>
> The above seems to indicate that the MailEntityProcessor does find
> all 35 messages, but indexes only 28. Are you sure that all 35 are
> since 2010-01-01 00:00:00? Could you try without fetchMailsSince?
>
> Regards,
> Gora
>



-- 
Matias.


Re: Unknown query type 'edismax'

2011-03-23 Thread Ahmet Arslan
> I just downloaded apache-solr-1.4.1 and am trying to run an
> edismax query using the defType argument as edismax.
> 

You need 3.1 or trunk for that. 

"NOTE: Solr 3.1 will include an experimental version of the Extended DisMax 
parsing" http://wiki.apache.org/solr/DisMaxQParserPlugin


  


RE: Architecture question about solr sharding

2011-03-23 Thread Baillie, Robert
I'd separate the splitting of the binary documents from the sharding in Solr - 
they're different things and the split may be required at different levels, due 
to different numbers of documents.

Splitting the dependency means that you can store the path in the document and 
not need to infer anything, and you can re-organise the Solr shards without 
having to worry about moving the binary documents around.

Also, if you think you're going to need to change Jan to Jan2011, then maybe 
you should just start with Jan2011.  Alternatively, considering that you think 
change is likely in the future, why not name the directories in such a way that 
you don't need to change earlier ones as the requirement to change the 
structure arises?

Does that make sense?

Rob

On Tue, Mar 22, 2011 at 3:20 PM, JohnRodey  wrote:
> I have an issue and I'm wondering if there is an easy way around it 
> with just SOLR.
>
> I have multiple SOLR servers and a field in my schema is a relative 
> path to a binary file.  Each SOLR server is responsible for a 
> different subset of data that belongs to a different base path.
>
> For Example...
>
> My directory structure may look like this:
> /someDir/Jan/binaryfiles/...
> /someDir/Feb/binaryfiles/...
> /someDir/Mar/binaryfiles/...
> /someDir/Apr/binaryfiles/...
>
> Server1 is responsible for Jan, Server2 for Feb, etc...
>
> And a response document may have a field like this my entry 
> binaryfiles/12345.bin
>
> How can I tell from my main search server which server returned a result?
> I cannot put the full path in the index because my path structure 
> might change in the future.  Using this example it may go to 
> '/someDir/Jan2011/'.
>
> I basically need to find a way to say 'Ah! server01 returned this 
> result, so it must be in /someDir/Jan'
>
> Thanks!
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Architecture-question-about-solr-sh
> arding-tp2716417p2716417.html Sent from the Solr - User mailing list 
> archive at Nabble.com.
>






Unknown query type 'edismax'

2011-03-23 Thread Swapnonil Mukherjee
Hi,

I just downloaded apache-solr-1.4.1 and am trying to run an edismax query using 
the defType argument as edismax.

But I am getting an Unknown query type exception.

Mar 23, 2011 10:28:42 AM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: Unknown query type 'edismax'
at org.apache.solr.core.SolrCore.getQueryPlugin(SolrCore.java:1462)
at org.apache.solr.search.QParser.getParser(QParser.java:251)
at 
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:88)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:174)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1089)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:365)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:211)
at 
org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
at org.mortbay.jetty.Server.handle(Server.java:285)
at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:502)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:821)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:513)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:208)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:378)
at 
org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:226)
at 
org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442)

Do I need to make any changes to solrconfig.xml to get edismax queries to work?

Swapnonil Mukherjee
+91-40092712
+91-9007131999