Re: [Solrj 4.0] No group response

2012-11-30 Thread Kissue Kissue
Here is how i have previously used grouping. Note i am using Solr 3.5:

SolrQuery query = new SolrQuery();
query.setRows(GROUPING_LIMIT);
query.setParam("group", Boolean.TRUE);
query.setParam("group.field", GROUP_FIELD);

This seems to work for me.
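For completeness, a rough SolrJ 3.4/3.5 sketch of the same flow, reading the grouped
results back through GroupResponse. The field name "group_field" and the wrapper class
are placeholders, not anything from this thread. One likely reason getGroupResponse()
comes back null in the snippet quoted below is group.main=true: with group.main the
documents are returned as a flat result list, so there is no grouped section for SolrJ
to bind to.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.response.Group;
import org.apache.solr.client.solrj.response.GroupCommand;
import org.apache.solr.client.solrj.response.GroupResponse;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.GroupParams;

public class GroupingSketch {

    public static void printGroups(SolrServer solrServer) throws SolrServerException {
        SolrQuery query = new SolrQuery("*:*");
        query.setRows(10);
        query.setParam(GroupParams.GROUP, true);
        query.setParam(GroupParams.GROUP_FIELD, "group_field");
        query.setParam(GroupParams.GROUP_LIMIT, "1");
        query.setParam(GroupParams.GROUP_TOTAL_COUNT, true);
        // group.main is deliberately left unset so the response keeps the
        // grouped format and getGroupResponse() below is populated.

        QueryResponse response = solrServer.query(query);
        GroupResponse groupResponse = response.getGroupResponse();
        for (GroupCommand command : groupResponse.getValues()) {
            // getNGroups() is only filled in when group.ngroups=true was sent
            System.out.println("total groups: " + command.getNGroups());
            for (Group group : command.getValues()) {
                System.out.println(group.getGroupValue() + " -> "
                        + group.getResult().getNumFound() + " docs in group");
            }
        }
    }
}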



On Fri, Nov 30, 2012 at 1:17 PM, Roman Slavík slaav...@gmail.com wrote:

 Hi guys,

 I have a problem with grouping in Solr 4.0 using the Solrj api. I need this:
 search some documents limited with solr query, group them by one field and
 return total count of groups.
 There is param 'group.ngroups' for adding groups count into group
 response. Sounds easy, so I wrote something like this:

 SolrQuery query = new SolrQuery().setQuery(queryString);
 query.addField("score");

 query.setParam(GroupParams.GROUP, true);
 query.setParam(GroupParams.GROUP_MAIN, true);
 query.setParam(GroupParams.GROUP_FIELD, "group_field");
 query.setParam(GroupParams.GROUP_LIMIT, "1");
 query.setParam(GroupParams.GROUP_TOTAL_COUNT, true);

 QueryResponse response = solrServer.query(query);
 // contains found docs
 GroupResponse groupResponse = response.getGroupResponse(); // null

 Search result is ok, QueryResponse contains docs I searched for. But group
 response is always null. Did I miss something, some magic parameter for
 enabling group response?

 Thanks for any advice

 Roman



Re: Items disappearing from Solr index

2012-10-30 Thread Kissue Kissue
I have encountered another case where deleteByQuery fails. It fails when i
have a catalogueId value www and thus issue the query ({!term f=catalogueId}www).
One of my customers just reported this now. Any ideas why a value like that,
when issued in a deleteByQuery, should be wiping out the entire index?

Thanks.

On Thu, Sep 27, 2012 at 2:27 PM, Kissue Kissue kissue...@gmail.com wrote:

 Actually this problem occurs even when i am doing just deletes. I tested
 by sending only one delete query for a single catalogue and had the same
 problem. I always optimize once.

 I changed to the syntax you suggested ( {!term f=catalogueId}Emory Labs)
 and works like a charm. Thanks for the pointer, saved me from another issue
 that could have occurred at some point.

 Thanks.




 On Thu, Sep 27, 2012 at 12:30 PM, Erick Erickson 
 erickerick...@gmail.com wrote:

 Wild shot in the dark

 What happens if you switch from StreamingUpdateSolrServer to
 HttpSolrServer?

 What I'm wondering is if somehow you're getting a queueing problem. If
 you have
 multiple threads defined for SUSS, it might be possible (and I'm
 guessing) that
 the delete bit is getting sent after some of the adds. Frankly I doubt
 this is
 the case, but this issue is so weird that I'm grasping at straws.

 BTW, there's no reason to optimize twice. Actually, the new thinking is
 that
 optimizing usually isn't necessary anyway. But if you insist on optimizing
 there's no reason to do it _both_ after the deletes and after the adds,
 just
 do it after the adds.

 Best
 Erick

 On Thu, Sep 27, 2012 at 4:31 AM, Kissue Kissue kissue...@gmail.com
 wrote:
  #What is the field type for that field - string or text?
 
  It is a string type.
 
  Thanks.
 
  On Wed, Sep 26, 2012 at 8:14 PM, Jack Krupansky 
 j...@basetechnology.com wrote:
 
  What is the field type for that field - string or text?
 
 
  -- Jack Krupansky
 
  -Original Message- From: Kissue Kissue
  Sent: Wednesday, September 26, 2012 1:43 PM
 
  To: solr-user@lucene.apache.org
  Subject: Re: Items disappearing from Solr index
 
  # It is looking for documents with Emory in the specified field OR
 Labs
  in the default search field.
 
  This does not seem to be the case. For instance issuing a
 deleteByQuery for
  catalogueId: "PEARL LINGUISTICS LTD" also deletes the contents of a
  catalogueId with the value: "Ncl_MacNaughtonMcGregorCoaching_vf010811".
 
  Thanks.
 
  On Wed, Sep 26, 2012 at 2:37 PM, Jack Krupansky 
 j...@basetechnology.com wrote:
 
   It is looking for documents with Emory in the specified field OR
 Labs
  in the default search field.
 
  -- Jack Krupansky
 
  -Original Message- From: Kissue Kissue
  Sent: Wednesday, September 26, 2012 7:47 AM
  To: solr-user@lucene.apache.org
  Subject: Re: Items disappearing from Solr index
 
 
  I have just solved this problem.
 
  We have a field called catalogueId. One possible value for this field
  could
  be Emory Labs. I found out that when the following delete by query
 is
  sent to solr:
 
  getSolrServer().deleteByQuery(catalogueId + ":" + "Emory Labs")  [Notice that
  there are no quotes surrounding the catalogueId value - Emory Labs]
 
  For some reason this delete by query ends up deleting the contents of
 some
  other random catalogues too which is the reason why we are loosing
 items
  from the index. When the query is changed to:
 
  getSolrServer().deleteByQuery(catalogueId + ":" + "\"Emory Labs\""), then it
  starts to correctly delete only items in the "Emory Labs" catalogue.
 
  So my first question is, what exactly does deleteByQuery do in the
 first
  query without the quotes? How is it determining which catalogues to
  delete?
 
  Secondly, shouldn't the correct behaviour be not to delete anything
 at all
  in this case since when a search is done for the same catalogueId
 without
  the quotes it just simply returns no results?
 
  Thanks.
 
 
  On Mon, Sep 24, 2012 at 3:12 PM, Kissue Kissue kissue...@gmail.com
  wrote:
 
   Hi Erick,
 
 
  Thanks for your reply. Yes i am using delete by query. I am currently
  logging the number of items to be deleted before handing off to
 solr. And
  from solr logs i can it deleted exactly that number. I will verify
  further.
 
  Thanks.
 
 
  On Mon, Sep 24, 2012 at 1:21 PM, Erick Erickson 
 erickerick...@gmail.com
  
  wrote:
 
 
   How do you delete items? By ID or by query?
 
 
  My guess is that one of two things is happening:
  1 your delete process is deleting too much data.
  2 your index process isn't indexing what you think.
 
  I'd add some logging to the SolrJ program to see what
  it thinks is has deleted or added to the index and go from there.
 
  Best
  Erick
 
  On Mon, Sep 24, 2012 at 6:55 AM, Kissue Kissue kissue...@gmail.com
 
  wrote:
   Hi,
  
   I am running Solr 3.5, using SolrJ and using
 StreamingUpdateSolrServer
  to
   index and delete items from solr

Re: Items disappearing from Solr index

2012-09-27 Thread Kissue Kissue
#What is the field type for that field - string or text?

It is a string type.

Thanks.

On Wed, Sep 26, 2012 at 8:14 PM, Jack Krupansky j...@basetechnology.com wrote:

 What is the field type for that field - string or text?


 -- Jack Krupansky

 -Original Message- From: Kissue Kissue
 Sent: Wednesday, September 26, 2012 1:43 PM

 To: solr-user@lucene.apache.org
 Subject: Re: Items disappearing from Solr index

 # It is looking for documents with Emory in the specified field OR Labs
 in the default search field.

 This does not seem to be the case. For instance issuing a deleteByQuery for
 catalogueId: "PEARL LINGUISTICS LTD" also deletes the contents of a
 catalogueId with the value: "Ncl_MacNaughtonMcGregorCoaching_vf010811".

 Thanks.

 On Wed, Sep 26, 2012 at 2:37 PM, Jack Krupansky j...@basetechnology.com wrote:

  It is looking for documents with Emory in the specified field OR Labs
 in the default search field.

 -- Jack Krupansky

 -Original Message- From: Kissue Kissue
 Sent: Wednesday, September 26, 2012 7:47 AM
 To: solr-user@lucene.apache.org
 Subject: Re: Items disappearing from Solr index


 I have just solved this problem.

 We have a field called catalogueId. One possible value for this field
 could
 be Emory Labs. I found out that when the following delete by query is
 sent to solr:

 getSolrServer().deleteByQuery(catalogueId + ":" + "Emory Labs")  [Notice that
 there are no quotes surrounding the catalogueId value - Emory Labs]

 For some reason this delete by query ends up deleting the contents of some
 other random catalogues too which is the reason why we are loosing items
 from the index. When the query is changed to:

 getSolrServer().deleteByQuery(catalogueId + ":" + "\"Emory Labs\""), then it
 starts to correctly delete only items in the "Emory Labs" catalogue.

 So my first question is, what exactly does deleteByQuery do in the first
 query without the quotes? How is it determining which catalogues to
 delete?

 Secondly, shouldn't the correct behaviour be not to delete anything at all
 in this case since when a search is done for the same catalogueId without
 the quotes it just simply returns no results?

 Thanks.


 On Mon, Sep 24, 2012 at 3:12 PM, Kissue Kissue kissue...@gmail.com
 wrote:

  Hi Erick,


 Thanks for your reply. Yes i am using delete by query. I am currently
 logging the number of items to be deleted before handing off to solr. And
 from solr logs i can it deleted exactly that number. I will verify
 further.

 Thanks.


 On Mon, Sep 24, 2012 at 1:21 PM, Erick Erickson erickerick...@gmail.com
 
 wrote:


  How do you delete items? By ID or by query?


 My guess is that one of two things is happening:
 1 your delete process is deleting too much data.
 2 your index process isn't indexing what you think.

 I'd add some logging to the SolrJ program to see what
 it thinks is has deleted or added to the index and go from there.

 Best
 Erick

 On Mon, Sep 24, 2012 at 6:55 AM, Kissue Kissue kissue...@gmail.com
 wrote:
  Hi,
 
  I am running Solr 3.5, using SolrJ and using StreamingUpdateSolrServer
 to
  index and delete items from solr.
 
  I basically index items from the db into solr every night. Existing
 items
  can be marked for deletion in the db and a delete request sent to solr
 to
  delete such items.
 
  My process runs as follows every night:
 
  1. Check if items have been marked for deletion and delete from solr.
  I
  commit and optimize after the entire solr deletion runs.
  2. Index any new items to solr. I commit and optimize after all the 
 new
  items have been added.
 
  Recently i started noticing that huge chunks of items that have not 
 been
  marked for deletion are disappearing from the index. I checked the 
 solr
  logs and the logs indicate that it is deleting exactly the number of
 items
  requested but still a lot of other items disappear from the index from
 time
  to time. Any ideas what might be causing this or what i am doing 
 wrong.
 
 
  Thanks.









Re: Items disappearing from Solr index

2012-09-27 Thread Kissue Kissue
Actually this problem occurs even when i am doing just deletes. I tested by
sending only one delete query for a single catalogue and had the same
problem. I always optimize once.

I changed to the syntax you suggested ( {!term f=catalogueId}Emory Labs)
and works like a charm. Thanks for the pointer, saved me from another issue
that could have occurred at some point.

Thanks.



On Thu, Sep 27, 2012 at 12:30 PM, Erick Erickson erickerick...@gmail.com wrote:

 Wild shot in the dark

 What happens if you switch from StreamingUpdateSolrServer to
 HttpSolrServer?

 What I'm wondering is if somehow you're getting a queueing problem. If you
 have
 multiple threads defined for SUSS, it might be possible (and I'm guessing)
 that
 the delete bit is getting sent after some of the adds. Frankly I doubt
 this is
 the case, but this issue is so weird that I'm grasping at straws.

 BTW, there's no reason to optimize twice. Actually, the new thinking is
 that
 optimizing usually isn't necessary anyway. But if you insist on optimizing
 there's no reason to do it _both_ after the deletes and after the adds,
 just
 do it after the adds.

 Best
 Erick

 On Thu, Sep 27, 2012 at 4:31 AM, Kissue Kissue kissue...@gmail.com
 wrote:
  #What is the field type for that field - string or text?
 
  It is a string type.
 
  Thanks.
 
  On Wed, Sep 26, 2012 at 8:14 PM, Jack Krupansky j...@basetechnology.com
 wrote:
 
  What is the field type for that field - string or text?
 
 
  -- Jack Krupansky
 
  -Original Message- From: Kissue Kissue
  Sent: Wednesday, September 26, 2012 1:43 PM
 
  To: solr-user@lucene.apache.org
  Subject: Re: Items disappearing from Solr index
 
  # It is looking for documents with Emory in the specified field OR
 Labs
  in the default search field.
 
  This does not seem to be the case. For instance issuing a deleteByQuery
 for
  catalogueId: "PEARL LINGUISTICS LTD" also deletes the contents of a
  catalogueId with the value: "Ncl_MacNaughtonMcGregorCoaching_vf010811".
 
  Thanks.
 
  On Wed, Sep 26, 2012 at 2:37 PM, Jack Krupansky 
 j...@basetechnology.com wrote:
 
   It is looking for documents with Emory in the specified field OR
 Labs
  in the default search field.
 
  -- Jack Krupansky
 
  -Original Message- From: Kissue Kissue
  Sent: Wednesday, September 26, 2012 7:47 AM
  To: solr-user@lucene.apache.org
  Subject: Re: Items disappearing from Solr index
 
 
  I have just solved this problem.
 
  We have a field called catalogueId. One possible value for this field
  could
  be Emory Labs. I found out that when the following delete by query is
  sent to solr:
 
  getSolrServer().deleteByQuery(catalogueId + ":" + "Emory Labs")  [Notice that
  there are no quotes surrounding the catalogueId value - Emory Labs]
 
  For some reason this delete by query ends up deleting the contents of
 some
  other random catalogues too which is the reason why we are loosing
 items
  from the index. When the query is changed to:
 
  getSolrServer().deleteByQuery(catalogueId + ":" + "\"Emory Labs\""), then it
  starts to correctly delete only items in the "Emory Labs" catalogue.
 
  So my first question is, what exactly does deleteByQuery do in the
 first
  query without the quotes? How is it determining which catalogues to
  delete?
 
  Secondly, shouldn't the correct behaviour be not to delete anything at
 all
  in this case since when a search is done for the same catalogueId
 without
  the quotes it just simply returns no results?
 
  Thanks.
 
 
  On Mon, Sep 24, 2012 at 3:12 PM, Kissue Kissue kissue...@gmail.com
  wrote:
 
   Hi Erick,
 
 
  Thanks for your reply. Yes i am using delete by query. I am currently
  logging the number of items to be deleted before handing off to solr.
 And
  from solr logs i can it deleted exactly that number. I will verify
  further.
 
  Thanks.
 
 
  On Mon, Sep 24, 2012 at 1:21 PM, Erick Erickson 
 erickerick...@gmail.com
  
  wrote:
 
 
   How do you delete items? By ID or by query?
 
 
  My guess is that one of two things is happening:
  1 your delete process is deleting too much data.
  2 your index process isn't indexing what you think.
 
  I'd add some logging to the SolrJ program to see what
  it thinks is has deleted or added to the index and go from there.
 
  Best
  Erick
 
  On Mon, Sep 24, 2012 at 6:55 AM, Kissue Kissue kissue...@gmail.com
  wrote:
   Hi,
  
   I am running Solr 3.5, using SolrJ and using
 StreamingUpdateSolrServer
  to
   index and delete items from solr.
  
   I basically index items from the db into solr every night. Existing
  items
   can be marked for deletion in the db and a delete request sent to
 solr
  to
   delete such items.
  
   My process runs as follows every night:
  
   1. Check if items have been marked for deletion and delete from
 solr.
   I
   commit and optimize after the entire solr deletion runs.
   2. Index any new items to solr. I commit and optimize after all
 the 
  new
   items have been added

Re: Items disappearing from Solr index

2012-09-26 Thread Kissue Kissue
I have just solved this problem.

We have a field called catalogueId. One possible value for this field could
be "Emory Labs". I found out that when the following delete by query is
sent to solr:

getSolrServer().deleteByQuery(catalogueId + ":" + "Emory Labs")  [Notice that
there are no quotes surrounding the catalogueId value - Emory Labs]

For some reason this delete by query ends up deleting the contents of some
other random catalogues too, which is the reason why we are losing items
from the index. When the query is changed to:

getSolrServer().deleteByQuery(catalogueId + ":" + "\"Emory Labs\""), then it
starts to correctly delete only items in the "Emory Labs" catalogue.

So my first question is, what exactly does deleteByQuery do in the first
query without the quotes? How is it determining which catalogues to delete?

Secondly, shouldn't the correct behaviour be not to delete anything at all
in this case since when a search is done for the same catalogueId without
the quotes it just simply returns no results?

Thanks.
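A small sketch of the two usual ways to keep the space in the value from being parsed
as a second clause against the default field. It assumes catalogueId is the field name
and the value arrives as a plain string; neither snippet is taken verbatim from this
thread.

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.util.ClientUtils;

public class SafeDeleteSketch {

    // Deletes only documents whose catalogueId matches the given value,
    // without letting whitespace or query syntax in the value leak into
    // the query parser.
    public static void deleteCatalogue(SolrServer server, String value) throws Exception {
        // Option 1: the term query parser treats the rest of the string as a
        // single literal term (the {!term ...} form used elsewhere in this thread).
        server.deleteByQuery("{!term f=catalogueId}" + value);

        // Option 2: quote the value as a phrase, escaping any special
        // characters it may contain.
        String escaped = ClientUtils.escapeQueryChars(value);
        server.deleteByQuery("catalogueId:\"" + escaped + "\"");

        server.commit();
    }
}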


On Mon, Sep 24, 2012 at 3:12 PM, Kissue Kissue kissue...@gmail.com wrote:

 Hi Erick,

 Thanks for your reply. Yes i am using delete by query. I am currently
 logging the number of items to be deleted before handing off to solr. And
 from solr logs i can it deleted exactly that number. I will verify further.

 Thanks.


 On Mon, Sep 24, 2012 at 1:21 PM, Erick Erickson 
 erickerick...@gmail.com wrote:

 How do you delete items? By ID or by query?

 My guess is that one of two things is happening:
 1 your delete process is deleting too much data.
 2 your index process isn't indexing what you think.

 I'd add some logging to the SolrJ program to see what
 it thinks is has deleted or added to the index and go from there.

 Best
 Erick

 On Mon, Sep 24, 2012 at 6:55 AM, Kissue Kissue kissue...@gmail.com
 wrote:
  Hi,
 
  I am running Solr 3.5, using SolrJ and using StreamingUpdateSolrServer
 to
  index and delete items from solr.
 
  I basically index items from the db into solr every night. Existing
 items
  can be marked for deletion in the db and a delete request sent to solr
 to
  delete such items.
 
  My process runs as follows every night:
 
  1. Check if items have been marked for deletion and delete from solr. I
  commit and optimize after the entire solr deletion runs.
  2. Index any new items to solr. I commit and optimize after all the new
  items have been added.
 
  Recently i started noticing that huge chunks of items that have not been
  marked for deletion are disappearing from the index. I checked the solr
  logs and the logs indicate that it is deleting exactly the number of
 items
  requested but still a lot of other items disappear from the index from
 time
  to time. Any ideas what might be causing this or what i am doing wrong.
 
 
  Thanks.





Re: Items disappearing from Solr index

2012-09-26 Thread Kissue Kissue
# It is looking for documents with Emory in the specified field OR Labs
in the default search field.

This does not seem to be the case. For instance issuing a deleteByQuery for
catalogueId: "PEARL LINGUISTICS LTD" also deletes the contents of a
catalogueId with the value: "Ncl_MacNaughtonMcGregorCoaching_vf010811".

Thanks.

On Wed, Sep 26, 2012 at 2:37 PM, Jack Krupansky j...@basetechnology.com wrote:

 It is looking for documents with Emory in the specified field OR Labs
 in the default search field.

 -- Jack Krupansky

 -Original Message- From: Kissue Kissue
 Sent: Wednesday, September 26, 2012 7:47 AM
 To: solr-user@lucene.apache.org
 Subject: Re: Items disappearing from Solr index


 I have just solved this problem.

 We have a field called catalogueId. One possible value for this field could
 be Emory Labs. I found out that when the following delete by query is
 sent to solr:

 getSolrServer().deleteByQuery(catalogueId + ":" + "Emory Labs")  [Notice that
 there are no quotes surrounding the catalogueId value - Emory Labs]

 For some reason this delete by query ends up deleting the contents of some
 other random catalogues too which is the reason why we are loosing items
 from the index. When the query is changed to:

 getSolrServer().deleteByQuery(catalogueId + ":" + "\"Emory Labs\""), then it
 starts to correctly delete only items in the "Emory Labs" catalogue.

 So my first question is, what exactly does deleteByQuery do in the first
 query without the quotes? How is it determining which catalogues to delete?

 Secondly, shouldn't the correct behaviour be not to delete anything at all
 in this case since when a search is done for the same catalogueId without
 the quotes it just simply returns no results?

 Thanks.


 On Mon, Sep 24, 2012 at 3:12 PM, Kissue Kissue kissue...@gmail.com
 wrote:

  Hi Erick,

 Thanks for your reply. Yes i am using delete by query. I am currently
 logging the number of items to be deleted before handing off to solr. And
 from solr logs i can it deleted exactly that number. I will verify
 further.

 Thanks.


 On Mon, Sep 24, 2012 at 1:21 PM, Erick Erickson erickerick...@gmail.com
 wrote:

  How do you delete items? By ID or by query?

 My guess is that one of two things is happening:
 1 your delete process is deleting too much data.
 2 your index process isn't indexing what you think.

 I'd add some logging to the SolrJ program to see what
 it thinks is has deleted or added to the index and go from there.

 Best
 Erick

 On Mon, Sep 24, 2012 at 6:55 AM, Kissue Kissue kissue...@gmail.com
 wrote:
  Hi,
 
  I am running Solr 3.5, using SolrJ and using StreamingUpdateSolrServer
 to
  index and delete items from solr.
 
  I basically index items from the db into solr every night. Existing
 items
  can be marked for deletion in the db and a delete request sent to solr
 to
  delete such items.
 
  My process runs as follows every night:
 
  1. Check if items have been marked for deletion and delete from solr. I
  commit and optimize after the entire solr deletion runs.
  2. Index any new items to solr. I commit and optimize after all the new
  items have been added.
 
  Recently i started noticing that huge chunks of items that have not 
 been
  marked for deletion are disappearing from the index. I checked the solr
  logs and the logs indicate that it is deleting exactly the number of
 items
  requested but still a lot of other items disappear from the index from
 time
  to time. Any ideas what might be causing this or what i am doing wrong.
 
 
  Thanks.







Items disappearing from Solr index

2012-09-24 Thread Kissue Kissue
Hi,

I am running Solr 3.5, using SolrJ and using StreamingUpdateSolrServer to
index and delete items from solr.

I basically index items from the db into solr every night. Existing items
can be marked for deletion in the db and a delete request sent to solr to
delete such items.

My process runs as follows every night:

1. Check if items have been marked for deletion and delete from solr. I
commit and optimize after the entire solr deletion runs.
2. Index any new items to solr. I commit and optimize after all the new
items have been added.

Recently i started noticing that huge chunks of items that have not been
marked for deletion are disappearing from the index. I checked the solr
logs and the logs indicate that it is deleting exactly the number of items
requested but still a lot of other items disappear from the index from time
to time. Any ideas what might be causing this or what i am doing wrong.


Thanks.


Re: Items disappearing from Solr index

2012-09-24 Thread Kissue Kissue
Hi Erick,

Thanks for your reply. Yes i am using delete by query. I am currently
logging the number of items to be deleted before handing off to solr. And
from solr logs i can see it deleted exactly that number. I will verify further.

Thanks.

On Mon, Sep 24, 2012 at 1:21 PM, Erick Erickson erickerick...@gmail.com wrote:

 How do you delete items? By ID or by query?

 My guess is that one of two things is happening:
 1 your delete process is deleting too much data.
 2 your index process isn't indexing what you think.

 I'd add some logging to the SolrJ program to see what
 it thinks it has deleted or added to the index and go from there.

 Best
 Erick

 On Mon, Sep 24, 2012 at 6:55 AM, Kissue Kissue kissue...@gmail.com
 wrote:
  Hi,
 
  I am running Solr 3.5, using SolrJ and using StreamingUpdateSolrServer to
  index and delete items from solr.
 
  I basically index items from the db into solr every night. Existing items
  can be marked for deletion in the db and a delete request sent to solr to
  delete such items.
 
  My process runs as follows every night:
 
  1. Check if items have been marked for deletion and delete from solr. I
  commit and optimize after the entire solr deletion runs.
  2. Index any new items to solr. I commit and optimize after all the new
  items have been added.
 
  Recently i started noticing that huge chunks of items that have not been
  marked for deletion are disappearing from the index. I checked the solr
  logs and the logs indicate that it is deleting exactly the number of
 items
  requested but still a lot of other items disappear from the index from
 time
  to time. Any ideas what might be causing this or what i am doing wrong.
 
 
  Thanks.



Re: StreamingUpdateSolrServer - Failure during indexing

2012-09-04 Thread Kissue Kissue
Hi Lance,

As far as i can see, one document failing does not fail the entire update.
From my logs i can see the error logged in the logs but indexing just
continues to the next document. This happens with the
StreamingUpdateSolrServer which is multithreaded.

Thanks.
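A compilable variant of Jack's suggestion below, as a rough sketch: override handleError
to record failures so they can be retried later. The URL, queue size and thread count
are placeholders, and the failure list is just one way of collecting the messages.

import java.net.MalformedURLException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer;

public class ErrorTrackingSketch {

    // Collects the messages of failed requests; handleError can be called from
    // the server's background threads, hence the thread-safe list.
    public static final List<String> FAILURES = new CopyOnWriteArrayList<String>();

    public static StreamingUpdateSolrServer createServer() throws MalformedURLException {
        return new StreamingUpdateSolrServer("http://localhost:8983/solr", 20, 2) {
            @Override
            public void handleError(Throwable ex) {
                super.handleError(ex);
                // The message usually contains the request URI; parse out your
                // own document identifier from it if you added one.
                FAILURES.add(String.valueOf(ex.getMessage()));
            }
        };
    }
}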

On Tue, Jun 19, 2012 at 9:58 AM, Lance Norskog goks...@gmail.com wrote:

 When one document fails, the entire update fails, right? Is there now
 a mode where successful documents are added and failed docs are
 dropped?

 If you want to know if a document is in the index, search for it!
 There is no other guaranteed way.

 On Sun, Jun 17, 2012 at 3:14 PM, Jack Krupansky j...@basetechnology.com
 wrote:
  You could instantiate an anonymous instance of StreamingUpdateSolrServer
  that has a handleError method that then parses the exception message to
  get the request URI. If there isn't enough information there, you could
 add
  a dummy request option to your original request that was a document
  identifier of your own.
 
  Pseudo code:
 
  StreamingUpdateSolrServer myServer = new StreamingUpdateSolrServer(...) {
      @Override
      public void handleError(Throwable ex) {
          super.handleError(ex);
          // extract text from ex.getMessage()
      }
  };
 
  Included in the message text is "request: " followed by the URI for the HTTP
  method, which presumably has the request options (unless they were
 encoded
  in the body of the request as multipart form data.)
 
  -- Jack Krupansky
 
  -Original Message- From: Kissue Kissue
  Sent: Sunday, June 17, 2012 7:40 AM
  To: solr-user@lucene.apache.org
  Subject: StreamingUpdateSolrServer - Failure during indexing
 
 
  Hi,
 
  Using the StreamingUpdateSolrServer, does anybody know how i can get the
  list of documents that failed during indexing so maybe i can index them
  later? Is it possible? I am using Solr 3.5 with SolrJ.
 
  Thanks.



 --
 Lance Norskog
 goks...@gmail.com



Re: Is it compulsory to define a tokenizer when defining field types in solr

2012-06-29 Thread Kissue Kissue
Thanks Erick for the clarification.

Cheers!

On Fri, Jun 29, 2012 at 2:08 PM, Erick Erickson erickerick...@gmail.com wrote:

 Yes, it's mandatory to define at least one tokenizer (and only one
 tokenizer). If
 you need the whole input treated as one token, you can use
 KeywordTokenizerFactory.

 Best
 Erick

 On Thu, Jun 28, 2012 at 11:10 AM, Kissue Kissue kissue...@gmail.com
 wrote:
  Hi,
 
  When defining a fieldtype is it compulsory to include a tokenizer in its
  definition?
 
  I have a field defined as follows without tokenizer:
 
   <fieldType name="lowercase_pattern" class="solr.TextField"
       positionIncrementGap="100">
     <analyzer type="index">
       <filter class="solr.LowerCaseFilterFactory" />
     </analyzer>
     <analyzer type="query">
       <filter class="solr.LowerCaseFilterFactory" />
     </analyzer>
   </fieldType>
 
  Using this field when i try to start up Solr it says the field is not
  recognised. But when i change it to the following with tokenizer included
  it works:
 
   <fieldType name="lowercase_pattern" class="solr.TextField"
       positionIncrementGap="100">
     <analyzer type="index">
       <tokenizer class="solr.KeywordTokenizerFactory"/>
       <filter class="solr.LowerCaseFilterFactory" />
     </analyzer>
     <analyzer type="query">
       <tokenizer class="solr.KeywordTokenizerFactory"/>
       <filter class="solr.LowerCaseFilterFactory" />
     </analyzer>
   </fieldType>
 
  Thanks.



Searching against stored wild cards

2012-06-29 Thread Kissue Kissue
Hi,

I Want to know if it is in any way possible for me to do this Solr:

1. Store this field in Solr index - AB-CD-EF-*
2. Do a search for AB-CD-EF-GH and return back AB-CD-EF-*

Thanks.


Re: searching for more then one word

2012-06-28 Thread Kissue Kissue
The analysis page is your best friend in these circumstances. Use the
analysis page in solr admin, turn on verbose output for both index and
query, and see what the analysis chain looks like. You may be able to find
the culprit.


On Thu, Jun 28, 2012 at 10:57 AM, Arkadi Colson ark...@smartbit.be wrote:

 Hi

 I indexed following strings:

 abcdefg hijklmnop

 When searching for abcdefg hijklmnop Solr returns the result but when
 searching for abcdefg hijklmnop Solr returns nothing.

 Any idea how to search for more then one word?

 [params] = SolrObject Object
(
[debugQuery] = true
    [shards] = solr03-gs.intnet.smartbit.be:8983/solr,solr04-gs.intnet.smartbit.be:8983/solr,solr03-dcg.intnet.smartbit.be:8983/solr,solr04-dcg.intnet.smartbit.be:8983/solr
    [fl] = id,smsc_module,smsc_modulekey,smsc_userid,smsc_ssid,smsc_description,smsc_content,smsc_courseid,smsc_lastdate,score
[indent] = on
[start] = 0
    [q] = (smsc_content:"abcdefg hijklmnop" || smsc_description:"abcdefg hijklmnop") && (smsc_lastdate:[2008-05-28T08:45:50Z TO 2012-06-28T08:45:50Z])
[distrib] = true
[wt] = xml
[version] = 2.2
[rows] = 50
)


 <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
   <analyzer type="index">
     <charFilter class="solr.HTMLStripCharFilterFactory"/>
     <tokenizer class="solr.KeywordTokenizerFactory"/>
     <filter class="solr.StopFilterFactory" ignoreCase="true"
         words="stopwords_en.txt,stopwords_du.txt" enablePositionIncrements="true"/>
     <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
         generateNumberParts="1" catenateWords="1" catenateNumbers="1"
         catenateAll="0" splitOnCaseChange="1"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.SnowballPorterFilterFactory" language="Dutch" />
   </analyzer>
   <analyzer type="query">
     <tokenizer class="solr.KeywordTokenizerFactory"/>
     <!-- <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
         ignoreCase="true" expand="true"/> -->
     <filter class="solr.StopFilterFactory" ignoreCase="true"
         words="stopwords_en.txt,stopwords_du.txt" enablePositionIncrements="true"/>
     <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
         generateNumberParts="1" catenateWords="0" catenateNumbers="0"
         catenateAll="0" splitOnCaseChange="1"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.SnowballPorterFilterFactory" language="Dutch" />
   </analyzer>
 </fieldType>


 Thanks!

 --
 Smartbit bvba
 Hoogstraat 13
 B-3670 Meeuwen
 T: +32 11 64 08 80
 F: +32 89 46 81 10
 W: http://www.smartbit.be
 E: ark...@smartbit.be




StreamingUpdateSolrServer - Failure during indexing

2012-06-17 Thread Kissue Kissue
Hi,

Using the StreamingUpdateSolrServer, does anybody know how i can get the
list of documents that failed during indexing so maybe i can index them
later? Is it possible? I am using Solr 3.5 with SolrJ.

Thanks.


Re: StreamingUpdateSolrServer Connection Timeout Setting

2012-06-16 Thread Kissue Kissue
Thanks Sami. Has anybody had any need to explicitly set the connection
timeout? Just trying to understand how folks use it.

Thanks.
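For reference, both timeouts can be set directly on the server instance, since
StreamingUpdateSolrServer extends CommonsHttpSolrServer in the 3.x SolrJ releases.
A minimal sketch, assuming a local Solr URL; the queue size and thread count are
illustrative values only.

import java.net.MalformedURLException;

import org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer;

public class TimeoutSketch {

    public static StreamingUpdateSolrServer createServer() throws MalformedURLException {
        StreamingUpdateSolrServer server =
                new StreamingUpdateSolrServer("http://localhost:8983/solr", 20, 2);
        // Both setters are inherited from CommonsHttpSolrServer; values are in ms.
        server.setConnectionTimeout(5000); // time allowed to establish the connection
        server.setSoTimeout(30000);        // socket read timeout once connected
        return server;
    }
}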

On Fri, Jun 15, 2012 at 7:01 PM, Sami Siren ssi...@gmail.com wrote:

 The api doc for version 3.6.0 is available here:

 http://lucene.apache.org/solr/api/org/apache/solr/client/solrj/impl/StreamingUpdateSolrServer.html

 I think the default is coming from your OS if you are not setting it
 explicitly.

 --
  Sami Siren

 On Fri, Jun 15, 2012 at 8:22 PM, Kissue Kissue kissue...@gmail.com
 wrote:
  Hi,
 
  Does anybody know what the default connection timeout setting is for
  StreamingUpdateSolrServer? Can i explicitly set one and how?
 
  Thanks.



StreamingUpdateSolrServer Connection Timeout Setting

2012-06-15 Thread Kissue Kissue
Hi,

Does anybody know what the default connection timeout setting is for
StreamingUpdateSolrServer? Can i explicitly set one and how?

Thanks.


Re: Solr Scoring

2012-04-13 Thread Kissue Kissue
Thanks a lot. I had already implemented Walter's solution and was wondering
if this was the right way to deal with it. This has now given me the
confidence to go with the solution.

Many thanks.
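A minimal sketch of how the query side of Walter's two-field approach (quoted below)
might look from SolrJ, assuming the schema copies the source text into hypothetical
fields text_exact (no stemmer) and text_stem (stemmed); the boost value of 4 is just
an example.

import org.apache.solr.client.solrj.SolrQuery;

public class ExactVsStemSketch {

    public static SolrQuery buildQuery(String userInput) {
        SolrQuery query = new SolrQuery(userInput);
        query.set("defType", "edismax");
        // Exact (unstemmed) matches get the larger boost; stemmed matches
        // still match but score lower.
        query.set("qf", "text_exact^4 text_stem");
        return query;
    }
}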

On Fri, Apr 13, 2012 at 1:04 AM, Erick Erickson erickerick...@gmail.comwrote:

 GAH! I had my head in "make this happen in one field" when I wrote my
 response, without being explicit. Of course Walter's solution is pretty
 much the standard way to deal with this.

 Best
 Erick

 On Thu, Apr 12, 2012 at 5:38 PM, Walter Underwood wun...@wunderwood.org
 wrote:
  It is easy. Create two fields, text_exact and text_stem. Don't use the
 stemmer in the first chain, do use the stemmer in the second. Give the
 text_exact a bigger weight than text_stem.
 
  wunder
 
  On Apr 12, 2012, at 4:34 PM, Erick Erickson wrote:
 
  No, I don't think there's an OOB way to make this happen. It's
  a recurring theme, make exact matches score higher than
  stemmed matches.
 
  Best
  Erick
 
  On Thu, Apr 12, 2012 at 5:18 AM, Kissue Kissue kissue...@gmail.com
 wrote:
  Hi,
 
  I have a field in my index called itemDesc which i am applying
  EnglishMinimalStemFilterFactory to. So if i index a value to this field
  containing Edges, the EnglishMinimalStemFilterFactory applies
 stemming
  and Edges becomes Edge. Now when i search for Edges, documents
 with
  Edge score better than documents with the actual search word -
 Edges.
  Is there a way i can make documents with the actual search word in this
  case Edges score better than document with Edge?
 
  I am using Solr 3.5. My field definition is shown below:
 
   <fieldType name="text_en" class="solr.TextField"
       positionIncrementGap="100">
     <analyzer type="index">
       <tokenizer class="solr.StandardTokenizerFactory"/>
       <filter class="solr.SynonymFilterFactory"
           synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
       <filter class="solr.StopFilterFactory"
           ignoreCase="true"
           words="stopwords_en.txt"
           enablePositionIncrements="true"/>
       <filter class="solr.LowerCaseFilterFactory"/>
       <filter class="solr.EnglishPossessiveFilterFactory"/>
       <filter class="solr.EnglishMinimalStemFilterFactory"/>
     </analyzer>
     <analyzer type="query">
       <tokenizer class="solr.StandardTokenizerFactory"/>
       <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
           ignoreCase="true" expand="true"/>
       <filter class="solr.StopFilterFactory"
           ignoreCase="true"
           words="stopwords_en.txt"
           enablePositionIncrements="true"/>
       <filter class="solr.LowerCaseFilterFactory"/>
       <filter class="solr.EnglishPossessiveFilterFactory"/>
       <filter class="solr.KeywordMarkerFilterFactory"
           protected="protwords.txt"/>
       <filter class="solr.EnglishMinimalStemFilterFactory"/>
     </analyzer>
   </fieldType>
 
  Thanks.
 
 
 
 
 



Solr Scoring

2012-04-12 Thread Kissue Kissue
Hi,

I have a field in my index called itemDesc which I am applying
EnglishMinimalStemFilterFactory to. So if I index a value to this field
containing "Edges", the EnglishMinimalStemFilterFactory applies stemming
and "Edges" becomes "Edge". Now when I search for "Edges", documents with
"Edge" score better than documents with the actual search word - "Edges".
Is there a way I can make documents with the actual search word, in this
case "Edges", score better than documents with "Edge"?

I am using Solr 3.5. My field definition is shown below:

<fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt"
        ignoreCase="true" expand="false"/>
    <filter class="solr.StopFilterFactory"
        ignoreCase="true"
        words="stopwords_en.txt"
        enablePositionIncrements="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EnglishPossessiveFilterFactory"/>
    <filter class="solr.EnglishMinimalStemFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
        ignoreCase="true" expand="true"/>
    <filter class="solr.StopFilterFactory"
        ignoreCase="true"
        words="stopwords_en.txt"
        enablePositionIncrements="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EnglishPossessiveFilterFactory"/>
    <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
    <filter class="solr.EnglishMinimalStemFilterFactory"/>
  </analyzer>
</fieldType>

Thanks.


Wildcard searching

2012-04-12 Thread Kissue Kissue
Hi,

I am using the edismax query handler with solr 3.5. From the Solr admin
interface when i do a wildcard search with the string: edge*, all documents
are returned with exactly the same score. When i do the same search from my
application using SolrJ to the same solr instance, only a few documents
have the same maximum score and all the rest have the minimum score. I was
expecting all to have the same score just like in the Solr Admin.

Any pointers why this is happening?

Thanks.


Re: Wildcard searching

2012-04-12 Thread Kissue Kissue
Correction, this difference between Solr admin scores and SolrJ scores
happens with leading wildcard queries e.g. *edge


On Thu, Apr 12, 2012 at 8:13 PM, Kissue Kissue kissue...@gmail.com wrote:

 Hi,

 I am using the edismax query handler with solr 3.5. From the Solr admin
 interface when i do a wildcard search with the string: edge*, all documents
 are returned with exactly the same score. When i do the same search from my
 application using SolrJ to the same solr instance, only a few documents
 have the same maximum score and all the rest have the minimum score. I was
 expecting all to have the same score just like in the Solr Admin.

 Any pointers why this is happening?

 Thanks.



Solr Http Caching

2012-04-11 Thread Kissue Kissue
Hi,

Are any of you using Solr Http caching? I am interested to see how people
use this functionality. I have an index that basically changes once a day
at midnight. Is it okay to enable Solr Http caching for such an index and
set the max age to 1 day? Any potential issues?

I am using solr 3.5 with SolrJ.

Thanks.


Problem with result grouping

2011-12-13 Thread Kissue Kissue
Hi,

Maybe there is something i am missing here but i have a field in my solr
index called categoryId. The field definition is as follows:

<field name="categoryId" type="string" indexed="true" stored="true"
    required="true" />

I am trying to group on this field and i get a result as follows:
<str name="groupValue">43201810</str>
<result name="doclist" numFound="72" start="0">

This is the query i am sending to solr:
http://localhost:8080/solr/catalogue/select/?q=*.*%0D%0A&version=2.2&start=0&rows=1000&indent=on&group=true&group.field=categoryId

My understanding is that this means there are 72 documents in my index that
have the value 43201810 for categoryId. Now surprisingly when i search my
index specifically for categoryId:43201810 expecting to get 72 results i
instead get 124 results. This is the query sent:
http://localhost:8080/solr/catalogue/select/?q=categoryId%3A43201810&version=2.2&start=0&rows=10&indent=on

Is my understanding of result grouping correct? Is there something i am
doing wrong. Any help will be much appreciated. I am using Solr 3.5

Thanks.


Matching all documents in the index

2011-12-13 Thread Kissue Kissue
Hi,

I have come across this query in the admin interface: *.*
Is this meant to match all documents in my index?

Currently when i run a query with q=*.*, numFound is 130310 but the actual
number of documents in my index is 603308.
When i then run the query with q=*, then numFound is 603308 which is the
total number of documents in my index.

So what is the difference between query with q = *.*  and q = * ?

I ran into this problem because i have a particular scenario where in my
index i have a field called categoryId which i am grouping on and
another field called orgId which i then filter on. So i do grouping on
categoryId but on all documents in the index matching the filter query
field. I use q=*.* but this doesn't give me the true picture as
highlighted above. So i use q=* and this works fine but takes about
2900ms to execute. Is this efficient? Is there a better way to do something
like this?

Solr version = 3.5

Thanks.
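For the scenario described above, a small SolrJ sketch: match every document with *:*,
restrict to one organisation with a filter query, and group on categoryId. The orgId
value is made up for illustration.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.common.params.GroupParams;

public class MatchAllGroupingSketch {

    public static SolrQuery buildQuery() {
        SolrQuery query = new SolrQuery("*:*");   // match-all is *:*, not *.* or *
        query.addFilterQuery("orgId:someOrg");    // filter queries are cached independently of q
        query.setParam(GroupParams.GROUP, true);
        query.setParam(GroupParams.GROUP_FIELD, "categoryId");
        query.setRows(1000);
        return query;
    }
}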


Re: Matching all documents in the index

2011-12-13 Thread Kissue Kissue
Hi Simon,

Thanks for this. Query time dramatically reduced to 27ms with this.

Many thanks.

On Tue, Dec 13, 2011 at 4:20 PM, Simon Willnauer 
simon.willna...@googlemail.com wrote:

 try *:* instead of *.*

 simon

 On Tue, Dec 13, 2011 at 5:03 PM, Kissue Kissue kissue...@gmail.com
 wrote:
  Hi,
 
  I have come across this query in the admin interface: *.*
  Is this meant to match all documents in my index?
 
  Currently when i run query with q= *.*, numFound is 130310 but the
 actuall
  number of documents in my index is 603308.
  Shen i then run the query with q = *  then numFound is 603308 which is
 the
  total number of documents in my index.
 
  So what is the difference between query with q = *.*  and q = * ?
 
  I ran into this problem because i have a particular scenario where in my
  index where i have a field called categoryId which i am grouping on and
  another field called orgId which i then filter on. So i do grouping on
  categoryId but on all documents in the index matching the filter query
  field. I use q = *.* but this dosen't give me the true picture as
  highlighted above. So i use q = * and this works fine but takes about
  2900ms to execute. Is this efficient? Is there a better way to do
 something
  like this?
 
  Solr version = 3.5
 
  Thanks.



Solr Load Testing

2011-12-12 Thread Kissue Kissue
Hi,

I ran some jmeter load testing on my solr instance version 3.5.0 running on
tomcat 6.6.29 using 1000 concurrent users and the error below is thrown
after a certain number of requests. My solr configuration is basically the
default configuration at this time. Has anybody done something similar?
Should solr be able to handle 1000 concurrent users based on the default
configuration? Any ideas let me know. Thanks.

12-Dec-2011 15:56:02 org.apache.solr.common.SolrException log
SEVERE: ClientAbortException:  java.io.IOException
at
org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:319)
at
org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:288)
at
org.apache.catalina.connector.CoyoteOutputStream.flush(CoyoteOutputStream.java:98)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:278)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
at org.apache.solr.common.util.FastWriter.flush(FastWriter.java:115)
at
org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:344)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:265)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at
org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:861)
at
org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:579)
at
org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1584)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.io.IOException
at
org.apache.coyote.http11.InternalAprOutputBuffer.flushBuffer(InternalAprOutputBuffer.java:696)
at
org.apache.coyote.http11.InternalAprOutputBuffer.flush(InternalAprOutputBuffer.java:284)
at
org.apache.coyote.http11.Http11AprProcessor.action(Http11AprProcessor.java:1016)
at org.apache.coyote.Response.action(Response.java:183)
at
org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:314)
... 20 more


Re: Lucene version error after migrating to Solr 3.5

2011-12-08 Thread Kissue Kissue
After migrating to Solr 3.5, i restart tomcat and i get the error below.
Any ideas what i am doing wrong?

SEVERE: org.apache.solr.common.SolrException: Invalid luceneMatchVersion
'LUCENE_35', valid values are: [LUCENE_20, LUCENE_21, LUCENE_22, LU
CENE_23, LUCENE_24, LUCENE_29, LUCENE_30, LUCENE_31, LUCENE_32, LUCENE_33,
LUCENE_CURRENT] or a string in format 'V.V'
at
org.apache.solr.core.Config.parseLuceneVersionString(Config.java:353)
at org.apache.solr.core.Config.getLuceneVersion(Config.java:337)
at org.apache.solr.core.SolrConfig.init(SolrConfig.java:133)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:435)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:316)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:207)
at
org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:130)
at
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:94)
at
org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:295)
at
org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:422)
at
org.apache.catalina.core.ApplicationFilterConfig.init(ApplicationFilterConfig.java:115)
at
org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4001)
at
org.apache.catalina.core.StandardContext.start(StandardContext.java:4651)
at
org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
at
org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
at
org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546)
at
org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:637)
at
org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:563)
at
org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:498)
at
org.apache.catalina.startup.HostConfig.start(HostConfig.java:1277)
at
org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:321)
at
org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
at
org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
at
org.apache.catalina.core.StandardHost.start(StandardHost.java:785)
at
org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
at
org.apache.catalina.core.StandardEngine.start(StandardEngine.java:445)
at
org.apache.catalina.core.StandardService.start(StandardService.java:519)
at
org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by: java.lang.IllegalArgumentException: No enum const class
org.apache.lucene.util.Version.LUCENE_35
at java.lang.Enum.valueOf(Enum.java:196)
at org.apache.lucene.util.Version.valueOf(Version.java:32)
at
org.apache.solr.core.Config.parseLuceneVersionString(Config.java:351)
... 34 more


solr.VelocityResponseWriter error in version 3.5.0

2011-12-08 Thread Kissue Kissue
I just migrated to Solr 3.5 and whenever i start it up i get the error
below. Any ideas what might be wrong? Previously i didn't have to do
anything special to get it to work. Has anything changed in solr 3.5?


08-Dec-2011 10:45:03 org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: Error loading class
'solr.VelocityResponseWriter'
at
org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:389)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:425)
at
org.apache.solr.core.SolrCore.createInitInstance(SolrCore.java:447)
at org.apache.solr.core.SolrCore.initPlugins(SolrCore.java:1556)
at org.apache.solr.core.SolrCore.initPlugins(SolrCore.java:1550)
at org.apache.solr.core.SolrCore.initPlugins(SolrCore.java:1583)
at org.apache.solr.core.SolrCore.initWriters(SolrCore.java:1466)
at org.apache.solr.core.SolrCore.init(SolrCore.java:556)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:463)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:316)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:207)
at
org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:130)
at
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:94)
at
org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:295)
at
org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:422)
at
org.apache.catalina.core.ApplicationFilterConfig.init(ApplicationFilterConfig.java:115)
at
org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4001)
at
org.apache.catalina.core.StandardContext.start(StandardContext.java:4651)
at
org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
at
org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
at
org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546)
at
org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:637)
at
org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:563)
at
org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:498)
at
org.apache.catalina.startup.HostConfig.start(HostConfig.java:1277)
at
org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:321)
at
org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
at
org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
at
org.apache.catalina.core.StandardHost.start(StandardHost.java:785)
at
org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
at
org.apache.catalina.core.StandardEngine.start(StandardEngine.java:445)
at
org.apache.catalina.core.StandardService.start(StandardService.java:519)
at
org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by: java.lang.ClassNotFoundException: solr.VelocityResponseWriter
at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:303)
at java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:592)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:316)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:247)
at
org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:373)
... 39 more


Re: solr.VelocityResponseWriter error in version 3.5.0

2011-12-08 Thread Kissue Kissue
Thanks Marcus. To resolve this problem i just added a shared lib folder for
my cores and added the velocity jars in this folder and that resolved the
error. I hope it was the right thing to do though.

Thanks.


On Thu, Dec 8, 2011 at 11:20 AM, Markus Jelsma
markus.jel...@openindex.io wrote:

 From the changelog:


 187
 * SOLR-2588: Moved VelocityResponseWriter back to contrib module in order
 to
 188
 remove it as a mandatory core dependency. (Erik Hatcher)




 http://svn.apache.org/viewvc/lucene/dev/branches/branch_3x/solr/CHANGES.txt?view=markup



  I just migrated to Solr 3.5 and whenever i start it up i get the error
  below. Any ideas what might be wrong? Previously i didn't have to do
  anything special to get it to work. HAs anything changed in solr 3.5?
 
 
  08-Dec-2011 10:45:03 org.apache.solr.common.SolrException log
  SEVERE: org.apache.solr.common.SolrException: Error loading class
  'solr.VelocityResponseWriter'
  at
 
 org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:3
  89) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:425) at
  org.apache.solr.core.SolrCore.createInitInstance(SolrCore.java:447)
  at org.apache.solr.core.SolrCore.initPlugins(SolrCore.java:1556)
  at org.apache.solr.core.SolrCore.initPlugins(SolrCore.java:1550)
  at org.apache.solr.core.SolrCore.initPlugins(SolrCore.java:1583)
  at org.apache.solr.core.SolrCore.initWriters(SolrCore.java:1466)
  at org.apache.solr.core.SolrCore.init(SolrCore.java:556)
  at
  org.apache.solr.core.CoreContainer.create(CoreContainer.java:463) at
  org.apache.solr.core.CoreContainer.load(CoreContainer.java:316) at
  org.apache.solr.core.CoreContainer.load(CoreContainer.java:207) at
 
 org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.jav
  a:130) at
 
 org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:94)
  at
 
 org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilte
  rConfig.java:295) at
 
 org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFi
  lterConfig.java:422) at
 
 org.apache.catalina.core.ApplicationFilterConfig.init(ApplicationFilterCo
  nfig.java:115) at
 
 org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4
  001) at
  org.apache.catalina.core.StandardContext.start(StandardContext.java:4651)
  at
 
 org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:
  791) at
  org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
  at
  org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546)
  at
 
 org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:637
  ) at
 
 org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:56
  3) at
  org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:498)
  at
  org.apache.catalina.startup.HostConfig.start(HostConfig.java:1277)
  at
 
 org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:321)
  at
 
 org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSuppo
  rt.java:119) at
  org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
  at
  org.apache.catalina.core.StandardHost.start(StandardHost.java:785)
  at
  org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
  at
  org.apache.catalina.core.StandardEngine.start(StandardEngine.java:445)
  at
  org.apache.catalina.core.StandardService.start(StandardService.java:519)
  at
  org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
  at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at
 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:3
  9) at
 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImp
  l.java:25) at java.lang.reflect.Method.invoke(Method.java:597)
  at
 org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
  at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
  Caused by: java.lang.ClassNotFoundException: solr.VelocityResponseWriter
  at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
  at java.security.AccessController.doPrivileged(Native Method)
  at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:303)
  at
  java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:592) at
  java.lang.ClassLoader.loadClass(ClassLoader.java:248)
  at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:316)
  at java.lang.Class.forName0(Native Method)
  at java.lang.Class.forName(Class.java:247)
  at
 
 

Field collapsing results caching

2011-12-08 Thread Kissue Kissue
Hi,

I was just testing field collapsing from the Solr admin on Solr 3.5.0. I have
observed that the results of field collapsing are not being cached, unlike
other Solr query results. I am running the same query multiple times and the
time taken remains approximately the same. Is there something I need
to configure?

Thanks.


Difference between field collapsing and result grouping

2011-12-07 Thread Kissue Kissue
Sorry if this question sounds stupid, but I am really confused about
this. Is there actually a difference between field collapsing and result
grouping in Solr?

I have come across articles that describe setting up field collapsing with
parameters that look different from the grouping ones, while in other places
I read that they are the same thing.

Any clarifications would be much appreciated.

Thanks.


Using result grouping with SolrJ

2011-12-07 Thread Kissue Kissue
Hi,

I am using Solr 3.3 with SolrJ. Does anybody know how i can use result
grouping with SolrJ? Particularly how i can retrieve the result grouping
results with SolrJ?

Any help will be much appreciated.

Thanks.


Re: Using result grouping with SolrJ

2011-12-07 Thread Kissue Kissue
Thanks Juan. I guess i have found my reason to migrate to 3.4.

Many thanks.
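
For anyone else who finds this thread: once on 3.4, the SolrJ side looks roughly
like the sketch below. I have not run this yet, and the field name is just a
placeholder.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.Group;
import org.apache.solr.client.solrj.response.GroupCommand;
import org.apache.solr.client.solrj.response.GroupResponse;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocumentList;

SolrServer solrServer = new CommonsHttpSolrServer("http://localhost:8080/solr"); // placeholder URL

SolrQuery query = new SolrQuery("*:*");
query.setParam("group", true);
query.setParam("group.field", "supplierName");          // placeholder field

QueryResponse response = solrServer.query(query);
GroupResponse groupResponse = response.getGroupResponse();
for (GroupCommand command : groupResponse.getValues()) {
    for (Group group : command.getValues()) {
        String groupValue = group.getGroupValue();        // the value grouped on
        SolrDocumentList docsInGroup = group.getResult(); // top docs for this group
    }
}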

On Wed, Dec 7, 2011 at 7:43 PM, Juan Grande juan.gra...@gmail.com wrote:

 Hi Kissue,

 Support for grouping on SolrJ was added in Solr 3.4, see
 https://issues.apache.org/jira/browse/SOLR-2637

 In previous versions you can access the grouping results by simply
 traversing the various named lists.

 *Juan*



 On Wed, Dec 7, 2011 at 1:22 PM, Kissue Kissue kissue...@gmail.com wrote:

  Hi,
 
  I am using Solr 3.3 with SolrJ. Does anybody know how i can use result
  grouping with SolrJ? Particularly how i can retrieve the result grouping
  results with SolrJ?
 
  Any help will be much appreciated.
 
  Thanks.
 



Still too many files after running solr optimization

2011-09-28 Thread Kissue Kissue
Hi,

I am using Solr 3.3. I noticed that after indexing about 700,000 records
and running optimization at the end, I still have about 91 files in my index
directory. I thought that optimization was supposed to reduce the number of
files.

My settings are the defaults that came with Solr (mergeFactor, etc.).

Any ideas what i could be doing wrong?
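
For context, the optimize is nothing special - something along the lines of the
sketch below (the URL is a placeholder for my real one):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

SolrServer server = new CommonsHttpSolrServer("http://localhost:8080/solr"); // placeholder URL
// ... index the documents ...
server.commit();
server.optimize();   // same as sending an optimize command to /update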


Re: Still too many files after running solr optimization

2011-09-28 Thread Kissue Kissue
numDocs and maxDocs are same size.

I was worried because when I used to use plain Lucene for the same indexing,
there were many files before optimization but after optimization I always ended
up with just 3 files in my index folder. I just want to find out if this is
ok.

Thanks

On Wed, Sep 28, 2011 at 1:23 PM, Vadim Kisselmann 
v.kisselm...@googlemail.com wrote:

 Why should the optimization reduce the number of files?
 That only happens when you index docs with the same unique key.

 Do you have differences in numDocs and maxDocs after optimize?
 If yes:
 how does your optimize command look?

 Regards
 Vadim



 2011/9/28 Manish Bafna manish.bafna...@gmail.com

  Try to do optimize twice.
  The 2nd one will be quick and will delete lot of files.
 
  On Wed, Sep 28, 2011 at 5:26 PM, Kissue Kissue kissue...@gmail.com
  wrote:
   Hi,
  
   I am using solr 3.3. I noticed  that after indexing about 700, 000
  records
   and running optimization at the end, i still have about 91 files in my
  index
   directory. I thought that optimization was supposed to reduce the
 number
  of
   files.
  
   My settings are the default that came with Solr (mergefactor, etc)
  
   Any ideas what i could be doing wrong?
  
 



Bad Request accessing solr on linux

2011-09-22 Thread Kissue Kissue
Hi,

I am using Solr 3.3 running on a Linux box. For some reason, when I make the
request to Solr from my Windows box I do not get a bad request error, but when I
run it on my Linux box I get a bad request. On the Linux box, I have both my
application and Solr deployed on the same Tomcat instance.

Below is the error:

Bad Request

request:
http://172.16.2.26:8080/solr/catalogue/select?q=paperrows=10start=0fl=*,scorefq={!tag=catalogueId}catalogueId:
Angle Springs PricingcatalogueId: Edmundsons Electrical LtdcatalogueId:
Edmundsons Lamps and TubescatalogueId: fisher-punchoutcatalogueId:
Freds pricescatalogueId: Getech-keele-punchoutcatalogueId:
id001catalogueId: ID-001catalogueId: ID-1001catalogueId:
ID-1003catalogueId: Insight-punchoutcatalogueId:
lyrecouk123catalogueId: onecall19catalogueId: QC Supplies -
PricescatalogueId: RS-punchoutcatalogueId: Sigma-punchoutcatalogueId:
SLS-punchoutcatalogueId: Spring PersonnelcatalogueId:
supplies-team-punchoutcatalogueId: The BSS Group PLCcatalogueId: Tower
Supplies - PricingcatalogueId:
xma013hl=truehl.snippets=1wt=javabinversion=2

Any opinion on what is wrong with the request?

Thanks.
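
In case it is relevant, I am thinking of building that filter from SolrJ instead
so that quoting and URL encoding are handled for me - a rough sketch, with the
catalogue names as placeholders:

import org.apache.solr.client.solrj.SolrQuery;

String[] catalogueIds = { "Angle Springs Pricing", "Edmundsons Electrical Ltd" }; // placeholders
StringBuilder fq = new StringBuilder("{!tag=catalogueId}");
for (int i = 0; i < catalogueIds.length; i++) {
    if (i > 0) fq.append(" OR ");
    // quote each value so multi-word catalogue names stay one clause
    fq.append("catalogueId:\"").append(catalogueIds[i]).append('"');
}
SolrQuery query = new SolrQuery("paper");
query.addFilterQuery(fq.toString());
query.setStart(0);
query.setRows(10);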


Problem using EdgeNGram

2011-09-21 Thread Kissue Kissue
Hi,

I am using Solr 3.3 with SolrJ. I am trying to use EdgeNGram to power an
auto-suggest feature in my application. My understanding is that using EdgeNGram
would mean that results are only returned for records starting with the
search term, but this is not happening for me.

For example, if I search for tr, I get results like the following:

Greenham Trading 6
IT Training Publications
AA Training

Below are details of my configuration:

<fieldType name="edgytext" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="15"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

<field name="businessName" type="edgytext" indexed="true" stored="true"
       required="true" omitNorms="true" omitTermFreqAndPositions="true"/>

Any ideas why this is happening will be much appreciated.

Thanks.


JSON response with SolrJ

2011-09-21 Thread Kissue Kissue
Hi,

I am using Solr 3.3 with SolrJ. Does anybody have any idea how I can
retrieve a JSON response with SolrJ? Is it possible? It seems to be more
focused on XML and beans.

Thanks.
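
The fallback I am considering is to bypass SolrJ's response parsing for this
call and hit the select handler directly with wt=json - a rough sketch, with
host and query as placeholders:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;

String q = URLEncoder.encode("ipod", "UTF-8");                                // placeholder query
URL url = new URL("http://localhost:8080/solr/select?q=" + q + "&wt=json");  // placeholder host
BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream(), "UTF-8"));
StringBuilder json = new StringBuilder();
String line;
while ((line = in.readLine()) != null) {
    json.append(line);
}
in.close();
System.out.println(json.toString());   // raw JSON body from Solr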


BigDecimal data type

2011-09-14 Thread Kissue Kissue
Hi,

Is there a way to use BigDecimal as a data type in solr? I am using solr
3.3.

Thanks.


Re: DIH primary key

2011-09-05 Thread Kissue Kissue
Thanks for replying. Unfortunately the table I need to import from is a view
and there is no unique key in there I can use as a primary key. How does
this affect my use of DIH? Does it mean I cannot use DIH?

Thanks.



On Sun, Sep 4, 2011 at 8:44 PM, Shawn Heisey s...@elyograg.org wrote:

 On 9/4/2011 12:16 PM, Kissue Kissue wrote:

 I was reading about DIH on the this Wiki link :
 http://wiki.apache.org/solr/**DataImportHandler#A_shorter_**data-confighttp://wiki.apache.org/solr/DataImportHandler#A_shorter_data-config
 The following was said about entity primary key: is *optional* and only
 needed when using delta-imports. Does this mean that the primary key is
 mandatory for delta imports? I am asking because i am going to be
 importing
 from a view with no primary key.


 I believe what it means is that you have to specify a field to be the
 primary key, and that it must exist in all three queries that you defined -
 query, deltaQuery and deltaImportQuery.  In my case, query and
 deltaImportQuery are identical, and deltaQuery is SELECT 1 AS did.  The
 only thing this query does is tell the DIH that there is something to do for
 a delta-import, which it then uses deltaImportQuery to do.  I keep track of
 which documents are new outside of Solr and pass values for the query in via
 the dataimport URL.

 As you might surmise, did is the primary key in my dataimport config file.
  I couldn't say what would happen if your query results have duplicate
 values in the primary key field.  In my case, did actually is the primary
 key in the database, but I don't think that's required.  I use different
 fields for primary key and uniqueKey.  This allows us a little extra
 flexibility in the index.

 Hopefully you do still have a field that is unique (even if it's not a
 primary key) that you can use as the primary key in your config file.  It's
 a good idea to have such a thing available to serve as the uniqueKey in
 schema.xml, for automatic overwrites (delete and reinsert) of documents that
 change.

 Thanks,
 Shawn




Automatically generating unique key

2011-09-04 Thread Kissue Kissue
Hi,

Please does anybody know what configurations i need to have in order for
Solr to generate the unique key automatically? I am using solr 3.3.0. I have
the following fieldtype:

<fieldType name="id_uuid" class="solr.UUIDField" indexed="true" required="true"/>

Thanks.


Re: Automatically generating unique key

2011-09-04 Thread Kissue Kissue
Sorry i found the solution. Many thanks.

On Sun, Sep 4, 2011 at 5:39 PM, Kissue Kissue kissue...@gmail.com wrote:

 Hi,

 Please does anybody know what configurations i need to have in order for
 Solr to generate the unique key automatically? I am using solr 3.3.0. I have
 the following fieldtype:

 fieldType name=id_uuid class=solr.UUIDField indexed=true
 required=true/

 Thanks.



DIH primary key

2011-09-04 Thread Kissue Kissue
Hi,

I was reading about DIH on the this Wiki link :
http://wiki.apache.org/solr/DataImportHandler#A_shorter_data-config
The following was said about entity primary key: is *optional* and only
needed when using delta-imports. Does this mean that the primary key is
mandatory for delta imports? I am asking because i am going to be importing
from a view with no primary key.

Thanks.


java.lang.Exception: Not Implemented

2011-09-02 Thread Kissue Kissue
Hi,

I am using apache solr 3.3.0 with SolrJ on a linux box.

I am getting the error below when indexing kicks in:

2011-09-02 10:35:01,617 ERROR
[org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer] - error
java.lang.Exception: Not Implemented

Does anybody have any idea why this error maybe coming up?

Thanks.


Using SolrJ over HTTPS

2011-09-02 Thread Kissue Kissue
I am using SolrJ with Solr 3.3.0 over HTTPS and getting the following
exception:

2011-09-02 12:42:08,111 ERROR
[org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer] - error
java.lang.Exception: Not Implemented

Just wanted to find out if there is anything special i need to do in order
to use solrJ over HTTPS?

Thanks.


Re: Using SolrJ over HTTPS

2011-09-02 Thread Kissue Kissue
Hi Simon,

Thanks for your reply. I investigated this further and discovered that the
actual error was:

2011-09-02 12:42:06,673 ERROR
[org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer] - error
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at
java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at
com.sun.net.ssl.internal.ssl.OutputRecord.writeBuffer(OutputRecord.java:295)
at
com.sun.net.ssl.internal.ssl.OutputRecord.write(OutputRecord.java:284)
at
com.sun.net.ssl.internal.ssl.SSLSocketImpl.writeRecordInternal(SSLSocketImpl.java:734)
at
com.sun.net.ssl.internal.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:722)
at
com.sun.net.ssl.internal.ssl.AppOutputStream.write(AppOutputStream.java:59)
at
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
at
org.apache.commons.httpclient.ChunkedOutputStream.flush(ChunkedOutputStream.java:191)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:278)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
at
org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer$Runner$1.writeRequest(StreamingUpdateSolrServer.java:137)
at
org.apache.commons.httpclient.methods.EntityEnclosingMethod.writeRequestBody(EntityEnclosingMethod.java:499)
at
org.apache.commons.httpclient.HttpMethodBase.writeRequest(HttpMethodBase.java:2114)
at
org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1096)
at
org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
at
org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at
org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at
org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
at
org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer$Runner.run(StreamingUpdateSolrServer.java:154)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)

For some reason, our application was connecting to the Solr instance over
the internet via HTTPS, and this was throwing the above error. When we
changed the URL so that our application communicates with Solr locally over
HTTP, the error disappeared.

Thanks.
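
For completeness, the change was essentially just pointing the client at the
local HTTP URL - something along these lines (the URL, queue size and thread
count below are placeholders):

import org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer;

StreamingUpdateSolrServer server =
    new StreamingUpdateSolrServer("http://localhost:8080/solr", 20, 4);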

On Fri, Sep 2, 2011 at 3:09 PM, simon mtnes...@gmail.com wrote:

 Not sure about the exact reason for the error. However, there's a related
 email thread today with a code fragment that you might find useful -- see


 http://www.lucidimagination.com/search/document/a553f89beb41e39a/how_to_use_solrj_self_signed_cert_ssl_basic_auth#a553f89beb41e39a

 -Simon

 On Fri, Sep 2, 2011 at 7:53 AM, Kissue Kissue kissue...@gmail.com wrote:

  I am using SolrJ with Solr 3.3.0 over HTTPS and getting the following
  exception:
 
  2011-09-02 12:42:08,111 ERROR
  [org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer] - error
  java.lang.Exception: Not Implemented
 
  Just wanted to find out if there is anything special i need to do in
 order
  to use solrJ over HTTPS?
 
  Thanks.
 



Re: java.lang.Exception: Not Implemented

2011-09-02 Thread Kissue Kissue
Hi,

Thanks for your reply. I investigated this further and discovered that the
actual error was:

2011-09-02 12:42:06,673 ERROR
[org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer] - error
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at
java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at
com.sun.net.ssl.internal.ssl.OutputRecord.writeBuffer(OutputRecord.java:295)
at
com.sun.net.ssl.internal.ssl.OutputRecord.write(OutputRecord.java:284)
at
com.sun.net.ssl.internal.ssl.SSLSocketImpl.writeRecordInternal(SSLSocketImpl.java:734)
at
com.sun.net.ssl.internal.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:722)
at
com.sun.net.ssl.internal.ssl.AppOutputStream.write(AppOutputStream.java:59)
at
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
at
org.apache.commons.httpclient.ChunkedOutputStream.flush(ChunkedOutputStream.java:191)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:278)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
at
org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer$Runner$1.writeRequest(StreamingUpdateSolrServer.java:137)
at
org.apache.commons.httpclient.methods.EntityEnclosingMethod.writeRequestBody(EntityEnclosingMethod.java:499)
at
org.apache.commons.httpclient.HttpMethodBase.writeRequest(HttpMethodBase.java:2114)
at
org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1096)
at
org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
at
org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at
org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at
org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
at
org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer$Runner.run(StreamingUpdateSolrServer.java:154)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)

For some reason, our application was connecting to the solr instance over
the internet over HTTPS and this was throwing the above error. When we
changed the URL so our application could communicate locally with solr over
HTTP, the error disappeared.

Thanks.

On Fri, Sep 2, 2011 at 3:11 PM, simon mtnes...@gmail.com wrote:

 You need to give us more information. The code which throws this exception
 will be most helpful.

 -Simon

 On Fri, Sep 2, 2011 at 5:43 AM, Kissue Kissue kissue...@gmail.com wrote:

  Hi,
 
  I am using apache solr 3.3.0 with SolrJ on a linux box.
 
  I am getting the error below when indexing kicks in:
 
  2011-09-02 10:35:01,617 ERROR
  [org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer] - error
  java.lang.Exception: Not Implemented
 
  Does anybody have any idea why this error maybe coming up?
 
  Thanks.
 



Scoring using POJO/SolrJ

2011-08-08 Thread Kissue Kissue
Hi,

I am using the SolrJ client library and using a POJO with the @Field
annotation to index documents and to retrieve documents from the index. I
retrieve the documents from the index like so:

List<Item> beans = response.getBeans(Item.class);

Now, in order to add the scores to the beans, I added a field called score
with the @Field annotation, and the scores were then returned when I read
from the index.

However, when I am indexing I get the error: ERROR:unknown field 'score'. I
guess this is because it expects score to be defined in my schema. I am
thinking that if I define this field in my schema, then rather than returning
the document scores it might just return the actual stored values for the
field (null if I don't add a value).

How can I get around this problem?

Many thanks.
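
One workaround I am considering is to keep score out of the POJO entirely and
read it from the raw documents that come back alongside the beans - a rough
sketch, untested (queryString and the existing solrServer are placeholders):

import java.util.List;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocumentList;

SolrQuery query = new SolrQuery(queryString);    // queryString is a placeholder
query.addField("*");
query.addField("score");
QueryResponse response = solrServer.query(query);

List<Item> beans = response.getBeans(Item.class);   // Item has no score field
SolrDocumentList docs = response.getResults();
for (int i = 0; i < docs.size(); i++) {
    Float score = (Float) docs.get(i).getFieldValue("score");
    // beans.get(i) is the same document as docs.get(i)
}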


Re: highlight on prefix query

2011-08-06 Thread Kissue Kissue
I think this is correct behaviour. If you go to google and search for Tel,
you will see that telephone is highlighted.

On Fri, Aug 5, 2011 at 5:42 PM, Ahmed Boubaker
abdeka.boubake...@gmail.comwrote:

 Hi,

 I am using solr 3 and highlighting is working fine.  However when using
 prefix query like tel*, the highlighter highlights the whole matching words
 (i.e. television, telephone, ...).  I am highlighting a very short field
 (3~5 words length).

 How can I prevent the highlighter from doing so?  I want to get only the
 prefix of these words highlighted (i.e. <em>tel</em>evision,
 <em>tel</em>ephone, ...), any solution or idea?

 Many thanks for your help,

 Boubaker



Re: Minimum Score

2011-08-05 Thread Kissue Kissue
But that would mean returning all the results without pagination, which I
don't want to do. I am looking for a way to do it without having to return
all the results at once.

Thanks.
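
One idea I may try instead is a second query with rows=1 sorted by score
ascending, and read that document's score as the minimum - a rough sketch,
untested (queryString and solrServer are placeholders for my existing ones):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;

SolrQuery minQuery = new SolrQuery(queryString);   // same query as the main search
minQuery.setRows(1);
minQuery.setIncludeScore(true);
minQuery.addSortField("score", SolrQuery.ORDER.asc);
QueryResponse minResponse = solrServer.query(minQuery);
// assumes at least one hit; check numFound first in real code
Float minScore = (Float) minResponse.getResults().get(0).getFieldValue("score");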

On Thu, Aug 4, 2011 at 11:18 PM, Darren Govoni dar...@ontrenet.com wrote:

 Off the top of my head, maybe you can get the number of results and then
 look at the last document and check its score. I believe the results will
 be ordered by score?


 On 08/04/2011 05:44 PM, Kissue Kissue wrote:

 Hi,

 I am using Solr 3.1 with the SolrJ client library. I can see that it is
 possible to get the maximum score for your search by using the following:

 response.getResults().**getMaxScore()

 I am wondering is there some simple solution to get the minimum score?

 Many thanks.





Re: Minimum Score

2011-08-04 Thread Kissue Kissue
Hi,

I am using Solr 3.1 with the SolrJ client library. I can see that it is
possible to get the maximum score for your search by using the following:

response.getResults().getMaxScore()

I am wondering is there some simple solution to get the minimum score?

Many thanks.


Problem with Filter Query

2011-07-14 Thread Kissue Kissue
Hi,

I am using Solr 3.1 with SolrJ. I have a field called supplierName in my
index which I am trying to filter on. When I select about 5 suppliers
to filter on at the same time and use their supplier names to construct a
filter query, I do not get any results, but when I filter with each
individual supplier name I get the required results.

Here is the line of code that I used to construct the filter query:

solrQuery.setParam("fq", arrayOfSupplierNames);

The supplier name field is stored as a string in the index and here is the
config for the string type from my schema.xml file:

<!-- The StrField type is not analyzed, but indexed/stored verbatim. -->
<fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>

Any help why this is happening will be much appreciated.

Thanks.


Re: Problem with Filter Query

2011-07-14 Thread Kissue Kissue
Thanks for your response.

Actually the elements are composed as follows:
fq=first&fq=second

But using the Solr admin query screen I have modified the query to:
fq=supplierName:first&fq=supplierName:second
and I still get the same results.

I will try to use solrQuery.addFilterQuery(arrayOfSupplierNames) like you
suggested and see how it goes.

Thanks.


On Thu, Jul 14, 2011 at 2:49 PM, Edoardo Tosca e.to...@sourcesense.comwrote:

 Hi,
 have you tried with:
 solrQuery.addFilterQuery(arrayOfSupplierNames) ?

 other question, is every element of your array composed in this way:
 supplierName:FIRST
 supplierName:SECOND
 etc..

 HTH
 edo

 On Thu, Jul 14, 2011 at 2:18 PM, Kissue Kissue kissue...@gmail.com
 wrote:

  Hi,
 
  I am using Solr 3.1 with SolrJ. I have a field called supplierName in my
  index which i am trying to do filtering on. When i select about 5
 suppliers
  to filter on at the same time and use their supplier name to contruct a
  filter query i do not get any results but when i filter which each
  individual supplier name i get the required results.
 
  Here is the line code to that i used to contruct the filter query:
 
  *solrQuery.setParam(fq, arrayOfSupplierNames);
 
  *The supplier name field is stored as a string in the index and here is
 the
  config for the string type from my schema.xml file:
 
  !-- The StrField type is not analyzed, but indexed/stored verbatim. --
 fieldType name=string class=solr.StrField sortMissingLast=true
  omitNorms=true/
 
  Any help why this is happening will be much appreciated.
 
  Thanks.
 



 --
 Edoardo Tosca
 Sourcesense - making sense of Open Source: http://www.sourcesense.com



Re: Problem with Filter Query

2011-07-14 Thread Kissue Kissue
No, it's not a multivalued field. Yes, I can see that it looks like it's doing
an AND across all the filter values, but how can I get it to do an OR?
I just want it to return documents that have any of the supplied values as
their supplier name.

I have also tried: solrQuery.addFilterQuery(arrayOfSupplierNames) and i get
no results too.

Thanks.

On Thu, Jul 14, 2011 at 3:06 PM, Edoardo Tosca e.to...@sourcesense.comwrote:

 So with
 fq=supplierName:first&fq=supplierName:second
 you don't get any results?

 is this field a multivalue?
 Multiple FQs are evaluated as AND
 so your document must have in supplierName both first and second

 Edo


 On Thu, Jul 14, 2011 at 3:00 PM, Kissue Kissue kissue...@gmail.com
 wrote:

  Thanks for your response.
 
  Actually the elements are composed as follows:
  fq=firstfq=second
 
  But using Solr admin query screen i have modified the query to:
  fq=supplierName:firstfq=supplierName:second
  i still get the same results.
 
  I will try to use solrQuery.addFilterQuery(arrayOfSupplierNames) like you
  suggested and see how it goes.
 
  Thanks.
 
 
  On Thu, Jul 14, 2011 at 2:49 PM, Edoardo Tosca e.to...@sourcesense.com
  wrote:
 
   Hi,
   have you tried with:
   solrQuery.addFilterQuery(arrayOfSupplierNames) ?
  
   other question, is every element of your array composed in this way:
   supplierName:FIRST
   supplierName:SECOND
   etc..
  
   HTH
   edo
  
   On Thu, Jul 14, 2011 at 2:18 PM, Kissue Kissue kissue...@gmail.com
   wrote:
  
Hi,
   
I am using Solr 3.1 with SolrJ. I have a field called supplierName in
  my
index which i am trying to do filtering on. When i select about 5
   suppliers
to filter on at the same time and use their supplier name to contruct
 a
filter query i do not get any results but when i filter which each
individual supplier name i get the required results.
   
Here is the line code to that i used to contruct the filter query:
   
*solrQuery.setParam(fq, arrayOfSupplierNames);
   
*The supplier name field is stored as a string in the index and here
 is
   the
config for the string type from my schema.xml file:
   
!-- The StrField type is not analyzed, but indexed/stored verbatim.
  --
   fieldType name=string class=solr.StrField
  sortMissingLast=true
omitNorms=true/
   
Any help why this is happening will be much appreciated.
   
Thanks.
   
  
  
  
   --
   Edoardo Tosca
   Sourcesense - making sense of Open Source: http://www.sourcesense.com
  
 



 --
 Edoardo Tosca
 Sourcesense - making sense of Open Source: http://www.sourcesense.com



Re: Problem with Filter Query

2011-07-14 Thread Kissue Kissue
I have tried this but it is not working either. Thanks for your help.


On Thu, Jul 14, 2011 at 3:59 PM, Edoardo Tosca e.to...@sourcesense.comwrote:

 As far as i know if you add multiple FQs they will be joined always with
 AND.
 You can do something like
 fq={!q.op=OR df=supplierName}first second third ...

 HTH

 Edo


 On Thu, Jul 14, 2011 at 3:50 PM, Kissue Kissue kissue...@gmail.com
 wrote:

  No its not a multivalue field. Yes i can see that it looks like its doing
  an
  AND on all the filter values but how can i get it to do an OR?
  I just want it to return documents that have any of the supplied values
 as
  their supplier name.
 
  I have also tried: solrQuery.addFilterQuery(arrayOfSupplierNames) and i
 get
  no results too.
 
  Thanks.
 
  On Thu, Jul 14, 2011 at 3:06 PM, Edoardo Tosca e.to...@sourcesense.com
  wrote:
 
   So with
   fq=supplierName:firstfq=supplierName:second
   you don't get any results?
  
   is this field a multivalue?
   Mutliple FQs are evaluated as AND
   so your document must have in supplierName both first and second
  
   Edo
  
  
   On Thu, Jul 14, 2011 at 3:00 PM, Kissue Kissue kissue...@gmail.com
   wrote:
  
Thanks for your response.
   
Actually the elements are composed as follows:
fq=firstfq=second
   
But using Solr admin query screen i have modified the query to:
fq=supplierName:firstfq=supplierName:second
i still get the same results.
   
I will try to use solrQuery.addFilterQuery(arrayOfSupplierNames) like
  you
suggested and see how it goes.
   
Thanks.
   
   
On Thu, Jul 14, 2011 at 2:49 PM, Edoardo Tosca 
  e.to...@sourcesense.com
wrote:
   
 Hi,
 have you tried with:
 solrQuery.addFilterQuery(arrayOfSupplierNames) ?

 other question, is every element of your array composed in this
 way:
 supplierName:FIRST
 supplierName:SECOND
 etc..

 HTH
 edo

 On Thu, Jul 14, 2011 at 2:18 PM, Kissue Kissue 
 kissue...@gmail.com
 wrote:

  Hi,
 
  I am using Solr 3.1 with SolrJ. I have a field called
 supplierName
  in
my
  index which i am trying to do filtering on. When i select about 5
 suppliers
  to filter on at the same time and use their supplier name to
  contruct
   a
  filter query i do not get any results but when i filter which
 each
  individual supplier name i get the required results.
 
  Here is the line code to that i used to contruct the filter
 query:
 
  *solrQuery.setParam(fq, arrayOfSupplierNames);
 
  *The supplier name field is stored as a string in the index and
  here
   is
 the
  config for the string type from my schema.xml file:
 
  !-- The StrField type is not analyzed, but indexed/stored
  verbatim.
--
 fieldType name=string class=solr.StrField
sortMissingLast=true
  omitNorms=true/
 
  Any help why this is happening will be much appreciated.
 
  Thanks.
 



 --
 Edoardo Tosca
 Sourcesense - making sense of Open Source:
  http://www.sourcesense.com

   
  
  
  
   --
   Edoardo Tosca
   Sourcesense - making sense of Open Source: http://www.sourcesense.com
  
 



 --
 Edoardo Tosca
 Sourcesense - making sense of Open Source: http://www.sourcesense.com



Re: Problem with Filter Query

2011-07-14 Thread Kissue Kissue
I have eventually gotten it to work with the following:

fq=supplierName:first + supplierName:second + supplierName:third
...

Thanks.
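
For the record, this is roughly how the SolrJ side looks now - the supplier
names below are placeholders, and quoting matters for multi-word values:

String[] suppliers = { "First Supplier", "Second Supplier", "Third Supplier" }; // placeholders
StringBuilder fq = new StringBuilder();
for (int i = 0; i < suppliers.length; i++) {
    if (i > 0) fq.append(" OR ");
    fq.append("supplierName:\"").append(suppliers[i]).append('"');
}
solrQuery.addFilterQuery(fq.toString());   // one fq param, values OR'ed together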

On Thu, Jul 14, 2011 at 4:14 PM, Kissue Kissue kissue...@gmail.com wrote:

 I have tried this but not working too. Thanks for your help.



 On Thu, Jul 14, 2011 at 3:59 PM, Edoardo Tosca e.to...@sourcesense.comwrote:

 As far as i know if you add multiple FQs they will be joined always with
 AND.
 You can do something like
 fq={!q.op=OR df=supplierName}first second third ...

 HTH

 Edo


 On Thu, Jul 14, 2011 at 3:50 PM, Kissue Kissue kissue...@gmail.com
 wrote:

  No its not a multivalue field. Yes i can see that it looks like its
 doing
  an
  AND on all the filter values but how can i get it to do an OR?
  I just want it to return documents that have any of the supplied values
 as
  their supplier name.
 
  I have also tried: solrQuery.addFilterQuery(arrayOfSupplierNames) and i
 get
  no results too.
 
  Thanks.
 
  On Thu, Jul 14, 2011 at 3:06 PM, Edoardo Tosca e.to...@sourcesense.com
  wrote:
 
   So with
   fq=supplierName:firstfq=supplierName:second
   you don't get any results?
  
   is this field a multivalue?
   Mutliple FQs are evaluated as AND
   so your document must have in supplierName both first and second
  
   Edo
  
  
   On Thu, Jul 14, 2011 at 3:00 PM, Kissue Kissue kissue...@gmail.com
   wrote:
  
Thanks for your response.
   
Actually the elements are composed as follows:
fq=firstfq=second
   
But using Solr admin query screen i have modified the query to:
fq=supplierName:firstfq=supplierName:second
i still get the same results.
   
I will try to use solrQuery.addFilterQuery(arrayOfSupplierNames)
 like
  you
suggested and see how it goes.
   
Thanks.
   
   
On Thu, Jul 14, 2011 at 2:49 PM, Edoardo Tosca 
  e.to...@sourcesense.com
wrote:
   
 Hi,
 have you tried with:
 solrQuery.addFilterQuery(arrayOfSupplierNames) ?

 other question, is every element of your array composed in this
 way:
 supplierName:FIRST
 supplierName:SECOND
 etc..

 HTH
 edo

 On Thu, Jul 14, 2011 at 2:18 PM, Kissue Kissue 
 kissue...@gmail.com
 wrote:

  Hi,
 
  I am using Solr 3.1 with SolrJ. I have a field called
 supplierName
  in
my
  index which i am trying to do filtering on. When i select about
 5
 suppliers
  to filter on at the same time and use their supplier name to
  contruct
   a
  filter query i do not get any results but when i filter which
 each
  individual supplier name i get the required results.
 
  Here is the line code to that i used to contruct the filter
 query:
 
  *solrQuery.setParam(fq, arrayOfSupplierNames);
 
  *The supplier name field is stored as a string in the index and
  here
   is
 the
  config for the string type from my schema.xml file:
 
  !-- The StrField type is not analyzed, but indexed/stored
  verbatim.
--
 fieldType name=string class=solr.StrField
sortMissingLast=true
  omitNorms=true/
 
  Any help why this is happening will be much appreciated.
 
  Thanks.
 



 --
 Edoardo Tosca
 Sourcesense - making sense of Open Source:
  http://www.sourcesense.com

   
  
  
  
   --
   Edoardo Tosca
   Sourcesense - making sense of Open Source: http://www.sourcesense.com
  
 



 --
 Edoardo Tosca
 Sourcesense - making sense of Open Source: http://www.sourcesense.com





Returning total matched document count with SolrJ

2011-06-30 Thread Kissue Kissue
Hi,

I am using Solr 3.1 and using the SolrJ client. Does anyone know how i can
get the *TOTAL* number of matched documents returned with the QueryResponse?
I am interested in the total documents matched not just the result returned
with the limit applied. Any help will be appreciated.

Thanks.


Re: Returning total matched document count with SolrJ

2011-06-30 Thread Kissue Kissue
Thanks Michael. Quite helpful.
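
For anyone else reading later: numFound is the total match count regardless of
paging, while the returned list only holds the current page - for example (a
rough sketch, solrServer is a placeholder for the existing server):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocumentList;

SolrQuery query = new SolrQuery("*:*");  // placeholder query
query.setStart(0);
query.setRows(10);
QueryResponse response = solrServer.query(query);
SolrDocumentList docs = response.getResults();
long totalMatches = docs.getNumFound();  // total matches, ignoring start/rows
int returnedThisPage = docs.size();      // at most 10 here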

On Thu, Jun 30, 2011 at 4:06 PM, Michael Ryan mr...@moreover.com wrote:

 SolrDocumentList docs = queryResponse.getResults();
 long totalMatches = docs.getNumFound();

 -Michael



Applying boost factors at run time

2011-06-21 Thread Kissue Kissue
Hi,

I have the following situation:

1. I am using Solr 3.1
2. I am using the edismax query handler for my queries
3. I am using the SolrJ client library
4. Currently I have configured the fields I want to search on and the boost
factors in the Solr config.

But I have just been told that we would need the boost factors to be stored
in a database so that an admin can modify them as and when needed. So I want to
know if it is possible to set the boost factors at runtime for the fields,
using the values stored in the database, with SolrJ?

Thanks


Re: Applying boost factors at run time

2011-06-21 Thread Kissue Kissue
Many thanks for the tip. I will give it a go.
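
For reference, the plan is roughly the sketch below - loadBoostsFromDatabase()
and the field names are hypothetical:

import java.util.Map;
import org.apache.solr.client.solrj.SolrQuery;

Map<String, Float> boosts = loadBoostsFromDatabase();   // hypothetical helper, e.g. {title=2.0, description=0.5}
StringBuilder qf = new StringBuilder();
for (Map.Entry<String, Float> entry : boosts.entrySet()) {
    if (qf.length() > 0) qf.append(' ');
    qf.append(entry.getKey()).append('^').append(entry.getValue());
}
SolrQuery query = new SolrQuery(userQuery);              // userQuery is a placeholder
query.set("defType", "edismax");
query.set("qf", qf.toString());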


On Tue, Jun 21, 2011 at 11:48 AM, Ahmet Arslan iori...@yahoo.com wrote:



 --- On Tue, 6/21/11, Kissue Kissue kissue...@gmail.com wrote:

  From: Kissue Kissue kissue...@gmail.com
  Subject: Applying boost factors at run time
  To: solr-user@lucene.apache.org
  Date: Tuesday, June 21, 2011, 1:31 PM
  Hi,
 
  I have the following situation:
 
  1. I am using Solr 3.1
  2. I am using the edismax query handler for my queries
  3. I am using the SolrJ client library
  4. Currently i have configured the fields i want to search
  on and the bosst
  factors in solr config.
 
  But i have just been told that we would need the bosst
  factors to be stored
  in a database so that admin can modify them as at when
  needed. So i want to
  know if it is possible to set the boost factors at runtime
  for the fields
  using the values stored in the database using Solr J?

 Yes you can always override defaults - defined in solrconfig.xml - in every
 request.

 solrQuery.set("qf", "myField^newBoostFactor");



Re: Including Score in Solr POJO

2011-05-24 Thread Kissue Kissue
Hi Anuj,

Thanks for your response. I am actually doing a bean search, so I am doing the
following:

SolrQuery solrQuery = new SolrQuery(query);
QueryResponse response = solr.query(solrQuery);
List<Product> beans = response.getBeans(Product.class);

It is not immediately clear to me how to get the score doing a bean search.

Thanks.

On Mon, May 23, 2011 at 4:47 PM, Anuj Kumar anujs...@gmail.com wrote:

 Hi,

 On Mon, May 23, 2011 at 8:52 PM, Kissue Kissue kissue...@gmail.com
 wrote:

  Thanks Anuj for your reply. Would it then include it as a field in my
 POJO?
 

 I meant the score given by Solr in response to the search query. Is it an
 application specific score that you want to include?


  How do i define such field? I have a POJO with the @Field annotation
 which
  is mapped to fields in my schema.
 

 At the time of indexing, you need not specify the score. The score is
 calculated based on the relevance of the query against the matched
 documents. If you have an application specific score or weight that you
 want
 to add, you can add it as a separate field but what I understand from your
 query is that you want the score that Solr gives to each search results. In
 that case, just setting the property IncludeScore to true while
 constructing
 the query object (as shown in the example that I gave earlier) will
 suffice.

 From the query response, you can then query for the maximum score, as well
 as each document's score. For example-

 // get the response
 QueryResponse results = getSearchServer().query(query);
 // get the documents
 SolrDocumentList resultDocs = results.getResults();
 // get the maximum score
 float maxScore = resultDocs.getMaxScore();
 // iterate through the documents to see the results
 for(SolrDocument doc : resultDocs){
 // get the score
 Object score = doc.get("score");
 }

 Hope that helps.

 Regards,
 Anuj

 
  Thanks.
 
  On Mon, May 23, 2011 at 4:10 PM, Anuj Kumar anujs...@gmail.com wrote:
 
   Hi,
  
   If you mean SolrJ (as I understand by your description of POJOs), you
 can
   add the score by setting the property IncludeScore to true. For
 example-
  
   SolrQuery query = new SolrQuery().
  setQuery(keyword).
*setIncludeScore(true);*
  
   Regards,
   Anuj
  
   On Mon, May 23, 2011 at 8:31 PM, Kissue Kissue kissue...@gmail.com
   wrote:
  
Hi,
   
I am currently using Solr and indexing/reading my documents as POJO.
  The
question i have is how can i include the score in the POJO for each
document
found in the index?
   
Thanks.
   
  
 



Including Score in Solr POJO

2011-05-23 Thread Kissue Kissue
Hi,

I am currently using Solr and indexing/reading my documents as POJO. The
question i have is how can i include the score in the POJO for each document
found in the index?

Thanks.


Re: Including Score in Solr POJO

2011-05-23 Thread Kissue Kissue
Thanks Anuj for your reply. Would it then include it as a field in my POJO?
How do I define such a field? I have a POJO with the @Field annotation which
is mapped to fields in my schema.

Thanks.

On Mon, May 23, 2011 at 4:10 PM, Anuj Kumar anujs...@gmail.com wrote:

 Hi,

 If you mean SolrJ (as I understand by your description of POJOs), you can
 add the score by setting the property IncludeScore to true. For example-

 SolrQuery query = new SolrQuery().
     setQuery(keyword).
     setIncludeScore(true);

 Regards,
 Anuj

 On Mon, May 23, 2011 at 8:31 PM, Kissue Kissue kissue...@gmail.com
 wrote:

  Hi,
 
  I am currently using Solr and indexing/reading my documents as POJO. The
  question i have is how can i include the score in the POJO for each
  document
  found in the index?
 
  Thanks.