Re: solr search issue

2012-11-02 Thread Jack Krupansky
An "id" field simply uniquely identifies a row or document. It sounds like you
are trying to use "id" as if it were a "type" field. Generally, one would
not search the "id" field other than to select a specific row/document.


It is rather unhelpful to label a column/field as "data" - that conveys no
useful information to the reader. Pick a term that DESCRIBES the purpose of
the data.


Finally, be aware that the default operator is OR, so if your query has 
two or more terms (or is a composite term that gets split into two or more 
terms), then you will get ALL documents that match ANY of the terms. If you 
want ONLY the documents that match ALL of the terms, set the default query 
operator (q.op) to AND.
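
For example (an illustrative request only; the core and fields are taken from
the message below):
http://localhost:8983/solr/db/select/?defType=dismax&q=cashier%202&qf=data%20id&q.op=AND
With q.op=AND only documents matching both "cashier" and "2" are returned;
with the default q.op=OR, documents matching either term come back.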


-- Jack Krupansky

-Original Message- 
From: Romita Saha

Sent: Friday, November 02, 2012 4:01 AM
To: solr-user@lucene.apache.org
Subject: Re: solr search issue

Hi,

I am new to solr. Could you kindly explain a bit about defining free text
search.

In my database I have two columns. One is id another is data.
I want my query to spread across multiple fields. When I search for a
parameter from the id field, it searches in both fields. However,
whenever I search for a parameter from the data field, it only searches in data.
Below is my query.

http://localhost:8983/solr/db/select/?defType=dismax&q=2&qf=data%20id^2&start=0&rows=11&fl=data,id

In my table, id=2 for data=level2.
  id=4 for data=cashier2.

When I search q=2&qf=data id, it searches for '2' in the data field also and
gives me both results, i.e. data=level2 and data=cashier2.
However, when I search for q=cashier2&qf=data id, it only gives me the result
data=cashier2 and not data=level2 (please note that id=2 for data =
level2. Ideally it should break the query into cashier+2 and search in the id
field as well).


Thanks and regards,
Romita Saha

Panasonic RD Center Singapore
Blk 1022 Tai Seng Avenue #06-3530
Tai Seng Ind. Est. Singapore 534415
DID: (65) 6550 5383 FAX: (65) 6550 5459
email: romita.s...@sg.panasonic.com



From:   Erick Erickson erickerick...@gmail.com
To: solr-user@lucene.apache.org,
Date:   11/02/2012 02:42 PM
Subject:Re: solr search issue



First, define a free text search. If what you're after is that your
terms
(i.e. q=term1 term2) get spread
across multiple fields, simply add them to your qf parameter
(qf=field1,field2). If you want the terms
bound to a particular field, it's just the usual q=field:term, in which
case any field term does NOT get
spread amongst all the fields in your qf parameter.

Best
Erick


On Fri, Nov 2, 2012 at 1:56 AM, Romita Saha
romita.s...@sg.panasonic.comwrote:


Hi,

Thank you for your reply. What if I want to do a free text search?

Thanks and regards,
Romita


From:   Gora Mohanty g...@mimirtech.com
To: solr-user@lucene.apache.org,
Date:   11/02/2012 12:36 PM
Subject:Re: solr search issue



On 2 November 2012 09:51, Romita Saha romita.s...@sg.panasonic.com
wrote:

 Hi,

 I am trying to search a database. In my database I have a field "level2".


 My query:




http://localhost:8983/solr/db/select/?defType=dismax&q=search%20level2&qf=data%20id^2%20&start=0&rows=11&fl=data,id


Where did you get this syntax from? If you want to search just on the
field level2, you should have:
http://localhost:8983/solr/db/select/?q=term&defType=dismax&qf=level2
where "term" is your search term. (I have omitted boosts, and extra
parameters.)

Regards,
Gora






Re: index solr using jquery AJAX

2012-11-02 Thread amit
Hi Luis 
I tried sending an array too, but no luck
This is how the request looks like.
$.ajax({
    url: "http://192.168.10.113:8080/solr/update/json?commit=true",
    type: "POST",
    contentType: "application/json; charset=utf-8",
    data: '[ { "id": "22", "name": "Seattle seahawks" } ]',
    dataType: 'jsonp',
    crossDomain: true,
    jsonp: 'json.wrf'
 });



--
View this message in context: 
http://lucene.472066.n3.nabble.com/index-solr-using-jquery-AJAX-tp4017490p4017822.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: Solr 4.0 admin panel

2012-11-02 Thread Tannen, Lev (USAEO) [Contractor]
Thank you Hoss.
I am switching from Solr 3.6 to Solr 4.0. In Solr 3.6 a link
http://localhost:8983/solr/admin or http://localhost:8983/solr/coreName/admin
leads to an administration panel page with some useful information. Swapping
is also done through the same link
http://localhost:8983/solr/admin?action=SWAP&core=toSearch&other=toIndex .
I expected similar features from Solr 4.0.
I work for the government and it does not allow browsers other than IE.
Lev

-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org] 
Sent: Wednesday, October 31, 2012 6:19 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 4.0 admin panel

: The right address to go is http://localhost:8983/solr/ on Solr 4.0.
: http://localhost:8983/solr/admin links to nothing if you go check the
: servlet.

For back compat, http://localhost:8983/solr/admin should automatically redirect
to http://localhost:8983/solr/#/ -- regardless of whether you are in legacy
single core mode or not (unless perhaps you have a Solr core named "admin"?..
haven't tried that)

:  administration panel. In Solr 4.0 it returns just a general Apache Solr page
:  with dead links. "Dead links" means that when I click on them nothing
:  happens.

Can you elaborate on what exactly you are seeing?  A Jira issue with a
screenshot attached would be helpful.  It would also be good to know what
browser you are using; I think there are bugs in how the new UI JavaScript
works in IE9 that have never been addressed, because the relative usage of IE by
solr-users is so low there was no strong push to invest time in trying to
figure them out.  Yeah, here's the issue...

https://issues.apache.org/jira/browse/SOLR-3876

-Hoss


SolrCloud indexing blocks if node is recovering

2012-11-02 Thread Markus Jelsma
Hi,

We just tested indexing some million docs from Hadoop to a 10 node 2 rep 
SolrCloud cluster with this week's trunk. One of the nodes gave an OOM but 
indexing continued without interruption. When i restarted the node indexing 
stopped completely, the node tried to recover - which was unsuccessful. I 
restarted the node again but that wasn't very helpful either. Finally i decided 
to stop the node completely and see what happens - indexing resumed.

Why or how won't the other nodes accept incoming documents when one node
behaves really badly? The dying node wasn't the node we were sending documents to
and we are not using CloudSolrServer yet (see other thread). Is this known
behavior? Is it a bug?

Thanks,
Markus


Re: Solr 4.0 admin panel

2012-11-02 Thread Shawn Heisey

On 11/2/2012 7:13 AM, Tannen, Lev (USAEO) [Contractor] wrote:

Thank you James.
  
In Solr 3.6 http://localhost:8983/solr/admin links to the admin panel. So the question remains: how to invoke an admin panel in Solr 4.0?

Does it mean that there is no such thing as an admin panel in Solr 4.0?


Try this URL on any version:

http://localhost:8983/solr

In 4.0, this will get you to an admin panel for everything.  In 3.x, 
this will get you to a page with links for all your admin panels. The 
3.x panel is fairly simple, such that you can save the URL for almost 
any page you're looking at and use that URL in a script.  The 4.x page
is much more complex; generally you can't use the URL in your browser's
address bar in scripts.  The new admin uses URLs like the following 
behind the scenes to gather information:


http://localhost:8983/solr/CORE/admin/mbeans?stats=true

In another of your messages you indicated the government only allows 
IE.  This is a problem.  The 4.0 admin interface is known to be broken 
under IE9, and quite possibly IE8 as well.  It may be worth pointing out 
that you can install and run Google Chrome without admin privileges of 
any kind.  There are also a number of pocket browsers designed to run 
from a USB stick that should run just fine from a directory on your hard 
drive.  I would not suggest using these without permission, but the 
capability is there.


Thanks,
Shawn



Re: After adding field to schema, the field is not being returned in results.

2012-11-02 Thread Dotan Cohen
On Thu, Nov 1, 2012 at 9:09 PM, Erick Erickson erickerick...@gmail.com wrote:
 What happens if you sort ascending rather than descending? Depending on
 what (if anything) you've done with sortMissingFirst/Last on that field,
 it's possible that you're just seeing the results of the sort and docs with
 your new field are somewhere down the list. If you've done nothing, you
 should be seeing the docs with the new field at the top of the list  with
 the query you posted, so this is grasping at straws a bit.


Thanks.  Sorting ASC or DESC still does not show the field, even in
documents for which the field should exist based on the time that it
was created. However, I am starting to wonder whether perhaps my
application is creating the wrong field values and perhaps that is why
the field doesn't exist. This is the field in question:
<fieldType name="tdate" class="solr.TrieDateField" omitNorms="true"
precisionStep="6" positionIncrementGap="0"/>
<field name="created_iso8601" type="tdate" stored="true"
multiValued="false" indexed="true"/>

My application is writing dates in this format (ISO 8601):
2012-11-02T13:48:10Z

Here is the PHP code:
date("Y-m-d\TH:i:s\Z")

I am setting the server timezone, as PHP = 5.1 requires:
date_default_timezone_set('UTC');



 On the Solr admin page, try going to collection > schema browser and choose the
 field in question from your drop-down. See if it looks like it is stored
 and indexed, and see what some of the values are. This is getting the vals
 from the indexed terms, but at least it should say "stored" in the schema
 and index sections. If it doesn't, then you somehow have a mismatch between
 your schema and what's actually in your index. This really shouldn't be the
 case since it's a brand-new field


Sorry, no Term Info available :(

Alright, so it is an issue with the data that I'm feeding it. Would
Solr include the fields with good data and reject the fields with bad
data, but update the Document anyway? I can confirm that the variable
used to populate the field in question is not empty.



 Two other things I'd try.
 1 If you have the ID of the document you're _sure_ has a date field in it
 try your query just on that, with fl=*. This would avoid any possible
 sortMissingFirst/Last issues.


Yes, I've done that, with and without fl.



 2 Another way to restrict this would be to add an fq clause to the query
 so docs without the field would not be displayed, something like
 fq=[NOW-1YEAR TO NOW] assuming your dates are in the last year.

 But I guess we're down to needing to see the schema definition etc. if that
 doesn't work.

 Best
 Erick

Thanks, Erick. It does look like the issue is that the field remains
empty. Perhaps I'm writing ISO 8601 wrong; I'll get to looking at that
now. I'm surprised that Solr accepts the documents with bad data in
some of the fields; I will look into that as well.

Have a peaceful Saturday.


-- 
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com


Re: After adding field to schema, the field is not being returned in results.

2012-11-02 Thread Dotan Cohen
On Thu, Nov 1, 2012 at 9:28 PM, Lance Norskog goks...@gmail.com wrote:
 Have you uploaded data with that field populated? Solr is not like a 
 relational database. It does not automatically populate a new field when you 
 add it to the schema. If you sort on a field, a document with no data in that 
 field comes first or last (I don't know which).


Thank you. In fact, I am being careful to try to pull up records after
the date in which the application was updated to populate the field.


-- 
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com


RE: DataImport Handler : Transformer Function Eval Failed Error

2012-11-02 Thread Mishra, Shikhar
We have a scenario where the same products are available from multiple vendors 
at different prices. We want to store these prices along with the products in 
the index (product has many prices), so that we can apply dynamic filtering on 
the prices at the time of search.


Thanks,
Shikhar

-Original Message-
From: Otis Gospodnetic [mailto:otis.gospodne...@gmail.com] 
Sent: Thursday, November 01, 2012 8:13 PM
To: solr-user@lucene.apache.org
Subject: Re: DataImport Handler : Transformer Function Eval Failed Error

Hi,

That looks a little painful... what are you trying to achieve by storing JSON 
in there? Maybe there's a simpler way to get there...

Otis
--
Performance Monitoring - http://sematext.com/spm

On Nov 1, 2012 6:16 PM, Mishra, Shikhar shikhar.mis...@telcobuy.com
wrote:

 Hi,

 I'm trying to store a list of JSON objects as the stored value for the
 field "prices" (see below).

 I'm getting the following error from the custom transformer function 
 (see the data-config file at the end) of data import handler.

 Error Message

 Caused by: org.apache.solr.handler.dataimport.DataImportHandlerException:
 'eval' failed with language: JavaScript and script:
 function vendorPrices(row){

 var wwtCost = row.get('WWT_COST');
 var listPrice = row.get('LIST_PRICE');
 var vendorName = row.get('VENDOR_NAME');

 //Below approach fails
 var prices = [];

 prices.push({'vendor':vendorName});
 prices.push({'wwtCost':wwtCost});
 prices.push({'listPrice':listPrice});

 row.put('prices':prices);

 //Below approach works
 //row.put('prices', '{' + 'vendor:' + vendorName + 
 ', ' + 'wwtCost:' + wwtCost + ', ' + 'listPrice:' + listPrice + '}');
 return row;
 } Processing Document # 1
 at
 org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndT
 hrow(DataImportHandlerException.java:71)

 Data Import Handler Configuration File:

 <dataConfig>

 <script>
 <![CDATA[
 function vendorPrices(row){

         var wwtCost = row.get('WWT_COST');
         var listPrice = row.get('LIST_PRICE');
         var vendorName = row.get('VENDOR_NAME');

         //Below approach fails
         var prices = [];

         prices.push({'vendor':vendorName});
         prices.push({'wwtCost':wwtCost});
         prices.push({'listPrice':listPrice});

         row.put('prices':prices);

         //Below approach works
         //row.put('prices', '{' + 'vendor:' + vendorName + ', ' + 'wwtCost:' + wwtCost + ', ' + 'listPrice:' + listPrice + '}');
         return row;
 }
 ]]>
 </script>

 <dataSource driver="oracle.jdbc.driver.OracleDriver"
     url="jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac-scan.somr.com)(PORT=3465))(CONNECT_DATA=(SERVICE_NAME=ERP_GENERAL.SOMR.ORG)))"
     user="dummy" password="xx"/>
 <document>
     <entity name="item" query="select * from wwt_catalog.wwt_product prod,
         wwt_catalog.wwt_manufacturer mfg where prod.mfg_id = mfg.mfg_id and
         prod.mfg_product_number = 'CON-CBO2-B22HPF'">
         <field column="PRODUCT_ID" name="id" />
         <field column="MFG_PRODUCT_NUMBER" name="name" />
         <field column="MFG_PRODUCT_NUMBER" name="nameSort" />
         <field column="MFG_NAME" name="manu" />
         <field column="MFG_ITEM_NUMBER" name="alphaNameSort" />
         <field column="DESCRIPTION" name="features" />
         <field column="DESCRIPTION" name="description" />

         <entity name="vendor_sources" transformer="script:vendorPrices"
             query="SELECT PRICE.WWT_COST, PRICE.LIST_PRICE, VEND.VENDOR_NAME,
             AVAIL.LEAD_TIME, AVAIL.QTY_AVAILABLE FROM wwt_catalog.wwt_product prod,
             wwt_catalog.wwt_product_pricing price, wwt_catalog.wwt_vendor vend,
             wwt_catalog.wwt_product_availability avail WHERE PROD.PRODUCT_ID =
             price.product_id(+) AND price.vendor_id = vend.vendor_id(+) AND
             PRICE.PRODUCT_ID = avail.product_id(+) AND PRICE.VENDOR_ID =
             AVAIL.VENDOR_ID(+) AND prod.PRODUCT_ID = '${item.PRODUCT_ID}'">
         </entity>
     </entity>
 </document>
 </dataConfig>


 Are there any syntactic errors in the JavaScript code above? Thanks.

 Shikhar





Possible memory leak in recovery

2012-11-02 Thread Markus Jelsma
Hi,

We wiped clean the data directories for one node. That node is never able to
recover and regularly runs OOM. On another cluster (with an older build,
September 10th) memory consumption on recovery is fairly low when recovering,
and with only a 250MB heap allocated it's easy to recover two 4GB cores from
scratch at the same time. On this new test cluster we see the following
happening:
- no index, start recovery
- recovery fails (see other thread, cannot read past EOF when reading index
files)
- heap is not released
- recovery is retried, fails
- heap is not released
.. OOM

The distinct saw tooth pattern is not there; heap consumption only grows in
significant steps when recovery is retried but fails. If I increase the heap,
recovery simply fails a few more times.

I cannot find an existing issue but may have overlooked it. Should I file a bug,
or did I miss a Jira issue?

Thanks,
Markus



Re: SOLR - To point multiple indexes in different folder

2012-11-02 Thread Erick Erickson
That should be OK. The recursive bit happens when you define the shard
locations in your config files in the default search handler.


On Fri, Nov 2, 2012 at 6:42 AM, ravi.n rav...@ornext.com wrote:

 Erick,

 We are forming a request something like the one below for the default /select
 request handler; will this cause an issue?
 So far we are not facing any recursive issues.


 http://94.101.147.150:8080/solr/select/?q=*%3A*&version=2.2&start=0&rows=10&indent=on&shards=localhost:8080/solr/coll1,localhost:8080/solr/coll2,localhost:8080/solr/coll3,localhost:8080/solr/coll4,localhost:8080/solr/coll5,localhost:8080/solr/coll6,localhost:8080/solr/coll7

 Below is the solrconfig for /select

   <requestHandler name="/select" class="solr.SearchHandler">
     <lst name="defaults">
       <str name="echoParams">explicit</str>
       <int name="rows">10</int>
       <str name="df">recordid</str>
     </lst>

 recordid - is the unique field in the document.

 Regards,
 Ravi



 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/SOLR-To-point-multiple-indexes-in-different-folder-tp4016640p4017783.html
 Sent from the Solr - User mailing list archive at Nabble.com.



Re: SolrCloud: general questions

2012-11-02 Thread ku3ia
Hi Tomás!!!
The first three questions are major for me. Many thanks for your response.

As for the number of shards and the documents in them, I'll try some tests.

Thanks.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-general-questions-tp4017769p4017836.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr Replication is not Possible on RAMDirectory?

2012-11-02 Thread Michael Della Bitta
 so it is not possible to use RAMdirectory for replication?

No, RAMDirectory doesn't work for replication. Use MMapDirectory... it
ends up storing the index in RAM and more efficiently so, plus it's
backed by disk.

Just be sure to not set a big heap because MMapDirectory works outside of heap.
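
For reference, the directory implementation is chosen in solrconfig.xml; a
minimal sketch of the change, assuming the stock example config, is a single
element like <directoryFactory name="DirectoryFactory"
class="solr.MMapDirectoryFactory"/>. (On a 64-bit JVM the default factory
typically resolves to MMapDirectory already.)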

Michael Della Bitta


Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271

www.appinions.com

Where Influence Isn’t a Game


On Fri, Nov 2, 2012 at 4:44 AM, deniz denizdurmu...@gmail.com wrote:
 so it is not possible to use RAMdirectory for replication?


Re: After adding field to schema, the field is not being returned in results.

2012-11-02 Thread Erick Erickson
Well, I'm at my wits' end. I tried your field definitions (using the
exampledocs XML) and they work just fine. As for messing up the date
on the way in: you should be seeing stack traces in your log files.

The only way I see not getting the "Sorry, no Term Info available :("
message is if you don't have any values in the field. So, my guess is that
you're not getting the format right and the docs aren't getting indexed,
but that's just a guess. You can freely sort even if there are no values at
all in a particular field. This can be indicated if you sort asc and desc
and the order doesn't change. It just means the field is defined in the
schema, not necessarily that there are any values in it.

So, I claim you have no date values in your index. The fact that you can
sort is just an artifact of sortMissingFirst/Last doing something sensible.

Next question, are you absolutely sure that your indexing program and your
searching program are pointing at the same server?

So what I'd do next is
1 create a simple XML doc that conforms to your schema and use the
post.jar tool to send it to your server. Watch the output log for any date
format exceptions.
2 Use the admin UI to insure that you can see terms in docs added this way.
3 from there back up and see what step in the indexing process isn't
working (assuming that's the problem). Solr logs help here.

Note I'm completely PHP-ignorant, I have no clue whether the formatting
you're doing is OK or not. You might try logging the value somewhere in
your PHP so you can post that and/or include it in your sample XML file...

Best
Erick


On Fri, Nov 2, 2012 at 10:02 AM, Dotan Cohen dotanco...@gmail.com wrote:

 On Thu, Nov 1, 2012 at 9:28 PM, Lance Norskog goks...@gmail.com wrote:
  Have you uploaded data with that field populated? Solr is not like a
 relational database. It does not automatically populate a new field when
 you add it to the schema. If you sort on a field, a document with no data
 in that field comes first or last (I don't know which).
 

 Thank you. In fact, I am being careful to try to pull up records after
 the date in which the application was updated to populate the field.


 --
 Dotan Cohen

 http://gibberish.co.il
 http://what-is-what.com



Nested Join Queries

2012-11-02 Thread Gerald Blanck
At a high level, I have a need to be able to execute a query that joins
across cores, and that query during its joining may join back to the
originating core.

Example:
Find all Books written by an Author who has written a best selling Book.

In Solr query syntax
A) against the book core - bestseller:true
B) against the author core - {!join fromIndex=book from=id
to=bookid}bestseller:true
C) against the book core - {!join fromIndex=author from=id
to=authorid}{!join fromIndex=book from=id to=bookid}bestseller:true

A - returns results
B - returns results
C - does not return results

Given that A and C use the same core, I started looking for join code that
compares the originating core to the fromIndex and found this
in JoinQParserPlugin (line #159).

if (info.getReq().getCore() == fromCore) {

  // if this is the same core, use the searcher passed in... otherwise we
  // could be warming and get an older searcher from the core.

  fromSearcher = searcher;

} else {

  // This could block if there is a static warming query with a join in it,
  // and if useColdSearcher is true.

  // Deadlock could result if two cores both had useColdSearcher and had
  // joins that used each other.

  // This would be very predictable though (should happen every time if
  // misconfigured)

  fromRef = fromCore.getSearcher(false, true, null);


  // be careful not to do anything with this searcher that requires the
  // thread local SolrRequestInfo in a manner that requires the core in the
  // request to match

  fromSearcher = fromRef.get();

}

I found that if I were to modify the above code so that it always follows
the logic in the else block, I get the results I expect.

Can someone explain to me why the code is written as it is?  And if we were
to run with only the else block being executed, what type of adverse
impacts might we have?

Does anyone have other ideas on how to solve this issue?

Thanks in advance.
-Gerald


Re: index solr using jquery AJAX

2012-11-02 Thread Luis Cappa Banda
Hello,

In that case try again with a JSON array and check if:

1. The request arrives at the Solr server. In that case, copy & paste the log
traces here.
2. The request is never executed. Analyze with Firebug in your browser.

Regards,

Luis Cappa.


2012/11/2 amit amit.mal...@gmail.com

 Hi Luis
 Yes solr.JsonUpdateRequestHandler is configured in solrconfig.
 If I try with json array without jsonp, I get the Cross Origin Error.
 My solr server is in a different domain.

 Thanks
 Amit

 On Fri, Nov 2, 2012 at 8:59 PM, Rakudten [via Lucene] 
 ml-node+s472066n4017849...@n3.nabble.com wrote:

  Hello,
 
  Check if you have solr.JsonUpdateRequestHandler configured in your
  solrconfig.xml file. Tried again with a JSON array and without any jsonp
  reference (dataType, crossDomain, jsonp).
 
  Good luck!
 
  Regards,
 
  Luis Cappa.
 
  2012/11/2 amit [hidden email]
 
 
   I am using solr 3.6 version.
  
   Thanks
   Amit
  
    On Fri, Nov 2, 2012 at 7:47 PM, Rakudten [via Lucene] [hidden email] wrote:
  
Hello,
   
Are you using Solr 4.0 version? If you are using the last Solr 4.0
   version
the endpoint url appears to be bad formed. Try with:
   
http://192.168.10.113:8080/solr/update?commit=true
http://192.168.10.113:8080/solr/update/json?commit=true
   
   
Regards,
   
Luis Cappa.
   
   
 2012/11/2 amit [hidden email]
   
 Hi Luis
 I tried sending an array too, but no luck
 This is how the request looks like.
  $.ajax({
      url: "http://192.168.10.113:8080/solr/update/json?commit=true",
      type: "POST",
      contentType: "application/json; charset=utf-8",
      data: '[ { "id": "22", "name": "Seattle seahawks" } ]',
      dataType: 'jsonp',
      crossDomain: true,
      jsonp: 'json.wrf'
  });






 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/index-solr-using-jquery-AJAX-tp4017490p4017851.html
 Sent from the Solr - User mailing list archive at Nabble.com.



How to use (if it is applicable) hibernate inside a solr plugin

2012-11-02 Thread Ricci Gian Maria
Hi to everyone,

 

I'm a .NET developer, so I'm pretty new to java developing. I was able to
write a custom filter to use in solr, but I had problem with another filter
that should use hibernate to read some data from a database during
initialization.

 

First of all I was not able to make it read the hibernate.cfg.xml file,
because I always got "hibernate.cfg.xml not found". This is not a big issue;
I configured Hibernate entirely in code and that problem was solved (even
if I'd like to understand how I can use it inside a Solr plugin). Now the
problem is that when the Hibernate Configuration tries to configure the mappings
it is not able to find the mapped classes.

 

In the error I found: caused by: org.hibernate.MappingException: class
Dictionary.Term not found while looking for property: value

 

My question is: has anyone already managed to access a database with
Hibernate inside a Solr plugin? Is this an unsupported scenario, so I need to
resort to some other technique to access the database, or am I missing
something?

 

Thanks for any help.

 

 

--

Ricci Gian Maria

MVP Visual Studio ALM

 http://www.codewrecks.com

 http://blogs.ugidotnet.org/rgm

 Twitter: http://twitter.com/alkampfer

 Msn: alkamp...@nablasoft.com

 

 



Re: Dynamic core selection

2012-11-02 Thread Dzmitry Petrushenka

Hi all!

Just sharing the solution)

I've extended SolrDispatchFilter with my own implementation and did like  
this:


...

String core = determineCore(req);
super.doFilter(new CoreRoutingReqWrapper(req, core), response, chain);

...

code for the CoreRoutingReqWrapper class:

class CoreRoutingReqWrapper extends HttpServletRequestWrapper {
private String pathToCore;

public CoreRoutingReqWrapper(HttpServletRequest request, String  
core) {

super(request);
        pathToCore = "/" + core + request.getServletPath();
}

@Override
public String getServletPath() {
return pathToCore;
}
}

Would be nice to have something like a CoreResolver component in the Solr
architecture.


Something like this:

interface CoreResolver {
String resolveCore(HttpServletRequest req);
}

Would make Solr server more customizable.

What do you think?

Thanx,



: as I said we have our own search handler (wrapping handleRequestBody  
method
: and adding logic before it) where we convert those custom_paramX  
params into
: Solr understandable params like q, fq, facet, etc. Then we delegate to  
Solr to

: process them.
:
: So what I want to do is core0 handle things if custom_param1=aaa and  
core1 if

: custom_param1=ccc.

Ah.. i think i'm understanding:
 * you know you need a custom search handler
 * you have a custom search handler that delegates to some other handler
based on some logic
 * your customer handler modifies the request params before delegating to
the handler it picks.
 * the part you are missing is how to delegate to an entirely differnet
SolrCore.

does that capture your question?

The nutshell is you would need to ask your current SolrCore for access to
the CoreContainer -- then create a new LocalSolrQueryRequest and ask
that SolrCore to execute it.  one hitch to watch out for is keeping track
of things like the SolrIndexSearcher used -- because stuff like DocList
values in the response will come from the *other* SolrIndexSearcher, and
you'll need to use that when writing the response out (because the
QueryResponseWriter needs to ask the SolrIndexSearcher for the stored fields
from those docids).

(Note: i have never tried this ... there may be other gotcha's i'm not
aware of)
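
For the archives, that outline in code might look roughly like the untested
sketch below (the core name "core1" and the "/select" handler path are
hypothetical):

import org.apache.solr.common.params.SolrParams;
import org.apache.solr.core.CoreContainer;
import org.apache.solr.core.SolrCore;
import org.apache.solr.request.LocalSolrQueryRequest;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;

// inside the custom handler, given the incoming SolrQueryRequest req:
CoreContainer container = req.getCore().getCoreDescriptor().getCoreContainer();
SolrCore otherCore = container.getCore("core1"); // increments the core's refcount
try {
    SolrParams params = req.getParams(); // or the rewritten params
    SolrQueryRequest otherReq = new LocalSolrQueryRequest(otherCore, params);
    try {
        SolrQueryResponse otherRsp = new SolrQueryResponse();
        otherCore.execute(otherCore.getRequestHandler("/select"), otherReq, otherRsp);
        // Per the caveat above: DocList values in otherRsp hold docids from the
        // *other* core's SolrIndexSearcher, so that searcher must be the one
        // consulted when the response is written out.
    } finally {
        otherReq.close();
    }
} finally {
    otherCore.close(); // releases the reference taken by getCore()
}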






-Hoss



--
Using Opera's revolutionary email client: http://www.opera.com/mail/


unable to create new native thread while importing

2012-11-02 Thread Chris Brown
I'm having a problem importing data into Solr 4.0 (the same error happens
in 3.6.1).  Here is the Error I get:

2012-11-02 09:50:07.265:WARN:oejs.AbstractConnector:
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:658)
at org.eclipse.jetty.util.thread.QueuedThreadPool.startThread(QueuedThreadPool.java:436)
at org.eclipse.jetty.util.thread.QueuedThreadPool.dispatch(QueuedThreadPool.java:361)
at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.dispatch(SocketConnector.java:212)
at org.eclipse.jetty.server.bio.SocketConnector.accept(SocketConnector.java:116)
at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:933)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:599)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:534)
at java.lang.Thread.run(Thread.java:680)

This error occurs after approximately 344k documents imported using 4100
calls and containing approximately 40MB (raw XML, so the data is smaller).
The full import will be approximately 1300x this size if I'm able to
finish it.  I'm importing using Java's HttpURLConnection and my imports look
something like this:

(data in the name column is redacted but contains a 7-bit-clean string in
this example)

POST http://172.31.1.127:8983/solr/
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<add>
  <doc>
    <field name="id">3841</field>
    <field name="name">...</field>
  </doc>
  <doc>
    <field name="id">3842</field>
    <field name="name">...</field>
  </doc>
...etc...
</add>

There is a single import HttpURLConnection - I have multiple threads and
they're mutexing on the connection - and the client seems to operate fine
until the server throws this error, then the client pauses until it times
out, then tries again and generates more out-of-memory errors. Also, as far
as I can tell, the documents that appear to have been imported never get
indexed.

The configuration being used is the one in the solr example folder.

How do I do my import into Solr?  I've seen references to changing the
AutoCommit settings, which I've tried to no effect.  I also found mention
of a similar problem to do with the 4.0 Alpha ConcurrentUpdateSolrServer, but
since I'm not sure how to change this, I haven't tried it
(http://www.searchworkings.org/forum/-/message_boards/view_message/489575).

Thanks,
Chris...



large text blobs in string field

2012-11-02 Thread geeky2
hello 

environment - solr 3.5

i would like to know if anyone is using the technique of placing large text
blobs into a non-indexed string field and if so - are there any good/bad
aspects to consider?

we are thinking of doing this to represent a 1:M relationship, with the
"Many" being represented as a string in the schema (probably comprised
either of XML or JSON objects).

we are looking at the classic part : model scenario, where the client would
look up a part and the document would contain a string field with
potentially 200+ model numbers.  edge cases for this could be 400+ model
numbers.

thx

 



--
View this message in context: 
http://lucene.472066.n3.nabble.com/large-text-blobs-in-string-field-tp4017882.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: unable to create new native thread while importing

2012-11-02 Thread Alexandre Rafalovitch
Have you tried doing a thread dump and seeing how many threads you have and
what they are doing. Maybe a connection is not being closed somehow.
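
(For the archives: on most JVMs a thread dump can be captured with jstack <pid>,
or by sending the process a SIGQUIT, i.e. kill -3; the per-thread stacks usually
make leaked or stuck connections obvious.)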

Regards,
   Alex.

Personal blog: http://blog.outerthoughts.com/
LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
- Time is the quality of nature that keeps events from happening all at
once. Lately, it doesn't seem to be working.  (Anonymous  - via GTD book)


On Fri, Nov 2, 2012 at 2:01 PM, Chris Brown cbr...@infoblox.com wrote:

 I'm having a problem importing data into Solr 4.0 (the same error happens
 in 3.6.1).  Here is the Error I get:

 2012-11-02 09:50:07.265:WARN:oejs.AbstractConnector:
 java.lang.OutOfMemoryError: unable to create new native thread
 at java.lang.Thread.start0(Native Method)
 at java.lang.Thread.start(Thread.java:658)
 at
 org.eclipse.jetty.util.thread.QueuedThreadPool.startThread(QueuedThreadPool
 .java:436)
 at
 org.eclipse.jetty.util.thread.QueuedThreadPool.dispatch(QueuedThreadPool.ja
 va:361)
 at
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.dispatch(Soc
 ketConnector.java:212)
 at
 org.eclipse.jetty.server.bio.SocketConnector.accept(SocketConnector.java:11
 6)
 at
 org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.j
 ava:933)
 at
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java
 :599)
 at
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:
 534)
 at java.lang.Thread.run(Thread.java:680)

 This error occurs after approximately 344k documents imported using 4100
 calls and containing aproximately 40mb (raw xml, so the data is smaller).
 The full import will be approximately 1300x this size if I'm able to
 finish it.  I'm importing use Java's HttpURLConnection and my imports look
 something like this:

 (data in the name column is redacted but contains a 7-bit-clean string in
 this example)

 POST http://172.31.1.127:8983/solr/
 <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
 <add>
   <doc>
     <field name="id">3841</field>
     <field name="name">...</field>
   </doc>
   <doc>
     <field name="id">3842</field>
     <field name="name">...</field>
   </doc>
 ...etc...
 </add>

 There is a single import HttpURLConnection - I have multiple threads and
 they're mutexing on the connection - and the client seems to operate find
 until the server throws this error, then the client pauses until it times
 out, then tries again and generates more outofmemory errors. Also, as far
 as I can tell, the documents that appear to have been imported never get
 indexed.

 The configuration being used is the one in the solr example folder.

 How do I do my import into Solr?  I've seen reference to changing the
 AutoCommit settings which I've tried to no effect.  I also found mention
 of a similar problem to do with Alpha 4.0 ConcurrentUpdateSolrServer but
 since I'm not sure how to change this so I haven't tried this
 (http://www.searchworkings.org/forum/-/message_boards/view_message/489575
 ).

 Thanks,
 Chris...




Solr-UIMA integration : analyzing multi-fields

2012-11-02 Thread vempap
Hello all,

  how do I analyze multiple fields using UIMA when we add the UIMA update
chain to the update handler? And how do I map which field gets analyzed to
which field?

For instance,

lets say there are two text fields, text1 & text2, for which I need to
generate pos-tags using UIMA. In the fields section I can definitely do
this:

<lst name="analyzeFields">
  <bool name="merge">false</bool>
  <arr name="fields">
    <str>text1</str>
    <str>text2</str>
  </arr>
</lst>

and in the fieldMappings :

<lst name="type">
  <str name="name">org.apache.uima.TokenAnnotation</str>
  <lst name="mapping">
    <str name="feature">posTag</str>
    <str name="field">postags1</str>
  </lst>
</lst>

but how do I specify that I also need pos-tags for field text2, going into the
postags2 field? If there is any schema/DTD for these configuration settings,
please let me know.

Also, how can I change the code, or is there a way to specify generating
pos-tags after getting the token stream from an analyzer? Currently, the
update processor gets the text from the input field and generates pos-tags
into the postags1 field using the WhitespaceTokenizer defined in the XML
configuration files by default. How can I change the tokenizer such that it
uses a Solr Analyzer/Tokenizer?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-UIMA-integration-analyzing-multi-fields-tp4017890.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: unable to create new native thread while importing

2012-11-02 Thread Chris Brown
Thanks for that, I didn't know I could see the thread dump so easily.
That does appear to have been the problem - I wasn't flushing my input and
the underlying API held the connection open-and-not-reusable until
garbage collection.

Chris...
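
For the archives, the pattern that avoids this looks roughly like the sketch
below (a hedged reconstruction, not the actual client code; solrUrl and xml are
placeholder variables): write and close the request body, then fully drain and
close the response so the socket can go back to the keep-alive pool.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

HttpURLConnection conn = (HttpURLConnection) new URL(solrUrl).openConnection();
conn.setRequestMethod("POST");
conn.setDoOutput(true);
conn.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
OutputStream out = conn.getOutputStream();
out.write(xml.getBytes("UTF-8"));
out.flush();
out.close(); // finish the request body
int status = conn.getResponseCode();
InputStream in = (status < 400) ? conn.getInputStream() : conn.getErrorStream();
while (in.read() != -1) {
    // drain the response so the underlying connection can be reused
}
in.close();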

On 12-11-02 1:05 PM, Alexandre Rafalovitch arafa...@gmail.com wrote:

Have you tried doing a thread dump and seeing how many threads you have
and
what they are doing. Maybe a connection is not being closed somehow.

Regards,
   Alex.

Personal blog: http://blog.outerthoughts.com/
LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
- Time is the quality of nature that keeps events from happening all at
once. Lately, it doesn't seem to be working.  (Anonymous  - via GTD book)


On Fri, Nov 2, 2012 at 2:01 PM, Chris Brown cbr...@infoblox.com wrote:

 I'm having a problem importing data into Solr 4.0 (the same error
happens
 in 3.6.1).  Here is the Error I get:

 2012-11-02 09:50:07.265:WARN:oejs.AbstractConnector:
 java.lang.OutOfMemoryError: unable to create new native thread
 at java.lang.Thread.start0(Native Method)
 at java.lang.Thread.start(Thread.java:658)
 at
 
org.eclipse.jetty.util.thread.QueuedThreadPool.startThread(QueuedThreadPo
ol
 .java:436)
 at
 
org.eclipse.jetty.util.thread.QueuedThreadPool.dispatch(QueuedThreadPool.
ja
 va:361)
 at
 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.dispatch(S
oc
 ketConnector.java:212)
 at
 
org.eclipse.jetty.server.bio.SocketConnector.accept(SocketConnector.java:
11
 6)
 at
 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector
.j
 ava:933)
 at
 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.ja
va
 :599)
 at
 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.jav
a:
 534)
 at java.lang.Thread.run(Thread.java:680)

 This error occurs after approximately 344k documents imported using 4100
 calls and containing aproximately 40mb (raw xml, so the data is
smaller).
 The full import will be approximately 1300x this size if I'm able to
 finish it.  I'm importing use Java's HttpURLConnection and my imports
look
 something like this:

 (data in the name column is redacted but contains a 7-bit-clean string
in
 this example)

  POST http://172.31.1.127:8983/solr/
  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
  <add>
    <doc>
      <field name="id">3841</field>
      <field name="name">...</field>
    </doc>
    <doc>
      <field name="id">3842</field>
      <field name="name">...</field>
    </doc>
  ...etc...
  </add>

 There is a single import HttpURLConnection - I have multiple threads and
 they're mutexing on the connection - and the client seems to operate
find
 until the server throws this error, then the client pauses until it
times
 out, then tries again and generates more outofmemory errors. Also, as
far
 as I can tell, the documents that appear to have been imported never get
 indexed.

 The configuration being used is the one in the solr example folder.

 How do I do my import into Solr?  I've seen reference to changing the
 AutoCommit settings which I've tried to no effect.  I also found mention
 of a similar problem to do with Alpha 4.0 ConcurrentUpdateSolrServer but
 since I'm not sure how to change this so I haven't tried this
 
(http://www.searchworkings.org/forum/-/message_boards/view_message/489575
 ).

 Thanks,
 Chris...





Puzzled by search score

2012-11-02 Thread dm_tim
Howdy,
I'm reading a table in a db using the following schema:
 <fields>
   <field name="id" type="string" indexed="true" stored="true" required="true" />
   <field name="cid" type="long" indexed="true" stored="true" required="true"/>
   <field name="lang" type="string" indexed="true" stored="true" required="true"/>
   <field name="file_version" type="int" indexed="true" stored="true" required="true"/>
   <field name="search_id" type="long" indexed="true" stored="true" required="true"/>
   <field name="tag" type="text_general" indexed="true" stored="true" required="true"/>
   <field name="created" type="date" indexed="false" stored="true"/>
   <field name="last_modified" type="date" indexed="true" stored="true"/>
   <field name="version" type="long" indexed="true" stored="true"/>
   <field name="_version_" type="long" indexed="true" stored="true" multiValued="false"/>
 </fields>

 <uniqueKey>id</uniqueKey>

 <defaultSearchField>tag</defaultSearchField>

 <solrQueryParser defaultOperator="OR"/>

So I make the following query:
http://localhost:8080/apache-solr-4.0.0/core0/select?q=tag%3Aclothes~%2Bcid%3A14&sort=score+desc&rows=10&fl=tag+score&wt=json&indent=true

You will notice that I'm doing a search on the tag field against the string
"clothes" and the cid field against the long 14, and requesting that the
results come back sorted on descending score values. So I'm surprised to see
these results:
{
  "responseHeader":{
    "status":0,
    "QTime":1,
    "params":{
      "q":"tag:clothes~+cid:14",
      "sort":"score desc",
      "rows":"10",
      "fl":"tag score",
      "wt":"json",
      "indent":"true"}},
  "response":{"numFound":1835,"start":0,"maxScore":3.9238024,"docs":[
      {
        "tag":"Table Cloth",
        "score":3.9238024},
      {
        "tag":"Clothes",
        "score":3.9134552},
      {
        "tag":"Clothes",
        "score":3.9134552},
      {
        "tag":"Clothes",
        "score":3.9134552},
      {
        "tag":"Clothes",
        "score":3.9134552},
      {
        "tag":"Clothes",
        "score":3.9134552},
      {
        "tag":"Clothes",
        "score":3.9134552},
      {
        "tag":"Boys Clothes",
        "score":3.3968315},
      {
        "tag":"Everyday Clothes",
        "score":3.3968315},
      {
        "tag":"Designer Clothes",
        "score":3.3968315}]
  }}

Why does "Table Cloth" have a higher score than "Clothes" (which is an exact
textual match)? I could use some help understanding why I get these results
and how to tweak my query so that the results match my expectations.

Regards,

Tim



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Puzzled-by-search-score-tp4017904.html
Sent from the Solr - User mailing list archive at Nabble.com.


Solr 4.0 error message: Unsupported ContentType: Content-type:text/xml

2012-11-02 Thread Tom Burton-West
Hello all,

Trying to get Solr 4.0 up and running with a port of our production 3.6
schema and documents.

We are getting the following error message in the logs:

org.apache.solr.common.SolrException: Unsupported ContentType:
Content-type:text/xml  Not in: [application/xml, text/csv, text/json,
application/csv, application/javabin, text/xml, application/json]


We use exactly the same code without problem with Solr 3.6.


We are sending a ContentType 'text/xml'.

Is it likely that there is some other problem and this is just not quite
the right error message?

Tom


Re: Solr 4.0 error message: Unsupported ContentType: Content-type:text/xml

2012-11-02 Thread Jack Krupansky
That message makes it sound as if the literal text "Content-type:" is
included in your content type. How exactly are you setting/sending the
content type?


-- Jack Krupansky

-Original Message- 
From: Tom Burton-West

Sent: Friday, November 02, 2012 5:30 PM
To: solr-user@lucene.apache.org
Subject: Solr 4.0 error message: Unsupported ContentType: 
Content-type:text/xml


Hello all,

Trying to get Solr 4.0 up and running with a port of our production 3.6
schema and documents.

We are getting the following error message in the logs:

org.apache.solr.common.SolrException: Unsupported ContentType:
Content-type:text/xml  Not in: [application/xml, text/csv, text/json,
application/csv, application/javabin, text/xml, application/json]


We use exactly the same code without problem with Solr 3.6.


We are sending a ContentType 'text/xml'.

Is it likely that there is some other problem and this is just not quite
the right error message?

Tom 



Re: Solr 4.0 error message: Unsupported ContentType: Content-type:text/xml

2012-11-02 Thread Tom Burton-West
Thanks Jack,

That is exactly the problem.  Apparently earlier versions of Solr ignored
the extra text, which is why we didn't catch the bug in our code earlier.

Thanks for the quick response.

Tom
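
For anyone who hits the same error, the bug pattern was presumably something
like the following (a hypothetical reconstruction, not the actual client code),
with the header name baked into the header value:

// wrong: the value itself contains the literal text "Content-type:"
conn.setRequestProperty("Content-Type", "Content-type:text/xml");
// right: the value is just the media type
conn.setRequestProperty("Content-Type", "text/xml");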

On Fri, Nov 2, 2012 at 5:34 PM, Jack Krupansky j...@basetechnology.comwrote:

 That message makes it sound as if the literal text "Content-type:" is
 included in your content type. How exactly are you setting/sending the
 content type?

 -- Jack Krupansky

 -Original Message- From: Tom Burton-West
 Sent: Friday, November 02, 2012 5:30 PM
 To: solr-user@lucene.apache.org
 Subject: Solr 4.0 error message: Unsupported ContentType:
 Content-type:text/xml

 Hello all,

 Trying to get Solr 4.0 up and running with a port of our production 3.6
 schema and documents.

 We are getting the following error message in the logs:

  org.apache.solr.common.SolrException: Unsupported ContentType:
  Content-type:text/xml  Not in: [application/xml, text/csv, text/json,
  application/csv, application/javabin, text/xml, application/json]


 We use exactly the same code without problem with Solr 3.6.


 We are sending a ContentType 'text/xml'.

 Is it likely that there is some other problem and this is just not quite
 the right error message?

 Tom



Re: trouble instantiating CloudSolrServer

2012-11-02 Thread Mark Miller
I think the maven jars must be out of whack?

On Fri, Nov 2, 2012 at 6:38 AM, Markus Jelsma
markus.jel...@openindex.io wrote:
 Hi,

 We use trunk but got SolrJ 4.0 from Maven. Creating an instance of
 CloudSolrServer fails because its constructor calls a non-existent LBHttpSolrServer
 constructor: it attempts to create an instance by passing only an HttpClient.
 How is LBHttpSolrServer supposed to work without passing a SolrServer URL to
 it?

   public CloudSolrServer(String zkHost) throws MalformedURLException {
   this.zkHost = zkHost;
   this.myClient = HttpClientUtil.createClient(null);
   this.lbServer = new LBHttpSolrServer(myClient);
   this.updatesToLeaders = true;
   }

 java.lang.NoSuchMethodError:
 org.apache.solr.client.solrj.impl.LBHttpSolrServer.<init>(Lorg/apache/http/client/HttpClient;[Ljava/lang/String;)V
 at org.apache.solr.client.solrj.impl.CloudSolrServer.<init>(CloudSolrServer.java:84)

 Thanks,
 Markus



-- 
- Mark
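
For reference, SolrJ 4.0 also exposes a constructor that takes an explicitly
built LB server; a hedged sketch (ZooKeeper and Solr hosts are hypothetical):

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.client.solrj.impl.LBHttpSolrServer;

LBHttpSolrServer lb = new LBHttpSolrServer("http://solr1:8983/solr");
CloudSolrServer cloud = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181", lb);
cloud.setDefaultCollection("collection1");

That said, the NoSuchMethodError above points at mismatched jars on the
classpath rather than at the API itself.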


Re: SolrCloud indexing blocks if node is recovering

2012-11-02 Thread Mark Miller
Doesn't sound right. Still have the logs?

- Mark

On Fri, Nov 2, 2012 at 9:45 AM, Markus Jelsma
markus.jel...@openindex.io wrote:
 Hi,

 We just tested indexing some million docs from Hadoop to a 10 node 2 rep 
 SolrCloud cluster with this week's trunk. One of the nodes gave an OOM but 
 indexing continued without interruption. When i restarted the node indexing 
 stopped completely, the node tried to recover - which was unsuccessful. I 
 restarted the node again but that wasn't very helpful either. Finally i 
 decided to stop the node completely and see what happens - indexing resumed.

 Why or how won't the other nodes accept incoming documents when one node 
 behaves really bad? The dying node wasn't the node we were sending documents 
 to and we are not using CloudSolrServer yet (see other thread). Is this known 
 behavior? Is it a bug?

 Thanks,
 Markus



-- 
- Mark


Re: After adding field to schema, the field is not being returned in results.

2012-11-02 Thread Lance Norskog
If any value is in a bogus format, the entire document batch in that HTTP
request fails. That is the right timestamp format.
The index may be corrupted somehow. Can you try removing all of the files in
data/ and trying again?

- Original Message -
| From: Erick Erickson erickerick...@gmail.com
| To: solr-user@lucene.apache.org
| Sent: Friday, November 2, 2012 7:32:40 AM
| Subject: Re: After adding field to schema, the field is not being returned in 
results.
| 
| Well, I'm at my wits end. I tried your field definitions (using the
| exampledocs XML) and they work just fine. As far as if you mess up
| the date
| on the way in, you should be seeing stack traces in your log files.
| 
| The only way I see not getting the Sorry, no Term Info available :(
| message is if you don't have any values in the field. So, my guess is
| that
| you're not getting the format right and the docs aren't getting
| indexed,
| but that's just a guess. You can freely sort even if there are no
| values at
| all in a particular field. This can be indicated if you sort asc and
| desc
| and the order doesn't change. It just means the field is defined in
| the
| schema, not necessarily that there are any values in it.
| 
| So, I claim you have no date values in your index. The fact that you
| can
| sort is just an artifact of sortMissingFirst/Last doing something
| sensible.
| 
| Next question, are you absolutely sure that your indexing program and
| your
| searching program are pointing at the same server?
| 
| So what I'd do next is
| 1 create a simple XML doc that conforms to your schema and use the
| post.jar tool to send it to your server. Watch the output log for any
| date
| format exceptions.
| 2 Use the admin UI to insure that you can see terms in docs added
| this way.
| 3 from there back up and see what step in the indexing process isn't
| working (assuming that's the problem). Solr logs help here.
| 
| Note I'm completely PHP-ignorant, I have no clue whether the
| formatting
| you're doing is OK or not. You might try logging the value somewhere
| in
| your php so you an post that and/or include it in your sample XML
| file...
| 
| Best
| Erick
| 
| 
| On Fri, Nov 2, 2012 at 10:02 AM, Dotan Cohen dotanco...@gmail.com
| wrote:
| 
|  On Thu, Nov 1, 2012 at 9:28 PM, Lance Norskog goks...@gmail.com
|  wrote:
|   Have you uploaded data with that field populated? Solr is not
|   like a
|  relational database. It does not automatically populate a new field
|  when
|  you add it to the schema. If you sort on a field, a document with
|  no data
|  in that field comes first or last (I don't know which).
|  
| 
|  Thank you. In fact, I am being careful to try to pull up records
|  after
|  the date in which the application was updated to populate the
|  field.
| 
| 
|  --
|  Dotan Cohen
| 
|  http://gibberish.co.il
|  http://what-is-what.com
| 
| 


Re: How to migrate index from 3.6 to 4.0 with solrcloud

2012-11-02 Thread Erick Erickson
It's not clear whether your index is already sharded or not. But it doesn't
matter because:
1 if it's not sharded, there's no shard-splitter (yet). So you can't copy
the right parts of your single index to the right shard.
2 if your 3.6 index _is_ sharded already, I pretty much guarantee that it
wasn't created with the same hashing algorithm that SolrCloud uses, so just
copying the shards to some node on the cloud won't work.

In either case, you'll have to re-index everything fresh.

Best
Erick


On Thu, Nov 1, 2012 at 9:34 PM, Zeng Lames lezhi.z...@gmail.com wrote:

 Dear all,

 we have an existing index with solr 3.6, and now we want to migrate it to
  Solr 4.0 with shards (2 shards, 2 nodes in a shard). The questions are:

 1.which node should i copy the existing index files to? any node in any
 shard ?

 2. if copy the index files into any one of nodes, can it be replicated to
 the 'right' shard according hash code?

 3. if above steps can't fulfill a index migrate to solrcloud, how should we
 do?

 thanks a lot
 Lames



Re: solr search issue

2012-11-02 Thread Erick Erickson
First, define a free text search. If what you're after is that your terms
(i.e. q=term1 term2) get spread
across multiple fields, simply add them to your qf parameter
(qf=field1,field2). If you want the terms
bound to a particular field, it's just the usual q=field:term, in which
case any field term does NOT get
spread amongst all the fields in your qf parameter.

Best
Erick


On Fri, Nov 2, 2012 at 1:56 AM, Romita Saha romita.s...@sg.panasonic.comwrote:

 Hi,

 Thank you for your reply. What if I want to do a free text search?

 Thanks and regards,
 Romita


 From:   Gora Mohanty g...@mimirtech.com
 To: solr-user@lucene.apache.org,
 Date:   11/02/2012 12:36 PM
 Subject:Re: solr search issue



 On 2 November 2012 09:51, Romita Saha romita.s...@sg.panasonic.com
 wrote:
 
  Hi,
 
   I am trying to search a database. In my database I have a field "level2".
 
  My query:
 

  http://localhost:8983/solr/db/select/?defType=dismax&q=search%20level2&qf=data%20id^2%20&start=0&rows=11&fl=data,id


 Where did you get this syntax from? If you want to search just on the
 field level2, you should have:
  http://localhost:8983/solr/db/select/?q=term&defType=dismax&qf=level2
 where term is your search term. (I have omitted boosts, and extra
 parameters.)

 Regards,
 Gora




Re: How to migrate index from 3.6 to 4.0 with solrcloud

2012-11-02 Thread Zeng Lames
thanks all for your prompt response. I think I know what I should do now.
Thank you so much again.


On Fri, Nov 2, 2012 at 2:36 PM, Erick Erickson erickerick...@gmail.comwrote:

 It's not clear whether your index is already sharded or not. But it doesn't
 matter because:
 1 if it's not sharded, there's no shard-splitter (yet). So you can't copy
 the right parts of your single index to the right shard.
 2 if your 3.6 index _is_ sharded already, I pretty much guarantee that it
 wasn't created with the same hashing algorithm that SolrCloud uses, so just
 copying the shards to some node on the cloud won't work.

  In either case, you'll have to re-index everything fresh.

 Best
 Erick


 On Thu, Nov 1, 2012 at 9:34 PM, Zeng Lames lezhi.z...@gmail.com wrote:

  Dear all,
 
  we have an existing index with solr 3.6, and now we want to migrate it to
   Solr 4.0 with shards (2 shards, 2 nodes in a shard). The questions are:
 
  1.which node should i copy the existing index files to? any node in any
  shard ?
 
  2. if copy the index files into any one of nodes, can it be replicated to
  the 'right' shard according hash code?
 
  3. if above steps can't fulfill a index migrate to solrcloud, how should
 we
  do?
 
  thanks a lot
  Lames
 



Re: solr search issue

2012-11-02 Thread Romita Saha
Hi,

I am new to solr. Could you kindly explain a bit about defining free text 
search.

In my database I have two columns. One is id another is data.
I want my query to spread across multiple fields. When I search for a
parameter from the id field, it searches in both fields. However,
whenever I search for a parameter from the data field, it only searches in data.
Below is my query.

http://localhost:8983/solr/db/select/?defType=dismax&q=2&qf=data
id^2&start=0&rows=11&fl=data,id

In my table, id=2 for data=level2.
   id=4 for data=cashier2.

When I search q=2&qf=data id, it searches for '2' in the data field also and
gives me both results, i.e. data=level2 and data=cashier2.
However, when I search for q=cashier2&qf=data id, it only gives me the result
data=cashier2 and not data=level2 (please note that id=2 for data =
level2. Ideally it should break the query into cashier+2 and search in the id
field as well).
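
For what it's worth, the "break the query into cashier+2" behaviour comes
from the field's analyzer, not from dismax itself. A sketch of a schema.xml
fieldType that would split on letter/number boundaries at both index and
query time (the type name "text_split" is made up):

<fieldType name="text_split" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1" generateNumberParts="1" splitOnNumerics="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>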


Thanks and regards,
Romita Saha

Panasonic RD Center Singapore
Blk 1022 Tai Seng Avenue #06-3530
Tai Seng Ind. Est. Singapore 534415
DID: (65) 6550 5383 FAX: (65) 6550 5459
email: romita.s...@sg.panasonic.com



From:   Erick Erickson erickerick...@gmail.com
To: solr-user@lucene.apache.org, 
Date:   11/02/2012 02:42 PM
Subject:Re: solr search issue



First, define "a free text search". If what you're after is that your
terms
(i.e. q=term1 term2) get spread
across multiple fields, simply add them to your qf parameter
(qf=field1,field2). If you want the terms
bound to a particular field, it's just the usual q=field:term, in which
case any field term does NOT get
spread amongst all the fields in your qf parameter.

Best
Erick


On Fri, Nov 2, 2012 at 1:56 AM, Romita Saha 
romita.s...@sg.panasonic.comwrote:

 Hi,

 Thank you for your reply. What if I want to do a free text search?

 Thanks and regards,
 Romita


 From:   Gora Mohanty g...@mimirtech.com
 To: solr-user@lucene.apache.org,
 Date:   11/02/2012 12:36 PM
 Subject:Re: solr search issue



 On 2 November 2012 09:51, Romita Saha romita.s...@sg.panasonic.com
 wrote:
 
  Hi,
 
  I am trying to search a database. In my database I have a field "level2".
 
  My query:
 

 
http://localhost:8983/solr/db/select/?defType=dismax&q=search%20level2&qf=data%20id^2&start=0&rows=11&fl=data,id


 Where did you get this syntax from? If you want to search just on the
 field "level2", you should have:
 http://localhost:8983/solr/db/select/?q=term&defType=dismax&qf=level2
 where "term" is your search term. (I have omitted boosts, and extra
 parameters.)

 Regards,
 Gora





Solr Replication is not Possible on RAMDirectory?

2012-11-02 Thread deniz
Hi all, I am trying to set up a master/slave system by following this page:
http://wiki.apache.org/solr/SolrReplication

I was able to set it up and did some experiments with it, but when I try to
use RAMDirectory for the index, I get errors when indexing.

While master and slave are both using a non-RAM directory, everything is
okay... but when I try to use RAMDirectory on both, I get the error below:

16:40:31.626 [qtp28208563-24] ERROR org.apache.solr.core.SolrCore -
org.apache.lucene.index.IndexNotFoundException: no segments* file found in
org.apache.lucene.store.RAMDirectory@7e693f
lockFactory=org.apache.lucene.store.NativeFSLockFactory@92c787: files: []
at
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:741)
at
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:630)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:343)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:639)
at
org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:75)
at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:62)
at
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:191)
at
org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:77)
at
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:511)
at
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:87)
at
org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
at
org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1016)
at
org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:157)
at
org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:455)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:276)
at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1337)
at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
at
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:233)
at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)
at
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)
at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)
at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
at
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)
at
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
at org.eclipse.jetty.server.Server.handle(Server.java:351)
at
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:454)
at
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:47)
at
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:890)
at
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:944)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:634)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:230)
at
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:66)
at
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:254)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:599)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:534)
at java.lang.Thread.run(Unknown Source)

16:40:31.627 [qtp28208563-24] ERROR o.a.solr.servlet.SolrDispatchFilter -
null:org.apache.lucene.index.IndexNotFoundException: no segments* file found
in org.apache.lucene.store.RAMDirectory@7e693f
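
For reference, the directory implementation is selected by the
directoryFactory element in solrconfig.xml; a sketch of the RAM variant,
following the stock system-property-default pattern:

<!-- sketch: selects RAMDirectory unless solr.directoryFactory overrides it -->
<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>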

SolrCloud: general questions

2012-11-02 Thread ku3ia
Hi all!
We plan to migrate from Solr 3.5 to SolrCloud 4.0. We ran some tests and I
want to confirm the results with you.

So, what I have on tests:
Ubuntu 12.04 LTS, Oracle JDK 7u7, Jetty 8, SolrCloud 4.0, 4 shards (4 JVM's
on the same machine on different ports [9080, 9081, 9082, 9083]), no
replicas

My questions are:
1) Is it true that I may send data to any of the shards [9080, 9081, 9082,
9083] and not care about how SolrCloud will distribute the data between
shards? What algorithm is used: round-robin?

2) For example, in SolrCloud there is a document:
<doc><field name="id">1</field><field name="name">this is Solr 3.5</field></doc>
I have no information about which shard this doc is in. I need to update the
information in the field "name". The new doc is:
<doc><field name="id">1</field><field name="name">this is SolrCloud</field></doc>
Is it true that I may send this doc to any of the shards [9080, 9081, 9082,
9083] and, after a commit, when I run the query, I'll have "this is SolrCloud"
instead of "this is Solr 3.5" in the results? As I see it, old data is still in
the index until an optimize is done?

3) Is it true that delete-by-query works regardless of where the request is
sent?

4) My -DnumShards=4. If I need to expand SolrCloud, for example, to 6 shards,
I need to remove the Zookeeper data directory, set -DnumShards to 6 and
restart Jetty. Can I set -DnumShards=20 and only add new shards in the future
without any removal and JVM restarts?

5) Currently we have 30 shards with 50M docs. What scheme do you advise:
shards with ~15M docs, or more shards with fewer docs each? Which will be
faster: searching shards with ~15M docs, or searching more shards with fewer
docs each? The expected count of docs is ~1,500,000,000.

Thanks for your responses.





how to get solr termVectorComponent results using solrj

2012-11-02 Thread cese
I am trying to write this query:

localhost/solr/tvrh/?q=queryString&version=2.2&indent=on&tv.tf_idf=true

I want to get the tf-idf values below:

<lst name="doc-40797">
<str name="uniqueKey">test20508</str>
<lst name="content">
<lst name="98">
<double name="tf-idf">0.002304147465437788</double>
</lst>
<lst name="apaan">
<double name="tf-idf">0.1</double>
</lst>
<lst name="aryadea">
<double name="tf-idf">1.0</double>
</lst>
<lst name="chelsea">
<double name="tf-idf">0.005208</double>
</lst>
<lst name="gua">
<double name="tf-idf">0.0</double>
</lst>
<lst name="osa">
<double name="tf-idf">0.004662004662004662</double>
</lst>
<lst name="pegangı">
<double name="tf-idf">0.5</double>
</lst>
<lst name="rt">
<double name="tf-idf">1.4009526478005044E-4</double>
</lst>
<lst name="vs">
<double name="tf-idf">0.0030581039755351682</double>
</lst>
</lst>
</lst>
<str name="uniqueKeyFieldName">id</str>
<lst name="doc-40746">
<str name="uniqueKey">test20457</str>
<lst name="content">
<lst name="075">
<double name="tf-idf">0.027776</double>
</lst>
<lst name="9">
<double name="tf-idf">9.76657876745776E-5</double>
</lst>
<lst name="atlético">
<double name="tf-idf">0.045454545454545456</double>
</lst>
<lst name="co">
<double name="tf-idf">1.1130899376669635E-4</double>
</lst>
<lst name="http">
<double name="tf-idf">1.034233116144379E-4</double>
</lst>
<lst name="jorna">
<double name="tf-idf">0.25</double>
</lst>
<lst name="lh">
<double name="tf-idf">0.07142857142857142</double>
</lst>
<lst name="ngn">
<double name="tf-idf">0.5</double>
</lst>
<lst name="osa">
<double name="tf-idf">0.004662004662004662</double>
</lst>
<lst name="puntuaciones">
<double name="tf-idf">1.0</double>
</lst>
<lst name="t">
<double name="tf-idf">7.038783698176955E-5</double>
</lst>
<lst name="vavelco">
<double name="tf-idf">0.2</double>
</lst>
<lst name="vía">
<double name="tf-idf">0.03125</double>
</lst>
</lst>
</lst>
<str name="uniqueKeyFieldName">id</str>

Up to now;

SolrQuery query = new SolrQuery(queryString);
query.setQueryType("/tvrh");
query.setParam("tv.tf_idf", true);
QueryResponse response = server.query(query);


I have written the query, and I think I am going to need the QueryResponse
object. But I don't know what to do next to get those tf-idf values
using SolrJ. Or is there a better way to get tf-idf values using SolrJ?
Thanks.
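
A sketch of one way to dig the values out of the raw response, assuming the
term vector section is keyed "termVectors" (as in the XML above) and that the
field of interest is "content":

import java.util.Map;
import org.apache.solr.common.util.NamedList;

NamedList<Object> tvs =
    (NamedList<Object>) response.getResponse().get("termVectors");
for (Map.Entry<String, Object> doc : tvs) {
    // Skip non-document entries such as uniqueKeyFieldName.
    if (!(doc.getValue() instanceof NamedList)) continue;
    NamedList<Object> fields = (NamedList<Object>) doc.getValue();
    NamedList<Object> terms = (NamedList<Object>) fields.get("content");
    if (terms == null) continue;
    for (Map.Entry<String, Object> term : terms) {
        NamedList<Object> stats = (NamedList<Object>) term.getValue();
        Double tfIdf = (Double) stats.get("tf-idf");
        System.out.println(doc.getKey() + " / " + term.getKey() + " : " + tfIdf);
    }
}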






trouble instantiating CloudSolrServer

2012-11-02 Thread Markus Jelsma
Hi,

We use trunk but got SolrJ 4.0 from Maven. Creating an instance of
CloudSolrServer fails because its constructor calls a non-existent
LBHttpSolrServer constructor; it attempts to create an instance by passing
only an HttpClient. How is LBHttpSolrServer supposed to work without passing
a SolrServer URL to it?

  public CloudSolrServer(String zkHost) throws MalformedURLException {
  this.zkHost = zkHost;
  this.myClient = HttpClientUtil.createClient(null);
  this.lbServer = new LBHttpSolrServer(myClient);
  this.updatesToLeaders = true;
  }

java.lang.NoSuchMethodError:
org.apache.solr.client.solrj.impl.LBHttpSolrServer.<init>(Lorg/apache/http/client/HttpClient;[Ljava/lang/String;)V
at
org.apache.solr.client.solrj.impl.CloudSolrServer.<init>(CloudSolrServer.java:84)
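
A NoSuchMethodError like this typically points at mismatched jar versions on
the classpath. Once client and server jars are aligned, a minimal sketch of
the intended usage (the ZooKeeper address and collection name are made up):

import org.apache.solr.client.solrj.impl.CloudSolrServer;

CloudSolrServer server = new CloudSolrServer("127.0.0.1:9000"); // zkHost
server.setDefaultCollection("collection1");
server.connect();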

Thanks,
Markus


Re: SOLR - To point multiple indexes in different folder

2012-11-02 Thread ravi.n
Erick,

We are forming a request like the one below against the default /select
request handler; will this cause an issue?
So far we are not facing any recursion issues.

http://94.101.147.150:8080/solr/select/?q=*%3A*&version=2.2&start=0&rows=10&indent=on&shards=localhost:8080/solr/coll1,localhost:8080/solr/coll2,localhost:8080/solr/coll3,localhost:8080/solr/coll4,localhost:8080/solr/coll5,localhost:8080/solr/coll6,localhost:8080/solr/coll7

Below is the solrconfig for /select:

  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <int name="rows">10</int>
      <str name="df">recordid</str>
    </lst>
  </requestHandler>

recordid is the unique field in the document.
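
If the shard list is static, one pattern that also avoids any chance of the
recursion mentioned above is a dedicated handler carrying the shard list,
queried instead of /select (a sketch; the handler name "/distrib" is made up):

  <requestHandler name="/distrib" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="df">recordid</str>
      <str name="shards">localhost:8080/solr/coll1,localhost:8080/solr/coll2,localhost:8080/solr/coll3,localhost:8080/solr/coll4,localhost:8080/solr/coll5,localhost:8080/solr/coll6,localhost:8080/solr/coll7</str>
    </lst>
  </requestHandler>

The sub-requests then hit /select on each core, which itself carries no
shards parameter.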

Regards,
Ravi





Re: SolrCloud: general questions

2012-11-02 Thread Tomás Fernández Löbbe

 My questions are:
 1) Is it true that I may send data to any of the shards [9080, 9081, 9082,
 9083] and not care about how SolrCloud will distribute the data between
 shards? What algorithm is used: round-robin?

It is true: the document is forwarded to the correct shard automatically.
It's not round-robin; it's a hash function applied to the unique key of the
document.
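
A sketch in SolrJ terms (the ports are the ones from this thread; any node
would do):

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

HttpSolrServer any = new HttpSolrServer("http://localhost:9081/solr");
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "1");
doc.addField("name", "this is Solr 3.5");
any.add(doc);   // forwarded to the shard owning hash(id), wherever that is
any.commit();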



 2) For example, in SolrCloud there is a document:
 <doc><field name="id">1</field><field name="name">this is Solr
 3.5</field></doc>
 I have no information about which shard this doc is in. I need to update
 the information in the field "name". The new doc is:
 <doc><field name="id">1</field><field name="name">this is
 SolrCloud</field></doc>
 Is it true that I may send this doc to any of the shards [9080, 9081, 9082,
 9083] and, after a commit, when I run the query, I'll have "this is
 SolrCloud" instead of "this is Solr 3.5" in the results? As I see it, old
 data is still in the index until an optimize is done?

You'll only see the updated document, yes. The hash function will give the
same result on the "id" field, so the update will go to the same shard as
before, where the document will be updated (the old one deleted and the new
one inserted). The old document will remain in the index (not visible, as you
said) until the segment where it is located is merged; this
can be due to an optimize or background segment merging.
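
So the update in question is just a re-add with the same unique key (a
sketch, reusing the server from the snippet above):

SolrInputDocument updated = new SolrInputDocument();
updated.addField("id", "1");
updated.addField("name", "this is SolrCloud");
any.add(updated);  // same hash("1"), same shard; the old version is deleted
any.commit();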



 3) Is it true that delete-by-query works regardless of where the request is
 sent?

yes.


 4) My -DnumShards=4. If I need to expand SolrCloud, for example, to 6
 shards, I need to remove the Zookeeper data directory, set -DnumShards to 6
 and restart Jetty. Can I set -DnumShards=20 and only add new shards in the
 future without any removal and JVM restarts?

I think you could remove the collection and create it again. See the new
Collections API. You need to have at least as many Solr instances (or Solr
cores) as the number of shards in order to be able to do anything with your
collection. You won't be able to index or search if the number of nodes <
number of shards. Any change in the number of shards requires re-indexing
everything.
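
For reference, a sketch of the kind of Collections API call meant here (the
collection name and shard count are illustrative):

http://localhost:9080/solr/admin/collections?action=CREATE&name=mycollection&numShards=6

and action=DELETE (with the same name parameter) to remove it first.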



 5) Currently we have 30 shards with 50M docs. What scheme do you advise:
 shards with ~15M docs, or more shards with fewer docs each? Which will be
 faster: searching shards with ~15M docs, or searching more shards with
 fewer docs each? The expected count of docs is ~1,500,000,000.


I think you'll have to test it, as it will depend very much on your context
(the shape of your docs/index, your queries and other use cases). Shards with
15M docs don't sound crazy, but I never tested with 100 shards, really.

Tomás



 Thanks for your responses.






Re: SolrCloud Tomcat configuration: problems and doubts.

2012-11-02 Thread Luis Cappa Banda
Hello, Mark!

How are you? Thanks a lot for helping me. You were right about the jetty.host
parameter. My final test solr.xml looks like:

  <cores adminPath="/admin/cores" defaultCoreName="items_en"
         host="localhost" hostPort="9080" hostContext="items_en">
    <core name="items_en" instanceDir="items_en" />
  </cores>


I've noticed that the 'hostContext' parameter was also required, so I
included it. After those corrections the Cloud graph tree looks right, and
executing queries doesn't return a 503 error. Phew! However, I noticed in the
Cloud graph tree that a "collection1" appears too, pointing to
http://localhost:8983/solr. I will continue testing in case I missed
something, but it looks like another collection is being created with default
parameters (collection name, port) without control.

While using Apache Tomcat I was forced to include in catalina.sh (or
setenv.sh) the following environment parameters, as I told you before:

JAVA_OPTS="-DzkHost=127.0.0.1:9000 -Dcollection.configName=items_en"


Just three questions more:

1. That's a problem for me, because I would like to deploy in each Tomcat
instance more than one Solr server with different configuration files (I
mean, different configName parameters), so including that JAVA_OPTS forces
me to deploy in that Tomcat server only Solr servers with this kind of
configuration. In a production environment I would like to deploy in a
single Tomcat instance at least four Solr servers, one for each kind of
document that I will index and query. Do you know any way to configure
the configName for each Solr server instance? Is it possible to configure it
inside the solr.xml file? It would also make sense to deploy in each Solr
server a multi-core configuration, each core with its configName allocated
in Zookeeper, but again, using that kind of on-the-fly JAVA_OPTS params
configuration makes it impossible, :-( (one possible workaround is sketched
after question 3 below)

2. The other question is about indexing. What is the best way to do plain
indexing (I mean, without DIH or similar) in SolrCloud? Maybe configuring an
LBHttpSolrServer that decides by itself which Solr server instance is best
for each indexing process?

3. The following question may sound strange, but the thing is that I would
like to help in the Apache Solr project by contributing code (bug fixes,
new features, etc.). How can I contribute to the community?
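
Regarding question 1, one possible route (assuming the 4.0 zkcli.sh supports
the linkconfig command) is to link each collection to its config set in
ZooKeeper once, so that no per-JVM -Dcollection.configName is needed:

./bin/zkcli.sh -cmd linkconfig -zkhost 127.0.0.1:9000 -collection items_en -confname items_en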

Thanks a lot.

Best Regards,


Luis Cappa.


2012/10/31 Mark Miller markrmil...@gmail.com

 A big difference if you are using tomcat is that you still need to
 specify jetty.port - unless you change the name of that sys prop in
 solr.xml.

 Some more below:

 On Wed, Oct 31, 2012 at 2:09 PM, Luis Cappa Banda luisca...@gmail.com
 wrote:
  Hello!
 
  How are you? I followed the SolrCloud Wiki tutorial and noticed that it all
  worked perfectly with Jetty and a very basic configuration. My first
  impression was that SolrCloud is amazing, and I'm interested in deploying a
  more complex, near-production-environment SolrCloud architecture for
  testing purposes. I'm using Tomcat as the application server, so I've
  started testing with it.
 
  I've installed the ZooKeeper service on a single machine and started it up
  with the following configuration:

  1.)

  ~zookeperhome/conf/zoo.cfg

  tickTime=2000
  initLimit=10
  syncLimit=5
  dataDir=~zookeperhome/data/
  clientPort=9000
 
  2.) I am testing with a single-core Solr server called 'items_en'. The
  configuration is as follows:

  Indexes conf/data tree: /mnt/data-store/solr/
                            /solr.xml
                            /zoo.cfg
                            /items_en/
                              /conf/
                                schema.xml
                                solrconfig.xml
                                etc.

  So we have a simple configuration where the conf files and the index data
  files are in the same path.
 
  3.) OK, so we have the Solr server configured, but I have to save the
  configuration into ZooKeeper. I do it as follows:

  ./bin/zkcli.sh -cmd upconfig -zkhost 127.0.0.1:9000 -confdir
  /mnt/data-store/solr/items_en/conf -collection items_en -confname items_en
 
  And it seems to work perfectly, because if I use the ZooKeeper client and
  execute the 'ls' command, the files appear:

  ./bin/zkCli.sh -server localhost:9000

  [zk: localhost:9000(CONNECTED) 1] ls /configs/items_en
  [admin-extra.menu-top.html, currency.xml, protwords.txt,
  mapping-FoldToASCII.txt, solrconfig.xml, lang, spellings.txt,
  mapping-ISOLatin1Accent.txt, admin-extra.html, xslt, scripts.conf,
  synonyms.txt, update-script.js, velocity, elevate.xml, zoo.cfg,
  admin-extra.menu-bottom.html, stopwords_en.txt, schema.xml]
  4.) I would like all the Solr servers deployed in that Tomcat instance to
  point to the ZooKeeper service on port 9000, so I included the following
  JAVA_OPTS, hoping that they'll make that possible:

  JAVA_OPTS=-DzkHost=127.0.0.1:9000 

RE: Solr 4.0 admin panel

2012-11-02 Thread Tannen, Lev (USAEO) [Contractor]
Thank you, Peter.
I know this. I cannot access the admin page in either case, single-core or
multicore. In the single-core case I got a generic Solr page, and in the
multicore case I got just a NOT FOUND message. Select works in both cases.
Lev

-Original Message-
From: Péter Király [mailto:kirun...@gmail.com] 
Sent: Wednesday, October 31, 2012 4:48 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 4.0 admin panel

Dear Lev,

core0 is only available in a multicore environment. You should start Solr as

java -Dsolr.solr.home=multicore -jar start.jar

cheers,
Péter

2012/10/31 Tannen, Lev (USAEO) [Contractor] lev.tan...@usdoj.gov:
 Hi,
 I apologize for the trivial question, but I cannot find out what is wrong. I
 tried to switch from Solr 3.6 to Solr 4.0. All I have done is
 download and unzip the official binary file for Windows (32-bit) and
 run just an example, and it does not work.
 In Solr 3.6 the request http://localhost:8983/solr/admin returns an
 administration panel. In Solr 4.0 it returns just a general Apache Solr page
 with dead links. Dead links means that when I click on them nothing happens.
 I have tried to run the multicore example, and
 http://localhost:8983/solr/core0/admin returns not found.
 An attempt to run the cloud example also returns a generic page. In all
 cases search works. I can even add a document to the index and search for
 it. Only admin does not work.

 Does Solr 4.0 work differently from 3.6?
 Please advise.
 Thank you.
 Lev Tannen

 Info: Operating system --- Windows 7 enterprise
   Java---  jre6 or jdk6

 Log:
 C:\myWork\apache-solr-4.0.0\examplejava -jar start.jar
 2012-10-31 11:44:44.526:INFO:oejs.Server:jetty-8.1.2.v20120308
 2012-10-31 11:44:44.526:INFO:oejdp.ScanningAppProvider:Deployment 
 monitor C:\myWork\apache-solr-4.0.0\example\contexts at interval 0
 2012-10-31 11:44:44.542:INFO:oejd.DeploymentManager:Deployable added: 
 C:\myWork\apache-solr-4.0.0\example\contexts\solr.xml
 2012-10-31 11:44:45.290:INFO:oejw.StandardDescriptorProcessor:NO JSP 
 Support for /solr, did not find org.apache.jasper.servlet.JspServlet
 2012-10-31 11:44:45.322:INFO:oejsh.ContextHandler:started 
 o.e.j.w.WebAppContext{/solr,file:/C:/myWork/apache-solr-4.0.0/example/
 solr-webapp/webapp/},C:\myWork\a 
 pache-solr-4.0.0\example/webapps/solr.war
 2012-10-31 11:44:45.322:INFO:oejsh.ContextHandler:started 
 o.e.j.w.WebAppContext{/solr,file:/C:/myWork/apache-solr-4.0.0/example/
 solr-webapp/webapp/},C:\myWork\a 
 pache-solr-4.0.0\example/webapps/solr.war
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader 
 locateSolrHome
 INFO: JNDI not configured for solr (NoInitialContextEx) Oct 31, 2012 
 11:44:45 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
 INFO: solr home defaulted to 'solr/' (could not find system property 
 or JNDI) Oct 31, 2012 11:44:45 AM 
 org.apache.solr.core.SolrResourceLoader init
 INFO: new SolrResourceLoader for deduced Solr Home: 'solr/'
 Oct 31, 2012 11:44:45 AM org.apache.solr.servlet.SolrDispatchFilter 
 init
 INFO: SolrDispatchFilter.init()
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader 
 locateSolrHome
 INFO: JNDI not configured for solr (NoInitialContextEx) Oct 31, 2012 
 11:44:45 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
 INFO: solr home defaulted to 'solr/' (could not find system property 
 or JNDI) Oct 31, 2012 11:44:45 AM 
 org.apache.solr.core.CoreContainer$Initializer initialize
 INFO: looking for solr.xml: 
 C:\myWork\apache-solr-4.0.0\example\solr\solr.xml
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.CoreContainer init
 INFO: New CoreContainer 2091149
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.CoreContainer load
 INFO: Loading CoreContainer using Solr Home: 'solr/'
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader 
 init
 INFO: new SolrResourceLoader for directory: 'solr/'
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.CoreContainer load
 INFO: Registering Log Listener
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.CoreContainer create
 INFO: Creating SolrCore 'collection1' using instanceDir: 
 solr\collection1 Oct 31, 2012 11:44:45 AM 
 org.apache.solr.core.SolrResourceLoader init
 INFO: new SolrResourceLoader for directory: 'solr\collection1\'
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrConfig initLibs
 INFO: Adding specified lib dirs to ClassLoader Oct 31, 2012 11:44:45 
 AM org.apache.solr.core.SolrResourceLoader replaceClassLoader
 INFO: Adding 
 'file:/C:/myWork/apache-solr-4.0.0/contrib/extraction/lib/apache-mime4
 j-core-0.7.2.jar' to classloader Oct 31, 2012 11:44:45 AM 
 org.apache.solr.core.SolrResourceLoader replaceClassLoader
 INFO: Adding 
 'file:/C:/myWork/apache-solr-4.0.0/contrib/extraction/lib/apache-mime4
 j-dom-0.7.2.jar' to classloader Oct 31, 2012 11:44:45 AM 
 org.apache.solr.core.SolrResourceLoader replaceClassLoader
 INFO: Adding 
 

RE: Solr 4.0 admin panel

2012-11-02 Thread Tannen, Lev (USAEO) [Contractor]
Thank you, James.

In Solr 3.6, http://localhost:8983/solr/admin links to the admin panel. So
the question remains: how does one invoke the admin panel in Solr 4.0?
Does it mean that there is no such thing as an admin panel in Solr 4.0?

Lev

-Original Message-
From: James Ji [mailto:jiayu...@gmail.com] 
Sent: Wednesday, October 31, 2012 4:51 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 4.0 admin panel

The right address to go to on Solr 4.0 is http://localhost:8983/solr/.
http://localhost:8983/solr/admin links to nothing if you go and check the
servlet.

Cheers

James

On Wed, Oct 31, 2012 at 3:25 PM, Tannen, Lev (USAEO) [Contractor]  
lev.tan...@usdoj.gov wrote:

 Hi,
 I apologize for the trivial question, but I cannot find out what is wrong.
 I tried to switch from Solr 3.6 to Solr 4.0. All I have done is
 download and unzip the official binary file for Windows (32-bit)
 and run just an example, and it does not work.
 In Solr 3.6 the request http://localhost:8983/solr/admin returns an
 administration panel. In Solr 4.0 it returns just a general Apache Solr
 page with dead links. Dead links means that when I click on them
 nothing happens.
 I have tried to run the multicore example, and
 http://localhost:8983/solr/core0/admin returns not found.
 An attempt to run the cloud example also returns a generic page. In all
 cases search works. I can even add a document to the index and search
 for it. Only admin does not work.

 Does Solr 4.0 work differently from 3.6?
 Please advise.
 Thank you.
 Lev Tannen

 Info: Operating system --- Windows 7 enterprise
   Java---  jre6 or jdk6

 Log:
 C:\myWork\apache-solr-4.0.0\examplejava -jar start.jar
 2012-10-31 11:44:44.526:INFO:oejs.Server:jetty-8.1.2.v20120308
 2012-10-31 11:44:44.526:INFO:oejdp.ScanningAppProvider:Deployment 
 monitor C:\myWork\apache-solr-4.0.0\example\contexts at interval 0
 2012-10-31 11:44:44.542:INFO:oejd.DeploymentManager:Deployable added:
 C:\myWork\apache-solr-4.0.0\example\contexts\solr.xml
 2012-10-31 11:44:45.290:INFO:oejw.StandardDescriptorProcessor:NO JSP 
 Support for /solr, did not find org.apache.jasper.servlet.JspServlet
 2012-10-31 11:44:45.322:INFO:oejsh.ContextHandler:started
 o.e.j.w.WebAppContext{/solr,file:/C:/myWork/apache-solr-4.0.0/example/
 solr-webapp/webapp/},C:\myWork\a 
 pache-solr-4.0.0\example/webapps/solr.war
 2012-10-31 11:44:45.322:INFO:oejsh.ContextHandler:started
 o.e.j.w.WebAppContext{/solr,file:/C:/myWork/apache-solr-4.0.0/example/
 solr-webapp/webapp/},C:\myWork\a 
 pache-solr-4.0.0\example/webapps/solr.war
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader
 locateSolrHome
 INFO: JNDI not configured for solr (NoInitialContextEx) Oct 31, 2012 
 11:44:45 AM org.apache.solr.core.SolrResourceLoader
 locateSolrHome
 INFO: solr home defaulted to 'solr/' (could not find system property 
 or
 JNDI)
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader 
 init
 INFO: new SolrResourceLoader for deduced Solr Home: 'solr/'
 Oct 31, 2012 11:44:45 AM org.apache.solr.servlet.SolrDispatchFilter 
 init
 INFO: SolrDispatchFilter.init()
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader
 locateSolrHome
 INFO: JNDI not configured for solr (NoInitialContextEx) Oct 31, 2012 
 11:44:45 AM org.apache.solr.core.SolrResourceLoader
 locateSolrHome
 INFO: solr home defaulted to 'solr/' (could not find system property 
 or
 JNDI)
 Oct 31, 2012 11:44:45 AM 
 org.apache.solr.core.CoreContainer$Initializer
 initialize
 INFO: looking for solr.xml:
 C:\myWork\apache-solr-4.0.0\example\solr\solr.xml
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.CoreContainer init
 INFO: New CoreContainer 2091149
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.CoreContainer load
 INFO: Loading CoreContainer using Solr Home: 'solr/'
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader 
 init
 INFO: new SolrResourceLoader for directory: 'solr/'
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.CoreContainer load
 INFO: Registering Log Listener
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.CoreContainer create
 INFO: Creating SolrCore 'collection1' using instanceDir: 
 solr\collection1 Oct 31, 2012 11:44:45 AM 
 org.apache.solr.core.SolrResourceLoader init
 INFO: new SolrResourceLoader for directory: 'solr\collection1\'
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrConfig initLibs
 INFO: Adding specified lib dirs to ClassLoader Oct 31, 2012 11:44:45 
 AM org.apache.solr.core.SolrResourceLoader
 replaceClassLoader
 INFO: Adding
 'file:/C:/myWork/apache-solr-4.0.0/contrib/extraction/lib/apache-mime4j-core-0.7.2.jar'
 to classloader
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader
 replaceClassLoader
 INFO: Adding
 'file:/C:/myWork/apache-solr-4.0.0/contrib/extraction/lib/apache-mime4j-dom-0.7.2.jar'
 to classloader
 Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader
 replaceClassLoader
 INFO: Adding