Separating requests over 2 ports is a nice solution when you have multiple
user types. I like that, although I don't think I need it for this case.
I'm just going to go the 'normal' caching-route and see where that takes me,
instead of thinking it can't be done upfront :-)
Thanks!
hossman
Hi,
Here's what I've got (multiple Solr instances within the same Tomcat server)
In
/var/tomcat/conf/Catalina/localhost/
For an instance 'foo' :
foo.xml :
<Context path="foo" docBase="/var/tomcat/solrapp/solr.war" debug="0"
         crossContext="true">
  <Environment name="solr/home" type="java.lang.String" value="/var/solr/foo"/>
</Context>
Jérôme Etévé wrote:
[...]
/var/solr/foo/ is the solr home for this instance (where you'll put
your schema.xml , solrconfig.xml etc.. ) .
Thanks for the input Jérôme, I gave it another try and discovered that
what I was doing wrong was copying the solr/example/ directory to what
you call
On 10/9/07, Chris Laux [EMAIL PROTECTED] wrote:
Jérôme Etévé wrote:
[...]
/var/solr/foo/ is the solr home for this instance (where you'll put
your schema.xml , solrconfig.xml etc.. ) .
Thanks for the input Jérôme, I gave it another try and discovered that
what I was doing wrong was
Hi Hoss,
Yes, I know that, but I want to have a proper dummy backup (something that
could be kept in a very controlled environment). I thought about using this
approach (a slave just for this purpose), but if I'm using it just as a
backup node there is no reason not to use a proper backup
Hello
I'm a newbie to Solr and I need your help in developing an Arabic search engine
using Solr.
I succeeded in building the index but failed when searching it. I get this
error when I submit a query like “محمد”.
XML Parsing Error: mismatched tag. Expected: /HR.
Location:
Chris:
We're using Jetty also, so I get the sense I'm looking at the
wrong log file.
On that note -- I've read that Jetty isn't the best servlet
container to use in these situations, is that your experience?
Dave
-Original Message-
From: Chris Hostetter [mailto:[EMAIL PROTECTED]
All:
How can I break up my install onto more than one box? We've
hit a learning curve here and we don't understand how best to
proceed. Right now we have everything crammed onto one box
because we don't know any better.
So, how would you build it if you could? Here are the specs:
a) the
Are you compiling your custom request handler against the same
version of Solr that you are deploying with? My hunch is that
you're compiling against an older version.
Erik
On Oct 9, 2007, at 9:04 AM, Britske wrote:
I'm trying to add a new requestHandler-plugin to Solr by
It worked. Thanks a lot. I just updated the value attribute of the Environment
tag in solr.xml. Maybe you should update the wiki with Unix as well as Windows
examples.
<Context path="solr" docBase="C:/apache-solr-1.2.0/example/webapps/solr.war"
         debug="0" crossContext="true">
  <Environment name="solr/home" ... />
</Context>
Hi All,
I'm trying to index my data using post.jar and I get the following error:
<title>Error 500</title>
</head>
<body><h2>HTTP ERROR: 500</h2><pre>name and value cannot both be empty
java.lang.IllegalArgumentException: name and value cannot both be empty
at
Yeah, I'm compiling with a reference to apache-solr-nightly.jar which is from
the same nightly build (7 October 2007) as the apache-solr-nightly.war I'm
deploying against. I include this same apache-solr-nightly.jar in the lib
folder of my deployed server.
It still seems odd that I have to
The way I'd do it would be to buy more servers, set up Tomcat on
each, and get SOLR replicating from your current machine to the
others. Then, throw them all behind a load balancer, and there you go.
You could also post your updates to every machine. Then you don't
need to worry about
What is the XML you POSTed into Solr?
It looks like somehow you've sent in a field with no name or value,
though this is an error that probably should be caught higher up in
Solr.
Erik
On Oct 9, 2007, at 11:06 AM, Urvashi Gadi wrote:
Hi All,
i m trying to index my data using
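For comparison with the error above, a minimal well-formed Solr update message looks like this (the field names here are placeholders, not taken from the thread):

```xml
<add>
  <doc>
    <field name="id">doc1</field>
    <field name="title">An example document</field>
  </doc>
</add>
```

Every field element needs a non-empty name attribute and should carry a value; the "name and value cannot both be empty" exception suggests one of them is missing both.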
Hi All.
I run a faceted query against a very large index on a
regular schedule. Every now and then the query throws
an out of heap space error, and we're sunk.
So, naturally we increased the heap size and things worked
well for a while and then the errors would happen again.
We've increased
Is there a way to find out the line number in the XML file? The XML file I'm
using is quite large.
On 10/9/07, Erik Hatcher [EMAIL PROTECTED] wrote:
What is the XML you POSTed into Solr?
It looks like somehow you've sent in a field with no name or value,
though this is an error that
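One way to get a line number (a sketch, not part of Solr: it uses Python's SAX parser, whose locator tracks the parse position) is to scan the update file for field elements with a missing or empty name attribute:

```python
# Sketch: report the line numbers of <field> elements whose "name"
# attribute is missing or empty in a Solr update XML file.
import xml.sax


class FieldNameChecker(xml.sax.handler.ContentHandler):
    def __init__(self):
        super().__init__()
        self.problems = []  # list of (line_number, message)

    def startElement(self, tag, attrs):
        # self._locator is set by the parser via setDocumentLocator
        if tag == "field" and not attrs.get("name", ""):
            line = self._locator.getLineNumber()
            self.problems.append((line, "field with missing/empty name"))


def find_bad_fields(path):
    handler = FieldNameChecker()
    xml.sax.parse(path, handler)
    return handler.problems
```

Running this over the update file should point straight at the offending element, even in a very large file.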
On 10/9/07, David Whalen [EMAIL PROTECTED] wrote:
I run a faceted query against a very large index on a
regular schedule. Every now and then the query throws
an out of heap space error, and we're sunk.
So, naturally we increased the heap size and things worked
well for a while and then the
Hi Yonik.
According to the doc:
This is only used during the term enumeration method of
faceting (facet.field type faceting on multi-valued or
full-text fields).
What if I'm faceting on just a plain String field? It's
not full-text, and I don't have multiValued set for it
Dave
I'm about to do a prototype deployment of Solr for a pretty
high-volume site, and I've been following this thread with some
interest.
One thing I want to confirm: It's really possible for Solr to handle a
constant stream of 10K updates/min (150 updates/sec) to a
25M-document index? I'm new to Solr and
When we are doing a reindex (1x a day), we post around 150-200
documents per second, on average. Our index is not as large though,
about 200k docs. During this import, the search service (with faceted
page navigation) remains available for front-end searches and
performance does not
It still seems odd that I have to include the jar, since the
StandardRequestHandler should be picked up in the war right? Is this also a
sign that there must be something wrong with the deployment?
Note that in 1.3, the StandardRequestHandler was moved from
o.a.s.request to o.a.s.handler:
Does all your XML look like this sample here -
http://wiki.apache.org/solr/UpdateXmlMessages ??
Are you sending in any field elements without a name attribute or
with a blank value?
Erik
On Oct 9, 2007, at 12:45 PM, Urvashi Gadi wrote:
is there a way to find out the line number
: SEVERE: java.lang.ClassCastException:
: wrappt.solr.requesthandler.TopListRequestHandler cannot be cast to
: org.apache.solr.request.SolrRequestHandler at
: org.apache.solr.core.RequestHandlers$1.create(RequestHandlers.java:149)
: added this handler to a jar called: solrRequestHandler1.jar
On 10/9/07, David Whalen [EMAIL PROTECTED] wrote:
This is only used during the term enumeration method of
faceting (facet.field type faceting on multi-valued or
full-text fields).
What if I'm faceting on just a plain String field? It's
not full-text, and I don't have multiValued set for
Thanks, but I'm using the updated o.a.s.handler.StandardRequestHandler. I'm
going to try on 1.2 instead to see if it changes things.
Geert-Jan
ryantxu wrote:
It still seems odd that I have to include the jar, since the
StandardRequestHandler should be picked up in the war right? Is
: We're using Jetty also, so I get the sense I'm looking at the
: wrong log file.
if you are using the jetty configs that come in the solr downloads, it
writes all of the solr log messages to stdout (ie: when you run it on the
commandline, the messages come to your terminal). i don't know
: So, naturally we increased the heap size and things worked
: well for a while and then the errors would happen again.
: We've increased the initial heap size to 2.5GB and it's
: still happening.
is this the same 25,000,000 document index you mentioned before?
2.5GB of heap doesn't seem like
Hello-
I am running into some scaling performance problems with SQL that I hope
a clever solr solution could fix. I've already gone through a bunch of
loops, so I figure I should solicit advice before continuing to chase my
tail.
I have a bunch of things (100K-500K+) that are defined by a
Make sure you have:
<requestHandler name="/admin/luke"
                class="org.apache.solr.handler.admin.LukeRequestHandler" />
defined in solrconfig.xml
What's the consequence of me changing the solrconfig.xml file?
Doesn't that cause a restart of solr?
for a large index, this can be very slow but the
David Whalen wrote:
Make sure you have:
<requestHandler name="/admin/luke"
                class="org.apache.solr.handler.admin.LukeRequestHandler" />
defined in solrconfig.xml
What's the consequence of me changing the solrconfig.xml file?
Doesn't that cause a restart of solr?
editing solrconfig.xml does *not*
Late reply on this but I just wanted to say thanks for the
suggestions. I went through my whole schema and was storing things
that didn't need to be stored and indexing a lot of things that didn't
need to be indexed. Just completed a full reindex and it's a much more
reasonable size now.
Kevin
On Oct 9, 2007, at 3:14 PM, Ryan McKinley wrote:
2. Figure out how to keep the base Tuple store in solr. I think
this will require finishing up SOLR-139. This would keep the
core data in solr - so there is no good way to 'rebuild' the index.
With SOLR-139, cool stuff can be done to
Given that the tables are of type InnoDB, I think it's safe to assume that
you're not planning to use MySQL full-text search (only supported on MyISAM
tables). If you are not concerned about transactional integrity provided by
InnoDB, perhaps you could try using MyISAM tables (although most
You could just make a separate Lucene index with the document ID unique and
with multiple tag values. Your schema would have the entryID as the unique
field and multiple tag values per entryID.
I just made a phrase-suggesting clone of the Spellchecker class that is
almost exactly the same. It
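In Solr schema.xml terms, the entryID/tag layout suggested above might look like the following (the field and type names are assumptions, not from the thread):

```xml
<field name="entryID" type="string" indexed="true" stored="true" required="true"/>
<field name="tag" type="string" indexed="true" stored="true" multiValued="true"/>

<uniqueKey>entryID</uniqueKey>
```

The multiValued attribute is what lets a single entryID carry any number of tag values.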
On 9-Oct-07, at 12:36 PM, David Whalen wrote:
<field name="id" type="string" indexed="true" stored="true" />
<field name="content_date" type="date" indexed="true" stored="true" />
<field name="media_type" type="string" indexed="true" stored="true" />
<field name="location" type="string" indexed="true" stored="true" />
<field
So, this problem came up again. Now it only happens in a Linux environment
when searches are being conducted while indexing is running.
Does anything need to be closed on the searching side?
AgentHubcap wrote:
As it turns out I was modifying code that wasn't being run. Running an
Using the filter cache method on the things like media type and
location; this will occupy ~2.3MB of memory _per unique value_
Mike, how did you calculate that value? I'm trying to tune my caches, and any
equations that could be used to determine some balanced settings would be
extremely
On 9-Oct-07, at 7:53 PM, Stu Hood wrote:
Using the filter cache method on the things like media type and
location; this will occupy ~2.3MB of memory _per unique value_
Mike, how did you calculate that value? I'm trying to tune my
caches, and any equations that could be used to determine
Sorry... where do the unique values come into the equation?
Also, you say that the queryResultCache memory usage is very low... how
could this be when it is storing the same information as the
filterCache, but with the addition of sorting?
Your answers are very helpful, thanks!
Stu Hood
: I have installed solr lucene for my website: clickindia.com, but I am
: unable to apply proximity search for the same over there.
:
: Please help me that how should I index solrconfig.xml schema.xml
: after providing an option of proximity search.
in order for us to help you, you're going to
FYI: you don't need to resend your question just because you didn't get a
reply within a day, either people haven't had a chance to reply, or they
don't know the answer.
: XML Parsing Error: mismatched tag. Expected: /HR.
:
: Location:
Here are some ways:
Index less data, store fewer fields and less data, compress fields,
change Lucene's term index interval (default 128; increasing it
will make your index a little bit smaller, but will slow down
queries)... But in general, the more you index, the more hardware you'll
need. I
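If your solrconfig.xml exposes the term index interval in its indexDefaults section (check your version; the element name below is an assumption if yours differs), the change would look something like:

```xml
<indexDefaults>
  <!-- Expert: raise from Lucene's default of 128 to shrink the in-memory
       term index at some cost in term-lookup speed -->
  <termIndexInterval>256</termIndexInterval>
</indexDefaults>
```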
On 9-Oct-07, at 8:28 PM, Stu Hood wrote:
Sorry... where do the unique values come into the equation?
Faceting. You should have a filterCache >= # unique values in all
fields faceted-on (using the fieldCache method).
Also, you say that the queryResultCache memory usage is very low...
how
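The ~2.3MB-per-unique-value figure quoted earlier is consistent with one bitset per cached filter, at one bit per document. A sketch of that arithmetic (the document count is an assumption back-derived from the figure, not stated in the thread):

```python
# Rough filterCache sizing sketch (not an official formula): each cached
# filter is a bitset with one bit per document in the index, so a single
# filter costs about num_docs / 8 bytes.
def filter_cache_bytes(num_docs, num_filters):
    bytes_per_filter = num_docs / 8.0
    return bytes_per_filter * num_filters


# An index of roughly 19 million documents gives about 2.3 MB per filter,
# which lines up with the per-unique-value figure quoted above.
mb_per_filter = filter_cache_bytes(19_000_000, 1) / (1024 * 1024)
```

Since faceting with the filter method caches one such bitset per unique value of the faceted field, total memory grows linearly with the field's cardinality.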
the most basic stuff, and copyField things around. With SOLR-139, to
rebuild an index you simply reconfigure the copyField settings and
basically `touch` each document to reindex it.
had not thought of that... yes, that would work
Yonik has some pretty prescient design ideas here: