Yonik Seeley wrote:
On Fri, Oct 3, 2008 at 1:56 PM, Uwe Klosa [EMAIL PROTECTED] wrote:
I have a big problem with one of my solr instances. A commit can take up to
5 minutes. This time does not depend on the number of documents which are
updated. The difference for 1 or 100 updated
: When I check my commit.log nothing is run
commit.log is only updated by the bin/commit script ... not by Solr
itself. You'll see Solr log commits in whatever logs are kept by your
servlet container.
: My snapshooter too: but no log in snapshooter.log
: <!-- A postCommit event is
Any idea?
sunnyfr wrote:
Hi,
When I check my commit.log nothing is run,
but my config file seems OK to activate my commit:
<autoCommit>
  <maxDocs>1</maxDocs>
  <maxTime>1000</maxTime>
</autoCommit>
My snapshooter too: but no log in snapshooter.log
<!-- A
Hello, thanks a lot for your answer :)
So it should look like :
<!-- A postCommit event is fired after every commit or optimize command
-->
<listener event="postCommit" class="solr.RunExecutableListener">
  <str name="exe">snapshooter</str>
  <str name="dir"></str>
  <bool name="wait">true</bool>
On Tue, Sep 23, 2008 at 7:36 PM, sunnyfr [EMAIL PROTECTED] wrote:
My snapshooter too:
<!-- A postCommit event is fired after every commit or optimize command
-->
<listener event="postCommit" class="solr.RunExecutableListener">
  <str name="exe">./data/solr/book/logs/snapshooter</str>
  <str
Right, my bad, it was the bin directory, but even when I fire a commit no
snapshot is created?? Does it check the number of documents even when I fire
it? And another question: I don't remember having put the path to commit in
the conf file, but even manually it doesn't work
Maybe a delay in the commit? How much time elapsed between commits?
2008/5/13 William Pierce [EMAIL PROTECTED]:
Hi,
I am having problems with Solr 1.2 running under Tomcat 6.0.16 (I also
tried 6.0.14 but the same problems exist). Here is the situation: I have an
ASP.net application where I am
By default, a commit won't return until a new searcher has been opened
and the results are visible.
So just make sure you wait for the commit command to return before querying.
Also, if you are committing every add, you can avoid a separate commit
command by putting ?commit=true in the URL of the
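As a small illustration of the advice above, here is a minimal sketch of appending `commit=true` to an update URL so the add and the commit happen in one request (the host, port, and `/solr/update` path are assumptions for a default local install, and the helper name is made up):

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def with_commit(update_url: str) -> str:
    """Append commit=true to an update URL, preserving any existing query."""
    parts = urlsplit(update_url)
    extra = urlencode({"commit": "true"})
    query = f"{parts.query}&{extra}" if parts.query else extra
    return urlunsplit(parts._replace(query=query))

# Assumed default local Solr update URL:
print(with_commit("http://localhost:8983/solr/update"))
# → http://localhost:8983/solr/update?commit=true
```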
<commit/> command everything seems to work.
Why are TWO commit commands apparently required?
Thanks,
Sridhar
--
From: Yonik Seeley [EMAIL PROTECTED]
Sent: Tuesday, May 13, 2008 6:42 AM
To: solr-user@lucene.apache.org
Subject: Re: Commit problems
and issues the POST request to the update URL.
Thanks,
Bill
--
From: Erik Hatcher [EMAIL PROTECTED]
Sent: Tuesday, May 13, 2008 7:40 AM
To: solr-user@lucene.apache.org
Subject: Re: Commit problems on Solr 1.2 with Tomcat
I'm not sure if you are issuing a separate <commit/> _request_ after your
add, or putting a <commit/> into the same request. Solr only supports
one command (add or commit, but not both) per request.
Erik
On May 13, 2008
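To make the one-command-per-request point concrete, here is a hedged sketch that builds the add and the commit as two separate POST requests (the URL, field name, and helper name are assumptions, not from the thread; nothing is actually sent):

```python
import urllib.request

SOLR_UPDATE_URL = "http://localhost:8983/solr/update"  # assumption: default local install

def make_update_request(xml_body: str) -> urllib.request.Request:
    """Build one POST to the update handler; Solr 1.x wants one command per request."""
    return urllib.request.Request(
        SOLR_UPDATE_URL,
        data=xml_body.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
        method="POST",
    )

# Two separate requests: first the add, then the commit.
add_req = make_update_request("<add><doc><field name='id'>1</field></doc></add>")
commit_req = make_update_request("<commit/>")
```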
Or, if you have multiple files to be updated, please make sure you index
multiple files and commit once at the end of indexing.
Jae
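Jae's advice above can be sketched as a tiny driver loop; the `post` argument is a hypothetical stand-in for whatever sends a body to the update handler:

```python
def index_files(docs, post):
    """Post each document as an <add>, then commit exactly once at the end."""
    for doc in docs:
        post(f"<add>{doc}</add>")
    post("<commit/>")  # one commit for the whole batch

# Illustration with a recording stand-in instead of a real HTTP call:
sent = []
index_files(["<doc/>", "<doc/>", "<doc/>"], sent.append)
# sent now holds three <add> bodies followed by a single <commit/>
```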
-Original Message-
From: Jae Joo [mailto:[EMAIL PROTECTED]
Sent: Tuesday, February 12, 2008 10:50 AM
To: solr-user@lucene.apache.org
Subject: RE: Commit
Sent: Tuesday, February 12, 2008 10:34 AM
To: solr-user@lucene.apache.org
Subject: Re: Commit performance problem
I have a large solr index that is currently about 6 GB and is suffering
severe performance problems during updates. A commit can take over 10
minutes to complete. I have tried to increase the JVM max memory to over
6 GB, but without any improvement. I have also tried to turn off
if you just want commits to happen at a regular frequency, take a look at
the autoCommit options.
As for the specific errors you are getting, I don't know enough Python to
understand them, but it may just be that your commits are taking too long
and your client is timing out waiting for the
On Jan 31, 2008 8:20 AM, shenzhuxi [EMAIL PROTECTED] wrote:
curl %solr_home% --data-binary '<commit/>' -H 'Content-type:text/xml;
charset=utf-8'
It doesn't work to update. I have to restart solr to make updates work. Do I
need to use:
curl %solr_home% --data-binary '<commit waitFlush="false"/>'
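As a sketch of the commit attributes being asked about here (`waitFlush` and `waitSearcher` are the optional attributes documented for Solr 1.x commits; the helper function name is made up):

```python
def commit_xml(wait_flush: bool = True, wait_searcher: bool = True) -> str:
    """Build a Solr 1.x <commit/> body with its optional attributes."""
    def flag(v: bool) -> str:
        return "true" if v else "false"
    return (f'<commit waitFlush="{flag(wait_flush)}" '
            f'waitSearcher="{flag(wait_searcher)}"/>')

print(commit_xml(wait_flush=False))
# → <commit waitFlush="false" waitSearcher="true"/>
```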
if the commit is not done? Does it get unlocked automatically?
Could you elaborate on this?
Thanks and Regards
Dilip
-Original Message-
From: Mike Klaas [mailto:[EMAIL PROTECTED]
Sent: Monday, September 17, 2007 11:29 PM
To: solr-user@lucene.apache.org
Subject: Re: commit
: I have a query: when you try bulk updates using the autoCommit option
: (which does commits on a regular basis).
There is a lot of complexity going on when dealing with concurrent updates
-- some of it at the Lucene level, some at the Solr level. If you really
want to understand the details, I
I've seen even longer commit times with our 2GB index and have not had a
chance to look into it deeper. What I have noticed is that when there are
Searchers registered, commits take a lot longer. Perhaps looking at
the optional attributes for commit (waitSearcher, waitFlush) would help.
Since we
: How long should a commit take? I've got about 9.8G of data for 9M of
: records. (Yes, I'm indexing too much data.) My commits are taking 20-30
the low levels of updating aren't my forte, but as I recall the dominant
factor in how long it takes to execute a commit is the number of deleted
: chance to look into it deeper. What I have noticed is when there are
: Searchers registered commits take a lot longer time. Perhaps looking at
that's probably the warming time taken to reopen the new searcher ...
waitSearcher=false should cause those commits to return much faster (the
down
Aha, same question I found a few days ago.
I'm sorry I forgot to submit it.
2007/6/22, Yonik Seeley [EMAIL PROTECTED]:
On 6/21/07, Ryan McKinley [EMAIL PROTECTED] wrote:
I just started running the scripts and
The commit script seems to run fine, but it says there was an error. I
looked into
: I guess we should look for 'status=0' ?
that wouldn't quite work.
: Or, if you get a response code of 200, it's a success unless
: you see status=nonzero
we could always make it an option in the scripts.conf file -- what
substring to match on ... just in case people want to write their own
OK, figured this out. The short of it is: make sure your schema is
always up to date! :)
The schema did not match the XML docs being posted. And because we
had a previous solr update with those docs, even trying to post/update
a <commit/> was failing because there was already bad data
On 3/16/07, Chris Hostetter [EMAIL PROTECTED] wrote:
: I thought so, but hoped there would be some experiences with heap space
: settings for Solr. But I guess I have to try for myself.
there's lots of experience, but it's hard to translate to generic rules
... there's so many variables involved that it's hard to even recognize
what the equation
Mike Klaas wrote:
On 3/14/07, Maximilian Hütter [EMAIL PROTECTED] wrote:
It is the default heap size for the Sun JVM, so I guess 64MB max. The
documents are rather large, but if you manage to index 100,000 docs,
there seems to be some problem with Solr.
The documents are not held in memory until a commit
: It is the default heap size for the Sun JVM, so I guess 64MB max. The
: documents are rather large, but if you manage to index 100,000 docs,
: there seems to be some problem with Solr.
I think you mean there DOES NOT seem to be some problem with Solr.
right ... why would Mike being able to
On 3/12/07, Maximilian Hütter [EMAIL PROTECTED] wrote:
Hi,
I have a question regarding Solr's behaviour in the standard
installation. When I use start.jar with a rather complex schema and I
do about 1000 updates and then try to commit, I get this:
<result status="1">java.lang.OutOfMemoryError: