Thanks for the reply Hoss.
As far as our application goes, commits and reads are done to the index during
normal business hours. However, we observed the max warmers error happening
during a nightly job, when the only operation is four parallel threads
committing data to the index and optimizing it.
Hi,
We have an application with more than 2.5 million docs currently. It is hosted
on a single box with 8 GB of memory. The number of warmers configured is 4, and
cold-searcher is allowed too. The application is based on data entry, and a
commit happens as often as data is entered. We
Hi,
We have an index of courses (about 4 million docs in prod) and we have a
nightly job that picks up newly added courses and updates the index
accordingly. There is another enterprise system that shares the same table and
could delete data from the table too.
I just want to know
Great, thanks.
Date: Thu, 25 Sep 2008 11:54:32 -0700
Subject: Re: Best practice advice needed!
From: [EMAIL PROTECTED]
To: solr-user@lucene.apache.org
That should be: flag it in a boolean column. --wunder
On 9/25/08 11:51 AM, Walter Underwood [EMAIL PROTECTED] wrote:
This will
with the database, but you can always retrieve PKs from
Solr, check the database for those PKs, and 'filter' the output...
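A minimal sketch of the PK cross-check described above (the function, field names, and data shapes are illustrative stand-ins, not a real client API):

```python
# Sketch of the workaround: pull primary keys from the Solr response,
# look them up in the database, and drop hits whose rows were deleted.
def filter_stale_hits(solr_hits, db_pks):
    """Keep only hits whose primary key still exists in the database."""
    return [hit for hit in solr_hits if hit["pk"] in db_pks]

# Illustrative data: course 102 was deleted from the table by the other system.
hits = [{"pk": 101, "title": "Course A"}, {"pk": 102, "title": "Course B"}]
live = filter_stale_hits(hits, db_pks={101})
```

This trades an extra database round-trip per query for not having to keep the index perfectly in sync with out-of-band deletes.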
--
Thanks,
Fuad Efendi
416-993-2060(cell)
Tokenizer Inc.
==
http://www.linkedin.com/in/liferay
Quoting sundar shankar [EMAIL PROTECTED]:
Hi,
We have an index
It totally helps. Thanks, Jason!
Hoss,
Are the parameters you mentioned available in the sample solrconfig.xml
that comes with the nightly build? My schema and config files are about a year
old (1.2.x versions), and I am not sure if the 1.3 versions of the same files
have some default options
Hi All,
We have a cluster of 4 servers for the application and just one
server for Solr. We have just about 2 million docs to index, and we never
bothered to make the Solr environment clustered, as Solr was delivering
performance with the current setup itself. Of late we just discovered
, 2008 at 1:05 PM, sundar shankar [EMAIL PROTECTED]wrote:
seem to fully
integrate all updates into a single index until an optimize is performed.
Jason
On Wed, Sep 10, 2008 at 1:05 PM, sundar shankar [EMAIL PROTECTED]wrote:
That's brilliant. I am just starting to wonder if there is anything at all
that you guys haven't thought about ;) Thanks, that setting should be
really useful.
Date: Wed, 10 Sep 2008 15:26:57 -0700
From: [EMAIL PROTECTED]
To: solr-user@lucene.apache.org
Subject: RE: Question on how index works -
Did you reindex after the change?
Date: Wed, 27 Aug 2008 23:43:05 +0300
From: [EMAIL PROTECTED]
To: solr-user@lucene.apache.org
Subject: Question about autocomplete feature
Hello.
I'm trying to implement autocomplete feature using the snippet posted
by Dan.
Nope, I don't see any error logs when my JBoss starts up. I haven't added the
Solr classes to my log4j.xml though. Should I add them and try again?
What does <lockType>single</lockType> do, btw? Do I need to use it in
conjunction with <unlockOnStartup>, or do I use it separately?
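For reference, a hedged sketch of how the two options discussed here sit together in the mainIndex section of a Solr 1.x solrconfig.xml (the values shown are examples, not recommendations):

```xml
<!-- Illustrative solrconfig.xml fragment -->
<mainIndex>
  <!-- "single" assumes only one Solr instance ever touches this index dir -->
  <lockType>single</lockType>
  <!-- remove a stale write lock left behind by an unclean shutdown -->
  <unlockOnStartup>true</unlockOnStartup>
</mainIndex>
```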
Date: Wed, 6 Aug
on startup On Thu, Aug 7,
2008 at 12:17 PM, sundar shankar [EMAIL PROTECTED] wrote: I had the war
created from the July 8th nightly. Do you want me to take the latest and try it
out? Yes, please. -Yonik
Or time.
<!-- autocommit pending docs if certain criteria are met -->
<autoCommit>
  <maxDocs>1</maxDocs>
  <maxTime>1</maxTime>
</autoCommit>
Date: Thu, 7 Aug 2008 18:42:30 -0300 From: [EMAIL PROTECTED] To:
solr-user@lucene.apache.org Subject: Re: How do I configure commit to
Look at the update handlers section of the Solr stats page. I guess the URL is
/admin/stats.jsp. This would give you an idea of how many docs are pending commit.
Date: Thu, 7 Aug 2008 14:53:02 -0700 From: [EMAIL PROTECTED] To:
solr-user@lucene.apache.org Subject: Re: How do I configure commit
Yes, commits are very expensive, and optimizes are even more expensive.
Coming to your question of numDocs and the 0's in the update handlers section:
the numDocs that you see on top are the ones that are committed. The ones you
see below are the ones you have updated but not committed.
update handlers
Hi All,
I am having to test Solr indexing quite a bit on my local and dev
environments. I had the
<unlockOnStartup>true</unlockOnStartup>.
But restarting my server still doesn't seem to remove the write lock file. Is
there some other configuration that I might have to do to get this fixed?
as much memory as your whole index size. I
have 3.5 million documents (approx. 10 GB) running on this 2 GB heap VM.
Cheers, Daniel -Original Message- From: sundar
shankar [mailto:[EMAIL PROTECTED] Sent: 23 July 2008 23:45 To:
solr-user@lucene.apache.org Subject: RE: Out
'tokenized' field
for sorting (could you confirm please?)... ...you should use
'non-tokenized single-valued non-boolean' for better performance of
sorting... Fuad Efendi == http://www.tokenizer.org
Quoting sundar shankar [EMAIL PROTECTED]: Hi all, I seemed to have
found
Yes, this is what I did. I got an out-of-memory while executing a query with a
sort param:
1. Stopped Jboss server
2.
<filterCache class="solr.LRUCache" size="2048" initialSize="512"
autowarmCount="256"/>
<!-- queryResultCache caches results of searches - ordered lists of
Oh wow, I didn't know that was the case. I am completely baffled now. Back
to square one, I guess. :)
Date: Tue, 5 Aug 2008 14:31:28 -0700 From: [EMAIL PROTECTED] To:
solr-user@lucene.apache.org Subject: RE: Out of memory on Solr sorting
Sundar, very strange that increase of
512 000 000 bytes per
searcher, and as Mark mentioned you should limit the number of searchers
in Solr.
So Xmx512M is definitely not enough even for simple cases...
Quoting sundar shankar [EMAIL PROTECTED]:
I haven't seen the source code before, but I don't know why
Hi, we are developing a product in an agile manner, and the current
implementation has data of just about 800 MB in dev. The memory
allocated to Solr on dev (a dual-core Linux box) is 128-512. My config =
<!-- autocommit pending docs if certain criteria are met --> <autoCommit>
Hi,
Sorry again, fellows. I am not sure what's happening. The day with Solr is bad
for me, I guess. EZMLM didn't let me send any mails this morning. It asked me
to confirm subscription, and when I did, it said I was already a member. Now my
mails are all coming out bad. Sorry for troubling y'all with this
From: [EMAIL PROTECTED]
To: solr-user@lucene.apache.org
Subject: Out of memory on Solr sorting
Date: Tue, 22 Jul 2008 19:11:02 +
memory, when the application requests a big
contiguous fragment and GC is unable to optimize; looks like your
application requests a little and memory is not available... Quoting
sundar shankar [EMAIL PROTECTED]: From: [EMAIL PROTECTED]
To: solr-user@lucene.apache.org Subject: Out
sundar shankar wrote:
Thanks, Fuad.
But why does just sorting produce an OOM? I executed the
query without adding the sort clause and it executed perfectly. In fact I even
tried removing the maxrows=10 and executing; it came out fine. Queries with
bigger results seem to come out
a 2 million doc index would need 40-50 MB
(depending and rough, but to give an idea) per field you're sorting on. -
Mark sundar shankar wrote: Thanks, Fuad. But why does just sorting
produce an OOM? I executed the query without adding the sort clause and it
executed perfectly. In fact I even tried
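Mark's 40-50 MB-per-sort-field rule of thumb can be sanity-checked with back-of-the-envelope arithmetic. The per-entry byte counts below are illustrative assumptions, not measured Lucene internals:

```python
# Rough sizing for a sort field cache: one int ord per document plus the
# unique term strings. Byte counts are guesses for illustration only.
def sort_field_mb(num_docs, unique_terms, avg_term_bytes=16, str_overhead=40):
    ord_array = num_docs * 4                                # 4-byte ord per doc
    terms = unique_terms * (avg_term_bytes + str_overhead)  # term string storage
    return (ord_array + terms) / (1024 * 1024)

# ~2M docs with 500k distinct sort values lands in the tens of MB,
# the same ballpark as the estimate quoted in this thread.
estimate = sort_field_mb(2_000_000, unique_terms=500_000)
```

Multiply by the number of fields you sort on, and by concurrent warming searchers, and a 512 MB heap runs out quickly.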
Hi Mark,
I am still getting an OOM even after increasing the heap to 1024.
The docset I have is
numDocs : 1138976 maxDoc : 1180554
Not sure how much more I would need. Is there any other way out of this? I
noticed another interesting behavior. I have a Solr setup on a personal
on the admin page? Maybe you are
trying to load too many searchers at a time. I think there is a setting
to limit the number of searchers that can be on deck... sundar shankar
wrote: Hi Mark, I am still getting an OOM even after increasing the
heap to 1024. The docset I have is numDocs
more RAM to accommodate the sort, most
likely... have you upped your Xmx setting? I think you can roughly
say a 2 million doc index would need 40-50 MB (depending and rough,
but to give an idea) per field you're sorting on. -
Mark sundar shankar wrote: Thanks, Fuad.
But why does just
useful in the past
for me.
Date: Tue, 15 Jul 2008 11:26:16 +1000 From: [EMAIL PROTECTED] To:
solr-user@lucene.apache.org Subject: Re: Wiki for 1.3 On Mon, 14 Jul 2008
23:25:25 + sundar shankar [EMAIL PROTECTED] wrote: Thanks for
your patient response. I don't wanna know the classes
THANKS!!!
Date: Tue, 15 Jul 2008 11:38:06 -0700 From: [EMAIL PROTECTED] To:
solr-user@lucene.apache.org Subject: RE: Wiki for 1.3 : Thanks. Do we
expect the same some time soon? I agree that the user : community has shed
light with a lot of examples. Just wanna know if : there was
Hi Hoss,
I was talking about classes like EdgeNGramFilterFactory,
PatternReplaceFilterFactory, etc. I didn't find these in the 1.2 jar. Where do I
find the wiki for these and the specific classes introduced in 1.3?
-Sundar
Date: Sun, 13 Jul 2008 09:44:20 -0700
From: [EMAIL PROTECTED]
Oops,
sorry about that.
Date: Sun, 13 Jul 2008 18:13:51 -0700 From: [EMAIL PROTECTED] To:
solr-user@lucene.apache.org Subject: Re: Max Warming searchers error :
Subject: Max Warming searchers error : In-Reply-To: [EMAIL PROTECTED] :
References: [EMAIL PROTECTED]
Copy field dest=text. I am not sure if you can copy into text or something like
that. We copy it into a field of type text or string, etc. Plus, what is your
query string? What gives you no results? How do you index it?
Need more clues to figure out the answer, dude :)
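For context, copyField directives live in schema.xml; a hedged sketch of the pattern being discussed (the field names here are illustrative, not from the poster's schema):

```xml
<!-- Illustrative schema.xml fragment: copy a source field into a catch-all
     "text" field so it participates in the default search field -->
<field name="course_name" type="string" indexed="true" stored="true"/>
<field name="text" type="text" indexed="true" stored="false" multiValued="true"/>
<copyField source="course_name" dest="text"/>
```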
From: [EMAIL PROTECTED] To:
-S
Date: Tue, 15 Jul 2008 07:54:27 +1000 From: [EMAIL PROTECTED] To:
solr-user@lucene.apache.org Subject: Re: Wiki for 1.3 On Mon, 14 Jul 2008
15:52:35 + sundar shankar [EMAIL PROTECTED] wrote: Hi Hoss, I
was talking about classes like EdgeNGramFilterFactory
Hi
I recently was looking to find details of the 1.3-specific analyzers and
filters in the Solr wiki and was unable to do so. Could anyone please point me
to a place where I can find some documentation for the same?
Thanks
Sundar
What was the type of the field that you are using? I guess you could achieve it
by a simple swap of text and string.
From: [EMAIL PROTECTED] To: solr-user@lucene.apache.org Subject: Solr
searching issue.. Date: Fri, 11 Jul 2008 11:28:50 +0100 Hi solr-users,
version type: nightly build
Hi ,
I am getting the "Error opening new searcher. exceeded limit of
maxWarmingSearchers=4, try again later." My configuration includes setting
coldSearcher to true and having the number of maxWarmingSearchers as 4. We
expect a max of 40 concurrent users, but an average of 5-10 at most times.
searchers error
You're trying to commit too fast, and warming searchers are stacking up. Do
less warming of caches, or space out your commits a little more. -Yonik
On Fri, Jul 11, 2008 at 11:56 AM, sundar shankar [EMAIL PROTECTED] wrote:
Hi , I am getting the Error opening new searcher
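The two settings discussed in this exchange live in solrconfig.xml; a hedged sketch with illustrative values:

```xml
<!-- Illustrative solrconfig.xml fragment -->
<!-- cap how many searchers may warm concurrently; commits arriving faster
     than warming completes will hit the "exceeded limit" error -->
<maxWarmingSearchers>4</maxWarmingSearchers>
<!-- serve requests from a not-yet-warmed searcher instead of blocking -->
<useColdSearcher>true</useColdSearcher>
```

Raising maxWarmingSearchers only masks the problem; Yonik's point is that reducing warming work or commit frequency addresses the cause.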
: Tue, 8 Jul 2008 23:13:57 +0530 From: [EMAIL PROTECTED] To:
solr-user@lucene.apache.org Subject: Re: Auto complete He must be using a
nightly build of Solr 1.3 -- I think you can consider using it as it is
quite stable and close to release. On Tue, Jul 8, 2008 at 10:38 PM, sundar
shankar
})(.*)?"
replacement="$1" replace="all" /> </analyzer> </fieldType> ... <field
name="ac" type="autocomplete" indexed="true" stored="true" required="false"
/> Regards, Dan On Mon, 2008-07-07 at 17:12 +, sundar
shankar wrote: Hi All, I am using Solr for some time and am having
trouble with an auto complete
Hi All,
I am using Solr for some time and am having trouble with an autocomplete
feature that I have been trying to incorporate. I am indexing a database
column to Solr field mapping. I have tried various configs that were
mentioned in the Solr user community suggestions
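One common shape for such a config is a dedicated field type built on the EdgeNGramFilterFactory mentioned elsewhere in this thread; a hedged sketch (the type name, analyzer chain, and gram sizes are illustrative choices, not the poster's actual config):

```xml
<!-- Illustrative schema.xml field type for prefix autocomplete -->
<fieldType name="autocomplete" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- index every prefix of the value, e.g. "ja", "jav", "java" -->
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="25"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Indexing with edge n-grams while querying with the plain keyword analyzer lets a user's partial input match the stored prefixes directly.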