On 19.08.2011 16:43, Yonik Seeley wrote:
On Fri, Aug 19, 2011 at 10:36 AM, alexander sulz wrote:
Using lsof I think I pinned down the problem: too many open files!
I already doubled the limit from 512 to 1024 once, but it seems there are
many SOCKETS involved,
which are listed as "can'
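To check the open-file limit and count the descriptors (files and sockets) a running Solr JVM actually holds, something like the following works; the pgrep pattern assumes Solr runs via Jetty's start.jar, so adjust it to your setup:

```shell
# Show the current per-process open-file limit
ulimit -n

# Count open descriptors (files AND sockets) of the Solr JVM
SOLR_PID=$(pgrep -f start.jar | head -n 1)
if [ -n "$SOLR_PID" ]; then
  lsof -p "$SOLR_PID" | wc -l
fi

# Raise the limit for this session before starting Solr; a permanent
# change usually goes into /etc/security/limits.conf.
ulimit -n 8192 2>/dev/null || true
```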
On 19.08.2011 15:48, alexander sulz wrote:
On 10.08.2011 17:11, Yonik Seeley wrote:
On Wed, Aug 10, 2011 at 11:00 AM, alexander sulz wrote:
Okay, with this command it hangs.
It doesn't look like a hang from this thread dump. It doesn't look
like any solr requests are executing at the time the dump was taken.
Did you do
Okay, with this command it hangs.
Also: I managed to get a Thread Dump (attached).
regards
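For reference, a thread dump of a running Solr JVM can be taken with jstack, which ships with the JDK; the pgrep pattern below assumes Jetty's start.jar and is only an illustration:

```shell
# Take a thread dump of the running Solr JVM
SOLR_PID=$(pgrep -f start.jar | head -n 1)
if [ -n "$SOLR_PID" ]; then
  jstack "$SOLR_PID" > solr-threaddump.txt
else
  echo "no Solr process found"
fi
# Alternative: `kill -3 $SOLR_PID` writes the dump to the JVM's stdout log.
```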
On 05.08.2011 15:08, Yonik Seeley wrote:
On Fri, Aug 5, 2011 at 7:33 AM, alexander sulz wrote:
Usually you get an XML response when doing commits or optimizes; in this
case I get nothing
in return, but
older, the only changes I made were
enabling logging and changing the port to 8985.
I'll try getting a thread dump if it happens again!
So far it's looking good, having allocated more memory to it.
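A healthy commit should come back with an XML response containing status 0, which can be checked from the command line; host, port, and path here are assumptions based on the thread (the port 8985 is mentioned above):

```shell
# Issue an explicit commit and print the raw response; a healthy Solr 3.x
# answers with an XML <response> containing <int name="status">0</int>.
curl -s 'http://localhost:8985/solr/update?commit=true' \
     -H 'Content-Type: text/xml' --data-binary '<commit/>' || true
```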
On 04.08.2011 16:08, Yonik Seeley wrote:
On Thu, Aug 4, 2011 at 8:09 AM, alexander sulz wr
just gone.
Turned out my system was out of memory and swap got used up because of
another process, which then forced the kernel to start killing off processes.
Google "OOM linux" and you will find plenty of other programs and people
with a similar problem.
Cameron
On Aug 2, 2011 6:02 AM, "alexan
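When the Linux OOM killer terminates a process, the kernel logs a line naming the victim, so a crash with empty application logs can be confirmed like this:

```shell
# Check whether the kernel's OOM killer terminated the JVM
dmesg 2>/dev/null | grep -iE 'out of memory|killed process' \
  || echo "no OOM kill logged"
# On many distros the same lines also land in /var/log/syslog
# or /var/log/messages.
```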
Nope, none :/
On 02.08.2011 12:33, Bernd Fehling wrote:
Any JAVA_OPTS set?
Do not use "-XX:+OptimizeStringConcat" or "-XX:+AggressiveOpts" flags.
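As a sketch only: a conservative set of JVM options for Solr 3.x under Jetty without the two flags above. The heap sizes are illustrative assumptions, not recommendations; size them to your index.

```shell
# Note: -XX:+OptimizeStringConcat and -XX:+AggressiveOpts are
# deliberately absent.
export JAVA_OPTS="-Xms512m -Xmx1024m -XX:+UseConcMarkSweepGC"
# java $JAVA_OPTS -jar start.jar
echo "$JAVA_OPTS"
```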
On 02.08.2011 12:01, alexander sulz wrote:
Hello folks,
I'm using the latest stable Solr release -> 3.3 and I
Hello folks,
I'm using the latest stable Solr release -> 3.3 and I encounter a strange
phenomenon with it.
After about 19 hours it just crashes, but I can't find anything in the
logs: no exceptions, no warnings,
no suspicious info entries.
I have an index-job running from 6am to 8pm every 10 mi
Hello
I enabled the Jetty logs, but my GET requests seem to be so long that they
get truncated and lack a line break,
so in the end it looks like this:
notice the logged ping and where it begins.
How can I change this?
thank you very much
000.000.000.000 - - [27/Jul/2011:17:38:04 +0100] "GET
/sol
On 12.07.2011 10:08, alexander sulz wrote:
Hi all,
Are there some kind of average indexing times for PDFs in relation to
their size?
I have here a 10MB PDF (50 pages) which takes about 30 seconds to index!
Is that normal?
greetings
alex
Depends on your hardware. PDF parsing is a lot more tedious than XML, and
besides parsing, the content is also analyzed and stored
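For context, a PDF like this is typically sent to Solr's ExtractingRequestHandler (Solr Cell, Tika-based); the URL, handler path, and literal.id value below are assumptions for illustration, and the file path is a placeholder:

```shell
# Post a PDF to Solr Cell for extraction and indexing
curl -s 'http://localhost:8983/solr/update/extract?literal.id=doc1&commit=true' \
     -F 'myfile=@/path/to/document.pdf' || true
```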
I have the same problem with the metadata title being discarded.
I thought the parameter "captureAttr" (which can be provided in
solrconfig.xml and via GET/POST as a parameter) is responsible for that?
I set it to false in the xml and as a parameter; still, I get "not
multivalued field" errors due
Hello dear Solr users,
As far as I understand, I am able to process input with analyzers (and
in there with tokenizers, filters, and whatnot)
before indexing, but is it also possible to do that before storing the
input into a field?
What I want to do is to store some search words from users
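Worth noting: analyzers only affect the indexed terms; the stored value is always the raw input. One way to change a value before it is stored is an update request processor chain in solrconfig.xml. This is only a sketch, and com.example.PreStoreProcessorFactory is a hypothetical custom factory, not a shipped class:

```xml
<updateRequestProcessorChain name="preStore">
  <!-- hypothetical processor that rewrites field values before storage -->
  <processor class="com.example.PreStoreProcessorFactory"/>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```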
Good evening and morning.
I noticed that if I do a facet search on a field whose value contains
umlauts (öäü),
the returned facet list has the field's value converted to plain
characters (oau).
How do I prevent this from happening?
I can't seem to find the configuration for faceting in
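Facet values are built from the indexed terms, so an accent-folding filter in the field's analyzer (e.g. ISOLatin1AccentFilterFactory or ASCIIFoldingFilterFactory) will show ö as o in facet output. A common workaround, sketched here with illustrative field names, is to copyField into an unanalyzed string field and facet on that:

```xml
<!-- schema.xml: an unanalyzed copy of the field, used only for faceting -->
<field name="category" type="text" indexed="true" stored="true"/>
<field name="category_facet" type="string" indexed="true" stored="false"/>
<copyField source="category" dest="category_facet"/>
```

Queries would then use facet.field=category_facet while searching against category.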
September 2010 16:10:23 alexander sulz wrote:
I'm sorry to bother you all with this, but is there a way to search through
the mailing list archive? I've found
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/ so far,
but there isn't any convenient way to search through the archive.
Thanks for your help
Hi everyone.
I'm successfully indexing PDF files right now, but I still have some problems.
1. Tika seems to map some content to appropriate fields in my schema.xml.
If I pass on a literal.title=blabla parameter, Tika may have parsed some
information
out of the PDF to fill in the field "title"; its