I opened SOLR-3789. As a workaround you can remove
<str name="compression">internal</str> from the config and it should work.
--
Sami Siren
On Wed, Sep 5, 2012 at 5:58 AM, Ravi Solr ravis...@gmail.com wrote:
Hello,
I have a very simple setup, one master and one slave, configured
as below,
<cores adminPath="/admin/cores">
  <core name="core0" instanceDir="core0" />
  <core name="core1" instanceDir="core1" />
</cores>
Try the above code snippet in solr.xml.
But it works on Tomcat.
On Wed, Sep 5, 2012 at 1:10 AM, Chris Hostetter hossman_luc...@fucit.orgwrote:
: <core name="MYCORE_test"
On Fri, 2012-08-31 at 13:35 +0200, Erick Erickson wrote:
Imagine you have two entries, aardvark and emu in your
multiValued field. How should that document sort relative to
another doc with camel and zebra? Any heuristic
you apply will be wrong for someone else.
I see two obvious choices
Hi Rafal,
I set up a standalone ZooKeeper, which starts fine.
But as the next step, I want to configure ZooKeeper with my SolrCloud
setup using Apache Tomcat.
How is that actually possible? Can you please tell me the steps I have to
follow to set up SolrCloud with Apache Tomcat.
Set the -DzkHost= property in some Tomcat configuration as per the wiki page
and point it to the Zookeeper(s). On Debian systems you can use
/etc/default/tomcat6 to configure your properties.
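On Debian, for instance, that means appending the property to JAVA_OPTS; a sketch of /etc/default/tomcat6 (the ZooKeeper hostnames and ports below are placeholders for your own ensemble):

```shell
# /etc/default/tomcat6 -- hostnames and ports are placeholders
JAVA_OPTS="$JAVA_OPTS -DzkHost=zk1.example.com:2181,zk2.example.com:2181"
```

Tomcat picks this up at startup, so restart the service after editing the file.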
-Original message-
From:bsargurunathan bsargurunat...@gmail.com
Sent: Wed 05-Sep-2012
Hi,
You are trying to use two different approaches at the same time.
1) Remove
<arr name="last-components">
  <str>suggest</str>
  <str>query</str>
</arr>
from your requestHandler.
2) Execute this query URL: suggest/?q=michael b&df=title&defType=lucene
And you will see my point.
---
Hi Markus,
Can you please tell me the exact file name in the Tomcat folder?
That is, where do I have to set the properties?
I am using a Windows machine and I have Tomcat 6.
Thanks,
Guru
Thanks for all the information.
I'm not sure how exactly you are measuring/defining replication lag but
if you mean lag in how long until the newly replicated documents are
visible in searches
That is exactly what I wanted to say.
I've attached the cache statistics.
If you are interested
Hi,
Thanks.
I want to search with both title and empname. For example, when we use any
search engine like Google or Yahoo, we do not specify any type (name,
title, song...). Here (suggest/?q=michael b&df=title&defType=lucene) we
are specifying a title-type search.
I removed said
Hi,
At the moment, partitioning with solrcloud is hash based on uniqueid.
What I'd like to do is have custom partitioning, e.g. based on date
(shard_MMYY).
I'm aware of https://issues.apache.org/jira/browse/SOLR-2592, but
after a cursory look it seems that with the latest patch, one might
end up
I don't think I changed my solrconfig.xml file from the default that
was provided in the example folder for solr 4.0.
On Tue, Sep 4, 2012 at 3:40 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: <core name="MYCORE_test" instanceDir="MYCORE" dataDir="MYCORE_test" />
I'm pretty sure what you hav
i want to search with title and empname both.
I know, I give that URL just to get the idea here.
If you try
suggest/?q=michael b&df=title&defType=lucene&fl=title
you will see that what you are interested in will be in the results
section, not the <lst name="spellcheck"> section.
I've recently upgraded to solr 4.0 from solr 3.5 and I think my delete
statement used to work, but now it doesn't seem to be deleting. I've
been experimenting around, and it seems like this should be the URL
for deleting the document with the uri of network_24.
In a browser, I first go here:
Check to make sure that you are not stumbling into SOLR-3432: deleteByQuery
silently ignored if updateLog is enabled, but {{_version_}} field does not
exist in schema.
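If that is the case, the fix is to declare the field in schema.xml:

```xml
<field name="_version_" type="long" indexed="true" stored="true"/>
```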
See:
https://issues.apache.org/jira/browse/SOLR-3432
-- Jack Krupansky
-Original Message-
From: Paul
Sent:
I think I found the cause for this. It is partially my fault, because I sent
Solr a field with an empty value, but this is also a configuration problem.
https://issues.apache.org/jira/browse/SOLR-3792
-Original Message-
From: Yoni Amir [mailto:yoni.a...@actimize.com]
Sent: Tuesday,
Wow, That was quick. Thank you very much Mr. Siren. I shall remove the
compression node in the solrconfig.xml and let you know how it went.
Thanks,
Ravi Kiran Bhaskar
On Wed, Sep 5, 2012 at 2:54 AM, Sami Siren ssi...@gmail.com wrote:
I opened SOLR-3789. As a workaround you can remove str
This may be a bit off topic: How do you index an existing website and control
the data going into the index?
We already have Java code to process the HTML (or XHTML) and turn it into a
SolrJ Document (removing tags and other things we do not want in the index). We
use SolrJ for indexing.
So I
Please take a look at the Apache Nutch project.
http://nutch.apache.org/
-Original message-
From:Lochschmied, Alexander alexander.lochschm...@vishay.com
Sent: Wed 05-Sep-2012 17:09
To: solr-user@lucene.apache.org
Subject: Website (crawler for) indexing
This may be a bit off
Hello!
You can implement your own crawler using Droids
(http://incubator.apache.org/droids/) or use Apache Nutch
(http://nutch.apache.org/), which is very easy to integrate with Solr
and is a very powerful crawler.
--
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Check to make sure that you are not stumbling into SOLR-3432: deleteByQuery
silently ignored if updateLog is enabled, but {{_version_}} field does not
exist in schema.
See:
https://issues.apache.org/jira/browse/SOLR-3432
This could happen if you kept the new 4.0 solrconfig.xml, but copied in
Rohit:
If it's feasible, the easiest thing to do is to shut down your servlet
container, rm -r * inside of the data directory, and then restart the
container.
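The steps above can be sketched as follows; the service name and data path are assumptions, so substitute your own container and your core's dataDir:

```shell
# Sketch of the "wipe the data directory" approach; service name and
# path are assumptions -- substitute your own container and dataDir.
# service tomcat6 stop
DATA_DIR="/tmp/solr-demo/data"       # stand-in for your core's dataDir
mkdir -p "$DATA_DIR/index"
touch "$DATA_DIR/index/segments_1"   # stand-in index files
rm -rf "$DATA_DIR"/*                 # remove everything inside data/
# service tomcat6 start              # Solr re-creates an empty index
```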
Michael Della Bitta
Appinions | 18 East 41st St., Suite 1806 | New York, NY 10017
Hi,
We currently share a single Solr read index on an NFS mount accessed by
various Solr instances from various devices, which gives us a highly
performant cluster framework. We would like to migrate to Amazon or
other cloud. Is there any way (compatibility) to have solr index on
Amazon S3 file cloud
In the analysis page, the n-grams produced by EdgeNgramTokenFilter are at
sequential positions. This seems wrong, because an n-gram is associated with a
source token at a specific position. It also really messes up phrase matches.
With the source text fleen, these positions and tokens are
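For context, the kind of edge n-gram analysis chain being discussed is typically declared like this in schema.xml (the tokenizer choice and gram sizes here are illustrative, not the poster's actual settings):

```xml
<fieldType name="text_edge" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- emits prefixes f, fl, fle, ... of each source token -->
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="10"/>
  </analyzer>
</fieldType>
```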
Thanks everyone. Adding the _version_ field in the schema worked.
Deleting the data directory works for me, but I was not sure why deleting
using curl was not working.
On Wed, Sep 5, 2012 at 1:49 PM, Michael Della Bitta
michael.della.bi...@appinions.com wrote:
Rohit:
If it's easy, the easiest
Amazon doesn't have a prebuilt network filesystem that's mountable on
multiple hosts out of the box. The closest thing would be setting up
NFS among your hosts yourself, but at that point it'd probably be
easier to set up Solr replication.
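Solr replication is configured with a ReplicationHandler in each solrconfig.xml; a minimal sketch, with the master URL as a placeholder:

```xml
<!-- master solrconfig.xml -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

<!-- slave solrconfig.xml -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```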
Michael Della Bitta
That was exactly it. I added the following line to schema.xml and it now works.
<field name="_version_" type="long" indexed="true" stored="true"/>
On Wed, Sep 5, 2012 at 10:13 AM, Jack Krupansky j...@basetechnology.com wrote:
Check to make sure that you are not stumbling into SOLR-3432: deleteByQuery
Nicolas -
Can you elaborate on your use and configuration of Solr on NFS? What lock
factory are you using? (You had to change from the default, right?)
And how are you coordinating updates/commits to the other servers? Where does
indexing occur and then how are commits sent to the NFS
: That was exactly it. I added the following line to schema.xml and it now
works.
:
: <field name="_version_" type="long" indexed="true" stored="true"/>
Just to be clear: how exactly did you upgrade to Solr 4.0 from Solr 3.5
-- did you throw out your old solrconfig.xml and use the example
: I don't think I changed by solrconfig.xml file from the default that
: was provided in the example folder for solr 4.0.
ok ... well the Solr 4.0-BETA example solrconfig.xml has this in it...
<dataDir>${solr.data.dir:}</dataDir>
So if you want to override the dataDir using a property like your
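In legacy solr.xml, such a per-core property override can be written like this (the names are illustrative, mirroring the MYCORE example quoted earlier):

```xml
<core name="MYCORE_test" instanceDir="MYCORE">
  <property name="solr.data.dir" value="MYCORE_test/data"/>
</core>
```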
Actually, I didn't technically upgrade. I downloaded the new
version, grabbed the example, and pasted in the fields from my schema
into the new one. So the only two files I changed from the example are
schema.xml and solr.xml.
Then I reindexed everything from scratch so there was no old index
I don't see a Jira for it, but I do see the bad behavior in both Solr 3.6
and 4.0-BETA in Solr admin analysis.
Interestingly, the screen shot for LUCENE-3642 does in fact show the
(improperly) incremented positions for successive ngrams.
See:
https://issues.apache.org/jira/browse/LUCENE-3642
And when you pasted your 3.5 fields into the 4.0 schema, did you delete the
existing fields (including _version_) at the same time?
-- Jack Krupansky
-Original Message-
From: Paul
Sent: Wednesday, September 05, 2012 4:32 PM
To: solr-user@lucene.apache.org
Subject: Re: Still see
: Actually, I didn't technically upgrade. I downloaded the new
: version, grabbed the example, and pasted in the fields from my schema
: into the new one. So the only two files I changed from the example are
: schema.xml and solr.xml.
ok -- so with the fix for SOLR-3432, anyone who tries similar
: Subject: Re: use of filter queries in Lucene/Solr Alpha40 and Beta4.0
Günter, this is definitely strange.
The good news is, I can reproduce your problem.
The bad news is, I can reproduce your problem - and I have no idea what's
causing it.
I've opened SOLR-3793 to try to get to the bottom
The replication finally worked after I removed the compression setting
from the solrconfig.xml on the slave. Thanks for providing the
workaround.
Ravi Kiran
On Wed, Sep 5, 2012 at 10:23 AM, Ravi Solr ravis...@gmail.com wrote:
Wow, That was quick. Thank you very much Mr. Siren. I shall remove
Not sure whether this is a duplicate question. I did try to browse through the
archive and did not find anything specific to what I was looking for.
I see duplicates in the dictionary if I update the document concurrently.
I am using Solr 3.6.1 with the following configurations for suggester:
Solr
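A typical Solr 3.6 suggester configuration looks roughly like the following; the component name, lookup implementation, and field are illustrative, not the poster's actual settings:

```xml
<searchComponent name="suggest" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">suggest</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <str name="lookupImpl">org.apache.solr.spelling.suggest.tst.TSTLookup</str>
    <!-- field the suggestion dictionary is built from (illustrative) -->
    <str name="field">title</str>
  </lst>
</searchComponent>
```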
Thanks for posting this!
I ran into exactly this issue yesterday, and ended up deleting the files to
get around it.
Mark
Sent from my mobile doohickey.
On Sep 6, 2012 4:13 AM, Rohit Harchandani rhar...@gmail.com wrote:
Thanks everyone. Adding the _version_ field in the schema worked.
Deleting
Hi,
Running the example Solr from the 3.6.1 distribution, I cannot make it
keep persistent HTTP connections:
$ ab -c 1 -n 100 -k 'http://localhost:8983/solr/select?q=*:*' | grep
Keep-Alive
Keep-Alive requests:0
What should I change to fix that?
P.S. We have the same issue in production
Some extra information. If I use curl and force it to use HTTP 1.0, it
is more visible that Solr doesn't allow persistent connections:
$ curl -v -0 'http://localhost:8983/solr/select?q=*:*' -H 'Connection: Keep-Alive'
* About to connect() to localhost port 8983 (#0)
* Trying ::1... connected
: I download solr 4.0 beta and the .asc file. I use gpg4win and type this in
: the command line:
:
: gpg --verify file.zip file.asc
:
: I get a message like this:
:
: *gpg: Can't check signature: No public key*
you can verify the asc sig file using the public KEYS file hosted on the
main
I have a data-config.xml with 2 entities, like
<entity name="full" pk="ID" ...>
...
</entity>
and
<entity name="delta_build" pk="ID" ...>
...
</entity>
The entity delta_build is for delta import; the query is
?command=full-import&entity=delta_build&clean=false
and I want to use deletedPkQuery to delete index entries. So I
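deletedPkQuery is the DataImportHandler attribute that returns the primary keys of rows to remove, and DIH evaluates it during delta-import. A sketch of how it sits on the entity (the table and column names are hypothetical):

```xml
<entity name="delta_build" pk="ID"
        query="SELECT * FROM items"
        deletedPkQuery="SELECT ID FROM items WHERE deleted = 1">
  ...
</entity>
```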
Thank you Hoss. I imported the KEYS file using *gpg --import KEYS.txt*.
Then I did the *--verify* again. This time I get an output like this:
gpg: Signature made 08/06/12 19:52:21 Pacific Daylight Time using RSA key
ID 322D7ECA
gpg: Good signature from Robert Muir (Code Signing Key)
Any thoughts?
It is weird, I can see the words are being segmented correctly in Field
Analysis. I checked almost every website; they all suggest either
CJKAnalyzer, IKAnalyzer or SmartChineseAnalyzer. But if I can see the
words being segmented, then it should not be a problem with the settings of
There is another way to do this: crawl the mobile site!
The Fennec browser from Mozilla runs on Android. I often use it to get pagecrap
off my screen.
- Original Message -
| From: Lance Norskog goks...@gmail.com
| To: solr-user@lucene.apache.org
| Sent: Wednesday, August 29, 2012 7:37:37
I believe that you should remove the Analyzer class name from the field type. I
think it overrides the stack of tokenizer/token filters. Other fieldType
declarations do not have both an Analyzer class and tokenizers.
<analyzer type="index" class="org.apache.lucene.analysis.cjk.CJKAnalyzer">
should be:
Thank you Lance.
I just found out the problem, in case somebody else comes across this.
It turns out that Tomcat does not accept UTF-8 in URLs
by default.
http://wiki.apache.org/solr/SolrTomcat#URI_Charset_Config
I have no idea why that is the case, but after I followed the instructions
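Per that wiki page, the relevant setting is URIEncoding on the HTTP connector in Tomcat's server.xml (the port and other attributes shown are Tomcat defaults):

```xml
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           URIEncoding="UTF-8"
           redirectPort="8443" />
```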