Running 1.4 nightly in production as well, also for the Java replication and
for the improved facet count algorithms. No problems, all running smoothly.
Bye,
Jaco.
2009/5/13 Erik Hatcher
> We run a not too distant trunk (1.4, probably a month or so ago) version of
> Solr on LucidFind at http:/
Hello
I'm kind of new to Solr and I've read about replication, and the fact that a
node can act as both master and slave.
If a replica fails and then comes back online, I suppose that it will resync
with the master.
But what happens if the master fails? A slave that is configured as master
will k
I have only one log4j.properties file in the classpath, and even if I
configure it for the particular package the Solr exception would come from,
the same issue occurs. I removed the logger for my application and am using
it only for Solr logging.
~Sagar
> Date: Tue, 12 May 2009 09:59:01
I was looking at the same problem, and had a discussion with Noble. You can
use a hack to achieve what you want, see
https://issues.apache.org/jira/browse/SOLR-1154
Thanks,
Jianhan
On Tue, May 12, 2009 at 5:13 PM, Bryan Talbot wrote:
> So how are people managing solrconfig.xml files which are
In reply to both Matt and Jay's comments, the particular situation I'm
dealing with is one where rights will change relatively little once
they are established. Typically a document will be loaded and
indexed, and a decision will be made on sharing that more-or-less
immediately. It might change a
send a request afterwards, or you can add ?commit=true to
the /update request with the adds.
Erik
On May 12, 2009, at 8:57 PM, alx...@aim.com wrote:
Tried to add a new record using
curl http://localhost:8983/solr/update -H "Content-Type: text/xml" --
data-binary '
20090512170
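Spelled out, the two options Erik describes might look like this (a sketch; host, port, and payloads per the stock example setup):

```shell
# Option 1: send an explicit commit request after the adds
curl http://localhost:8983/solr/update -H 'Content-Type: text/xml' --data-binary '<commit/>'

# Option 2: commit in the same request as the add
curl 'http://localhost:8983/solr/update?commit=true' -H 'Content-Type: text/xml' --data-binary '<add><doc>...</doc></add>'
```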
We run a not too distant trunk (1.4, probably a month or so ago)
version of Solr on LucidFind at http://www.lucidimagination.com/search
Erik
On May 12, 2009, at 5:02 PM, Walter Underwood wrote:
We're planning our move to 1.4, and want to run one of our production
servers with the new
Tried to add a new record using
curl http://localhost:8983/solr/update -H "Content-Type: text/xml"
--data-binary '
20090512170318
86937aaee8e748ac3007ed8b66477624
0.21189615
test.com
test test
20090513003210909
'
I get
071
and added records are not found in the search.
Any ideas w
So how are people managing solrconfig.xml files which are largely the
same other than differences for replication?
I don't think it's a "good thing" to maintain two copies of the same
file and I'd like to avoid that. Maybe enabling the XInclude feature
in DocumentBuilders would make it pos
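A sketch of how the shared parts might then be pulled in, assuming the XML parser has XInclude enabled; the include file name is made up:

```xml
<config xmlns:xi="http://www.w3.org/2001/XInclude">
  <!-- shared settings kept in one place (hypothetical file name) -->
  <xi:include href="common-solrconfig.xml"/>
  <!-- only the master/slave replication differences live in this file -->
</config>
```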
hi all :)
I'm having trouble with camel-cased query strings and the dismax handler.
a user query
LeAnn Rimes
isn't matching the indexed term
Leann Rimes
even though both are lower-cased in the end. furthermore, the
analysis tool shows a match.
the debug query looks like
"parsedquery":"+
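One common cause (the thread is truncated, so this is a guess) is a WordDelimiter-style split on case changes applied at query time but not at index time: "LeAnn" becomes two tokens while the indexed "Leann" stays one, so the lower-cased forms never line up. A toy illustration of that split rule:

```java
public class CaseSplit {
    // Toy version of a WordDelimiter-style rule: a zero-width split point
    // between a lower-case letter and a following upper-case letter.
    static String[] splitOnCaseChange(String s) {
        return s.split("(?<=\\p{Lower})(?=\\p{Upper})");
    }

    public static void main(String[] args) {
        System.out.println(String.join("|", splitOnCaseChange("LeAnn"))); // Le|Ann
        System.out.println(String.join("|", splitOnCaseChange("Leann"))); // Leann
    }
}
```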
We're using 1.4-dev 749558:749756M that we built on 2009-03-03
13:10:05 for our master/slave production environment using the Java
Replication code.
Thanks for your time!
Matthew Runo
Software Engineer, Zappos.com
mr...@zappos.com - 702-943-7833
On May 12, 2009, at 2:02 PM, Walter Underwood
We're planning our move to 1.4, and want to run one of our production
servers with the new code. Just to feel better about it, is anyone else
running 1.4 in production?
I'm building 2009-05-11 right now.
wuner
Here is a good presentation on search security from the Infonortics
Search Conference that was held a few weeks ago.
http://www.infonortics.com/searchengines/sh09/slides/kehoe.pdf
The approach you are using is called early-binding. As Jay mentioned,
one of the downsides is updating the docu
Thanks for the tip. I went to their website (www.fastsearch.com), and got
as far as the second line, top left 'A Microsoft Subsidiary'...at which
point, hopes of it being another open source solution quickly faded. ;-)
Seriously, though, it looks like an interesting product, but open source is
a m
The only downside would be that you would have to update a document anytime
a user was granted or denied access. You would have to query before the
update to get the current values for grantedUID and deniedUID, remove/add
values, and update the index. If you don't have a lot of changes in the
syste
I also work with the FAST Enterprise Search engine and this is exactly
how their Security Access Module works. They actually use a modified
base-32 encoded value for indexing, but that is because they don't
have the luxury of untokenized/un-processed String fields like Solr.
Thanks,
Matt
On Tue, May 12, 2009 at 3:03 PM, Marc Sturlese wrote:
> I have seen that if I set the value of queryResultWindowSize to 0 in
> solrconfig.xml solr will return an error of divided by zero.
Seems like a configuration error since requesting that results be
retrieved in 0 size chunks doesn't make a
Paul -- thanks for the reply, I appreciate it. That's a very practical
approach, and is worth taking a closer look at. Actually, taking your idea
one step further, perhaps three fields; 1) ownerUid (uid of the document's
owner) 2) grantedUid (uid of users who have been granted access), and 3)
den
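For illustration, the three fields could be declared as plain string fields and enforced with a filter query at search time (a sketch; field names from the message, the fq clause assumes standard Solr query syntax):

```xml
<!-- schema.xml: untokenized ACL fields, one user id per value -->
<field name="ownerUid"   type="string" indexed="true" stored="true"/>
<field name="grantedUid" type="string" indexed="true" stored="true" multiValued="true"/>
<field name="deniedUid"  type="string" indexed="true" stored="true" multiValued="true"/>
```

At query time the application would then append something like fq=(ownerUid:u42 OR grantedUid:u42) -deniedUid:u42 for a user u42.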
I have seen that if I set the value of queryResultWindowSize to 0 in
solrconfig.xml, Solr will return a divide-by-zero error.
Checking the source, I have seen it can be fixed in SolrIndexSearcher. At the
end of the getDocListC function, the code reads:
if (maxDocRequested < queryResultWind
On Tue, May 12, 2009 at 10:42 PM, Bryan Talbot wrote:
> For replication in 1.4, the wiki at
> http://wiki.apache.org/solr/SolrReplication says that a node can be both
> the master and a slave:
>
> A node can act as both master and slave. In that case both the master and
> slave configuration lists
On Tue, May 12, 2009 at 9:48 PM, Wayne Pope wrote:
>
> I have this request:
>
>
> http://localhost:8983/solr/select?start=0&rows=20&qt=dismax&q=copy&hl=true&hl.snippets=4&hl.fragsize=50&facet=true&facet.mincount=1&facet.limit=8&facet.field=type&fq=company-id%3A1&wt=javabin&version=2.2
>
> (I've be
You can fix the path of the index in your solrconfig.xml
On Tue, May 12, 2009 at 4:48 PM, KK wrote:
> One more information I would like to add.
> The entry in solr stats page says this:
>
> readerDir : org.apache.lucene.store.FSDirectory@/home/kk/solr/data/index
>
> when I ran from /home/kk
> a
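For reference, the index location is controlled by the dataDir element (a sketch; path taken from the stats output above):

```xml
<!-- solrconfig.xml: pin the index to an absolute location,
     independent of the directory Tomcat was started from -->
<dataDir>/home/kk/solr/data</dataDir>
```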
Yes, that is part of it, but there is more to it. See Yonik's comment
about needs further down.
On May 12, 2009, at 7:36 AM, Norman Leutner wrote:
So are you using boundary box to find results within a given range(km)
like mentioned here: http://www.nsshutdown.com/projects/lucene/whitepaper
For replication in 1.4, the wiki at http://wiki.apache.org/solr/SolrReplication
says that a node can be both the master and a slave:
A node can act as both master and slave. In that case both the master
and slave configuration lists need to be present inside the
ReplicationHandler requestHa
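The combined configuration the wiki describes looks roughly like this (parameter names as in the 1.4 ReplicationHandler documentation; the master host name is made up):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```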
Usually that means there is another log4j.properties or log4j.xml file in
your classpath that is being found before the one you are intending to use.
Check your classpath for other versions of these files.
-Jay
On Tue, May 12, 2009 at 3:38 AM, Sagar Khetkade
wrote:
>
> Hi,
> I have solr impleme
Hi,
We've implemented search in our product here at our very small company,
and the developer who integrated Solr has left. I'm picking up the code base
and have run into a problem, which I imagine is simple to solve.
I have this request:
http://localhost:8983/solr/select?start=0&rows=20&qt=
I mean you can sort the facet results by frequency, which happens to
be the default behavior.
Here is an example field for your schema:
stored="true" multiValued="true" />
Here is an example query:
http://localhost:8983/solr/select?q=textfield:copper&facet=true&facet.field=textfieldfacet&fa
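The field definition above lost its opening tag in the mail archive; based on the query it was probably something along these lines (field name inferred from the query, the type is a guess):

```xml
<field name="textfieldfacet" type="string" indexed="true" stored="true" multiValued="true"/>
```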
I just committed a minor patch suggested by Jim Murphy in SOLR-42 to
slightly lower the safe read-ahead limit to avoid reading beyond a
mark. Could you try out trunk (or wait until the next nightly build)?
-Yonik
http://www.lucidimagination.com
On Tue, May 12, 2009 at 10:57 AM, Nikolai Derzhak
OK. I've applied a dirty hack as a temporary solution:
in src/java/org/apache/solr/analysis/HTMLStripReader.java of 1.4-dev, I
enclosed io.reset in a try structure.
( * @version $Id: HTMLStripReader.java 646799 2008-04-10 13:36:23Z yonik $)
"
private void restoreState() throws IOException {
try
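Filled out, the try/catch wrapper presumably looked something like the following; safeReset and its catch body are my reconstruction, not the actual patch:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class ResetHack {
    // Sketch of the described hack: swallow the IOException from reset()
    // (e.g. "Mark invalid") instead of letting it abort indexing.
    static void safeReset(BufferedReader in) {
        try {
            in.reset();
        } catch (IOException e) {
            // mark was invalid or never set; ignore and carry on
        }
    }

    public static void main(String[] args) throws IOException {
        BufferedReader r = new BufferedReader(new StringReader("hello"));
        // no mark() was called, so a bare reset() would throw; safeReset ignores it
        safeReset(r);
        System.out.println("survived"); // prints "survived"
    }
}
```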
Thanks Matt for your reply.
What do you mean by frequency (the default)?
Can you please provide an example of what the schema and query would look like.
--Sachin
Matt Weber-2 wrote:
>
> You may have to take care of this at index time. You can create a new
> multivalued field that has minimal processing
You may have to take care of this at index time. You can create a new
multivalued field that has minimal processing. Then at index time,
index the full contents of textfield as normal, but then also split it
on whitespace and index each word in the new field you just created.
Now you wil
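A rough sketch of that index-time step in plain Java (string handling only, not a Solr API call):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FacetFieldSplit {
    // Split the full text on whitespace and lower-case each token; these
    // tokens are what the minimal-processing facet field would hold.
    static List<String> facetTokens(String text) {
        return Arrays.stream(text.trim().split("\\s+"))
                     .map(String::toLowerCase)
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(facetTokens("Copper Glass copper")); // [copper, glass, copper]
    }
}
```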
Does anybody have an answer to this post? I have a similar requirement.
Suppose I have a free-text field, say
I index the field. If I search for textfield:copper, I have to get facet
counts for the most common words found in the textfield.
i.e.
example: a search for textfield:glass
should return facet counts f
It must be KeywordTokenizer*Factory* :)
Koji
sunnyfr wrote:
hi
I tried but Ive an error :
May 12 15:48:51 solr-test jsvc.exec[2583]: May 12, 2009 3:48:51 PM
org.apache.solr.common.SolrException log SEVERE:
org.apache.solr.common.SolrException: Error loading class
'solr.KeywordTokenizer' ^Iat
o
Use KeywordTokenizerFactory. Pasted from Solr's example schema.xml:
Erik
On May 12, 2009, at 9:49 AM, sunnyfr wrote:
hi
I tried but Ive an error :
May 12 15:48:51 solr-test jsvc.exec[2583]: May 12, 2009 3:48:51 PM
org.apache.solr.common.SolrException log SEVERE:
org.apache.s
hi
I tried but I've an error:
May 12 15:48:51 solr-test jsvc.exec[2583]: May 12, 2009 3:48:51 PM
org.apache.solr.common.SolrException log SEVERE:
org.apache.solr.common.SolrException: Error loading class
'solr.KeywordTokenizer' ^Iat
org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLo
So are you using boundary box to find results within a given range(km)
like mentioned here:
http://www.nsshutdown.com/projects/lucene/whitepaper/locallucene_v2.html ?
Best regards
Norman Leutner
all2e GmbH
-Original Message-
From: Grant Ingersoll [mailto:gsing...@apache.org]
Sent
One more piece of information I would like to add.
The entry on the Solr stats page says this:
readerDir : org.apache.lucene.store.FSDirectory@/home/kk/solr/data/index
when I ran from /home/kk
and this:
readerDir : org.apache.lucene.store.FSDirectory@
/home/kk/junk/solr/data/index
after running from /home/
See https://issues.apache.org/jira/browse/SOLR-773. In other words,
we're working on it and would love some help!
-Grant
On May 12, 2009, at 7:12 AM, Norman Leutner wrote:
Hi together,
I'm new to Solr and want to port a geographical range search from
MySQL to Solr.
Currently I'm using
Hi together,
I'm new to Solr and want to port a geographical range search from MySQL to Solr.
Currently I'm using some mathematical functions (based on the GRS80 model)
directly within MySQL to calculate
the actual distance from the locations within the database to a current
location (lat and long
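For reference, the great-circle distance being computed in SQL can be sketched with the spherical haversine formula (the GRS80 ellipsoid refinement is ignored here, so results differ slightly from the MySQL version):

```java
public class Haversine {
    static final double EARTH_RADIUS_KM = 6371.0; // mean spherical radius

    // Great-circle distance between two lat/long points, in kilometres.
    static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                   * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Berlin -> Munich, roughly 500 km
        System.out.println(distanceKm(52.52, 13.405, 48.137, 11.575));
    }
}
```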
Thanks for your response @aklochkov.
But I again noticed that something is wrong in my Solr/Tomcat config [I
spent a lot of time making Solr run], because on the Solr admin page [
http://localhost:8080/solr/admin/] what I see is that the $CWD is the
location from which I restarted Tomcat, and it seems th
Hi,
I have solr implemented in multi-core scenario and also implemented
solr-560-slf4j.patch for implementing the logging. But the problem I am facing
is that the logs are going to the stdout.log file not the log file that I have
mentioned in the log4j.properties file. Can anybody give me work
Hi,
On May 7, 2009, at 6:03 , Noble Paul നോബിള്
नोब्ळ् wrote:
Going forward, the Java-based replication is going to be the preferred
means of replicating the index. It does not support replicating files in the
dataDir; it only supports replicating index files and conf files
(files in the conf dir). I
Hi folks,
I just wrote a Servlet Filter to handle authentication for our
service. Here's what I did:
1. Created a dir in contrib
2. Put my project in there, I took the dataimporthandler build.xml as
an example and modified it to suit my needs. Worked great!
3. ant dist now builds my jar and inc
Good day, people.
We use Solr to search in mailboxes (Dovecot).
But with some "bad" messages, Solr 1.4-dev generates an error:
"
SEVERE: java.io.IOException: Mark invalid
at java.io.BufferedReader.reset(BufferedReader.java:485)
at
org.apache.solr.analysis.HTMLStripReader.restoreState(HTMLStripReader.ja
Hi,
I know that when starting, Solr checks for the index directory's existence and
creates a fresh new index if it doesn't exist. Does that help? If not, the next
step I'd take in your case is patching the SolrCore.initIndex method - insert
some logging, or run EmbeddedSolrServer with a debugger, etc.
On Mon, May 11, 20