Otis Gospodnetic wrote:
Are people using Solr trunk in serious production environments? I suspect
the answer is yes, just want to see if there are any gotchas/warnings.
Yes, since it seemed the best way to get edismax with this patch[1]; and to get
the more update-friendly MergePolicy[2].
I have some documents with a bunch of attachments (images, thumbnails
for them, audio clips, word docs, etc); and am currently dealing with
them by just putting a path on a filesystem to them in solr; and then
jumping through hoops of keeping them in sync with solr.
Would it be nuts to stick the
by index order, a strict byte-by-byte sort. Which
doesn't always work for me either. I haven't quite figured out the solution
to this sort of problem.
Ron Mayer wrote:
Lance Norskog wrote:
You may not sort on a tokenized field. You may not sort on a multiValued
field. You can only have one
If I want to move a core from one physical machine to another,
is it as simple as just
scp -r core5 otherserver:/path/on/other/server/
and then adding
<core name="core5name" instanceDir="core5" />
on that other server's solr.xml file and restarting the server there?
PS: Should I have been able
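The scp-plus-solr.xml recipe above can be sketched locally. This is a rough illustration in plain Python (the copy standing in for scp, old-style solr.xml `<cores>` layout assumed; paths and the core name are the hypothetical ones from the question):

```python
import shutil
import tempfile
import xml.etree.ElementTree as ET
from pathlib import Path

def move_core(core_dir: Path, dest_solr_home: Path, core_name: str) -> None:
    """Copy a core's directory into another Solr home and register it in
    solr.xml -- roughly what the scp + solr.xml edit would do by hand."""
    shutil.copytree(core_dir, dest_solr_home / core_dir.name)
    solr_xml = dest_solr_home / "solr.xml"
    tree = ET.parse(solr_xml)
    cores = tree.getroot().find("cores")
    # Equivalent of adding <core name="..." instanceDir="..."/> by hand:
    ET.SubElement(cores, "core",
                  {"name": core_name, "instanceDir": core_dir.name})
    tree.write(solr_xml)

# Demo against a throwaway "Solr home" on disk:
home = Path(tempfile.mkdtemp())
(home / "solr.xml").write_text("<solr><cores></cores></solr>")
src = Path(tempfile.mkdtemp()) / "core5"
(src / "conf").mkdir(parents=True)
(src / "conf" / "schema.xml").write_text("<schema/>")

move_core(src, home, "core5name")
print((home / "solr.xml").read_text())
```

The data directory travels with the core, so the restart on the other server should pick it up, assuming matching Solr versions on both machines.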
Andrzej Bialecki wrote:
On 2010-10-25 11:22, Toke Eskildsen wrote:
On Thu, 2010-07-22 at 04:21 +0200, Li Li wrote:
But it shows a problem of distributed search without common idf:
a doc will get a different score in different shards.
Bingo.
I really don't understand why this fundamental problem
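The effect is easy to see with the classic Lucene idf formula (as in DefaultSimilarity): each shard computes idf from its own local docFreq/numDocs, so the same term can weigh very differently per shard. The shard statistics below are made up for illustration:

```python
import math

def lucene_idf(doc_freq: int, num_docs: int) -> float:
    # Classic Lucene DefaultSimilarity idf: 1 + ln(numDocs / (docFreq + 1))
    return 1.0 + math.log(num_docs / (doc_freq + 1))

# Same term, two shards, different local statistics (hypothetical numbers):
shard_a = lucene_idf(doc_freq=10, num_docs=1_000_000)      # term is rare here
shard_b = lucene_idf(doc_freq=90_000, num_docs=1_000_000)  # term is common here

print(shard_a, shard_b)  # the same term is weighted very differently

# With merged ("common") stats every shard would use the same weight:
common = lucene_idf(doc_freq=10 + 90_000, num_docs=2_000_000)
print(common)
```

So identical documents placed on different shards can score differently for the same query, which is exactly the complaint in the thread.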
Erick Erickson wrote:
In general, the behavior when sorting is not predictable when
sorting on a tokenized field, which text is. What would
it mean to sort on a field with erick and Moazzam as tokens
in a single document? Should it sort in the e's or the m's?
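The ambiguity Erick describes can be shown in a few lines. Depending on which token you treat as the sort key, the same document lands in a different place relative to a neighbor (the names are just the ones from the example; "hector" is a hypothetical second document). The usual fix is a copyField into an untokenized string field and sorting on that:

```python
# One document whose text field analyzed into two tokens:
tokens = ["erick", "Moazzam"]
neighbor = "hector"  # hypothetical single-token document

by_first_token = sorted([tokens[0], neighbor])         # doc sorts in the e's
by_last_token = sorted([tokens[1].lower(), neighbor])  # doc sorts in the m's

print(by_first_token)  # ['erick', 'hector']
print(by_last_token)   # ['hector', 'moazzam']
```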
Might it be possible or reasonable to
that in English someone'll probably
put adjectives before nouns in both the query and the document's text.
The one annoyance is that I think the phrase slop doesn't care much
about the order of words.
On Sun, Oct 10, 2010 at 8:50 PM, Ron Mayer r...@0ape.com wrote:
Walter Underwood wrote:
I
My system which has documents being added pretty much
continually seems pretty well behaved except, it seems,
when large segments get merged. During that time
the system starts really dragging, and queries that took
only a couple seconds are taking dozens.
Some other I/O bound servers seem to
Stéphane Corlosquet wrote:
Hi all,
I'm new to solr so please let me know if there is a more appropriate place
for my question below.
I'm noticing a rather unexpected number of results when I add more keywords
to a search. I'm listing below an example (where I replaced the real keywords
Ron Mayer wrote:
Yonik Seeley wrote:
I just checked in the last part of those changes that should eliminate
any restriction on key.
But, that last part dealt with escaping keys that contained whitespace or }
Your example really should have worked after my previous 2 commits.
Perhaps not all
Yonik Seeley wrote:
On Tue, Sep 7, 2010 at 8:31 PM, Ron Mayer r...@0ape.com wrote:
Short summary:
* Mixing Facets and Shards gives me a NullPointerException
when not all docs have all facets.
https://issues.apache.org/jira/browse/SOLR-2110
I believe the underlying real issue stemmed
Yonik Seeley wrote:
I just checked in the last part of those changes that should eliminate
any restriction on key.
But, that last part dealt with escaping keys that contained whitespace or }
Your example really should have worked after my previous 2 commits.
Perhaps not all of the servers got
Marc Sturlese wrote:
I noticed that long ago.
Fixed it doing in HighlightComponent finishStage:
...
public void finishStage(ResponseBuilder rb) {
...
}
Thanks! I'll try that
I also seem to have a similar problem with shards + facets -- in particular
it seems like the error
Short summary:
* Mixing Facets and Shards gives me a NullPointerException
when not all docs have all facets.
* Attached patch improves the failure mode, but still
spews errors in the log file
* Suggestions how to fix that would be appreciated.
In my system, I tried separating out a
there too.
-Yonik
http://lucenerevolution.org Lucene/Solr Conference, Boston Oct 7-8
On Tue, Sep 7, 2010 at 8:31 PM, Ron Mayer r...@0ape.com wrote:
Short summary:
* Mixing Facets and Shards gives me a NullPointerException
when not all docs have all facets.
* Attached patch improves
Is there a good way of handling a large number of facets that are quite
sparse (most documents not having any value for most facets)?
In my system I have quite a few documents (few million, will soon
grow to mid tens of millions), and our users are requesting an
ever-increasing number of facets
Jonathan Rochkind wrote:
What matters isn't how many documents have a value, so much
as how many unique values there are in the field total. If
there aren't that many, faceting can be done fairly quickly and fairly
efficiently.
Really?
Don't these 2 log file lines:
INFO: UnInverted
Jonathan Rochkind wrote:
I could certainly be wrong. If you have a facet with a LOT fewer unique
values than documents in the query, I'd be curious what happens if you try
facet.method=enum.
Cool. I'll be trying that later.
I'm definitely not an expert, just trying to help figure it
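For fields with few unique values relative to the document count, `facet.method=enum` walks the terms and intersects filters rather than un-inverting the whole field, which is the case Jonathan describes. A sketch of the request (local Solr URL and field name are assumptions; the parameter names are the real ones):

```python
from urllib.parse import urlencode

# facet.method=enum: iterate the field's terms and intersect filterCache
# entries; tends to win when unique values << matching documents.
params = {
    "q": "*:*",
    "facet": "true",
    "facet.field": "attachment_type",  # hypothetical sparse field
    "facet.method": "enum",
    "facet.mincount": 1,               # hide the empty buckets of a sparse facet
    "rows": 0,
}
url = "http://localhost:8983/solr/select?" + urlencode(params)
print(url)
```

`facet.mincount=1` also keeps the response small when most facet values match nothing.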
Short summary:
* Using both highlighting and shards and q.alt is giving me a null
pointer exception.
* Really easy to workaround; but since the similar cases without
shards work, perhaps this should too.
* If you think it should be fixed, point me in the right direction
and I
https://issues.apache.org/jira/browse/SOLR-2058
On 2010-08-19 Ron Mayer wrote:
Chris Hostetter wrote:
[Yonik Seeley wrote]
: Perhaps fold it into the pf/pf2 syntax?
: pf=text~1^2 // proposed syntax...
Big +1 to this idea ...
...
I added a ticket here: https://issues.apache.org/jira/browse/SOLR-2058
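To make the proposed `field~slop^boost` syntax concrete, here is a tiny parser for it. This is only an illustration of the syntax being discussed in SOLR-2058, not Solr's actual parsing code:

```python
import re

# field[~slop][^boost], e.g. "text^2" (current) or "text~1^2" (proposed)
PF_RE = re.compile(r"^(?P<field>[^~^]+)(?:~(?P<slop>\d+))?(?:\^(?P<boost>[\d.]+))?$")

def parse_pf(entry: str):
    """Return (field, slop, boost) with slop defaulting to 0 and boost to 1."""
    m = PF_RE.match(entry)
    if not m:
        raise ValueError(f"bad pf entry: {entry}")
    return (m.group("field"),
            int(m.group("slop")) if m.group("slop") else 0,
            float(m.group("boost")) if m.group("boost") else 1.0)

print(parse_pf("text^2"))    # ('text', 0, 2.0)  current syntax
print(parse_pf("text~1^2"))  # ('text', 1, 2.0)  proposed syntax
```

The appeal is that it stays per-field, so different pf entries can carry different slops instead of sharing one global ps.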
mraible wrote:
We're starting to use Solr for our application. The data that we'll be
indexing will change often and not accumulate over time. This means that we
want to blow away our index and re-create it every hour or so. What's the
easiest way to do this while Solr is running and not give
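The simplest in-place approach is a delete-by-query for `*:*` posted to the update handler, then a commit and re-index. A sketch that just builds the request (the localhost URL is an assumption; nothing is sent here):

```python
from urllib import request

SOLR_UPDATE = "http://localhost:8983/solr/update"  # assumed core URL

def delete_all_request() -> request.Request:
    """Build the delete-by-query request that empties a running index.
    Follow it with <commit/> and then re-post the fresh documents."""
    body = "<delete><query>*:*</query></delete>"
    return request.Request(SOLR_UPDATE, data=body.encode(),
                           headers={"Content-Type": "text/xml"})

req = delete_all_request()
print(req.data.decode())
```

If serving an empty index for even a moment is a problem, the approach often suggested on this list is to rebuild into a second core and swap it in via CoreAdmin, so searchers always see a complete index.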
Chris Hostetter wrote:
: Perhaps fold it into the pf/pf2 syntax?
:
: pf=text^2   // current syntax... makes phrases with a boost of 2
: pf=text~1^2 // proposed syntax... makes phrases with a slop of 1 and
: a boost of 2
:
: That actually seems pretty natural given the lucene query
, normalClauses, phraseFields2, 2,
tiebreaker, 0);
Thanks!!! Indeed it seems to be providing better results for me (at first
glance on a test system).
Is there any way of lobbying to make this change in the official releases?
On Thu, Aug 12, 2010 at 1:04 PM, Ron Mayer r...@0ape.com wrote
Short summary:
* If I could make Solr merge oldest segments (or the one
with the most deleted docs) rather than smallest
segments; I think I'd almost never need optimize.
* Can I tell Solr to do this? Or if not, can someone
point me in the right direction regarding where I might
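The selection idea can be stated as a toy model: instead of preferring the smallest segments (roughly what the log-size merge policies do), prefer the segments with the highest proportion of deleted docs, so merges reclaim space the way an optimize would. This is pure illustration with made-up segment stats, not Lucene's MergePolicy API:

```python
# Hypothetical segment stats: (size, live doc count, deleted doc count)
segments = [
    {"name": "_a", "size_mb": 900, "docs": 1_000_000, "deleted": 400_000},
    {"name": "_b", "size_mb": 10,  "docs": 12_000,    "deleted": 0},
    {"name": "_c", "size_mb": 300, "docs": 350_000,   "deleted": 5_000},
]

def by_deleted_ratio(segs, n=1):
    """Proposed heuristic: merge where the most dead docs can be reclaimed."""
    return sorted(segs, key=lambda s: s["deleted"] / s["docs"], reverse=True)[:n]

def by_size(segs, n=1):
    """Size-based heuristic: smallest segments merge first."""
    return sorted(segs, key=lambda s: s["size_mb"])[:n]

print([s["name"] for s in by_deleted_ratio(segments)])  # ['_a']
print([s["name"] for s in by_size(segments)])           # ['_b']
```

The two heuristics pick completely different segments here, which is why a size-based policy can leave a large, delete-riddled segment untouched until an optimize.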
with red baseball hat.
Not sure of a good way to express that in config options, tho.
-Yonik
http://www.lucidimagination.com
On Fri, Aug 13, 2010 at 2:11 PM, Ron Mayer r...@0ape.com wrote:
Jayendra Patil wrote:
We pretty much had the same issue, ended up customizing the ExtendedDismax
code
Short summary:
Is there any way I can specify that I want a lot
of phrase slop for the pf parameter, but none
at all for the pf2 parameter?
I find the 'pf' parameter with a pretty large 'ps' to do a very
nice job of providing a modest boost to many documents that are
quite well related
cars. And other times, they'd rather see the value
of the cars returned.
In SQL I can do a select sum(value) from incidents join vehicles...,
and haven't (yet) found similar for facets in solr.
Then again, maybe I should be using the database for that part
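The closest Solr analogue to `select sum(value) ... group by state` that I know of is the StatsComponent, which can report sums of a numeric field broken down per facet value. A request sketch (URL and field names are hypothetical):

```python
from urllib.parse import urlencode

# StatsComponent: sum/min/max/etc. of a numeric field, one block per
# facet value -- roughly SQL's sum(...) GROUP BY.
params = {
    "q": "*:*",
    "stats": "true",
    "stats.field": "vehicle_value",  # numeric field to sum (hypothetical)
    "stats.facet": "state",          # break the stats down per state
    "rows": 0,
}
url = "http://localhost:8983/solr/select?" + urlencode(params)
print(url)
```

The response then carries a `sum` per state under the stats section, so the count-vs-value choice becomes a display decision rather than two different queries.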
On Feb 24, 2010, at 6:40 PM, Ron
Grant Ingersoll wrote:
What would it be?
* Run a MapReduce-like job on all docs matching the results of a search?
I'm currently working on an app where I hope to be able to do
a query (hopefully using solr) and generate a map where every state
(or county or zip-code or school district or police