don't need SolrCloud or ZooKeeper for it (they may provide
other benefits, but you don't need them for distributed faceting).
Upayavira
On Fri, Mar 25, 2011 at 1:35 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Tue, Mar 22, 2011 at 7:51 AM, Dmitry Kan dmitry@gmail.com wrote
On Fri, 25 Mar 2011 14:26 +0200, Dmitry Kan dmitry@gmail.com
wrote:
Hi, Upayavira
Probably I'm confusing the terms here. When I say distributed faceting,
I'm thinking more of Solr on the cloud (e.g. HDFS + MR + a cloud of
commodity machines) rather than traditional multicore/sharded Solr
get when you do your search against both fields.
If your content is separable, you could have text_en and text_gr, each with
their own language specific analyser chains, and index your content into
the field relevant for that language.
HTH
Upayavira
On Thu, 17 Mar 2011 18:18 -0700, abiratsis abirat
at other ways to
structure your query if you're hitting the URL length limit.
Upayavira
On Tue, 15 Mar 2011 12:23 +0100, Gastone Penzo
gastone.pe...@gmail.com wrote:
Hi,
Is it possible to change the method Solr uses to send queries from GET to
POST? Because my query has a lot of OR..OR..OR and the log says to me
I'm not sure if I get what you are trying to achieve. What do you mean
by constraint?
Are you saying that you effectively want to filter the facets that are
returned?
e.g. for the source field, you want to show html/pdf/email, but not, say,
xls or doc?
Upayavira
On Tue, 15 Mar 2011 15:38 +
post.jar is intended for demo purposes, not production use, so it
doesn't surprise me you've managed to break it.
Have you tried using curl to do the post?
Upayavira
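For the GET-to-POST switch discussed above, the parameters simply move into the request body. A minimal sketch in Python (the host, port, and /solr/select path are assumptions for a default Solr install; curl or most client libraries can do the same):

```python
from urllib.parse import urlencode
from urllib.request import Request

# A long boolean query that would overflow a typical GET URL length limit.
params = {
    "q": " OR ".join("id:%d" % i for i in range(1000)),
    "wt": "json",
}
body = urlencode(params).encode("utf-8")

# Supplying data= makes urllib issue a POST, so the URL itself stays short.
req = Request("http://localhost:8983/solr/select", data=body)  # hypothetical host/path
print(req.get_method())   # POST
print(len(body))          # far larger than most servers' URL limits allow
```

The equivalent with curl would pass the same parameter string via --data instead of appending it to the URL.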
On Thu, 03 Mar 2011 17:02 -0500, Solr User solr...@gmail.com wrote:
Hi All,
I am trying to create indexes out of a 400MB XML
As I understand it there is, and the best you can do is keep the same
number of docs per shard, and keep your documents randomised across
shards. That way you'll minimise the chances of suffering from
distributed IDF issues.
Upayavira
On Wed, 02 Mar 2011 10:10 -0500, Jae Joo jaejo...@gmail.com
Next question: do you have your type field set to indexed="true" in your
schema?
Upayavira
On Tue, 01 Mar 2011 11:06 -0500, Brian Lamb
brian.l...@journalexperts.com wrote:
Thank you for your reply but the searching is still not working out. For
example, when I go to:
http://localhost:8983/solr
q=dog is equivalent to q=text:dog (where the default search field is
defined as text at the bottom of schema.xml).
If you want to specify a different field, well, you need to tell it :-)
Is that it?
Upayavira
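A quick illustration of the default-field behaviour described above, shown as URL-encoded request parameters (the host and the title field are illustrative; the default field is whatever defaultSearchField names in your schema.xml):

```python
from urllib.parse import urlencode

# With the default search field set to "text" in schema.xml, the first two
# requests are equivalent; any other field has to be named explicitly.
base = "http://localhost:8983/solr/select?"  # hypothetical host
print(base + urlencode({"q": "dog"}))        # searches the default field
print(base + urlencode({"q": "text:dog"}))   # same search, field spelled out
print(base + urlencode({"q": "title:dog"}))  # searches the title field instead
```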
On Mon, 28 Feb 2011 15:38 -0500, Brian Lamb
brian.l...@journalexperts.com wrote:
Hi
Likely because it is a Solr 4.0 feature and you are using Solr 1.4.1.
That'd be my guess.
(Solr 4.0 is the latest, greatest, as-yet-unreleased version of Solr;
the numbering scheme changed to match that of Lucene).
Upayavira
On Mon, 14 Feb 2011 16:05 +0530, Isha Garg isha.g...@orkash.com
wrote
make your Solr client code enhance the data it gets back after querying
Solr with data extracted from Mysql. What is the issue here?
Upayavira
On Mon, 07 Feb 2011 23:17 -0800, Ishwar ishwarsridha...@yahoo.com
wrote:
Hi all,
Been a solr user for a while now, and now I need to add some
. It would mean you
just have one index to manage, rather than an index and a database -
after all, the words *have* to take up disk space somewhere :-). If you
end up with so many documents indexed that performance grinds (over
10 million?) you can split your index across multiple shards.
Upayavira
Once
, it isn't an easy task to do properly, and
there's nothing in Solr to do it. However, there is a very clever class
in Lucene contrib that you can use to split a Lucene index [1], and you
can safely use it to split a Solr index so long as the index isn't in
use while you're doing it.
Upayavira
[1
Actually, in that situation, we indexed twice, to both, so there was no
master and no slave. Our testing showed that search was not slowed down
unduly by indexing.
Upayavira
On Tue, 08 Feb 2011 22:34 -0800, Ishwar ishwarsridha...@yahoo.com
wrote:
In the situation that you'd explained, I'm
What problem are you trying to solve by using a Lucene 3.x index within
a Solr 1.4 system?
Upayavira
On Tue, 01 Feb 2011 14:59 +0100, Churchill Nanje Mambe
mambena...@afrovisiongroup.com wrote:
is there any way I can change the lucene version wrapped inside solr 1.4
from lucene 2.x to lucene
has an indexer app that reads from
HBase and writes to a standard Solr by hitting its REST API.
So, nothing funky, just a little app that reads from HBase and posts to
Solr.
Upayavira
On Jan 31, 2011, at 5:34 AM, Steven Noels stev...@outerthought.org
wrote:
On Fri, Jan 28, 2011 at 1:30 AM
Brilliant. So obvious.
Upayavira
On Sat, 29 Jan 2011 18:53 -0700, Bob Sandiford
bob.sandif...@sirsidynix.com wrote:
Or - you could add a standard field to each shard, populate with a
distinct value for each shard, and facet on that field. Then look at the
facet counts of the value
of this information in its response.
Upayavira
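The per-shard counts then fall straight out of the facet response. A sketch of reading them, assuming a hypothetical shard_name field was populated at index time (the JSON fragment is an illustrative response, not real output):

```python
import json

# Illustrative distributed-search response with facet counts on shard_name.
raw = """
{"response": {"numFound": 54},
 "facet_counts": {"facet_fields": {"shard_name": ["shard1", 30, "shard2", 24]}}}
"""
doc = json.loads(raw)

# Solr returns facet_fields as a flat [value, count, value, count, ...] list.
flat = doc["facet_counts"]["facet_fields"]["shard_name"]
per_shard = dict(zip(flat[::2], flat[1::2]))

print(per_shard)                      # hits per shard
print(doc["response"]["numFound"])    # total hits across all shards
```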
On Sat, 29 Jan 2011 03:48 -0800, csj christiansonnejen...@gmail.com
wrote:
Hi,
Is it possible to construct a Solr query that will return the total
number of hits across all shards, and at the same time get the number of
hits per shard
get the tomcat auth login screen.
In the same way, can I configure the http client so that I don't have to
specify the port
Sure. This likely means your traffic is going via Apache (on the default
port 80) but there's no real problem with that.
Upayavira
---
Enterprise Search Consultant
? Likewise, Solr should be a
service that listens, waiting to be given data.
Upayavira
---
Enterprise Search Consultant at Sourcesense UK,
Making Sense of Open Source
Looks like you are connecting to Tomcat's AJP port, not the HTTP one.
Connect to the Tomcat HTTP port and I suspect you'll have greater
success.
Upayavira
On Wed, 26 Jan 2011 22:45 -0800, Darniz rnizamud...@edmunds.com
wrote:
Hello,
I uploaded the solr.war file to my hosting provider and added
.
Upayavira
On Thu, 23 Dec 2010 12:12 +, Francis Rhys-Jones
francis.rhys-jo...@guardian.co.uk wrote:
Hi,
We're running a cloud-based cluster of servers and it's not that easy to
get a list of the current slaves. Since my problem is only around the
restart/redeployment of the master it seems
, and you should be
good to go.
I tried this, watching the request log on my master, and the incoming
replication requests did actually stop due to the disablepoll
command, so you should be fine with this approach.
Does this get you to where you want to be?
Upayavira
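For reference, the polling switch is just an HTTP command to the slave's replication handler. A sketch of the URLs involved (the host is illustrative; check the command names against the SolrReplication wiki page for your version):

```python
from urllib.parse import urlencode

slave = "http://slave1:8983/solr"  # hypothetical slave host

# Stop the slave polling the master, e.g. before restarting the master ...
disable = slave + "/replication?" + urlencode({"command": "disablepoll"})
# ... and switch it back on once the master is up again.
enable = slave + "/replication?" + urlencode({"command": "enablepoll"})

print(disable)
print(enable)
```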
On Wed, 22 Dec 2010 17:10
missing something, that while a load
balancer can be useful, it is only as a part of a larger scheme when it
comes to master replication. Or am I missing something?
Upayavira
[1] http://www.slideshare.net/sourcesense/sharded-solr-setup-with-master
On Sun, 19 Dec 2010 22:41 -0800, Lance Norskog goks
it to be a backup of the new master
* make it pull a fresh index over
But, Jan Høydahl suggested using SolrCloud. I'm going to follow up on
how that might work in that thread.
Upayavira
On Sun, 19 Dec 2010 00:20 -0800, Tri Nguyen tringuye...@yahoo.com
wrote:
Hi,
In the master-slave
, but still not quite sure on how it works exactly.
Upayavira
On Fri, 17 Dec 2010 10:09 +0100, Jan Høydahl / Cominvent
jan@cominvent.com wrote:
Hi,
I believe the way to go is through ZooKeeper[1], not property files or
local hacks. We've already started on this route and it makes sense
/solr/SolrReplication), but I've not actually done
it.
Upayavira
--- On Sun, 12/19/10, Upayavira u...@odoko.co.uk wrote:
From: Upayavira u...@odoko.co.uk
Subject: Re: master master, repeaters
To: solr-user@lucene.apache.org
Date: Sunday, December 19, 2010, 10:13 AM
We had a (short
, as that could be your speed bottleneck.
Upayavira
On Wed, 15 Dec 2010 18:52 -0500, Burton-West, Tom tburt...@umich.edu
wrote:
Hello all,
Are there any general guidelines for determining the main factors in
memory use during merges?
We recently changed our indexing configuration to speed up
an HTTP request to the slave
7) See if your posted content is available on your slave.
Maybe someone else here can tell you what is actually going on and save
you the effort!
Does that help you get some understanding of what is going on?
Upayavira
On Tue, 14 Dec 2010 09:15 -0500, Jonathan Rochkind
that - it causes a new index reader to be created,
based upon the new on disk files, which will include updates from both
syncs.
Upayavira
On Mon, 13 Dec 2010 23:11 -0500, Jonathan Rochkind rochk...@jhu.edu
wrote:
Sorry, I guess I don't understand the details of replication enough.
So slave tries
slaves and
masters and sets up replication, etc (essentially the same sort of thing
that solr cloud does!)
Don't know when I'll get the time, though, I'm afraid.
Upayavira
On Fri, 10 Dec 2010 10:45 +0100, György Frivolt
gyorgy.friv...@gmail.com wrote:
Hi,
I tried to setup Solr by chef
be done on the slave, since the master, which triggers
the update, is unaware who its slaves are.
Any ideas on how to do this?
http://wiki.apache.org/solr/CoreAdmin#RELOAD
Doesn't this do it?
Upayavira
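That RELOAD is a single CoreAdmin request. A minimal sketch of building it (the host and core name are illustrative):

```python
from urllib.parse import urlencode

# CoreAdmin RELOAD, as described on the CoreAdmin wiki page.
base = "http://localhost:8983/solr/admin/cores"  # hypothetical host
url = base + "?" + urlencode({"action": "RELOAD", "core": "core0"})
print(url)
```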
can get access to the account I need to do it!
Upayavira
On Tue, 07 Dec 2010 11:36 +0100, Peter Karich peat...@yahoo.de
wrote:
Hi Hamid,
try to avoid autowarming when indexing (see solrconfig.xml:
caches-autowarm + newSearcher + maxSearcher).
If you need to query and index at the same
, as Lucene assumes it has complete control over
its files. Unless there's a specific way to set up a 'read only' solr?
Upayavira
to video it, so if
successful, I expect it'll get put online somewhere.
Upayavira
On Wed, 01 Dec 2010 03:44 +, Jayant Das jayan...@hotmail.com
wrote:
Hi, A diagram will be very much appreciated.
Thanks,
Jayant
From: u...@odoko.co.uk
To: solr-user@lucene.apache.org
Subject: Re
assign a 'virtual IP'
to your load balancer, and it is responsible for forwarding traffic to
that IP to one of the hosts in that particular pool.
Upayavira
to pass on
extra attribute 'masterUrl' or other attributes like 'compression' (or
any other parameter which is specified in the lst name=slave tag) to
do a one time replication from a master. This obviates the need for
hardcoding the master in the slave.
HTH, Upayavira
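A sketch of what such a one-off pull looks like as a replication-handler request, with masterUrl passed on the request rather than hardcoded in solrconfig.xml (the hosts are illustrative):

```python
from urllib.parse import urlencode

def one_time_replicate_url(slave_base, master_base):
    """Build a fetchindex request telling the slave to pull once from master_base.
    masterUrl points at the master's own replication handler."""
    params = urlencode({"command": "fetchindex",
                        "masterUrl": master_base + "/replication"})
    return slave_base + "/replication?" + params

url = one_time_replicate_url("http://slave1:8983/solr", "http://master:8983/solr")
print(url)
```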
On Wed, 01 Dec 2010 06:24
, but that doesn't mean you need to increase the
frequency of your commits.
Upayavira
On Tue, 30 Nov 2010 00:36 -0800, stockii st...@shopgate.com wrote:
aha aha :D
Hm, I don't know. We import in 2-million steps because we think that Solr
locks our database and we want better control of the import
master you pull from.
Does this answer your question?
Upayavira
On Tue, 30 Nov 2010 09:18 -0800, Ken Krugler
kkrugler_li...@transpac.com wrote:
Hi Tommaso,
On Nov 30, 2010, at 7:41am, Tommaso Teofili wrote:
Hi all,
in a replication environment if the host where the master is running
and I'll have another go
at explaining it (or even attempt a diagram).
Upayavira
On Tue, 30 Nov 2010 13:27 -0800, Cinquini, Luca (3880)
luca.cinqu...@jpl.nasa.gov wrote:
Hi,
I'd like to know if anybody has suggestions/opinions on what is
currently the best architecture for a distributed
, and
one containing all of your documents). The net result being you don't
need to optimise at that point.
Note - I'm no solr guru, so I could be wrong with some of the above -
I'm happy to be corrected.
Upayavira
will take care of that for you.
Another perk is that your backups won't take any additional disk space
(just the space for the directory data, not the files themselves). As
your index changes, disk usage will gradually increase though.
Upayavira
On Mon, 29 Nov 2010 16:13 +0100, Rodolico Piero
in your index, and segments are created by
commits. If you don't do many commits, you won't need to optimise - at
least you won't at the point of initial ingestion.
Upayavira
) and completely re-index
everything.
Hope this helps.
Upayavira
On Wed, 24 Nov 2010 04:11 -0800, Hamid Vahedi hvb...@yahoo.com
wrote:
Hi to all
We are using Solr multicore with 6 cores per server in shard mode (2
servers so far, therefore 12 cores in total), using Tomcat on Windows
2008 with 18GB RAM
In the header is a line saying what rules your message matched. That'll
let you know what about your message was causing your mails to be
rejected.
Upayavira
On Wed, 10 Nov 2010 11:42 -0800, robo - robom...@gmail.com wrote:
Thanks for all your help Ezequiel. I cannot see anything in my email
You need to watch what you are setting your solr.home to. That is where
your indexes are being written. Are they getting overwritten/lost
somehow? Watch the files in that dir while doing a restart.
That's a start at least.
Upayavira
On Tue, 26 Oct 2010 16:40 +0300, Mackram Raydan mack
this. Is there a data output interface I can implement
for this purpose?
Or can this be done in some way?
Why do you want to do this?
Solr embeds a lucene index, and Lucene has a Directory interface, that
can be implemented differently (something other than the default
FSDirectory implementation).
Upayavira
for, (memory
requirements, impact of index optimisations, etc), but it certainly can
be done.
Upayavira
On Thu, 14 Oct 2010 14:01 +0200, Marco Ciaramella
ciaramellama...@gmail.com wrote:
Hi all,
I am working on a performance specification document on a
Solr/Lucene-based
application; this document
they must have the same schema file. But that can be the aggregation
of both schemas.
See here[1] for more info on merging indexes.
Upayavira
[1] http://wiki.apache.org/solr/MergingSolrIndexes
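One of the options on that page is the CoreAdmin mergeindexes command, which takes the target core plus one indexDir per source index. A sketch of the request (host, core name, and paths are illustrative):

```python
from urllib.parse import urlencode

# CoreAdmin mergeindexes: merge two on-disk indexes into core0.
# doseq=True emits one indexDir parameter per source directory.
params = urlencode({"action": "mergeindexes",
                    "core": "core0",
                    "indexDir": ["/opt/solr/core1/data/index",
                                 "/opt/solr/core2/data/index"]}, doseq=True)
url = "http://localhost:8983/solr/admin/cores?" + params
print(url)
```

Remember the note above: both source indexes must share a schema (or the merged core's schema must be the aggregation of both).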
it is too
early for me to read about SolrCloud as I'm still learning Solr)
I don't believe SolrCloud is aiming to support master/master
replication.
HTH
Upayavira
query
requests on to the backend shards and aggregates the results.
That way, your application doesn't need to know about how you split the
indexes, it thinks it is just querying a single SOLR instance.
Does anything like this exist, or do I have to write it?
Regards, Upayavira
pointers?
Thank you!
Upayavira
On Sep 12, 2008, at 5:38 AM, Upayavira wrote:
The http://wiki.apache.org/solr/DistributedSearch page implies that
you
must know what shards exist when doing a search across multiple
shards.
A colleague tells me that there is a feature that makes
On Fri, 2008-09-12 at 14:02 +0100, Upayavira wrote:
On Fri, 2008-09-12 at 06:05 -0400, Erik Hatcher wrote:
Even in the example in that page, the client _is_ just querying a
single Solr instance - it is that Solr instance that is then querying
the shards. Is your interest in moving
On Fri, 2008-09-12 at 16:44 -0400, Yonik Seeley wrote:
On Fri, Sep 12, 2008 at 9:02 AM, Upayavira [EMAIL PROTECTED] wrote:
On Fri, 2008-09-12 at 06:05 -0400, Erik Hatcher wrote:
Is your interest in moving the shards parameter to the
server-side instead? You can do that with the request