Is it possible to run Solr without ZooKeeper but still use sharding, if
it's all running on one host? Would the shards have to be explicitly
included in the query URLs?
Thanks,
/Martin
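For reference, this is roughly what a manually sharded query looks like: the shards parameter lists every core explicitly. A minimal sketch (the host, ports, and core names are assumptions, not from this thread):

```python
from urllib.parse import urlencode

# Hedged sketch: a distributed query with the shards listed explicitly,
# i.e. no ZooKeeper involved. Hosts/ports/core names are invented.
shards = "localhost:8983/solr/core0,localhost:8984/solr/core1"
params = urlencode({"q": "test", "shards": shards})
url = "http://localhost:8983/solr/core0/select?" + params
print(url)
```

Each query carries the full shard list, so adding or removing a shard means changing every client's URL by hand.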
On Fri, Mar 1, 2013 at 3:58 PM, Shawn Heisey s...@elyograg.org wrote:
On 3/1/2013 7:34 AM, Martin Koch
with it. Is this possible?
Thanks,
/Martin Koch - Senior Systems Architect - Issuu.com
- there are advantages and disadvantages to each.
- Mark
On Mar 1, 2013, at 9:03 AM, Martin Koch m...@issuu.com wrote:
On a host that is running two separate Solr (Jetty) processes and a
single
ZooKeeper process, we're often seeing Solr complain that it can't find a
particular core. If we restart
Thank you very much, Shawn. I had understood that ZooKeeper was a mandatory
component for Solr 4, and it is immensely useful to know that it is
possible to do without.
/Martin Koch
On Fri, Mar 1, 2013 at 3:58 PM, Shawn Heisey s...@elyograg.org wrote:
On 3/1/2013 7:34 AM, Martin Koch wrote
can see the blog post here:
http://blog.issuu.com/post/41189476451/how-search-at-issuu-actually-works
Happy reading,
/Martin Koch - Senior Systems Architect - Issuu.
by
allocating more hardware?
Thanks in advance!
On Wed, Nov 21, 2012 at 3:56 PM, Martin Koch m...@issuu.com wrote:
Mikhail,
PSB
On Wed, Nov 21, 2012 at 10:08 AM, Mikhail Khludnev
mkhlud...@griddynamics.com wrote:
On Wed, Nov 21, 2012 at 11:53 AM, Martin Koch m...@issuu.com
Are all your fields marked as stored in your schema? This is a
requirement for atomic updates.
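A minimal sketch of what an atomic update payload looks like (the document ID and field name here are invented):

```python
import json

# Hedged sketch: an atomic "set" update changes one field and leaves the
# rest intact -- but only if every field is stored, so Solr can rebuild
# the full document. The ID and field name are invented.
doc = {"id": "doc-42", "view_count": {"set": 100}}
payload = json.dumps([doc])
print(payload)
```

If any field is indexed but not stored, its value is lost on such an update, which matches the symptom described below.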
/Martin Koch
On Mon, Nov 26, 2012 at 7:58 PM, Darniz rnizamud...@edmunds.com wrote:
I tried using the same logic to update a specific field and to my surprise
all my other fields were lost. I had
On Thu, Nov 22, 2012 at 3:53 PM, Yonik Seeley yo...@lucidworks.com wrote:
On Tue, Nov 20, 2012 at 4:16 AM, Martin Koch m...@issuu.com wrote:
around 7M documents in the index; each document has a 45 character ID.
7M documents isn't that large. Is there a reason why you need so many
shards (16
21, 2012 at 3:56 PM, Martin Koch m...@issuu.com wrote:
Mikhail,
PSB
On Wed, Nov 21, 2012 at 10:08 AM, Mikhail Khludnev
mkhlud...@griddynamics.com wrote:
On Wed, Nov 21, 2012 at 11:53 AM, Martin Koch m...@issuu.com wrote:
I wasn't aware until now that it is possible
On Wed, Nov 21, 2012 at 7:08 AM, Mikhail Khludnev
mkhlud...@griddynamics.com wrote:
On Wed, Nov 21, 2012 at 2:07 AM, Martin Koch m...@issuu.com wrote:
I'm not sure about the mmap directory or where that
would be configured in solr - can you explain that?
You can check it in the Solr Admin UI.
Mikhail,
PSB
On Wed, Nov 21, 2012 at 10:08 AM, Mikhail Khludnev
mkhlud...@griddynamics.com wrote:
On Wed, Nov 21, 2012 at 11:53 AM, Martin Koch m...@issuu.com wrote:
I wasn't aware until now that it is possible to send a commit to one core
only. What we observed was the effect of curl.
/Martin Koch - ISSUU - senior systems architect.
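For reference, a commit can be addressed to a single core by hitting that core's own update handler. A sketch in the same curl style used elsewhere in this thread (the host, port, and core name are assumptions):

```shell
# Hedged sketch: commit only one core, not the whole instance.
# Host, port, and the core name "core0" are invented examples.
URL='http://localhost:8983/solr/core0/update?commit=true'
echo "$URL"
# curl "$URL"   # run this form against a live instance
```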
On Mon, Nov 19, 2012 at 3:22 PM, Simone Gianni simo...@apache.org wrote:
Hi all,
I'm planning to move a quite big Solr index to SolrCloud. However, in this
index, an external file field is used for popularity ranking.
Does SolrCloud supports
, Martin Koch m...@issuu.com wrote:
Solr 4.0 does support using EFFs, but it might not give you what you're
hoping for.
We tried using Solr Cloud, and have given up again.
The EFF is placed in the parent of the index directory in each core; each
core reads the entire EFF and picks out
is supposed to replicate those
files as configs under the Solr home. And I'm really looking forward to knowing
how it works with huge files in production.
Thank You, Guys!
On 20.11.2012 at 18:06, Martin Koch m...@issuu.com wrote:
Hi Mikhail
Please see answers below.
On Tue, Nov 20, 2012
it won't fit in RAM, but we're using an
SSD to minimize disk access time. We have tried putting the EFF onto a
RAM disk, but this didn't have a measurable effect.
Thanks,
/Martin
Thanks
On Wed, Nov 21, 2012 at 2:07 AM, Martin Koch m...@issuu.com wrote:
Mikhail
PSB
On Tue, Nov 20
Are you using Solr 4.0? We had some problems similar to this (not in a
master/slave setup, though), where the resolution was to disable the
transaction log, i.e. remove the updateLog element in the updateHandler
section of solrconfig.xml - we don't need NRT get, so this isn't important to us.
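A sketch of what that looks like in solrconfig.xml (the surrounding attributes are assumptions; the point is simply that updateLog is absent or commented out):

```xml
<!-- Hedged sketch: transaction log disabled by commenting out updateLog.
     Note this also gives up NRT get and SolrCloud-style recovery. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <!--
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
  -->
</updateHandler>
```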
Cheers,
/Martin Koch
On Thu, Nov
In my experience, about as fast as you can push the new data :) Depending
on the size of your records, this should be a matter of seconds.
/Martin Koch
On Wed, Oct 24, 2012 at 9:01 PM, Marcelo Elias Del Valle mvall...@gmail.com
wrote:
Erick,
Thanks for the help, it sure helps a lot
previously read relevant values for each shard as they
are read in.
I guess a change in the ExternalFileField code would be required to achieve
this, but I have no experience here, so suggestions are very welcome.
Thanks,
/Martin Koch - Issuu - Senior Systems Architect.
PM, Mikhail Khludnev mkhlud...@griddynamics.com
wrote:
Martin,
Can you tell me what's the content of that field, and how it should affect
search result?
On Mon, Oct 8, 2012 at 12:55 PM, Martin Koch m...@issuu.com wrote:
Hi List
We're using Solr-4.0.0-Beta with a 7M document index
(I'm working with Raghav on this): We've got several parallel workers that
add documents in batches of 16 through pysolr; with commitWithin at 60
seconds, the commit causes Solr to freeze, but if commitWithin is only 5
seconds, then everything seems to work fine. In both cases, throughput is
It actually is Beta that we're working with.
/Martin
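The batching itself is simple to picture; a sketch of the grouping logic with the batch size of 16 mentioned above (the actual HTTP posting, e.g. via pysolr, is left out):

```python
# Hedged sketch: group documents into batches of 16 before posting.
# The posting call itself is omitted.
def batches(docs, size=16):
    for start in range(0, len(docs), size):
        yield docs[start:start + size]

sizes = [len(b) for b in batches(list(range(40)), 16)]
print(sizes)  # [16, 16, 8]
```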
On Mon, Aug 27, 2012 at 10:38 PM, Martin Koch m...@issuu.com wrote:
(I'm working with Raghav on this): We've got several parallel workers that
add documents in batches of 16 through pysolr, and using commitWithin at 60
seconds when
We're doing something similar: We want to combine search relevancy with a
fitness value computed from several other data sources.
For this, we pre-compute the fitness value for each document and store it in a
flat file (lines of the format document_id=fitness_score) that we use an
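A sketch of that flat-file format and how it reads back (the IDs and scores here are invented):

```python
# Hedged sketch: the external file holds one "document_id=fitness_score"
# pair per line, which is the format ExternalFileField consumes.
lines = ["doc-1=0.85", "doc-2=0.10"]
scores = {}
for line in lines:
    doc_id, _, value = line.partition("=")
    scores[doc_id] = float(value)
print(scores)
```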
Thanks for writing this up. These are good tips.
/Martin
On Fri, Mar 23, 2012 at 9:57 PM, dw5ight dw5i...@gmail.com wrote:
Hey All-
we run a http://carsabi.com car search engine with Solr and did some
benchmarking recently after we switched from a hosted service to
self-hosting. In
Thanks,
/Martin Koch
I guess this would depend on network bandwidth, but we move around
150G/hour when hooking up a new slave to the master.
/Martin
On Fri, Mar 23, 2012 at 12:33 PM, Ben McCarthy
ben.mccar...@tradermedia.co.uk wrote:
Hello,
I'm looking at the replication from a master to a number of slaves. I
something here?
Best
Erick
On Tue, Jan 3, 2012 at 1:33 PM, Martin Koch m...@issuu.com wrote:
Hi List
I have a Solr cluster set up in a master/slave configuration where the
master acts as an indexing node and the slaves serve user requests.
To avoid accidental posts of new documents
Hi List
I have a Solr cluster set up in a master/slave configuration where the
master acts as an indexing node and the slaves serve user requests.
To avoid accidental posts of new documents to the slaves, I have disabled
the update handlers.
However, I use an externalFileField. When the file is
Could it be a commit you're needing?
curl 'localhost:8983/solr/update?commit=true'
/Martin
On Wed, Dec 28, 2011 at 11:47 AM, mumairshamsi mumairsha...@gmail.com wrote:
http://lucene.472066.n3.nabble.com/file/n3616191/02.xml 02.xml
I am trying to index this file; for this I am using
Have you looked here http://wiki.apache.org/solr/VelocityResponseWriter ?
/Martin
On Mon, Dec 19, 2011 at 12:44 PM, remi tassing tassingr...@yahoo.com wrote:
Hello guys,
The default search UI doesn't work for me.
http://localhost:8983/solr/browse gives me an HTTP 404 error.
I'm using
Instead of handling it from within solr, I'd suggest writing an external
application (e.g. in python using pysolr) that wraps the (fast) SQL query
you like. Then retrieve a batch of documents, and write them to solr. For
extra speed, don't commit until you're done.
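A sketch of that wrapper loop, where fetch_rows and post_to_solr are hypothetical stand-ins for the SQL query and the pysolr call:

```python
# Hedged sketch: stream rows out of the database, post to Solr in
# batches, and commit only once at the very end for speed.
# fetch_rows and post_to_solr are hypothetical callables.
def reindex(fetch_rows, post_to_solr, batch_size=500):
    batch, total = [], 0
    for row in fetch_rows():
        batch.append(row)
        if len(batch) >= batch_size:
            post_to_solr(batch)
            total += len(batch)
            batch = []
    if batch:
        post_to_solr(batch)
        total += len(batch)
    return total  # issue a single commit after this returns

# Toy run with fakes:
posted = []
count = reindex(lambda: iter(range(7)), posted.append, batch_size=3)
print(count, posted)  # 7 [[0, 1, 2], [3, 4, 5], [6]]
```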
/Martin
On Wed, Dec 14, 2011
Do you commit often? If so, try committing less often :)
/Martin
On Wed, Dec 7, 2011 at 12:16 PM, Adrian Fita adrian.f...@gmail.com wrote:
Hi. I experience an issue where Solr is using huge amounts of I/O.
Basically it uses the whole HDD continuously, leaving nothing to the
other processes.
Hi List
I have a solr index where I want to include numerical fields in my ranking
function as well as keyword relevance. For example, each document has a
document view count, and I'd like to increase the relevancy of documents
that are read often, and penalize documents with a very low view
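One common way to fold a numeric signal like view count into the score is an edismax boost function; a sketch of the request parameters (the field name "view_count" and the log damping are assumptions, not from this thread):

```python
from urllib.parse import urlencode

# Hedged sketch: multiply keyword relevance by a damped function of the
# view count, so heavily read documents rank higher without swamping
# text relevance. Field name and boost shape are invented examples.
params = urlencode({
    "defType": "edismax",
    "q": "some query",
    "boost": "log(sum(view_count,1))",
})
print(params)
```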