Is it possible to run solr without zookeeper, but still using sharding, if
it's all running on one host? Would the shards have to be explicitly
included in the query urls?
Thanks,
/Martin
On Fri, Mar 1, 2013 at 3:58 PM, Shawn Heisey wrote:
> On 3/1/2013 7:34 AM, Martin Koch wrote:
Thank you very much, Shawn. I had understood that Zookeeper was a mandatory
component for Solr 4, and it is immensely useful to know that it is
possible to do without.
/Martin Koch
On Fri, Mar 1, 2013 at 3:58 PM, Shawn Heisey wrote:
> On 3/1/2013 7:34 AM, Martin Koch wrote:
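For illustration, a minimal sketch of how such a query could look (host, port,
and core names are hypothetical): without ZooKeeper, Solr falls back to
pre-SolrCloud distributed search, where the shards do have to be listed
explicitly in each query via the shards parameter.

import requests

# Fan the query out to both cores explicitly; Solr merges the results.
params = {
    "q": "*:*",
    "shards": "localhost:8983/solr/shard1,localhost:8983/solr/shard2",
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/shard1/select", params=params)
print(resp.json()["response"]["numFound"])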
>
>>
are advantages and disadvantages to each.
>
> - Mark
>
> On Mar 1, 2013, at 9:03 AM, Martin Koch wrote:
>
> > On a host that is running two separate solr (jetty) processes and a
> single
> > zookeeper process, we're often seeing solr complain that it can't find a
with it. Is this possible?
Thanks,
/Martin Koch - Senior Systems Architect - Issuu.com
unique).
You can see the blog post here:
http://blog.issuu.com/post/41189476451/how-search-at-issuu-actually-works
Happy reading,
/Martin Koch - Senior Systems Architect - Issuu.
ze"?
>
> Thanks
>
>
> On Thu, Nov 22, 2012 at 4:31 PM, Martin Koch wrote:
>
>> Mikhail
>>
>> To avoid freezes we deployed the patches that are now on the 4.1 trunk
>> (SOLR-3985). But this wasn't good enough, because Solr would still take
Are all your fields marked as "stored" in your schema? This is a
requirement for atomic updates.
/Martin Koch
On Mon, Nov 26, 2012 at 7:58 PM, Darniz wrote:
> I tried using the same logic to update a specific field, and to my surprise
> all my other fields were lost. I had a do
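For illustration, a minimal sketch of an atomic update against the Solr 4
JSON update handler (URL and field names are hypothetical). The "set"
operation rewrites one field, and Solr rebuilds the rest of the document from
its stored fields - which is why every field has to be stored.

import json
import requests

# Atomic update: replace just the price field of one document.
doc = {"id": "doc-123", "price": {"set": 99.0}}
resp = requests.post(
    "http://localhost:8983/solr/update?commit=true",
    data=json.dumps([doc]),
    headers={"Content-Type": "application/json"},
)
print(resp.status_code)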
On Thu, Nov 22, 2012 at 3:53 PM, Yonik Seeley wrote:
> On Tue, Nov 20, 2012 at 4:16 AM, Martin Koch wrote:
> > around 7M documents in the index; each document has a 45 character ID.
>
> 7M documents isn't that large. Is there a reason why you need so many
> shards (16 i
s by
> allocating more hardware?
> Thanks in advance!
>
>
> On Wed, Nov 21, 2012 at 3:56 PM, Martin Koch wrote:
>
> > Mikhail,
> >
> > PSB
> >
> > On Wed, Nov 21, 2012 at 10:08 AM, Mikhail Khludnev <
> > mkhlud...@griddynamics.com> wrote:
>
Mikhail,
PSB
On Wed, Nov 21, 2012 at 10:08 AM, Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
> On Wed, Nov 21, 2012 at 11:53 AM, Martin Koch wrote:
>
> >
> > I wasn't aware until now that it is possible to send a commit to one core
> > only. What
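For reference, a commit can be addressed to a single core by putting the core
name in the update URL (the core name here is hypothetical); only that core's
searcher is reopened.

import requests

# Commit core1 only; other cores on the same host are untouched.
requests.get("http://localhost:8983/solr/core1/update", params={"commit": "true"})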
On Wed, Nov 21, 2012 at 7:08 AM, Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
> On Wed, Nov 21, 2012 at 2:07 AM, Martin Koch wrote:
>
> > I'm not sure about the mmap directory or where that
> > would be configured in solr - can you explain that?
> >
mx=10G. Are you sure that the total size of the cores' index
> directories is less than 45G?
>
> The total index size is 230 GB, so it won't fit in RAM, but we're using an
SSD to minimize disk access time. We have tried putting the EFF onto a
RAM disk, but this didn't have
recent ApacheCon sessions is that ZooKeeper is supposed to replicate those
> files as configs under the Solr home. And I'm really looking forward to
> knowing how it works with huge files in production.
>
> Thank You, Guys!
>
> On 20.11.2012 18:06, "Martin Koch" wrote:
eld" can help you?
>
>
> It looks very interesting :) Does it make it possible to avoid re-reading
the EFF on every commit, and only re-read the values that have actually
changed?
/Martin
>
> On Tue, Nov 20, 2012 at 1:16 PM, Martin Koch wrote:
>
> > Solr 4.0 does
from,
say, 100 to 101 reads.
/Martin Koch - ISSUU - senior systems architect.
On Mon, Nov 19, 2012 at 3:22 PM, Simone Gianni wrote:
> Hi all,
> I'm planning to move a quite big Solr index to SolrCloud. However, in this
> index, an external file field is used for popularity
Are you using Solr 4.0? We had some problems similar to this (not in a
master/slave setup, though), where the resolution was to disable the
transaction log, i.e. remove the <updateLog/> element from the
<updateHandler> section of solrconfig.xml - we don't need NRT get, so this
isn't important to us.
Cheers,
/Martin Koch
On Thu, Nov 1, 2012
In my experience, about as fast as you can push the new data :) Depending
on the size of your records, this should be a matter of seconds.
/Martin Koch
On Wed, Oct 24, 2012 at 9:01 PM, Marcelo Elias Del Valle wrote:
> Erick,
>
> Thanks for the help, it sure helps a lot to read th
12 at 7:02 PM, Mikhail Khludnev wrote:
> Martin,
>
> Can you tell me what's the content of that field, and how it should affect
> search result?
>
> On Mon, Oct 8, 2012 at 12:55 PM, Martin Koch wrote:
>
> > Hi List
> >
> > We're using Solr-4.0.0
the
background, updating previously read relevant values for each shard as they
are read in.
I guess a change in the ExternalFileField code would be required to achieve
this, but I have no experience here, so suggestions are very welcome.
Thanks,
/Martin Koch - Issuu - Senior Systems Architect.
It's actually the Beta that we're working with.
/Martin
On Mon, Aug 27, 2012 at 10:38 PM, Martin Koch wrote:
> (I'm working with Raghav on this): We've got several parallel workers that
> add documents in batches of 16 through pysolr, and using commitWithin at 60
> second
(I'm working with Raghav on this): We've got several parallel workers that
add documents in batches of 16 through pysolr. Using commitWithin at 60
seconds, the commit causes Solr to freeze; if the commitWithin is only 5
seconds, then everything seems to work fine. In both cases, throughput is
aro
We're doing something similar: We want to combine search relevancy with a
fitness value computed from several other data sources.
For this, we pre-compute the fitness value for each document and store it in
a flat file (lines of the format document_id=fitness_score) that we read
with an externalFileField
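For illustration, a minimal sketch of the file-generation step (paths, field
name, and scores are hypothetical). Solr looks for the file as
external_<fieldname> in the core's data directory, one
document_id=fitness_score pair per line; keeping the file sorted by key is
recommended for faster loading.

# Pre-compute fitness scores and write them in externalFileField format.
scores = {"doc-1": 0.87, "doc-2": 0.12}

with open("/var/solr/data/external_fitness", "w") as f:
    for doc_id, score in sorted(scores.items()):
        f.write("%s=%s\n" % (doc_id, score))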
Thanks for writing this up. These are good tips.
/Martin
On Fri, Mar 23, 2012 at 9:57 PM, dw5ight wrote:
> Hey All-
>
> we run http://carsabi.com, a car search engine built with Solr, and did some
> benchmarking recently after we switched from a hosted service to
> self-hosting. In brief, we went fro
I guess this would depend on network bandwidth, but we move around
150G/hour when hooking up a new slave to the master.
/Martin
On Fri, Mar 23, 2012 at 12:33 PM, Ben McCarthy <
ben.mccar...@tradermedia.co.uk> wrote:
> Hello,
>
> Im looking at the replication from a master to a number of slaves.
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
... 3 more
Thanks,
/Martin Koch
missing something here?
>
> Best
> Erick
>
> On Tue, Jan 3, 2012 at 1:33 PM, Martin Koch wrote:
> > Hi List
> >
> > I have a Solr cluster set up in a master/slave configuration where the
> > master acts as an indexing node and the slaves serve user requests.
> &
Hi List
I have a Solr cluster set up in a master/slave configuration where the
master acts as an indexing node and the slaves serve user requests.
To avoid accidental posts of new documents to the slaves, I have disabled
the update handlers.
However, I use an externalFileField. When the file is
Could it be a commit you're needing?
curl 'localhost:8983/solr/update?commit=true'
/Martin
On Wed, Dec 28, 2011 at 11:47 AM, mumairshamsi wrote:
> http://lucene.472066.n3.nabble.com/file/n3616191/02.xml 02.xml
>
> I am trying to index this file; for this I am using this command
>
> java
Have you looked here http://wiki.apache.org/solr/VelocityResponseWriter ?
/Martin
On Mon, Dec 19, 2011 at 12:44 PM, remi tassing wrote:
> Hello guys,
> the default search UI doesn't work for me.
> http://localhost:8983/solr/browse gives me an HTTP 404 error.
> I'm using Solr-1.4. Any idea how
Do you commit often? If so, try committing less often :)
/Martin
On Wed, Dec 7, 2011 at 12:16 PM, Adrian Fita wrote:
> Hi. I experience an issue where Solr is using huge amounts of I/O.
> Basically it uses the whole HDD continuously, leaving nothing for the
> other processes. Solr is called by a
Instead of handling it from within solr, I'd suggest writing an external
application (e.g. in python using pysolr) that wraps the (fast) SQL query
you like. Then retrieve a batch of documents, and write them to solr. For
extra speed, don't commit until you're done.
/Martin
On Wed, Dec 14, 2011 at
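For illustration, a minimal sketch of that approach with pysolr and sqlite3
(the Solr URL, table, and batch size are hypothetical): batch the adds and
commit once at the end.

import pysolr
import sqlite3

solr = pysolr.Solr("http://localhost:8983/solr")
conn = sqlite3.connect("source.db")  # stand-in for the fast SQL source

batch = []
for doc_id, title in conn.execute("SELECT id, title FROM documents"):
    batch.append({"id": doc_id, "title": title})
    if len(batch) >= 1000:
        solr.add(batch, commit=False)  # defer the commit for speed
        batch = []
if batch:
    solr.add(batch, commit=False)
solr.commit()  # a single commit once everything is in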
Hi List
I have a solr index where I want to include numerical fields in my ranking
function as well as keyword relevance. For example, each document has a
document view count, and I'd like to increase the relevancy of documents
that are read often, and penalize documents with a very low view count
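For illustration, one way to fold a view count into the score is an edismax
multiplicative boost (the field name and query are hypothetical). Solr's
log() is base 10, so log(sum(view_count,10)) keeps the multiplier at 1.0 for
unread documents and grows slowly with popularity.

import requests

params = {
    "q": "some search terms",
    "defType": "edismax",
    # Multiply the text-relevance score by a damped function of view count.
    "boost": "log(sum(view_count,10))",
    "fl": "id,score",
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/select", params=params)
print(resp.json()["response"]["docs"])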