What's in the logs of the node that won't recover on restart after
> clearing the index and tlog
>
> - Mark
>
> On Jan 29, 2014, at 11:41 AM, Greg Preston
> wrote:
>
> >> If you removed the tlog and index and restart it should resync, or
> >> something i…
…json is a red herring. You have to merge the live nodes
> info with the state to know the real state.
>
> - Mark
>
> http://www.about.me/markrmiller
>
> > On Jan 28, 2014, at 12:31 PM, Greg Preston
> wrote:
> >
> > ** Using solrcloud 4.4.0 **
> >
>
…
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:219)
-Greg
On Tue, Jan 28, 2014 at 9:53 AM, Shawn Heisey wrote:
> On 1/28/2014 10:31 AM, Greg Preston wrote:
>
>> ** Using solrcloud 4.4.0 **
>>
>> I had to kill a running solrcloud node.
** Using solrcloud 4.4.0 **
I had to kill a running solrcloud node. There is still a replica for that
shard, so everything is functional. We've done some indexing while the
node was killed.
I'd like to bring back up the downed node and have it resync from the other
replica. But when I restart…
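Mark's suggestion above (clearing the index and tlog) forces the core to do a full replication recovery from the leader when it comes back up. A minimal sketch; DATA_DIR is an illustrative path, not the real layout — point it at your core's actual data directory:

```shell
# Simulate a core data directory, then clear it the way Mark describes.
# DATA_DIR is illustrative; substitute your core's actual data directory.
DATA_DIR="${DATA_DIR:-/tmp/solr-core-demo/data}"
mkdir -p "$DATA_DIR/index" "$DATA_DIR/tlog"

# Stop the Solr node first, then remove the local segments and tlog:
rm -rf "$DATA_DIR/index" "$DATA_DIR/tlog"

# On restart the core has no local data, so RecoveryStrategy should fall
# back to a full replication recovery from the shard leader.
ls -A "$DATA_DIR"
```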
Anyway, thank you very much for the suggestion.
-Greg
On Fri, Dec 27, 2013 at 4:25 AM, Michael McCandless
wrote:
> Likely this is for field norms, which use doc values under the hood.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Thu, Dec 26, 2013 at 5:03 PM, Greg Preston
> wrote:
>> Does anybody with knowledge of solr internals know why I'm seeing
>> instances of Lucene42DocValuesProducer wh…
Does anybody with knowledge of solr internals know why I'm seeing
instances of Lucene42DocValuesProducer when I don't have any fields
that are using DocValues? Or am I misunderstanding what this class is
for?
-Greg
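Mike's point above is that index-time norms are encoded through the doc-values machinery, which is why Lucene42DocValuesProducer appears even with no docValues fields in the schema. If length normalization isn't needed on a field, a hedged schema.xml sketch (field and type names are illustrative) that turns the norms off:

```xml
<!-- Illustrative field: omitNorms="true" drops the per-field norms,
     which are what the doc-values producer is loading here. -->
<field name="body" type="text_general" indexed="true" stored="true"
       omitNorms="true"/>
```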
On Mon, Dec 23, 2013 at 12:07 PM, Greg Preston
wrote:
> Hello,
> …
-Greg
On Mon, Dec 23, 2013 at 2:16 PM, David Santamauro
wrote:
> On 12/23/2013 05:03 PM, Greg Preston wrote:
>>>
>>> Yes, I'm well aware of the performance implications, many of which are
>>> mitigated by 2TB of SSD and 512GB RAM
>>
>>
>> I'…
…between the commits. Then watch the memory as the
> replaying occurs with the smaller tlog.
>
> Joel
>
>
>
>
> Joel Bernstein
> Search Engineer at Heliosearch
>
>
> On Mon, Dec 23, 2013 at 4:17 PM, Greg Preston
> wrote:
>
>> Hi Joel,
>>
>>
>Yes, I'm well aware of the performance implications, many of which are
>mitigated by 2TB of SSD and 512GB RAM
I've got a very similar setup in production. 2TB SSD, 256G RAM (128G
heaps), and 1 - 1.5 TB of index per node. We're in the process of
splitting that to multiple JVMs per host. GC pau…
…commit setting it's possible the OOM errors will stop
> occurring.
>
> Joel
>
> Joel Bernstein
> Search Engineer at Heliosearch
>
>
> On Mon, Dec 23, 2013 at 3:07 PM, Greg Preston
> wrote:
>
>> Hello,
>>
>> I'm loading up our solr cloud…
Hello,
I'm loading up our solr cloud with data (from a solrj client) and
running into a weird memory issue. I can reliably reproduce the
problem.
- Using Solr Cloud 4.4.0 (also replicated with 4.6.0)
- 24 solr nodes (one shard each), spread across 3 physical hosts, each
host has 256G of memory
- …
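Joel's advice above (commit in between, then replay the smaller tlog) amounts to bounding the transaction log with periodic hard commits. A hedged solrconfig.xml sketch; the thresholds are illustrative, not recommendations:

```xml
<!-- Hard-commit periodically so the tlog stays small; openSearcher=false
     keeps these commits cheap (no new searcher is opened). -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>50000</maxDocs>      <!-- illustrative threshold -->
    <maxTime>60000</maxTime>      <!-- ms; illustrative -->
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>
```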
-- Jack Krupansky
>
> -Original Message- From: Greg Preston
> Sent: Wednesday, September 25, 2013 5:43 PM
> To: solr-user@lucene.apache.org
> Subject: How to always tokenize on underscore?
>
>
> [Using SolrCloud 4.4.0]
>
> I have a text field where the data will sometimes be d…
[Using SolrCloud 4.4.0]
I have a text field where the data will sometimes be delimited by
whitespace, and sometimes by underscore. For example, both of the
following are possible input values:
Group_EN_1000232142_blah_1000232142abc_foo
Group EN 1000232142 blah 1000232142abc foo
What I'd like to…
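One way to get underscore treated like whitespace is to rewrite it before tokenization. A hedged schema.xml sketch (the fieldType name is illustrative): a char filter maps `_` to a space, so both sample inputs above tokenize identically.

```xml
<!-- Illustrative fieldType: rewrite '_' to ' ' before tokenizing, so
     "Group_EN_1000232142_blah" and "Group EN 1000232142 blah" produce
     the same tokens. -->
<fieldType name="text_underscore" class="solr.TextField"
           positionIncrementGap="100">
  <analyzer>
    <charFilter class="solr.PatternReplaceCharFilterFactory"
                pattern="_" replacement=" "/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```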
Our index is too large to uninvert on the fly, so we've been looking
into using DocValues to keep a particular field uninverted at index
time. See http://wiki.apache.org/solr/DocValues
I don't know if this will solve your problem, but it might be worth
trying it out.
-Greg
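For reference, enabling DocValues in Solr 4.x is a per-field schema.xml setting; a hedged sketch (field and type names are illustrative):

```xml
<!-- Illustrative: docValues keeps this field's values column-stored at
     index time, avoiding the FieldCache un-inversion step at query time. -->
<field name="category" type="string" indexed="true" stored="false"
       docValues="true"/>
```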
On Tue, Sep 3, 2013
I don't know about SOLR-5017, but why don't you want to use parent_id
as a shard key?
So if you've got a doc with a key of "abc123" and a parent_id of 456,
just use a key of "456!abc123" and all docs with the same parent_id
will go to the same shard.
We're doing something similar and limiting que…
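The routing scheme above is plain string handling; the compositeId router hashes the part before the `!` to pick a shard, so docs sharing a parent_id co-locate. A sketch using the values from this thread:

```shell
# Build a composite ID: everything before '!' is the shard key the
# compositeId router hashes, so docs sharing parent_id land together.
parent_id="456"
doc_id="abc123"
composite_id="${parent_id}!${doc_id}"

echo "$composite_id"          # 456!abc123
echo "${composite_id%%!*}"    # shard key the router hashes: 456
```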
I haven't been able to successfully split a shard with Solr 4.4.0
If I have an empty index, or all documents would go to one side of the
split, I hit SOLR-5144. But if I avoid that case, I consistently get
this error:
290391 [qtp243983770-60] INFO org.apache.solr.update.processor.LogUpdateProces…
…Solr (Lucene) always rewrites the complete
> document.
>
>
> On 08/23/2013 09:03 AM, Greg Preston wrote:
>
>> Perhaps an atomic update that only changes the fields you want to change?
>>
>> -Greg
>>
>>
>> On Fri, Aug 23, 2013 at 4:16 AM, Luís Portela Afonso
Perhaps an atomic update that only changes the fields you want to change?
-Greg
On Fri, Aug 23, 2013 at 4:16 AM, Luís Portela Afonso
wrote:
> Hi thanks by the answer, but the uniqueId is generated by me. But when solr
> indexes and there is an update in a doc, it deletes the doc and creates a
Found it. Add "[shard]" to your fl.
-Greg
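For reference, a sketch of the request (host, port, and collection name are illustrative): adding the `[shard]` pseudo-field to `fl` makes each returned document report which shard it came from.

```
http://localhost:8983/solr/collection1/select?q=*:*&fl=id,[shard]
```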
On Tue, Aug 20, 2013 at 1:24 PM, Greg Preston
wrote:
> I know I've done this in a search via the admin console, but I can't
> remember/find the exact syntax right now...
>
> -Greg
>
>
> On Tue, Aug 20, 2013 a…
I know I've done this in a search via the admin console, but I can't
remember/find the exact syntax right now...
-Greg
On Tue, Aug 20, 2013 at 12:56 PM, AdamP wrote:
> Hi,
>
> We have several shards which we're querying across using distributed search.
> This initial search only returns basic i…
Two possibilities:
>
> 1. You need a lot more hardware.
> 2. You need to scale back your ambitions.
>
> -- Jack Krupansky
>
> -Original Message- From: Greg Preston
> Sent: Tuesday, August 20, 2013 2:00 PM
>
> To: solr-user@lucene.apache.org
> Subject: Autosuggest…
The filter query would be on a different field (clientId) than the
field we want to autosuggest on (title).
Or are you proposing we index a compound field that would be
clientId+titleTokens so we would then prefix the suggester with
clientId+userInput ?
Interesting idea.
-Greg
On Tue, Aug 20,…
Using 4.4.0 -
I would like to be able to do an autosuggest query against one of the
fields in our index and have the results be limited by an fq.
I can get exactly the results I want with a facet query using a
facet.prefix, but the first query takes ~5 minutes to run on our QA
env (~240M docs).
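The facet-based variant described above looks roughly like this as request parameters (`clientId` and `title` are the field names from this thread; the value of `facet.prefix` would be whatever the user has typed so far):

```
q=*:*&rows=0
&fq=clientId:12345
&facet=true
&facet.field=title
&facet.limit=10
&facet.prefix=<user input so far>
```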
Have you tried it with a smaller number of documents? I haven't been able
to successfully split a shard with 4.4.0 with even a handful of docs.
-Greg
On Fri, Aug 16, 2013 at 7:09 AM, Harald Kirsch wrote:
> Hi all.
>
> Using the example setup of solr-4.4.0, I was able to easily feed 23
> million…
I'm running into the same issue using composite routing keys when all of
the shard keys end up in one of the subshards.
-Greg
On Tue, Aug 13, 2013 at 9:34 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> Scratch that. I obviously didn't pay attention to the stack trace.
> There is n…
Are you manually setting the shard on each document? If not, documents
will be hashed across all the shards.
-Greg
On Mon, Aug 12, 2013 at 3:50 PM, Thierry Thelliez <
thierry.thelliez.t...@gmail.com> wrote:
> Hello, I am trying to set a four shard system for the first time. I do
> not understand…
I've simplified things from my previous email, and I'm still seeing errors.
Using solr 4.4.0 with two nodes, starting with a single shard. Collection
is named "marin", host names are dumbo and solrcloud1. I bring up an empty
cloud and index 50 documents. I can query them and everything looks fi…
…'ll get a lot more attention
> than 4.1
>
> FWIW,
> Erick
>
>
> On Fri, Aug 9, 2013 at 7:32 PM, Greg Preston wrote:
>
> > Howdy,
> >
> > I'm trying to test shard splitting, and it's not working for me. I've
> got
> > a 4 node…
Howdy,
I'm trying to test shard splitting, and it's not working for me. I've got
a 4 node cloud with a single collection and 2 shards.
I've indexed 170k small documents, and I'm using the compositeId router,
with an internal "client id" as the shard key, with 4 distinct values
across the data set…