On Tue, Aug 14, 2012 at 11:14 AM, Jack Krupansky
<j...@basetechnology.com> wrote:
> If you send a dummy document using a curl command, without the commit
> option, does it auto-commit and become visible in 1 minute?

Sending a JSON document using curl:

{
  "add": {
    "commitWithin": 60000,
    "overwrite": false,
    "doc": {
      "id" : "1",
      "type" : "foo"
    }
  }
}

This worked fine. But if I use EmbeddedSolrServer.add(doc, commitWithin),
the document never shows up in the search results.
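
For context, my indexer boils down to something like the sketch below
(the path, class name and container wiring are placeholders/from memory,
so treat them as approximate; the add(doc, 60000) call is the one that
doesn't stick):

    import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
    import org.apache.solr.common.SolrInputDocument;
    import org.apache.solr.core.CoreContainer;

    public class IndexerSketch {
        public static void main(String[] args) throws Exception {
            // placeholder path; the real solr home contains my collection1 core
            System.setProperty("solr.solr.home", "/path/to/solr/home");
            CoreContainer.Initializer initializer = new CoreContainer.Initializer();
            CoreContainer coreContainer = initializer.initialize();
            EmbeddedSolrServer server =
                new EmbeddedSolrServer(coreContainer, "collection1");

            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "1");
            doc.addField("type", "foo");

            // same 60s commitWithin as the curl example above, but the
            // document never becomes visible/persisted for me
            server.add(doc, 60000);

            coreContainer.shutdown();
        }
    }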

From this article:
http://www.cominvent.com/2011/09/09/discover-commitwithin-in-solr/

I see there are multiple ways to specify the commitWithin option:

https://issues.apache.org/jira/browse/SOLR-2742 introduced it to the
.add() methods of SolrServer; could it be broken only there?

I will go try this syntax:

    UpdateRequest req = new UpdateRequest();
    req.add(mySolrInputDocument);
    req.setCommitWithin(10000); // commitWithin is in milliseconds
    req.process(server);
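
And, per Erick's question 2 below, I'll also add an explicit hard commit
as the very last statement of the indexing run (reusing the names from
the sketch above), something like:

    // explicit hard commit as the last statement, per Erick's suggestion,
    // to rule out the adds never being committed at all
    server.commit();
    coreContainer.shutdown();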

Cheers,

/jonathan

>
> -- Jack Krupansky
>
> -----Original Message----- From: Jonatan Fournier
> Sent: Tuesday, August 14, 2012 11:03 AM
> To: solr-user@lucene.apache.org ; erickerick...@gmail.com
> Subject: Re: Index not loading
>
>
> Hi Erick,
>
> On Tue, Aug 14, 2012 at 10:25 AM, Erick Erickson
> <erickerick...@gmail.com> wrote:
>>
>> This is quite odd, it really sounds like you're not
>> actually committing. So, some questions.
>>
>> 1> What happens if you search before you shut
>> down your tomcat? Do you see docs then? If so,
>> somehow you're doing soft commits and never
>> doing a hard commit.
>
>
> No, I'm not seeing any documents when I search for anything. As
> mentioned above, Num Docs and Max Doc are 0.
>
> As I mentioned below, my index files are not deleted when I
> start/restart Tomcat, but they are deleted when I send a
> commit/optimize command from within Tomcat.
>
> One thing I noticed was different in the log output from the embedded
> server: when I use the solrconfig.xml autoCommit, after the delay I
> see some stdout messages about committing to the index. But when
> relying on commitWithin, I never see the Solr server output pause for
> a moment while committing; I only see my add-document stdout messages.
> Should the behavior be the same? Or do the commit messages pass by so
> fast that I don't see them?
>
> It must be doing some kind of commit/merge, because when I was
> monitoring the memory I could see it periodically increase (when I
> assume it was merging) and then decrease until the next cycle...
>
>>
>> 2> What happens if, as the last statement in your SolrJ
>> program you do a commit()?
>
>
> Let me try that and come back to you. For now, here are the commands
> I was using in the 3 test scenarios:
>
> SolrInputDocument doc = new SolrInputDocument();
> doc.addField("id", someId);
> ...
> server.add(doc);
> // In this case I have either autoCommit <maxTime>60000</maxTime> or
> // <autoSoftCommit> enabled in solrconfig.xml. Both scenarios work:
> // when I shut down my embedded server and restart Tomcat, I have all
> // my data indexed/committed.
>
> or
>
> server.add(doc, 60000);
> // In this case autoCommit is not enabled and I try to rely on the
> // commitWithin param.
>
>
>>
>> 3> While you're indexing, what do you see in your index
>> directory? You should see multiple segments being
>> created, and possibly merged so the number of
>> files should go up and down. If you only have a single
>> set of files, you're somehow not doing a commit.
>
>
> No, I do see a bunch of files being created/merged; at the end I had
> about 89G across many, many files.
>
> Another thing I was playing with when trying to use commitWithin was
> changing <useCompoundFile>true</useCompoundFile> and
> <mergeFactor>10</mergeFactor> to reduce the number of files created.
> Could that impact things?
>
>>
>> 4> Is there something really silly going on like your
>> restart scripts delete the index directory? Or you're
>> using a VM that restores a blank image?
>
>
> No VM, no scripts, no replication.
>
>>
>> 5> When you do restart, are there any files at all
>> in your index directory?
>
>
> When I restart Tomcat I do see all the same 89G of files that were
> created using the embedded server; they only vanish when I force a
> commit or optimize. Then it's as if my data directory didn't exist:
> the 2 initial segment files are created and all the rest are deleted.
>
>>
>> I really suspect you've got some configuration problem
>> here....
>
>
> Maybe, but other than playing with the compound file thingy I don't
> have any fancy config changes.
>
> Cheers,
>
> /jonathan
>
>>
>> Best
>> Erick
>>
>>
>>
>> On Mon, Aug 13, 2012 at 9:11 AM, Jonatan Fournier
>> <jonatan.fourn...@gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> I'm using Solr 4.0.0-ALPHA and the EmbeddedSolrServer.
>>>
>>> Within my SolrJ application, the documents are added to the server
>>> using the commitWithin parameter (in my case 60s). After 1 day my
>>> 125 million documents are all added to the server and I can see 89G
>>> of index data files. I stop my SolrJ application and reload my Solr
>>> instance in Tomcat.
>>>
>>> From the Solr admin panel related to my Core (collection1) I see this
>>> info:
>>>
>>>
>>> Last Modified:
>>> Num Docs:0
>>> Max Doc:0
>>> Version:1
>>> Segment Count:0
>>> Optimized: (green check)
>>> Current:  (green check)
>>> Master:
>>> Version: 0
>>> Gen: 1
>>> Size: 88.14 GB
>>>
>>>
>>> From the general Core Admin panel I see:
>>>
>>> lastModified:
>>> version:1
>>> numDocs:0
>>> maxDoc:0
>>> optimized: (red circle)
>>> current: (green check)
>>> hasDeletions: (red circle)
>>>
>>> If I query my index for *:* I get 0 results. If I trigger an
>>> optimize, it wipes ALL my data inside the index and resets it to
>>> empty. I initially played around with my EmbeddedSolrServer using
>>> autoCommit/softCommit and it was working fine. Now that I've
>>> switched to commitWithin on the document add call, it always does
>>> that! I'm never able to reload my index within Tomcat/Solr.
>>>
>>> Any idea?
>>>
>>> Cheers,
>>>
>>> /jonathan
>
>
