Delete efficiently

2014-07-22 Thread Ophir Michaeli
Hi all,

I want to delete 1 million documents at a time from a given list of 
documents I get. The delete is by query (on one of the documents' fields).
I want to understand the best practice for doing this.
Should I loop through the million and delete them one by one asynchronously, 
or would that overload Elasticsearch, so that I should delete X at a time 
asynchronously, wait until that batch is done, and then delete another X? 
Or is there a batch delete that does this work more efficiently?

Thanks
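
For reference, a minimal sketch of the chunked approach, assuming an
Elasticsearch 1.x node on localhost and hypothetical index/type names
(my_index, my_type); each bulk request is sent and checked before the next
chunk goes out:

import json, requests

ES = "http://localhost:9200"             # assumed local node
INDEX, TYPE = "my_index", "my_type"      # hypothetical index/type names

def bulk_delete(doc_ids, chunk=5000):
    # Delete known document IDs in fixed-size chunks, waiting for each
    # bulk response before sending the next one (ES 1.x bulk API).
    for i in range(0, len(doc_ids), chunk):
        body = "".join(
            json.dumps({"delete": {"_index": INDEX, "_type": TYPE, "_id": d}}) + "\n"
            for d in doc_ids[i:i + chunk])
        r = requests.post(ES + "/_bulk", data=body)
        r.raise_for_status()

The delete-by-query endpoint (sketched under the next question) covers the
single-request case.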
 



Delete by a Field value

2014-07-20 Thread Ophir Michaeli
Hi all,

Is it possible to delete by a field value?
In the documentation I see only delete by document id.

Thanks,
Ophir
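
The 1.x API exposes a delete-by-query endpoint that accepts any query,
including a term query on a field. A minimal sketch, assuming a local node
and hypothetical index/field/value names:

import json, requests

ES = "http://localhost:9200"                          # assumed local node
query = {"query": {"term": {"status": "expired"}}}    # hypothetical field/value

# ES 1.x delete-by-query: removes every document in the index that matches.
r = requests.delete(ES + "/my_index/_query", data=json.dumps(query))
print(r.status_code, r.text)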



Delete oldest X documents from index

2014-07-16 Thread Ophir Michaeli
Hi all,

I want to delete the oldest X docs from my elasticsearch index.
How do I do that?

Thanks, Ophir
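
One possible approach, assuming the documents carry a timestamp field (here
called created_at, a hypothetical name): fetch the IDs of the X oldest
documents with a sorted search, then remove them with a bulk request. A
minimal 1.x-era sketch against a local node with hypothetical names:

import json, requests

ES = "http://localhost:9200"            # assumed local node
INDEX, TYPE = "my_index", "my_type"     # hypothetical names
X = 1000

# IDs of the X oldest documents, sorted by the assumed timestamp field.
search = {"size": X, "sort": [{"created_at": "asc"}], "_source": False}
hits = requests.post(ES + "/" + INDEX + "/_search",
                     data=json.dumps(search)).json()["hits"]["hits"]

# One bulk request deleting them by ID.
body = "".join(json.dumps({"delete": {"_index": INDEX, "_type": TYPE,
                                      "_id": h["_id"]}}) + "\n" for h in hits)
if body:
    requests.post(ES + "/_bulk", data=body)

For log-style data, time-based indices (dropping a whole old index) are
usually much cheaper than deleting individual documents.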



Re: Keep the number of segments to 5

2014-07-14 Thread Ophir Michaeli
 

Is there an optimal ratio between index disk size and RAM?


On Monday, July 14, 2014 12:44:14 PM UTC+3, Michael McCandless wrote:
>
> Also, optimize is an incredibly costly (CPU, IO) operation.  Really, it 
> should only be done when you know the index will no longer change, e.g. 
> when the daily log index is done being written.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Sun, Jul 13, 2014 at 9:26 AM, Itamar Syn-Hershko wrote:
>
>> Because Elasticsearch will usually get the merge policy right, and you 
>> are better off not trying to fine tune it yourself.
>>
>> Based on those numbers, I'd say you should add more servers if not RAM. 
>> 230GB on 16GB servers is going to cause a lot of thrashing definitely if 
>> you are doing a lot of aggregation operations (aka faceting)
>>
>> You probably can find ways to fine tune and squeeze more performance out 
>> of what you currently have (again - using filters, codecs and other 
>> advanced configs) but it's probably just wiser to scale out
>>
>> --
>>
>> Itamar Syn-Hershko
>> http://code972.com | @synhershko <https://twitter.com/synhershko>
>> Freelance Developer & Consultant
>> Author of RavenDB in Action <http://manning.com/synhershko/>
>>
>>
>> On Sun, Jul 13, 2014 at 4:20 PM, Ophir Michaeli wrote:
>>
>>> Shard size on disk is 115GB (230GB for both).
>>> Adding ram is not an option now, I got good results when the index was 
>>> optimized, why not try optimize the delta (for example optimize each added 
>>> million docs each time, if that is too expensive than half a million and so 
>>> on)?
>>>
>>>
>>> On Sunday, July 13, 2014 4:06:51 PM UTC+3, Itamar Syn-Hershko wrote:
>>>
>>>> By shard size I meant on disk
>>>>
>>>> There's a lot you can do to optimize performance, worrying about the 
>>>> number of segments is the last of them really
>>>>
>>>> Look into getting more RAM (32gb is my personal recommendation), using 
>>>> filters, making sure you use enough servers (50GB shards on 16GB RAM 
>>>> server 
>>>> isn't cool, especially if you use aggregations), look into codecs and much 
>>>> more. There's no need for you to look into segments, especially since if 
>>>> this is a live index which is being written to there's a large cost (CPU, 
>>>> IO and GC) associated with merging segments
>>>>
>>>> --
>>>>
>>>> Itamar Syn-Hershko
>>>> http://code972.com | @synhershko <https://twitter.com/synhershko>
>>>> Freelance Developer & Consultant
>>>> Author of RavenDB in Action <http://manning.com/synhershko/>
>>>>
>>>>
>>>> On Sun, Jul 13, 2014 at 3:59 PM, Ophir Michaeli  
>>>> wrote:
>>>>
>>>>>  I got to 5 by doing some performance tests, could be that 1 or 10 
>>>>> are also ok.
>>>>> Each shard is 33 Million documents (2 shards on a 66 Million docs, one 
>>>>> node on a machine).
>>>>> Each server is Server 2008 R2, 16GB Ram.
>>>>>
>>>>>
>>>>> On Sunday, July 13, 2014 3:15:57 PM UTC+3, Itamar Syn-Hershko wrote:
>>>>>
>>>>>> How did you arrive at this number of 5?
>>>>>>
>>>>>> To being with, what sizes are your shards? what are the specs of your 
>>>>>> servers?
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Itamar Syn-Hershko
>>>>>> http://code972.com | @synhershko <https://twitter.com/synhershko>
>>>>>> Freelance Developer & Consultant
>>>>>> Author of RavenDB in Action <http://manning.com/synhershko/>
>>>>>>
>>>>>>
>>>>>> On Sun, Jul 13, 2014 at 3:11 PM, Ophir Michaeli  
>>>>>> wrote:
>>>>>>
>>>>>>>  Hi everyone,
>>>>>>>
>>>>>>> I'm running a system of 2 nodes with 66 million documents each 
>>>>>>> (additional nodes will be added up to a total of 500 Million documents).
>>>>>>> I want to keep the number of segments to 5 (otherwise the search 
>>>>>>> while indexing is too slow), meaning running optimize every given time 
>>>>>>> on the delta, while indexing and search still running.

Re: Keep the number of segments to 5

2014-07-13 Thread Ophir Michaeli
Shard size on disk is 115GB (230GB for both).
Adding RAM is not an option right now. I got good results when the index was 
optimized, so why not optimize the delta (for example, optimize after each 
added million docs, and if that is too expensive then after half a million, 
and so on)?

On Sunday, July 13, 2014 4:06:51 PM UTC+3, Itamar Syn-Hershko wrote:
>
> By shard size I meant on disk
>
> There's a lot you can do to optimize performance, worrying about the 
> number of segments is the last of them really
>
> Look into getting more RAM (32gb is my personal recommendation), using 
> filters, making sure you use enough servers (50GB shards on 16GB RAM server 
> isn't cool, especially if you use aggregations), look into codecs and much 
> more. There's no need for you to look into segments, especially since if 
> this is a live index which is being written to there's a large cost (CPU, 
> IO and GC) associated with merging segments
>
> --
>
> Itamar Syn-Hershko
> http://code972.com | @synhershko <https://twitter.com/synhershko>
> Freelance Developer & Consultant
> Author of RavenDB in Action <http://manning.com/synhershko/>
>
>
> On Sun, Jul 13, 2014 at 3:59 PM, Ophir Michaeli wrote:
>
>>  I got to 5 by doing some performance tests, could be that 1 or 10 are 
>> also ok.
>> Each shard is 33 Million documents (2 shards on a 66 Million docs, one 
>> node on a machine).
>> Each server is Server 2008 R2, 16GB Ram.
>>
>>
>> On Sunday, July 13, 2014 3:15:57 PM UTC+3, Itamar Syn-Hershko wrote:
>>
>>> How did you arrive at this number of 5?
>>>
>>> To being with, what sizes are your shards? what are the specs of your 
>>> servers?
>>>
>>> --
>>>
>>> Itamar Syn-Hershko
>>> http://code972.com | @synhershko <https://twitter.com/synhershko>
>>> Freelance Developer & Consultant
>>> Author of RavenDB in Action <http://manning.com/synhershko/>
>>>
>>>
>>> On Sun, Jul 13, 2014 at 3:11 PM, Ophir Michaeli  
>>> wrote:
>>>
>>>>  Hi everyone,
>>>>
>>>> I'm running a system of 2 nodes with 66 million documents each 
>>>> (additional nodes will be added up to a total of 500 Million documents).
>>>> I want to keep the number of segments to 5 (otherwise the search while 
>>>> indexing is too slow), meaning running optimize every given time on the 
>>>> delta, while indexing and search still running.
>>>> Is this a good practice? Or are there better ideas for a good 
>>>> performance. 
>>>>
>>>> Best Regards,
>>>> Ophir
>>>>
>>>>  -- 
>>>> You received this message because you are subscribed to the Google 
>>>> Groups "elasticsearch" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>> an email to elasticsearc...@googlegroups.com.
>>>>
>>>> To view this discussion on the web visit https://groups.google.com/d/
>>>> msgid/elasticsearch/b33ae8c6-667a-45cd-8b1e-e7d42bb8e99e%
>>>> 40googlegroups.com 
>>>> <https://groups.google.com/d/msgid/elasticsearch/b33ae8c6-667a-45cd-8b1e-e7d42bb8e99e%40googlegroups.com?utm_medium=email&utm_source=footer>
>>>> .
>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
>>>
>>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/9413947b-5ce8-493c-bfe5-e297d1b48879%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/elasticsearch/9413947b-5ce8-493c-bfe5-e297d1b48879%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Re: Keep the number of segments to 5

2014-07-13 Thread Ophir Michaeli
 

I got to 5 by doing some performance tests; 1 or 10 could also be OK.
Each shard holds 33 million documents (2 shards of a 66-million-document 
index, one node per machine).
Each server runs Windows Server 2008 R2 with 16GB RAM.


On Sunday, July 13, 2014 3:15:57 PM UTC+3, Itamar Syn-Hershko wrote:
>
> How did you arrive at this number of 5?
>
> To being with, what sizes are your shards? what are the specs of your 
> servers?
>
> --
>
> Itamar Syn-Hershko
> http://code972.com | @synhershko <https://twitter.com/synhershko>
> Freelance Developer & Consultant
> Author of RavenDB in Action <http://manning.com/synhershko/>
>
>
> On Sun, Jul 13, 2014 at 3:11 PM, Ophir Michaeli wrote:
>
>> Hi everyone,
>>
>> I'm running a system of 2 nodes with 66 million documents each 
>> (additional nodes will be added up to a total of 500 Million documents).
>> I want to keep the number of segments to 5 (otherwise the search while 
>> indexing is too slow), meaning running optimize every given time on the 
>> delta, while indexing and search still running.
>> Is this a good practice? Or are there better ideas for a good 
>> performance. 
>>
>> Best Regards,
>> Ophir
>>
>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/b33ae8c6-667a-45cd-8b1e-e7d42bb8e99e%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/elasticsearch/b33ae8c6-667a-45cd-8b1e-e7d42bb8e99e%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Keep the number of segments to 5

2014-07-13 Thread Ophir Michaeli
Hi everyone,

I'm running a system of 2 nodes with 66 million documents each (additional 
nodes will be added, up to a total of 500 million documents).
I want to keep the number of segments at 5 (otherwise search while indexing 
is too slow), which means running optimize periodically on the delta while 
indexing and search are still running.
Is this good practice, or are there better ideas for good performance?

Best Regards,
Ophir
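
For reference, the optimize call being discussed, as a minimal sketch against
a local 1.x node with a hypothetical index name; as noted in the replies
above, this rewrites segments and is expensive, so it is best run when the
index is quiet:

import requests

ES = "http://localhost:9200"    # assumed local node

# ES 1.x optimize (force merge) down to at most 5 segments per shard.
r = requests.post(ES + "/my_index/_optimize?max_num_segments=5")
print(r.status_code, r.text)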



Set the number of index segments for indexing

2014-07-03 Thread Ophir Michaeli
Hi, 

While optimizing, it's easy to control the number of segments by executing 
the optimize command with the max_num_segments parameter.
My question is about indexing prior to optimizing.
I checked the 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/0.90/index-modules-merge.html#tiered
section and I still couldn't find how to control the number of segments 
during indexing. Is there a simple way to tell ES that I am willing to pay 
with indexing time and frequent merges but keep the number of segments below 
10, for example?

Thanks,
Ophir
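
One knob that roughly does this is the tiered merge policy: lowering
segments_per_tier (and max_merge_at_once) makes merging more aggressive at
the cost of indexing throughput. A sketch of updating those settings on a
1.x index with a hypothetical name, assuming they are dynamically updatable
on the version in use:

import json, requests

ES = "http://localhost:9200"    # assumed local node

# Fewer segments per tier means more merging work during indexing.
settings = {
    "index.merge.policy.segments_per_tier": 5,
    "index.merge.policy.max_merge_at_once": 5
}
r = requests.put(ES + "/my_index/_settings", data=json.dumps(settings))
print(r.status_code, r.text)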



elasticsearch console screen stays open even though java process closes

2014-07-03 Thread Ophir Michaeli
Hi all,

I'm running an auto test for my elasticsearch project.

The auto test runs a test and then reruns the same test 50 times to check 
the average time over multiple runs.
When one test finishes, the auto test closes the Java process so that the 
next run starts ES without cache (so all the reruns start from the same ES 
state).

The problem is that the console window ES runs in does not close when I 
close the Java process.
When the next run starts, it opens a new ES node, but because the former 
node's console window stayed open, ES now runs with 2 nodes, and each run 
adds another node, while I want only one new node running.

This happens on only one machine; on other machines the ES console window 
closes when the Java process closes.

Thank you,
Ophir



Re: Min Hard Drive Requirements

2014-07-02 Thread Ophir Michaeli
When I tried to optimize, the index had 51 shards.
Regards, Ophir

On Wednesday, July 2, 2014 11:27:50 AM UTC+3, Mark Walkom wrote:
>
> It will work until it's full, but then ES will fall over.
> Merging does require a certain amount of disk space, usually the same 
> amount as the segment that is being merged as it has to take a copy of the 
> shard to work on. So for a 10GB segment, you'd need at least 10GB free.
>
> How many shards do you have for the index, or how many are you trying to 
> optimise (merge) down to?
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com 
> web: www.campaignmonitor.com
>  
>
> On 2 July 2014 18:13, Ophir Michaeli wrote:
>
>>  Hi all,
>>
>> I'm testing the indexing of 100 million documents, it took about 400GB of 
>> the hard drive.
>> Is there a minimum free hard drive space needed for the index to work OK?
>> I'm asking because after we indexed 100 million documents we tested the 
>> index and it worked OK, 
>> but then when trying to optimize the optimize took days and then the 
>> index did not respond.
>> The hard drive had only 10 GB free space so we tried to copy the index to 
>> a new hard drive with a bigger free space, but the index is still not 
>> functioning.
>>
>> Thank you,
>> Ophir
>>
>>   
>>
>>  
>>  
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/3405d84f-49d4-4cf9-836e-6b6bc09fdc74%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/elasticsearch/3405d84f-49d4-4cf9-836e-6b6bc09fdc74%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Min Hard Drive Requirements

2014-07-02 Thread Ophir Michaeli
 

Hi all,

I'm testing the indexing of 100 million documents; it took about 400GB of 
hard drive space.
Is there a minimum amount of free hard drive space needed for the index to 
work properly?
I'm asking because after we indexed the 100 million documents we tested the 
index and it worked fine, but then an optimize took days and afterwards the 
index did not respond.
The hard drive had only 10GB of free space, so we copied the index to a new 
hard drive with more free space, but the index is still not functioning.

Thank you,
Ophir
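
A rough pre-flight check before optimizing, since a merge needs roughly as
much free space as the segments it rewrites: look at the segment layout and
at free disk per node. A minimal sketch against a local 1.x node, with a
hypothetical index name:

import requests

ES = "http://localhost:9200"    # assumed local node

# Segment layout per shard (counts and sizes) and disk usage per node.
print(requests.get(ES + "/my_index/_segments").json())
print(requests.get(ES + "/_cat/allocation?v").text)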

  

 



Re: Failed to merge - There is not enough space on the disk

2014-06-29 Thread Ophir Michaeli
It looks like it takes some time for the node to get back to normal; it 
works now. Thanks.

On Sunday, June 29, 2014 5:27:21 PM UTC+3, Itamar Syn-Hershko wrote:
>
> If it was corrupted you would have seen other errors, not 503. Check your 
> network settings.
>
> --
>
> Itamar Syn-Hershko
> http://code972.com | @synhershko <https://twitter.com/synhershko>
> Freelance Developer & Consultant
> Author of RavenDB in Action <http://manning.com/synhershko/>
>
>
> On Sun, Jun 29, 2014 at 5:25 PM, Ophir Michaeli wrote:
>
>> Hi,
>>
>> I restarted elasticesaerch node and the ES console screen shows no error, 
>> but I can't connect to the node (with REST I get "503 Service Unavailable
>> ").
>> Is the index corrupt? 
>>
>> Thanks!
>>
>>
>> On Sunday, June 29, 2014 2:51:56 PM UTC+3, Itamar Syn-Hershko wrote:
>>
>>> This error means indexing has stopped at one point, up to that point 
>>> everything is preserved.
>>>
>>> See http://www.elasticsearch.org/guide/en/elasticsearch/
>>> reference/current/index-modules-allocation.html#disk for how to avoid 
>>> this from now on
>>>
>>> --
>>>
>>> Itamar Syn-Hershko
>>> http://code972.com | @synhershko <https://twitter.com/synhershko>
>>> Freelance Developer & Consultant
>>> Author of RavenDB in Action <http://manning.com/synhershko/>
>>>
>>>
>>> On Sun, Jun 29, 2014 at 1:37 PM, Ophir Michaeli  
>>> wrote:
>>>
>>>>  Hi, 
>>>>
>>>> I get an error - 
>>>> failed to merge java.io.IOException: There is not enough space on the 
>>>> disk.
>>>> The disk was full, cleaned it now.
>>>> What is the best way not to loose what I already indexed? Or does this 
>>>> error mean that the index is lost and I need to delete and index again?
>>>>
>>>> Thanks,
>>>> Ophir
>>>>
>>>> -- 
>>>> You received this message because you are subscribed to the Google 
>>>> Groups "elasticsearch" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>> an email to elasticsearc...@googlegroups.com.
>>>>
>>>> To view this discussion on the web visit https://groups.google.com/d/
>>>> msgid/elasticsearch/0b1ae217-aeb4-465f-9468-d2b617a47679%
>>>> 40googlegroups.com 
>>>> <https://groups.google.com/d/msgid/elasticsearch/0b1ae217-aeb4-465f-9468-d2b617a47679%40googlegroups.com?utm_medium=email&utm_source=footer>
>>>> .
>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
>>>
>>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/63357c69-0c1e-45f8-8b0f-962dc805bc50%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/elasticsearch/63357c69-0c1e-45f8-8b0f-962dc805bc50%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Re: Failed to merge - There is not enough space on the disk

2014-06-29 Thread Ophir Michaeli
Hi,

I restarted the Elasticsearch node and the ES console screen shows no 
errors, but I can't connect to the node (via REST I get "503 Service 
Unavailable").
Is the index corrupt?

Thanks!

On Sunday, June 29, 2014 2:51:56 PM UTC+3, Itamar Syn-Hershko wrote:
>
> This error means indexing has stopped at one point, up to that point 
> everything is preserved.
>
> See 
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-allocation.html#disk
>  
> for how to avoid this from now on
>
> --
>
> Itamar Syn-Hershko
> http://code972.com | @synhershko <https://twitter.com/synhershko>
> Freelance Developer & Consultant
> Author of RavenDB in Action <http://manning.com/synhershko/>
>
>
> On Sun, Jun 29, 2014 at 1:37 PM, Ophir Michaeli wrote:
>
>> Hi, 
>>
>> I get an error - 
>> failed to merge java.io.IOException: There is not enough space on the 
>> disk.
>> The disk was full, cleaned it now.
>> What is the best way not to loose what I already indexed? Or does this 
>> error mean that the index is lost and I need to delete and index again?
>>
>> Thanks,
>> Ophir
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearc...@googlegroups.com .
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/0b1ae217-aeb4-465f-9468-d2b617a47679%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/elasticsearch/0b1ae217-aeb4-465f-9468-d2b617a47679%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Failed to merge - There is not enough space on the disk

2014-06-29 Thread Ophir Michaeli
Hi, 

I get an error:
failed to merge java.io.IOException: There is not enough space on the disk.
The disk was full; I have cleaned it up now.
What is the best way not to lose what I already indexed? Or does this error 
mean that the index is lost and I need to delete it and index everything 
again?

Thanks,
Ophir
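
The replies archived above point to the disk-based shard allocation settings
as the way to avoid running a node out of disk again. A minimal sketch of
setting the 1.x watermarks through the cluster settings API, with assumed
threshold values:

import json, requests

ES = "http://localhost:9200"    # assumed local node

# Stop allocating new shards to a node above the low watermark, and start
# moving shards away above the high watermark (ES 1.x disk thresholds).
settings = {
    "transient": {
        "cluster.routing.allocation.disk.threshold_enabled": True,
        "cluster.routing.allocation.disk.watermark.low": "85%",
        "cluster.routing.allocation.disk.watermark.high": "90%"
    }
}
r = requests.put(ES + "/_cluster/settings", data=json.dumps(settings))
print(r.status_code, r.text)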
