histogram aggregation with float interval 1.0 gives ArithmeticException
Hi,

The following query (note the `"interval": 0.1` line in the histogram aggregation) gives an ArithmeticException:

```json
{
  "query": {
    "filtered": {
      "query": { "term": { "self_and_ancestors": "diamonds" } },
      "filter": { "terms": { "attr_types.diamond#color": [ "d" ] } }
    }
  },
  "sort": [ { "sell_offer_cents": { "order": "asc" } } ],
  "fields": "_source",
  "script_fields": {
    "gap_cents": {
      "script": "custom_score_item_bid_ask_gap",
      "params": {
        "individual_price_item_ids": [],
        "individual_price_item_cents": [],
        "pb_amount_below_cents": 0
      },
      "lang": "native"
    }
  },
  "aggs": {
    "all_items": {
      "global": {},
      "aggs": {
        "gem#carats": {
          "filter": { "terms": { "attr_types.diamond#polish": [ "ex", "0001vg" ] } },
          "aggs": {
            "gems#carats": {
              "histogram": {
                "field": "attr_types.gem#carats",
                "interval": 0.1,
                "min_doc_count": 0
              }
            },
            "gem#carats_stats": {
              "stats": { "field": "attr_types.gem#carats" }
            }
          }
        }
      }
    }
  }
}
```

The error (the same QueryPhaseExecutionException with a nested ArithmeticException on every shard):

```
{ "error": "SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures
  {[D6h8GKwjRqO_Yem09Hu_VA][development-liquidibles::application-items][4]: QueryPhaseExecutionException[[development-liquidibles::application-items][4]: query[filtered(filtered(self_and_ancestors:diamonds)-cache(attr_types.diamond#color:d))-cache(_type:item)],from[0],size[10],sort[custom:\"sell_offer_cents\": org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource@11ce49a]: Query Failed [Failed to execute global aggregators]]; nested: ArithmeticException; }
  {[D6h8GKwjRqO_Yem09Hu_VA][development-liquidibles::application-items][3]: QueryPhaseExecutionException[[development-liquidibles::application-items][3]: query[filtered(filtered(self_and_ancestors:diamonds)-cache(attr_types.diamond#color:d))-cache(_type:item)],from[0],size[10],sort[custom:\"sell_offer_cents\": org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource@8c9d82]: Query Failed [Failed to execute global aggregators]]; nested: ArithmeticException; }
  {[D6h8GKwjRqO_Yem09Hu_VA][development-liquidibles::application-items][2]: QueryPhaseExecutionException[[development-liquidibles::application-items][2]: query[filtered(filtered(self_and_ancestors:diamonds)-cache(attr_types.diamond#color:d))-cache(_type:item)],from[0],size[10],sort[custom:\"sell_offer_cents\": org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource@73a7e5]: Query Failed [Failed to execute global aggregators]]; nested: ArithmeticException; }
  {[D6h8GKwjRqO_Yem09Hu_VA][development-liquidibles::application-items][1]: QueryPhaseExecutionException[[development-liquidibles::application-items][1]: query[filtered(filtered(self_and_ancestors:diamonds)-cache(attr_types.diamond#color:d))-cache(_type:item)],from[0],size[10],sort[custom:\"sell_offer_cents\": org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource@15d1b1a]: Query Failed [Failed to execute global aggregators]]; nested: ArithmeticException; }
  {[D6h8GKwjRqO_Yem09Hu_VA][development-liquidibles::application-items][0]: QueryPhaseExecutionException[[development-liquidibles::application-items][0]: query[filtered(filtered(self_and_ancestors:diamonds)-cache(attr_types.diamond#color:d))-cache(_type:item)],from[0],size[10],sort[custom:\"sell_offer_cents\": org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource@1b8c216]: Query Failed [Failed to execute global aggregators]]; nested: ArithmeticException; }],
  "status": 500 }
```

If I change the interval to 1.0 or greater, it works. But I want intervals of 0.1...

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/b8dad371-dfef-4c57-b7d8-433ee1c308c6%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
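[Editor's note] If this is a version where the histogram aggregation only handles integer (long) intervals, a fractional interval such as 0.1 can be truncated to 0 and make the bucketing division throw exactly this kind of ArithmeticException. A common workaround is to index a scaled integer copy of the field (e.g. a hypothetical `carats_tenths = round(carats * 10)`), histogram over it with `interval: 1`, and divide the returned bucket keys by 10 on the client. The plain-Python sketch below only illustrates that scaling logic; the field name and scale factor are assumptions, not taken from the thread:

```python
# Sketch of the "scaled integer" workaround for a 0.1 histogram interval:
# bucket values as integer tenths (the server would do the same with a
# histogram over a pre-scaled field and interval=1), then map the keys
# back to decimal carats for display.

import math
from collections import Counter

def histogram_tenths(values):
    """Bucket float values with an effective interval of 0.1 by
    flooring each value to integer tenths."""
    return Counter(math.floor(v * 10) for v in values)

def buckets_as_carats(counts):
    """Convert integer-tenth bucket keys back to decimal keys."""
    return {k / 10: n for k, n in sorted(counts.items())}

carats = [0.31, 0.33, 0.38, 0.41, 0.47, 0.52]
print(buckets_as_carats(histogram_tenths(carats)))
# {0.3: 3, 0.4: 2, 0.5: 1}
```

Scaling on the client (or at index time) keeps all server-side arithmetic in integers, which sidesteps the truncated-interval problem entirely.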
Re: Scrolling performance
Hi Jörg,

It looks like you know something about scan/scroll that I haven't found documented elsewhere: how to scan 60 million docs at 1000 documents per fetch, with constant time per fetch. Other comments I've seen indicate that the deeper you get into the results, the slower each fetch gets. I'm evaluating alternatives for implementing a feature that will require scan/scroll at a similar scale, so knowing that what you've done is possible is critical for my planning. Could you please share the key parts of your setup/retrieval code, in addition to the configuration and version information you've already shared?

Thanks in advance,
-Mark

On Wednesday, November 27, 2013 8:03:46 AM UTC-6, Jörg Prante wrote:
> I executed a scan/scroll over 60 million docs; the size of the indices (the 'data' folder) is 87G.
>
> java version "1.7.0_25"
> Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
> Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
>
> Heap is 2G
> Red Hat Enterprise Linux Server release 6.4 (Santiago)
>
> Jörg
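[Editor's note] For readers unfamiliar with the mechanism being discussed: a scan/scroll of this era is driven by two request shapes — an initial search with `search_type=scan` and a `scroll` keep-alive, then repeated scroll requests that each resume from the `scroll_id` returned by the previous response. The sketch below builds those requests as plain Python data; the index name, scroll timeout, and match_all query are illustrative, not taken from the thread:

```python
# Sketch of the scan/scroll request sequence (Elasticsearch 0.90/1.x era).

def initial_scan_request(index, batch_size, scroll="5m"):
    """First request: opens the scroll cursor. With search_type=scan,
    `size` is the number of docs returned per shard per fetch."""
    return (
        f"POST /{index}/_search?search_type=scan&scroll={scroll}",
        {"query": {"match_all": {}}, "size": batch_size},
    )

def next_scroll_request(scroll_id, scroll="5m"):
    """Each follow-up request resumes from the scroll_id returned by
    the previous response; no from/size deep paging is involved."""
    return (f"POST /_search/scroll?scroll={scroll}", scroll_id)

url, body = initial_scan_request("items", 1000)
print(url)   # POST /items/_search?search_type=scan&scroll=5m
print(body)  # {'query': {'match_all': {}}, 'size': 1000}
```

The loop ends when a scroll response comes back with no hits; each intermediate response supplies the `scroll_id` for the next call.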
Re: Scrolling performance
Hi Jörg,

Thanks for sharing! Roughly what elapsed time (in milliseconds?) does a 1000-document fetch take for you? Can you tell me the key factors that you think affect this, e.g. how many shards, etc.?

-Mark

On Sunday, January 19, 2014 12:28:48 PM UTC-6, Ivan Babrou wrote:
> Mark, my scrolling performance is pretty constant now too. My problem was actually in incorrect code. You can check out the correct version here:
> https://github.com/bobrik/esreindexer/blob/master/reindexer.go
>
> On 19 January 2014 21:49, LiquidMark mark.e...@gmail.com wrote:
>> Hi Jörg,
>>
>> It looks like you know something about scan/scroll that I haven't found documented elsewhere: how to scan 60 million docs at 1000 documents per fetch, with constant time per fetch. Other comments I've seen indicate that the deeper you get into the results, the slower each fetch gets. I'm evaluating alternatives for implementing a feature that will require scan/scroll at a similar scale, so knowing that what you've done is possible is critical for my planning. Could you please share the key parts of your setup/retrieval code, in addition to the configuration and version information you've already shared?
>>
>> Thanks in advance,
>> -Mark
>>
>> On Wednesday, November 27, 2013 8:03:46 AM UTC-6, Jörg Prante wrote:
>>> I executed a scan/scroll over 60 million docs; the size of the indices (the 'data' folder) is 87G.
>>> java version "1.7.0_25"
>>> Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
>>> Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
>>> Heap is 2G
>>> Red Hat Enterprise Linux Server release 6.4 (Santiago)
>>> Jörg
>
> --
> Regards,
> Ian Babrou
> http://bobrik.name
> http://twitter.com/ibobrik
> skype:i.babrou
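[Editor's note] The "deeper fetches get slower" behaviour Mark mentions is specific to from/size pagination, where serving page N forces each shard to collect and discard the first from = N × size hits; a scroll cursor simply resumes where it left off, so each batch costs roughly the same. A toy cost model (pure Python, counting documents a shard must examine per fetch; the exact constants are illustrative, not measurements from the thread):

```python
# Toy cost model: documents a shard must examine to serve each fetch.

def from_size_cost(page, size):
    """from/size paging: page N re-collects everything up to from+size,
    where from = page * size."""
    return (page * size) + size

def scroll_cost(size):
    """scroll: each fetch reads only the next `size` docs from the cursor."""
    return size

size = 1000
print([from_size_cost(p, size) for p in (0, 1, 99)])  # [1000, 2000, 100000]
print([scroll_cost(size) for _ in range(3)])          # [1000, 1000, 1000]
```

This is why a correctly written scroll loop, like the one Ivan linked, shows constant per-fetch time even 60 million documents in, while deep from/size paging degrades linearly with depth.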