Found this in the logs:

[2015-04-22 22:01:25,063][ERROR][river.jdbc.BulkNodeClient] bulk [15] 
failed with 945 failed items, failure message = failure in bulk execution:
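Those failed bulk items would explain the gap. As a rough sanity check (using only the counts quoted in the post below; the per-bulk failure count of 945 is just the one example from this log line), the shortfall per table is:

```python
# Rows the JDBC river reported fetching vs. documents Elasticsearch counts,
# taken from the numbers in the post below.
sql_counts = {"A": 978634, "B": 957327, "C": 312826}
es_counts  = {"A": 934646, "B": 876725, "C": 238534}

# Documents that never made it into the index per table.
missing = {t: sql_counts[t] - es_counts[t] for t in sql_counts}
print(missing)  # {'A': 43988, 'B': 80602, 'C': 74292}
```

If each failing bulk drops on the order of 945 items like the one above, it only takes a few dozen such failures per table to lose this many documents, so it is worth grepping the full log for every `failed with ... failed items` line and the failure messages behind them.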



On Wednesday, April 22, 2015 at 7:53:25 PM UTC-5, GWired wrote:
>
> Hi All,
>
> I've just been informed that I'm off by up to 100k records or so in my 
> JDBC-river-fed index.
>
> I am using the column strategy with createddate and lastmodified date 
> columns.
>
> Kibana is reporting an entirely different count than what I see in the 
> DB.
>
> Table A has 978634 rows in SQL, 934646 shown in Kibana.
> Table B has 957327 rows in SQL, 876725 shown in Kibana.
> Table C has 312826 rows in SQL, 238534 shown in Kibana.
>
> I see in the ES logs:
>
> Table A metrics: 979044 rows
> Table B metrics: 957591 rows
> Table C metrics: 312827 rows
>
> These are the right numbers, or at least closer to right.
>
> But if I run this in Sense:
>
> GET jdbc/mytable/_count?q=*
>
> it returns the same count that Kibana is reporting.
>
> The failing setup is running ES 1.5.1 with Kibana version 3.0.
>
> On another server with ES 1.5.0 and Kibana 3.0 it is working just fine; 
> the counts match up.
>
> Any ideas?
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/5561753d-9553-4bc5-bea2-102b7e030396%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.