Thanks Pawel,

Not huge, but larger than the limit. Working on a fix.
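
The rough direction (a sketch only, not the final patch; the class name below is 
made up and the method names are my reading of the ES 1.x BytesReference API) is 
to stop assuming a single backing array and stream the paged reference instead:

    import java.io.IOException;
    import java.io.OutputStream;
    import org.elasticsearch.common.bytes.BytesReference;

    // Hypothetical helper: write a (possibly paged) reference without calling array().
    final class PagedSafeWriter {
        static void write(BytesReference bytes, OutputStream out) throws IOException {
            if (bytes.hasArray()) {
                // small payload backed by a single array - use it directly
                out.write(bytes.array(), bytes.arrayOffset(), bytes.length());
            } else {
                // paged (over ~16KB) - let the reference copy itself to the stream
                bytes.writeTo(out);
            }
        }
    }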


On Friday, June 6, 2014 10:10:45 AM UTC+2, Paweł Krzaczkowski wrote:
>
> This one is without metadata
>
> http://pastebin.com/tmJGA5Kq
>
> http://xxx:9200/_cluster/state/version,master_node,nodes,routing_table,blocks/?human&pretty
>
> Pawel
>
> On Friday, June 6, 2014 9:28:30 AM UTC+2, Boaz Leskes wrote:
>>
>> Hi Pawel,
>>
>> I see - your cluster state (nodes + routing only, not metadata) seems 
>> to be larger than 16KB when rendered to SMILE, which is quite big - does 
>> this make sense?
>>
>> Above 16KB, an underlying paging system introduced in the ES 1.x branch 
>> kicks in, and that breaks something in Marvel, which normally ships very 
>> small documents.
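>>
>> Concretely (a sketch, not the exact Marvel code - the helper below is 
>> hypothetical, the exception is the one from your log), the exporter ends 
>> up doing the equivalent of:
>>
>>     // fails for a PagedBytesReference once the content spans more than one page
>>     static byte[] toByteArray(org.elasticsearch.common.bytes.BytesReference bytes) {
>>         return bytes.array(); // throws IllegalStateException: array not available
>>     }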
>>
>> I'll work on a fix. Can you confirm your cluster state (again, without 
>> the metadata) is indeed very large?
>>
>> Cheers,
>> Boaz
>>
>> On Thursday, June 5, 2014 10:56:00 AM UTC+2, Paweł Krzaczkowski wrote:
>>>
>>> Hi.
>>>
>>> After upgrading Marvel to 1.2.0 (running on Elasticsearch 1.2.1) I'm 
>>> getting errors like:
>>>
>>> [2014-06-05 10:47:25,346][INFO ][node                     ] [es-m-3] 
>>> version[1.2.1], pid[68924], build[6c95b75/2014-06-03T15:02:52Z]
>>> [2014-06-05 10:47:25,347][INFO ][node                     ] [es-m-3] 
>>> initializing ...
>>> [2014-06-05 10:47:25,367][INFO ][plugins                  ] [es-m-3] 
>>> loaded [marvel, analysis-icu], sites [marvel, head, segmentspy, browser, 
>>> paramedic]
>>> [2014-06-05 10:47:28,455][INFO ][node                     ] [es-m-3] 
>>> initialized
>>> [2014-06-05 10:47:28,456][INFO ][node                     ] [es-m-3] 
>>> starting ...
>>> [2014-06-05 10:47:28,597][INFO ][transport                ] [es-m-3] 
>>> bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/
>>> 192.168.0.212:9300]}
>>> [2014-06-05 10:47:42,340][INFO ][cluster.service          ] [es-m-3] 
>>> new_master [es-m-3][0H3grrJxTJunU1U6FmkIEg][es-m-3][inet[
>>> 192.168.0.212/192.168.0.212:9300]]{data=false, 
>>> master=true}, reason: zen-disco-join (elected_as_master)
>>> [2014-06-05 10:47:42,350][INFO ][discovery                ] [es-m-3] 
>>> freshmind/0H3grrJxTJunU1U6FmkIEg
>>> [2014-06-05 10:47:42,365][INFO ][http                     ] [es-m-3] 
>>> bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/
>>> 192.168.0.212:9200]}
>>> [2014-06-05 10:47:42,368][INFO ][node                     ] [es-m-3] 
>>> started
>>> [2014-06-05 10:47:44,098][INFO ][cluster.service          ] [es-m-3] 
>>> added 
>>> {[es-m-1][MHl5Ls-cRXCwc7OC-P0J5w][es-m-1][inet[/192.168.0.210:9300]]{data=false,
>>>  
>>> machine=44454c4c-5300-1052-8038-b9c04f5a5a31, master=true},}, reason: 
>>> zen-disco-receive(join from 
>>> node[[es-m-1][MHl5Ls-cRXCwc7OC-P0J5w][es-m-1][inet[/192.168.0.210:9300]]{data=false,
>>>  
>>> machine=44454c4c-5300-1052-8038-b9c04f5a5a31, master=true}])
>>> [2014-06-05 10:47:44,401][INFO ][gateway                  ] [es-m-3] 
>>> recovered [28] indices into cluster_state
>>> [2014-06-05 10:47:48,683][ERROR][marvel.agent             ] [es-m-3] 
>>> exporter [es_exporter] has thrown an exception:
>>> java.lang.IllegalStateException: array not available
>>>         at 
>>> org.elasticsearch.common.bytes.PagedBytesReference.array(PagedBytesReference.java:289)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.addXContentRendererToConnection(ESExporter.java:209)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.exportXContent(ESExporter.java:252)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.exportEvents(ESExporter.java:161)
>>>         at 
>>> org.elasticsearch.marvel.agent.AgentService$ExportingWorker.exportEvents(AgentService.java:305)
>>>         at 
>>> org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:240)
>>>         at java.lang.Thread.run(Thread.java:745)
>>> [2014-06-05 10:47:58,738][ERROR][marvel.agent             ] [es-m-3] 
>>> exporter [es_exporter] has thrown an exception:
>>> java.lang.IllegalStateException: array not available
>>>         at 
>>> org.elasticsearch.common.bytes.PagedBytesReference.array(PagedBytesReference.java:289)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.addXContentRendererToConnection(ESExporter.java:209)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.exportXContent(ESExporter.java:252)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.exportEvents(ESExporter.java:161)
>>>         at 
>>> org.elasticsearch.marvel.agent.AgentService$ExportingWorker.exportEvents(AgentService.java:305)
>>>         at 
>>> org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:240)
>>>         at java.lang.Thread.run(Thread.java:745)
>>> [2014-06-05 10:48:36,572][INFO ][cluster.service          ] [es-m-3] 
>>> added 
>>> {[es-m-2][e5uEqGRhS7uEioNxaYkwTg][es-m-2][inet[/192.168.0.211:9300]]{data=false,
>>>  
>>> master=true},}, reason: zen-disco-receive(join from 
>>> node[[es-m-2][e5uEqGRhS7uEioNxaYkwTg][es-m-2][inet[/192.168.0.211:9300]]{data=false,
>>>  
>>> master=true}])
>>> [2014-06-05 10:48:38,859][ERROR][marvel.agent             ] [es-m-3] 
>>> exporter [es_exporter] has thrown an exception:
>>> java.lang.IllegalStateException: array not available
>>>         at 
>>> org.elasticsearch.common.bytes.PagedBytesReference.array(PagedBytesReference.java:289)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.addXContentRendererToConnection(ESExporter.java:209)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.exportXContent(ESExporter.java:252)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.exportEvents(ESExporter.java:161)
>>>         at 
>>> org.elasticsearch.marvel.agent.AgentService$ExportingWorker.exportEvents(AgentService.java:305)
>>>         at 
>>> org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:240)
>>>         at java.lang.Thread.run(Thread.java:745)
>>> [2014-06-05 10:49:47,283][INFO ][cluster.service          ] [es-m-3] 
>>> added 
>>> {[es-d-1][9T5-BOVCSveOyh5dV9YvDg][es-d-1.localhost][inet[/192.168.0.213:9300]]{machine=44454c4c-5300-1052-8038-b9c04f5a5a31,
>>>  
>>> master=false},}, reason: zen-disco-receive(join from 
>>> node[[es-d-1][9T5-BOVCSveOyh5dV9YvDg][es-d-1.localhost][inet[/192.168.0.213:9300]]{machine=44454c4c-5300-1052-8038-b9c04f5a5a31,
>>>  
>>> master=false}])
>>> [2014-06-05 10:49:47,350][INFO ][cluster.service          ] [es-m-3] 
>>> added 
>>> {[es-d-2][ddoIhezQSjuGYWPmSWltIg][es-d-2][inet[/192.168.0.200:9300]]{machine=44454c4c-5200-1038-8030-c2c04f365931,
>>>  
>>> master=false},}, reason: zen-disco-receive(join from 
>>> node[[es-d-2][ddoIhezQSjuGYWPmSWltIg][es-d-2][inet[/192.168.0.200:9300]]{machine=44454c4c-5200-1038-8030-c2c04f365931,
>>>  
>>> master=false}])
>>> [2014-06-05 10:49:49,055][ERROR][marvel.agent             ] [es-m-3] 
>>> exporter [es_exporter] has thrown an exception:
>>> java.lang.IllegalStateException: array not available
>>>         at 
>>> org.elasticsearch.common.bytes.PagedBytesReference.array(PagedBytesReference.java:289)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.addXContentRendererToConnection(ESExporter.java:209)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.exportXContent(ESExporter.java:252)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.exportEvents(ESExporter.java:161)
>>>         at 
>>> org.elasticsearch.marvel.agent.AgentService$ExportingWorker.exportEvents(AgentService.java:305)
>>>         at 
>>> org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:240)
>>>         at java.lang.Thread.run(Thread.java:745)
>>> [2014-06-05 10:49:59,090][ERROR][marvel.agent             ] [es-m-3] 
>>> exporter [es_exporter] has thrown an exception:
>>> java.lang.IllegalStateException: array not available
>>>         at 
>>> org.elasticsearch.common.bytes.PagedBytesReference.array(PagedBytesReference.java:289)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.addXContentRendererToConnection(ESExporter.java:209)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.exportXContent(ESExporter.java:252)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.exportEvents(ESExporter.java:161)
>>>         at 
>>> org.elasticsearch.marvel.agent.AgentService$ExportingWorker.exportEvents(AgentService.java:305)
>>>         at 
>>> org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:240)
>>>         at java.lang.Thread.run(Thread.java:745)
>>> [2014-06-05 10:50:09,267][ERROR][marvel.agent             ] [es-m-3] 
>>> exporter [es_exporter] has thrown an exception:
>>> java.lang.IllegalStateException: array not available
>>>         at 
>>> org.elasticsearch.common.bytes.PagedBytesReference.array(PagedBytesReference.java:289)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.addXContentRendererToConnection(ESExporter.java:209)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.exportXContent(ESExporter.java:252)
>>>         at 
>>> org.elasticsearch.marvel.agent.exporter.ESExporter.exportEvents(ESExporter.java:161)
>>>         at 
>>> org.elasticsearch.marvel.agent.AgentService$ExportingWorker.exportEvents(AgentService.java:305)
>>>         at 
>>> org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:240)
>>>         at java.lang.Thread.run(Thread.java:745)
>>> [root@es-m-3 ~]# 
>>>
>>> Marvel is running on a separate cluster.
>>> The cluster state is not being transmitted to Marvel. The error occurs 
>>> every time the cluster state changes (node restart, shard allocation, etc.).
>>>
>>> Environment on the master node:
>>> [root@es-m-3 ~]# env
>>>
>>> MANPATH=/opt/local/man:/usr/share/man:/opt/local/gcc47/man:/opt/local/java/sun6/man:/opt/local/lib/perl5/man:/opt/local/lib/perl5/vendor_perl/man
>>> HZ=100
>>> SHELL=/usr/bin/bash
>>> TERM=xterm
>>> ES_HEAP_SIZE=512M
>>> ES_JAVA_OPTS=-d64 -server -Des.processors=1 -Des.node.name=es-m-3 
>>> -Des.node.machine=44454c4c-5200-1038-8030-c2c04f365931
>>> COLUMNS=238
>>> PAGER=less
>>> MAIL=/var/mail/root
>>>
>>> PATH=/usr/local/sbin:/usr/local/bin:/opt/local/sbin:/opt/local/bin:/usr/sbin:/usr/bin:/sbin
>>> PWD=/root
>>> JAVA_HOME=/opt/jdk1.7.0_55/
>>> LINES=74
>>> SHLVL=1
>>> HOME=/root
>>> TERMINFO=/opt/local/share/lib/terminfo
>>> LOGNAME=root
>>> FTPMODE=auto
>>> _=/opt/local/bin/env
>>>
>>> [root@es-m-3 ~]# /opt/jdk1.7.0_55/bin/java -d64 -version
>>> java version "1.7.0_55"
>>> Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
>>> Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
>>>
>>>
>>> Marvel 1.1.1 worked without any problems.
>>>
>>> Pawel
>>>
>>>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/a19525e2-273f-4b1e-a950-f93362e76e1a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Reply via email to