I would guess that your global configuration value for 'es.ip' looks
something like "http://...", which is incorrect.  It should be just the
hostname or IP address, with no protocol specifier.

For example, the default global properties for the Quick Dev environment look
like the following:

{
  "es.clustername": "metron",
  "es.ip": "node1",
  "es.port": "9300",
  "es.date.format": "yyyy.MM.dd.HH"
}
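
If that is the case, one way to fix it (a rough sketch, not verified against
your install; it assumes Metron's default script locations and a ZooKeeper
quorum at node1:2181 -- adjust METRON_HOME and the hosts for your cluster) is
to check what is stored in ZooKeeper, correct "es.ip" locally, and push it
back:

  # assumption: METRON_HOME points at your Metron install directory
  export METRON_HOME=/usr/metron/0.2.1BETA
  export ZK=node1:2181

  # dump the configs currently stored in ZooKeeper to verify the bad value
  $METRON_HOME/bin/zk_load_configs.sh -m DUMP -z $ZK

  # edit $METRON_HOME/config/zookeeper/global.json so "es.ip" is just "node1"
  # (no "http://"), then push the corrected configs back to ZooKeeper
  $METRON_HOME/bin/zk_load_configs.sh -m PUSH -z $ZK -i $METRON_HOME/config/zookeeper

After pushing the change, restart the indexing topology so the workers pick up
the new value.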


On Wed, Nov 9, 2016 at 8:18 AM, Dima Kovalyov <dima.koval...@sstech.us>
wrote:

> Thank you Jon,
>
> I have resolved it by increasing "max user processes" for user storm using:
> # su - storm
> $ ulimit -u 257597
>
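> (For reference, a sketch of making that limit permanent, assuming a
> RHEL/CentOS-style system where pam_limits is applied to the storm user; the
> file name and values below are examples only:)
>
> # /etc/security/limits.d/storm.conf
> # raise the max user processes (nproc) limit for the storm user
> storm    soft    nproc    257597
> storm    hard    nproc    257597
>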
> The topologies are now running without crashes; however, the indexingBolt in
> the Indexing topology now gives me this error:
> [ERROR] Async loop died!
> java.lang.RuntimeException: java.lang.RuntimeException:
> java.net.UnknownHostException: http: unknown error
> ...
> [ERROR] Halting process: ("Worker died")
> java.lang.RuntimeException: ("Worker died")
>
> And this one is a dead end, because there is nothing about it on Google.
> I have attached worker.log.
>
> Data is not appearing in ElasticSearch. Could it be that ElasticSearch
> itself is poorly configured?
>
> Please assist.
> Thank you.
>
> - Dima
>
> On 11/08/2016 11:20 PM, zeo...@gmail.com wrote:
> > Hi Dima,
> >
> > You probably want to increase the -Xmx setting in "worker.childopts", which
> > is available in Ambari under $Server:8080/#/main/services/STORM/configs.
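> >
> > For example (the numbers are illustrative only -- size them to your hosts),
> > the stock Storm default is usually something like:
> >
> >     worker.childopts: "-Xmx768m"
> >
> > and you might raise it to:
> >
> >     worker.childopts: "-Xmx2048m"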
> >
> > Jon
> >
> > On Tue, Nov 8, 2016 at 2:47 PM DimaKovalyov <g...@git.apache.org> wrote:
> >
> >> Github user DimaKovalyov commented on the issue:
> >>
> >>     https://github.com/apache/incubator-metron/pull/318
> >>
> >>     Thank you James,
> >>
> >>     > Once you have data in your kafka queue this should go away.
> >>     That is true! Once I create a topic and stream data through it the
> >> error is gone.
> >>
> >>     My data is now going to enrichment, and all of the bolts and spouts
> >> are hitting this error:
> >>     `java.lang.OutOfMemoryError: unable to create new native thread
> >>         at java.lang.Thread.start0(Native Method)
> >>         at java.lang.Thread.start(Thread.java:714)
> >>         at org.apache.zookeeper.ClientCnxn.start(ClientCnxn.java:417)
> >>         at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:450)
> >>         at ...
> >>         java.lang.Thread.run(Thread.java:745)`
> >>
> >>     And the supervisor also crashes after 5-10 minutes with:
> >>     ```
> >>     2016-11-08 14:25:56.125 o.a.s.event [ERROR] Error when processing event
> >>     java.lang.OutOfMemoryError: unable to create new native thread
> >>             at java.lang.Thread.start0(Native Method)
> >>             at java.lang.Thread.start(Thread.java:714)
> >>             at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
> >>             at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1368)
> >>             at java.lang.UNIXProcess.initStreams(UNIXProcess.java:289)
> >>             at java.lang.UNIXProcess.lambda$new$2(UNIXProcess.java:259)
> >>             at java.security.AccessController.doPrivileged(Native Method)
> >>             at java.lang.UNIXProcess.<init>(UNIXProcess.java:258)
> >>             at java.lang.ProcessImpl.start(ProcessImpl.java:134)
> >>             at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
> >>             at java.lang.Runtime.exec(Runtime.java:620)
> >>             at org.apache.storm.shade.org.apache.commons.exec.launcher.Java13CommandLauncher.exec(Java13CommandLauncher.java:58)
> >>             at org.apache.storm.shade.org.apache.commons.exec.DefaultExecutor.launch(DefaultExecutor.java:254)
> >>             at org.apache.storm.shade.org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:319)
> >>             at org.apache.storm.shade.org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:160)
> >>             at org.apache.storm.shade.org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:147)
> >>             at org.apache.storm.util$exec_command_BANG_.invoke(util.clj:402)
> >>             at org.apache.storm.util$send_signal_to_process.invoke(util.clj:429)
> >>             at org.apache.storm.util$kill_process_with_sig_term.invoke(util.clj:454)
> >>             at org.apache.storm.daemon.supervisor$shutdown_worker.invoke(supervisor.clj:290)
> >>             at org.apache.storm.daemon.supervisor$sync_processes.invoke(supervisor.clj:435)
> >>             at clojure.core$partial$fn__4527.invoke(core.clj:2492)
> >>             at org.apache.storm.event$event_manager$fn__7248.invoke(event.clj:40)
> >>             at clojure.lang.AFn.run(AFn.java:22)
> >>             at java.lang.Thread.run(Thread.java:745)
> >>
> >>     ```
> >>     This happens even though I have more than 30 GB of RAM available. Do I
> >> need to tune Storm for better memory usage?
> >>     Please advise.
> >>
> >>     -  Dima
> >>
> >>
>
>


-- 
Nick Allen <n...@nickallen.org>
