Hi Cedric,
welcome to the StreamPipes community 😊. So, do I understand the problem correctly: your pipeline has a data source (ROS), then the JavaScript Evaluator (processor), and a sink (data lake / dashboard)? And the data does not show up in the sink (either data lake or dashboard) when you start the pipeline for the first time, but after restarting it several times it works? Did you edit the pipeline, or only click the stop and start buttons? Does the data from the ROS adapter arrive when you connect the adapter directly to a sink?

Philipp

From: Cedric Kulbach <[email protected]>
Reply to: <[email protected]>
Date: Monday, 13 December 2021 at 11:32
To: "[email protected]" <[email protected]>
Subject: ROS Adapter & JS Eval

Hi all,

I am currently trying to build a ROS pipeline using the JS processor and then visualise the data in the Dashboard or Data Explorer. Unfortunately, I have to restart the pipeline several times before the data runs through it. Below you will find the error message. Do you have any idea where the problem is?

Best
Cedric

2] INFO o.a.s.m.kafka.SpKafkaConsumer - Kafka consumer: Connecting to org.apache.streampipes.LNgwJtnfpVkyERlHDUny
[engine] WARNING: The polyglot context is using an implementation that does not support runtime compilation.
The guest application code will therefore be executed in interpreted mode only.
Execution only in interpreted mode will strongly impact the guest application performance.
For more information on using GraalVM see https://www.graalvm.org/java/quickstart/.
To disable this warning use the '--engine.WarnInterpreterOnly=false' option or the '-Dpolyglot.engine.WarnInterpreterOnly=false' system property.
Exception in thread "Thread-19" java.lang.NullPointerException
    at org.apache.streampipes.messaging.kafka.SpKafkaProducer.publish(SpKafkaProducer.java:85)
    at org.apache.streampipes.wrapper.standalone.routing.StandaloneSpOutputCollector.collect(StandaloneSpOutputCollector.java:47)
    at org.apache.streampipes.processors.enricher.jvm.processor.jseval.JSEval.onEvent(JSEval.java:56)
    at org.apache.streampipes.wrapper.standalone.runtime.StandaloneEventProcessorRuntime.process(StandaloneEventProcessorRuntime.java:69)
    at org.apache.streampipes.wrapper.standalone.routing.StandaloneSpInputCollector.send(StandaloneSpInputCollector.java:53)
    at org.apache.streampipes.wrapper.standalone.routing.StandaloneSpInputCollector.lambda$onEvent$0(StandaloneSpInputCollector.java:47)
    at org.apache.streampipes.wrapper.standalone.routing.StandaloneSpInputCollector$$Lambda$942/0x00000000941a8950.accept(Unknown Source)
    at java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1597)
    at org.apache.streampipes.wrapper.standalone.routing.StandaloneSpInputCollector.onEvent(StandaloneSpInputCollector.java:47)
    at org.apache.streampipes.wrapper.standalone.routing.StandaloneSpInputCollector.onEvent(StandaloneSpInputCollector.java:29)
    at org.apache.streampipes.messaging.kafka.SpKafkaConsumer.lambda$run$0(SpKafkaConsumer.java:115)
    at org.apache.streampipes.messaging.kafka.SpKafkaConsumer$$Lambda$788/0x00000000940d70c0.accept(Unknown Source)
    at java.lang.Iterable.forEach(Iterable.java:75)
    at org.apache.streampipes.messaging.kafka.SpKafkaConsumer.run(SpKafkaConsumer.java:114)
    at java.lang.Thread.run(Thread.java:826)
[To redirect Truffle log output to a file use one of the following options:
* '--log.file=<path>' if the option is passed using a guest language launcher.
* '-Dpolyglot.log.file=<path>' if the option is passed using the host Java launcher.
* Configure logging using the polyglot embedding API.]
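
For illustration only: one possible reading of the trace is that the output collector hands an event to SpKafkaProducer.publish before the underlying Kafka producer has finished initialising, so the producer field is still null when the first events arrive. The sketch below is not StreamPipes code; NaiveProducer, connect(), and publish() are made-up names used only to reproduce a NullPointerException of the same shape under that assumed race.

import java.nio.charset.StandardCharsets;

public class ProducerRaceSketch {

    /** Minimal stand-in for a producer wrapper; NOT the real SpKafkaProducer. */
    static class NaiveProducer {
        private volatile Object producer; // stays null until connect() has run

        void connect() {
            // In a real runtime this would happen asynchronously at pipeline start.
            producer = new Object(); // placeholder for an actual Kafka producer instance
        }

        void publish(String event) {
            // If connect() has not finished yet, 'producer' is still null and this
            // dereference fails, similar to the NullPointerException in the trace above.
            byte[] payload = event.getBytes(StandardCharsets.UTF_8);
            System.out.println("sending " + payload.length + " bytes via " + producer.toString());
        }
    }

    public static void main(String[] args) throws Exception {
        NaiveProducer out = new NaiveProducer();

        // An event thread delivers an event before the producer is connected:
        // this thread dies with an uncaught NullPointerException.
        Thread eventThread = new Thread(() -> out.publish("{\"sensorId\":\"ros\"}"), "event-thread");
        eventThread.start();
        eventThread.join();

        // Once the producer is connected (e.g. after a restart), publishing succeeds.
        out.connect();
        out.publish("{\"sensorId\":\"ros\"}");
    }
}

Again, this is only a speculative reproduction of the symptom, not an analysis of the actual StreamPipes internals.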
