Hi all,
I am facing an issue with a long-running Spark job on YARN. If a
bottleneck occurs on HDFS and/or Kafka, the active batch count increases immediately.
I am planning to check the active batch count with a Java client and create
alarms for the operations group.
So, is it possible to
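One way to check the active batch count from outside the job is to poll Spark's monitoring REST API, which exposes streaming statistics (including `numActiveBatches`) for a running application. The sketch below is a minimal example of that idea in Python rather than Java; the host, port, application id, and alarm threshold are placeholder assumptions, not values from the original thread.

```python
# Minimal sketch: poll the Spark streaming statistics endpoint and
# decide whether to raise an alarm for the operations group.
# Host, port, app id, and threshold are placeholder assumptions.
import json
from urllib.request import urlopen

STATS_URL = "http://driver-host:4040/api/v1/applications/{app_id}/streaming/statistics"

def active_batches_exceed(stats, threshold):
    """Return True when the reported active batch count crosses the threshold."""
    return stats.get("numActiveBatches", 0) > threshold

def check(app_id, threshold=10):
    # Fetch the statistics JSON from the driver's UI port and test it.
    with urlopen(STATS_URL.format(app_id=app_id)) as resp:
        stats = json.load(resp)
    return active_batches_exceed(stats, threshold)
```

A cron job or monitoring agent could call `check(...)` periodically and page the operations group when it returns `True`; the same endpoint also reports scheduling delay, which is another useful backlog signal.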
While trying to execute a Python script with PyCharm on Windows, I am
getting this error.
Does anyone have an idea about the error?
Spark version : 2.3.0
py4j.protocol.Py4JJavaError: An error occurred while calling
None.org.apache.spark.api.java.JavaSparkContext.
:
Kafka clients are blocking Spark streaming jobs, and after a while the
streaming job queue grows.
-----Original Message-----
From: Cody Koeninger [mailto:c...@koeninger.org]
Sent: Tuesday, December 26, 2017 6:47 PM
To: Diogo Munaro Vieira <diogo.mun...@corp.globo.com>
Cc: Serkan TAS &
Hi,
I am working on a Spark 2.2.0 cluster with Kafka 1.0 brokers.
I was using the library
"org.apache.spark" % "spark-streaming-kafka-0-10_2.11" % "2.2.0"
and had lots of problems during the streaming process, so I downgraded to
"org.apache.spark" % "spark-streaming-kafka-0-8_2.11" % "2.2.0"
-with-kafka-fail-with-requirement-failed-n
https://forums.databricks.com/questions/11055/how-to-resolve-illegalargumentexception-requiremen.html
From: Prem Sure [mailto:sparksure...@gmail.com]
Sent: Wednesday, November 1, 2017 8:11 PM
To: Serkan TAS <serkan@enerjisa.com>
Cc: user@spark.apac
Hi,
I searched for the error in Kafka, but I think in the end it is related to Spark,
not Kafka.
Has anyone faced an exception terminating the program with the error
"numRecords must not be negative" while streaming?
Thanks in advance.
Regards.
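For what it may be worth, this error is typically Spark's sanity check on the per-partition record count, computed as `untilOffset - fromOffset`. If a checkpointed starting offset is larger than what the broker currently reports (for example, because the topic was deleted and recreated, or retention removed data), the difference goes negative. The sketch below only illustrates that arithmetic; `safe_from_offset` is a hypothetical helper, not a Spark or Kafka API.

```python
# Illustration of how "numRecords must not be negative" can arise:
# the streaming connector sizes a batch as untilOffset - fromOffset
# per partition, so a stale (too-large) starting offset yields a
# negative count.

def record_count(from_offset, until_offset):
    """Records in a batch for one partition, as the connector computes it."""
    return until_offset - from_offset

def safe_from_offset(checkpointed, earliest_available, latest_available):
    """Hypothetical guard: clamp a stale checkpointed offset into the
    broker's currently valid range before sizing the batch."""
    return min(max(checkpointed, earliest_available), latest_available)
```

If the checkpointed offset is 100 but the recreated topic only reaches offset 50, the raw count is -50 and the assertion fires; clamping the starting offset first avoids the negative count, at the cost of re-reading or skipping records.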