Any solution, please?
On Fri, Apr 10, 2020 at 11:04 PM Debabrata Ghosh
wrote:
> Hi,
> I have a Spark Streaming application where Kafka is producing
> records, but unfortunately Spark Streaming isn't able to consume them.
>
> I am hitting the following error:
>
> 20/04/10 17:28:04 ERROR
Yes, the Kafka producer is producing records from the same host. I rechecked
the Kafka connection and it is there. I came across this URL but am unable to
understand it:
https://stackoverflow.com/questions/42264669/spark-streaming-assertion-failed-failed-to-get-records-for-spark-executor-a-gro
Check whether your broker details are correct, and verify that you have network
connectivity from your client box to the Kafka broker host.
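One way to act on that advice is a quick TCP reachability check against the broker from the client box. A minimal stdlib-only sketch (the broker host and port below are placeholders; substitute your own bootstrap servers):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, connection refused, and timeout
        return False

# "kafka-broker-1" and 9092 are placeholders; use your own broker address.
print(can_connect("kafka-broker-1", 9092, timeout=2.0))
```

Run this from the same box the Spark driver and executors run on; a False here points at networking or DNS rather than Spark itself.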
Hi,
I have a Spark Streaming application where Kafka is producing
records, but unfortunately Spark Streaming isn't able to consume them.
I am hitting the following error:
20/04/10 17:28:04 ERROR Executor: Exception in task 0.5 in stage 0.0 (TID 24)
java.lang.AssertionError: assertion
No, there was no internal domain issue. As I mentioned, I saw this issue
only on a few nodes of the cluster.
On Thu, Apr 9, 2020 at 10:49 PM Wei Zhang wrote:
> Are there any internal domain name resolution issues?
>
> > Caused by: java.net.UnknownHostException:
>
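Since the issue appeared only on a few nodes, Wei's hypothesis can be checked per node with a short stdlib-only script (the hostname below is a placeholder; use the host from the UnknownHostException):

```python
import socket

def resolves(hostname: str) -> bool:
    """True if this node can resolve hostname to an IP address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# Placeholder hostname -- run on each worker with the failing host instead.
print(resolves("some-internal-host"))
```

If this returns False on exactly the nodes that fail, the problem is DNS configuration on those nodes rather than Spark.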
Hi all,
I am on Spark 2.4.4 with Scala 2.11.12, running in cluster mode on
Mesos. I am ingesting from an Oracle database using spark.read.jdbc. I am
seeing a strange issue where Spark just hangs and does nothing, not
starting any new tasks. Normally this job finishes in 30 stages but
Hello Yasir,
You need to check your PYTHONPATH environment variable.
On Windows, if I do a "pip install", the package is installed in
"lib\site-packages" under the Python folder. If I "print(sys.path)", I see
"lib\site-packages" as one of the entries, and I can expect "import
" to work.
Peace dear all,
I hope you all are well and healthy...
I am brand new to Spark/Hadoop. My environment is Windows 7 with Jupyter/Anaconda
and Spark/Hadoop, all installed on my laptop. How can I run the following
without errors:
import findspark
findspark.init()
findspark.find()
from pyspark.sql import
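For context on what that snippet is doing: findspark.init() essentially locates your Spark installation (via SPARK_HOME) and puts Spark's Python bindings on sys.path so pyspark becomes importable. A simplified stdlib sketch of that idea, not the real implementation:

```python
import glob
import os
import sys

def init_spark_path(spark_home=None):
    """Simplified sketch of what findspark.init() does: make Spark's Python
    bindings importable. Expects SPARK_HOME to point at a Spark install
    (e.g. C:/spark on Windows)."""
    spark_home = spark_home or os.environ.get("SPARK_HOME")
    if not spark_home:
        raise EnvironmentError("Set SPARK_HOME to your Spark install directory")
    sys.path.insert(0, os.path.join(spark_home, "python"))
    # Spark bundles py4j as a zip under python/lib; add whichever version ships.
    for zipped in glob.glob(os.path.join(spark_home, "python", "lib", "py4j-*.zip")):
        sys.path.insert(0, zipped)
    return spark_home
```

So if findspark.init() fails on your machine, the first thing to verify is that SPARK_HOME is set and points at the unpacked Spark directory; once it succeeds, imports from pyspark should resolve.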