Are there any internal domain-name resolution issues?
> Caused by: java.net.UnknownHostException:
> spark-1586333186571-driver-svc.fractal-segmentation.svc
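One quick way to check, assuming the service name from the stack trace (a diagnostic sketch, not a definitive fix):

```shell
# Run a throwaway pod and try to resolve the headless driver service
# (service name copied from the stack trace above)
kubectl run dns-test --rm -it --restart=Never --image=busybox -- \
  nslookup spark-1586333186571-driver-svc.fractal-segmentation.svc
```

If the lookup fails from inside the cluster, the problem is likely kube-dns/CoreDNS rather than Spark itself.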
From: Prudhvi Chennuru (CONT)
Sent: Friday, April 10, 2020 2:44
To: user
Subject: Driver pods
Hi,
We are running Spark batch jobs on K8s.
Kubernetes version: 1.11.5
Spark version: 2.3.2
Docker version: 19.3.8

Issue: a few driver pods are stuck in the running state indefinitely with the
error:
```
Initial job has not accepted any resources; check your cluster UI to ensure
that workers are registered and have sufficient resources
```
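That warning usually means the executor pods never registered with the driver, often because the cluster cannot satisfy the requested resources. A minimal sketch of the relevant spark-submit settings for Spark on K8s (the API server address, image name, and main class are placeholders, not values from this thread):

```shell
# Placeholders: <k8s-apiserver> and <your-spark-image> must be filled in.
# Reducing spark.executor.instances (or executor memory/cores) can help
# confirm whether the cluster simply lacks capacity to schedule executors.
spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --deploy-mode cluster \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.namespace=fractal-segmentation \
  --conf spark.kubernetes.container.image=<your-spark-image> \
  --class com.example.Main \
  local:///opt/spark/jars/app.jar
```

Checking `kubectl get pods` and `kubectl describe pod` for pending executor pods will show whether they are stuck unschedulable.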
Sorry for the late reply.
I can help you get started with
https://github.com/qubole/spark-acid for reading Hive ACID tables. Feel free
to drop me a mail or raise an issue here:
https://github.com/qubole/spark-acid/issues
Regards,
Amogh
On Tue, Mar 10, 2020 at 4:20 AM Chetan Khatri wrote:
You can take a look at the code that Spark generates:

```
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.execution.debug._

val spark: SparkSession = SparkSession.builder()
  .appName("codegen-demo")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

val data = Seq("A", "b", "c").toDF("col")
// Print the whole-stage-codegen Java source for this plan
data.queryExecution.debug.codegen()
```
Hi all,
I'm using ML Pipeline to construct a flow of transformations. I'm wondering
whether it is possible to set multiple DataFrames as the input of a
transformer. For example, I need to join two DataFrames together in a
transformer, then feed the result into the estimator for training. If not,
is there any plan to support this?
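The Pipeline API's `Transformer.transform` takes a single Dataset, so a common workaround is to capture the second DataFrame when constructing a custom Transformer and perform the join inside `transform`. A minimal sketch, assuming this approach fits your pipeline (the class name `JoinTransformer` and its constructor parameters are hypothetical, not a Spark API):

```scala
import org.apache.spark.ml.Transformer
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.types.StructType

// Hypothetical transformer that joins a captured second DataFrame
// on a shared key column before the data reaches the estimator.
class JoinTransformer(other: DataFrame, joinCol: String)
    extends Transformer {

  override val uid: String = Identifiable.randomUID("joinTransformer")

  override def transform(ds: Dataset[_]): DataFrame =
    ds.toDF().join(other, joinCol)

  override def transformSchema(schema: StructType): StructType =
    // Input columns plus the other DataFrame's non-key columns
    StructType(schema.fields ++
      other.schema.fields.filterNot(_.name == joinCol))

  override def copy(extra: ParamMap): JoinTransformer =
    new JoinTransformer(other, joinCol)
}
```

The transformer can then be placed as a stage ahead of the estimator in the Pipeline; only the primary DataFrame flows through `Pipeline.fit`, while the second one travels inside the stage.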