…version to 2.3.7.
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:345)
On Thu, Jul 23, 2020 at 6:22 PM yunpeng jia wrote:
> refer to this
> https://docs.databricks.com/data/metastores/external-hive-metastore.html
>
> Ashika Umanga wrote on Wednesday, July 22, 2020 at 3:27 PM:
Greetings,
Our standalone Spark 3 cluster is trying to connect to a Hadoop 2.6 cluster
running Hive Server 1.2
(/usr/hdp/2.6.2.0-205/hive/lib/hive-service-1.2.1000.2.6.2.0-205.jar)
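For reference, a sketch of the spark-defaults.conf entries typically involved when pointing Spark at an external Hive 1.2 metastore — the jar path reuses the HDP location above, and the metastore host is a placeholder, not our actual value:

```properties
# Tell Spark's Hive client which metastore version to speak
spark.sql.hive.metastore.version  1.2.1
# Load the matching Hive client jars from the HDP install
spark.sql.hive.metastore.jars     /usr/hdp/2.6.2.0-205/hive/lib/*
# Thrift URI of the external metastore (placeholder host)
spark.hadoop.hive.metastore.uris  thrift://<metastore-host>:9083
```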
import org.apache.spark.sql.functions._
import java.sql.Timestamp

// NOTE: the rest of this snippet was cut off in the original message;
// the row and column names below are illustrative only.
val df1 = spark.createDataFrame(
  Seq(
    (1, Timestamp.valueOf("2020-07-22 15:27:00"))
  )
).toDF("id", "ts")
> …to Hadoop 2.6, which should work fine. In fact,
> we have had a production deployment working this way for a while.
>
> On Sun, Jul 19, 2020 at 8:10 PM Ashika Umanga
> wrote:
> >
> > Greetings,
> >
> > Hadoop 2.6 has been removed according to this ticket
> > https://issues.apache.org/jira/browse/SPARK-25016
Greetings,
Hadoop 2.6 has been removed according to this ticket
https://issues.apache.org/jira/browse/SPARK-25016
We run our Spark cluster on K8s in standalone mode.
We access HDFS/Hive running on a Hadoop 2.6 cluster.
We've been using Spark 2.4.5 and are planning to upgrade to Spark 3.0.0.
However, …
Further description:
Environment: a Spark cluster running in standalone mode with 1 master and 5
slaves, each with 4 vCPUs and 8 GB RAM.
Data is being streamed from a 3-node Kafka cluster (managed by a 3-node
ZooKeeper cluster).
Checkpointing is being done on the Hadoop cluster, and we are also saving
state in HBase.
I am getting the following warning while running a stateful computation. The
state consists of a BloomFilter (stream-lib) as the value and an Integer as
the key.
The program runs smoothly for a few minutes; after that I get this warning
and the streaming app becomes unstable (processing time increases
exponentially).
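To illustrate the shape of that per-key state (Integer key, Bloom filter value), here is a minimal hand-rolled stand-in for stream-lib's BloomFilter — the class name, sizing, and double-hashing scheme below are assumptions for the sketch, not stream-lib's actual implementation:

```scala
import scala.collection.mutable
import scala.util.hashing.MurmurHash3

// Hand-rolled stand-in for stream-lib's BloomFilter, used here only to
// illustrate the per-key state kept in the stateful computation.
class SimpleBloomFilter(numBits: Int, numHashes: Int) extends Serializable {
  private val bits = new mutable.BitSet(numBits)

  // Double hashing: derive numHashes bit indices from two murmur hashes.
  private def indices(item: String): Seq[Int] = {
    val h1 = MurmurHash3.stringHash(item, 0)
    val h2 = MurmurHash3.stringHash(item, h1)
    (0 until numHashes).map(i => math.abs((h1 + i * h2) % numBits))
  }

  def add(item: String): Unit = indices(item).foreach(bits += _)

  // May return false positives, never false negatives.
  def mightContain(item: String): Boolean = indices(item).forall(bits)
}

// Per-key state as kept in the stateful computation: Integer key -> filter.
val state = mutable.Map.empty[Int, SimpleBloomFilter]
val bf = state.getOrElseUpdate(42, new SimpleBloomFilter(1 << 16, 3))
bf.add("event-123")
```

With updateStateByKey-style operators the filter object itself is the state value, so its serialized size feeds directly into checkpoint cost — one possible contributor to growing batch processing times.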
Since the functionality is all about data enrichment, if the code had been in
Scala I could have used it with the rdd.map() function. So, is there any way
to use existing non-Scala code inside map with Spark Streaming in Scala, like
Storm's ShellBolt and IRichBolt?
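One Spark analogue of Storm's ShellBolt is RDD.pipe, which streams a partition's records through an external command's stdin/stdout. The stand-alone sketch below demonstrates that same line-in/line-out contract using only scala.sys.process — it assumes a Unix `cat` on the PATH as a stand-in for a real enrichment script (e.g. `rdd.pipe("python enrich.py")`, where `enrich.py` is a hypothetical script name):

```scala
import java.io.ByteArrayInputStream
import scala.sys.process._

// Records are written to the external process's stdin, one per line,
// and its stdout lines are read back -- the contract RDD.pipe uses.
val records = Seq("alice", "bob")
val inBytes = new ByteArrayInputStream(
  records.mkString("", "\n", "\n").getBytes("UTF-8"))

// "cat" stands in for an enrichment script written in any language.
val stdout: String = ("cat" #< inBytes).!!
val enriched: Seq[String] = stdout.split("\n").toSeq
```

In a streaming job the same call goes inside transform, so any program that reads stdin and writes stdout — Python, C, whatever the existing code is written in — can do the per-record enrichment.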
umanga,
kathmandu