Hi Team,
I am using Spark 2.2, so can I use Kafka version 2.5 in my Spark Streaming
application?
Thanks & Regards,
Renu Yadav
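For context, a minimal build.sbt sketch (assuming an sbt build and Scala 2.11, the artifact names being those Spark 2.2 was released with). The 0-10 integration pulls in kafka-clients 0.10.x, which can usually still talk to newer brokers because the Kafka wire protocol is backward compatible; forcing kafka-clients 2.5 under Spark 2.2, however, is untested territory:

```scala
// build.sbt sketch (assumptions: sbt build, Scala 2.11, Spark 2.2 artifacts).
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming" % "2.2.0" % "provided",
  // Ships with and was tested against kafka-clients 0.10.x; newer brokers
  // are reachable via the backward-compatible Kafka wire protocol.
  "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.2.0"
)
```

So the safer reading is: keep the client the integration was built against and let it talk to the newer broker, rather than swapping in the 2.5 client jar.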
> you need to refresh at the application level?
>
> HTH
>
> On Fri, 7 May 2021 at 11:34, Renu Yadav wrote:
>
>> Hi Team,
>>
>> Is it possible to override a variable from spark-env.sh at the
>> application level?
>>
>> Thanks & Regards
Hi Team,
Is it possible to override a variable from spark-env.sh at the application
level?
Thanks & Regards,
Renu Yadav
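Per-application overrides are usually done through Spark configuration rather than by editing spark-env.sh itself. A hedged sketch, assuming YARN (the property names are standard Spark settings; MY_VAR is a placeholder):

```scala
// Sketch: overriding an environment variable for one application on YARN.
// spark.yarn.appMasterEnv.* affects the AM/driver in cluster mode;
// spark.executorEnv.* affects the executors. MY_VAR is a placeholder name.
val conf = new org.apache.spark.SparkConf()
  .set("spark.yarn.appMasterEnv.MY_VAR", "app-specific-value")
  .set("spark.executorEnv.MY_VAR", "app-specific-value")
```

The same settings can be passed on the command line with `spark-submit --conf spark.executorEnv.MY_VAR=...`, which wins over whatever spark-env.sh exported for that variable on the executor side.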
at
org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:85)
Thanks & Regards,
Renu Yadav
> It's not yet happened, but it's on the road, so I strongly recommend
> migrating to Structured Streaming...
> We simply can't support two streaming engines for a huge amount of time.
>
> G
>
>
> On Fri, Mar 12, 2021 at 3:02 PM Renu Yadav wrote:
>
>> Hi Gabor,
Hi Gabor,
It seems like it is better to upgrade my Spark version.
Are there major changes in terms of streaming from Spark 2.2 to Spark 2.4?
PS: I am using the KafkaUtils API to create the stream.
Thanks & Regards,
Renu Yadav
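For what it's worth, the 0-10 direct stream API referred to above is essentially the same in Spark 2.2 and 2.4, so application code like the following sketch (broker address, group id, and topic are placeholders; `ssc` is an existing StreamingContext) should carry over with little change:

```scala
// Sketch of the spark-streaming-kafka-0-10 direct stream API, which is
// essentially unchanged between Spark 2.2 and 2.4.
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010._

val ssc: org.apache.spark.streaming.StreamingContext = ??? // existing context
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "broker:9092",          // placeholder address
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "my-group"                       // placeholder group id
)
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](Seq("my-topic"), kafkaParams)
)
```

The bigger 2.2-to-2.4 differences are on the Structured Streaming side; the DStream/KafkaUtils path was already in maintenance mode by then.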
Thanks Gabor,
This is very useful.
Regards,
Renu Yadav
On Fri, Mar 12, 2021 at 5:36 PM Gabor Somogyi
wrote:
> Kafka client upgrade is not a trivial change which may or may not work
> since new versions can contain incompatible API and/or behavior changes.
> I've collected how Spar
Hi Team,
I am using Spark 2.2 and spark-streaming-kafka 2.2, which points to
kafka-clients 0.10. How can I upgrade the Kafka client to kafka 2.2.0?
Thanks & Regards,
Renu Yadav
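Mechanically this is a dependency override at build time; a build.sbt sketch, assuming an sbt build (with Maven the equivalent is a `<dependencyManagement>` pin). As the reply above warns, this is possible but unsupported territory:

```scala
// build.sbt sketch (assumption: sbt build). This forces a newer
// kafka-clients under spark-streaming-kafka-0-10, overriding the 0.10.x
// version the integration was tested against. API/behavior changes in the
// newer client may or may not break the integration at runtime.
dependencyOverrides += "org.apache.kafka" % "kafka-clients" % "2.2.0"
```

Verify with `sbt dependencyTree` (or `mvn dependency:tree`) that only one kafka-clients version ends up on the classpath, and test the job end to end before trusting it.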
Has anybody implemented bulk load into HBase using Spark?
I need help optimizing its performance.
Please help.
Thanks & Regards,
Renu Yadav
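One commonly used shape for this, sketched below with heavy assumptions (class names are from the standard HBase client/server artifacts; `rdd`, the column family `cf`, the qualifier `col`, the output path, and the table name are all placeholders): write HFiles directly and hand them to HBase, instead of issuing millions of Puts. Sorting by row key is mandatory, since HFiles must be written in key order.

```scala
// Hedged sketch of an HBase bulk load from Spark; not a drop-in solution.
import org.apache.hadoop.hbase.{HBaseConfiguration, KeyValue}
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.rdd.RDD

val rdd: RDD[(String, String)] = ???          // placeholder: (rowKey, value)
val hbaseConf = HBaseConfiguration.create()

val kvs = rdd
  .map { case (rowKey, value) =>
    val kv = new KeyValue(Bytes.toBytes(rowKey), Bytes.toBytes("cf"),
      Bytes.toBytes("col"), Bytes.toBytes(value))
    (new ImmutableBytesWritable(Bytes.toBytes(rowKey)), kv)
  }
  .sortByKey()                                // HFiles require sorted keys

kvs.saveAsNewAPIHadoopFile("/tmp/hfiles",     // placeholder staging path
  classOf[ImmutableBytesWritable], classOf[KeyValue],
  classOf[HFileOutputFormat2], hbaseConf)
// Then register the HFiles with the region servers, e.g. via the
// LoadIncrementalHFiles / completebulkload tool, pointing at /tmp/hfiles.
```

For performance, the usual levers are pre-splitting the target table so regions match the key distribution, and making sure the sort produces one HFile per region rather than many tiny ones.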
Any suggestions?
On Wed, Jan 20, 2016 at 6:50 PM, Renu Yadav <yren...@gmail.com> wrote:
> Hi ,
>
> I am facing a Spark task scheduling delay issue in Spark 1.4.
>
> Suppose I have 1600 tasks running; 1550 tasks run fine, but for the
> remaining 50 I am facing task
spark job in high priority queue.
Please suggest some solution.
Thanks & Regards,
Renu Yadav
Hi,
I am using a DataFrame and want to load an ORC file from multiple
directories, like this:
hiveContext.read.format.load("mypath/3660,myPath/3661")
but it is not working.
Please suggest how to achieve this.
Thanks & Regards,
Renu Yadav
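Two things stand out in the attempt above: `format` needs the source name as an argument, and `load` does not split a comma-joined string into separate paths. A sketch of both variants (the paths are the placeholders from the question; which variant applies depends on the Spark version):

```scala
// Newer Spark: load accepts multiple paths as separate varargs arguments.
val df = hiveContext.read.format("orc").load("mypath/3660", "myPath/3661")

// Older Spark (where load takes a single path): read each directory and union.
val df2 = hiveContext.read.format("orc").load("mypath/3660")
  .unionAll(hiveContext.read.format("orc").load("myPath/3661"))
```

Note the two directory names differ in case (`mypath` vs `myPath`) exactly as written in the question; on a case-sensitive filesystem that distinction matters.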
l 2500, but starts delay scheduling after
that, which results in slow performance.
*If anyone has any idea on this, please do reply, as I need this urgently.*
Thanks in advance
Regards,
Renu Yadav
Thanks & Regards,
Renu Yadav
What are the parameters on which locality depends?
On Sun, Nov 15, 2015 at 5:54 PM, Renu Yadav <yren...@gmail.com> wrote:
> Hi,
>
> I am working on Spark 1.4, reading an ORC table using a DataFrame and
> converting that DF to an RDD.
>
> In the Spark UI I observe that 50% of task
I have tried with G1 GC. Please, if anyone can provide their settings for GC.
At the code level I am:
1. reading an ORC table using a DataFrame
2. mapping the DF to an RDD of my case class
3. converting that RDD to a paired RDD
4. applying combineByKey
5. saving the result to an ORC file
Please suggest.
Regards,
Renu Yadav
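Step 4 of the pipeline above can be sketched as follows, with placeholder types (a per-key sum-and-count, a typical combineByKey use; the real combiner functions would come from the case class in step 2). combineByKey pre-aggregates on the map side, so it shuffles far fewer objects than grouping raw values, which also helps the GC problem described here:

```scala
// Sketch of step 4 (combineByKey) with placeholder key/value types.
import org.apache.spark.rdd.RDD

val pairs: RDD[(String, Long)] = ???                 // paired RDD from step 3
val sumAndCount: RDD[(String, (Long, Long))] = pairs.combineByKey(
  (v: Long) => (v, 1L),                              // createCombiner
  (acc: (Long, Long), v: Long) =>                    // mergeValue (within a partition)
    (acc._1 + v, acc._2 + 1L),
  (a: (Long, Long), b: (Long, Long)) =>              // mergeCombiners (across partitions)
    (a._1 + b._1, a._2 + b._2)
)
```

If the combiner tuples are small like this, most of the remaining garbage usually comes from deserializing the ORC rows and building the case-class objects in steps 1-2, not from the aggregation itself.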
I am using Spark 1.4 and my application is spending much of its time in GC,
around 60-70% of the time for each task.
I am using the parallel GC.
Please help as soon as possible.
Thanks,
Renu
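A hedged sketch of settings commonly tried when tasks spend most of their time in GC (the flag values are starting points, not tuned numbers, and G1 behavior varies by JVM version):

```scala
// Sketch: switching executors to G1 and reducing garbage pressure.
val conf = new org.apache.spark.SparkConf()
  .set("spark.executor.extraJavaOptions",
    "-XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=35")
  // Often the bigger win is producing less garbage in the first place:
  // Kryo serialization and more partitions (smaller per-task working sets).
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
```

If 60-70% of task time is GC, it is also worth checking whether a single huge partition or an exploded join is simply overflowing the young generation, which no collector choice will fix.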
I am reading Parquet files from a dir which has 400 files of max 180M size,
so while reading, my partition count should be 400, as the split size is 256M
in my case.
But it is taking 787 partitions. Why is it so?
Please help.
Thanks,
Renu
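One hedged explanation that fits the numbers: if splits are actually generated per HDFS block (128MB is a common default) rather than at the assumed 256MB, each 180MB file spans two blocks. A back-of-envelope check:

```scala
// Assumption: one input split per HDFS block, not per file.
val files = 400
val fileMB = 180
val blockMB = 128                                        // common HDFS default
val splitsPerFile = math.ceil(fileMB.toDouble / blockMB).toInt  // 2 for 180MB
println(files * splitsPerFile)  // 800, close to the observed 787
```

The gap between 800 and 787 would then just be the files smaller than one block. Checking the actual `dfs.blocksize` on the cluster would confirm or rule this out.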
Hi,
I am reading data from a Hive ORC table using Spark SQL, which is taking
256MB as the split size.
How can I change this size?
Thanks,
Renu
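A sketch of one way to influence this (the property names are the standard MapReduce ones plus the older `mapred` variant; whether the ORC reader honors them depends on the Spark/Hive version in use, so treat this as something to try, not a guarantee):

```scala
// Sketch: lowering the Hadoop input split size for ORC reads.
val targetBytes = (128L * 1024 * 1024).toString          // 128MB placeholder
sc.hadoopConfiguration.set(
  "mapreduce.input.fileinputformat.split.maxsize", targetBytes)
sc.hadoopConfiguration.set("mapred.max.split.size", targetBytes)
```

A smaller max split size means more, smaller partitions; the Spark UI's input-size column per task is the quickest way to confirm the setting took effect.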
Hi,
I have a query regarding driver memory:
what are the tasks in which driver memory is used?
Please help.
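Broadly, driver memory holds scheduling metadata plus anything the application pulls back to or builds on the driver. A hedged sketch (`rdd` and `bigMap` are placeholders for illustration):

```scala
// Sketch: the main consumers of driver memory.
val rdd: org.apache.spark.rdd.RDD[Int] = ???   // placeholder RDD
val bigMap: Map[String, String] = ???          // placeholder lookup table

val data = rdd.collect()     // action results are materialized on the driver
val bc = sc.broadcast(bigMap) // broadcast values are built/held on the driver
// In addition, DAG, stage, and task-scheduling state lives on the driver
// and grows with the number of partitions and stages.
```

Driver memory is set at launch, e.g. `spark-submit --driver-memory 4g`; in client mode it cannot be raised from inside an already-running application.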
-- Forwarded message --
From: Renu Yadav <yren...@gmail.com>
Date: Mon, Sep 14, 2015 at 4:51 PM
Subject: Spark job failed
To: d...@spark.apache.org
I am getting the below error while running a Spark job:
storage.DiskBlockObjectWriter: Uncaught exception while reverting partial