Re: What is the best way to organize a join within a foreach?

2023-04-27 Thread Amit Joshi
Hi Marco,

I am not sure you will get access to the DataFrame inside the foreach, as
the Spark context is not serializable, if I remember correctly.

One thing you can do:
use the cogroup operation on both datasets.
This will give you (key, iter(v1), iter(v2)).
And then use foreachPartition to perform your task of converting to
JSON and more.

Performance-wise, you can batch records per user and also share
the same connection in each partition if needed.
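
A minimal Scala sketch of that idea (the sample data, field names and the
per-user side effect are illustrative assumptions, not the actual job):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.appName("cogroup-sketch").getOrCreate()
    val sc = spark.sparkContext

    // (user_id, user_name) and (user_id, order_json) pairs
    val userRdd  = sc.parallelize(Seq(("u1", "Alice"), ("u2", "Bob")))
    val orderRdd = sc.parallelize(Seq(
      ("u1", """{"order":1}"""), ("u1", """{"order":2}"""), ("u2", """{"order":3}""")))

    // cogroup yields (key, (Iterable[users], Iterable[orders])),
    // so all of a user's orders arrive together
    val grouped = userRdd.cogroup(orderRdd)

    grouped.foreachPartition { partition =>
      // open one connection (S3 client, Kafka producer, ...) per partition and reuse it
      partition.foreach { case (userId, (users, orders)) =>
        // build the per-user JSON payload and perform the side effect (write/publish) here
        val payload = s"""{"user":"$userId","orders":[${orders.mkString(",")}]}"""
      }
    }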

Hope this will help.

Regards
Amit


On Wed, 26 Apr, 2023, 15:58 Marco Costantini, <
marco.costant...@rocketfncl.com> wrote:

> Thanks team,
> Email was just an example. The point was to illustrate that some actions
> could be chained using Spark's foreach. In reality, this is an S3 write and
> a Kafka message production, which I think is quite reasonable for spark to
> do.
>
> To answer Ayan's first question: yes, all of a user's orders, prepared for
> each and every user.
>
> Other than the remarks that email transmission is unwise (which, as I've now
> noted, is irrelevant) I am not seeing an alternative to using Spark's
> foreach. Unless your proposal is for the Spark job to target 1 user, and
> just run the job 1000's of times taking the user_id as input. That doesn't
> sound attractive.
>
> Also, while we say that foreach is not optimal, I cannot find any evidence
> of it; neither here nor online. If there are any docs about the inner
> workings of this functionality, please pass them to me. I continue to
> search for them. Even late last night!
>
> Thanks for your help team,
> Marco.
>
> On Wed, Apr 26, 2023 at 6:21 AM Mich Talebzadeh 
> wrote:
>
>> Indeed very valid points by Ayan. How is email going to handle 1000s of
>> records? As a solution architect I tend to replace users with customers, and
>> for each order there must be products, a sort of many-to-many relationship. If
>> I were a customer I would also be interested in product details as
>> well. Sending via email sounds like a Jurassic Park solution 😗
>>
>> On Wed, 26 Apr 2023 at 10:24, ayan guha  wrote:
>>
>>> Adding to what Mitch said,
>>>
>>> 1. Are you trying to send statements of all orders to all users? Or the
>>> latest order only?
>>>
>>> 2. Sending email is not a good use of Spark. Instead, I suggest using a
>>> notification service or function. Spark should write to a queue (Kafka,
>>> SQS... pick your choice here).
>>>
>>> Best regards
>>> Ayan
>>>
>>> On Wed, 26 Apr 2023 at 7:01 pm, Mich Talebzadeh <
>>> mich.talebza...@gmail.com> wrote:
>>>
 Well, OK, in a nutshell you want the result set for every user prepared
 and emailed to that user, right?

 This is a form of ETL where those result sets need to be posted
 somewhere. Say you create a table based on the result set prepared for each
 user. You may have many raw target tables at the end of the first ETL. How
 does this differ from using forEach? Performance wise forEach may not be
 optimal.

 Can you take the sample tables and try your method?

 HTH

 Mich Talebzadeh,
 Lead Solutions Architect/Engineering Lead
 Palantir Technologies Limited
 London
 United Kingdom


view my Linkedin profile
 


  https://en.everybodywiki.com/Mich_Talebzadeh



 *Disclaimer:* Use it at your own risk. Any and all responsibility for
 any loss, damage or destruction of data or any other property which may
 arise from relying on this email's technical content is explicitly
 disclaimed. The author will in no case be liable for any monetary damages
 arising from such loss, damage or destruction.




 On Wed, 26 Apr 2023 at 04:10, Marco Costantini <
 marco.costant...@rocketfncl.com> wrote:

> Hi Mich,
> First, thank you for that. Great effort put into helping.
>
> Second, I don't think this tackles the technical challenge here. I
> understand the windowing as it serves those ranks you created, but I don't
> see how the ranks contribute to the solution.
> Third, the core of the challenge is about performing this kind of
> 'statement' but for all users. In this example we target Mich, but that
> reduces the complexity by a lot! In fact, a simple join and filter would
> solve that one.
>
> Any thoughts on that? For me, the foreach is desirable because I can
> have the workers chain other actions to each iteration (send email, send
> HTTP request, etc).
>
> Thanks Mich,
> Marco.
>
> On Tue, Apr 25, 2023 at 6:06 PM Mich Talebzadeh <
> mich.talebza...@gmail.com> wrote:
>
>> Hi Marco,
>>
>> First thoughts.
>>
>> foreach() is an action operation that iterates/loops over each
>> element in the dataset, meaning it is cursor-based. That is different from
>> operating over the dataset as a set, which is far more efficient.
>>

Re: Splittable or not?

2022-09-14 Thread Amit Joshi
Hi Sid,

Snappy itself is not splittable. But a container format that holds the actual
data, like Parquet (which is basically divided into row groups), can be
compressed using Snappy.
This works because the blocks (pages of the Parquet format) inside the Parquet
file are compressed independently with Snappy, so the file remains splittable.
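
A small Scala illustration of this (paths are placeholders; snappy is in fact
Spark's default Parquet codec, shown explicitly here):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.appName("snappy-parquet-sketch").getOrCreate()
    import spark.implicits._

    val df = Seq((1, "a"), (2, "b")).toDF("id", "value")

    // produces part-*.snappy.parquet files; each row group/page is compressed on its own
    df.write.option("compression", "snappy").parquet("/tmp/out")

    // reading back: row groups can be assigned to different tasks, so the read stays parallel
    val back = spark.read.parquet("/tmp/out")
    back.show()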

Thanks
Amit

On Wed, Sep 14, 2022 at 8:14 PM Sid  wrote:

> Hello experts,
>
> I know that Gzip and Snappy files are not splittable, i.e. the data won't be
> distributed into multiple blocks; rather it would try to load the data in a
> single partition/block.
>
> So, my question is when I write the parquet data via spark it gets stored
> at the destination with something like *part*.snappy.parquet*
>
> So, when I read this data will it affect my performance?
>
> Please help me if there is any understanding gap.
>
> Thanks,
> Sid
>


Re: Salting technique doubt

2022-07-31 Thread Amit Joshi
Hi Sid,

I am not sure I understood your question.
But the keys cannot be different in the two tables after salting; this is
what I have shown in the explanation.
You salt Table A and then explode Table B to create all possible salted values.

In your case, I do not understand why Table B has x_8/x_9. It should contain all
possible values of the salt you used.

I hope this clarifies it.

Thanks



On Sun, Jul 31, 2022 at 10:02 AM Sid  wrote:

> Hi Amit,
>
> Thanks for your reply. However, your answer doesn't seem different from
> what I have explained.
>
> My question is after salting if the keys are different like in my example
> then post join there would be no results assuming the join type as inner
> join because even though the keys are segregated in different partitions
> based on unique keys they are not matching because x_1/x_2 !=x_8/x_9
>
> How do you ensure that the results are matched?
>
> Best,
> Sid
>
> On Sun, Jul 31, 2022 at 1:34 AM Amit Joshi 
> wrote:
>
>> Hi Sid,
>>
>> Salting is normally a technique to add random characters to existing
>> values.
>> In big data we can use salting to deal with the skewness.
>> Salting in a join can be used as:
>> * Table A-*
>> Col1, join_col , where join_col values are {x1, x2, x3}
>> x1
>> x1
>> x1
>> x2
>> x2
>> x3
>>
>> *Table B-*
>> join_col, Col3 , where join_col  value are {x1, x2}
>> x1
>> x2
>>
>> *Problem: *Let's say for table A, the data is skewed on x1
>> Now salting goes like this.  *Salt value =2*
>> For
>> *table A, *create a new col with values by salting join col
>> *New_Join_Col*
>> x1_1
>> x1_2
>> x1_1
>> x2_1
>> x2_2
>> x3_1
>>
>> For *Table B,*
>> Change the join_col to all possible values of the salt.
>> join_col
>> x1_1
>> x1_2
>> x2_1
>> x2_2
>>
>> And then join it like
>> table1.join(table2, where tableA.new_join_col == tableB. join_col)
>>
>> Let me know if you have any questions.
>>
>> Regards
>> Amit Joshi
>>
>>
>> On Sat, Jul 30, 2022 at 7:16 PM Sid  wrote:
>>
>>> Hi Team,
>>>
>>> I was trying to understand the Salting technique for the column where
>>> there would be a huge load on a single partition because of the same keys.
>>>
>>> I referred to one youtube video with the below understanding:
>>>
>>> So, using the salting technique we can actually change the joining
>>> column values by appending some random number in a specified range.
>>>
>>> So, suppose I have these two values in a partition of two different
>>> tables:
>>>
>>> Table A:
>>> Partition1:
>>> x
>>> .
>>> .
>>> .
>>> x
>>>
>>> Table B:
>>> Partition1:
>>> x
>>> .
>>> .
>>> .
>>> x
>>>
>>> After Salting it would be something like the below:
>>>
>>> Table A:
>>> Partition1:
>>> x_1
>>>
>>> Partition 2:
>>> x_2
>>>
>>> Table B:
>>> Partition1:
>>> x_3
>>>
>>> Partition 2:
>>> x_8
>>>
>>> Now, when I inner join these two tables after salting in order to avoid
>>> data skewness problems, I won't get a match since the keys are different
>>> after applying salting techniques.
>>>
>>> So how does this resolves the data skewness issue or if there is some
>>> understanding gap?
>>>
>>> Could anyone help me in layman's terms?
>>>
>>> TIA,
>>> Sid
>>>
>>


Re: Salting technique doubt

2022-07-30 Thread Amit Joshi
Hi Sid,

Salting is normally a technique to add random characters to existing values.
In big data we can use salting to deal with skewness.
Salting in a join can be used as:
* Table A-*
Col1, join_col , where join_col values are {x1, x2, x3}
x1
x1
x1
x2
x2
x3

*Table B-*
join_col, Col3 , where join_col  value are {x1, x2}
x1
x2

*Problem: *Let's say for table A, the data is skewed on x1.
Now salting goes like this.  *Salt value = 2*
For
*table A, *create a new col with values by salting join col
*New_Join_Col*
x1_1
x1_2
x1_1
x2_1
x2_2
x3_1

For *Table B,*
Change the join_col to all possible values of the salt.
join_col
x1_1
x1_2
x2_1
x2_2

And then join it like
table1.join(table2, where tableA.new_join_col == tableB. join_col)
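
A rough Scala sketch of the same idea with DataFrames (column names and the
simplified tables follow the example above; salt value = 2):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    val spark = SparkSession.builder.appName("salting-sketch").getOrCreate()
    import spark.implicits._

    val tableA = Seq("x1", "x1", "x1", "x2", "x2", "x3").toDF("join_col")  // skewed on x1
    val tableB = Seq("x1", "x2").toDF("join_col")

    val salt = 2

    // Table A: append a random suffix _1.._salt to the skewed key
    val saltedA = tableA.withColumn("new_join_col",
      concat(col("join_col"), lit("_"), (floor(rand() * salt) + 1).cast("string")))

    // Table B: explode every key into all its possible salted values,
    // so each salted key in A is guaranteed to find its match
    val saltedB = tableB
      .withColumn("salt_id", explode(array((1 to salt).map(lit(_)): _*)))
      .withColumn("join_col_salted", concat(col("join_col"), lit("_"), col("salt_id").cast("string")))
      .drop("salt_id", "join_col")

    val joined = saltedA.join(saltedB, saltedA("new_join_col") === saltedB("join_col_salted"))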

Let me know if you have any questions.

Regards
Amit Joshi


On Sat, Jul 30, 2022 at 7:16 PM Sid  wrote:

> Hi Team,
>
> I was trying to understand the Salting technique for the column where
> there would be a huge load on a single partition because of the same keys.
>
> I referred to one youtube video with the below understanding:
>
> So, using the salting technique we can actually change the joining column
> values by appending some random number in a specified range.
>
> So, suppose I have these two values in a partition of two different tables:
>
> Table A:
> Partition1:
> x
> .
> .
> .
> x
>
> Table B:
> Partition1:
> x
> .
> .
> .
> x
>
> After Salting it would be something like the below:
>
> Table A:
> Partition1:
> x_1
>
> Partition 2:
> x_2
>
> Table B:
> Partition1:
> x_3
>
> Partition 2:
> x_8
>
> Now, when I inner join these two tables after salting in order to avoid
> data skewness problems, I won't get a match since the keys are different
> after applying salting techniques.
>
> So how does this resolves the data skewness issue or if there is some
> understanding gap?
>
> Could anyone help me in layman's terms?
>
> TIA,
> Sid
>


Re: [Spark] Optimize spark join on different keys for same data frame

2021-10-04 Thread Amit Joshi
Hi spark users,

Can anyone please provide any views on the topic.


Regards
Amit Joshi

On Sunday, October 3, 2021, Amit Joshi  wrote:

> Hi Spark-Users,
>
> Hope you are doing good.
>
> I have been working on cases where a dataframe is joined with more than
> one data frame separately, on different cols, that too frequently.
> I was wondering how to optimize the join to make them faster.
> We can consider the dataset to be big in size so broadcast joins is not an
> option.
>
> For eg:
>
> schema_df1  = new StructType()
> .add(StructField("key1", StringType, true))
> .add(StructField("key2", StringType, true))
> .add(StructField("val", DoubleType, true))
>
>
> schema_df2  = new StructType()
> .add(StructField("key1", StringType, true))
> .add(StructField("val", DoubleType, true))
>
>
> schema_df3  = new StructType()
> .add(StructField("key2", StringType, true))
> .add(StructField("val", DoubleType, true))
>
> Now if we want to join
> join1 =  df1.join(df2,"key1")
> join2 =  df1.join(df3,"key2")
>
> I was thinking of bucketing as a solution to speed up the joins. But if I
> bucket df1 on the key1,then join2  may not benefit, and vice versa (if
> bucket on key2 for df1).
>
> or Should we bucket df1 twice, one with key1 and another with key2?
> Is there a strategy to make both the joins faster for both the joins?
>
>
> Regards
> Amit Joshi
>
>
>
>


[Spark] Optimize spark join on different keys for same data frame

2021-10-03 Thread Amit Joshi
Hi Spark-Users,

Hope you are doing good.

I have been working on cases where a dataframe is joined with more than one
other data frame separately, on different columns, and that too frequently.
I was wondering how to optimize the joins to make them faster.
We can consider the dataset to be big in size, so broadcast joins are not an
option.

For eg:

schema_df1  = new StructType()
.add(StructField("key1", StringType, true))
.add(StructField("key2", StringType, true))
.add(StructField("val", DoubleType, true))


schema_df2  = new StructType()
.add(StructField("key1", StringType, true))
.add(StructField("val", DoubleType, true))


schema_df3  = new StructType()
.add(StructField("key2", StringType, true))
.add(StructField("val", DoubleType, true))

Now if we want to join
join1 =  df1.join(df2,"key1")
join2 =  df1.join(df3,"key2")

I was thinking of bucketing as a solution to speed up the joins. But if I
bucket df1 on key1, then join2 may not benefit, and vice versa (if I
bucket df1 on key2).

Or should we bucket df1 twice, once with key1 and once with key2?
Is there a strategy to make both the joins faster?
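
For reference, a rough Scala sketch of the bucketing option described above
(bucket counts, table names and the metastore-backed saveAsTable are illustrative
assumptions):

    // bucket df1 twice, once per join key, and bucket df2/df3 on their keys
    df1.write.bucketBy(64, "key1").sortBy("key1").saveAsTable("df1_bucketed_key1")
    df1.write.bucketBy(64, "key2").sortBy("key2").saveAsTable("df1_bucketed_key2")
    df2.write.bucketBy(64, "key1").sortBy("key1").saveAsTable("df2_bucketed_key1")
    df3.write.bucketBy(64, "key2").sortBy("key2").saveAsTable("df3_bucketed_key2")

    // joins on matching bucketed columns can then avoid shuffling both sides
    val join1 = spark.table("df1_bucketed_key1").join(spark.table("df2_bucketed_key1"), "key1")
    val join2 = spark.table("df1_bucketed_key2").join(spark.table("df3_bucketed_key2"), "key2")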


Regards
Amit Joshi


Re: Does Rollups work with spark structured streaming with state.

2021-06-17 Thread Amit Joshi
Hi Mich,

Thanks for your email.
I have tried it in batch mode;
still looking to try it in streaming mode.
Will update you accordingly.


Regards
Amit Joshi

On Thu, Jun 17, 2021 at 1:07 PM Mich Talebzadeh 
wrote:

> OK let us start with the basic cube
>
> create a DF first
>
> scala> val df = Seq(
>  |   ("bar", 2L),
>  |   ("bar", 2L),
>  |   ("foo", 1L),
>  |   ("foo", 2L)
>  | ).toDF("word", "num")
> df: org.apache.spark.sql.DataFrame = [word: string, num: bigint]
>
>
> Now try cube on it
>
>
> scala> df.cube($"word", $"num").count.sort(asc("word"), asc("num")).show
>
> +----+----+-----+
> |word| num|count|
> +----+----+-----+
> |null|null|    4|  Total rows in df
> |null|   1|    1|  Count where num equals 1
> |null|   2|    3|  Count where num equals 2
> | bar|null|    2|  Where word equals bar
> | bar|   2|    2|  Where word equals bar and num equals 2
> | foo|null|    2|  Where word equals foo
> | foo|   1|    1|  Where word equals foo and num equals 1
> | foo|   2|    1|  Where word equals foo and num equals 2
> +----+----+-----+
>
>
> and rollup
>
>
> scala> df.rollup($"word",$"num").count.sort(asc("word"), asc("num")).show
>
>
> +----+----+-----+
> |word| num|count|
> +----+----+-----+
> |null|null|    4|  Count of all rows
> | bar|null|    2|  Count when word is bar
> | bar|   2|    2|  Count when num is 2
> | foo|null|    2|  Count when word is foo
> | foo|   1|    1|  When word is foo and num is 1
> | foo|   2|    1|  When word is foo and num is 2
> +----+----+-----+
>
>
> So rollup() returns a subset of the rows returned by cube(). From the
> above, rollup returns 6 rows whereas cube returns 8 rows. Here are the
> missing rows.
>
> +----+----+-----+
> |word| num|count|
> +----+----+-----+
> |null|   1|    1|  Word is null and num is 1
> |null|   2|    3|  Word is null and num is 2
> +----+----+-----+
>
> Now back to Spark Structured Streaming (SSS), we have basic aggregations
>
>
> """
> We work out the window and the AVG(temperature) in the
> window's timeframe below
> This should return back the following Dataframe as struct
>
>  root
>  |-- window: struct (nullable = false)
>  |    |-- start: timestamp (nullable = true)
>  |    |-- end: timestamp (nullable = true)
>  |-- avg(temperature): double (nullable = true)
>
> """
> resultM = resultC. \
>     withWatermark("timestamp", "5 minutes"). \
>     groupBy(window(resultC.timestamp, "5 minutes", "5 minutes")). \
>     avg('temperature')
>
> # We take the above Dataframe and flatten it to get the columns aliased as
> # "startOfWindowFrame", "endOfWindowFrame" and "AVGTemperature"
> resultMF = resultM. \
>     select( \
>         F.col("window.start").alias("startOfWindowFrame") \
>       , F.col("window.end").alias("endOfWindowFrame") \
>       , F.col("avg(temperature)").alias("AVGTemperature"))
>
> Now basic aggregation on singular columns can be done like
> avg('temperature'),max(),stddev() etc
>
>
> For cube() and rollup() I will require additional columns like location
> etc in my kafka topic. Personally I have not tried it but it will be
> interesting to see if it works.
>
>
> Have you tried cube() first?
>
>
> HTH
>
>
>view my Linkedin profile
> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Thu, 17 Jun 2021 at 07:44, Amit Joshi 
> wrote:
>
>> Hi Mich,
>>
>> Yes, you may think of cube rollups.
>> Let me try to give an example:
>> If we have a stream of data like (country,area,count, time), we would be
>> able to get the updated count with different combinations of keys.
>>
>>> As example -
>>>  (country - count)
>>>  (country , area - coun

Re: Does Rollups work with spark structured streaming with state.

2021-06-16 Thread Amit Joshi
Hi Mich,

Yes, you may think of it in terms of cube/rollup.
Let me try to give an example:
If we have a stream of data like (country,area,count, time), we would be
able to get the updated count with different combinations of keys.

> As example -
>  (country - count)
>  (country , area - count)


We may need to store the state to update the count, so Spark structured
streaming state will come into the picture.

As of now, with batch programming, we can do it with

> df.rollup(col1,col2).count


But if I try to use it with spark structured streaming state, will it store
the state of all the groups as well?
I hope I was able to make my point clear.

Regards
Amit Joshi

On Wed, Jun 16, 2021 at 11:36 PM Mich Talebzadeh 
wrote:

>
>
> Hi,
>
> Just to clarify
>
> Are we talking about* rollup* as a subset of a cube that computes
> hierarchical subtotals from left to right?
>
>
>
>
>
>view my Linkedin profile
> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Wed, 16 Jun 2021 at 16:37, Amit Joshi 
> wrote:
>
>> Appreciate if someone could give some pointers in the question below.
>>
>> -- Forwarded message -
>> From: Amit Joshi 
>> Date: Tue, Jun 15, 2021 at 12:19 PM
>> Subject: [Spark]Does Rollups work with spark structured streaming with
>> state.
>> To: spark-user 
>>
>>
>> Hi Spark-Users,
>>
>> Hope you are all doing well.
>> Recently I was looking into rollup operations in spark.
>>
>> As we know state based aggregation is supported in spark structured
>> streaming.
>> I was wondering if rollup operations are also supported?
>> Like the state of previous aggregation on the rollups are saved.
>>
>> If rollups are not supported, then what is the standard way to handle
>> this?
>>
>>
>> Regards
>> Amit Joshi
>>
>


Fwd: Does Rollups work with spark structured streaming with state.

2021-06-16 Thread Amit Joshi
I would appreciate it if someone could give some pointers on the question below.

-- Forwarded message -
From: Amit Joshi 
Date: Tue, Jun 15, 2021 at 12:19 PM
Subject: [Spark]Does Rollups work with spark structured streaming with
state.
To: spark-user 


Hi Spark-Users,

Hope you are all doing well.
Recently I was looking into rollup operations in spark.

As we know state based aggregation is supported in spark structured
streaming.
I was wondering if rollup operations are also supported?
Like the state of previous aggregation on the rollups are saved.

If rollups are not supported, then what is the standard way to handle this?


Regards
Amit Joshi


Does Rollups work with spark structured streaming with state.

2021-06-14 Thread Amit Joshi
Hi Spark-Users,

Hope you are all doing well.
Recently I was looking into rollup operations in spark.

As we know state based aggregation is supported in spark structured
streaming.
I was wondering if rollup operations are also supported?
Like the state of previous aggregation on the rollups are saved.

If rollups are not supported, then what is the standard way to handle this?


Regards
Amit Joshi


Re: multiple query with structured streaming in spark does not work

2021-05-21 Thread Amit Joshi
Hi Jian,

I found this link that could be useful:
https://spark.apache.org/docs/latest/job-scheduling.html#scheduling-within-an-application

By the way, you can first try giving enough resources to run both jobs
without defining the scheduler.
I mean, run the queries with the default scheduler, but provide enough memory in
the Spark cluster to run both.


Regards
Amit Joshi



On Sat, May 22, 2021 at 5:41 AM  wrote:

> Hi Amit;
>
>
>
> Thank you for your prompt reply and kind help. Wonder how to set the
> scheduler to FAIR mode in python. Following code seems to me does not work
> out.
>
>
>
> conf = SparkConf().setMaster("local").setAppName("HSMSTest1")
>
> sc = SparkContext(conf=conf)
>
> sc.setLocalProperty('spark.scheduler.mode', 'FAIR')
>
> spark =
> SparkSession.builder.appName("HSMSStructedStreaming1").getOrCreate()
>
>
>
> by the way, as I am using nc -lk  to input the stream, will it cause
> by the reason as the input stream can only be consumed by one query as
> mentioned in below post as;
>
>
>
>
> https://stackoverflow.com/questions/45618489/executing-separate-streaming-queries-in-spark-structured-streaming
>
>
>
> appreciate your further help/support.
>
>
>
> Best Regards,
>
>
>
> Jian Xu
>
>
>
> *From:* Amit Joshi 
> *Sent:* Friday, May 21, 2021 12:52 PM
> *To:* jia...@xtronica.no
> *Cc:* user@spark.apache.org
> *Subject:* Re: multiple query with structured streaming in spark does not
> work
>
>
>
> Hi Jian,
>
>
>
> You have to use same spark session to run all the queries.
>
> And use the following to wait for termination.
>
>
>
> q1 = writestream.start
>
> q2 = writstream2.start
>
> spark.streams.awaitAnyTermination
>
>
>
> And also set the scheduler in the spark config to FAIR scheduler.
>
>
>
>
>
> Regards
>
> Amit Joshi
>
>
>
>
>
> On Saturday, May 22, 2021,  wrote:
>
> Hi There;
>
>
>
> I am new to spark. We are using spark to develop our app for data
> streaming with sensor readings.
>
>
>
> I am having trouble to get two queries with structured streaming working
> concurrently.
>
>
>
> Following is the code. It can only work with one of them. Wonder if there
> is any way to get it doing. Appreciate help from the team.
>
>
>
> Regards,
>
>
>
> Jian Xu
>
>
>
>
>
> hostName = 'localhost'
>
> portNumber= 
>
> wSize= '10 seconds'
>
> sSize ='2 seconds'
>
>
>
> def wnq_fb_func(batch_df, batch_id):
>
> print("batch is processed from time:{}".format(datetime.now()))
>
> print(batch_df.collect())
>
> batch_df.show(10,False,False)
>
>
>
> lines = spark.readStream.format('socket').option('host',
> hostName).option('port', portNumber).option('includeTimestamp', True).load()
>
>
>
> nSensors=3
>
>
>
> scols = split(lines.value, ',').cast(ArrayType(FloatType()))
>
> sensorCols = []
>
> for i in range(nSensors):
>
> sensorCols.append(scols.getItem(i).alias('sensor'+ str(i)))
>
>
>
> nlines=lines.select(lines.timestamp,lines.value, *sensorCols)
>
> nlines.printSchema()
>
>
>
> wnlines =nlines.select(window(nlines.timestamp, wSize,
> sSize).alias('TimeWindow'), *lines.columns)
>
> wnquery= wnlines.writeStream.trigger(processingTime=sSize)\
>
> .outputMode('append').foreachBatch(wnq_fb_func).start()
>
>
>
> nquery=nlines.writeStream.outputMode('append').format('console').start()
>
> nquery.awaitTermination()
>
> wnquery.awaitTermination()
>
>
>
>
>
>
>
>


Re: multiple query with structured streaming in spark does not work

2021-05-21 Thread Amit Joshi
Hi Jian,

You have to use the same Spark session to run all the queries,
and use the following to wait for termination:

q1 = writestream.start
q2 = writestream2.start
spark.streams.awaitAnyTermination

And also set the scheduler in the spark config to FAIR scheduler.
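
A minimal Scala sketch of that pattern (the rate source and console sinks are
placeholders for the real sources and sinks):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder
      .appName("two-streams")
      .config("spark.scheduler.mode", "FAIR")  // FAIR scheduler set on the Spark config
      .getOrCreate()

    val rateDf = spark.readStream.format("rate").option("rowsPerSecond", "5").load()

    // two independent queries started from the same SparkSession
    val q1 = rateDf.writeStream.format("console").outputMode("append").start()
    val q2 = rateDf.groupBy("value").count()
      .writeStream.format("console").outputMode("complete").start()

    // blocks until any of the active queries terminates
    spark.streams.awaitAnyTermination()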


Regards
Amit Joshi



On Saturday, May 22, 2021,  wrote:

> Hi There;
>
>
>
> I am new to spark. We are using spark to develop our app for data
> streaming with sensor readings.
>
>
>
> I am having trouble to get two queries with structured streaming working
> concurrently.
>
>
>
> Following is the code. It can only work with one of them. Wonder if there
> is any way to get it doing. Appreciate help from the team.
>
>
>
> Regards,
>
>
>
> Jian Xu
>
>
>
>
>
> hostName = 'localhost'
>
> portNumber= 
>
> wSize= '10 seconds'
>
> sSize ='2 seconds'
>
>
>
> def wnq_fb_func(batch_df, batch_id):
>
> print("batch is processed from time:{}".format(datetime.now()))
>
> print(batch_df.collect())
>
> batch_df.show(10,False,False)
>
>
>
> lines = spark.readStream.format('socket').option('host',
> hostName).option('port', portNumber).option('includeTimestamp',
> True).load()
>
>
>
> nSensors=3
>
>
>
> scols = split(lines.value, ',').cast(ArrayType(FloatType()))
>
> sensorCols = []
>
> for i in range(nSensors):
>
> sensorCols.append(scols.getItem(i).alias('sensor'+ str(i)))
>
>
>
> nlines=lines.select(lines.timestamp,lines.value, *sensorCols)
>
> nlines.printSchema()
>
>
>
> wnlines =nlines.select(window(nlines.timestamp, wSize,
> sSize).alias('TimeWindow'), *lines.columns)
>
> wnquery= wnlines.writeStream.trigger(processingTime=sSize)\
>
> .outputMode('append').foreachBatch(wnq_fb_func).start()
>
>
>
> nquery=nlines.writeStream.outputMode('append').format('console').start()
>
> nquery.awaitTermination()
>
> wnquery.awaitTermination()
>
>
>
>
>
>
>


Re: [EXTERNAL] Urgent Help - Py Spark submit error

2021-05-14 Thread Amit Joshi
Hi KhajaAsmath,

Client vs Cluster: In client mode the driver runs on the machine from which you
submit your job, whereas in cluster mode the driver runs on one of the worker
nodes.

I think you need to pass the conf file to your driver, as you are using it
in the driver code, which (in cluster mode) runs on one of the worker nodes.
Use this command to pass it to the driver:
*--files  /appl/common/ftp/conf.json  --conf
spark.driver.extraJavaOptions="-Dconfig.file=conf.json"*

And make sure you are able to access the file location from the worker nodes.


Regards
Amit Joshi

On Sat, May 15, 2021 at 5:14 AM KhajaAsmath Mohammed <
mdkhajaasm...@gmail.com> wrote:

> Here is my updated spark submit without any luck.,
>
> spark-submit --master yarn --deploy-mode cluster --files
> /appl/common/ftp/conf.json,/etc/hive/conf/hive-site.xml,/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
> --num-executors 6 --executor-cores 3 --driver-cores 3 --driver-memory 7g
> --executor-memory 7g /appl/common/ftp/ftp_event_data.py
> /appl/common/ftp/conf.json 2021-05-10 7
>
> On Fri, May 14, 2021 at 6:19 PM KhajaAsmath Mohammed <
> mdkhajaasm...@gmail.com> wrote:
>
>> Sorry my bad, it did not resolve the issue. I still have the same issue.
>> can anyone please guide me. I was still running as a client instead of a
>> cluster.
>>
>> On Fri, May 14, 2021 at 5:05 PM KhajaAsmath Mohammed <
>> mdkhajaasm...@gmail.com> wrote:
>>
>>> You are right. It worked but I still don't understand why I need to pass
>>> that to all executors.
>>>
>>> On Fri, May 14, 2021 at 5:03 PM KhajaAsmath Mohammed <
>>> mdkhajaasm...@gmail.com> wrote:
>>>
>>>> I am using json only to read properties before calling spark session. I
>>>> don't know why we need to pass that to all executors.
>>>>
>>>>
>>>> On Fri, May 14, 2021 at 5:01 PM Longjiang.Yang <
>>>> longjiang.y...@target.com> wrote:
>>>>
>>>>> Could you check whether this file is accessible in executors? (is it
>>>>> in HDFS or in the client local FS)
>>>>> /appl/common/ftp/conf.json
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> *From: *KhajaAsmath Mohammed 
>>>>> *Date: *Friday, May 14, 2021 at 4:50 PM
>>>>> *To: *"user @spark" 
>>>>> *Subject: *[EXTERNAL] Urgent Help - Py Spark submit error
>>>>>
>>>>>
>>>>>
>>>>> /appl/common/ftp/conf.json
>>>>>
>>>>


Re: jar incompatibility with Spark 3.1.1 for structured streaming with kafka

2021-04-07 Thread Amit Joshi
Hi Mich,

If I understood your problem correctly, it is that the spark-kafka jar is
shadowed by the installed Kafka client jar at run time.
I have been in that place before.
I can recommend resolving the issue using the shade plugin. The example I
am pasting here works for pom.xml.
I am very sure you will find something for sbt as well.
This is a Maven shade plugin configuration that renames the packages of classes while
packaging. This will form an uber jar.
<relocations>
    <relocation>
        <pattern>org.apache.kafka</pattern>
        <shadedPattern>shade.org.apache.kafka</shadedPattern>
    </relocation>
</relocations>
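
For sbt, a rough equivalent using the sbt-assembly plugin's shading support
(assuming sbt-assembly is already on the build) would go in build.sbt:

    assembly / assemblyShadeRules := Seq(
      ShadeRule.rename("org.apache.kafka.**" -> "shade.org.apache.kafka.@1").inAll
    )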



Hope this helps.

Regards
Amit Joshi

On Wed, Apr 7, 2021 at 8:14 PM Mich Talebzadeh 
wrote:

>
> Did some tests. The concern is SSS job running under YARN
>
>
> *Scenario 1)*  use spark-sql-kafka-0-10_2.12-3.1.0.jar
>
>- Removed spark-sql-kafka-0-10_2.12-3.1.0.jar from anywhere on
>CLASSPATH including $SPARK_HOME/jars
>- Added the said jar file to spark-submit in client mode (the only
>mode available to PySpark) with --jars
>- spark-submit --master yarn --deploy-mode client --conf
>spark.pyspark.virtualenv.enabled=true .. bla bla..  --driver-memory 4G
>--executor-memory 4G --num-executors 2 --executor-cores 2 *--jars
>$HOME/jars/spark-sql-kafka-0-10_2.12-3.1.0.jar *xyz.py
>
> This works fine
>
>
> *Scenario 2)* use spark-sql-kafka-0-10_2.12-3.1.1.jar in spark-submit
>
>
>
>-  spark-submit --master yarn --deploy-mode client --conf
>spark.pyspark.virtualenv.enabled=true ..bla bla.. --driver-memory 4G
>--executor-memory 4G --num-executors 2 --executor-cores 2 *--jars
>$HOME/jars/spark-sql-kafka-0-10_2.12-*3.1.1*.jar *xyz.py
>
> it failed with
>
>
>
>- Caused by: java.lang.NoSuchMethodError:
>
> org.apache.spark.kafka010.KafkaTokenUtil$.needTokenUpdate(Ljava/util/Map;Lscala/Option;)Z
>
> Scenario 3) use the package as per Structured Streaming + Kafka
> Integration Guide (Kafka broker version 0.10.0 or higher) - Spark 3.1.1
> Documentation (apache.org)
> <https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html#deploying>
>
>
>- spark-submit --master yarn --deploy-mode client --conf
>spark.pyspark.virtualenv.enabled=true ..bla bla.. --driver-memory 4G
>--executor-memory 4G --num-executors 2 --executor-cores 2 *--packages
>org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.1 *xyz.py
>
> it failed with
>
>- Caused by: java.lang.NoSuchMethodError:
>
> org.apache.spark.kafka010.KafkaTokenUtil$.needTokenUpdate(Ljava/util/Map;Lscala/Option;)Z
>
>
> HTH
>
>
>view my Linkedin profile
> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Wed, 7 Apr 2021 at 13:20, Gabor Somogyi 
> wrote:
>
>> +1 on Sean's opinion
>>
>> On Wed, Apr 7, 2021 at 2:17 PM Sean Owen  wrote:
>>
>>> You shouldn't be modifying your cluster install. You may at this point
>>> have conflicting, excess JARs in there somewhere. I'd start it over if you
>>> can.
>>>
>>> On Wed, Apr 7, 2021 at 7:15 AM Gabor Somogyi 
>>> wrote:
>>>
>>>> Not sure what you mean not working. You've added 3.1.1 to packages
>>>> which uses:
>>>> * 2.6.0 kafka-clients:
>>>> https://github.com/apache/spark/blob/1d550c4e90275ab418b9161925049239227f3dc9/pom.xml#L136
>>>> * 2.6.2 commons pool:
>>>> https://github.com/apache/spark/blob/1d550c4e90275ab418b9161925049239227f3dc9/pom.xml#L183
>>>>
>>>> I think it worth an end-to-end dep-tree analysis what is really
>>>> happening on the cluster...
>>>>
>>>> G
>>>>
>>>>
>>>> On Wed, Apr 7, 2021 at 11:11 AM Mich Talebzadeh <
>>>> mich.talebza...@gmail.com> wrote:
>>>>
>>>>> Hi Gabor et. al.,
>>>>>
>>>>> To be honest I am not convinced this package --packages
>>>>> org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.1 is really working!
>>>>>
>>>>> I know for definite that spark-sql-kafka-0-10_2.12-3.1.0.jar works
>>>>> fine. I reported the package working before because under $SPARK_HOME/jars
>>>>> on all nodes there was a copy 3.0.1 jar file. 

Re: [Spark Structured Streaming] Processing the data path coming from kafka.

2021-01-18 Thread Amit Joshi
Hi Boris,

Thanks for your code block.
I understood what you are trying to achieve in the code.

But the content of the files is JSON records separated by newlines.
And we have to make a dataframe out of it, as some processing has to be
done on it.
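
A rough Scala sketch of that step inside foreachBatch (reusing batchDf and the
"data_path" column from my earlier code; the actual processing is elided):

    // collect the small set of paths to the driver, then read them in one go
    val paths = batchDf.select("data_path").collect().map(_.getString(0))
    if (paths.nonEmpty) {
      // spark.read.json accepts multiple paths; each file holds newline-delimited JSON records
      val dfToProcess = spark.read.json(paths: _*)
      // ... apply the required processing on dfToProcess ...
    }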

Regards
Amit
On Monday, January 18, 2021, Boris Litvak  wrote:

> HI Amit,
>
>
>
> I was thinking along the lines of (python):
>
>
>
>
> @udf(returnType=StringType())
> def reader_udf(filename: str) -> str:
> with open(filename, "r") as f:
> return f.read()
>
>
> def run_locally():
> with utils.build_spark_session("Local", local=True) as spark:
> df = spark.readStream.csv(r'testdata', schema=StructType([
> StructField('filename', StringType(), True)]))
> df = df.withColumn('content', reader_udf(col('filename')))
> q = df.select('content').writeStream.queryName('test').format(
> 'console').start()
> q.awaitTermination()
>
>
>
> Now each row contains the contents of the files, provided they are not
> large you can foreach() over the df/rdd and do whatever you want with it,
> such as json.loads()/etc.
>
> If you know the shema of the jsons, you can later explode() them into a
> flat DF, ala https://stackoverflow.com/questions/38243717/spark-
> explode-nested-json-with-array-in-scala
>
>
>
> Note that unless I am missing something you cannot access spark session
> from foreach as code is not running on the driver.
>
> Please say if it makes sense or did I miss anything.
>
>
>
> Boris
>
>
>
> *From:* Amit Joshi 
> *Sent:* Monday, 18 January 2021 17:10
> *To:* Boris Litvak 
> *Cc:* spark-user 
> *Subject:* Re: [Spark Structured Streaming] Processing the data path
> coming from kafka.
>
>
>
> Hi Boris,
>
>
>
> I need to do processing on the data present in the path.
>
> That is the reason I am trying to make the dataframe.
>
>
>
> Can you please provide the example of your solution?
>
>
>
> Regards
>
> Amit
>
>
>
> On Mon, Jan 18, 2021 at 7:15 PM Boris Litvak  wrote:
>
> Hi Amit,
>
>
>
> Why won’t you just map()/mapXXX() the kafkaDf with the mapping function
> that reads the paths?
>
> Also, do you really have to read the json into an additional dataframe?
>
>
>
> Thanks, Boris
>
>
>
> *From:* Amit Joshi 
> *Sent:* Monday, 18 January 2021 15:04
> *To:* spark-user 
> *Subject:* [Spark Structured Streaming] Processing the data path coming
> from kafka.
>
>
>
> Hi ,
>
>
>
> I have a use case where the file path of the json records stored in s3 are
> coming as a kafka
>
> message in kafka. I have to process the data using spark structured
> streaming.
>
>
>
> The design which I thought is as follows:
>
> 1. In kafka Spark structures streaming, read the message containing the
> data path.
>
> 2. Collect the message record in driver. (Messages are small in sizes)
>
> 3. Create the dataframe from the datalocation.
>
>
>
> *kafkaDf*.select(*$"value"*.cast(StringType))
>   .writeStream.foreachBatch((batchDf:DataFrame, batchId:Long) =>  {
>
> //rough code
>
> //collec to driver
>
> *val *records = batchDf.collect()
>
> //create dataframe and process
> records foreach((rec: Row) =>{
>   *println*(*"records:##"*,rec.toString())
>   val path = rec.getAs[String](*"data_path"*)
>
>   val dfToProcess =spark.read.json(path)
>
>   
>
> })
>
> }
>
> I would like to know the views, if this approach is fine? Specifically if 
> there is some problem with
>
> with creating the dataframe after calling collect.
>
> If there is any better approach, please let know the same.
>
>
>
> Regards
>
> Amit Joshi
>
>


Re: [Spark Structured Streaming] Processing the data path coming from kafka.

2021-01-18 Thread Amit Joshi
Hi Boris,

I need to do processing on the data present in the path.
That is the reason I am trying to make the dataframe.

Can you please provide the example of your solution?

Regards
Amit

On Mon, Jan 18, 2021 at 7:15 PM Boris Litvak  wrote:

> Hi Amit,
>
>
>
> Why won’t you just map()/mapXXX() the kafkaDf with the mapping function
> that reads the paths?
>
> Also, do you really have to read the json into an additional dataframe?
>
>
>
> Thanks, Boris
>
>
>
> *From:* Amit Joshi 
> *Sent:* Monday, 18 January 2021 15:04
> *To:* spark-user 
> *Subject:* [Spark Structured Streaming] Processing the data path coming
> from kafka.
>
>
>
> Hi ,
>
>
>
> I have a use case where the file path of the json records stored in s3 are
> coming as a kafka
>
> message in kafka. I have to process the data using spark structured
> streaming.
>
>
>
> The design which I thought is as follows:
>
> 1. In kafka Spark structures streaming, read the message containing the
> data path.
>
> 2. Collect the message record in driver. (Messages are small in sizes)
>
> 3. Create the dataframe from the datalocation.
>
>
>
> *kafkaDf*.select(*$"value"*.cast(StringType))
>   .writeStream.foreachBatch((batchDf:DataFrame, batchId:Long) =>  {
>
> //rough code
>
> //collec to driver
>
> *val *records = batchDf.collect()
>
> //create dataframe and process
> records foreach((rec: Row) =>{
>   *println*(*"records:##"*,rec.toString())
>   val path = rec.getAs[String](*"data_path"*)
>
>   val dfToProcess =spark.read.json(path)
>
>   
>
> })
>
> }
>
> I would like to know the views, if this approach is fine? Specifically if 
> there is some problem with
>
> with creating the dataframe after calling collect.
>
> If there is any better approach, please let know the same.
>
>
>
> Regards
>
> Amit Joshi
>
>


[Spark Structured Streaming] Processing the data path coming from kafka.

2021-01-18 Thread Amit Joshi
Hi ,

I have a use case where the file paths of the JSON records stored in S3 are
coming as Kafka
messages. I have to process the data using Spark structured
streaming.

The design which I thought is as follows:
1. In kafka Spark structures streaming, read the message containing the
data path.
2. Collect the message record in driver. (Messages are small in sizes)
3. Create the dataframe from the datalocation.

kafkaDf.select($"value".cast(StringType))
  .writeStream.foreachBatch((batchDf:DataFrame, batchId:Long) =>  {

//rough code

//collect to driver

val records = batchDf.collect()

//create dataframe and process
records foreach((rec: Row) =>{
  println("records:##",rec.toString())
  val path = rec.getAs[String]("data_path")

  val dfToProcess =spark.read.json(path)

  

})

}

I would like to know your views on whether this approach is fine, specifically
if there is some problem

with creating the dataframe after calling collect.

If there is any better approach, please let know the same.


Regards

Amit Joshi


Re: Missing required configuration "partition.assignment.strategy" [ Kafka + Spark Structured Streaming ]

2020-12-08 Thread Amit Joshi
Hi All,

Can someone please help with this.

Thanks

On Tuesday, December 8, 2020, Amit Joshi  wrote:

> Hi Gabor,
>
> Pls find the logs attached. These are truncated logs.
>
> Command used :
> spark-submit --verbose --packages org.apache.spark:spark-sql-
> kafka-0-10_2.12:3.0.1,com.typesafe:config:1.4.0 --master yarn
> --deploy-mode cluster --class com.stream.Main --num-executors 2
> --driver-memory 2g --executor-cores 1 --executor-memory 4g --files
> gs://x/jars_application.conf,gs://x/log4j.properties
> gs://x/a-synch-r-1.0-SNAPSHOT.jar
> For this I used a snapshot jar, not a fat jar.
>
>
> Regards
> Amit
>
> On Mon, Dec 7, 2020 at 10:15 PM Gabor Somogyi 
> wrote:
>
>> Well, I can't do miracle without cluster and logs access.
>> What I don't understand why you need fat jar?! Spark libraries normally
>> need provided scope because it must exist on all machines...
>> I would take a look at the driver and executor logs which contains the
>> consumer configs + I would take a look at the exact version of the consumer
>> (this is printed also in the same log)
>>
>> G
>>
>>
>> On Mon, Dec 7, 2020 at 5:07 PM Amit Joshi 
>> wrote:
>>
>>> Hi Gabor,
>>>
>>> The code is very simple Kafka consumption of data.
>>> I guess, it may be the cluster.
>>> Can you please point out the possible problem toook for in the cluster?
>>>
>>> Regards
>>> Amit
>>>
>>> On Monday, December 7, 2020, Gabor Somogyi 
>>> wrote:
>>>
>>>> + Adding back user list.
>>>>
>>>> I've had a look at the Spark code and it's not modifying 
>>>> "partition.assignment.strategy"
>>>> so the problem
>>>> must be either in your application or in your cluster setup.
>>>>
>>>> G
>>>>
>>>>
>>>> On Mon, Dec 7, 2020 at 12:31 PM Gabor Somogyi <
>>>> gabor.g.somo...@gmail.com> wrote:
>>>>
>>>>> It's super interesting because that field has default value:
>>>>> *org.apache.kafka.clients.consumer.RangeAssignor*
>>>>>
>>>>> On Mon, 7 Dec 2020, 10:51 Amit Joshi, 
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Thnks for the reply.
>>>>>> I did tried removing the client version.
>>>>>> But got the same exception.
>>>>>>
>>>>>>
>>>>>> Thnks
>>>>>>
>>>>>> On Monday, December 7, 2020, Gabor Somogyi 
>>>>>> wrote:
>>>>>>
>>>>>>> +1 on the mentioned change, Spark uses the following kafka-clients
>>>>>>> library:
>>>>>>>
>>>>>>> 2.4.1
>>>>>>>
>>>>>>> G
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Dec 7, 2020 at 9:30 AM German Schiavon <
>>>>>>> gschiavonsp...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I think the issue is that you are overriding the kafka-clients that
>>>>>>>> comes with  spark-sql-kafka-0-10_2.12
>>>>>>>>
>>>>>>>>
>>>>>>>> I'd try removing the kafka-clients and see if it works
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, 6 Dec 2020 at 08:01, Amit Joshi 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hi All,
>>>>>>>>>
>>>>>>>>> I am running the Spark Structured Streaming along with Kafka.
>>>>>>>>> Below is the pom.xml
>>>>>>>>>
>>>>>>>>> 
>>>>>>>>> 1.8
>>>>>>>>> 1.8
>>>>>>>>> UTF-8
>>>>>>>>> 
>>>>>>>>> 2.12.10
>>>>>>>>> 3.0.1
>>>>>>>>> 
>>>>>>>>>
>>>>>>>>> 
>>>>>>>>> org.apache.kafka
>>>>>>>>> kafka-clients
>>>>>>>>> 2.1.0
>>>>>>>>> 
>>>>>>>>>
>&

Re: Missing required configuration "partition.assignment.strategy" [ Kafka + Spark Structured Streaming ]

2020-12-07 Thread Amit Joshi
Hi Gabor,

Pls find the logs attached. These are truncated logs.

Command used :
spark-submit --verbose --packages
org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1,com.typesafe:config:1.4.0
--master yarn --deploy-mode cluster --class com.stream.Main --num-executors
2 --driver-memory 2g --executor-cores 1 --executor-memory 4g --files
gs://x/jars_application.conf,gs://x/log4j.properties
gs://x/a-synch-r-1.0-SNAPSHOT.jar
For this I used a snapshot jar, not a fat jar.


Regards
Amit

On Mon, Dec 7, 2020 at 10:15 PM Gabor Somogyi 
wrote:

> Well, I can't do miracle without cluster and logs access.
> What I don't understand why you need fat jar?! Spark libraries normally
> need provided scope because it must exist on all machines...
> I would take a look at the driver and executor logs which contains the
> consumer configs + I would take a look at the exact version of the consumer
> (this is printed also in the same log)
>
> G
>
>
> On Mon, Dec 7, 2020 at 5:07 PM Amit Joshi 
> wrote:
>
>> Hi Gabor,
>>
>> The code is very simple Kafka consumption of data.
>> I guess, it may be the cluster.
>> Can you please point out the possible problem toook for in the cluster?
>>
>> Regards
>> Amit
>>
>> On Monday, December 7, 2020, Gabor Somogyi 
>> wrote:
>>
>>> + Adding back user list.
>>>
>>> I've had a look at the Spark code and it's not
>>> modifying "partition.assignment.strategy" so the problem
>>> must be either in your application or in your cluster setup.
>>>
>>> G
>>>
>>>
>>> On Mon, Dec 7, 2020 at 12:31 PM Gabor Somogyi 
>>> wrote:
>>>
>>>> It's super interesting because that field has default value:
>>>> *org.apache.kafka.clients.consumer.RangeAssignor*
>>>>
>>>> On Mon, 7 Dec 2020, 10:51 Amit Joshi, 
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Thnks for the reply.
>>>>> I did tried removing the client version.
>>>>> But got the same exception.
>>>>>
>>>>>
>>>>> Thnks
>>>>>
>>>>> On Monday, December 7, 2020, Gabor Somogyi 
>>>>> wrote:
>>>>>
>>>>>> +1 on the mentioned change, Spark uses the following kafka-clients
>>>>>> library:
>>>>>>
>>>>>> 2.4.1
>>>>>>
>>>>>> G
>>>>>>
>>>>>>
>>>>>> On Mon, Dec 7, 2020 at 9:30 AM German Schiavon <
>>>>>> gschiavonsp...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I think the issue is that you are overriding the kafka-clients that
>>>>>>> comes with  spark-sql-kafka-0-10_2.12
>>>>>>>
>>>>>>>
>>>>>>> I'd try removing the kafka-clients and see if it works
>>>>>>>
>>>>>>>
>>>>>>> On Sun, 6 Dec 2020 at 08:01, Amit Joshi 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi All,
>>>>>>>>
>>>>>>>> I am running the Spark Structured Streaming along with Kafka.
>>>>>>>> Below is the pom.xml
>>>>>>>>
>>>>>>>> 
>>>>>>>> 1.8
>>>>>>>> 1.8
>>>>>>>> UTF-8
>>>>>>>> 
>>>>>>>> 2.12.10
>>>>>>>> 3.0.1
>>>>>>>> 
>>>>>>>>
>>>>>>>> 
>>>>>>>> org.apache.kafka
>>>>>>>> kafka-clients
>>>>>>>> 2.1.0
>>>>>>>> 
>>>>>>>>
>>>>>>>> 
>>>>>>>> org.apache.spark
>>>>>>>> spark-core_2.12
>>>>>>>> ${sparkVersion}
>>>>>>>> provided
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> org.apache.spark
>>>>>>>> spark-sql_2.12
>>>>>>>> ${sparkVersion}
>>>>>>>>     provided
>>>>>>>> 
>>>>>

Re: Missing required configuration "partition.assignment.strategy" [ Kafka + Spark Structured Streaming ]

2020-12-07 Thread Amit Joshi
Hi Gabor,

The code is a very simple Kafka consumption of data.
I guess it may be the cluster.
Can you please point out the possible problems to look for in the cluster?

Regards
Amit

On Monday, December 7, 2020, Gabor Somogyi 
wrote:

> + Adding back user list.
>
> I've had a look at the Spark code and it's not modifying 
> "partition.assignment.strategy"
> so the problem
> must be either in your application or in your cluster setup.
>
> G
>
>
> On Mon, Dec 7, 2020 at 12:31 PM Gabor Somogyi 
> wrote:
>
>> It's super interesting because that field has default value:
>> *org.apache.kafka.clients.consumer.RangeAssignor*
>>
>> On Mon, 7 Dec 2020, 10:51 Amit Joshi,  wrote:
>>
>>> Hi,
>>>
>>> Thnks for the reply.
>>> I did tried removing the client version.
>>> But got the same exception.
>>>
>>>
>>> Thnks
>>>
>>> On Monday, December 7, 2020, Gabor Somogyi 
>>> wrote:
>>>
>>>> +1 on the mentioned change, Spark uses the following kafka-clients
>>>> library:
>>>>
>>>> 2.4.1
>>>>
>>>> G
>>>>
>>>>
>>>> On Mon, Dec 7, 2020 at 9:30 AM German Schiavon <
>>>> gschiavonsp...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I think the issue is that you are overriding the kafka-clients that
>>>>> comes with  spark-sql-kafka-0-10_2.12
>>>>>
>>>>>
>>>>> I'd try removing the kafka-clients and see if it works
>>>>>
>>>>>
>>>>> On Sun, 6 Dec 2020 at 08:01, Amit Joshi 
>>>>> wrote:
>>>>>
>>>>>> Hi All,
>>>>>>
>>>>>> I am running the Spark Structured Streaming along with Kafka.
>>>>>> Below is the pom.xml
>>>>>>
>>>>>> 
>>>>>> 1.8
>>>>>> 1.8
>>>>>> UTF-8
>>>>>> 
>>>>>> 2.12.10
>>>>>> 3.0.1
>>>>>> 
>>>>>>
>>>>>> 
>>>>>> org.apache.kafka
>>>>>> kafka-clients
>>>>>> 2.1.0
>>>>>> 
>>>>>>
>>>>>> 
>>>>>> org.apache.spark
>>>>>> spark-core_2.12
>>>>>> ${sparkVersion}
>>>>>> provided
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> org.apache.spark
>>>>>> spark-sql_2.12
>>>>>> ${sparkVersion}
>>>>>> provided
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> org.apache.spark
>>>>>> spark-sql-kafka-0-10_2.12
>>>>>> ${sparkVersion}
>>>>>> 
>>>>>>
>>>>>> Building the fat jar with shade plugin. The jar is running as expected 
>>>>>> in my local setup with the command
>>>>>>
>>>>>> *spark-submit --master local[*] --class com.stream.Main --num-executors 
>>>>>> 3 --driver-memory 2g --executor-cores 2 --executor-memory 3g 
>>>>>> prism-event-synch-rta.jar*
>>>>>>
>>>>>> But when I am trying to run same jar in spark cluster using yarn with 
>>>>>> command:
>>>>>>
>>>>>> *spark-submit --master yarn --deploy-mode cluster --class 
>>>>>> com.stream.Main --num-executors 4 --driver-memory 2g --executor-cores 1 
>>>>>> --executor-memory 4g  gs://jars/prism-event-synch-rta.jar*
>>>>>>
>>>>>> Getting the this exception:
>>>>>>
>>>>>>  
>>>>>>
>>>>>>
>>>>>> *at org.apache.spark.sql.execution.streaming.StreamExecution.org 
>>>>>> <http://org.apache.spark.sql.execution.streaming.StreamExecution.org>$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:355)
>>>>>> at 
>>>>>> org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:245)Caused
>>>>>>  by: org.apache.kafka.common.config.ConfigException: Missing required 
>>>>>> configuration "partition.assignment.strategy" which has no default 
>>>>>> value. at 
>>>>>> org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:124)*
>>>>>>
>>>>>> I have tried setting up the "partition.assignment.strategy", then also 
>>>>>> its not working.
>>>>>>
>>>>>> Please help.
>>>>>>
>>>>>>
>>>>>> Regards
>>>>>>
>>>>>> Amit Joshi
>>>>>>
>>>>>>


Re: Missing required configuration "partition.assignment.strategy" [ Kafka + Spark Structured Streaming ]

2020-12-07 Thread Amit Joshi
Hi All,

Thanks for the reply.
I did try removing the client version,
but got the same exception.

Though, one point: there are some dependent artifacts which I am using that
contain a reference to that Kafka client version.
I am trying to make an uber jar, which will choose the closest version.

Thanks


On Monday, December 7, 2020, Gabor Somogyi 
wrote:

> +1 on the mentioned change, Spark uses the following kafka-clients library:
>
> 2.4.1
>
> G
>
>
> On Mon, Dec 7, 2020 at 9:30 AM German Schiavon 
> wrote:
>
>> Hi,
>>
>> I think the issue is that you are overriding the kafka-clients that comes
>> with  spark-sql-kafka-0-10_2.12
>>
>>
>> I'd try removing the kafka-clients and see if it works
>>
>>
>> On Sun, 6 Dec 2020 at 08:01, Amit Joshi 
>> wrote:
>>
>>> Hi All,
>>>
>>> I am running the Spark Structured Streaming along with Kafka.
>>> Below is the pom.xml
>>>
>>> 
>>> 1.8
>>> 1.8
>>> UTF-8
>>> 
>>> 2.12.10
>>> 3.0.1
>>> 
>>>
>>> 
>>> org.apache.kafka
>>> kafka-clients
>>> 2.1.0
>>> 
>>>
>>> 
>>> org.apache.spark
>>> spark-core_2.12
>>> ${sparkVersion}
>>> provided
>>> 
>>> 
>>> 
>>> org.apache.spark
>>> spark-sql_2.12
>>> ${sparkVersion}
>>> provided
>>> 
>>> 
>>> 
>>> org.apache.spark
>>> spark-sql-kafka-0-10_2.12
>>> ${sparkVersion}
>>> 
>>>
>>> Building the fat jar with shade plugin. The jar is running as expected in 
>>> my local setup with the command
>>>
>>> *spark-submit --master local[*] --class com.stream.Main --num-executors 3 
>>> --driver-memory 2g --executor-cores 2 --executor-memory 3g 
>>> prism-event-synch-rta.jar*
>>>
>>> But when I am trying to run same jar in spark cluster using yarn with 
>>> command:
>>>
>>> *spark-submit --master yarn --deploy-mode cluster --class com.stream.Main 
>>> --num-executors 4 --driver-memory 2g --executor-cores 1 --executor-memory 
>>> 4g  gs://jars/prism-event-synch-rta.jar*
>>>
>>> Getting the this exception:
>>>
>>> 
>>>
>>>
>>> *at org.apache.spark.sql.execution.streaming.StreamExecution.org 
>>> <http://org.apache.spark.sql.execution.streaming.StreamExecution.org>$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:355)
>>>at 
>>> org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:245)Caused
>>>  by: org.apache.kafka.common.config.ConfigException: Missing required 
>>> configuration "partition.assignment.strategy" which has no default value. 
>>> at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:124)*
>>>
>>> I have tried setting up the "partition.assignment.strategy", then also its 
>>> not working.
>>>
>>> Please help.
>>>
>>>
>>> Regards
>>>
>>> Amit Joshi
>>>
>>>


Missing required configuration "partition.assignment.strategy" [ Kafka + Spark Structured Streaming ]

2020-12-05 Thread Amit Joshi
Hi All,

I am running the Spark Structured Streaming along with Kafka.
Below is the pom.xml


<properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <scala.version>2.12.10</scala.version>
    <sparkVersion>3.0.1</sparkVersion>
</properties>

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.1.0</version>
</dependency>

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.12</artifactId>
    <version>${sparkVersion}</version>
    <scope>provided</scope>
</dependency>

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.12</artifactId>
    <version>${sparkVersion}</version>
    <scope>provided</scope>
</dependency>

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql-kafka-0-10_2.12</artifactId>
    <version>${sparkVersion}</version>
</dependency>

Building the fat jar with shade plugin. The jar is running as expected
in my local setup with the command

*spark-submit --master local[*] --class com.stream.Main
--num-executors 3 --driver-memory 2g --executor-cores 2
--executor-memory 3g prism-event-synch-rta.jar*

But when I am trying to run same jar in spark cluster using yarn with command:

*spark-submit --master yarn --deploy-mode cluster --class
com.stream.Main --num-executors 4 --driver-memory 2g --executor-cores
1 --executor-memory 4g  gs://jars/prism-event-synch-rta.jar*

Getting the this exception:




at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:355)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:245)
Caused by: org.apache.kafka.common.config.ConfigException: Missing required configuration "partition.assignment.strategy" which has no default value.
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:124)

I have tried setting "partition.assignment.strategy" explicitly, but it is
still not working.

Please help.


Regards

Amit Joshi


Re: [Spark SQL] does pyspark udf support spark.sql inside def

2020-09-30 Thread Amit Joshi
Can you please post the schemas of both the tables?

On Wednesday, September 30, 2020, Lakshmi Nivedita 
wrote:

> Thank you for the clarification.I would like to how can I  proceed for
> this kind of scenario in pyspark
>
> I have a scenario subtracting the total number of days with the number of
> holidays in pyspark by using dataframes
>
> I have a table with dates  date1  date2 in one table and number of
> holidays in another table
> df1 = select date1,date2 ,ctry ,unixtimestamp(date2-date1)
> totalnumberofdays  - df2.holidays  from table A;
>
> df2 = select count(holiays)
> from table B
> where holidate >= 'date1'(table A)
> and holidate < = date2(table A)
> and country = A.ctry(table A)
>
> Except country no other column is not a unique key
>
>
>
>
> On Wed, Sep 30, 2020 at 6:05 PM Sean Owen  wrote:
>
>> No, you can't use the SparkSession from within a function executed by
>> Spark tasks.
>>
>> On Wed, Sep 30, 2020 at 7:29 AM Lakshmi Nivedita 
>> wrote:
>>
>>> Here is a spark udf structure as an example
>>>
>>> Def sampl_fn(x):
>>>Spark.sql(“select count(Id) from sample Where Id = x ”)
>>>
>>>
>>> Spark.udf.register(“sample_fn”, sample_fn)
>>>
>>> Spark.sql(“select id, sampl_fn(Id) from example”)
>>>
>>> Advance Thanks for the help
>>> --
>>> k.Lakshmi Nivedita
>>>
>>>
>>>
>>>
>>
>>


Re: Query around Spark Checkpoints

2020-09-27 Thread Amit Joshi
Hi,

As far as I know, it depends on whether you are using Spark Streaming or
Structured Streaming.
In Spark Streaming you can write your own code to checkpoint.
But in the case of Structured Streaming it should be a file location.
But the main question is why you want to checkpoint in
NoSQL, as it is eventually consistent.
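
A minimal Scala sketch for the Structured Streaming case (assuming df is a
streaming DataFrame; the paths are placeholders):

    val query = df.writeStream
      .format("parquet")
      .option("path", "/data/out")
      .option("checkpointLocation", "/checkpoints/my_query")  // an HDFS-compatible directory, not a NoSQL store
      .start()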


Regards
Amit

On Sunday, September 27, 2020, Debabrata Ghosh 
wrote:

> Hi,
> I had a query around Spark checkpoints - Can I store the checkpoints
> in NoSQL or Kafka instead of Filesystem ?
>
> Regards,
>
> Debu
>


Re: [pyspark 2.4] broadcasting DataFrame throws error

2020-09-18 Thread Amit Joshi
Hi Rishi,

Maybe you have already done these steps.
Can you check the size of the dataframe you are trying to broadcast using
logInfo(SizeEstimator.estimate(df))
and adjust the driver memory accordingly?
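
A small Scala sketch of that check (df_agg is the aggregated frame from the
quoted code below; SizeEstimator gives a rough in-memory size of the object):

    import org.apache.spark.util.SizeEstimator

    println(s"Estimated size of df_agg: ${SizeEstimator.estimate(df_agg)} bytes")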

There is one more issue which I found in Spark 2:
broadcast does not work on cached data. It is possible this may not be the
issue; you can check for the same problem at your end.

https://github.com/apache/spark/blame/master/sql/core/src/main/scala/org/apache/spark/sql/execution/CacheManager.scala#L219

And can you please tell which issue solved in Spark 3 you are
referring to?

Regards
Amit


On Saturday, September 19, 2020, Rishi Shah 
wrote:

> Thanks Amit. I have tried increasing driver memory, and also tried increasing
> the max result size returned to the driver. Nothing works; I believe Spark is
> not able to determine that the result to be broadcast is small enough,
> because the input data is huge. When I tried this in two stages, writing out
> the grouped data and using that to join with broadcast, Spark had no issue
> broadcasting it.
>
> When I was checking the Spark 3 documentation, it seemed like this issue may
> have been addressed in Spark 3 but not in earlier versions?
>
> On Thu, Sep 17, 2020 at 11:35 PM Amit Joshi 
> wrote:
>
>> Hi,
>>
>> I think the problem lies with driver memory. Broadcast in Spark works by
>> collecting all the data to the driver, which then broadcasts it to all the
>> executors. (A different strategy could be employed for the transfer, like
>> BitTorrent, though.)
>>
>> Please try increasing the driver memory. See if it works.
>>
>> Regards,
>> Amit
>>
>>
>> On Thursday, September 17, 2020, Rishi Shah 
>> wrote:
>>
>>> Hello All,
>>>
>>> Hope this email finds you well. I have a dataframe of size 8TB (parquet
>>> snappy compressed), however I group it by a column and get a much smaller
>>> aggregated dataframe of size 700 rows (just two columns, key and count).
>>> When I use it like below to broadcast this aggregated result, it throws
>>> dataframe can not be broadcasted error.
>>>
>>> df_agg = df.groupBy('column1').count().cache()
>>> # df_agg.count()
>>> df_join = df.join(broadcast(df_agg), 'column1', 'left_outer')
>>> df_join.write.parquet('PATH')
>>>
>>> The same code works with input df size of 3TB without any modifications.
>>>
>>> Any suggestions?
>>>
>>> --
>>> Regards,
>>>
>>> Rishi Shah
>>>
>>
>
> --
> Regards,
>
> Rishi Shah
>


Re: [pyspark 2.4] broadcasting DataFrame throws error

2020-09-17 Thread Amit Joshi
Hi,

I think the problem lies with driver memory. Broadcast in Spark works by
collecting all the data to the driver, which then broadcasts it to all the
executors. (A different strategy could be employed for the transfer, like
BitTorrent, though.)

Please try increasing the driver memory. See if it works.
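
A rough sketch of the knobs that usually matter here; all values are
placeholders:

// spark.driver.memory only takes effect at launch, so pass it on submit, e.g.
//   spark-submit --driver-memory 8g --conf spark.driver.maxResultSize=4g ...
// The broadcast threshold can be raised at runtime if Spark refuses to broadcast:
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", 100L * 1024 * 1024)  // 100 MB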

Regards,
Amit


On Thursday, September 17, 2020, Rishi Shah 
wrote:

> Hello All,
>
> Hope this email finds you well. I have a dataframe of size 8TB (parquet
> snappy compressed), however I group it by a column and get a much smaller
> aggregated dataframe of size 700 rows (just two columns, key and count).
> When I use it like below to broadcast this aggregated result, it throws
> dataframe can not be broadcasted error.
>
> df_agg = df.groupBy('column1').count().cache()
> # df_agg.count()
> df_join = df.join(broadcast(df_agg), 'column1', 'left_outer')
> df_join.write.parquet('PATH')
>
> The same code works with input df size of 3TB without any modifications.
>
> Any suggestions?
>
> --
> Regards,
>
> Rishi Shah
>


Re: Submitting Spark Job thru REST API?

2020-09-02 Thread Amit Joshi
Hi,
There are other options, like Apache Livy, which lets you submit the job
using a REST API.
Another option is using AWS Data Pipeline to configure your job as an EMR
activity.
To activate the pipeline, you need the console or a program.
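
A minimal sketch of a Livy batch submission using only the JDK; the host,
jar location, and class name are placeholders. Livy then runs spark-submit
on the cluster side, so the submitting container only needs HTTP access.

import java.net.{HttpURLConnection, URL}
import java.nio.charset.StandardCharsets

val payload =
  """{"file": "s3://my-bucket/jars/my-spark-job.jar", "className": "com.example.MyJob"}"""

val conn = new URL("http://livy-host:8998/batches")
  .openConnection().asInstanceOf[HttpURLConnection]
conn.setRequestMethod("POST")
conn.setRequestProperty("Content-Type", "application/json")
conn.setDoOutput(true)
conn.getOutputStream.write(payload.getBytes(StandardCharsets.UTF_8))
println(s"Livy responded with HTTP ${conn.getResponseCode}")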

Regards
Amit

On Thursday, September 3, 2020, Eric Beabes 
wrote:

> Under Spark 2.4 is it possible to submit a Spark job thru REST API - just
> like the Flink job?
>
> Here's the use case: We need to submit a Spark Job to the EMR cluster but
> our security team is not allowing us to submit a job from the Master node
> or thru UI. They want us to create a "Docker Container" to submit a job.
>
> If it's possible to submit the Spark job thru REST then we don't need to
> install Spark/Hadoop JARs on the Container. If it's not possible to use
> REST API, can we do something like this?
>
> spark-2.4.6-bin-hadoop2.7/bin/spark-submit \
>  --class myclass --master "yarn url" --deploy-mode cluster \
>
> In other words, instead of --master yarn, specify a URL. Would this still
> work the same way?
>


Re: [Spark Kafka Structured Streaming] Adding partition and topic to the kafka dynamically

2020-08-28 Thread Amit Joshi
Hi Jungtaek,

Thanks for the input. I tried it and it worked.
I had gotten confused earlier after reading some blogs.
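
For anyone who finds this thread later, a minimal sketch of the
pattern-based subscription Jungtaek describes (broker address and topic
pattern are placeholders); new partitions of already-subscribed topics are
also picked up without a restart:

val stream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9092")  // placeholder
  .option("subscribePattern", "orders-.*")            // topics created later that match are picked up
  .load()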

Regards
Amit

On Friday, August 28, 2020, Jungtaek Lim 
wrote:

> Hi Amit,
>
> if I remember correctly, you don't need to restart the query to reflect
> the newly added topic and partition, if your subscription covers the topic
> (like subscribe pattern). Please try it out.
>
> Hope this helps.
>
> Thanks,
> Jungtaek Lim (HeartSaVioR)
>
> On Fri, Aug 28, 2020 at 1:56 PM Amit Joshi 
> wrote:
>
>> Any pointers will be appreciated.
>>
>> On Thursday, August 27, 2020, Amit Joshi 
>> wrote:
>>
>>> Hi All,
>>>
>>> I am trying to understand the effect of adding topics and partitions to
>>> a topic in kafka, which is being consumed by spark structured streaming
>>> applications.
>>>
>>> Do we have to restart the spark structured streaming application to read
>>> from the newly added topic?
>>> Do we have to restart the spark structured streaming application to read
>>> from the newly added partition to a topic?
>>>
>>> Kafka consumers have a meta data refresh property that works without
>>> restarting.
>>>
>>> Thanks in advance.
>>>
>>> Regards
>>> Amit Joshi
>>>
>>


Re: [Spark Kafka Structured Streaming] Adding partition and topic to the kafka dynamically

2020-08-27 Thread Amit Joshi
Any pointers will be appreciated.

On Thursday, August 27, 2020, Amit Joshi  wrote:

> Hi All,
>
> I am trying to understand the effect of adding topics and partitions to a
> topic in kafka, which is being consumed by spark structured streaming
> applications.
>
> Do we have to restart the spark structured streaming application to read
> from the newly added topic?
> Do we have to restart the spark structured streaming application to read
> from the newly added partition to a topic?
>
> Kafka consumers have a meta data refresh property that works without
> restarting.
>
> Thanks in advance.
>
> Regards
> Amit Joshi
>


[Spark Kafka Structured Streaming] Adding partition and topic to the kafka dynamically

2020-08-27 Thread Amit Joshi
Hi All,

I am trying to understand the effect of adding new topics, and adding
partitions to an existing topic, in Kafka when it is being consumed by Spark
Structured Streaming applications.

Do we have to restart the spark structured streaming application to read
from the newly added topic?
Do we have to restart the spark structured streaming application to read
from the newly added partition to a topic?

Kafka consumers have a meta data refresh property that works without
restarting.

Thanks in advance.

Regards
Amit Joshi


[Spark-Kafka-Streaming] Verifying the approach for multiple queries

2020-08-09 Thread Amit Joshi
Hi,

I have a scenario where a Kafka topic is being written with different types
of JSON records.
I have to regroup the records based on their type, then fetch the matching
schema, parse them, and write them out as Parquet.
I tried Structured Streaming, but the dynamic schema is a constraint.
So I have used DStreams, although I know the approach I have taken may not
be good.
Could anyone please let me know whether this approach will scale, and its
possible pros and cons?
I am collecting the grouped records and then forming a DataFrame for each
group.
createKeyValue -> This is creating the key value pair with schema
information.

stream.foreachRDD { (rdd, time) =>
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  // Group records by key (which carries the schema) and collect the grouped values to the driver.
  val result = rdd.map(createKeyValue).reduceByKey((x, y) => x ++ y).collect()
  result.foreach(x => println(x._1))
  result.map { x =>
    val spark =
      SparkSession.builder().config(rdd.sparkContext.getConf).getOrCreate()
    import spark.implicits._
    import org.apache.spark.sql.functions._
    // Build a DataFrame per group and parse its JSON with that group's schema (x._1._2).
    val df = x._2.toDF("value")
    df.select(from_json($"value", x._1._2, Map.empty[String, String]).as("data"))
      .select($"data.*")
      //.withColumn("entity", lit("invoice"))
      .withColumn("year", year($"TimeUpdated"))
      .withColumn("month", month($"TimeUpdated"))
      .withColumn("day", dayofmonth($"TimeUpdated"))
      .write.partitionBy("name", "year", "month", "day").mode("append").parquet(path)
  }
  stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}


[SPARK-STRUCTURED-STREAMING] IllegalStateException: Race while writing batch 4

2020-08-07 Thread Amit Joshi
Hi,

I have 2 Spark Structured Streaming queries writing to the same output path
in object storage.
Once in a while I am getting the "IllegalStateException: Race while writing
batch 4".
I found that this error is because there are two writers writing to the
output path. The file streaming sink doesn't support multiple writers.
It assumes there is only one writer writing to the path. Each query needs
to use its own output directory.

Is there a way for both queries to write their output to the same path? I
need the output at the same path.
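
A minimal sketch of the usual workaround, assuming the two inputs share a
schema and can be combined so that a single query (and therefore a single
writer) owns the shared path; all names and paths are placeholders:

// Combine both streaming inputs so only one writer owns the output path.
val combined = streamDf1.unionByName(streamDf2)

combined.writeStream
  .format("parquet")
  .option("path", "s3a://my-bucket/shared-output")
  .option("checkpointLocation", "s3a://my-bucket/checkpoints/shared-output")
  .start()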

Regards
Amit Joshi


[SPARK-SQL] How to return GenericInternalRow from spark udf

2020-08-06 Thread Amit Joshi
Hi,

I have a Spark UDF written in Scala that takes a couple of columns, applies
some logic, and outputs an InternalRow. A Spark schema (StructType) is also
present. But when I try to return the InternalRow from the UDF, there is an
exception:

java.lang.UnsupportedOperationException: Schema for type
org.apache.spark.sql.catalyst.GenericInternalRow is not supported

  // peopleSchema, emplSchema and getGenericInternalRow are helpers defined elsewhere.
  // Note: "type" is a Scala keyword, so the parameter needs back-ticks to compile.
  val getData = (hash: String, `type`: String) => {
    val schema = hash match {
      case "people" => peopleSchema
      case "empl"   => emplSchema
    }
    getGenericInternalRow(schema)
  }

  val data = udf(getData)

Spark Version : 2.4.5
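
One pattern that avoids this exception is to return an external Row instead
of a GenericInternalRow and to give the UDF an explicit return schema. A
minimal sketch with a placeholder schema is below; note that a single UDF
still has one fixed return type, so switching schemas per row would need a
common (for example merged) StructType.

import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf
import org.apache.spark.sql.types._

// Placeholder schema standing in for peopleSchema / emplSchema.
val returnSchema = StructType(Seq(
  StructField("name", StringType),
  StructField("age", IntegerType)))

// Placeholder logic; a real implementation would build the Row from the inputs.
val getRowData = (hash: String, kind: String) => Row("alice", 30)

// udf(f, dataType) lets a Scala function return a Row matching the given schema.
val data = udf(getRowData, returnSchema)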


Please Help.


Regards

Amit Joshi