Right now, with the structure of your data, it isn't possible. The rows
aren't duplicates of each other: "a" and "b" both exist in the array, so
Spark is correctly performing the join. It looks like you need to model
this data differently to get the result you want.
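To illustrate the point above, here is a plain-Python sketch (not Spark
code) of the semantics of an array_contains join; the column names and
data are hypothetical. A row whose array holds two matching values joins
to two rows on the other side, which looks like duplication but is the
correct join result.

```python
# Hypothetical left table with an array column, and a right table of values.
left = [{"id": 1, "arr": ["a", "b"]}, {"id": 2, "arr": ["d"]}]
right = [{"val": "a"}, {"val": "b"}, {"val": "d"}]

# Rough equivalent of: left.join(right, expr("array_contains(arr, val)"))
joined = [
    (l["id"], r["val"])
    for l in left
    for r in right
    if r["val"] in l["arr"]  # plays the role of array_contains(arr, val)
]

# id=1 matches both "a" and "b", so it appears twice -- these are
# legitimate matches, not duplicate rows.
print(joined)  # [(1, 'a'), (1, 'b'), (2, 'd')]
```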
Hi Mich,
Thanks for the solution, but I am getting duplicate results when using
array_contains. I have explained the scenario below; could you help me
with it? I have tried different ways but have not been able to achieve
the result I want.
For example:
data = [
    ["a"],
    ["b"],
    ["d"],
]