Hello Winston,
Thanks again for this response; I will check this one out.
On Wed, Aug 2, 2023 at 3:50 PM Winston Lai wrote:
Hi Vibhatha,
I helped you post this question to another community. There is one answer by
someone else for your reference.
To access the logical plan or optimized plan, you can register a custom
QueryExecutionListener and retrieve the plans during the query execution
process. Here's an example from the spark-shell:
Type in expressions to have them evaluated.
Type :help for more information.

scala> val df = spark.range(0, 10)
df: org.apache.spark.sql.Dataset[Long] = [id: bigint]

scala> df.queryExecution
res0: org.apache.spark.sql.execution.QueryExecution =
== Parsed Logical Plan ==
Range (0, 10, step=1, splits=S
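The listener route suggested in the thread can be sketched roughly as follows. This is a minimal sketch, not the exact code from the other community; the class name and printouts are illustrative, but `QueryExecutionListener`, its two callbacks, and `spark.listenerManager.register` are the actual Spark APIs:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.execution.QueryExecution
import org.apache.spark.sql.util.QueryExecutionListener

// minimal sketch: print each query's plans as it finishes
class PlanLoggingListener extends QueryExecutionListener {
  override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit = {
    println(qe.analyzed)       // analyzed logical plan
    println(qe.optimizedPlan)  // optimized logical plan
    println(qe.executedPlan)   // physical plan
  }
  override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit = ()
}

val spark = SparkSession.builder.getOrCreate()
spark.listenerManager.register(new PlanLoggingListener)
// any subsequent action, e.g. spark.range(0, 10).count(), will fire onSuccess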
Hi Vibhatha,
How about reading the logical plan from the Spark UI? Do you have access to
the Spark UI? I am not sure what infra you run your Spark jobs on. Usually
you should be able to view the logical and physical plans under the Spark UI,
in text form at least. It is independent of the language, what platform you
are running your Spark jobs on, what cloud services you are using ...
On Wednesday, August 2, 2023, Vibhatha Abeykoon wrote:
Hello,
I recently upgraded the Spark version to 3.4.1 and I have encountered a few
issues. In my previous code, I was able to extract the logical plan using
`df.queryExecution` (df: DataFrame, in Scala), but it seems like in the
latest API it is not supported. Is there a way to extract the logical plan
or optimized plan from a dataframe?
I'm implementing the materialized feature for Spark. I have built a customized
listener that logs the logical plan and physical plan of each SQL query. After
some analysis, I can get the most valuable subtree that needs to be
materialized. Then I need to restore the subtree of the plan back
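One way to pull candidate subtrees out of a captured plan is `TreeNode.collect` on the optimized logical plan. A rough sketch; the choice of `Aggregate` as the candidate node type and the function name are illustrative, not part of the original message:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.catalyst.plans.logical.{Aggregate, LogicalPlan}

// sketch: collect aggregate subtrees from a DataFrame's optimized plan
def candidateSubtrees(df: DataFrame): Seq[LogicalPlan] =
  df.queryExecution.optimizedPlan.collect {
    case agg: Aggregate => agg  // each Aggregate subtree is a materialization candidate
  }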
About the other question, you may use `getNumPartitions`.
On Sat, Apr 20, 2019 at 2:40 PM kanchan tewary wrote:
Dear All,
Greetings!
I am new to Apache Spark and working on RDDs using pyspark. I am trying to
understand the logical plan provided by the toDebugString function, but I
find two issues: a) the output is not formatted when I print the result,
and b) I do not see the number of partitions shown. Can anyone help?
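For reference, a minimal sketch in the Scala shell (the pyspark calls are analogous): `toDebugString` returns the lineage as an indented multi-line string, and the partition count, which `toDebugString` does not include, is available from `getNumPartitions`. The example RDD here is illustrative:

// sketch, in spark-shell: inspect an RDD's lineage and partition count
val rdd = spark.sparkContext.parallelize(1 to 100, numSlices = 4)
val transformed = rdd.map(_ * 2).filter(_ % 3 == 0)

// the lineage, one indented line per RDD in the chain
println(transformed.toDebugString)

// the partition count is not part of toDebugString's output; ask for it directly
println(transformed.getNumPartitions)  // 4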
What about accumulators?
On 14. Aug 2017, at 20:15, Lukas Bradley wrote:
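Accumulators give coarse progress visibility from inside running tasks. A minimal sketch; the input path, accumulator name, and per-record work are illustrative, while `longAccumulator` and `.value` are the actual SparkContext/accumulator API:

// sketch: track coarse progress of a long-running job with a LongAccumulator
val processed = spark.sparkContext.longAccumulator("recordsProcessed")

val result = spark.sparkContext.textFile("hdfs:///data/input").map { line =>
  processed.add(1)     // updated on the executors as tasks run
  line.toUpperCase     // stand-in for the real per-record work
}
result.count()

// processed.value can be polled from the driver, or read off the Spark UI
println(s"records processed: ${processed.value}")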
Something like this, maybe? (Note that `Dataset.ofRows` is `private[sql]`, so
it has to be called from code placed in the `org.apache.spark.sql` package.)

import org.apache.spark.sql.{DataFrame, Dataset, Row}
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

val df: DataFrame = ???
val spark = df.sparkSession
val plan: LogicalPlan = df.queryExecution.logical

// rebuild a DataFrame from the (possibly rewritten) logical plan
val restored: DataFrame = Dataset.ofRows(spark, plan)
We have had issues with gathering status on long running jobs. We have
attempted to draw parallels between the Spark UI/Monitoring API and our
code base. Due to the separation between code and the execution plan, even
having a guess as to where we are in the process is difficult. The
Not sure what could be done here.

Thanks

On Thu, Jun 30, 2016 at 10:10 PM, Reynold Xin <r...@databricks.com> wrote:
> Which version are you using here? If the underlying files change,
> technically we should go through optimization again.
A logical plan should not change assuming the same DAG diagram is used
throughout.

Have you tried the Spark GUI page under Stages? This is a Spark 2 example:

[image: Inline images 1]

HTH

Dr Mich Talebzadeh
Which version are you using here? If the underlying files change,
technically we should go through optimization again.

Perhaps the real "fix" is to figure out why logical plan creation is so
slow for 700 columns.

On Thu, Jun 30, 2016 at 1:58 PM, Darshan Singh <darshan.m...@gmail.com> wrote:
Is there a way I can use the same logical plan for a query? Everything will
be the same except the underlying file will be different.

The issue is that my query has around 700 columns, and generating the
logical plan takes 20 seconds. This happens every 2 minutes, but every time
the underlying file is different.

I do