Re: Does Spark have a plan to move away from sun.misc.Unsafe?

2018-10-25 Thread Vadim Semenov
Here you go: the umbrella ticket is https://issues.apache.org/jira/browse/SPARK-24417 and the sun.misc.Unsafe one is https://issues.apache.org/jira/browse/SPARK-24421

On Wed, Oct 24, 2018 at 8:08 PM kant kodali wrote:
>
> Hi All,
>
> Does Spark have a plan to move away from sun.misc.Unsafe to

Re: [Spark UI] Spark 2.3.1 UI no longer respects spark.ui.retainedJobs

2018-10-25 Thread Patrick Brown
Done: https://issues.apache.org/jira/browse/SPARK-25837

On Thu, Oct 25, 2018 at 10:21 AM Marcelo Vanzin wrote:
> Ah, that makes more sense. Could you file a bug with that information
> so we don't lose track of this?
>
> Thanks
> On Wed, Oct 24, 2018 at 6:13 PM Patrick Brown wrote:
> >
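For context, `spark.ui.retainedJobs` is set like any other Spark configuration property. A minimal sketch (values illustrative only; per this thread, Spark 2.3.1 may not honor the cap until SPARK-25837 is resolved):

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: the UI retention settings this thread discusses,
// applied at session build time. The values are illustrative.
val spark = SparkSession.builder()
  .appName("ui-retention-demo")
  .master("local[*]")
  .config("spark.ui.retainedJobs", "50")    // cap on jobs kept in the UI
  .config("spark.ui.retainedStages", "100") // related cap on retained stages
  .getOrCreate()
```

These caps bound how much job/stage history the driver keeps for the web UI, which matters for long-running applications that submit many jobs.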

Re: [Spark UI] Spark 2.3.1 UI no longer respects spark.ui.retainedJobs

2018-10-25 Thread Marcelo Vanzin
Ah, that makes more sense. Could you file a bug with that information so we don't lose track of this?

Thanks

On Wed, Oct 24, 2018 at 6:13 PM Patrick Brown wrote:
>
> On my production application I am running ~200 jobs at once, but continue to
> submit jobs in this manner for sometimes ~1 hour.
>

Spark SQL Error

2018-10-25 Thread Sai Kiran Kodukula
Hi all,

I am getting the following error message in one of my Spark SQL jobs. I realize this may be related to the Spark version or a configuration change, but I want to understand the details and the resolution.

Thanks

spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of
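The property named in the error is an internal Spark SQL codegen flag. As a workaround sketch (assuming the flag exists in your Spark version; whether disabling it is appropriate depends on what the full error says), it can be turned off at runtime:

```scala
// Sketch: disable the two-level aggregate hash-map codegen path that the
// error message refers to. This is an internal flag, so its behavior can
// differ across Spark versions.
spark.conf.set("spark.sql.codegen.aggregate.map.twolevel.enabled", "false")
```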

Re: Watermarking without aggregation with Structured Streaming

2018-10-25 Thread sanjay_awat
Hello peay-2,

Were you able to find a solution to your problem? Were you able to get the watermark timestamp through a function?

Regards,
Sanjay

peay-2 wrote
> Thanks for the pointers. I guess right now the only workaround would be to
> apply a "dummy" aggregation (e.g., group by the
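The "dummy" aggregation workaround quoted above can be sketched as follows. This is a hedged sketch, not runnable without a Spark cluster and a real streaming source; the column names and intervals are hypothetical:

```scala
import org.apache.spark.sql.functions._

// Hypothetical streaming DataFrame with an "eventTime" timestamp column;
// the built-in "rate" source stands in for a real one.
val events = spark.readStream
  .format("rate")
  .load()
  .withColumnRenamed("timestamp", "eventTime")

// A watermark only takes effect through a stateful operator, hence the
// "dummy" group-by: it forces Spark to track event-time state so that
// rows older than the watermark are dropped.
val watermarked = events
  .withWatermark("eventTime", "10 minutes")
  .groupBy(window(col("eventTime"), "1 minute"), col("value"))
  .agg(first("eventTime").as("eventTime"))
```

The cost of this workaround is that the query becomes an aggregation, so the output mode and downstream schema change accordingly.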

Re: [External Sender] Having access to spark results

2018-10-25 Thread Affan Syed
Femi,

We have a solution that needs to run both on-prem and in the cloud. Not sure how that impacts anything; what we want is to run an analytical query on a large dataset (ours is over Cassandra) -- so batch in that sense, but think on-demand -- and then have the result be entirely (not

Fwd: Having access to spark results

2018-10-25 Thread onmstester onmstester
What about using cache() or saving as a global temp table for subsequent access?

Sent using Zoho Mail

---- Forwarded message ----
From: Affan Syed
To: "spark users"
Date: Thu, 25 Oct 2018 10:58:43 +0330
Subject: Having access to spark results
---- Forwarded message ----
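The two suggestions above can be sketched as follows (assuming a DataFrame `result` produced by the analytical query; the view name is hypothetical):

```scala
// Option 1: keep the result in executor memory for repeated access
// within the same application.
result.cache()

// Option 2: register a global temp view. It is visible to other sessions
// of the same Spark application, under the "global_temp" database.
result.createGlobalTempView("analysis_result")
val again = spark.sql("SELECT * FROM global_temp.analysis_result")
```

Note that both options live only as long as the Spark application itself; they do not make results visible to an external web application.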

Re: [External Sender] Having access to spark results

2018-10-25 Thread Femi Anthony
What sort of environment are you running Spark on: in the cloud, or on premise? Is it a real-time or batch-oriented application? Please provide more details.

Femi

On Thu, Oct 25, 2018 at 3:29 AM Affan Syed wrote:
> Spark users,
> We really would want to get an input here about how the results

Having access to spark results

2018-10-25 Thread Affan Syed
Spark users,

We would really like some input here on how the results of a Spark query can be made accessible to a web application. Given how widely Spark is used in industry, I would have expected this to have lots of answers/tutorials, but I didn't find anything.

Here
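One common pattern for this (a sketch of one option, not a definitive answer; the connection details and table names are hypothetical) is to have Spark persist the query result to an external store that the web application already reads from:

```scala
// Sketch: run the analytical query, then write the (small) result set out
// to a JDBC-accessible table that the web application can query directly.
val result = spark.sql(
  "SELECT region, count(*) AS cnt FROM events GROUP BY region")

result.write
  .mode("overwrite")
  .format("jdbc")
  .option("url", "jdbc:postgresql://dbhost:5432/appdb") // hypothetical
  .option("dbtable", "report.region_counts")            // hypothetical
  .option("user", "app")
  .option("password", "secret")
  .save()
```

This decouples the web application's latency from Spark entirely; alternatives include writing to Parquet on shared storage or exposing tables through the Spark Thrift Server.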