Re: spark as data warehouse?

2022-03-26 Thread Cheng Pan
Sorry I missed the original channel, added it back.

-

I don't know much about dbt, but if it supports Hive, it should support Kyuubi.
Basically, Kyuubi is a gateway between your client (e.g. Beeline, the Hive
JDBC client) and a compute engine (e.g. Spark, Flink, Trino). I think the
most valuable things are:
1) Kyuubi reuses the Hive Thrift protocol, that is, you can treat Kyuubi
as a HiveServer2 and keep using Beeline or the Hive JDBC driver to
connect to Kyuubi and run SQL (in your compute engine's dialect). Ideally, if
a tool claims it supports Hive, then it supports Kyuubi.
2) Kyuubi manages the compute engine's lifecycle and share level, making
a good trade-off between isolation and resource consumption. [1]

PS: Kyuubi's support for Spark is very mature; you can find lots of
production use cases here [2]. Support for Flink and Trino is in the beta
phase.

[1] https://kyuubi.apache.org/docs/latest/deployment/engine_share_level.html
[2] https://github.com/apache/incubator-kyuubi/discussions/925
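For example, because Kyuubi speaks the HiveServer2 Thrift protocol, an unmodified Beeline can connect to it; the host below is made up, and 10009 is Kyuubi's default frontend port:

```shell
# Connect with plain Beeline, exactly as for HiveServer2; the SQL is
# executed by the backing engine (e.g. Spark SQL) behind the gateway.
beeline -u "jdbc:hive2://kyuubi.example.com:10009/default" \
        -n alice \
        -e "SHOW DATABASES"
```

This requires a running Kyuubi endpoint, so it is a connection sketch rather than something runnable as-is.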

Thanks,
Cheng Pan

---

Thanks, I'll check it out.
I have a use case where we want to use dbt as a data modeling tool.
Will it take dbt queries and create the resulting models?
I see it supports Trino, so I am guessing yes.
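For what it's worth, dbt's Spark adapter can talk the HiveServer2 Thrift protocol, so a profile along these lines might work against a Kyuubi endpoint (host, schema, and user are hypothetical, and actual compatibility would need testing):

```yaml
# profiles.yml (hypothetical sketch using the dbt-spark adapter)
kyuubi_warehouse:
  target: dev
  outputs:
    dev:
      type: spark
      method: thrift          # speak HiveServer2 Thrift, as Kyuubi does
      host: kyuubi.example.com
      port: 10009             # Kyuubi's default frontend port
      schema: analytics
      user: alice
```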

I would love to contribute to it as well.

Thanks
Deepak

---

Spark SQL can indeed take over your Hive workloads, and if you're
looking for an open source solution, Apache Kyuubi(Incubating)[1]
might help.

[1] https://kyuubi.apache.org/

Thanks,
Cheng Pan


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: spark as data warehouse?

2022-03-25 Thread Deepak Sharma
It can be used as a warehouse, but then you have to keep long-running Spark
jobs.
That is possible using cached DataFrames or Datasets.

Thanks
Deepak

Thanks
Deepak
www.bigdatabig.com
www.keosha.net


spark as data warehouse?

2022-03-25 Thread capitnfrakass
We have been using Hive for building our data warehouse.
Do you think Spark can be used for this purpose? It's even more real-time
than Hive.


Thanks.




Re: Spark based Data Warehouse

2017-11-17 Thread lucas.g...@gmail.com
We are using Spark on Kubernetes on AWS (it's a long story), but it does
work. It's still on the raw side, but we've been pretty successful.

We configured our cluster primarily with kube-aws and auto-scaling groups.
There are gotchas there, but so far we've been quite successful.

Gary Lucas



Re: Spark based Data Warehouse

2017-11-17 Thread ashish rawat
Thanks everyone for the suggestions. Do any of you handle automatic scale-up
and scale-down of your underlying Spark clusters on AWS?



Re: Spark based Data Warehouse

2017-11-13 Thread lucas.g...@gmail.com
Hi Ashish, bear in mind that EMR has some additional tooling available that
smooths out some S3 problems that you will almost certainly encounter.

We are using Spark / S3 (not on EMR) and have encountered issues with file
consistency. You can deal with it, but be aware it's additional technical
debt that you'll need to own. We didn't want to own an HDFS cluster, so we
consider it worthwhile.

Here are some additional resources:  The video is Steve Loughran talking
about S3.
https://medium.com/@subhojit20_27731/apache-spark-and-amazon-s3-gotchas-and-best-practices-a767242f3d98
https://www.youtube.com/watch?v=ND4L_zSDqF0

For the record we use S3 heavily but tend to drop our processed data into
databases so they can be more easily consumed by visualization tools.

Good luck!

Gary Lucas



Re: Spark based Data Warehouse

2017-11-13 Thread Affan Syed
Another option that we are trying internally is to use Mesos to isolate
different jobs or groups. Within a single group, using Livy to create
different Spark contexts also works.

- Affan



Re: Spark based Data Warehouse

2017-11-13 Thread ashish rawat
Thanks Sky Yin. This really helps.



Re: Spark based Data Warehouse

2017-11-13 Thread Sky Yin
We are running Spark on AWS EMR as a data warehouse. All data are in S3,
with metadata in the Hive metastore.

We have internal tools to create Jupyter notebooks on the dev cluster. I
guess you could use Zeppelin instead, or Livy?

We run Genie as a job server for the prod cluster, so users have to submit
their queries through Genie. For better resource utilization, we rely on
YARN dynamic allocation to balance the load of multiple jobs/queries in
Spark.
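The dynamic-allocation setup mentioned above boils down to a few Spark properties; the executor bounds below are placeholders to tune, and classic dynamic allocation on YARN also needs the external shuffle service:

```shell
# Illustrative spark-submit flags; executor bounds are made-up values.
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=50 \
  --conf spark.dynamicAllocation.executorIdleTimeout=60s \
  warehouse_query_job.py
```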

Hope this helps.

On Sat, Nov 11, 2017 at 11:21 PM ashish rawat  wrote:

> Hello Everyone,
>
> I was trying to understand if anyone here has tried a data warehouse
> solution using S3 and Spark SQL. Out of multiple possible options
> (redshift, presto, hive etc), we were planning to go with Spark SQL, for
> our aggregates and processing requirements.
>
> If anyone has tried it out, would like to understand the following:
>
>1. Is Spark SQL and UDF, able to handle all the workloads?
>2. What user interface did you provide for data scientist, data
>engineers and analysts
>3. What are the challenges in running concurrent queries, by many
>users, over Spark SQL? Considering Spark still does not provide spill to
>disk, in many scenarios, are there frequent query failures when executing
>concurrent queries
>4. Are there any open source implementations, which provide something
>similar?
>
>
> Regards,
> Ashish
>


Re: Spark based Data Warehouse

2017-11-13 Thread Deepak Sharma
If you have only one user, it's still possible to execute non-blocking,
long-running queries.
The best way is to have different users, with pre-assigned resources, run
their queries.

HTH

Thanks
Deepak


Re: Spark based Data Warehouse

2017-11-13 Thread ashish rawat
Thanks everyone. I am still not clear on the right way to support multiple
users running concurrent queries with Spark. Is it through multiple Spark
contexts, or through Livy (which creates only a single Spark context)?

Also, what kind of isolation is possible with Spark SQL? If one user fires
a big query, would that choke all other queries in the cluster?

Regards,
Ashish


Re: Spark based Data Warehouse

2017-11-12 Thread Patrick Alwell
Alcon,

You can most certainly do this. I've done benchmarking with Spark SQL and the 
TPC-DS queries using S3 as the filesystem.

Zeppelin and Livy server work well for the dashboarding and concurrent query 
issues:  https://hortonworks.com/blog/livy-a-rest-interface-for-apache-spark/

Livy Server will allow you to create multiple spark contexts via REST: 
https://livy.incubator.apache.org/
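A hedged sketch of that REST flow (the host is made up; 8998 is Livy's default port, and the session id in the second call is assumed for illustration):

```shell
# Create an isolated Spark context (a Livy "session") per user or team.
curl -s -X POST -H "Content-Type: application/json" \
     -d '{"kind": "pyspark"}' \
     http://livy.example.com:8998/sessions

# Once the session is idle, run code in that context (session id 0 assumed).
curl -s -X POST -H "Content-Type: application/json" \
     -d '{"code": "spark.sql(\"SELECT 1\").show()"}' \
     http://livy.example.com:8998/sessions/0/statements
```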

If you are looking for broad SQL functionality I’d recommend instantiating a 
Hive context. And Spark is able to spill to disk --> 
https://spark.apache.org/faq.html

There are multiple companies running spark within their data warehouse 
solutions: 
https://ibmdatawarehousing.wordpress.com/2016/10/12/steinbach_dashdb_local_spark/

Edmunds used Spark to allow business analysts to point Spark to files in S3 and 
infer schema: https://www.youtube.com/watch?v=gsR1ljgZLq0

Recommend running some benchmarks and testing query scenarios for your end 
users; but it sounds like you’ll be using it for exploratory analysis. Spark is 
great for this ☺

-Pat




On Sun, Nov 12, 2017 at 1:47 PM, Gourav Sengupta 
<gourav.sengu...@gmail.com> wrote:
Dear Ashish,
what you are asking for involves at least a few weeks of dedicated 
understanding of your use case, and then it takes at least 3 to 4 months to 
even propose a solution. You could even build a fantastic data warehouse just 
using C++. The matter depends on lots of conditions. I just think that your 
approach and question need a lot of modification.

Regards,
Gourav

On Sun, Nov 12, 2017 at 6:19 PM, Phillip Henry 
<londonjava...@gmail.com> wrote:
Hi, Ashish.
You are correct in saying that not *all* functionality of Spark is 
spill-to-disk, but I am not sure how this pertains to a "concurrent user 
scenario". Each executor runs in its own JVM and is therefore isolated from 
others. That is, if the JVM of one user dies, this should not affect another 
user who is running their own jobs in their own JVMs. The amount of resources 
used by a user can be controlled by the resource manager.
AFAIK, you configure something like YARN to limit the number of cores and the 
amount of memory in the cluster a certain user or group is allowed to use for 
their job. This is obviously quite a coarse-grained approach as (to my 
knowledge) IO is not throttled. I believe people generally use something like 
Apache Ambari to keep an eye on network and disk usage to mitigate problems in 
a shared cluster.
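As a hedged illustration of those queue-based limits (queue names and percentages are invented), a YARN CapacityScheduler fragment might look like:

```xml
<!-- capacity-scheduler.xml fragment: each team gets a capped share. -->
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>analysts,etl</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.analysts.capacity</name>
    <value>40</value>   <!-- guaranteed share, in percent -->
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.analysts.maximum-capacity</name>
    <value>60</value>   <!-- hard cap on elastic growth -->
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.etl.capacity</name>
    <value>60</value>
  </property>
</configuration>
```

As Phillip notes, this caps cores and memory per queue but does not throttle IO.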

If a user has badly designed their query, it may very well fail with OOMEs, 
but this can happen irrespective of whether one user or many are using the 
cluster at a given moment in time.

Does this help?
Regards,
Phillip

On Sun, Nov 12, 2017 at 5:50 PM, ashish rawat 
<dceash...@gmail.com> wrote:
Thanks Jorn and Phillip. My question was specifically for anyone who has tried 
building a system using Spark SQL as a data warehouse. I was trying to check 
whether someone has tried it and can help with the kinds of workloads that 
worked and the ones that had problems.

Regarding spill to disk: I might be wrong, but not all functionality of Spark 
spills to disk, so it still doesn't provide DB-like reliability in execution. 
In DBs, queries get slow, but they don't fail or go out of memory, 
specifically in concurrent-user scenarios.

Regards,
Ashish

On Nov 12, 2017 3:02 PM, "Phillip Henry" 
<londonjava...@gmail.com> wrote:
Agree with Jorn. The answer is: it depends.

In the past, I've worked with data scientists who are happy to use the Spark 
CLI. Again, the answer is "it depends" (in this case, on the skills of your 
customers).
Regarding sharing resources, different teams were limited to their own queue so 
they could not hog all the resources.

Re: Spark based Data Warehouse

2017-11-12 Thread Vadim Semenov
It's actually quite simple to answer:

> 1. Are Spark SQL and UDFs able to handle all the workloads?
Yes.

> 2. What user interface did you provide for data scientists, data engineers,
> and analysts?
Home-grown platform, EMR, Zeppelin.

> 3. What are the challenges in running concurrent queries, by many users,
> over Spark SQL? Considering Spark still does not provide spill to disk, in
> many scenarios, are there frequent query failures when executing concurrent
> queries?
You can run separate Spark Contexts, so jobs will be isolated.

> 4. Are there any open source implementations which provide something
> similar?
Yes, many.
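For illustration, the per-job isolation Vadim describes comes from giving each user their own Spark application (their own SparkContext, driver, and executor JVMs), each capped by the resource manager. A hedged sketch of per-user defaults follows; the queue name and all sizes are assumptions, not recommendations:

```properties
# Hypothetical per-user spark-defaults: one Spark application per user/job,
# so a failure or OOM in one user's executors cannot touch another's.
spark.master                          yarn
spark.submit.deployMode               cluster
# Submit into this team's YARN queue (queue name is an assumption)
spark.yarn.queue                      analytics
spark.executor.memory                 8g
spark.executor.cores                  2
spark.dynamicAllocation.enabled       true
spark.dynamicAllocation.maxExecutors  10
```

With dynamic allocation capped at 10 executors, a single user's session can grow under load but never monopolise the queue (note that dynamic allocation on YARN also requires the external shuffle service).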


On Sun, Nov 12, 2017 at 1:47 PM, Gourav Sengupta <gourav.sengu...@gmail.com>
wrote:

> Dear Ashish,
> what you are asking for involves at least a few weeks of dedicated
> understanding of your use case, and then it takes at least 3 to 4 months to
> even propose a solution. You can even build a fantastic data warehouse just
> using C++. The matter depends on lots of conditions. I just think that your
> approach and question need a lot of modification.
>
> Regards,
> Gourav
>
> On Sun, Nov 12, 2017 at 6:19 PM, Phillip Henry <londonjava...@gmail.com>
> wrote:
>
>> Hi, Ashish.
>>
>> You are correct in saying that not *all* functionality of Spark is
>> spill-to-disk but I am not sure how this pertains to a "concurrent user
>> scenario". Each executor will run in its own JVM and is therefore isolated
>> from others. That is, if the JVM of one user dies, this should not affect
>> another user who is running their own jobs in their own JVMs. The amount of
>> resources used by a user can be controlled by the resource manager.
>>
>> AFAIK, you configure something like YARN to limit the number of cores and
>> the amount of memory in the cluster a certain user or group is allowed to
>> use for their job. This is obviously quite a coarse-grained approach as (to
>> my knowledge) IO is not throttled. I believe people generally use something
>> like Apache Ambari to keep an eye on network and disk usage to mitigate
>> problems in a shared cluster.
>>
>> If the user has badly designed their query, it may very well fail with
>> OOMEs but this can happen irrespective of whether one user or many is using
>> the cluster at a given moment in time.
>>
>> Does this help?
>>
>> Regards,
>>
>> Phillip
>>
>>
>> On Sun, Nov 12, 2017 at 5:50 PM, ashish rawat <dceash...@gmail.com>
>> wrote:
>>
>>> Thanks Jorn and Phillip. My question was specifically to anyone who has
>>> tried creating a system using Spark SQL as a Data Warehouse. I was trying to
>>> check whether someone has tried it, and whether they can help with the kinds
>>> of workloads which worked and the ones which had problems.
>>>
>>> Regarding spill to disk, I might be wrong, but not all functionality of
>>> Spark is spill to disk. So it still doesn't provide DB-like reliability in
>>> execution. In the case of DBs, queries get slow, but they don't fail or go
>>> out of memory, specifically in concurrent user scenarios.
>>>
>>> Regards,
>>> Ashish
>>>
>>> On Nov 12, 2017 3:02 PM, "Phillip Henry" <londonjava...@gmail.com>
>>> wrote:
>>>
>>> Agree with Jorn. The answer is: it depends.
>>>
>>> In the past, I've worked with data scientists who are happy to use the
>>> Spark CLI. Again, the answer is "it depends" (in this case, on the skills
>>> of your customers).
>>>
>>> Regarding sharing resources, different teams were limited to their own
>>> queue so they could not hog all the resources. However, people within a
>>> team had to do some horse trading if they had a particularly intensive job
>>> to run. I did feel that this was an area that could be improved. It may be
>>> by now, I've just not looked into it for a while.
>>>
>>> BTW I'm not sure what you mean by "Spark still does not provide spill to
>>> disk" as the FAQ says "Spark's operators spill data to disk if it does not
>>> fit in memory" (http://spark.apache.org/faq.html). So, your data will
>>> not normally cause OutOfMemoryErrors (certain terms and conditions may
>>> apply).
>>>
>>> My 2 cents.
>>>
>>> Phillip
>>>
>>>
>>>
>>> On Sun, Nov 12, 2017 at 9:14 AM, Jörn Franke <jornfra...@gmail.com>
>>> wrote:
>>>
>>>> What do you mean all possible workloads?
>>>> You cannot prepare any system to do all possible processing.

Re: Spark based Data Warehouse

2017-11-12 Thread Gourav Sengupta
Dear Ashish,
what you are asking for involves at least a few weeks of dedicated
understanding of your use case, and then it takes at least 3 to 4 months to
even propose a solution. You can even build a fantastic data warehouse just
using C++. The matter depends on lots of conditions. I just think that your
approach and question need a lot of modification.

Regards,
Gourav

On Sun, Nov 12, 2017 at 6:19 PM, Phillip Henry <londonjava...@gmail.com>
wrote:

> Hi, Ashish.
>
> You are correct in saying that not *all* functionality of Spark is
> spill-to-disk but I am not sure how this pertains to a "concurrent user
> scenario". Each executor will run in its own JVM and is therefore isolated
> from others. That is, if the JVM of one user dies, this should not affect
> another user who is running their own jobs in their own JVMs. The amount of
> resources used by a user can be controlled by the resource manager.
>
> AFAIK, you configure something like YARN to limit the number of cores and
> the amount of memory in the cluster a certain user or group is allowed to
> use for their job. This is obviously quite a coarse-grained approach as (to
> my knowledge) IO is not throttled. I believe people generally use something
> like Apache Ambari to keep an eye on network and disk usage to mitigate
> problems in a shared cluster.
>
> If the user has badly designed their query, it may very well fail with
> OOMEs but this can happen irrespective of whether one user or many is using
> the cluster at a given moment in time.
>
> Does this help?
>
> Regards,
>
> Phillip
>
>
> On Sun, Nov 12, 2017 at 5:50 PM, ashish rawat <dceash...@gmail.com> wrote:
>
>> Thanks Jorn and Phillip. My question was specifically to anyone who has
>> tried creating a system using Spark SQL as a Data Warehouse. I was trying to
>> check whether someone has tried it, and whether they can help with the kinds
>> of workloads which worked and the ones which had problems.
>>
>> Regarding spill to disk, I might be wrong, but not all functionality of
>> Spark is spill to disk. So it still doesn't provide DB-like reliability in
>> execution. In the case of DBs, queries get slow, but they don't fail or go
>> out of memory, specifically in concurrent user scenarios.
>>
>> Regards,
>> Ashish
>>
>> On Nov 12, 2017 3:02 PM, "Phillip Henry" <londonjava...@gmail.com> wrote:
>>
>> Agree with Jorn. The answer is: it depends.
>>
>> In the past, I've worked with data scientists who are happy to use the
>> Spark CLI. Again, the answer is "it depends" (in this case, on the skills
>> of your customers).
>>
>> Regarding sharing resources, different teams were limited to their own
>> queue so they could not hog all the resources. However, people within a
>> team had to do some horse trading if they had a particularly intensive job
>> to run. I did feel that this was an area that could be improved. It may be
>> by now, I've just not looked into it for a while.
>>
>> BTW I'm not sure what you mean by "Spark still does not provide spill to
>> disk" as the FAQ says "Spark's operators spill data to disk if it does not
>> fit in memory" (http://spark.apache.org/faq.html). So, your data will
>> not normally cause OutOfMemoryErrors (certain terms and conditions may
>> apply).
>>
>> My 2 cents.
>>
>> Phillip
>>
>>
>>
>> On Sun, Nov 12, 2017 at 9:14 AM, Jörn Franke <jornfra...@gmail.com>
>> wrote:
>>
>>> What do you mean all possible workloads?
>>> You cannot prepare any system to do all possible processing.
>>>
>>> We do not know the requirements of your data scientists now or in the
>>> future so it is difficult to say. How do they work currently without the
>>> new solution? Do they all work on the same data? I bet you will receive on
>>> your email a lot of private messages trying to sell their solution that
>>> solves everything - with the information you provided this is impossible to
>>> say.
>>>
>>> Then with every system: have incremental releases, but have them in short
>>> time frames - do not engineer a big system that you will deliver in 2
>>> years. In the cloud you have the perfect possibility to scale feature-wise
>>> but also infrastructure-wise.
>>>
>>> The challenge with concurrent queries is the right definition of the
>>> scheduler (e.g. the fair scheduler), so that no single query takes all the
>>> resources and long-running queries do not starve.
>>>
>>> User interfaces: what could help are notebooks (Jupyter etc.), but you may
>>> need to train your data scientists. Some may know or prefer other tools.

Re: Spark based Data Warehouse

2017-11-12 Thread Phillip Henry
Hi, Ashish.

You are correct in saying that not *all* functionality of Spark is
spill-to-disk but I am not sure how this pertains to a "concurrent user
scenario". Each executor will run in its own JVM and is therefore isolated
from others. That is, if the JVM of one user dies, this should not affect
another user who is running their own jobs in their own JVMs. The amount of
resources used by a user can be controlled by the resource manager.

AFAIK, you configure something like YARN to limit the number of cores and
the amount of memory in the cluster a certain user or group is allowed to
use for their job. This is obviously quite a coarse-grained approach as (to
my knowledge) IO is not throttled. I believe people generally use something
like Apache Ambari to keep an eye on network and disk usage to mitigate
problems in a shared cluster.
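The coarse-grained YARN limits described above are typically expressed as scheduler queues. A hedged sketch using the CapacityScheduler follows; the queue names and percentages are purely illustrative:

```xml
<!-- Sketch of capacity-scheduler.xml: queue names and percentages are
     illustrative assumptions, not a recommendation. -->
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>analysts,engineering</value>
  </property>
  <property>
    <!-- analysts may use at most 30% of cluster memory/vcores... -->
    <name>yarn.scheduler.capacity.root.analysts.capacity</name>
    <value>30</value>
  </property>
  <property>
    <!-- ...and can never burst beyond 40%, even if the cluster is idle -->
    <name>yarn.scheduler.capacity.root.analysts.maximum-capacity</name>
    <value>40</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.engineering.capacity</name>
    <value>70</value>
  </property>
</configuration>
```

The FairScheduler achieves the same thing with weights instead of percentages; as noted, IO is not throttled by either.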

If the user has badly designed their query, it may very well fail with
OOMEs but this can happen irrespective of whether one user or many is using
the cluster at a given moment in time.

Does this help?

Regards,

Phillip


On Sun, Nov 12, 2017 at 5:50 PM, ashish rawat <dceash...@gmail.com> wrote:

> Thanks Jorn and Phillip. My question was specifically to anyone who has
> tried creating a system using Spark SQL as a Data Warehouse. I was trying to
> check whether someone has tried it, and whether they can help with the kinds
> of workloads which worked and the ones which had problems.
>
> Regarding spill to disk, I might be wrong, but not all functionality of
> Spark is spill to disk. So it still doesn't provide DB-like reliability in
> execution. In the case of DBs, queries get slow, but they don't fail or go
> out of memory, specifically in concurrent user scenarios.
>
> Regards,
> Ashish
>
> On Nov 12, 2017 3:02 PM, "Phillip Henry" <londonjava...@gmail.com> wrote:
>
> Agree with Jorn. The answer is: it depends.
>
> In the past, I've worked with data scientists who are happy to use the
> Spark CLI. Again, the answer is "it depends" (in this case, on the skills
> of your customers).
>
> Regarding sharing resources, different teams were limited to their own
> queue so they could not hog all the resources. However, people within a
> team had to do some horse trading if they had a particularly intensive job
> to run. I did feel that this was an area that could be improved. It may be
> by now, I've just not looked into it for a while.
>
> BTW I'm not sure what you mean by "Spark still does not provide spill to
> disk" as the FAQ says "Spark's operators spill data to disk if it does not
> fit in memory" (http://spark.apache.org/faq.html). So, your data will not
> normally cause OutOfMemoryErrors (certain terms and conditions may apply).
>
> My 2 cents.
>
> Phillip
>
>
>
> On Sun, Nov 12, 2017 at 9:14 AM, Jörn Franke <jornfra...@gmail.com> wrote:
>
>> What do you mean all possible workloads?
>> You cannot prepare any system to do all possible processing.
>>
>> We do not know the requirements of your data scientists now or in the
>> future so it is difficult to say. How do they work currently without the
>> new solution? Do they all work on the same data? I bet you will receive on
>> your email a lot of private messages trying to sell their solution that
>> solves everything - with the information you provided this is impossible to
>> say.
>>
>> Then with every system: have incremental releases, but have them in short
>> time frames - do not engineer a big system that you will deliver in 2
>> years. In the cloud you have the perfect possibility to scale feature-wise
>> but also infrastructure-wise.
>>
>> The challenge with concurrent queries is the right definition of the
>> scheduler (e.g. the fair scheduler), so that no single query takes all the
>> resources and long-running queries do not starve.
>>
>> User interfaces: what could help are notebooks (Jupyter etc.), but you may
>> need to train your data scientists. Some may know or prefer other tools.
>>
>> On 12. Nov 2017, at 08:32, Deepak Sharma <deepakmc...@gmail.com> wrote:
>>
>> I am looking for a similar solution, more aligned to a data scientist group.
>> The concern I have is about supporting complex aggregations at runtime.
>>
>> Thanks
>> Deepak
>>
>> On Nov 12, 2017 12:51, "ashish rawat" <dceash...@gmail.com> wrote:
>>
>>> Hello Everyone,
>>>
>>> I was trying to understand if anyone here has tried a data warehouse
>>> solution using S3 and Spark SQL. Out of multiple possible options
>>> (redshift, presto, hive etc), we were planning to go with Spark SQL, for
>>> our aggregates and processing requirements.

Re: Spark based Data Warehouse

2017-11-12 Thread ashish rawat
Thanks Jorn and Phillip. My question was specifically to anyone who has
tried creating a system using Spark SQL as a Data Warehouse. I was trying to
check whether someone has tried it, and whether they can help with the kinds
of workloads which worked and the ones which had problems.

Regarding spill to disk, I might be wrong, but not all functionality of
Spark is spill to disk. So it still doesn't provide DB-like reliability in
execution. In the case of DBs, queries get slow, but they don't fail or go
out of memory, specifically in concurrent user scenarios.

Regards,
Ashish

On Nov 12, 2017 3:02 PM, "Phillip Henry" <londonjava...@gmail.com> wrote:

Agree with Jorn. The answer is: it depends.

In the past, I've worked with data scientists who are happy to use the
Spark CLI. Again, the answer is "it depends" (in this case, on the skills
of your customers).

Regarding sharing resources, different teams were limited to their own
queue so they could not hog all the resources. However, people within a
team had to do some horse trading if they had a particularly intensive job
to run. I did feel that this was an area that could be improved. It may be
by now, I've just not looked into it for a while.

BTW I'm not sure what you mean by "Spark still does not provide spill to
disk" as the FAQ says "Spark's operators spill data to disk if it does not
fit in memory" (http://spark.apache.org/faq.html). So, your data will not
normally cause OutOfMemoryErrors (certain terms and conditions may apply).

My 2 cents.

Phillip



On Sun, Nov 12, 2017 at 9:14 AM, Jörn Franke <jornfra...@gmail.com> wrote:

> What do you mean all possible workloads?
> You cannot prepare any system to do all possible processing.
>
> We do not know the requirements of your data scientists now or in the
> future so it is difficult to say. How do they work currently without the
> new solution? Do they all work on the same data? I bet you will receive on
> your email a lot of private messages trying to sell their solution that
> solves everything - with the information you provided this is impossible to
> say.
>
> Then with every system: have incremental releases, but have them in short
> time frames - do not engineer a big system that you will deliver in 2
> years. In the cloud you have the perfect possibility to scale feature-wise
> but also infrastructure-wise.
>
> The challenge with concurrent queries is the right definition of the
> scheduler (e.g. the fair scheduler), so that no single query takes all the
> resources and long-running queries do not starve.
>
> User interfaces: what could help are notebooks (Jupyter etc.), but you may
> need to train your data scientists. Some may know or prefer other tools.
>
> On 12. Nov 2017, at 08:32, Deepak Sharma <deepakmc...@gmail.com> wrote:
>
> I am looking for a similar solution, more aligned to a data scientist group.
> The concern I have is about supporting complex aggregations at runtime.
>
> Thanks
> Deepak
>
> On Nov 12, 2017 12:51, "ashish rawat" <dceash...@gmail.com> wrote:
>
>> Hello Everyone,
>>
>> I was trying to understand if anyone here has tried a data warehouse
>> solution using S3 and Spark SQL. Out of multiple possible options
>> (redshift, presto, hive etc), we were planning to go with Spark SQL, for
>> our aggregates and processing requirements.
>>
>> If anyone has tried it out, would like to understand the following:
>>
>>1. Are Spark SQL and UDFs able to handle all the workloads?
>>2. What user interface did you provide for data scientists, data
>>engineers, and analysts?
>>3. What are the challenges in running concurrent queries, by many
>>users, over Spark SQL? Considering Spark still does not provide spill to
>>disk, in many scenarios, are there frequent query failures when executing
>>concurrent queries?
>>4. Are there any open source implementations which provide something
>>similar?
>>
>>
>> Regards,
>> Ashish
>>
>


Re: Spark based Data Warehouse

2017-11-12 Thread Phillip Henry
Agree with Jorn. The answer is: it depends.

In the past, I've worked with data scientists who are happy to use the
Spark CLI. Again, the answer is "it depends" (in this case, on the skills
of your customers).

Regarding sharing resources, different teams were limited to their own
queue so they could not hog all the resources. However, people within a
team had to do some horse trading if they had a particularly intensive job
to run. I did feel that this was an area that could be improved. It may be
by now, I've just not looked into it for a while.

BTW I'm not sure what you mean by "Spark still does not provide spill to
disk" as the FAQ says "Spark's operators spill data to disk if it does not
fit in memory" (http://spark.apache.org/faq.html). So, your data will not
normally cause OutOfMemoryErrors (certain terms and conditions may apply).
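The "spill to disk" behaviour the FAQ describes is essentially external sorting/aggregation: when a partition does not fit in memory, the operator writes sorted runs to disk and streams a merge over them. A minimal, Spark-independent sketch of the idea in plain Python (the tiny buffer size and the data are illustrative):

```python
import heapq
import os
import tempfile

def _spill(sorted_chunk):
    """Write one sorted run to a temporary file and return its path."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "w") as f:
        f.writelines(f"{v}\n" for v in sorted_chunk)
    return path

def external_sort(values, max_in_memory=4):
    """Sort an iterable that does not 'fit' in memory by spilling sorted
    runs to disk, then k-way merging them back - the same trick Spark's
    operators use instead of throwing OutOfMemoryError."""
    runs, buffer = [], []
    for v in values:
        buffer.append(v)
        if len(buffer) >= max_in_memory:   # "memory" is full: spill a run
            runs.append(_spill(sorted(buffer)))
            buffer = []
    if buffer:
        runs.append(_spill(sorted(buffer)))
    files = [open(r) for r in runs]
    try:
        # heapq.merge streams from the run files; only one line per run
        # needs to be resident at any moment
        return [int(line) for line in heapq.merge(*files, key=int)]
    finally:
        for f in files:
            f.close()
        for r in runs:
            os.remove(r)

print(external_sort([9, 1, 8, 2, 7, 3, 6, 4, 5]))
# prints [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The "terms and conditions" caveat is real, though: buffering a single huge row, or collecting results to the driver, can still OOM.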

My 2 cents.

Phillip



On Sun, Nov 12, 2017 at 9:14 AM, Jörn Franke  wrote:

> What do you mean all possible workloads?
> You cannot prepare any system to do all possible processing.
>
> We do not know the requirements of your data scientists now or in the
> future so it is difficult to say. How do they work currently without the
> new solution? Do they all work on the same data? I bet you will receive on
> your email a lot of private messages trying to sell their solution that
> solves everything - with the information you provided this is impossible to
> say.
>
> Then with every system: have incremental releases, but have them in short
> time frames - do not engineer a big system that you will deliver in 2
> years. In the cloud you have the perfect possibility to scale feature-wise
> but also infrastructure-wise.
>
> The challenge with concurrent queries is the right definition of the
> scheduler (e.g. the fair scheduler), so that no single query takes all the
> resources and long-running queries do not starve.
>
> User interfaces: what could help are notebooks (Jupyter etc.), but you may
> need to train your data scientists. Some may know or prefer other tools.
>
> On 12. Nov 2017, at 08:32, Deepak Sharma  wrote:
>
> I am looking for a similar solution, more aligned to a data scientist group.
> The concern I have is about supporting complex aggregations at runtime.
>
> Thanks
> Deepak
>
> On Nov 12, 2017 12:51, "ashish rawat"  wrote:
>
>> Hello Everyone,
>>
>> I was trying to understand if anyone here has tried a data warehouse
>> solution using S3 and Spark SQL. Out of multiple possible options
>> (redshift, presto, hive etc), we were planning to go with Spark SQL, for
>> our aggregates and processing requirements.
>>
>> If anyone has tried it out, would like to understand the following:
>>
>>1. Are Spark SQL and UDFs able to handle all the workloads?
>>2. What user interface did you provide for data scientists, data
>>engineers, and analysts?
>>3. What are the challenges in running concurrent queries, by many
>>users, over Spark SQL? Considering Spark still does not provide spill to
>>disk, in many scenarios, are there frequent query failures when executing
>>concurrent queries?
>>4. Are there any open source implementations which provide something
>>similar?
>>
>>
>> Regards,
>> Ashish
>>
>


Re: Spark based Data Warehouse

2017-11-12 Thread Jörn Franke
What do you mean all possible workloads?
You cannot prepare any system to do all possible processing.

We do not know the requirements of your data scientists now or in the future so 
it is difficult to say. How do they work currently without the new solution? Do 
they all work on the same data? I bet you will receive on your email a lot of 
private messages trying to sell their solution that solves everything - with 
the information you provided this is impossible to say.

Then with every system: have incremental releases, but have them in short time 
frames - do not engineer a big system that you will deliver in 2 years. In the 
cloud you have the perfect possibility to scale feature-wise but also 
infrastructure-wise.

The challenge with concurrent queries is the right definition of the scheduler 
(e.g. the fair scheduler), so that no single query takes all the resources and 
long-running queries do not starve.

User interfaces: what could help are notebooks (Jupyter etc.), but you may need 
to train your data scientists. Some may know or prefer other tools.
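Within a single shared Spark application, the scheduler definition mentioned above is Spark's FAIR scheduler: set `spark.scheduler.mode=FAIR` and describe pools in a `fairscheduler.xml`. A sketch follows; the pool names, weights, and minShare values are assumptions:

```xml
<!-- fairscheduler.xml: pool names, weights and minShare are illustrative -->
<allocations>
  <pool name="interactive">
    <schedulingMode>FAIR</schedulingMode>
    <!-- interactive queries get a double share of task slots... -->
    <weight>2</weight>
    <!-- ...and at least 4 cores, so they are never fully starved -->
    <minShare>4</minShare>
  </pool>
  <pool name="batch">
    <schedulingMode>FIFO</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
</allocations>
```

A session then opts into a pool with `sc.setLocalProperty("spark.scheduler.pool", "interactive")` before running its query; the minShare is what keeps long-running batch jobs from starving interactive ones.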

> On 12. Nov 2017, at 08:32, Deepak Sharma  wrote:
> 
> I am looking for a similar solution, more aligned to a data scientist group.
> The concern I have is about supporting complex aggregations at runtime.
> 
> Thanks
> Deepak
> 
>> On Nov 12, 2017 12:51, "ashish rawat"  wrote:
>> Hello Everyone,
>> 
>> I was trying to understand if anyone here has tried a data warehouse 
>> solution using S3 and Spark SQL. Out of multiple possible options (redshift, 
>> presto, hive etc), we were planning to go with Spark SQL, for our aggregates 
>> and processing requirements.
>> 
>> If anyone has tried it out, would like to understand the following:
>> 1. Are Spark SQL and UDFs able to handle all the workloads?
>> 2. What user interface did you provide for data scientists, data engineers,
>> and analysts?
>> 3. What are the challenges in running concurrent queries, by many users, over
>> Spark SQL? Considering Spark still does not provide spill to disk, in many
>> scenarios, are there frequent query failures when executing concurrent
>> queries?
>> 4. Are there any open source implementations which provide something similar?
>> 
>> Regards,
>> Ashish


Re: Spark based Data Warehouse

2017-11-11 Thread Deepak Sharma
I am looking for a similar solution, more aligned to a data scientist group.
The concern I have is about supporting complex aggregations at runtime.

Thanks
Deepak

On Nov 12, 2017 12:51, "ashish rawat"  wrote:

> Hello Everyone,
>
> I was trying to understand if anyone here has tried a data warehouse
> solution using S3 and Spark SQL. Out of multiple possible options
> (redshift, presto, hive etc), we were planning to go with Spark SQL, for
> our aggregates and processing requirements.
>
> If anyone has tried it out, would like to understand the following:
>
>1. Is Spark SQL and UDF, able to handle all the workloads?
>2. What user interface did you provide for data scientist, data
>engineers and analysts
>3. What are the challenges in running concurrent queries, by many
>users, over Spark SQL? Considering Spark still does not provide spill to
>disk, in many scenarios, are there frequent query failures when executing
>concurrent queries
>4. Are there any open source implementations, which provide something
>similar?
>
>
> Regards,
> Ashish
>


Spark based Data Warehouse

2017-11-11 Thread ashish rawat
Hello Everyone,

I was trying to understand if anyone here has tried a data warehouse
solution using S3 and Spark SQL. Out of multiple possible options
(redshift, presto, hive etc), we were planning to go with Spark SQL, for
our aggregates and processing requirements.

If anyone has tried it out, would like to understand the following:

   1. Are Spark SQL and UDFs able to handle all the workloads?
   2. What user interface did you provide for data scientists, data
   engineers, and analysts?
   3. What are the challenges in running concurrent queries, by many users,
   over Spark SQL? Considering Spark still does not provide spill to disk, in
   many scenarios, are there frequent query failures when executing concurrent
   queries?
   4. Are there any open source implementations which provide something
   similar?
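For concreteness, the "aggregates over S3" workload in question is essentially a partition-pruned scan followed by a group-by aggregate; in Spark SQL it would be a single query over partitioned Parquet. A miniature, Spark-free Python sketch of the same shape (the table layout and column names are invented for illustration):

```python
from collections import defaultdict

# Toy "fact table": in the real system these rows would live in
# dt-partitioned Parquet files on S3 and be scanned by Spark executors.
events = [
    # (dt partition, user_id, revenue)
    ("2017-11-10", "u1", 10.0),
    ("2017-11-11", "u1", 5.0),
    ("2017-11-11", "u2", 7.5),
    ("2017-11-12", "u2", 2.5),
]

def daily_revenue(rows, start_dt):
    """SELECT dt, SUM(revenue) FROM events WHERE dt >= :start_dt GROUP BY dt
    - i.e. partition pruning (the WHERE on dt) then a hash aggregate."""
    totals = defaultdict(float)
    for dt, _user, revenue in rows:
        if dt >= start_dt:          # prune partitions before aggregating
            totals[dt] += revenue
    return dict(sorted(totals.items()))

print(daily_revenue(events, "2017-11-11"))
# prints {'2017-11-11': 12.5, '2017-11-12': 2.5}
```

Spark distributes exactly this pattern across executors, which is why partitioning the S3 layout on the filter column matters so much for this kind of warehouse.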


Regards,
Ashish