>> We want these user actions to respond within 2 to 5 seconds.

I think this goal is a stretch for Spark. Some queries may run faster than
that even on a large dataset, but in general you can't put an SLA like this
on it. For example, if you have to join some huge datasets, you'll likely
be well over that. Spark is great for huge batch jobs, and at those it'll
be much faster than MapReduce.
I don't think Spark was designed with interactive queries in mind. For
example, although Spark is "in-memory", its in-memory caching is scoped to
a single job. It's not like traditional RDBMS systems, where you have a
persistent "buffer cache" or "in-memory columnar store" (both are Oracle
terms).
If you have multiple users running interactive BI queries, results cached
for the first user wouldn't be reused by the second, unless you invent
something that keeps a persistent Spark context, serves all users' requests
through it, and decides which RDDs to cache, when, and how. A rough sketch
of that idea is below.
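
Purely as an illustration of that idea (not something we've built), here's a
minimal sketch in Scala against the Spark 1.x API: a single long-running
driver caches the fact table once and answers every user's query from the
same context. The path, table, and column names are made up.

    // Hypothetical long-lived "query server" driver: cache once, serve many.
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext(new SparkConf().setAppName("shared-bi-context"))
    val sqlContext = new SQLContext(sc)

    // Load the fact table once and pin it in executor memory.
    val orders = sqlContext.read.parquet("hdfs:///warehouse/sales_orders")
    orders.cache()
    orders.registerTempTable("sales_orders")

    // Every request routed through this one context hits the same cached
    // data, no matter which end user issued it. Deciding what to cache and
    // when to unpersist() it is the part you'd have to build yourself.
    def totalByRegion(region: String) =
      sqlContext.sql(
        s"SELECT region, SUM(amount) AS total FROM sales_orders " +
        s"WHERE region = '$region' GROUP BY region").collect()

I believe the Spark SQL Thrift JDBC server takes roughly this approach (one
shared SQLContext behind a server), so that may be worth a look before
rolling your own.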
At least that's my understanding of how Spark works. If I'm wrong, I'd be
glad to hear it, as we ran into the same questions.

As we use Cloudera's CDH, I'm not sure where Hortonworks is with its Tez
project, but Tez has components that more closely resemble the "buffer
cache" and "in-memory columnar store" caching of traditional RDBMS systems,
and it may get better and/or more predictable performance on BI queries.



-- 
Ruslan Dautkhanov

On Mon, Jul 20, 2015 at 6:04 PM, renga.kannan <renga.kan...@gmail.com>
wrote:

> All,
> I really appreciate anyone's input on this. We have a very simple,
> traditional OLAP query-processing use case, as follows.
>
>
> 1. We have customer sales order data coming from an RDBMS table.
> 2. There are many dimension columns in the sales order table. For each of
> those dimensions, we have individual dimension tables that store the
> dimension record sets.
> 3. We also have some BI-like hierarchies defined for the dimension data
> sets.
>
> What we want for business users is as follows:
>
> 1. We want to show some aggregated values from sales order transaction
> table columns.
> 2. Users would like to filter these by specific dimension values from the
> dimension tables.
> 3. Users should be able to drill down from a higher level to a lower level
> by traversing the hierarchy on a dimension.
>
>
> We want these user actions to respond within 2 to 5 seconds.
>
>
> We are thinking about using Spark as our backend engine to serve data to
> these front-end applications.
>
>
> Has anyone tried using Spark for these kinds of use cases? These are all
> traditional use cases in the BI space. If so, can Spark respond to these
> queries within 2 to 5 seconds for large data sets?
>
> Thanks,
> Renga
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Is-SPARK-is-the-right-choice-for-traditional-OLAP-query-processing-tp23921.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
