1 hour
> (telling from my personal experience)
>
> I guess that would help.
>
> Regards
> Anurag
>
>

Data is currently stored in a relational database but a migration to a
different kind of store is possible.

The naive algorithm for report generation can be summed up as this:

for each report to be generated {
    for each report data point to be calculated {
        calculate data point
        add data point to report
    }
    publish report
}

In order to deal with the upper limits of these values, we will need to
distribute this algorithm to a compute / data cluster as much as possible.

I've read about frameworks such as Apache Spark but also Hadoop, GridGain,
HazelCast and several others, and am still confused as to how each of these
can help us and how they fit together.

Is Spark the right framework for us?
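The nested loops above are embarrassingly parallel across reports, which is exactly the shape of work that Spark (or any of the other frameworks mentioned) distributes. As a minimal local sketch of that parallelism, using only Python's standard library and hypothetical `calculate_data_point` / `publish` placeholders rather than any real Spark API, the outer loop can be farmed out to a worker pool:

```python
# Sketch of the report-generation loop, parallelized per report.
# calculate_data_point and publish are illustrative placeholders; in
# practice the data-point step would query the relational store.
from concurrent.futures import ProcessPoolExecutor

def calculate_data_point(report_id, point_id):
    # Placeholder computation standing in for a real query/calculation.
    return report_id * 100 + point_id

def generate_report(report_id, n_points=3):
    # Inner loop: calculate each data point and add it to the report.
    # e.g. generate_report(1) -> [100, 101, 102]
    return [calculate_data_point(report_id, p) for p in range(n_points)]

def publish(report_id, report):
    # Placeholder for publishing a finished report.
    return (report_id, report)

if __name__ == "__main__":
    report_ids = range(4)
    # Outer loop: reports are independent, so they can run concurrently.
    with ProcessPoolExecutor() as pool:
        for rid, report in zip(report_ids,
                               pool.map(generate_report, report_ids)):
            print(publish(rid, report))
```

In Spark the same pattern would look roughly like `sc.parallelize(report_ids).map(generate_report)`, with the process pool replaced by executors spread across a cluster; the harder design question is usually how the data-point queries reach the cluster's storage layer.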
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Is-Spark-right-for-us-tp26412.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.