Keep forgetting to reply to user list...
On Sun, Apr 15, 2018 at 1:58 PM, Mark Hamstra wrote:
> Sure, data locality all the way at the basic storage layer is the easy way
> to avoid paying the costs of remote I/O. My point, though, is that that
> kind of storage
Thanks Mark,
I guess this may be broadened to the concept of separating compute from
storage. Your point that the cost "... can kind of disappear after the data
is first read from the storage layer" reminds me of performing logical I/Os
as opposed to physical I/Os. But again, as you correctly pointed out, the
comparison is bare-metal servers with 2 dedicated clusters (Spark and
Cassandra) versus 1 cluster with colocation. In both cases, a 10 Gbps
dedicated network.
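To make the logical-vs-physical I/O distinction concrete, here is a minimal Python sketch (all names are illustrative, not from any real storage API): the first read of a block is a physical I/O against remote storage, while repeat reads are logical I/Os served from a local cache, which is how the remote-read cost "kind of disappears" after the first pass.

```python
class CachedReader:
    """Toy reader: counts logical reads vs physical (remote) reads."""

    def __init__(self, storage):
        self.storage = storage          # dict: block_id -> bytes, standing in for remote storage
        self.cache = {}                 # local block cache
        self.physical_ios = 0           # reads that went over the network
        self.logical_ios = 0            # all reads, including cache hits

    def read(self, block_id):
        self.logical_ios += 1
        if block_id not in self.cache:  # cache miss: pay the remote round trip once
            self.physical_ios += 1
            self.cache[block_id] = self.storage[block_id]
        return self.cache[block_id]


reader = CachedReader({"b1": b"data1", "b2": b"data2"})
for _ in range(10):
    reader.read("b1")
reader.read("b2")
print(reader.logical_ios, reader.physical_ios)  # 11 logical I/Os, only 2 physical
```

After the working set is cached, throughput is bounded by local (logical) reads rather than the network, which is the effect Mark described.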
On Sat, Apr 14, 2018 at 11:17 PM, Mich Talebzadeh wrote:
Thanks Vincent. You mean 20 times improvement with data being local as
opposed to Spark running on compute nodes?
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Not with Hadoop but with Cassandra: I have seen a 20x improvement from data
locality on partition-optimized Spark jobs.
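As I understand it, gains like the one Vincent describes come from aligning Spark partitions with the database's data placement, so a join proceeds partition-by-partition with no shuffle. A self-contained Python sketch of that idea (hash partitioning and the join helper here are illustrative, not any connector's actual API):

```python
# Illustrative sketch of a partition-aligned (shuffle-free) join.
# When both datasets are partitioned by the same key function, each
# output partition is computed from co-located input partitions only,
# so no record has to cross the network between tasks.

NUM_PARTITIONS = 4

def partition(records, num_partitions=NUM_PARTITIONS):
    """Hash-partition (key, value) pairs, as a partitioner would."""
    parts = [[] for _ in range(num_partitions)]
    for key, value in records:
        parts[hash(key) % num_partitions].append((key, value))
    return parts

def colocated_join(left_parts, right_parts):
    """Join partition-by-partition: same key always lands in the
    same partition index, so only local lookups are needed."""
    joined = []
    for lp, rp in zip(left_parts, right_parts):
        right_index = dict(rp)
        for key, lval in lp:
            if key in right_index:
                joined.append((key, lval, right_index[key]))
    return joined


left = partition([("a", 1), ("b", 2), ("c", 3)])
right = partition([("a", "x"), ("c", "y")])
print(sorted(colocated_join(left, right)))  # [('a', 1, 'x'), ('c', 3, 'y')]
```

The whole win is that the expensive step (repartitioning one side over the network) happens once or not at all, after which every join task reads only local data.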
On Sat, Apr 14, 2018 at 9:17 PM, Mich Talebzadeh wrote:
Hi,
This is a sort of "your mileage may vary" type question.
In a classic Hadoop cluster, one has data locality when each node includes
both the Spark libraries and HDFS data. This helps certain workloads, like
interactive BI queries.
However, running Spark over remote storage, say Isilon scale-out NAS,
instead of