Nothing particularly custom. We've tested with small (4-node)
development clusters, single-node pseudo-clusters, and larger clusters,
using plain-vanilla Hadoop 2.2, 2.3, or CDH5 (beta and beyond), in Spark
standalone (master), Spark local, and Spark YARN (client and cluster)
modes, with total memory resources ranging from 4GB to 256GB+.
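For anyone unfamiliar with how those modes differ from the application's
point of view, here is a minimal sketch (using Spark 1.0-era master URL
strings; the host name, thread count, and object name below are
hypothetical, not from our actual setup):

import org.apache.spark.{SparkConf, SparkContext}

object DeployModeSketch {
  def main(args: Array[String]): Unit = {
    // Master URLs for the modes mentioned above:
    //   "local[4]"           -- Spark local, 4 worker threads
    //   "spark://host:7077"  -- standalone cluster (Spark master); host is hypothetical
    //   "yarn-client"        -- YARN, driver runs on the submitting machine
    //   "yarn-cluster"       -- YARN, driver runs inside the cluster (normally
    //                           supplied via spark-submit --master, not hard-coded)
    val master = if (args.nonEmpty) args(0) else "local[4]"
    val conf = new SparkConf().setAppName("deploy-mode-sketch").setMaster(master)
    val sc = new SparkContext(conf)
    println(sc.parallelize(1 to 1000).count()) // trivial job to exercise the executors
    sc.stop()
  }
}

Typically only the master URL (plus memory settings such as
spark.executor.memory) changes between such environments; the
application code itself stays the same.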
K

On 07/08/2014 12:04 PM, Surendranauth Hiraman wrote: