Hello all, I'm hoping someone can help me with this hardware question. We have 
an upcoming need to run our machine learning application on physical hardware. 
Up until now, we've just rented a cloud-based high-performance cluster, so my 
understanding of the real-world performance tradeoffs between different 
processor architectures is limited. 

Assuming the same amount of memory, which do you think is preferable for running a 
standalone Spark deployment: 

Configuration option 1: 
- Intel Xeon® processor E7-8893 v2 (6C/12T per socket, 3.4 GHz, LLC: 37.5 MB, 
Turbo: yes, 8.0 GT/s, memory bus: 1600 MHz, 155 W), 24 cores total (4 sockets) 

Configuration option 2: 
- SPARC64™ X+ at 3.2 GHz, 4-socket 4U server with 16 cores/socket (64 cores 
total) 

I guess one concern is that I couldn't find any examples of anyone running 
Apache Spark on the Fujitsu SPARC64 X+ architecture. In theory, Spark itself 
should run without issues thanks to the JVM, but under the hood we link to some 
numerical computation libraries built with C++, which get called as an external 
process on the worker nodes via numpy during the iterations. Does anyone think 
that might cause problems? 
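For context, the kind of sanity check I had in mind is just confirming what 
architecture each worker actually reports before we assume our C++ builds will 
load there. This is only a sketch (the `report_arch` helper and the `sc` 
variable are placeholders, not our real code): 

```python
import platform

def report_arch(_):
    # Returns the CPU architecture string of the machine this runs on,
    # e.g. "x86_64" on the Xeon option vs. "sparc64" on SPARC64 X+.
    # Any native C++ extension we ship has to be built for this target.
    return platform.machine()

# With a live SparkContext (hypothetical `sc`), something like
#   sc.parallelize(range(64)).map(report_arch).distinct().collect()
# would list every architecture present in the cluster.
print(report_arch(None))
```

If that came back with an architecture we don't have binaries for, that would 
answer the question before any Spark job even runs. 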

Any insights are welcome! 

Thanks in advance! 
Greg 
