Experts,
One of the terms I keep hearing in Big Data is N-tier architecture, used for
availability, performance, and so on. I also hear that Spark, by means of its
query engine and in-memory caching, fits into the middle tier (the application
layer), with HDFS and Hive providing the data tier. Can someone elaborate on
the role of Spark here? For example, a Scala program that we write uses JDBC to
talk to databases, so in that sense is Spark a middle-tier application?
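To make the JDBC point concrete, this is roughly what I mean — a minimal sketch (connection details, table name, and credentials are hypothetical placeholders) of a Spark job that pulls data from an RDBMS over JDBC, caches it in memory, and runs queries against it:

```scala
import org.apache.spark.sql.SparkSession

object MiddleTierSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("middle-tier-sketch")
      .getOrCreate()

    // Hypothetical JDBC URL, table, and credentials -- replace with your own.
    val customers = spark.read
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/service")
      .option("dbtable", "customers")
      .option("user", "app_user")
      .option("password", "secret")
      .load()

    // In-memory caching: the part that seems to put Spark in the middle tier.
    customers.cache()

    // Application-layer logic runs here, against the cached data.
    customers.createOrReplaceTempView("customers")
    spark.sql("SELECT country, count(*) FROM customers GROUP BY country").show()

    spark.stop()
  }
}
```

So here the database (or HDFS/Hive) would be the data tier, and the Spark job sitting between it and the consumers would be the application tier — is that the right way to look at it?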
I hope someone can clarify this and, if so, explain what the best practice
would be for using Spark as a middle tier within Big Data.
Thanks