Hi all, is there a way in Spark to set up a connection pool?
For example: I'm going to use a relational DB and Cassandra and join data
between them.
How can I control and cache DB connections?
Thanks all!
Mark
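(As a hedged sketch of what's usually suggested for this: Spark itself doesn't manage DB connections for you. The common pattern is to open a small pool lazily, once per executor process, and reuse it across partitions via `mapPartitions`/`foreachPartition`, rather than opening a connection per record. The names below — `make_connection`, `process_partition` — are hypothetical stand-ins, not a real driver API.)

```python
# Sketch: one lazily created connection pool per worker process.
# `make_connection` is a hypothetical factory standing in for a real
# JDBC/Cassandra client constructor.

_pool = None  # module-level, so it lives for the life of the executor process


def get_pool(make_connection, size=4):
    """Create the pool on first use, then reuse it for every partition."""
    global _pool
    if _pool is None:
        _pool = [make_connection() for _ in range(size)]
    return _pool


def process_partition(rows, make_connection):
    """The kind of function you'd pass to rdd.mapPartitions: the pool is
    built at most once per process, no matter how many partitions run here."""
    pool = get_pool(make_connection)
    conn = pool[0]  # a real pool would check connections out and return them
    return [(row, conn) for row in rows]
```

With this shape, two partitions processed by the same worker share the same pooled connections instead of reconnecting each time.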
Hi, I'd like to submit a possible use case and get some guidance on the
overall architecture.
I have two different datasources (a relational PostgreSQL database and a
Cassandra cluster), and I'd like to give users the ability to query data by
'joining' the two worlds.
So, an idea that comes to mind is:
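(A hedged sketch of one common shape for this kind of architecture, not necessarily the idea the message breaks off on: load each source into Spark — the JDBC data source for PostgreSQL, the spark-cassandra-connector for Cassandra — and join the resulting DataFrames on a shared key. The snippet below illustrates only the join-on-key step with plain Python rows standing in for the two sources; the table and column names are made up.)

```python
def join_on_key(left, right, key):
    """Inner-join two lists of dict rows on a shared key -- the same
    operation Spark's DataFrame.join performs at cluster scale."""
    index = {}
    for row in right:
        index.setdefault(row[key], []).append(row)
    joined = []
    for l in left:
        for r in index.get(l[key], []):
            merged = dict(l)
            merged.update({k: v for k, v in r.items() if k != key})
            joined.append(merged)
    return joined


# Hypothetical rows: users from PostgreSQL, events from Cassandra.
pg_users = [{"user_id": 1, "name": "ada"}, {"user_id": 2, "name": "bob"}]
cass_events = [{"user_id": 1, "event": "login"},
               {"user_id": 1, "event": "click"}]

result = join_on_key(pg_users, cass_events, "user_id")
```

In Spark proper the equivalent would be `pg_df.join(cass_df, "user_id")`, with predicate pushdown deciding how much filtering each source does before the join.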