Hi,

I would like to use Phoenix to replace a few of our databases, and I've
been doing some tests in that direction. So far it's been working all right,
but I wanted to share my setup to see if I can get some recommendations
based on other people's experiences.

Our dataset has 1 big table (around 200G) and around 100k smaller tables
(the biggest is 5-6G, but 90% are less than 1G). The application mainly
runs joins between one or two of these small tables and the big one,
returning just a few rows to the app. So far it's been working OK on a
4-node test cluster (64G of RAM in total).
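For concreteness, the query shape is roughly like this (table and column names here are made up for illustration, not our actual schema):

```sql
-- Hypothetical example of the join pattern: one big fact table
-- joined against a couple of small lookup tables, returning few rows.
SELECT b.event_time, b.value, s1.label, s2.category
FROM big_table b
JOIN small_table_a s1 ON b.key_a = s1.id
JOIN small_table_b s2 ON b.key_b = s2.id
WHERE b.event_time > TO_DATE('2024-01-01')
LIMIT 100;
```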

All the tables are created with SALT_BUCKETS=32, COMPRESSION='SNAPPY'.
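That is, the DDL looks roughly like this (schema details here are invented for illustration):

```sql
-- Sketch of how each table is created: 32 salt buckets to spread
-- writes across region servers, Snappy compression on the HBase side.
CREATE TABLE example_table (
    id BIGINT NOT NULL PRIMARY KEY,
    payload VARCHAR
) SALT_BUCKETS = 32, COMPRESSION = 'SNAPPY';
```

One thing I'm unsure about is whether 32 salt buckets makes sense for the many small (<1G) tables as well, or only for the big one.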

Is anyone running a similar setup? Any tips on how much RAM I should use?
