Hey Dev,

I'm working on the email monitoring system for Airavata. Right now I'm trying
to solve two problems: completion emails sometimes arriving before start
emails, and making the system scalable. Today we have only one GFaC instance
handling email monitoring. As we move from our monolithic code toward a
microservices approach, the email monitoring system also needs to adapt to
these changes. Currently the GFaC code keeps a ConcurrentHashMap to track the
start/end emails for each experiment, keyed by experiment ID. Instead of
keeping this map locally, we should move it into a shared, global state so
that multiple GFaCs can work off the same map in the future.

Supun has suggested using ZooKeeper for this because of its high availability
and reliability. Since the experiment IDs are essentially key-value pairs, I
was also thinking Redis would be a good option for such a use case. What do
you all think about each option? I understand Airavata currently uses
ZooKeeper, so development-wise there seems to be an edge toward it. Would
Redis be a good fit for such a use case?
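To make the idea concrete, here's a rough sketch of what the ZooKeeper-backed
version could look like using Curator. The base path, class name, and status
strings below are just placeholders I made up for illustration, not the actual
GFaC code:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

// Sketch: replace the local ConcurrentHashMap with znodes keyed by
// experiment ID, so any GFaC instance can check whether the start email
// for an experiment has already been seen.
public class ZkEmailStatusStore {

    private static final String BASE_PATH = "/airavata/email-monitor";
    private final CuratorFramework client;

    public ZkEmailStatusStore(String zkConnectString) {
        this.client = CuratorFrameworkFactory.newClient(
                zkConnectString, new ExponentialBackoffRetry(1000, 3));
        this.client.start();
    }

    // Record that the start (or completion) email for an experiment arrived.
    public void markStatus(String experimentId, String status) throws Exception {
        String path = BASE_PATH + "/" + experimentId;
        if (client.checkExists().forPath(path) == null) {
            client.create().creatingParentsIfNeeded().forPath(path, status.getBytes());
        } else {
            client.setData().forPath(path, status.getBytes());
        }
    }

    // Returns the last recorded status, or null if no email has been seen
    // yet. That lets us detect a completion email arriving before the start
    // email and hold it until the start email shows up.
    public String getStatus(String experimentId) throws Exception {
        String path = BASE_PATH + "/" + experimentId;
        if (client.checkExists().forPath(path) == null) {
            return null;
        }
        return new String(client.getData().forPath(path));
    }
}

The Redis version would basically be a SET/GET against the same experiment-ID
key, so functionally the two are equivalent; the question is more about
operational overhead and what we already run in a deployment.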




-- shoutout Marcus.
