Hi

What are the design approaches I can follow to ensure that data is consistent
from an application perspective (not from the perspective of individual tables)?
I am thinking of the issues that arise because rollback and atomic transactions
are not available in Cassandra. Is Cassandra not suitable for my project?

Cassandra recommends creating a new table for each query. This results in data
duplication (which doesn't bother me). Take the following scenario: an
application which allows users to create, share and manage food recipes. Each
of the functions below adds records to a separate table:
for {
  savedRecipe            <- saveInRecipeRepository(...)
  recipeTagRepository    <- saveRecipeTag(...)
  partitionInfoOfRecipes <- savePartitionOfTheTag(...)
  updatedUserProfile     <- updateInUserProfile(...)
  recipesByUser          <- saveRecipesCreatedByUser(...)
  supportedRecipes       <- updateSupportedRecipesInformation(tag)
}

If, say, updateInUserProfile fails, then I'll have to manage the rollback in
the application itself, as Cassandra doesn't do it. My concern is that the
rollback process could itself fail, due to network issues for example.
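To make the concern concrete, here is a rough sketch of what that
application-level rollback might look like. Everything in it is placeholder
code: the Recipe model, the save*/delete* stubs and the simulated failure are
made up just to illustrate the problem, they are not taken from my project.

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Placeholder model and repository calls -- stubs only, to make the
// failure scenario concrete.
case class Recipe(id: String, tag: String, userId: String)

def saveInRecipeRepository(r: Recipe): Future[Unit]     = Future.unit
def saveRecipeTag(r: Recipe): Future[Unit]              = Future.unit
def updateInUserProfile(r: Recipe): Future[Unit]        =
  Future.failed(new RuntimeException("write timed out")) // the step that fails
def deleteFromRecipeRepository(r: Recipe): Future[Unit] = Future.unit
def deleteRecipeTag(r: Recipe): Future[Unit]            = Future.unit

def createRecipe(recipe: Recipe): Future[Unit] = {
  val writes = for {
    _ <- saveInRecipeRepository(recipe)
    _ <- saveRecipeTag(recipe)
    _ <- updateInUserProfile(recipe) // fails here
  } yield ()

  writes.recoverWith { case failure =>
    // Application-level "rollback": compensating deletes for the rows
    // that were already written. If the network is down, these deletes
    // can fail too, and the tables are left inconsistent with each other.
    (for {
      _ <- deleteFromRecipeRepository(recipe)
      _ <- deleteRecipeTag(recipe)
    } yield ()).transformWith(_ => Future.failed(failure))
  }
}

If the compensating deletes in the recover block time out, the recipe exists
in some tables and not in others, which is exactly the situation I am trying
to avoid.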

Is there a recommended way or a design principle I can follow to keep data 
consistent?

Thanks
Manu

