Hello,

We have a fairly large static dataset that we load into Postgres. We made the 
tables UNLOGGED and saw a pretty significant performance improvement during 
loading. This was all fantastic until the server crashed and we were surprised 
to see during a follow-up demo that the data had disappeared... Of course, it's 
all our fault for not understanding the implications of UNLOGGED properly.
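
For context, the load looked roughly like this (the table name, columns, and 
file path are made up for illustration):

    -- Unlogged tables skip WAL writes entirely, which is where
    -- the load speedup comes from.
    CREATE UNLOGGED TABLE facts (
        id      bigint,
        payload text
    );

    -- Bulk load via COPY; none of these rows hit the WAL.
    COPY facts FROM '/data/facts.csv' WITH (FORMAT csv);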


However, our scenario is truly a set of tables with hundreds of millions of 
rows that are effectively WORMs: we write them once only, and then only read 
from them afterwards. As such, they could not possibly be corrupted post-load 
(I think) during a server crash (short of physical disk defects...).


I'd like to have the performance improvement during an initial batch insert, 
and then make sure the table survives "unclean" shutdowns, which, as it turns 
out, include a regular Windows server shutdown during patching, for example. So 
unlogged tables in practice are pretty flimsy. I tried ALTER ... SET LOGGED, 
but that takes a VERY long time and pretty much negates the initial performance 
boost of loading into an unlogged table.
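
Concretely (same hypothetical table as above), this is all it takes to flip the 
table, but SET LOGGED has to write the entire table's contents into the WAL, 
which is why it takes so long on hundreds of millions of rows:

    -- Rewrites every row into the WAL; roughly as expensive as
    -- doing the original load against a logged table.
    ALTER TABLE facts SET LOGGED;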


Is there a way to have my cake and eat it too?


Thank you,

Laurent Hasson


