After you drop a table, aren't the associated files dropped?

On 04/13/2018 02:29 PM, Ozz Nixon wrote:
There are free utilities that do government-level wipes. The process would be: 
drop the table, shrink the old tablespace, then (if Linux-based) fill the 
drive with dd and use wipe with 5x or 8x overwrite passes to make sure the 
drive does not have readable imprints on the platters.
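For the file-level part of that process, a shred-style multi-pass overwrite can be sketched in Python (a minimal illustration only, not a substitute for the dedicated wipe utilities mentioned above; `shred_file` and the pass count are my own naming):

```python
import os

def shred_file(path, passes=5, patterns=(b"\x00", b"\xff", b"\x55")):
    """Overwrite a file in place several times, then remove it.

    Caveat: on journaling or copy-on-write filesystems the old blocks
    may survive elsewhere on disk, so this is best-effort only.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for i in range(passes):
            pattern = patterns[i % len(patterns)]
            f.seek(0)
            f.write(pattern * size)
            f.flush()
            os.fsync(f.fileno())  # force this pass to disk before the next one
    os.remove(path)
```

The fsync between passes matters: without it, the OS may coalesce the passes in the page cache and only the last pattern would ever reach the platters.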

Now, what Jonathan mentions sounds like he wants to do the same to the 
physical table. Never having dabbled in PSQL’s storage and optimization 
algorithms, I would first assume a script that does a row-by-row update of the 
table, setting field1…fieldx to different data patterns at both the existing 
field value length and the field max length. Run the script at least 5 to 8 
times, then drop the table. The problem will be: if PSQL writes to a new page 
as you do this, then you are just spinning your wheels. And how does PSQL 
handle indexes - new pages, or overwriting the existing page? And is any NPI 
(Non-Public-Info) data in the index itself?

    * So any PSQL core-engine guys reading?
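The repeated-update idea could be scripted along these lines (a sketch only; the table and column names are hypothetical, and the caveat in the docstring is exactly the new-page concern raised above):

```python
def overwrite_statements(table, columns, passes=5):
    """Generate UPDATE statements that overwrite every column with a
    filler pattern, repeated for several passes.

    Caveat: PostgreSQL's MVCC writes each UPDATE as a *new* row
    version; the old tuples stay on disk until VACUUM reclaims the
    space, so the overwrite does not happen in place.
    """
    patterns = ["0", "1", "x"]
    stmts = []
    for i in range(passes):
        p = patterns[i % len(patterns)]
        sets = ", ".join(
            f"{col} = repeat('{p}', length({col}))" for col in columns
        )
        stmts.append(f"UPDATE {table} SET {sets};")
    return stmts

# Example: two passes over a hypothetical table
for stmt in overwrite_statements("customer_pii", ["name", "ssn"], passes=2):
    print(stmt)
```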

O.

On Apr 13, 2018, at 3:03 PM, Ron <ronljohnso...@gmail.com> wrote:



On 04/13/2018 12:48 PM, Jonathan Morgan wrote:
For a system with information stored in a PostgreSQL 9.5 database, in which data 
stored in a table that is deleted must be securely deleted (as shred does to files), 
and where the system is persistent even though any particular table likely won't be 
(so we can't just shred the disks at "completion"), I'm trying to figure out my 
options for securely deleting the underlying data files when a table is dropped.

As background, I'm not a DBA, but I am an experienced implementor in many 
languages, contexts, and databases. I've looked online and haven't been able to 
find a way to ask PostgreSQL to do the equivalent of shredding its underlying 
files before releasing them to the OS when a table is DROPped. Is there a 
built-in way to ask PostgreSQL to do this? (I might just not have searched for 
the right thing - my apologies if I missed something)

A partial answer we're looking at is shredding the underlying data files for a 
given relation and its indexes manually before dropping the tables, but this 
isn't very elegant, and I'm not sure it captures all of the data from the 
tables that we need to delete.
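For the manual approach, PostgreSQL's `pg_relation_filepath()` function reports a relation's file path relative to the data directory (and the table's indexes can be enumerated via `pg_indexes`). The full file list also includes 1 GB segment files and the free space map / visibility map forks; a sketch of expanding one relation path into its on-disk files (`relation_files` is my own naming):

```python
import os

def relation_files(data_dir, relpath):
    """List the on-disk files backing a relation, given the path that
    pg_relation_filepath() returns (relative to the data directory).

    Covers the 1 GB segment files (relpath.1, relpath.2, ...) and the
    free space map / visibility map forks (_fsm, _vm).
    """
    base = os.path.join(data_dir, relpath)
    candidates = [base, base + "_fsm", base + "_vm"]
    seg = 1
    while os.path.exists(f"{base}.{seg}"):
        candidates.append(f"{base}.{seg}")
        seg += 1
    return [p for p in candidates if os.path.exists(p)]
```

Each returned path would then be shredded before the DROP. Note this still misses copies outside the relation files, e.g. WAL segments in pg_xlog.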

We also are looking at strategies for shredding free space on our data disk - either 
running a utility to do that, or periodically replicating the data volume, swapping in 
the results of the copy, then shredding the entire volume that was the source so its 
"free" space is securely overwritten in the process.

Are we missing something? Are there other options we haven't found? If we have 
to clean up manually, are there other places we need to go to shred data than 
the relation files for a given table, and all its related indexes, in the 
database's folder? Any help or advice will be greatly appreciated.
I'd write a program that fills all free space on disk with a specific pattern.  
You're probably using a journaling filesystem, though, so that'll be far from 
perfect.
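That free-space fill could be sketched as a single pattern file written until the disk runs out (a minimal version; `fill_free_space` is my own naming, and the `limit` parameter is just a safety valve for trying it out):

```python
import os

def fill_free_space(directory, pattern=b"\x00" * 4096, limit=None):
    """Write a pattern-filled file into `directory` until the disk is
    full (ENOSPC) or an optional byte limit is reached, then delete it,
    so previously freed blocks get overwritten.

    Omit `limit` to actually fill the disk. As noted above, a
    journaling filesystem may still keep stale copies in its journal.
    """
    path = os.path.join(directory, "fill.tmp")
    written = 0
    try:
        with open(path, "wb") as f:
            while limit is None or written < limit:
                try:
                    f.write(pattern)
                except OSError:  # disk full (ENOSPC)
                    break
                written += len(pattern)
            f.flush()
            os.fsync(f.fileno())
    finally:
        if os.path.exists(path):
            os.remove(path)
    return written
```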

--
Angular momentum makes the world go 'round.


