On Sun, May 10, 2015 at 12:30 PM, Yuri Budilov <yuri.budi...@hotmail.com>
wrote:

> MANY THANKS to everyone who replied!
> Keep up the great work!
>
> more things (critical for very large and mission critical databases)
>
> - database row/page compression -
>
> it looks to me that there is no page/block compression available on
> PostgreSQL 9.4 along the lines of MS-SQL/Oracle row/page compression
> features?
> I realize that there is some compression of individual varchar/text data
> type columns but there is nothing like a complete row compression, index
> page compression and page/dictionary compression? Is that correct?
>

Yes, that's correct. Only individual field compression is supported (via
TOAST, usually for fields longer than 2 KB).


>
> database and transaction log backup compression? not available?
>

Built-in transaction log (WAL) archive compression is not available;
however, archived WAL segments can easily be compressed with external
utilities such as gzip or bzip2.
Both built-in backup utilities (pg_dump and pg_basebackup) support
compression.
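For example (a sketch, not a recommended production setup: the database name,
backup path, and archive directory below are placeholders): pg_dump's custom
format takes a compression level via -Z, and WAL archiving can pipe each
segment through an external compressor in archive_command. The runnable part
demonstrates the external-compression idea on a dummy 16 MB file standing in
for a WAL segment.

```shell
# pg_dump's custom format supports built-in compression (levels 0-9);
# "mydb" and the output path are placeholders:
#   pg_dump -Fc -Z 9 mydb > /backup/mydb.dump
#
# WAL archive compression via an external utility (an assumed
# archive_command in postgresql.conf; %p is the segment path, %f its
# file name, and the archive directory is a placeholder):
#   archive_command = 'gzip -c %p > /mnt/wal_archive/%f.gz'

# Runnable demonstration of the same idea on a dummy 16 MB "WAL segment":
mkdir -p /tmp/wal_demo
dd if=/dev/zero of=/tmp/wal_demo/000000010000000000000001 bs=1M count=16 2>/dev/null
gzip -c /tmp/wal_demo/000000010000000000000001 \
    > /tmp/wal_demo/000000010000000000000001.gz
ls -l /tmp/wal_demo
```

A zero-filled file compresses almost completely; real WAL segments compress
less dramatically but usually still shrink several-fold.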



> - recovery from hardware or software corruption -
>
> suppose I am running a mission critical database (which is also relatively
> large, say > 1TB) and I encounter a corruption of some sort (say, due to
> hardware or software bug) on individual database pages or a number of pages
> in a database
>
> How do I recover quickly and without losing any transactions? MS-SQL and
> Oracle can restore individual pages (or sets of pages) or restore
> individual database files and then allow me to roll forward transaction log
> to bring back every last transaction. It can be done on-line or off-line.
> How do I achieve the same in PostgreSQL 9.4? One solution I see may be via
> complete synchronous replication of the database to another server. I am
> not sure what happens to the corrupt page(s) - does it get transmitted
> corrupt to the mirror server so I end up with same corruption on both
> databases or is there some protection against this?
>

It depends on where the corruption happens: if pages become corrupted due to
problems with the physical storage (filesystem), the replica's data should
be fine.
There is no facility to recover individual database files and/or page
ranges from a base backup and roll the transaction log forward (not even
offline).

In my experience, using PostgreSQL for terabyte-scale and/or
mission-critical databases is definitely possible, but it requires very
careful design and planning (and good hardware).

Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.ru/
Melbourne, Australia


Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678

LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b
Skype: maxim.boguk
Jabber: maxim.bo...@gmail.com
МойКруг: http://mboguk.moikrug.ru/

"People problems are solved with people.
If people cannot solve the problem, try technology.
People will then wish they'd listened at the first stage."
