Peter Eisentraut wrote:
Craig Ringer wrote:
So - it's potentially even worth compressing the wire protocol for use
on a 100 megabit LAN if a lightweight scheme like LZO can be used.
LZO is under the GPL though.
But liblzf is BSD-style.
http://www.goof.com/pcg/marc/liblzf.html
On Thu, 2008-11-06 at 00:27 +0100, Ivan Voras wrote:
Peter Eisentraut wrote:
Craig Ringer wrote:
So - it's potentially even worth compressing the wire protocol for use
on a 100 megabit LAN if a lightweight scheme like LZO can be used.
Yes, compressing the wire protocol is a benefit. You can
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:pgsql-general-
[EMAIL PROTECTED] On Behalf Of Ivan Voras
Sent: Wednesday, November 05, 2008 3:28 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Are there plans to add data compression feature
to postgresql?
Peter
Peter Eisentraut wrote:
Craig Ringer wrote:
So - it's potentially even worth compressing the wire protocol for use
on a 100 megabit LAN if a lightweight scheme like LZO can be used.
LZO is under the GPL though.
Good point. I'm so used to libraries being under more appropriate
licenses like
Craig Ringer wrote:
So - it's potentially even worth compressing the wire protocol for use
on a 100 megabit LAN if a lightweight scheme like LZO can be used.
LZO is under the GPL though.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your
It doesn't matter that much, anyway, in that deflate would also do the
job quite well for any sort of site-to-site or user-to-site WAN link.
I used to use that, then switched to bzip. Thing is, if your client is
really just issuing SQL, how much does it matter? Compression can't help
with
Scott Ribe wrote:
It doesn't matter that much, anyway, in that deflate would also do the
job quite well for any sort of site-to-site or user-to-site WAN link.
I used to use that, then switched to bzip. Thing is, if your client is
really just issuing SQL, how much does it matter?
It depends
Gregory Stark wrote, On 01-11-08 14:02:
Ivan Sergio Borgonovo [EMAIL PROTECTED] writes:
But sorry I still can't get WHY compression as a whole and data
integrity are mutually exclusive.
...
[snip performance theory]
Postgres *guarantees* that as long as everything else works correctly it
Grzegorz Jaśkiewicz wrote, On 30-10-08 12:13:
It should; every book on encryption says that if you compress your data
before encryption, it's better.
Those books should also mention that you should leave this subject to the
experts, and have numerous examples of systems that followed the book,
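The compress-before-encrypt point above can be demonstrated with the standard library alone. In this sketch `os.urandom` stands in for ciphertext (an assumption of the sketch, not an encryption scheme), since well-encrypted output is statistically indistinguishable from random bytes:

```python
import os
import zlib

# Sketch: why compression must happen *before* encryption.
# os.urandom is a stand-in for ciphertext here, because good ciphertext
# looks like random bytes -- and random bytes do not compress.
plaintext = b"SELECT name, price FROM products WHERE price > 100;" * 200
ciphertext_like = os.urandom(len(plaintext))

compressed_plain = zlib.compress(plaintext)
compressed_cipher = zlib.compress(ciphertext_like)

# The plaintext shrinks dramatically; the random stand-in does not shrink
# at all, which is why compressing *after* encryption buys nothing.
print(len(plaintext), len(compressed_plain))
print(len(ciphertext_like), len(compressed_cipher))
```

The same reasoning applies whether the encryption layer is SSL, ssh, or anything else: put the compressor inside (before) the cipher.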
Joris Dobbelsteen wrote:
Also, I still have to see a compression algorithm that can sustain over
(or even anything close to, for that matter) 100MB/s on today's COTS
hardware. As TOAST provides compression, maybe that data can be
transmitted in compressed form (without recompression).
I
On Mon, Nov 03, 2008 at 08:18:54AM +0900, Craig Ringer wrote:
Joris Dobbelsteen wrote:
Also, I still have to see a compression algorithm that can sustain over
(or even anything close to, for that matter) 100MB/s on today's COTS
hardware. As TOAST provides compression, maybe that data can be
Sam Mason wrote:
On Mon, Nov 03, 2008 at 08:18:54AM +0900, Craig Ringer wrote:
Joris Dobbelsteen wrote:
Also, I still have to see a compression algorithm that can sustain over
(or even anything close to, for that matter) 100MB/s on today's COTS
hardware. As TOAST provides compression, maybe
Craig Ringer [EMAIL PROTECTED] writes:
I get 19 Mbit/s from gzip (deflate) on my 2.4GHz Core 2 Duo laptop. With
lzop (LZO) the machine achieves 45 Mbit/s. In both cases only a single
core is used. With 7zip (LZMA) it only manages 3.1 Mb/s using BOTH cores
together.
It'd be interesting to know
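For anyone who wants to reproduce this kind of measurement, here is a rough stdlib-only sketch. LZO/lzop has no standard Python binding, so zlib (deflate) and lzma stand in for the fast-and-light vs. slow-and-tight ends of the spectrum; the absolute numbers are machine-dependent, and only the ordering of the results is the point:

```python
import time
import zlib
import lzma

# Sample payload: repetitive text, like much wire-protocol traffic.
data = b"host=10.0.0.1 db=sales query=SELECT * FROM orders;\n" * 20000

def bench(name, compress):
    # Time a single compression pass and report ratio and throughput.
    t0 = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - t0
    print(f"{name}: ratio {len(out) / len(data):.3f}, "
          f"{len(data) / elapsed / 1e6:.1f} MB/s")
    return out

z = bench("zlib level 6 ", lambda d: zlib.compress(d, 6))
x = bench("lzma preset 6", lambda d: lzma.compress(d, preset=6))
```

On a single core you should see zlib well ahead of lzma on speed and behind it on ratio, mirroring the gzip/7zip gap reported above.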
Tom Lane wrote:
Wire protocol compression support in PostgreSQL would probably still be
extremely useful for Internet or WAN based clients, though,
Use an ssh tunnel ... get compression *and* encryption, which you surely
should want on a WAN link.
An ssh tunnel, while very useful, is only
On Mon, Nov 03, 2008 at 10:01:31AM +0900, Craig Ringer wrote:
Sam Mason wrote:
Your lzop numbers look *very* low; the paper suggests
compression going up to ~0.3GB/s on a 2GHz Opteron.
Er ... ENOCOFFEE? s/Mb(it)?/MB/g. And I'm normally *so* careful about
Mb/MB etc; this was just a
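Since the whole correction turns on a factor of eight, it is worth making explicit; the figures below are just the corrected lzop numbers from the post above:

```python
# Megabits vs. megabytes: the s/Mb/MB/ correction changes results by 8x.
def mbit_to_mbyte(mbit_per_s):
    return mbit_per_s / 8

# 45 MB/s from lzop is 360 Mbit/s -- comfortably past a 100 Mbit LAN,
# which is the point of the thread.
print(mbit_to_mbyte(360))
```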
On Sun, Nov 2, 2008 at 7:19 PM, Sam Mason [EMAIL PROTECTED] wrote:
On Mon, Nov 03, 2008 at 10:01:31AM +0900, Craig Ringer wrote:
So - it's potentially even worth compressing the wire protocol for use
on a 100 megabit LAN if a lightweight scheme like LZO can be used.
The problem is that then
Ivan Sergio Borgonovo [EMAIL PROTECTED] writes:
But sorry I still can't get WHY compression as a whole and data
integrity are mutually exclusive.
...
Now on *average* the write operations should be faster so the risk
you'll be hit by an asteroid during the time a fsync has been
requested
Scott Marlowe [EMAIL PROTECTED] writes:
What is the torn page problem? Note I'm no big fan of compressed file
systems, but I can't imagine them not working with databases, as I've
seen them work quite reliably under Exchange server running a
db-oriented storage subsystem. And I can't
On Thu, Oct 30, 2008 at 9:43 PM, Tom Lane [EMAIL PROTECTED] wrote:
Scott Marlowe [EMAIL PROTECTED] writes:
Sure, bashing Microsoft is easy. But it doesn't address the point: is
a database safe on top of a compressed file system, and if not, why?
It is certainly *less* safe than it is on top
On Fri, Oct 31, 2008 at 2:49 AM, Gregory Stark [EMAIL PROTECTED] wrote:
Scott Marlowe [EMAIL PROTECTED] writes:
What is the torn page problem? Note I'm no big fan of compressed file
systems, but I can't imagine them not working with databases, as I've
seen them work quite reliably under
Scott Marlowe wrote:
On Thu, Oct 30, 2008 at 7:37 PM, Alvaro Herrera
[EMAIL PROTECTED] wrote:
Scott Marlowe wrote:
What is the torn page problem? Note I'm no big fan of compressed file
systems, but I can't imagine them not working with databases, as I've
seen them work quite
On Fri, 31 Oct 2008 08:49:56 +
Gregory Stark [EMAIL PROTECTED] wrote:
Scott Marlowe [EMAIL PROTECTED] writes:
What is the torn page problem? Note I'm no big fan of
compressed file systems, but I can't imagine them not working
with databases, as I've seen them work quite reliably
On Fri, Oct 31, 2008 at 3:01 PM, Alvaro Herrera
[EMAIL PROTECTED] wrote:
Scott Marlowe wrote:
On Thu, Oct 30, 2008 at 7:37 PM, Alvaro Herrera
[EMAIL PROTECTED] wrote:
Scott Marlowe wrote:
What is the torn page problem? Note I'm no big fan of compressed file
systems, but I
Ivan Sergio Borgonovo [EMAIL PROTECTED] writes:
On Fri, 31 Oct 2008 08:49:56 +
Gregory Stark [EMAIL PROTECTED] wrote:
Invisible under normal operation sure, but when something fails the
consequences will surely be different and I can't see how you
could make a compressed filesystem safe
[EMAIL PROTECTED] (Scott Marlowe) writes:
I assume hardware failure rates are zero, until there is one. Then I
restore from a known good backup. Compressed file systems have little
to do with that.
There's a way that compressed filesystems might *help* with a risk
factor, here...
By
On Fri, 31 Oct 2008 17:08:52 +
Gregory Stark [EMAIL PROTECTED] wrote:
Invisible under normal operation sure, but when something fails
the consequences will surely be different and I can't see how
you could make a compressed filesystem safe without a huge
performance hit.
Pardon my
Scott Marlowe wrote:
On Thu, Oct 30, 2008 at 4:01 PM, Gregory Stark [EMAIL PROTECTED] wrote:
Scott Marlowe [EMAIL PROTECTED] writes:
I'm sure this makes for a nice brochure or power point presentation,
but in the real world I can't imagine putting that much effort into it
when
Chris Browne wrote:
There's a way that compressed filesystems might *help* with a risk
factor, here...
By reducing the number of disk drives required to hold the data, you
may be reducing the risk of enough of them failing to invalidate the
RAID array.
And one more way.
If neither your
Steve Atkins wrote:
The one place where compression is an immediate benefit is the wire.
It is easy to forget that one of our number one bottlenecks (even at
gigabit) is the amount of data we are pushing over the wire.
Wouldn't ssl_ciphers=NULL-MD5 or somesuch give zlib compression over
the
It should; every book on encryption says that if you compress your data
before encryption, it's better.
On Thu, Oct 30, 2008 at 03:50:20PM +1100, Grant Allen wrote:
One other thing I forgot to mention: Compression by the DB trumps
filesystem compression in one very important area - shared_buffers! (or
buffer_cache, bufferpool or whatever your favourite DB calls its working
memory for caching
Currently PostgreSQL is slower on RAID, so something tells me that a little
bit of compression underneath will make it far worse, not better. But
I guess Tom will be the man to know more about it.
On Thu, Oct 30, 2008 at 10:53:27AM +1100, Grant Allen wrote:
Other big benefits come with XML ... but that is even more dependent on the
starting point. Oracle and SQL Server will see big benefits in compression
with this, because their XML technology is so mind-bogglingly broken in the
Yes, we are in a data-warehouse-like environment, where the database server is
used to hold a very large volume of read-only historical data. CPU, memory, I/O
and network are all OK now except storage space; the only goal of compression
is to reduce storage consumption.
Grzegorz Jaśkiewicz wrote:
Currently PostgreSQL is slower on RAID, so something tells me that a
little bit of compression underneath will make it far worse, not
better. But I guess Tom will be the man to know more about it.
What? PostgreSQL is slower on RAID? Care to define that better?
On Thu, Oct 30, 2008 at 2:58 PM, Joshua D. Drake [EMAIL PROTECTED] wrote:
Grzegorz Jaśkiewicz wrote:
Currently PostgreSQL is slower on RAID, so something tells me that a little
bit of compression underneath will make it far worse, not better. But
I guess Tom will be the man to know more
On Oct 30, 2008, at 8:10 AM, Grzegorz Jaśkiewicz wrote:
Up to 8.3 it was massively slower on RAID-1 (software RAID on
Linux); starting from 8.3 things got a lot better (we're talking a 3x
speed improvement here), but it still isn't the same as on a 'plain' drive.
I'm a bit surprised to hear that; what
Grzegorz Jaśkiewicz wrote:
What? PostgreSQL is slower on RAID? Care to define that better?
Up to 8.3 it was massively slower on RAID-1 (software RAID on Linux);
starting from 8.3 things got a lot better (we're talking a 3x speed
improvement here), but it still isn't the same as on a 'plain' drive.
On Thu, Oct 30, 2008 at 3:27 PM, Christophe [EMAIL PROTECTED] wrote:
I'm a bit surprised to hear that; what would pg be doing, unique to it,
that would cause it to be slower on a RAID-1 cluster than on a plain drive?
Yes, it is slower on mirrored RAID than on a single drive.
I can give you all the
Grzegorz Jaśkiewicz wrote:
On Thu, Oct 30, 2008 at 3:27 PM, Christophe [EMAIL PROTECTED] wrote:
I'm a bit surprised to hear that; what would pg be doing, unique to
it, that would cause it to be slower on a RAID-1 cluster than on a
plain drive?
yes, it
[EMAIL PROTECTED] (Tom Lane) writes:
We already have the portions of this behavior that seem to me to be
likely to be worthwhile (such as NULL elimination and compression of
large field values). Shaving a couple bytes from a bigint doesn't
strike me as interesting.
I expect that there would
Scott Marlowe [EMAIL PROTECTED] writes:
I'm sure this makes for a nice brochure or power point presentation,
but in the real world I can't imagine putting that much effort into it
when compressed file systems seem the place to be doing this.
I can't really see trusting Postgres on a
Chris Browne [EMAIL PROTECTED] writes:
[EMAIL PROTECTED] (Tom Lane) writes:
We already have the portions of this behavior that seem to me to be
likely to be worthwhile (such as NULL elimination and compression of
large field values). Shaving a couple bytes from a bigint doesn't
strike me as
On Thu, Oct 30, 2008 at 4:01 PM, Gregory Stark [EMAIL PROTECTED] wrote:
Scott Marlowe [EMAIL PROTECTED] writes:
I'm sure this makes for a nice brochure or power point presentation,
but in the real world I can't imagine putting that much effort into it
when compressed file systems seem the
Scott Marlowe [EMAIL PROTECTED] writes:
On Thu, Oct 30, 2008 at 4:01 PM, Gregory Stark [EMAIL PROTECTED] wrote:
I can't really see trusting Postgres on a filesystem that felt free to
compress portions of it. Would the filesystem still be able to guarantee that
torn pages won't tear across
On Thu, Oct 30, 2008 at 4:41 PM, Tom Lane [EMAIL PROTECTED] wrote:
Scott Marlowe [EMAIL PROTECTED] writes:
On Thu, Oct 30, 2008 at 4:01 PM, Gregory Stark [EMAIL PROTECTED] wrote:
I can't really see trusting Postgres on a filesystem that felt free to
compress portions of it. Would the
Scott Marlowe [EMAIL PROTECTED] writes:
On Thu, Oct 30, 2008 at 4:41 PM, Tom Lane [EMAIL PROTECTED] wrote:
Scott Marlowe [EMAIL PROTECTED] writes:
On Thu, Oct 30, 2008 at 4:01 PM, Gregory Stark [EMAIL PROTECTED] wrote:
I can't really see trusting Postgres on a filesystem that felt free to
On Thu, Oct 30, 2008 at 6:03 PM, Gregory Stark [EMAIL PROTECTED] wrote:
Scott Marlowe [EMAIL PROTECTED] writes:
Sounds kinda hand wavy to me. If compressed file systems didn't give
you back what you gave them I couldn't imagine them being around for
very long.
I don't know, NFS has lasted
Scott Marlowe wrote:
What is the torn page problem? Note I'm no big fan of compressed file
systems, but I can't imagine them not working with databases, as I've
seen them work quite reliably under Exchange server running a
db-oriented storage subsystem. And I can't imagine them not being
On Thu, Oct 30, 2008 at 7:37 PM, Alvaro Herrera
[EMAIL PROTECTED] wrote:
Scott Marlowe wrote:
What is the torn page problem? Note I'm no big fan of compressed file
systems, but I can't imagine them not working with databases, as I've
seen them work quite reliably under Exchange server
Scott Marlowe [EMAIL PROTECTED] writes:
Sure, bashing Microsoft is easy. But it doesn't address the point: is
a database safe on top of a compressed file system, and if not, why?
It is certainly *less* safe than it is on top of an uncompressed
filesystem. Any given hardware failure will affect
Sorry for following up so late. Actually, I mean compression features like those
other commercial RDBMSs have, such as DB2 9.5 or SQL Server 2008. In those
databases, all data types in all tables can be compressed; the following are two
features we think very useful:
1. Little integers of types take 8
2008/10/29 小波 顾 [EMAIL PROTECTED]
1. Little integers of types that took 8 bytes in the past now only take 4 or 2
bytes if they are not so large.
So what actually happens if I have a table with a few million values that fit
in 2 bytes, but all of a sudden I am going to add another column with
Data Compression
The new data compression feature in SQL Server 2008 reduces the size of tables,
indexes or a subset of their partitions by storing fixed-length data types in
variable length storage format and by reducing the redundant data. The space
savings achieved depend on the schema and
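The "fixed-length types stored in variable-length format" idea can be illustrated with a generic varint encoding. To be clear, this is NOT the SQL Server (or PostgreSQL) on-disk format, only the principle that small values need fewer bytes than their declared width:

```python
# Hypothetical sketch: store integers in 7-bit groups, least significant
# first, so small bigints drop their leading zero bytes.

def encode_varint(n):
    """Encode a non-negative int; the high bit marks 'more groups follow'."""
    out = bytearray()
    while True:
        group = n & 0x7F
        n >>= 7
        if n:
            out.append(group | 0x80)  # continuation bit set
        else:
            out.append(group)
            return bytes(out)

def decode_varint(b):
    n = 0
    for shift, byte in enumerate(b):
        n |= (byte & 0x7F) << (7 * shift)
    return n

# A value like 300 needs 2 bytes instead of a fixed 8; huge values still
# need about 9 bytes, which is the "gain: none" case noted downthread.
print(len(encode_varint(300)), len(encode_varint(2 ** 60)))
```

This also makes the trade-off concrete: tables whose values genuinely fill the full width (the 300-400M-row bigint case below) see no gain and pay the decoding cost anyway.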
小波 顾 [EMAIL PROTECTED] writes:
[ snip a lot of marketing for SQL Server ]
I think the part of this you need to pay attention to is
Of course, nothing is entirely free, and this reduction in space and
time come at the expense of using CPU cycles.
We already
小波 顾 wrote:
Data Compression MSSQL 2008 technotes: Your results depend on
your workload, database, and hardware.
Sounds cool, but I wonder what the real-world results are.
For I/O-bound systems, lots of pluses,
but for CPU-bound workloads it would suck.
I can imagine my big stats tables, with 300-400M rows, all bigints, that
- mostly - require that sort of length. Gain: none; hassle: 100%.
On Wed, Oct 29, 2008 at 10:09 AM, 小波 顾 [EMAIL PROTECTED] wrote:
Data Compression
The new data compression feature in SQL Server 2008 reduces the size of
tables, indexes or a subset of their partitions by storing fixed-length data
types in variable length storage format and by reducing the
Tom Lane wrote:
小波 顾 [EMAIL PROTECTED] writes:
[ snip a lot of marketing for SQL Server ]
I think the part of this you need to pay attention to is
Of course, nothing is entirely free, and this reduction in space and
time come at the expense of
Grant Allen wrote:
...warehouse...DB2...IBM is seeing typical
storage savings in the 40-60% range
Sounds about the same as what compressing file systems claim:
http://opensolaris.org/os/community/zfs/whatis/
ZFS provides built-in compression. In addition to
reducing space usage by 2-3x,
Ron Mayer wrote:
Grant Allen wrote:
...warehouse...DB2...IBM is seeing typical storage savings in the
40-60% range
Sounds about the same as what compressing file systems claim:
http://opensolaris.org/os/community/zfs/whatis/
ZFS provides built-in compression. In addition to
reducing space
On Oct 29, 2008, at 9:50 PM, Grant Allen wrote:
One other thing I forgot to mention: Compression by the DB trumps
filesystem compression in one very important area - shared_buffers!
(or buffer_cache, bufferpool or whatever your favourite DB calls its
working memory for caching data).
Steve Atkins wrote:
On Oct 29, 2008, at 9:50 PM, Grant Allen wrote:
One other thing I forgot to mention: Compression by the DB trumps
filesystem compression in one very important area - shared_buffers!
(or buffer_cache, bufferpool or whatever your favourite DB calls its
working memory
On Oct 29, 2008, at 10:43 PM, Joshua D. Drake wrote:
Steve Atkins wrote:
On Oct 29, 2008, at 9:50 PM, Grant Allen wrote:
One other thing I forgot to mention: Compression by the DB trumps
filesystem compression in one very important area -
shared_buffers! (or buffer_cache, bufferpool
You might want to try using a file system (ZFS, NTFS) that
does compression, depending on what you're trying to compress.
Note that most data stored in the TOAST table is compressed.
I.e., a text type with length greater than around 2K will be stored in the
TOAST table. By default, data in the TOAST table is compressed; this can
be overridden.
However I expect that compression will reduce the performance of certain
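As a rough, hypothetical model of the decision described above (compress past a ~2K threshold, keep the compressed copy only if it is actually smaller), with the caveat that real TOAST uses its own pglz code and can also move values out of line, so zlib and the exact threshold here are stand-ins for illustration:

```python
import zlib

TOAST_THRESHOLD = 2000  # bytes; roughly the ~2K figure mentioned above

def toast_store(value):
    # Small values are stored as-is.
    if len(value) <= TOAST_THRESHOLD:
        return ("plain", value)
    # Large values are compressed, but the compressed copy is kept only
    # if it actually saves space; incompressible data stays uncompressed.
    compressed = zlib.compress(value)
    if len(compressed) < len(value):
        return ("compressed", compressed)
    return ("plain", value)

tag, stored = toast_store(b"some repetitive text " * 500)
print(tag, len(stored))
```

The "this can be overridden" part corresponds to per-column storage settings in PostgreSQL, which let you disable compression for a column whose data you know won't shrink.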
On Sun, Oct 26, 2008 at 9:54 AM, 小波 顾 [EMAIL PROTECTED] wrote:
Are there plans to add data compression feature to postgresql?
There already is data compression in postgresql.
Scott-
Straight from the Postgres docs:
The zlib compression
library will be used by default. If you don't want to use it then you must
specify the --without-zlib option for configure. Using this option disables
support for compressed
archives in pg_dump and pg_restore.
Martin
2008/10/26 Martin Gainty [EMAIL PROTECTED]:
Scott-
Straight from Postgres doc
The zlib compression library will be used by default. If you don't want to
use it then you must specify the --without-zlib option for configure. Using
this option disables support for compressed archives in pg_dump