"Josh Berkus" wrote
> > While I'm waiting to figure out how to get the size of the toast table,
at
> > least I can provide the speed of query with/without assumed compression
on
> > the 6K text columns.
>
> Check out the table_size view in the newsysviews project. Andrew computed
the
> regular,
Luke,
> While I'm waiting to figure out how to get the size of the toast table, at
> least I can provide the speed of query with/without assumed compression on
> the 6K text columns.
Check out the table_size view in the newsysviews project. Andrew computed the
regular, toast, and index sizes as
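For the toast-table size question itself, a minimal sketch of the same regular/toast/index breakdown can be written against pg_class, assuming the size functions (pg_relation_size, pg_indexes_size) from releases newer than the ones current when this thread was written; 'mytable' is a placeholder name:

  -- Hypothetical sketch; 'mytable' is a placeholder, and the size functions
  -- are assumptions about a more recent server than the one discussed here.
  select c.relname,
         pg_relation_size(c.oid) as heap_bytes,
         case when c.reltoastrelid <> 0
              then pg_relation_size(c.reltoastrelid)
              else 0 end as toast_bytes,
         pg_indexes_size(c.oid) as index_bytes
    from pg_class c
   where c.relname = 'mytable';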
While I'm waiting to figure out how to get the size of the toast table, at
least I can provide the speed of query with/without assumed compression on
the 6K text columns.
To ensure that we're actually accessing the data in the rows, I do a regexp
query on the TOASTed rows:
mpptestdb=# select coun
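The truncated query above is evidently a count over a regexp match on the 6K text column; a hypothetical reconstruction (table and column names are made up, not the ones in the original test) would look like:

  -- Hypothetical names; the regexp match forces every row's text value to be
  -- detoasted (and decompressed, if it was stored compressed).
  select count(*) from testtable where txt ~ 'somepattern';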
Josh,
On 2/26/06 8:04 PM, "Josh Berkus" wrote:
> Check out SET STORAGE.
I just altered the MIVP data generator in Bizgres MPP to produce the usual
15 column table but with a 6K row size. You'd only expect a few tens of
bytes variance around the 6K, and the data is randomly chosen words from a
Luke,
> As Jim pointed out, we would need a real test to confirm the behavior, but
> I'm not yet acquainted with the toast compression, so it's harder for me
> to compose a real test.
Check out SET STORAGE.
--
--Josh
Josh Berkus
Aglio Database Solutions
San Francisco
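SET STORAGE is the per-column TOAST knob being recommended here; a sketch of switching a text column between compressed and uncompressed out-of-line storage (placeholder table and column names):

  -- EXTENDED (the default for text) allows both compression and out-of-line
  -- storage; EXTERNAL allows out-of-line storage but disables compression.
  -- Changing STORAGE only affects values stored after the change.
  alter table testtable alter column txt set storage external;
  alter table testtable alter column txt set storage extended;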
Hannu,
On 2/26/06 12:19 PM, "Hannu Krosing" <[EMAIL PROTECTED]> wrote:
>> On DBT-3 data, I've just run some tests meant to simulate the speed
>> differences of compression versus native I/O. My thought is that an
>> external use of gzip on a binary dump file should be close to the speed of
>> LZ
On one fine day, Sun, 2006-02-26 at 09:31, Luke Lonergan wrote:
> Jim,
>
> On 2/26/06 8:00 AM, "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
>
> > Any idea on how decompression time compares to IO bandwidth? In other
> > words, how long does it take to decompress 1MB vs read that 1MB vs read
> >
Luke Lonergan wrote:
> Jim,
>
> On 2/26/06 10:37 AM, "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
>
> > So the cutover point (on your system with very fast IO) is 4:1
> > compression (is that 20 or 25%?).
>
> Actually the size of the gzipped binary file on disk was 65MB, compared to
> 177.5MB unco
Jim,
On 2/26/06 10:37 AM, "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> So the cutover point (on your system with very fast IO) is 4:1
> compression (is that 20 or 25%?).
Actually the size of the gzipped binary file on disk was 65MB, compared to
177.5MB uncompressed, so the compression ratio is 3
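From the numbers quoted: 177.5 MB / 65 MB ≈ 2.7, i.e. gzip reduced the binary dump to roughly 37% of its original size, a little under 3:1.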
On Sun, Feb 26, 2006 at 09:31:05AM -0800, Luke Lonergan wrote:
> Note that this filesystem can do about 400MB/s, and we routinely see scan
> rates of 300MB/s within PG, so the real comparison is:
>
> Direct seqscan at 300MB/s versus gunzip at 77.5MB/s
So the cutover point (on your system with ve
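Jim's 4:1 figure appears to be 300 / 77.5 ≈ 3.9: if gunzip chews through compressed input at about 77.5 MB/s, the data would have to expand by roughly 4x for the compressed path to deliver logical data as fast as a 300 MB/s seqscan, and 4:1 corresponds to a compressed file of about 25% of the original size.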
Jim,
On 2/26/06 8:00 AM, "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> Any idea on how decompression time compares to IO bandwidth? In other
> words, how long does it take to decompress 1MB vs read that 1MB vs read
> whatever the uncompressed size is?
On DBT-3 data, I've just run some tests meant
Neil Conway <[EMAIL PROTECTED]> writes:
> toast_compress_datum() considers compression to be "successful" if the
> compressed version of the datum is smaller than the uncompressed
> version. I think this is overly generous: if compression reduces the
> size of the datum by, say, 0.01%, it is likely
On Sat, Feb 25, 2006 at 09:39:34PM -0500, Neil Conway wrote:
> It's true that LZ decompression is fast, so we should probably use the
> compressed version of the datum unless the reduction in size is very
> small. I'm not sure precisely what that threshold should be, however.
Any idea on how decom
Neil Conway wrote:
> toast_compress_datum() considers compression to be "successful" if the
> compressed version of the datum is smaller than the uncompressed
> version. I think this is overly generous: if compression reduces the
> size of the datum by, say, 0.01%, it is likely a net loss to use th
toast_compress_datum() considers compression to be "successful" if the
compressed version of the datum is smaller than the uncompressed
version. I think this is overly generous: if compression reduces the
size of the datum by, say, 0.01%, it is likely a net loss to use the
compressed version of the
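Whether the compressed copy of a particular value was actually kept can be observed from SQL; a minimal sketch, assuming a release recent enough to have pg_column_size() (newer than the code under discussion) and using made-up names:

  -- pg_column_size() reports the stored (possibly compressed) size of the datum,
  -- so comparing it with octet_length() shows whether compression was applied.
  create temp table toast_demo (v text);
  insert into toast_demo values (repeat('abcdefgh', 1000));               -- highly compressible
  insert into toast_demo
      select string_agg(md5(i::text), '') from generate_series(1, 250) i; -- little redundancy
  select octet_length(v) as raw_bytes, pg_column_size(v) as stored_bytes
    from toast_demo;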