Steve Atkins wrote:
The one place where compression is an immediate benefit is the wire.
It is easy to forget that one of our biggest bottlenecks (even at
gigabit) is the amount of data we are pushing over the wire.
Wouldn't ssl_ciphers=NULL-MD5 or somesuch give zlib compression over
the
I feel good about control here, and I certainly don't have any problems. So,
please don't whine :) Especially since I want to run CVS HEAD, and be able
to actually update it from CVS when I want to, that's the only choice.
PostgreSQL is so easy to build from source, compared to other software
I think I must have only done a reload on the live server as now I've
tried to restart the service and I've got exactly the same error, so
it's no longer a discrepancy between environments.
The script is actually one which came with the Gentoo package. I can
see it is using both $PGOPTS and
Hi,
is there a schema upgrade howto? I could not find much with Google.
There is a running DB and a development DB. The development DB
has some tables, columns and indexes added. What is the preferred way
to upgrade?
I see these solutions:
- pg_dump production DB. Install schema only from dev
I myself noticed that if a client is still connected to the DB server,
then PgSQL won't restart. Are you sure all your clients are/were
disconnected? I have the DB on a remote virtual machine.
Well that can't really be the problem since it isn't running when
trying to start.
But yes, I've noticed that before, which I actually find very useful.
It's a shame there isn't a way for postgres to broadcast to clients
that it wants to shut down, so things like pgAdmin III will say "Hey,
the
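For what it's worth, a client that wants such a heads-up could approximate
one today with LISTEN/NOTIFY; a minimal sketch, assuming the clients
cooperate, and with a made-up channel name:

  -- each interested client registers once per session:
  LISTEN shutdown_warning;
  -- the admin fires this before stopping the server:
  NOTIFY shutdown_warning;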
It should; every book on encryption says that if you compress your data
before encryption, it's better.
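As an aside, contrib/pgcrypto's PGP functions already do this in one step;
a minimal sketch (the data and key strings are placeholders):

  -- compress-algo=1 selects zlib compression before encryption:
  SELECT pgp_sym_encrypt('some data', 'secret key', 'compress-algo=1');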
On Thu, Oct 30, 2008 at 03:50:20PM +1100, Grant Allen wrote:
One other thing I forgot to mention: Compression by the DB trumps
filesystem compression in one very important area - shared_buffers! (or
buffer_cache, bufferpool or whatever your favourite DB calls its working
memory for caching
Currently PostgreSQL is slower on RAID, so something tells me that a little
bit of compression underneath will make it worse rather than better. But
I guess Tom will be the man to know more about it.
Thom Brown [EMAIL PROTECTED] writes:
The script is actually one which came with the Gentoo package.
...
Okay, so I've manually tried starting the server now and told it to
output any log to /tmp. This is telling me that the request for a
shared memory segment is higher than my kernel's
On Thu, Oct 30, 2008 at 10:54:46AM +0100, Thomas Guettler wrote:
Hi,
is there a schema upgrade howto? I could not find much with Google.
There is a running DB and a development DB. The development DB
has some tables, columns and indexes added.
The only sure way to track such changes is by
This question didn't get any traction on admin list, so I'll try
here:
I want to analyze the entire database with the exception of several
tables.
When I run VACUUM ANALYZE (or vacuumdb -z) on the database, how can
I exclude specific tables from being analyzed?
Is there any place in system
I am however unable to do the same successfully (the Java code simply
hangs, probably as a result of the second psql not getting the input
to
it) from Java code using objects of ProcessBuilder and Process. I have
used threads to consume the STDOUT and STDERR streams (I write the STDOUT
stream to
Hi,
I found a way to do it. One problem remains: The order of the columns
can't be changed.
Any chance of making postgres support this in the future?
My way:
pg_dump -s prod | strip-schema-dump.py - prod.schema
pg_dump -s devel | strip-schema-dump.py - devel.schema
strip-schema-dump.py
One of the tables we're using in the 8.1.3 setups currently running
includes phone numbers as a searchable field (fti_phone), with the
results of a select on the field generally looking like this: 'MMM':2
'':3 'MMM-':1. MMM is the first three digits, and the remainder is the
fourth through seventh.
The
I generally write bash one liners for this kind of thing:
for table in $(psql -U postgres --tuples-only -c "SELECT schemaname || '.'
|| tablename FROM pg_tables WHERE tablename NOT IN ('table1', 'table2')") ;
do psql -U postgres -c "VACUUM ANALYZE $table"; done
This is nice because you can bring all
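Much the same can be done from inside psql by generating the commands as a
result set; a sketch along the same lines ('table1' and 'table2' again
stand in for the real exclusions):

  SELECT 'ANALYZE ' || schemaname || '.' || tablename || ';'
  FROM pg_tables
  WHERE tablename NOT IN ('table1', 'table2');
  -- \o file, run the query, then \i file to execute the generated statements.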
On Thu, Oct 30, 2008 at 02:37:43PM +0100, Thomas Guettler wrote:
Hi,
I found a way to do it.
It's the wrong way. Trust me on this.
One problem remains: The order of the columns can't be changed. Any
chance of making postgres support this in the future?
It's been proposed several times :)
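The approach usually recommended instead is to keep incremental migration
scripts under version control and apply them to each database in order; a
hypothetical example (table, column and index names invented):

  -- migration 0042: add signup tracking
  ALTER TABLE customer ADD COLUMN signup_date date;
  CREATE INDEX customer_signup_idx ON customer (signup_date);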
On Thu, Oct 30, 2008 at 09:17:00AM -0400, Igor Neyman wrote:
This question didn't get any traction on admin list, so I'll try
here:
I want to analyze the entire database with the exception of several
tables. When I run VACUUM ANALYZE (or vacuumdb -z) on the
database, how can
Why are you
On Thu, Oct 30, 2008 at 10:53:27AM +1100, Grant Allen wrote:
Other big benefits come with XML ... but that is even more dependent on the
starting point. Oracle and SQL Server will see big benefits in compression
with this, because their XML technology is so mind-bogglingly broken in the
Yes, we are in a data-warehouse-like environment, where the database server is
used to hold a very large volume of read-only historical data. CPU, memory, I/O
and network are all OK now except storage space; the only goal of compression
is to reduce storage consumption.
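In that case it is worth measuring where the space actually goes before
reaching for compression; a quick sketch with the built-in size functions
(the table name is a placeholder):

  SELECT pg_size_pretty(pg_relation_size('history_2008'))       AS heap_only,
         pg_size_pretty(pg_total_relation_size('history_2008')) AS with_indexes_and_toast;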
Grzegorz Jaśkiewicz wrote:
Currently PostgreSQL is slower on RAID, so something tells me that a
little bit of compression underneath will make it worse rather than
better. But I guess Tom will be the man to know more about it.
What? PostgreSQL is slower on RAID? Care to define that better?
On Thu, Oct 30, 2008 at 2:58 PM, Joshua D. Drake [EMAIL PROTECTED]wrote:
Grzegorz Jaśkiewicz wrote:
Currently PostgreSQL is slower on RAID, so something tells me that a little
bit of compression underneath will make it worse rather than better. But
I guess Tom will be the man to know more
On Oct 30, 2008, at 8:10 AM, Grzegorz Jaśkiewicz wrote:
up to 8.3 it was massively slower on RAID 1 (software RAID on
Linux); starting from 8.3 things got a lot better (we're talking about a
3x speed improvement here), but it still isn't the same as on a 'plain' drive.
I'm a bit surprised to hear that; what
Grzegorz Jaśkiewicz wrote:
What? PostgreSQL is slower on RAID? Care to define that better?
up to 8.3 it was massively slower on RAID 1 (software RAID on Linux);
starting from 8.3 things got a lot better (we're talking about a 3x speed
improvement here), but it still isn't the same as on a 'plain' drive.
On Thu, Oct 30, 2008 at 3:27 PM, Christophe [EMAIL PROTECTED] wrote:
I'm a bit surprised to hear that; what would pg be doing, unique to it,
that would cause it to be slower on a RAID-1 cluster than on a plain drive?
yes, it is slower on mirrored RAID than on a single drive.
I can give you all the
Grzegorz Jaśkiewicz wrote:
On Thu, Oct 30, 2008 at 3:27 PM, Christophe [EMAIL PROTECTED] wrote:
I'm a bit surprised to hear that; what would pg be doing, unique to
it, that would cause it to be slower on a RAID-1 cluster than on a
plain drive?
yes, it
On Wed, 2008-10-29 at 09:05 -0400, Greg Smith wrote:
On Tue, 28 Oct 2008, Jason Long wrote:
I also have to ship them off site using a T1, so setting the time to
automatically switch files will just waste bandwidth if they are still going
to be 16 MB anyway.
The best way to handle
Hi chaps,
I think I'm going to struggle to describe this, but hopefully someone can
squint and see where I'm going wrong.
I've got a C function called ftest; all it does is take some text and prepend
abcdefghijklmnopqr onto it. I use it to pass a key into
pgp_sym_encrypt/decrypt working on a
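For context, the round trip being attempted presumably looks something like
this (ftest is the poster's own C function; pgp_sym_encrypt/decrypt come
from contrib/pgcrypto):

  -- encrypt with the derived key, then decrypt with the same key:
  SELECT pgp_sym_decrypt(
           pgp_sym_encrypt('secret data', ftest('userkey')),
           ftest('userkey'));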
Works like a charm. Thank you very much Justin.
On Thu, Oct 30, 2008 at 3:49 AM, justin [EMAIL PROTECTED] wrote:
There were a number of code mistakes in my examples, as I was just doing it
off the top of my head; I just went through it and got it all working.
I had to change the function around
[EMAIL PROTECTED] (Tom Lane) writes:
We already have the portions of this behavior that seem to me to be
likely to be worthwhile (such as NULL elimination and compression of
large field values). Shaving a couple bytes from a bigint doesn't
strike me as interesting.
I expect that there would
On Thu, Oct 30, 2008 at 05:27:58PM +, Glyn Astill wrote:
Hi chaps,
I think I'm going to struggle to describe this, but hopefully someone can
squint and see where I'm going wrong.
I've got a C function called ftest; all it does is take some text and
prepend abcdefghijklmnopqr onto
Hello all,
I've been trying to speed up the restore operation of my database, without
success.
I have a 200MB dump file obtained with 'pg_dumpall --clean --oids'.
After restore it produces a database with one single table (1,000,000
rows). I also have some indexes on that table; that's it.
It
On Thursday 30 October 2008, Joao Ferreira gmail
[EMAIL PROTECTED] wrote:
What other cfg parameters should I touch?
work_mem set to most of your free memory might help. You're probably just
disk-bound, though. What does vmstat say during the restore?
--
Alan
On Thu, 30 Oct 2008, Joshua D. Drake wrote:
This reminds me yet again that pg_clearxlogtail should probably get added
to the next commitfest for inclusion into 8.4; it's really essential for a
WAN-based PITR setup and it would be nice to include it with the
distribution.
What is to be gained
ISTM that in this line:
keying = (text *)palloc( keylen + unamelen );
You forgot to include the length of the header VARHDRSZ.
Aha, that'd be it, it's been a long day.
Thanks Martijn
On Thu, 2008-10-30 at 11:39 -0700, Alan Hodgson wrote:
On Thursday 30 October 2008, Joao Ferreira gmail
[EMAIL PROTECTED] wrote:
What other cfg parameters should I touch?
work_mem set to most of your free memory might help.
I've raised work_mem to 128MB;
I still get the same 20 minutes.
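Note that work_mem does not govern index builds; maintenance_work_mem does,
so that is usually the knob to try for a restore dominated by CREATE INDEX.
A sketch (the value is only illustrative):

  -- raise the memory available to CREATE INDEX for this session:
  SET maintenance_work_mem = '256MB';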
Greg Smith wrote:
On Thu, 30 Oct 2008, Joshua D. Drake wrote:
This reminds me yet again that pg_clearxlogtail should probably get added
to the next commitfest for inclusion into 8.4; it's really essential for a
WAN-based PITR setup and it would be nice to include it with the
distribution.
Martin Gainty wrote:
could you provide a brief explanation of EAV ?
Please avoid HTML and eschew top-posting. The post from Jeff Soules in this
thread included the advice:
See e.g. http://en.wikipedia.org/wiki/Entity-Attribute-Value_model
which points to an explanation.
--
Lew
Hi,
I'm a bit confused as to the logic of nulls. I understand that null
is to represent an unknown value. So it makes sense that the result
of tacking an unknown value onto a known one is unknown, because you
don't know what exactly you just tacked on. So
select null::text || 'hello';
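That select returns NULL. If the intent is to treat the unknown value as an
empty string, coalesce() is the usual workaround:

  SELECT null::text || 'hello';                -- NULL: unknown || known = unknown
  SELECT coalesce(null::text, '') || 'hello';  -- 'hello'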
On Thu, Oct 30, 2008 at 07:28:57PM +, Joao Ferreira gmail wrote:
On Thu, 2008-10-30 at 11:39 -0700, Alan Hodgson wrote:
You're probably just
disk-bound, though. What does vmstat say during the restore?
During restore:
# vmstat
procs -----------memory---------- ---swap-- -----io----
On Thursday 30 October 2008, Joao Ferreira gmail
During restore:
# vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 3  1 230204   4972   1352 110128   21   17    63    24   56   12  2 85  0
#
Kev [EMAIL PROTECTED] writes:
... should I be careful how to code because this
might change in the future?
Probably. We couldn't even handle nulls within arrays until a release
or two ago. I wouldn't be surprised if someone comes up with a proposal
to make null-array handling a bit more
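As a quick illustration of where things stand now (NULLs inside arrays have
worked since 8.2):

  SELECT ARRAY[1, NULL, 3];               -- {1,NULL,3}
  SELECT (ARRAY[1, NULL, 3])[2] IS NULL;  -- true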
Alan Hodgson wrote:
On Thursday 30 October 2008, Joao Ferreira gmail
During restore:
# vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 3  1 230204   4972   1352 110128   21   17
Greg Smith wrote:
there's no chance it can accidentally look like a valid segment. But
when an existing segment is recycled, it gets a new header and that's
it--the rest of the 16MB is still left behind from whatever was in that
segment before. That means that even if you only write, say,
Kyle Cordes wrote:
Greg Smith wrote:
there's no chance it can accidentally look like a valid segment. But
when an existing segment is recycled, it gets a new header and that's
it--the rest of the 16MB is still left behind from whatever was in
that segment before. That means that even if
On Thu, 30 Oct 2008, Kyle Cordes wrote:
It sure would be nice if there was a way for PG itself to zero the unused
portion of logs as they are completed; perhaps this will make it in as part
of the ideas discussed on this list a while back to make a more
out-of-the-box log-shipping mechanism?
Greg Smith wrote:
On Thu, 30 Oct 2008, Kyle Cordes wrote:
It sure would be nice if there was a way for PG itself to zero the
unused portion of logs as they are completed; perhaps this will make
The overhead of clearing out the whole thing is just large enough that
it can be disruptive on
On Thursday 30 October 2008, Joao Ferreira [EMAIL PROTECTED]
wrote:
Well, see for yourself... (360 RAM, 524 swap): that's what it is...
It's supposed to be somewhat of an embedded product...
Clearly your hardware is your speed limitation. If you're swapping at all,
anything running on the
Greg Smith [EMAIL PROTECTED] writes:
Now, it would be possible to have that less sensitive archive code path zero
things out, but you'd need to introduce a way to note when it's been done (so
you don't do it for a segment twice) and a way to turn it off so everybody
doesn't go through that
Thomas Guettler wrote:
Hi,
is there a schema upgrade howto? I could not find much with Google.
There is a running DB and a development DB. The development DB
has some tables, columns and indexes added. What is the preferred way
to upgrade?
I see these solutions:
- pg_dump production DB.
On Oct 30, 2008, at 2:54 PM, Gregory Stark wrote:
Wouldn't it be just as good to indicate to the archive command the
amount of
real data in the wal file and have it only bother copying up to
that point?
Hm! Interesting question: Can the WAL files be truncated, rather
than zeroed,
Scott Marlowe [EMAIL PROTECTED] writes:
I'm sure this makes for a nice brochure or power point presentation,
but in the real world I can't imagine putting that much effort into it
when compressed file systems seem the place to be doing this.
I can't really see trusting Postgres on a
Chris Browne [EMAIL PROTECTED] writes:
[EMAIL PROTECTED] (Tom Lane) writes:
We already have the portions of this behavior that seem to me to be
likely to be worthwhile (such as NULL elimination and compression of
large field values). Shaving a couple bytes from a bigint doesn't
strike me as
Gregory Stark wrote:
Greg Smith [EMAIL PROTECTED] writes:
Wouldn't it be just as good to indicate to the archive command the amount of
real data in the wal file and have it only bother copying up to that point?
That sounds like a great solution to me; ideally it would be done in a
way that
On Thu, 30 Oct 2008, Gregory Stark wrote:
Wouldn't it be just as good to indicate to the archive command the amount of
real data in the wal file and have it only bother copying up to that point?
That pushes the problem of writing a little chunk of code that reads only
the right amount of
On Thu, Oct 30, 2008 at 4:01 PM, Gregory Stark [EMAIL PROTECTED] wrote:
Scott Marlowe [EMAIL PROTECTED] writes:
I'm sure this makes for a nice brochure or power point presentation,
but in the real world I can't imagine putting that much effort into it
when compressed file systems seem the
Greg Smith [EMAIL PROTECTED] writes:
That pushes the problem of writing a little chunk of code that reads only
the right amount of data and doesn't bother compressing the rest onto the
person writing the archive command. Seems to me that leads back towards
wanting to bundle a contrib
Scott Marlowe [EMAIL PROTECTED] writes:
On Thu, Oct 30, 2008 at 4:01 PM, Gregory Stark [EMAIL PROTECTED] wrote:
I can't really see trusting Postgres on a filesystem that felt free to
compress portions of it. Would the filesystem still be able to guarantee that
torn pages won't tear across
For some reason, I can now include the date-range search in my ON (...) clause.
However, I would like to know if there is a limit to the number of
conditions I can put there. It seems that with more than two conditions some
records are missed.
On Thu, Oct 30, 2008 at 4:41 PM, Tom Lane [EMAIL PROTECTED] wrote:
Scott Marlowe [EMAIL PROTECTED] writes:
On Thu, Oct 30, 2008 at 4:01 PM, Gregory Stark [EMAIL PROTECTED] wrote:
I can't really see trusting Postgres on a filesystem that felt free to
compress portions of it. Would the
Alvaro Herrera [EMAIL PROTECTED] writes:
I can confirm that when the pager is open, psql does not resize
properly. Maybe psql is ignoring signals while the pager is open, or
something.
Hm, system() is documented to ignore SIGINT and SIGQUIT; I wonder if it's
(erroneously?) ignoring
Here is the SQL I am working with:
--
SELECT products.*, orders.response_code FROM products JOIN items ON
products.id = items.product_id
LEFT OUTER JOIN orders ON (items.order_id = orders.id AND
orders.response_code = '0' AND orders.user_id = 2) WHERE (permalink =
E'product-1' AND
OK, I get the problem. It is the LIMIT 1 which was misleading me.
If I remove this limit, I get many returned results: some where orders
were paid, some where orders were not paid. Therefore the LIMIT 1 picks
the first one, and by chance it lands on an unpaid order.
Am I trying to achieve
Gregory Stark [EMAIL PROTECTED] writes:
Alvaro Herrera [EMAIL PROTECTED] writes:
I can confirm that when the pager is open, psql does not resize
properly. Maybe psql is ignoring signals while the pager is open, or
something.
Hm, system() is documented to ignore SIGINT and SIGQUIT; I wonder
I have found a trick to fool the system: I use an ORDER BY
response_code <> 0 ASC LIMIT 1.
As unpaid orders receive a response_code <> 0, necessarily the
first record has a response_code of 0.
However, if more and more orders come into the equation, this means
PgSQL will have to process more
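Assembled against the query quoted earlier in the thread, the trick
presumably reads roughly like this (a sketch, not necessarily the poster's
exact SQL):

  SELECT products.*, orders.response_code
  FROM products
  JOIN items ON products.id = items.product_id
  LEFT OUTER JOIN orders ON (items.order_id = orders.id AND orders.user_id = 2)
  WHERE permalink = E'product-1'
  ORDER BY orders.response_code <> '0' ASC  -- false sorts first, so rows with response_code '0' win
  LIMIT 1;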
Noah Freire wrote:
On Wed, Oct 29, 2008 at 4:46 PM, Matthew T. O'Connor [EMAIL PROTECTED]
wrote:
Is the table being excluded? (see the pg_autovacuum system table
settings)
there's an entry for this table in pg_autovacuum, and it's enabled.
Are
Scott Marlowe [EMAIL PROTECTED] writes:
On Thu, Oct 30, 2008 at 4:41 PM, Tom Lane [EMAIL PROTECTED] wrote:
Scott Marlowe [EMAIL PROTECTED] writes:
On Thu, Oct 30, 2008 at 4:01 PM, Gregory Stark [EMAIL PROTECTED] wrote:
I can't really see trusting Postgres on a filesystem that felt free to
On Thu, Oct 30, 2008 at 6:03 PM, Gregory Stark [EMAIL PROTECTED] wrote:
Scott Marlowe [EMAIL PROTECTED] writes:
Sounds kinda hand-wavy to me. If compressed file systems didn't give
you back what you gave them, I couldn't imagine them being around for
very long.
I don't know, NFS has lasted
On Thu, 30 Oct 2008, Tom Lane wrote:
The real reason not to put that functionality into core (or even
contrib) is that it's a stopgap kluge. What the people who want this
functionality *really* want is continuous (streaming) log-shipping, not
WAL-segment-at-a-time shipping.
Sure, and that's
Scott Marlowe wrote:
What is the torn-page problem? Note I'm no big fan of compressed file
systems, but I can't imagine them not working with databases, as I've
seen them work quite reliably under Exchange Server running a DB-oriented
storage subsystem. And I can't imagine them not being
Greg Smith wrote:
On Thu, 30 Oct 2008, Tom Lane wrote:
The real reason not to put that functionality into core (or even
contrib) is that it's a stopgap kluge. What the people who want this
functionality *really* want is continuous (streaming) log-shipping, not
WAL-segment-at-a-time shipping.
On Thu, Oct 30, 2008 at 7:37 PM, Alvaro Herrera
[EMAIL PROTECTED] wrote:
Scott Marlowe wrote:
What is the torn-page problem? Note I'm no big fan of compressed file
systems, but I can't imagine them not working with databases, as I've
seen them work quite reliably under Exchange Server
Scott Marlowe [EMAIL PROTECTED] writes:
Sure, bashing Microsoft is easy. But it doesn't address the point: is
a database safe on top of a compressed file system, and if not, why?
It is certainly *less* safe than it is on top of an uncompressed
filesystem. Any given hardware failure will affect
Jason Long wrote:
Sure, I would rather have synchronous WAL shipping, but if that is going
to be a while, or synchronous shipping would slow down my application, I can
get comfortably close enough for my purposes with some highly compressible
WALs.
I'm way out here on the outskirts (just a user with a