APseudoUtopia wrote:
I'm having some problems when inserting special characters into a
column. Here's the table:
--
Table public.users_history_ip
Column |Type |
Alvaro Herrera-7 wrote:
This is why Tom was suggesting that you increase wal_buffers. Did
you try that?
Thanks for the explanation. I will try increasing the wal_buffers.
Unfortunately this is on a system I can't restart for the next few days.
Tomer
postgres collect all necessary stats. Maybe an implicit analyze is
necessary?
Should be: explicit analyze.
BUT I found the real cause of my problem - the fk2 field from my
example not only had an index, but was also a foreign key to
another table.
That seems unlikely to be the
What first post? The only thing I can find is a reference in a message
by you from yesterday, to a two-year old post that you claim is about
the same problem. Though it's possible that it is the same problem,
you don't provide any data to back that up.
Strange - you can see the full
Hi there
We have a function which runs a set of update clauses and we are considering
putting all the update clauses into one statement.
I would like to understand how postgres handles multiple updates. If we were
to send 5 update statements in one sql statement to the db would it:
Do
On 27/07/2009 09:10, Jamie Lawrence-Jenner wrote:
Apart from saving on the overhead of having to open up 5 separate
connections, what are the benefits to passing in multiple updates in one
statement?
If you do them all within one transaction -
begin;
update
update...
...
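Spelled out, the single-transaction form being suggested might look like this (table and column names are placeholders, not from the original posts):

```sql
BEGIN;
UPDATE t SET col1 = 'x', col2 = 'y' WHERE pkid = 1;
UPDATE t SET col1 = 'x', col2 = 'y' WHERE pkid = 2;
UPDATE t SET col1 = 'x', col2 = 'y' WHERE pkid = 3;
COMMIT;
```

All the statements then share a single commit (and a single WAL flush), rather than paying for one per statement in autocommit mode.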
On Mon, Jul 27, 2009 at 2:10 AM, Jamie Lawrence-Jenner <jamie.jen...@autovhc.co.uk> wrote:
Hi there
We have a function which runs a set of update clauses and we are considering
putting all the update clauses into one statement.
I would like to understand how postgres handles multiple updates.
Hello,
On 27/07/09 15:07, Raymond O'Donnell wrote:
On 27/07/2009 09:10, Jamie Lawrence-Jenner wrote:
Apart from saving on the overhead of having to open up 5 separate
connections, what are the benefits to passing in multiple updates in one
statement?
If you do them all within one
On Mon, Jul 27, 2009 at 07:31:37AM -0600, Scott Marlowe wrote:
On Mon, Jul 27, 2009 at 2:10 AM, Jamie Lawrence-Jenner <jamie.jen...@autovhc.co.uk> wrote:
Do 1 pass to retrieve the 5 primary keys, then update all rows in parallel
I would do 5 passes. Better to have one update statement to
Hello,
I was looking for a way to get the difference (interval) between 2
timetz values, i.e.:
postgres=# select '2:45+7'::timetz - '2:44+2'::timetz;
ERROR:  operator does not exist: time with time zone - time with time zone
LINE 1: select '2:45+7'::timetz - '2:44+2'::timetz;
I'd expect
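One possible workaround, not from the thread itself: anchor both timetz values to an arbitrary date, which turns them into timestamptz values, and those do support subtraction:

```sql
SELECT ('2000-01-01'::date + '2:45+7'::timetz)
     - ('2000-01-01'::date + '2:44+2'::timetz);
```

date + timetz yields timestamp with time zone, and subtracting two of those gives an interval.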
Hi There
Our update statements are as follows
Update table set col1=x,col2=y where pkid=1;
Update table set col1=x,col2=y where pkid=2;
Update table set col1=x,col2=y where pkid=3;
Very simple and straight forward. Sometimes there could be as many as 50
update statements to process.
Many
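For what it's worth, a batch like that can also be collapsed into a single statement with an UPDATE ... FROM (VALUES ...) join (supported since 8.2; all names and values below are placeholders):

```sql
UPDATE t
   SET col1 = v.x, col2 = v.y
  FROM (VALUES (1, 'x1', 'y1'),
               (2, 'x2', 'y2'),
               (3, 'x3', 'y3')) AS v(pkid, x, y)
 WHERE t.pkid = v.pkid;
```

One statement means one parse/plan cycle and one round trip, though each row is still written individually.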
Hello again,
On 27/07/09 16:48, Jamie Lawrence-Jenner wrote:
Hi There
Our update statements are as follows
Update table set col1=x,col2=y where pkid=1;
Update table set col1=x,col2=y where pkid=2;
Update table set col1=x,col2=y where pkid=3;
Very simple and straight forward.
On Mon, Jul 27, 2009 at 8:48 AM, Jamie Lawrence-Jenner <jamie.jen...@autovhc.co.uk> wrote:
Hi There
Our update statements are as follows
Update table set col1=x,col2=y where pkid=1;
Update table set col1=x,col2=y where pkid=2;
Update table set col1=x,col2=y where pkid=3;
Very simple and
Scott Marlowe <scott.marl...@gmail.com> writes:
On Mon, Jul 27, 2009 at 8:48 AM, Jamie Lawrence-Jenner <jamie.jen...@autovhc.co.uk> wrote:
Our update statements are as follows
Update table set col1=x,col2=y where pkid=1;
Update table set col1=x,col2=y where pkid=2;
Update table set
We are to a point in size and utilization of a set of our Postgres 8.1.17
databases that a vacuumdb -a -z -U postgres is still running after 50 hours and
we have to kill it off because the additional i/o is causing queries to stack
up. We have archived off as much data as possible (100+ GB)
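One knob that may help with the i/o pressure on 8.1 is cost-based vacuum delay, which throttles vacuum's reads and writes at the price of a longer runtime. The values below are illustrative, not a recommendation:

```sql
-- per-session before a manual VACUUM (or set in postgresql.conf):
SET vacuum_cost_delay = 20;   -- ms to sleep each time the cost limit is hit
SET vacuum_cost_limit = 200;  -- cost budget per cycle (the default)
VACUUM ANALYZE;
```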
On Jul 27, 2009, at 10:54, Alexey Klyukin wrote:
Hello,
I was looking for a way to get the difference (interval) between 2
timetz values, i.e.:
I don't have a solution, but am curious what your use case is for
timetz (as opposed to timestamptz).
Michael Glaesemann
grzm seespotcode
Hello,
Is there a way to run a pl/pgsql function automatically at server startup?
Is there a way to run a pl/pgsql function with an infinite loop as a daemon?
Is there a way to start a pl/pgsql function that would persist after the user
session has closed?
Is there a way for an unprivileged user to
Saleem EDAH-TALLY wrote:
Is there a way to run a pl/pgsql automatically at server startup ?
in your postgres startup script launch a session with `psql ... -c some
sql commands` or `psql ... -f somescript.sql` ...
Is there a way to run a pl/pgsql function with an infinite loop as a
daemon
Hello
2009/7/27 Saleem EDAH-TALLY <nm...@netcourrier.com>:
Hello,
Is there a way to run a pl/pgsql automatically at server startup ?
no - only you can modify startup scripts
Is there a way to run a pl/pgsql function with an infinite loop as a daemon
?
infinite loop is possible, but
On Jul 27, 2009, at 6:52 PM, Michael Glaesemann wrote:
On Jul 27, 2009, at 10:54, Alexey Klyukin wrote:
Hello,
I was looking for a way to get the difference (interval) between 2
timetz values, i.e.:
I don't have a solution, but am curious what your use case is for
timetz (as opposed
I would like to combine multiple databases (same schema) into one
master db. Does anyone know how I can reconcile all my primary and
foreign keys to maintain referential integrity?
Many thanks,
Bob
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
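One common approach, sketched here rather than taken from the thread: shift one database's keys by a fixed offset larger than any existing id before loading, so the two key spaces cannot collide. Table names and the offset are assumptions:

```sql
-- assumes the FK is dropped, or DEFERRABLE INITIALLY DEFERRED, for the duration
BEGIN;
UPDATE parent SET id        = id        + 1000000;
UPDATE child  SET parent_id = parent_id + 1000000;
COMMIT;
-- then load the renumbered rows into the master database
```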
On Mon, 2009-07-27 at 11:50 -0400, Tom Lane wrote:
Scott Marlowe <scott.marl...@gmail.com> writes:
On Mon, Jul 27, 2009 at 8:48 AM, Jamie Lawrence-Jenner <jamie.jen...@autovhc.co.uk> wrote:
Our update statements are as follows
Update table set col1=x,col2=y where pkid=1;
Update table set
Alexey Klyukin <al...@commandprompt.com> writes:
On Jul 27, 2009, at 6:52 PM, Michael Glaesemann wrote:
I don't have a solution, but am curious what your use case is for
timetz (as opposed to timestamptz).
I'm writing a custom trigger function that has to compare values of
time* types and
On Sun, 2009-07-26 at 15:36 -0400, Robert James wrote:
I'm working on improving my background database theory, to aid in
practice. I've found learning relational algebra to be very helpful.
One thing which relational algebra doesn't cover is aggregate
functions. Can anyone recommend any
In response to Keaton Adams <kad...@mxlogic.com>:
We are to a point in size and utilization of a set of our Postgres 8.1.17
databases that a vacuumdb -a -z -U postgres is still running after 50 hours
and we have to kill it off because the additional i/o is causing queries to
stack up. We
Just looking for experiences of people. Are people already using 8.4
in serious live hosting environments? Thanks.
Greetings,
The video recordings of the sessions for PG Day SJC '09 are now
available:
Version 8.4: Easier to Administer than Ever / Josh Berkus / PostgreSQL
Experts
http://media.postgresql.org/pgday-sjc-09/pgday-sjc-09-easier.mov
Rapid Upgrades with pg_Migrator / Bruce Momjian /
On Tue, 2009-07-28 at 03:51 +0800, Phoenix Kiula wrote:
Are people already using 8.4 in serious live hosting environments?
Not yet. There are lots of (important) fixes in CVS which are waiting
for 8.4.1. For production, I'd wait for a while.
--
Devrim GÜNDÜZ, RHCE
Command Prompt -
We are upgrading to 8.3.7 in September, if that helps the situation at all.
I just want to make sure I thoroughly understand how these two statements work
together:
The price is this maintenance requirement: every table in the database must be
vacuumed at least once every billion transactions.
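That wraparound clock can be watched directly: age() of each database's datfrozenxid reports how many transactions have passed since its last database-wide vacuum:

```sql
SELECT datname, age(datfrozenxid)
  FROM pg_database
 ORDER BY 2 DESC;
```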
On Mon, Jul 27, 2009 at 12:51 PM, Phoenix Kiula <phoenix.ki...@gmail.com> wrote:
Just looking for experiences of people. Are people already using 8.4
in serious live hosting environments? Thanks.
Wait..
8.3 is running fine and dandy. Lots of decent-sized changes in 8.4
with fixes still awaiting release. So
Tory M Blue wrote on 27.07.2009 22:45:
And those that have multiple TB's of data, weee another dump and
restore upgrade (pt!)
Isn't that what pg_migrator is for?
On Mon, Jul 27, 2009 at 4:45 PM, Tory M Blue <tmb...@gmail.com> wrote:
And those that have multiple TB's of data, weee another dump and
restore upgrade (pt!)
pg_migrator doesn't need to dump and restore; it can do an in-place upgrade
of the datafiles for you.
On Mon, Jul 27, 2009 at 2:48 PM, Thomas Kellerer <spam_ea...@gmx.net> wrote:
Tory M Blue wrote on 27.07.2009 22:45:
And those that have multiple TB's of data, weee another dump and
restore upgrade (pt!)
Isn't that what pg_migrator is for?
I use slony for such things, downtime = zero
On Mon, 2009-07-27 at 22:48 +0200, Thomas Kellerer wrote:
Tory M Blue wrote on 27.07.2009 22:45:
And those that have multiple TB's of data, weee another dump and
restore upgrade (pt!)
Isn't that what pg_migrator is for?
It depends, 8.3 and 8.4 are not compatible by default
On Mon, Jul 27, 2009 at 02:21:02PM -0600, Keaton Adams wrote:
We are upgrading to 8.3.7 in September, if that helps the situation at all.
This is good. Since 8.2 VACUUM age is done per table instead of per
database. This should solve most of your problems.
So my questions are:
1. Will the
This is good. Since 8.2 VACUUM age is done per table instead of per
database. This should solve most of your problems.
On older versions you need to do a database-wide vacuum (note this is
not vacuumdb -a) once every billion transactions.
You won't lose data, but you need to do a DB wide (not
Greetings:
Does anyone know if a function written in C and linked into the backend in a
shared library with a statically declared structure maintains that data for
the life of the backend process, such that, when the function is called again,
the structure data is intact?
Thanks for any
Joshua D. Drake <j...@commandprompt.com> writes:
It depends, 8.3 and 8.4 are not compatible by default (because of
--integer-datetimes). So, yeah if you are running Debian/Ubuntu but if
you are running Cent/RH with the defaults, pg_migrator isn't going to
work unless you compile Pg from source.
On Mon, 2009-07-27 at 19:16 -0400, Tom Lane wrote:
Joshua D. Drake <j...@commandprompt.com> writes:
It depends, 8.3 and 8.4 are not compatible by default (because of
--integer-datetimes). So, yeah if you are running Debian/Ubuntu but if
you are running Cent/RH with the defaults, pg_migrator
Joshua D. Drake <j...@commandprompt.com> writes:
On Mon, 2009-07-27 at 19:16 -0400, Tom Lane wrote:
Oh? You think RH/Cent is going to change that default now? Think again.
I thought they would get around to changing it now.
They is me, and it's not changing. I'm not blowing a chance at
Hi. We have a table with 30 M records that is growing by
about 100 K records per day.
The experimentalists, whose data are in the table, have
decided that they will focus on the records for which the
value of one field, rbscore, is greater than a cut-off.
However, they want to continue to store
I'm moving from a long time in BSD-land to using Linux. I've always
been in the habit of building PostgreSQL from the source tarballs. On
Linux, is there an advantage either way to using the RPMs as opposed
to building from source? Thanks!
--
-- Christophe Pettus
x...@thebuild.com
I am trying to monitor replication lag using zabbix. I have written a
simple script in ruby to get the lag; it goes like this.
require 'date'
require 'yaml'
y = YAML.load `/usr/lib/postgresql/8.3/bin/pg_controldata
/var/lib/postgresql/8.3/main`
last_checkpoint = DateTime.parse( y['Time of latest checkpoint'] )
On Mon, 2009-07-27 at 19:44 -0400, Tom Lane wrote:
because RH really can't be used as a production PostgreSQL server (if
date based data is important)
I have open bugs about the lack of in-place upgrade. I have never once
heard a customer complain about FP timestamps. So your position
On Mon, Jul 27, 2009 at 7:52 PM, Janet Jacobsen <jsjacob...@lbl.gov> wrote:
Can you suggest other strategies?
Something that might be easier to play with is to create a (or
several, to speed up other queries) functional index on the comparison
between rbscore and the cutoff. It won't buy you
On Tue, Jul 28, 2009 at 1:08 AM, David Wilson <david.t.wil...@gmail.com> wrote:
On Mon, Jul 27, 2009 at 7:52 PM, Janet Jacobsen <jsjacob...@lbl.gov> wrote:
Can you suggest other strategies?
Something that might be easier to play with is to create a (or
several, to speed up other queries)
On Mon, Jul 27, 2009 at 8:24 PM, Greg Stark <gsst...@mit.edu> wrote:
I think it would be even more interesting to have partial indexes --
ie specified with WHERE rbscore > cutoff.
Yes- that's what I actually meant. Word got scrambled between brain
and fingers...
--
- David T. Wilson
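A partial index of the kind being described might look like this (the table name and cutoff value are made up for illustration):

```sql
CREATE INDEX candidates_rbscore_idx
    ON candidates (rbscore)
 WHERE rbscore > 0.9;
```

Queries whose WHERE clause implies rbscore > 0.9 can use the index, while rows below the cutoff add nothing to its size.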
Hi. I noticed that when clients (both psql and pgAdmin) disconnect or
cancel, queries are often still running on the server. A few questions:
1) Is there a way to reconnect and get the results?
2) Is there a way to tell postgres to automatically stop all queries when
the client who queried them
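On the server side, the orphaned queries can at least be found and cancelled by hand; on 8.x the pg_stat_activity column is procpid (pid in later releases), and the pid below is hypothetical:

```sql
SELECT procpid, current_query FROM pg_stat_activity;
SELECT pg_cancel_backend(12345);
```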
* Janet Jacobsen (jsjacob...@lbl.gov) wrote:
If they are going to spend 95% of their time querying the
records that meet the 'good' criteria, what are the good
strategies for ensuring good performance for those queries?
(1) Should I partition the table into two partitions based on
the value
Thanks for all the good replies (both on and off list). It seems the
consensus is for me to read Christopher Date. I found two relevant Date
books:
1) Introduction to Database Systems
On Mon, 2009-07-27 at 21:05 -0400, Robert James wrote:
1) Introduction to Database Systems
http://www.amazon.com/Introduction-Database-Systems-Kannan-Swamynathan/dp/B001BVYKY4/ref=sr_1_5?ie=UTF8&s=books&qid=1248742811&sr=1-5
and
2) Database in Depth: Relational Theory for Practitioners
Robert James <srobertja...@gmail.com> writes:
Hi. I noticed that when clients (both psql and pgAdmin) disconnect or
cancel, queries are often still running on the server. A few questions:
1) Is there a way to reconnect and get the results?
No.
2) Is there a way to tell postgres to
Tim Uckun <timuc...@gmail.com> writes:
What is the rationale for not giving the group any permissions at all?
On lots of systems, giving group permissions is nearly as bad as giving
world permissions (eg, all the users might be in a users group).
So we don't do it by default. If you want to poke
Novice here :). I have PostgreSQL 8.2 installed on a single-board computer
running Windows XP Embedded on a Compact Flash drive - 2 databases with no
more than 2000 rows. After 10 power cycles spaced 6 mins apart, I noticed the
postgres.exe processes were no longer running. I located the log file (see below)