On Monday 17 February 2014 21:14:35 Tom Lane wrote:
> Kevin Grittner kgri...@ymail.com writes:
>> Perhaps we should arrange for a DROP DATABASE command to somehow
>> signal all backends to close files from that database?
> See commit ff3f9c8de, which was back-patched into 9.1.x as of 9.1.7.
When single row mode is enabled, after retrieving part of the result set,
I'm no longer interested in the rest of it (due to error handling or other
reasons). How can I discard the result set without repeatedly calling
PQgetResult() in such a situation?
The result set may be quite large and it's
2014-02-18 13:44 GMT+04:00 邓尧 tors...@gmail.com:
> When single row mode is enabled, after retrieving part of the result set,
> I'm no longer interested in the rest of it (due to error handling or other
> reasons). How can I discard the result set without repeatedly calling
> PQgetResult() in such
2014-02-18 11:53 GMT+01:00 Dmitriy Igrishin dmit...@gmail.com:
> 2014-02-18 13:44 GMT+04:00 邓尧 tors...@gmail.com:
>> When single row mode is enabled, after retrieving part of the result set,
>> I'm no longer interested in the rest of it (due to error handling or other
>> reasons). How can I discard
Is there a more appropriate place to ask this question? Or was my question
unclear?
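For what it's worth, libpq offers no one-shot call that throws the remaining rows away: the documented pattern is to cancel the query and then keep calling PQgetResult() until it returns a null pointer, which is cheap once the cancel takes effect. Here is a minimal Python mock of that cancel-then-drain control flow; MockConn and its methods are invented stand-ins for PQcancel()/PQgetResult(), not a real driver:

```python
# Sketch of libpq's "cancel then drain" protocol for abandoning a result
# set in single-row mode. MockConn is a hypothetical stand-in: real code
# calls PQcancel() and then PQgetResult() until it returns NULL.
from collections import deque

class MockConn:
    def __init__(self, rows):
        # Pending single-row results, terminated by None (like PQgetResult).
        self._pending = deque(rows + [None])
        self.cancelled = False

    def cancel(self):
        # Like PQcancel(): the server stops producing rows; whatever is
        # already buffered must still be consumed by the client.
        self.cancelled = True
        self._pending = deque([None])

    def get_result(self):
        # Like PQgetResult(): next result, or None when the query is done.
        return self._pending.popleft()

def abandon_results(conn):
    """Discard the rest of a result set: cancel, then drain to None."""
    conn.cancel()
    drained = 0
    while conn.get_result() is not None:
        drained += 1
    return drained

conn = MockConn(rows=list(range(1_000_000)))
first = conn.get_result()   # consume one row, then lose interest
abandon_results(conn)       # cheap: the cancel empties the queue
```

The drain loop is still mandatory, but after the cancel it terminates almost immediately instead of pulling the whole large result set over the wire.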
I dug up some data, and it seems that whenever messages come at a rate of 75,000
per hour, they start picking up delays of up to 10 minutes. If I go up to 100,000,
delays grow to about 20 minutes. And for 300,000
On Fri, Feb 14, 2014 at 7:35 PM, Behrang Saeedzadeh behran...@gmail.com wrote:
> Hi,
> I just stumbled upon this article from 2012 [1], according to which
> (emphasis mine):
> Window functions offer yet another way to implement pagination in SQL. This
> is a flexible, and above all,
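The article's comparison is easy to try yourself: classic OFFSET/LIMIT versus numbering rows with ROW_NUMBER() and filtering on the window. A small sketch using Python's stdlib sqlite3 (window functions need SQLite 3.25 or later; the table and column names are invented for the demo):

```python
# Two ways to fetch "page 3, 10 rows per page" of an ordered result:
# OFFSET/LIMIT versus a ROW_NUMBER() window function.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO posts (title) VALUES (?)",
                 [(f"post {i}",) for i in range(1, 101)])

page, per_page = 3, 10

# 1) OFFSET/LIMIT: simple, but the server still walks the skipped rows.
offset_rows = conn.execute(
    "SELECT id, title FROM posts ORDER BY id LIMIT ? OFFSET ?",
    (per_page, (page - 1) * per_page)).fetchall()

# 2) ROW_NUMBER(): number every row in the ordered set, then keep the
#    window you want -- the approach the article discusses.
window_rows = conn.execute(
    """
    SELECT id, title FROM (
        SELECT id, title,
               ROW_NUMBER() OVER (ORDER BY id) AS rn
        FROM posts
    ) WHERE rn BETWEEN ? AND ?
    """,
    ((page - 1) * per_page + 1, page * per_page)).fetchall()

assert offset_rows == window_rows   # both yield rows 21..30
```

Which one the planner handles better for deep pages is exactly the question the benchmark in the article tries to answer; on PostgreSQL you would compare the two with EXPLAIN ANALYZE.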
On Mon, Feb 17, 2014 at 8:45 AM, Herouth Maoz hero...@unicell.co.il wrote:
> I have a production system using Postgresql 9.1.2.
> The system basically receives messages, puts them in a queue, and then
> several parallel modules, each in its own thread, read from that queue, and
> perform two
It seems that pgfoundry.org is down? Is this the case for everyone?
If so, is there any other location to get pg_bulkload 3.1.5+? (It is not on
the FTP mirror of pgFoundry.)
Thanks,
*Kyle W. Purdon*
Research Assistant | Center for Remote Sensing of Ice Sheets (CReSIS)
On Tue, Feb 18, 2014 at 6:07 PM, Purdon kylepur...@gmail.com wrote:
> It seems that pgfoundry.org is down? Is this the case for everyone?
According to http://www.downforeveryoneorjustme.com/pgfoundry.org it is,
yeah.
--
Magnus Hagander
Me: http://www.hagander.net/
Work:
Thanks Magnus, I too checked that before coming here. I'm really looking
for an explanation from someone who may have some more information, and
potentially another download location for pg_bulkload 3.1.5+.
http://ftp.postgresql.org/pub/projects/pgFoundry/pgbulkload/pg_bulkload-3.1/
only goes to 3.1.4.
Hi,
Should I be able to run two syslog facilities simultaneously (postgres to
local0, and a trigger function to local3) successfully?
I have PostgreSQL 9.3.1 running on FreeBSD and sending errors to
syslog_facility = local0.
That works fine.
I have created an insert trigger on one of
Day, David d...@redcom.com writes:
> Should I be able to run two syslog facilities simultaneously (postgres to
> local0, and a trigger function to local3) successfully?
Probably not. libc's support for writing to syslog is not re-entrant.
> I have created an insert trigger on one of my
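Whatever the re-entrancy situation, the syslog API itself does not force one facility per process: the priority argument of each call is facility | level, so a single program can tag individual messages local0 or local3 without a second openlog() stream. A small sketch with Python's stdlib syslog module (Unix only; whether local3 actually lands anywhere depends on the syslogd configuration):

```python
# syslog encodes the facility into each message's priority argument
# (priority = facility | level), so one process can emit to local0 and
# local3 message-by-message without reopening the log.
import syslog

syslog.openlog(ident="demo", facility=syslog.LOG_LOCAL0)

# Default facility (local0, from openlog above):
syslog.syslog(syslog.LOG_INFO, "routine message -> local0")

# Per-message override: OR a different facility into the priority.
syslog.syslog(syslog.LOG_LOCAL3 | syslog.LOG_WARNING,
              "trigger message -> local3")

syslog.closelog()
```

The same trick works from C with syslog(LOG_LOCAL3 | LOG_WARNING, ...), since Python's constants mirror the libc ones.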
I have a data warehousing DB with 2 fairly big tables: one contains about 200
million rows and the other about 4 billion rows. Some queries
are now taking way too long to run (> 13 hours). I need to get these queries
to run in an hour or so. The slowdown was gradual, but I eventually
On 02/18/2014 02:10 PM, Samuel Gilbert wrote:
> I have a data warehousing DB with 2 fairly big tables: one contains about 200
> million rows and the other about 4 billion rows. Some queries
> are now taking way too long to run (> 13 hours). I need to get these queries
> to run in an hour or so.
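With tables this size, the first things to check are whether the slow queries still use their indexes and whether planner statistics are current; EXPLAIN (ANALYZE) on the real tables shows that directly. As a toy illustration of the effect, here is how an index flips a plan from a full scan to an index search, sketched with Python's stdlib sqlite3 on an invented table (on PostgreSQL the equivalent commands are CREATE INDEX, ANALYZE, and EXPLAIN):

```python
# How an index changes a plan from a full scan to a lookup -- a toy
# stand-in for EXPLAIN ANALYZE on the real 200M/4B-row tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE obs (station_id INTEGER, value REAL)")
conn.executemany("INSERT INTO obs VALUES (?, ?)",
                 [(i % 500, float(i)) for i in range(10_000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable detail in column 3.
    return " ".join(row[3] for row in
                    conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT avg(value) FROM obs WHERE station_id = 42"
before = plan(query)                       # full table scan
conn.execute("CREATE INDEX obs_station ON obs (station_id)")
conn.execute("ANALYZE")                    # refresh planner statistics
after = plan(query)                        # search using the index

assert "SCAN" in before and "INDEX" in after
```

A gradual slowdown on a growing table is also consistent with bloat or stale statistics, so VACUUM/ANALYZE history is worth checking alongside the plans.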
Tom,
I will not claim I've totally observed all the fallout of attempting this.
You may be correct about subsequent local0 output being bollixed, but I
certainly have seen some continued output to local0 after the trigger.
I am not committed to this method. It was primarily an experiment for
On 2014-02-18 14:25:59 Adrian Klaver wrote:
> On 02/18/2014 02:10 PM, Samuel Gilbert wrote:
>> I have a data warehousing DB with 2 fairly big tables: one contains about 200
>> million rows and the other about 4 billion rows. Some queries
>> are now taking way too long to run (> 13 hours). I
Samuel Gilbert samuel.gilb...@ec.gc.ca writes:
> All of this was done on PostgreSQL 9.2.0 64-bit compiled from the official
> source. Significant changes in postgresql.conf :
Why in the world are you using 9.2.0? You're missing a year and a half
worth of bug fixes, some of them quite serious.
On 02/18/2014 02:42 PM, Samuel Gilbert wrote:
> On 2014-02-18 14:25:59 Adrian Klaver wrote:
>> On 02/18/2014 02:10 PM, Samuel Gilbert wrote:
>>> I have a data warehousing DB with 2 fairly big tables: one contains about 200
>>> million rows and the other about 4 billion rows. Some queries
>>> are now
> The modification date must be updated if any row is modified in any way.
If that is the case, shouldn't the trigger also cover UPDATE?
You're completely right about that! I actually have both configured, but I
focused only on the INSERT to try to keep the length of my post as short as
possible.
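For reference, the INSERT-plus-UPDATE coverage can be sketched end to end. This uses Python's stdlib sqlite3 with an invented table, where each statement needs its own trigger; on PostgreSQL the same idea is a single plpgsql function bound with CREATE TRIGGER ... BEFORE INSERT OR UPDATE:

```python
# A modification timestamp maintained by triggers on both INSERT and
# UPDATE -- sqlite3 analog of the PostgreSQL setup under discussion.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE doc (id INTEGER PRIMARY KEY, body TEXT, modified TEXT);

-- Stamp new rows.
CREATE TRIGGER doc_ins AFTER INSERT ON doc BEGIN
    UPDATE doc SET modified = datetime('now') WHERE id = NEW.id;
END;

-- Re-stamp rows whenever the payload column changes.
CREATE TRIGGER doc_upd AFTER UPDATE OF body ON doc BEGIN
    UPDATE doc SET modified = datetime('now') WHERE id = NEW.id;
END;
""")

conn.execute("INSERT INTO doc (id, body) VALUES (1, 'first draft')")
inserted = conn.execute("SELECT modified FROM doc WHERE id = 1").fetchone()[0]

conn.execute("UPDATE doc SET body = 'second draft' WHERE id = 1")
updated = conn.execute("SELECT modified FROM doc WHERE id = 1").fetchone()[0]

assert inserted is not None and updated is not None
```

Without the UPDATE trigger, the second SELECT would still show the insert-time stamp, which is exactly the gap being pointed out above.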
On 2014-02-18 17:59:35 Tom Lane wrote:
> Samuel Gilbert samuel.gilb...@ec.gc.ca writes:
>> All of this was done on PostgreSQL 9.2.0 64-bit compiled from the official
>> source. Significant changes in postgresql.conf :
> Why in the world are you using 9.2.0? You're missing a year and a half
> worth
On Fri, Feb 14, 2014 at 10:15 AM, Merlin Moncure mmonc...@gmail.com wrote:
> yeah -- you could do this with some gymnastics and some dynamic SQL.
> If I were lazy (check), I would just encode the order in the name of
> the view somehow.
Thanks. That's exactly what I do already. Apparently, I'm
Juergen,
I've seen this quite a lot in the past, as we do this multiple times a day.
Here's the procedure we use to prevent it:
1) read the PID from postmaster.pid in the data directory
2) Issue service postgresql-9.0 stop (this does a fast shutdown with
-t 600)
3) loop until the PID is no
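The loop in step 3 typically polls with signal 0, which probes for process existence without delivering anything. A sketch in Python (Unix; assumes the standard postmaster.pid layout with the PID on the first line — the data directory path in the usage line is invented):

```python
# Step 3 of the shutdown procedure: poll until the postmaster PID is
# gone. kill(pid, 0) succeeds while the process exists and raises
# ESRCH once it has fully exited.
import errno
import os
import time

def read_postmaster_pid(datadir):
    # postmaster.pid's first line is the postmaster's PID.
    with open(os.path.join(datadir, "postmaster.pid")) as f:
        return int(f.readline().strip())

def wait_for_exit(pid, timeout=600.0, interval=0.5):
    """Return True once `pid` no longer exists, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)          # probe only; no signal delivered
        except OSError as e:
            if e.errno == errno.ESRCH:
                return True          # process is gone
            raise                    # e.g. EPERM: exists, but not ours
        time.sleep(interval)
    return False

# Usage (hypothetical data directory):
#   pid = read_postmaster_pid("/var/lib/pgsql/9.0/data")
#   wait_for_exit(pid, timeout=600)
```

The 600-second ceiling mirrors the -t 600 in step 2: only after the wait succeeds is it safe to proceed with whatever the shutdown was for.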
Does anyone know if there are plans to support plpython in Amazon's RDS? I
(approximately) understand the issue, but I don't know if there's any
effort to remedy the problem or whether I shouldn't bother hoping.
Thanks,
Reece
On 02/19/2014 02:14 AM, Antman, Jason (CMG-Atlanta) wrote:
> Juergen,
> I've seen this quite a lot in the past, as we do this multiple times a day.
> Here's the procedure we use to prevent it:
> 1) read the PID from postmaster.pid in the data directory
> 2) Issue service postgresql-9.0 stop (this