On Sun, Feb 24, 2008 at 6:52 PM, Jochem van Dieten [EMAIL PROTECTED] wrote:
Or we could have a switch that specifies a directory and have pg_dump
split the dump not just into pre-schema, data and post-schema, but also
split the data into a file for each table. That would greatly facilitate
a
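The idea above can be sketched with the pg_dump flags that existed at the time (-s schema only, -a data only, -t one table). This is a hypothetical approximation, not the proposed feature: the database name "mydb" and the table names are placeholders (a real script would read the table list from pg_tables via psql), and it only generates the commands into a driver script for inspection rather than running them:

```shell
#!/bin/sh
# Approximate the proposed per-table split with 8.3-era pg_dump flags.
# "mydb" and the table names are placeholders.
DB=mydb
OUT=dumpdir
mkdir -p "$OUT"
{
  echo "pg_dump -s $DB > $OUT/schema.sql"           # all DDL in one file
  for t in orders customers items; do               # placeholder table names
    echo "pg_dump -a -t $t $DB > $OUT/data_$t.sql"  # one data file per table
  done
} > "$OUT/run-dump.sh"
cat "$OUT/run-dump.sh"
```

With per-table data files, a restore could load tables in parallel or selectively, which is much of the motivation in this thread.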
Hi.
Please check it.
The build is successful in my environment.
Thanks!
Regards,
Hiroshi Saito
- Original Message -
From: Hiroshi Saito [EMAIL PROTECTED]
Hi.
From: Magnus Hagander [EMAIL PROTECTED]
we can include in the next release.:-)
Thanks!
Good news. Can you
On 2/25/08, Florian G. Pflug [EMAIL PROTECTED] wrote:
I'm not sure what a proper fix for this could look like, since the
blocking actually happens inside libpq - but this certainly makes
working with dblink painful...
Proper fix would be to use async libpq API, then loop on poll(2)
with
Marko Kreen wrote:
On 2/25/08, Florian G. Pflug [EMAIL PROTECTED] wrote:
I'm not sure what a proper fix for this could look like, since the
blocking actually happens inside libpq - but this certainly makes
working with dblink painful...
Proper fix would be to use async libpq API, then loop
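The suggested approach can be sketched in C. This is a hedged outline of the technique, not dblink's actual code: it assumes an already-connected PGconn, trims error reporting, and the function name is illustrative.

```c
#include <poll.h>
#include <libpq-fe.h>

/*
 * Sketch: run one query without blocking inside libpq.  PQsendQuery()
 * starts the query; while PQisBusy() says the result is incomplete,
 * wait for readable data with poll(2) and feed it to PQconsumeInput().
 * The poll() call is the interruptible point where a cancel or timeout
 * check could go.  conn is assumed already connected.
 */
static PGresult *
exec_with_poll(PGconn *conn, const char *sql)
{
    PGresult   *res, *last = NULL;

    if (!PQsendQuery(conn, sql))
        return NULL;

    while (PQisBusy(conn))
    {
        struct pollfd pfd = { PQsocket(conn), POLLIN, 0 };

        if (poll(&pfd, 1, -1) < 0)
            return NULL;            /* interrupted: could cancel here */
        if (!PQconsumeInput(conn))
            return NULL;
    }

    /* drain every result, keeping the last one, as PQexec() would */
    while ((res = PQgetResult(conn)) != NULL)
    {
        PQclear(last);
        last = res;
    }
    return last;
}
```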
On Tuesday, 27 November 2007, Mathias Hasselmann wrote:
Postmaster already has code to announce its services via DNS-SD
(ZeroConf) by using Apple's Bonjour API. This series of patches
implements that capability on top of the Avahi library[1] which
is free software, available for a wider
Hi all,
On the 6th of February, there was a thread about adding new options to
pg_dump, but it is now too late for me to add comments to that thread, since
all that was said would no longer be visible at this point, so I am starting a
new thread here.
I haven't found any option to dump any user-defined
On Tuesday, 27 November 2007, Mathias Hasselmann wrote:
Postmaster already has code to announce its services via DNS-SD
(ZeroConf) by using Apple's Bonjour API. This series of patches
implements that capability on top of the Avahi library[1] which
is free software, available for a wider
Mathias Hasselmann wrote:
The patches were in my initial mail, but now I've also uploaded them to
my personal site for convenience:
http://taschenorakel.de/files/pgsql-avahi-support/
Hmm, a quick look at the third patch reveals that it is using the
threaded Avahi client. That's a
On Mon, Feb 25, 2008 at 10:48 AM, David BOURIAUD
[EMAIL PROTECTED] wrote:
Could there be an option to pg_dump (let's say --function [func_name]) to be
able to dump the complete source code of a function to a separate file, or
to the terminal?
It's a TODO item. Just not to functions and so
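While the option remains a TODO, the function body can at least be pulled straight from the catalogs. A hedged workaround sketch: the function name is a placeholder, and this returns only the body, not a complete CREATE FUNCTION statement (pg_get_functiondef only arrived later, in 8.4).

```sql
-- Workaround sketch: extract one function's source from the catalogs.
-- 'my_function' is a placeholder name.
SELECT prosrc
FROM pg_proc
WHERE proname = 'my_function';
```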
Kaare Rasmussen [EMAIL PROTECTED] writes:
Hi
The database is initialized with utf8, so in order for LIKE to use the index
on
a text field, I used text_pattern_ops when I created it. So far so good.
It's in the documentation, but there's no explanation of why this index will
only work
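A minimal sketch of the setup under discussion (table and index names are illustrative):

```sql
-- In a UTF-8 (non-C locale) database, a plain btree index cannot serve
-- left-anchored LIKE, so the index is declared with text_pattern_ops:
CREATE TABLE docs (title text);
CREATE INDEX docs_title_like_idx ON docs (title text_pattern_ops);

-- This index can be used for:
SELECT * FROM docs WHERE title LIKE 'abc%';
-- ...but not for ordinary comparisons such as ORDER BY title or
-- title < 'abc', which follow the locale's collation order rather
-- than the byte-wise order the pattern_ops class provides.
```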
I saw this strange behavior while investigating a customer problem: they
managed to produce dumps which aren't restorable anymore:
CREATE TABLE foo(datum date);
INSERT INTO foo VALUES('0000-02-29');
INSERT 0 1
SELECT * FROM foo;
datum
---
0001-02-29 BC
(1 row)
COPY foo TO STDOUT;
0001-02-29 BC
On Monday, 25 February 2008, Leonardo Cezar wrote:
Hi Leonardo,
Thanks for your quick answer, I didn't know it was a TODO item, and that
some people were working on it... Keep going, then, because I'm really waiting
for these features!
On Mon, Feb 25, 2008 at 10:48 AM, David BOURIAUD
[EMAIL
I need to run about 1000 PostgreSQL connections on a server that I can
use about 4 GB of the total 16 GB of total RAM. It seems that each
session creates a process that uses about 15 MB of RAM just for
connecting so I'm running out of RAM rather quickly.
I have these non-default settings:
On Monday, 25 February 2008, Alvaro Herrera wrote:
Hmm, a quick look at the third patch reveals that it is using the
threaded Avahi client. That's a showstopper.
Could you elaborate why that is?
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Roberts, Jon wrote:
I need to run about 1000 PostgreSQL connections on a server that I can
use about 4 GB of the total 16 GB of total RAM. It seems that each
session creates a process that uses about 15 MB of RAM just for
connecting so I’m running out of RAM rather quickly.
I have these
Tom Lane [EMAIL PROTECTED] writes:
Gregory Stark [EMAIL PROTECTED] writes:
Hm, for a simple = or <> I think it doesn't matter which operator class you
use. For < or > it would produce different answers. Postgres isn't clever
enough
to notice that this is equivalent though so I think you would
Gregory Stark [EMAIL PROTECTED] writes:
Hm, for a simple = or <> I think it doesn't matter which operator class you
use. For < or > it would produce different answers. Postgres isn't clever
enough
to notice that this is equivalent though so I think you would have to do
something like (untested):
Gregory Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
I'm intending to get rid of ~=~ and ~<>~ for 8.4; there's no longer any
reason why those slots in the pattern_ops classes can't be filled by the
plain = and <> operators. (There *was* a reason when they were first
I need to run about 1000 PostgreSQL connections on a server that I
can
use about 4 GB of the total 16 GB of total RAM. It seems that each
session creates a process that uses about 15 MB of RAM just for
connecting so I'm running out of RAM rather quickly.
I think you're being bitten by
Roberts, Jon [EMAIL PROTECTED] writes:
I need to run about 1000 PostgreSQL connections on a server that I can
use about 4 GB of the total 16 GB of total RAM. It seems that each
session creates a process that uses about 15 MB of RAM just for
connecting so I'm running out of RAM rather
Bernd Helmle [EMAIL PROTECTED] writes:
CREATE TABLE foo(datum date);
INSERT INTO foo VALUES('0000-02-29');
Since there is no year zero according to Gregorian reckoning, this
should have been rejected to start with.
INSERT INTO foo VALUES('0001-02-29 BC');
ERROR: date/time field value out of
Tom Lane [EMAIL PROTECTED] writes:
Gregory Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
I'm intending to get rid of ~=~ and ~<>~ for 8.4; there's no longer any
reason why those slots in the pattern_ops classes can't be filled by the
plain = and <> operators. (There *was* a
Peter Eisentraut wrote:
On Monday, 25 February 2008, Alvaro Herrera wrote:
Hmm, a quick look at the third patch reveals that it is using the
threaded Avahi client. That's a showstopper.
Could you elaborate why that is?
Because it creates a new thread under the Postmaster to handle Avahi
I posted earlier about how to tune my server and I think the real
problem is how many connections pgAgent creates for my job needs.
I basically need to run hundreds of jobs daily all to be executed at
4:00 AM. To keep the jobs from killing the other systems, I am
throttling this with a queue
Gregory Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
How so? If you think this change is a bad idea you'd better speak up
PDQ.
Well I think it's fine for 'foo ' != 'foo' even if they sort similarly.
But I'm not sure it makes sense for 'foo ','a' to sort after
Tom Lane [EMAIL PROTECTED] writes:
Gregory Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
How so? If you think this change is a bad idea you'd better speak up
PDQ.
Well I think it's fine for 'foo ' != 'foo' even if they sort similarly.
But I'm not sure it makes sense
--On Monday, February 25, 2008 12:00:05 -0500 Tom Lane [EMAIL PROTECTED]
wrote:
regression=# select '0001-02-28 BC'::date + 1;
?column?
---
0001-02-29 BC
(1 row)
regression=# select '0002-02-28 BC'::date + 1;
?column?
---
0002-03-01 BC
(1 row)
I stepped
On Wednesday, 6 February 2008, Paul van den Bogaard wrote:
I was hoping someone in the community already has a makefile that
just creates object files from C sources directly, that I can use to
try out the effect of inlining on the performance of postgres.
This is now the default in 8.4devel.
On Mon, Feb 25, 2008 at 12:33 PM, David BOURIAUD
[EMAIL PROTECTED] wrote:
On Monday, 25 February 2008, Leonardo Cezar wrote:
Hi Leonardo,
Thanks for your quick answer, I didn't know it was a TODO item, and that
some people were working on it... Keep going, then, because I'm really waiting
Gregory Stark [EMAIL PROTECTED] writes:
It may be more right in an abstract ideal world -- the reality is that text
collation is annoyingly complex. But this may be a case where we can get away
with just eliding this hassle.
If anyone actually complains about it, I think we can point to the
Bernd Helmle [EMAIL PROTECTED] writes:
I stepped through the code in datetime.c and it seems the culprit here is
DecodeDate(). It gets the date string from DecodeDateTime(), but without
the 'BC' century notation. However, it then performs the following check
Yeah, I had just come to the
On Monday, 25.02.2008 at 15:05 +0100, Peter Eisentraut wrote:
On Tuesday, 27 November 2007, Mathias Hasselmann wrote:
Postmaster already has code to announce its services via DNS-SD
(ZeroConf) by using Apple's Bonjour API. This series of patches
implements that capability on top of
The libpq.rc make rule says:
# depend on Makefile.global to force rebuild on re-run of configure
$(srcdir)/libpq.rc: libpq.rc.in $(top_builddir)/src/Makefile.global
sed -e 's/\(VERSION.*\),0 *$$/\1,'`date '+%y%j' | sed 's/^0*//'`'/' $< >$@
However, libpq.rc is also included in the
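For reference, the `date`/`sed` pipeline embedded in that rule computes the build number like this (a standalone sketch of just that expression):

```shell
# Two-digit year concatenated with day-of-year, leading zeros stripped:
# e.g. 25 Feb 2008 -> "08056" -> 8056. This becomes the fourth component
# of the Windows VERSION resource, so each day's build gets a new number.
build=$(date '+%y%j' | sed 's/^0*//')
echo "$build"
```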
Mathias Hasselmann [EMAIL PROTECTED] writes:
On Monday, 25.02.2008 at 15:05 +0100, Peter Eisentraut wrote:
Is there a reason we couldn't use the Bonjour compatibility layer offered by
Avahi to keep the code differences to a minimum?
1) Avahi's compatibility layer doesn't implement the
On Sun, 2008-02-24 at 09:47 -0800, Joshua D. Drake wrote:
A less hacker and more DBA bottleneck will be to limit the number of
backends being created for restore. We don't really want to have more
than one backend per CPU, otherwise we just start switching.
Are you sure that it would always
On Monday, 25.02.2008 at 14:32 -0300, Alvaro Herrera wrote:
Peter Eisentraut wrote:
On Monday, 25 February 2008, Alvaro Herrera wrote:
Hmm, a quick look at the third patch reveals that it is using the
threaded Avahi client. That's a showstopper.
Could you elaborate why that is?
Get thee to a connection pooler ASAP.
We've got systems where we establish ~1K connections, but that's on
UNIX, where the handling of large systems is *WAY* more mature than
Windows.
Any time those kinds of quantities of connections appear necessary, it
seems highly preferable to be using
On Mon, 2008-02-25 at 12:05 -0800, Joshua D. Drake wrote:
On Mon, 25 Feb 2008 11:36:56 -0800
Jeff Davis [EMAIL PROTECTED] wrote:
If there is any significant I/O latency for a single backend, it seems
like a context switch could be a win
Mathias Hasselmann [EMAIL PROTECTED] writes:
Just to be sure we talk about the same topic: I assume the prohibition
you talk about is something like no use of threads in Postmaster?
Correct.
If that's the case: Are there some docs, mails, ... explaining the
rationale behind this restriction?
On Mon, 25 Feb 2008 11:36:56 -0800
Jeff Davis [EMAIL PROTECTED] wrote:
If there is any significant I/O latency for a single backend, it seems
like a context switch could be a win for processor utilization. It
might not be a win overall, but at
--On Monday, February 25, 2008 14:04:18 -0500 Tom Lane [EMAIL PROTECTED]
wrote:
The other issue is whether to throw error for year zero, rather than
silently interpreting it as 1 BC. I can't recall whether that behavior
was intentional at the time, but given our current rather strict
On Mon, 25 Feb 2008 12:17:10 -0800
Jeff Davis [EMAIL PROTECTED] wrote:
I would personally rather keep it as simple, hard-core, and data-shoving
as possible, without any issues with scheduling etc.
Just a thought. After it's actually
On Mon, Feb 25, 2008 at 7:21 PM, Peter Eisentraut [EMAIL PROTECTED] wrote:
The libpq.rc make rule says:
# depend on Makefile.global to force rebuild on re-run of configure
$(srcdir)/libpq.rc: libpq.rc.in $(top_builddir)/src/Makefile.global
sed -e 's/\(VERSION.*\),0 *$$/\1,'`date
On Mon, 2008-02-25 at 12:28 -0800, Joshua D. Drake wrote:
On Mon, 25 Feb 2008 12:17:10 -0800
Jeff Davis [EMAIL PROTECTED] wrote:
I would personally rather keep it as simple, hard-core, and data-shoving
as possible, without any issues with
On Mon, 25 Feb 2008 14:05:58 -0800
Jeff Davis [EMAIL PROTECTED] wrote:
Yep :) but as a note:
I am currently testing on the data set that is giving us all these
issues. Previously we were pushing ~ 22G an hour over a single
thread. I am
Dave Page wrote:
It's used on Windows to ensure that installers can do the right thing
when replacing a copy of libpq.dll. The daily build number was the
most maintenance-free way of getting a fourth value for the version
resource.
Isn't that what the shared library version numbers are for?
On Mon, Feb 25, 2008 at 10:03 PM, Peter Eisentraut [EMAIL PROTECTED] wrote:
Dave Page wrote:
It's used on Windows to ensure that installers can do the right thing
when replacing a copy of libpq.dll. The daily build number was the
most maintenance-free way of getting a fourth value for
Pavel Stehule wrote:
Hello,
I found an easy implementation of variadic functions. It's based on an
adaptation of FuncnameGetCandidates. When I find a variadic function,
I create the accurate number of last arguments (the difference between
pronargs and nargs). A variadic function can be signed via a flag or
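For illustration, the behavior being proposed, written with the VARIADIC syntax that later shipped in 8.4 (the function name is illustrative; at the time of this thread none of this syntax existed yet):

```sql
-- The last parameter absorbs any number of trailing arguments of the
-- element type, delivered to the function body as a single array:
CREATE FUNCTION least_of(VARIADIC nums int[]) RETURNS int
    LANGUAGE sql AS $$ SELECT min(x) FROM unnest($1) AS t(x) $$;

SELECT least_of(3, 1, 2);  -- the three ints arrive as ARRAY[3,1,2]
```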
Dave Page wrote:
Yes, because newer builds may be linked against updated runtime
versions. We need to be sure the installer will upgrade the file so it
definitely matches any runtimes (or other dependencies) that we're
also installing/upgrading.
If it is so very important to update this file
On Mon, Feb 25, 2008 at 11:48 PM, Peter Eisentraut [EMAIL PROTECTED] wrote:
Dave Page wrote:
Yes, because newer builds may be linked against updated runtime
versions. We need to be sure the installer will upgrade the file so it
definitely matches any runtimes (or other dependencies) that
On Mon, 25 Feb 2008 21:39:27 -0500
Tom Lane [EMAIL PROTECTED] wrote:
Hm, just noticed another little annoyance: CVS is going to complain about
the objfiles.txt files unless we add a .cvsignore entry to every last
subdirectory of the backend. That seems like a lot of maintenance
tedium. I
Hi,
Referring to this request http://momjian.us/mhonarc/patches_hold/msg00022.html
I have created a patch. The output doesn't exactly match what is stated
here
http://momjian.us/mhonarc/patches_hold/msg00023.html.
However, it does tell the required details in a similar format. I had
D'Arcy J.M. Cain [EMAIL PROTECTED] writes:
On Mon, 25 Feb 2008 21:39:27 -0500
Tom Lane [EMAIL PROTECTED] wrote:
Hm, just noticed another little annoyance: CVS is going to complain about
the objfiles.txt files unless we add a .cvsignore entry to every last
subdirectory of the backend. That
On Mon, 25 Feb 2008 14:11:16 -0800
Joshua D. Drake [EMAIL PROTECTED] wrote:
O.k. with 24 connections 3.5 hours. Testing with 12 now. 6 never
finished due to a bug.
Observations:
As simple as this solution is, it is neither elegant nor smart.
Using this method, if you have a 100GB table
Simon Riggs [EMAIL PROTECTED] writes:
... So it would be good if we could dump objects in 3 groups
1. all commands required to re-create table
2. data
3. all commands required to complete table after data load
[ much subsequent discussion snipped ]
BTW, what exactly was the use-case for
On Mon, 2008-02-25 at 21:18 -0800, Joshua D. Drake wrote:
As simple as this solution is, it is neither elegant nor smart.
Using this method, if you have a 100GB table (which is very common)
you are still bound in a bad way by a single connection and you are
holding up everyone else.
In
Simon Riggs [EMAIL PROTECTED] wrote:
One of the reasons why I hadn't wanted to pursue earlier ideas to use
LOCK was that applying a lock will prevent running in parallel, which
ultimately may prevent further performance gains.
Is there a way of doing this that will allow multiple concurrent
On Mon, 25 Feb 2008 22:29:32 -0800
Jeff Davis [EMAIL PROTECTED] wrote:
For me it would still be very helpful. If that 100GB table has several
indexes, particularly on localized text, that can take a lot of
processor time to rebuild (even for a substantially smaller dataset,
like in the 7 hour
Peter Eisentraut [EMAIL PROTECTED] writes:
Tom Lane wrote:
Hm, just noticed another little annoyance: CVS is going to complain about
the objfiles.txt files unless we add a .cvsignore entry to every last
subdirectory of the backend.
Complain how?
Try a cvs diff or cvs update while these
Tom Lane wrote:
Hm, just noticed another little annoyance: CVS is going to complain about
the objfiles.txt files unless we add a .cvsignore entry to every last
subdirectory of the backend.
Complain how? Why should it complain more or less than about the SUBSYS.o
files?
--
Peter Eisentraut