On 23.01.14 02:14, Jim Nasby wrote:
On 1/19/14, 5:51 PM, Dave Chinner wrote:
Postgres is far from being the only application that wants this; many
people resort to tmpfs because of this:
https://lwn.net/Articles/499410/
Yes, we covered the possibility of using tmpfs much earlier in the
Tom Lane wrote:
In testing the TRIGGER WHEN patch, I notice that pg_dump is relying on
pg_get_triggerdef(triggeroid, true) (ie, pretty mode) to dump
triggers. This means that trigger WHEN conditions will be dumped
without adequate parenthesization to ensure they are interpreted the
same way
Andrew Dunstan wrote:
But Pg
should have some pretty print function - it is easily implemented there.
Personally, I prefer Celko's notation; it is a little bit more compact
SELECT sh.shoename, sh.sh_avail, sh.slcolor, sh.slminlen,
sh.slminlen * un.un_fact AS slminlen_cm,
Simon Riggs wrote:
No, because as I said, if archive_command has been returning non-zero
then the archive will be incomplete.
Yes. You think that's wrong? How would you like it to behave, then? I
don't think you want the shutdown to wait indefinitely until all files
have been
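The archive_command contract being discussed above can be sketched in a few lines of shell. This is an illustrative wrapper following the convention in the PostgreSQL docs (refuse to overwrite, return non-zero on failure so the archiver retries); the paths and file names are made up for the demo.

```shell
# Hypothetical archive_command helper: $1 is %p (path of the WAL segment),
# $2 is %f (its file name). A non-zero exit tells the server the segment
# is NOT safely archived yet, so it keeps the file and retries later.
ARCHIVEDIR=$(mktemp -d)   # stands in for the real archive destination

archive_wal() {
    src="$1"
    dst="$2"
    # Never overwrite an existing archive file; that would hide corruption.
    test ! -f "$ARCHIVEDIR/$dst" || return 1
    cp "$src" "$ARCHIVEDIR/$dst"
}

tmp=$(mktemp); echo "wal data" > "$tmp"
archive_wal "$tmp" 000000010000000000000001 && echo "archived"
archive_wal "$tmp" 000000010000000000000001 || echo "refused duplicate"
```

The second call failing is the point of the thread: as long as archive_command keeps returning non-zero, segments pile up unarchived, and a shutdown cannot leave behind a complete archive.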
Andrew Dunstan wrote:
We're in Beta. You can't just go yanking stuff like that. Beta testers
will be justifiably very annoyed.
Please calm down.
pg_standby is useful and needs to be correct. And its existence as a
standard module is one of the things that has made me feel confident
about
Tom Lane wrote:
Not at all, because the database would be very unhappy at restart
if it can't find the checkpoint record pg_control is pointing to.
So for several weeks now all postings just say how it will _not_ work.
Does this boil down to "There's no way to make sure that a graceful
Heikki Linnakangas wrote:
No, no crash is involved. Just a normal server shutdown and start:
1. Server shutdown is initiated
2. A shutdown checkpoint is recorded at XLOG point 1234, redo ptr is
also 1234.
3. An XLOG_SWITCH record is written at 1235, right after the checkpoint
record.
4. The
Fujii Masao wrote:
Hi,
On Tue, Apr 21, 2009 at 8:28 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Simon Riggs wrote:
If you do this, then you would have to change the procedure written into
the 8.3 docs also. Docs aren't backpatchable.
What you propose is
Heikki Linnakangas wrote:
Andreas Pflug wrote:
I've been following the thread with a growing lack of understanding of why
this is so heatedly discussed, and I went back to the documentation of
what the restore_command should do (
http://www.postgresql.org/docs/8.3/static/warm-standby.html )
While
I've been following the thread with a growing lack of understanding of why
this is so heatedly discussed, and I went back to the documentation of
what the restore_command should do (
http://www.postgresql.org/docs/8.3/static/warm-standby.html )
While the algorithm presented in the pseudocode isn't
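The restore_command side of the warm-standby contract can be sketched the same way. This is an illustrative stand-in, not pg_standby itself: the exit status is the protocol, 0 meaning the requested WAL file was delivered and non-zero meaning it is not (yet) available.

```shell
# Hypothetical restore_command helper: $1 is %f (requested WAL file name),
# $2 is %p (path where the server expects it). Exit non-zero when the file
# is absent so the standby knows to wait and retry.
ARCHIVEDIR=$(mktemp -d)   # stands in for the shared archive location

restore_wal() {
    f="$1"
    p="$2"
    test -f "$ARCHIVEDIR/$f" || return 1
    cp "$ARCHIVEDIR/$f" "$p"
}

echo "segment" > "$ARCHIVEDIR/000000010000000000000005"
restore_wal 000000010000000000000005 /tmp/restored_seg && echo "restored"
restore_wal 000000010000000000000099 /tmp/none || echo "not in archive yet"
```

A real standby script (like pg_standby) loops on the "not available" case instead of returning immediately; the pseudocode in the 8.3 docs is this loop.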
alexander lunyov wrote:
Guillaume Smet wrote:
I want to try the new pg_dump to connect to the old server, but I can't: the
old postgres isn't listening on a network socket. Why is postgres 6.5.3 not
binding to a network socket? It started with this line:
Maybe you should just dump schema and data separately
alexander lunyov wrote:
Andreas Pflug wrote:
I want to try the new pg_dump to connect to the old server, but I can't: the
old postgres isn't listening on a network socket. Why is postgres 6.5.3 not
binding to a network socket? It started with this line:
Maybe you should just dump schema and data separately
David E. Wheeler wrote:
How about a simple rule, such as that machine-generated comments start
with ##, while user comments start with just #? I think that I've
seen such a rule used before. At any rate, I think that, unless you
have some sort of line marker for machine-generated comments,
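The marker rule proposed above is easy to illustrate. The `##`-vs-`#` convention is the proposal from the thread; this little classifier is only a sketch of how a config-rewriting tool could tell which comments it owns.

```python
# Sketch of the proposed rule: "##" marks machine-generated comments (safe
# for a tool to rewrite or drop), "#" marks user comments (must be kept).
def classify_line(line: str) -> str:
    stripped = line.lstrip()
    if stripped.startswith("##"):
        return "machine"
    if stripped.startswith("#"):
        return "user"
    return "content"

config = [
    "## written by the tuning tool",
    "# leave my note alone",
    "shared_buffers = 128MB",
]
print([classify_line(l) for l in config])
# → ['machine', 'user', 'content']
```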
Gregory Stark wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
Why do so many people here insist on editing postgresql.conf as the primary means
of changing config params?
Isn't a psql -c "SET foo=bar; MAKE PERSISTENT" just as good as sed'ing
postgresql.conf or doing it manually?
no, it's
Gregory Stark wrote:
So all you have is our existing file except with an additional layer of
quoting to deal with, a useless SET keyword to annoy users, and a file that
you need a bison parser
Don't you think that's a little over the top, throwing bison at the
simple task to extend
Gregory Stark wrote:
Text config files are NOT friendly for beginners and average users. IMHO the
current restriction on GUC changes is a major obstacle towards pgsql tuning
tools, e.g. written as a Google SoC project. Graphic tools aren't too popular
at pgsql-hackers, but please contemplate
Tom Lane wrote:
I grow weary of this thread. I will say it once more: I do not believe
for one instant that the current formatting of postgresql.conf is the
major impediment, or even a noticeable impediment, to producing a useful
configuration wizard. If you wish to prove otherwise, provide a
Gregory Stark wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
I personally wouldn't even think about starting such a wizard, unless I have an
idea how to push the result into the database. No, not a file, but via SQL!
So your statement that you won't react unless a wizard is almost ready
Greg Smith wrote:
On Thu, 5 Jun 2008, Magnus Hagander wrote:
We really need a proper API for it, and the stuff in pgAdmin isn't
even enough to base one on.
I would be curious to hear your opinion on whether the GUC overhaul
discussed in this thread is a useful precursor to building such a
Decibel! wrote:
There's no reason that the server has to deal with a text file. I
completely agree that there must be a method to change settings even if
the database isn't running, but that method does not necessarily need to
be a text file. If we can come up with a standard API for reading
Aidan Van Dyk wrote:
* Andreas Pflug [EMAIL PROTECTED] [080604 10:20]:
Hiding the storage of config parameters opaquely behind an API is
something I've been hating for a long time on win32.
;-)
When reading this thread, I'm wondering if anybody ever saw a config
file
Tom Lane wrote:
* Can we present the config options in a more helpful way (this is 99%
a documentation problem, not a code problem)?
* Can we build a configuration wizard to tell newbies what settings
they need to tweak?
It's certainly one thing to create an initial postgresql.conf from
Florian Pflug wrote:
But maybe you could store the whitespace appearing before (or after?)
a token in the parse tree that is stored for a view. That might not
allow reconstructing the *precise* statement, but at least the
reconstructed statement would preserve newlines and indention - which
Tom Lane wrote:
stephen layland [EMAIL PROTECTED] writes:
I've written a quick patch against the head branch (8.4DEV, but it also
works with 8.1.3 sources) to fix LDAP authentication support to
work with LDAPS servers that do not need start TLS. I'd be interested
to hear your opinions on
Robert Treat wrote:
On Friday 25 January 2008 06:40, Simon Riggs wrote:
Notes: As the syntax shows, these would be statement-level triggers
(only). Requesting row level triggers will cause an error. [As Chris
Browne explained, if people really want, they can use these facilities
to create a
Simon Riggs wrote:
My thinking was if you load a 1000 rows and they all have the same key
in your summary table then you'll be doing 1000 updates on a single row.
This is true because the statement level triggers are still rudimentary,
with no OLD and NEW support. A single AFTER statement
Michael Glaesemann wrote:
On Oct 12, 2007, at 10:19 , Gregory Stark wrote:
It would make Postgres inconsistent and less integrated with the rest
of the
OS. How do you explain that Postgres doesn't follow the system's
configurations and the collations don't agree with the system
collations?
Alexey Klyukin wrote:
For what use cases do you think your WAL-based approach is better than
Slony/Skytools trigger-based one ?
A pure trigger based approach can only replicate data for the commands
which fire triggers. AFAIK Slony is unable to replicate TRUNCATE
command
It could
Andrew Dunstan wrote:
I have no idea why that's done - it goes back to the origins of the
syslogger - probably because someone mistakenly thinks all Windows
text files have to have CRLF line endings.
I tried changing that to _O_BINARY, and calling _setmode on both the
pipe before it's duped
Andrew Dunstan wrote:
I have no idea why that's done - it goes back to the origins of the
syslogger - probably because someone mistakenly thinks all Windows
text files have to have CRLF line endings.
Yes, this was intentional; Notepad still doesn't like LF line endings.
Not my preferred text
Andrew Dunstan wrote:
Not for Wordpad though, and it's pretty universal too. And Notepad
won't load a file of any great size anyway. Furthermore, we just can't
have this alongside the pipe chunking protocol, so I'm inclined to
blow it away altogether, unless there are pretty loud squawks.
Simon Riggs wrote:
The objections to applying this patch originally were:
2. it would restrict number of digits to 508 and there are allegedly
some people that want to store 508 digits.
If 508 digits are not enough, would 1000 digits be sufficient? Both limits
appear quite arbitrary to me.
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
Simon Riggs wrote:
The objections to applying this patch originally were:
2. it would restrict number of digits to 508 and there are allegedly
some people that want to store 508 digits.
If 508 digits are not enough
Tom Lane wrote:
Ottó Havasvölgyi [EMAIL PROTECTED] writes:
When using views built with left joins, and then querying against these
views, there are a lot of joins in the plan that are not necessary, because I
don't select/use any column of each table in the views every
Luke Lonergan wrote:
I advocate the following:
- Enable specification of TOAST policy on a per column basis
As a first step, then:
- Enable vertical partitioning of tables using per-column specification of
storage policy.
Wouldn't it be enough to enable having the toast table on a
Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
On Fri, 2007-03-09 at 11:15 -0500, Tom Lane wrote:
It strikes me that allowing archive_command to be changed on the fly
might not be such a good idea though, or at least it shouldn't be
possible to flip it from empty to nonempty
Magnus Hagander wrote:
The easy fix for this is to remove the calls. Which obviously will break
some client apps. A fairly easy fix for the WSAStartup() call is to have
a check in the connection functions against a global variable that will
then make sure to call WSAStartup() the first time
Chris Browne wrote:
The trouble is that there needs to be a sufficient plurality in favor
of *a particular move onwards* in order for it to happen.
Right now, what we see is:
- Some that are fine with status quo
- Some that are keen on Subversion
- Others keen on Monotone
- Others
Tom Lane wrote:
Jacob Rief [EMAIL PROTECTED] writes:
I tried to write a trigger using C++.
That is most likely not going to work anyway, because the backend
operating environment is C not C++. If you dumb it down enough
--- no exceptions, no RTTI, no use of C++ library --- then it
Dave Page wrote:
Andreas Pflug wrote:
Not much function to re-create here, single
exception is extracting cluster wide data, the -g option, that's why I
mentioned scripting. But apparently this didn't get into pgadmin svn any
more, so I need to retract this proposal.
Eh? Your
Jim C. Nasby wrote:
It might make sense to provide a programmatic interface to pg_dump to
provide tools like pgAdmin more flexibility.
Are you talking about pg_dump in a lib? Certainly a good idea, because
it allows better integration (e.g. progress bar).
But it certainly doesn't make sense
Dave Page wrote:
In pgAdmin we use pg_dump's -f option to write backup files. The IO
streams are redirected to display status and errors etc. in the GUI.
In order to enhance the interface to allow backup of entire clusters as
well as role and tablespace definitions, we need to be able to get
Neil Conway wrote:
Why does adminpack install functions into pg_catalog? This is
inconsistent with the rest of the contrib/ packages, not to mention the
definition of pg_catalog itself (which ought to hold builtin object
definitions). And as AndrewSN pointed out on IRC, it also breaks
Neil Conway wrote:
On Fri, 2006-10-20 at 05:52 +0100, Dave Page wrote:
The adminpack was originally written and intended to become builtin
functions
This is not unique to adminpack: several contrib modules might
eventually become (or have already become) builtins, but adminpack is
Andrew Dunstan wrote:
Marlon Petry wrote:
pg_dump and pg_restore do not need to run on the server machine.
Why not
just run them where you want the dump stored?
But I would need to have pg_dump and pg_restore installed on the client
machine?
Without having installed pg_dump
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
Then after you recover from your head exploding you start devising some
sort of sane API ...
That's the hard part. There is no percentage in having a library if
it doesn't do anything significantly different from what you
Simon Riggs wrote:
Zero administration overhead now possible (Alvaro)
With autovacuum enabled, all required vacuuming will now take place
without administrator intervention enabling wider distribution of
embedded databases.
This was true for 8.1 already, no?
Regards,
Andreas
Bruce Momjian wrote:
Done, because most people will turn autovacuum on, even if it isn't on
by default.
I wonder how many distros will turn on autovacuum as well, making it the
de-facto standard anyway.
Regards,
---(end of broadcast)---
TIP
Peter Eisentraut wrote:
With time, it becomes ever clearer to me that prepared SQL statements are
just
a really bad idea. On some days, it seems like half the performance problems
in PostgreSQL-using systems are because a bad plan was cached somewhere. I'd
say, in the majority of cases
Merlin Moncure wrote:
On 8/31/06, Peter Eisentraut [EMAIL PROTECTED] wrote:
With time, it becomes ever clearer to me that prepared SQL statements
are just
a really bad idea. On some days, it seems like half the performance
problems
in PostgreSQL-using systems are because a bad plan was
Tom Lane wrote:
My objection here is basically that this proposal passed on the
assumption that it would be very nearly zero effort to make it happen.
We are now finding out that we have a fair amount of work to do if we
want autovac to not mess up the regression tests, and I think that has
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
My objection here is basically that this proposal passed on the
assumption that it would be very nearly zero effort to make it happen.
Kicking out autovacuum as default is a disaster, it took far
Peter Eisentraut wrote:
On Tuesday, 29 August 2006 11:14, Andreas Pflug wrote:
already pointed out, all win32 installations have it on by default, to
keep them on the safe side. Disabling it for modules a retail user
will never launch appears to be overreacting.
Well, the really big
Tom Lane wrote:
My take on all this is that there's no one-size-fits-all replication
solution, and therefore the right approach is to have multiple active
subprojects.
Anybody knowing a little about the world of replication needs will
agree with you here. Unfortunately, AFAICS pgcluster
Tom Lane wrote:
Almost everything I just said is already how it works today; the
difference is that today you do not have the option to drop t1 without
dropping the sequence, because there's no (non-hack) way to remove the
dependency.
As far as I understand your proposal I like it, but
Tom Lane wrote:
If you insist on initially creating the sequence by saying SERIAL for
the first of the tables, and then saying DEFAULT nextval('foo_seq')
for the rest, then under both 8.1 and my proposal you'd not be able to
drop the first table without dropping the sequence (thus requiring
Magnus Hagander wrote:
Since I have a stuck backend without client again, I'll have to kill
-SIGTERM a backend. Fortunately, I do have console access to that
machine and it's not win32 but a decent OS.
You do know that on Windows you can use pg_ctl
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
I am more than somewhat perplexed as to why the NUL device should be a
security risk ... what are they thinking??
Frankly, I don't believe it; even Microsoft can't be that stupid.
And I can't find any suggestion that they've
Bruce Momjian wrote:
Andreas Pflug wrote:
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
I am more than somewhat perplexed as to why the NUL device should be a
security risk ... what are they thinking??
Frankly, I don't believe it; even Microsoft can't be that stupid
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
what issues might arise if the output is redirected to a legal tmp file?
Well, (1) finding a place to put the temp file, ie a writable directory;
(2) ensuring the file is removed afterwards; (3) not exposing the user
Tom Lane wrote:
The other, probably more controversial bit of functionality is that there
needs to be a way to cause a backend to load a PL plugin shared library
without any cooperation from the connected client application. For
interactive use of a PL debugger it might be sufficient to tell
Tom Lane wrote:
I'd turn that around: I think you are arguing for a way to change GUC
settings on-the-fly for a single existing session, without cooperation
from the client.
Ok, implemented that way would solve it (partially)
Something like pg_set_guc(pid int4, varname text, value text)
Bruce Momjian wrote:
Right, hence usability, not new enterprise features.
I'm not too happy about the label usability.
Ok, "maybe postgres finally gets usable by supporting features that
MySQL has had for a long time", a MySQL guy would say.
Regards,
Andreas
Andrew Dunstan wrote:
Andreas Pflug wrote:
Since I have a stuck backend without client again, I'll have to kill
-SIGTERM a backend. Fortunately, I do have console access to that
machine and it's not win32 but a decent OS.
You do know that on Windows you can use pg_ctl to send a pseudo
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
utils/adt/misc.c says:
/* Disabled in 8.0 due to reliability concerns; FIXME someday */
Datum
pg_terminate_backend(PG_FUNCTION_ARGS)
Well, AFAIR there were no more issues raised about code paths that don't
clean up
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
No, you have that backwards. The burden of proof is on those who want
it to show that it's now safe.
If the backend's stuck, I'll have to SIGTERM it, whether there's
pg_terminate_backend
Csaba Nagy wrote:
On Thu, 2006-08-03 at 18:10, Csaba Nagy wrote:
You didn't answer the original question: is killing SIGTERM a backend
^^^
Nevermind, I don't do that. I do 'kill backend_pid' without specifying
the signal, and
Bruce Momjian wrote:
I am not sure how you prove the non-existence of a bug. Ideas?
Would be worth at least the Nobel prize :-)
Regards,
Andreas
Csaba Nagy wrote:
man kill says the default is SIGTERM.
OK, so that means I do use it... is it known to be dangerous? I thought
till now that it is safe to use.
Apparently you never suffered any problems from that; neither did I.
What about select pg_cancel_backend()
That's the
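The point about kill's default is easy to see in any shell. A tiny demonstration (bash assumed): a child terminated by an unqualified kill exits with status 143, i.e. 128 + 15 (SIGTERM).

```shell
# Show that plain "kill" sends SIGTERM by default, as man kill says.
sleep 30 &
pid=$!
kill "$pid"            # no -SIGNAL option: SIGTERM is the default
wait "$pid"
status=$?
echo "exit status: $status"
```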
Since I have a stuck backend without client again, I'll have to kill -SIGTERM a
backend. Fortunately, I do
have console access to that machine and it's not win32 but a decent OS. For
other cases I'd really really really
appreciate if that function would make it into 8.2.
utils/adt/misc.c
Peter Eisentraut wrote:
Bort, Paul wrote:
The Linux kernel changed to the standard years ago. And that's just a
few more lines of code than PostgreSQL. (
http://kerneltrap.org/node/340 and others )
For your entertainment, here are the usage numbers from the linux-2.6.17
kernel:
Josh Berkus wrote:
Andreas,
Some weeks ago I proposed a PROGRESS parameter for COPY, to enable
progress feedback via notices. tgl thinks nobody needs that...
Well, *Tom* doesn't need it. What mechanism did you propose to make this
work?
Extended the parser to accept that
Andrew Dunstan wrote:
It strikes me that this is actually a bad thing for pgadmin3 to be
doing. It should use its own file, not the default location, at least
if the libpq version is >= 8.1. We provided the PGPASSFILE environment
setting just so programs like this could use alternative
Gregory Stark wrote:
Has anyone looked thought about what it would take to get progress bars from
clients like pgadmin? (Or dare I even suggest psql:)
Some weeks ago I proposed a PROGRESS parameter for COPY, to enable
progress feedback via notices. tgl thinks nobody needs that...
Tom Lane wrote:
Zeugswetter Andreas DCP SD [EMAIL PROTECTED] writes:
The solution to the foreign key problem seems easy if I
modify the PostgreSQL implementation and take off the ONLY word
from the SELECT query, but it's not an option for me, as I'm
I think that the ONLY was wrong from day
Bruce Momjian wrote:
For use case, consider this:
COPY mytable TO '| rsh [EMAIL PROTECTED] test ';
so you can COPY to another server directly.
Why not rsh psql -c "\copy foobar to test"?
Regards,
Andreas
I've been playing around with COPYing large binary data, and implemented
a COMPRESSION transfer format. The server side compression saves
significant bandwidth, which may be the major limiting factor when large
amounts of data are involved (i.e. in many cases where COPY TO/FROM
STDIN/STDOUT is
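The bandwidth argument is easy to make concrete: text-format COPY output is highly repetitive and compresses very well. The sketch below uses zlib on fabricated COPY-style rows purely for illustration; it is not the patch's wire format, and the ratio depends entirely on the data.

```python
# Rough illustration of why compressing COPY data saves bandwidth:
# tab-separated, newline-terminated rows are highly repetitive.
import zlib

# Fake "COPY t TO STDOUT" text output (contents made up for the demo).
rows = "\n".join(
    f"{i}\tsome repetitive payload value\t2006-05-25"
    for i in range(10_000)
).encode()

compressed = zlib.compress(rows, 6)
print(f"raw: {len(rows)} bytes, compressed: {len(compressed)} bytes")
assert len(compressed) < len(rows) // 5   # big win on repetitive data
```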
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
The attached patch implements COPY ... WITH [BINARY] COMPRESSION
(compression implies BINARY). The copy data uses bit 17 of the flag
field to identify compressed data.
I think this is a pretty horrid idea, because it changes
Tom Lane wrote:
After re-reading what I just wrote to Andreas about how compression of
COPY data would be better done outside the backend than inside, it
struck me that we are missing a feature that's fairly common in Unix
programs. Perhaps COPY ought to have the ability to pipe its output
to a
Andreas Pflug wrote:
Won't help too much, until gzip's output is piped back too, so a
replacement for COPY .. TO STDOUT COMPRESSED would be
COPY ... TO '| /bin/gzip |' STDOUT, to enable clients to receive the
reduced stuff.
Forgot to mention:
COPY COMPRESSED was also meant to introduce
Dave Page wrote:
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Andreas Pflug
Sent: 31 May 2006 16:41
Cc: Tom Lane; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Possible TODO item: copy to/from pipe
Andreas Pflug wrote:
Won't help too
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
Do you have a comment about the progress notification and its impact on
copy to stdout?
I didn't bother to comment on it because I think it's useless,
It's useful to see anything at all, and to be able to estimate how long
the whole
Joshua D. Drake wrote:
I dislike putting this into the backend precisely because it's trying to
impose a one-size-fits-all compression solution. Someone might wish to
use bzip2 instead of gzip, for instance, or tweak the compression level
options of gzip. It's trivial for the user to do that
Chris Browne wrote:
[EMAIL PROTECTED] (Andreas Pflug) writes:
Dave Page wrote:
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Andreas
Pflug
Sent: 31 May 2006 16:41
Cc: Tom Lane; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Possible TODO item
Dave Page wrote:
It's not about a primarily GUI based OS not being able to do
everything a traditionally command line based OS can do on the
command line, it's about providing a solution that will work on
either and remain portable. Whilst I agree with your objection to
using
Jim C. Nasby wrote:
Also, regarding needing to place an archiver command in
pg_start_backup_online, another option would be to depend on the
filesystem backup to copy the WAL files, and just let them pile up in
pg_xlog until pg_stop_backup_online. Of course, that would require a
two-step
Tom Lane wrote:
I wrote:
I'm off for a little visit with oprofile...
It seems the answer is that fwrite() does have pretty significant
per-call overhead, at least on Fedora Core 4. The patch I did yesterday
still ended up making an fwrite() call every few characters when dealing
with bytea
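The buffering idea behind that fix can be sketched independently of the C code: accumulate output in a local buffer and hand it to the I/O layer in large chunks instead of one call per character. The sketch below only models the call-count difference; io.BytesIO stands in for the stdio stream.

```python
# Same bytes out, far fewer calls into the I/O layer.
import io

def write_per_char(data: bytes):
    out, calls = io.BytesIO(), 0
    for i in range(len(data)):
        out.write(data[i:i + 1])
        calls += 1
    return out.getvalue(), calls

def write_buffered(data: bytes, bufsize: int = 8192):
    out, calls = io.BytesIO(), 0
    for i in range(0, len(data), bufsize):
        out.write(data[i:i + bufsize])
        calls += 1
    return out.getvalue(), calls

payload = b"x" * 100_000
a, calls_a = write_per_char(payload)
b, calls_b = write_buffered(payload)
assert a == b            # identical output
print(calls_a, calls_b)  # 100000 vs 13 calls
```

When each call carries per-call overhead (as fwrite() does), the second variant's cost is dominated by the data, not the call count.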
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
That's right, but my proposal would implicitely switch on archiving
while backup is in progress, thus explicitely enabling/disabling
archiving wouldn't be necessary.
I'm not sure you can expect that to work. The system is not built
Simon Riggs wrote:
On Thu, 2006-05-25 at 17:25 +0200, Andreas Pflug wrote:
Currently, I have to
edit postgresql.conf and SIGHUP to turn on archiving configuring a
(hopefully) writable directory, do the backup, edit postgresql.conf and
SIGHUP again. Not too convenient...
You're doing
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
I'm not sure you can expect that to work. The system is not built to
guarantee instantaneous response to mode changes like that.
Um, as long as xlog writing stops immediate recycling when
pg_start_backup is executed
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
Looking at CopySendData, I wonder whether any traction could be gained
by trying not to call fwrite() once per character. I'm not sure how
much per-call overhead there is in that function. We've done a lot of
work
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
Here are the results, with the copy patch:
psql \copy 1.4 GB from table, binary:
8.0: 36s, 8.1: 34s, 8.2dev: 36s
psql \copy 6.6 GB from table, std:
8.0: 375s, 8.1: 362s, 8.2dev: 290s (second run: 283s)
Hmph
Currently, WAL files will be archived as soon as archive_command is set.
IMHO, this is not desirable if no permanent backup is wanted, but only
scheduled online backup, because it will flood the wal_archive
destination with files that will never be used.
I propose to introduce a GUC
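For context, the behaviour the proposal wants to change hangs off a single setting. A minimal sketch of the 8.x-era postgresql.conf knob (the archive_command shown is the stock example from the docs; the proposed GUC itself does not exist):

```
# postgresql.conf (8.x): archiving runs whenever archive_command is set.
# There is no separate switch for "archive only during an online backup";
# that is exactly the gap the proposed GUC would fill.
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
```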
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
I propose to introduce a GUC permanent_archiving or so, to select
whether wal archiving happens permanently or only when a backup is in
progress (i.e. between pg_start_backup and pg_stop_backup).
This is silly. Why not just turn
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
This is silly. Why not just turn archiving on and off?
Not quite. I want online backup, but no archiving. Currently, I have to
edit postgresql.conf and SIGHUP to turn on archiving configuring a
(hopefully) writable
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
When dumping the table with psql \copy (non-binary), the resulting file
would be 6.6GB of size, taking about 5.5 minutes. Using psql \copy WITH
BINARY (modified psql as posted to -patches), the time was cut down to
21-22 seconds
Jim Nasby wrote:
On May 25, 2006, at 11:24 AM, Andreas Pflug wrote:
BTW, I don't actually understand why you want this at all. If you're
not going to keep a continuing series of WAL files, you don't have any
PITR capability. What you're proposing seems like a bulky, unportable,
hard-to-use
Marc Munro wrote:
Veil http://pgfoundry.org/projects/veil is currently not a very good
Postgres citizen. It steals what little shared memory it needs from
postgres' shared memory using ShmemAlloc().
For Postgres 8.2 I would like Veil to be a better citizen and use only
what shared memory has
Martijn van Oosterhout wrote:
The biggest headache I find with using postgres is that various GPL
licenced programs have trouble directly shipping postgresql support
because of our use of OpenSSL. Each and every one of those program
needs to add an exception to their licence for distributors to
Christopher Kings-Lynne wrote:
I think Martijn van Oosterhout's nearby email on coverity bug reports might
make a good SoC project, but should it also be added to the TODO list?
I may as well put up phpPgAdmin for it. We have plenty of projects
available in phpPgAdmin...
Same with pgAdmin3.