On 1/28/08, Greg Sabino Mullane [EMAIL PROTECTED] wrote:
I am not thrilled about moving _some_ of pgcrypto into the backend ---
pgcrypto right now seems well designed and if we pull part of it out it
seems it will be less clear than what we have now. Perhaps we just need
to document that
* Bruce Momjian:
I am not thrilled about moving _some_ of pgcrypto into the backend ---
pgcrypto right now seems well designed and if we pull part of it out it
seems it will be less clear than what we have now. Perhaps we just need
to document that md5() isn't for general use and some
Jeff Davis wrote:
On Mon, 2008-01-28 at 23:13 +, Heikki Linnakangas wrote:
It's a good point that we don't want pg_dump to screw up the cluster
order, but that's the only use case I've seen this far for disabling
sync scans. Even that wouldn't matter much if our estimate for
clusteredness
Is it possible to authenticate using GSSAPI over the V2 protocol? Is
there any documentation on the message formats for V2?
Kris Jurka
Dear All,
Have you ever stored an image in PostgreSQL using VB6? Could you give me an example?
Thank you,
Premsun
NETsolutions Asia
Limited
+66 (2) 237 7247
It's a good point that we don't want pg_dump to screw up the cluster
order, but that's the only use case I've seen this far for disabling
sync scans. Even that wouldn't matter much if our estimate for
clusteredness didn't get screwed up by a table that looks
like this:
5 6 7 8 9 1 2 3
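A toy illustration (not PostgreSQL code) of the point above: the heap order 5 6 7 8 9 1 2 3 is a rotation of a sorted sequence, so it has exactly one "descent" (a place where the next value is smaller), and viewed as a circular structure starting at that descent it is perfectly ordered.

```python
def descents(values):
    """Count positions where the sequence stops increasing."""
    return sum(1 for a, b in zip(values, values[1:]) if b < a)

def is_rotated_sorted(values):
    """A sequence with at most one descent, counting the wrap-around
    comparison, is a rotation of a sorted sequence."""
    n = len(values)
    wraps = sum(1 for i in range(n) if values[(i + 1) % n] < values[i])
    return wraps <= 1

heap_order = [5, 6, 7, 8, 9, 1, 2, 3]
print(descents(heap_order))           # 1: a single break, in the middle
print(is_rotated_sorted(heap_order))  # True: circularly, perfectly clustered
```

This is the shape of table a clusteredness estimate based on simple linear correlation would penalize, even though a scan treating the file circularly sees it as fully ordered.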
Zeugswetter Andreas ADI SD [EMAIL PROTECTED] writes:
Sorry, but I don't grok this at all. Why the heck would we care if we have 2
parts of the table perfectly clustered, because we started in the middle?
Surely our stats collector should recognize such a table as perfectly
clustered. Does
Am Montag, 28. Januar 2008 schrieb Neil Conway:
To parse the ORDER BY clause, we'd need to special-case array_agg() in
the grammar
The ORDER BY clause would also be used in XMLAGG, so we should try to parse
this in a generalized way.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
On Mon, Jan 28, 2008 at 09:11:31PM -0500, Tom Lane wrote:
Hi Magnus,
Would you take a look at the patch I just committed in fe-connect.c?
I found out today that PQreset() wasn't working on a GSSAPI connection,
because closePGconn hadn't been patched to clear out the GSSAPI state
+1. If we go with 'enable_sync_seqscans' for 8.3, and in a future release
cycle we test the cases Simon described above and agree we need to
fine-tune to benefit from this feature, we will need to deprecate
'enable_sync_seqscans' and invent another one
Simon Riggs wrote:
On Tue, 2008-01-22 at 20:53 +0100, Pavel Stehule wrote:
Agreed. I think Pavel Stehule was doing some experiments with them, I
don't know if he got anywhere.
I did only initial research. Any hack is possible - you can stack the
current transaction, but real
Bruce Momjian wrote:
Plus I think we'd be able to improve the code for CREATE INDEX under
HOT, and probably a few other wrinkly bits of code.
Added to TODO:
* Add anonymous transactions
http://archives.postgresql.org/pgsql-hackers/2008-01/msg00893.php
Sorry, updated to Add
On Tue, 29 Jan 2008, Cristian Gafton wrote:
I have a ~150GB sized server, containing two databases that are active in
mostly read mode. I have noticed lately that the global/pgstat.stat file is
somewhere around 1MB freshly after a restart, but at some point it balloons to
74MB in size for no
Hello all,
I have a ~150GB sized server, containing two databases that are active in
mostly read mode. I have noticed lately that the global/pgstat.stat file
is somewhere around 1MB freshly after a restart, but at some point it
balloons to 74MB in size for no apparent reason, after a few
On Tue, Jan 29, 2008 at 10:40:40AM +0100, Zeugswetter Andreas ADI SD wrote:
It's a good point that we don't want pg_dump to screw up the cluster
order, but that's the only use case I've seen this far for disabling
sync scans. Even that wouldn't matter much if our estimate for
Euler Taveira de Oliveira [EMAIL PROTECTED] writes:
+1. If we go with 'enable_sync_seqscans' for 8.3, and in a future release
cycle we test the cases Simon described above and agree we need to
fine-tune to benefit from this feature, we will need to deprecate
'enable_sync_seqscans'
I guess the fastest way is to:
- create a column of type text. ex. create table foo( myimage text );
- read the contents of your image from file and encode it in base64 using:
http://www.vbforums.com/attachment.php?s=42957f48bac95dd18ca8bffcf7578dcc
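A minimal sketch of the approach described above, shown in Python rather than VB6 for brevity: base64-encode the image bytes and store the resulting string in the text column. The table and column names follow the `create table foo( myimage text )` example in the thread; the file path and driver call in the comment are hypothetical. Note that base64 inflates storage by roughly a third; a bytea column or large objects are the usual alternatives.

```python
import base64

def encode_image(raw_bytes):
    """Image bytes -> base64 text, safe to store in a text column."""
    return base64.b64encode(raw_bytes).decode("ascii")

def decode_image(text):
    """Base64 text read back from the column -> original image bytes."""
    return base64.b64decode(text)

# Typical use (path and driver call are illustrative, not from the thread):
#   with open("photo.jpg", "rb") as f:
#       encoded = encode_image(f.read())
#   cur.execute("INSERT INTO foo (myimage) VALUES (%s)", (encoded,))

# The encoding round-trips losslessly:
data = b"\x89PNG\r\n\x1a\n fake image bytes"
assert decode_image(encode_image(data)) == data
```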
On Tue, 2008-01-29 at 10:55 +, Gregory Stark wrote:
Zeugswetter Andreas ADI SD [EMAIL PROTECTED] writes:
Sorry, but I don't grok this at all. Why the heck would we care if we have 2
parts of the table perfectly clustered, because we started in the middle?
Surely our stats collector
Simon Riggs wrote:
And if you have a partitioned table with partitions inconveniently
sized? You'd need to *reduce* shared_buffers specifically to get synch
scans and BAS to kick in. Or increase partition size. Both of which
reduce the impact of the benefits we've added.
I don't think the
On Tue, Jan 29, 2008 at 1:09 PM, in message [EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
Or is someone prepared to argue that there are no applications out
there that will be broken if the same query, against the same unchanging
table, yields different results from one trial to the
On Tue, 29 Jan 2008, Tom Lane wrote:
(Pokes around in the code...) I think the problem here is that the only
active mechanism for flushing dead stats-table entries is
pgstat_vacuum_tabstat(), which is invoked by a VACUUM command or an
autovacuum. Once-a-day VACUUM isn't gonna cut it for you
Caleb Welton [EMAIL PROTECTED] writes:
Forgive me if I'm being dense, but could you explain why
int4_sum(bigint,int) must not be strict, but
int4_avg_accum(bytea, int) can be strict?
The former is used with a NULL initial value, the latter isn't.
Read the fine print in the CREATE AGGREGATE
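A toy model (not PostgreSQL source) of the rule Tom is citing: when the transition function is strict and the initial state is NULL (None here), the executor never calls it with a NULL state; it can only substitute the first non-null input as the state, and that is legal only when the state type matches the input type. sum(int) keeps a bigint state for an int input, so its transition function must be non-strict and handle the NULL state itself; avg's accumulator starts from a non-null initial value, so it may be strict.

```python
def run_aggregate(transfn, init, inputs, strict, state_matches_input):
    """Simplified executor loop for an aggregate transition function."""
    state = init
    for x in inputs:
        if strict:
            if x is None:
                continue          # strict: null inputs are skipped
            if state is None:
                if not state_matches_input:
                    raise TypeError("strict transfn with NULL initcond "
                                    "needs state type == input type")
                state = x         # first non-null input becomes the state
                continue
        state = transfn(state, x)
    return state

# int4_sum-style: NULL initial state, state type differs from input type,
# so the function is non-strict and must cope with None itself.
int4_sum = lambda s, x: x if s is None else (s if x is None else s + x)
print(run_aggregate(int4_sum, None, [1, None, 2],
                    strict=False, state_matches_input=False))  # 3
```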
Kevin Grittner [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] wrote:
Or is someone prepared to argue that there are no applications out
there that will be broken if the same query, against the same unchanging
table, yields different results from one trial to the next?
If geqo kicks
On Tue, 29 Jan 2008, Tom Lane wrote:
Cristian Gafton [EMAIL PROTECTED] writes:
Autovacuum is disabled, since the database is mostly read only. There is a
vacuumdb -a -z running nightly on the box. However, the application that
queries it does a lot of work with temporary tables - would those
Zeugswetter Andreas ADI SD [EMAIL PROTECTED] writes:
synchronize[d]_seqscan sounds a bit better in my ears than the plural
synchronize_seqscans.
The plural seems better to me; there's no such thing as a solitary
synchronized scan, no? The whole point of the feature is to affect
the behavior of
Cristian Gafton [EMAIL PROTECTED] writes:
Autovacuum is disabled, since the database is mostly read only. There is a
vacuumdb -a -z running nightly on the box. However, the application that
queries it does a lot of work with temporary tables - would those bloat
the stats at all?
Cristian Gafton [EMAIL PROTECTED] writes:
On Tue, 29 Jan 2008, Cristian Gafton wrote:
I have a ~150GB sized server, containing two databases that are active in
mostly read mode. I have noticed lately that the global/pgstat.stat file is
somewhere around 1MB freshly after a restart, but at
On Tue, 29 Jan 2008, Tom Lane wrote:
On Tue, 29 Jan 2008, Cristian Gafton wrote:
I have a ~150GB sized server, containing two databases that are active in
mostly read mode. I have noticed lately that the global/pgstat.stat file is
somewhere around 1MB freshly after a restart, but at some point
Cristian Gafton [EMAIL PROTECTED] writes:
We are churning through a bunch of short-lived temp tables.
I think that's probably the root of the problem ...
Since I reported the problem, the pgstat file is now sitting at 85M, yet
the pg_stat* tables barely have any entries in them:
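A toy model (not PostgreSQL internals) of the mechanism Tom describes above: the collector keeps one stats entry per table OID, entries for dropped temp tables linger after the drop, and only the vacuum-time sweep (pgstat_vacuum_tabstat in the real code) prunes entries whose table no longer exists. The numbers are synthetic.

```python
stats = {}           # oid -> counters, as kept by the stats collector
live_tables = set()  # tables that currently exist in the catalogs
next_oid = 1

def create_temp_table():
    global next_oid
    oid = next_oid; next_oid += 1
    live_tables.add(oid)
    stats[oid] = {"scans": 1}
    return oid

def drop_table(oid):
    live_tables.discard(oid)   # the catalog entry goes away...
    # ...but the stats entry is NOT removed here

def vacuum_sweep():
    """Prune stats entries for tables that no longer exist."""
    for oid in list(stats):
        if oid not in live_tables:
            del stats[oid]

# Churn through short-lived temp tables between nightly vacuums:
for _ in range(10000):
    drop_table(create_temp_table())
print(len(stats))    # 10000 dead entries accumulate until the sweep
vacuum_sweep()
print(len(stats))    # 0
```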
Ron Mayer [EMAIL PROTECTED] writes:
Tom Lane wrote:
Or is someone prepared to argue that there are no applications out
there that will be broken if the same query, against the same unchanging
table, yields different results from one trial to the next?
Won't even autovacuum analyze cause
Thanks Tom,
Forgive me if I'm being dense, but could you explain why
int4_sum(bigint,int) must not be strict, but
int4_avg_accum(bytea, int) can be strict?
It seems that both of them have transition datatypes that are different from
the input.
-Caleb
On 1/28/08 7:31 PM, Tom Lane [EMAIL
Tom Lane wrote:
Kevin Grittner [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] wrote:
Or is someone prepared to argue that there are no applications out
there that will be broken if the same query, against the same unchanging
table, yields different results from one trial to the
Cristian Gafton [EMAIL PROTECTED] writes:
I just ran a vacuumdb -a on the box - the pgstat file is still 90MB in
size. If vacuum is supposed to clean up the cruft from pgstat, then I
don't know if we're looking at the right cruft - I kind of expected the
pgstat file to go down in size and
Jeff Davis wrote:
Well, clusteredness is used or could be used for a few different
heuristics, not all of which would be as well satisfied by this as
readahead. But for
Can you give an example? Treating a file as a circular structure does
not impose any significant cost that I can
When building with MSVC 2005 (Express) and the Longhorn Platform SDK
(version 6.0a), it seems that IPPROTO_IPV6 is only defined when
_WIN32_WINNT >= 0x0501. This results in a compiler error when trying to
build pqcomm.c (line 389). According to [1], building for Windows 2000
(that is, _WIN32_WINNT
In the recent discussion of bug #3883 we decided that for safety's sake,
TRUNCATE, CLUSTER, and REINDEX ought to error out if there are any
active scans on the table (or index in the case of REINDEX). This is
essentially the same as the test currently applied by ALTER TABLE,
which uses this code:
Tom Lane wrote:
I would like to export this routine and have it be used by all four
commands, instead of duplicating this logic everywhere. However,
that brings up the question of whether the error messages are
generic enough for all four commands; and if not, how we want them
to read. I'm
Alvaro Herrera [EMAIL PROTECTED] writes:
Tom Lane wrote:
I'm tempted to rephrase both messages along the line of
cannot %s "%s" because ...
where the first %s is replaced by a SQL command name, viz ALTER TABLE,
CLUSTER, etc. I'm not sure how nice this is for translation though.
I suggest
On Jan 29, 2008 8:09 PM, Tom Lane [EMAIL PROTECTED] wrote:
Zeugswetter Andreas ADI SD [EMAIL PROTECTED] writes:
synchronize[d]_seqscan sounds a bit better in my ears than the plural
synchronize_seqscans.
The plural seems better to me; there's no such thing as a solitary
synchronized scan,
I said...
On Windows XP, using Process Explorer with the lower pane
showing Handles, not all postgres.exe processes are including
an Event type with a description of what the process is doing.
I've had difficulty reproducing this, but I now suspect that it is only
happening when running
We've had a couple of discussions recently revolving around the
inefficiency of using hashjoin/hashaggregation output to update a target
table, because of the resulting very random access pattern. I believe
this same mechanism is underlying the slowness of Stephen Denne's
alternate query
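The excerpt is cut off before any proposal, but the cost it describes is easy to quantify with a toy model: if updates arrive in hash order, consecutive updates rarely touch the same heap page, whereas sorting the to-be-updated TIDs by physical position makes the write pattern nearly sequential. The table geometry below is synthetic.

```python
import random

random.seed(0)
ROWS_PER_PAGE = 100
tids = [(page, slot) for page in range(1000)
                     for slot in range(ROWS_PER_PAGE)]

def page_switches(order):
    """How often consecutive updates land on different heap pages."""
    return sum(1 for a, b in zip(order, order[1:]) if a[0] != b[0])

hash_order = tids[:]
random.shuffle(hash_order)            # hash output: effectively random
sorted_order = sorted(hash_order)     # sorted by (page, slot), i.e. by TID

print(page_switches(sorted_order))    # 999: exactly one switch per page
print(page_switches(hash_order) > 50 * page_switches(sorted_order))  # True
```

With random order nearly every update changes pages (~99,000 switches here), so each row dirties a different buffer; in TID order each page is visited once.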
Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
I suggest
cannot execute "%s" on "%s" because ...
Hmm, why not just
cannot execute %s "%s" because ...
?
Hmm, yeah, that seems fine too. Thinking more about it, from the POV of
the translator probably the three
From: Tom Lane [mailto:[EMAIL PROTECTED]
doubtless-oversimplified
It looks equivalent.
With that patch, I got the results
...
->  Hash Join  (cost=123330.50..1207292.72 rows=408 width=618) (actual time=20186.510..721120.455 rows=400 loops=1)
The plan from here is equivalent to
Stephen Denne [EMAIL PROTECTED] writes:
How hard is it to match, recognise potential benefit, and rewrite the query
from
UPDATE ONLY document_file AS df SET document_type_id =
d.document_type_id FROM document AS d WHERE d.id = document_id;
to
UPDATE ONLY document_file AS df SET
At the planner level that would be entirely the wrong way to go about
it, because that's forcing the equivalent of a nestloop join, which is
very unlikely to be faster for the numbers of rows that we're talking
about here. The reason it looks faster to you is that the benefits of
updating
http://www.scientificcomputing.com/ShowPR~PUBCODE~030~ACCT~300100~ISSUE~0801~RELTYPE~HPCC~PRODCODE~~PRODLETT~C.html
http://www.nvidia.com/object/cuda_learn.html
http://www.nvidia.com/object/cuda_get.html