On 5/10/06, Tom Lane <[EMAIL PROTECTED]> wrote:
Dennis Bjorklund <[EMAIL PROTECTED]> writes:
> Yesterday I helped a guy on irc with a locking problem, he thought
> that locking in postgresql was broken. It turned out that he had a PHP
> function that he called inside his transaction and the funct
Tom Lane wrote:
Dennis Bjorklund <[EMAIL PROTECTED]> writes:
Yesterday I helped a guy on irc with a locking problem, he thought
that locking in postgresql was broken. It turned out that he had a PHP
function that he called inside his transaction and the function did BEGIN
and COMMIT. Since BEGIN
We could, but it'd probably break about as many apps as it fixed.
I wonder whether php shouldn't be complaining about this, instead
--- doesn't php have its own ideas about controlling where the
transaction commit points are?
All PHP does is when the connection is returned to the pool, if it is
Dennis Bjorklund <[EMAIL PROTECTED]> writes:
> Yesterday I helped a guy on irc with a locking problem, he thought
> that locking in postgresql was broken. It turned out that he had a PHP
> function that he called inside his transaction and the function did BEGIN
> and COMMIT. Since BEGIN inside a t
Hi
Yesterday I helped a guy on irc with a locking problem, he thought
that locking in postgresql was broken. It turned out that he had a PHP
function that he called inside his transaction and the function did BEGIN
and COMMIT. Since BEGIN inside a transaction is just a warning, what
happened was th
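A minimal psql sketch of the pitfall Dennis describes (table and values are invented for illustration): the inner BEGIN only raises a warning, so the inner COMMIT ends the *outer* transaction and releases its locks early.

```sql
BEGIN;                       -- outer transaction, opened by the application
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;   -- row lock taken

-- inside a helper function that naively manages its own transaction:
BEGIN;                       -- WARNING: there is already a transaction in progress
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT;                      -- commits the OUTER transaction, releasing the lock

-- statements here now run in autocommit mode, outside any transaction,
-- so the "broken locking" the user saw was really a premature COMMIT
```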
On 5/2/06, Brandon Black <[EMAIL PROTECTED]> wrote:
On 4/30/06, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
> On Fri, 28 Apr 2006, Brandon Black wrote:
> > I dug around in CVS to have a look for this, and I did eventually find
> > it (well, I found the corresponding docs patch that removed the not
Does anyone know how to discover the tables (with schemas or oid) involved
in a select statement?
I tried EXPLAIN, but it only reports the table names, not schemas or oids.
I am developing a SNAPSHOT project for PostgreSQL called Pg::snapshots. It
can be found here:
http://cunha17.theicy.net/person
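One possible workaround (assuming the table names printed by EXPLAIN are unambiguous on the current search_path — the table name below is hypothetical): resolve each name through the system catalogs to recover its schema and oid.

```sql
-- 'bookmarks' stands in for a table name taken from EXPLAIN output
SELECT c.oid, n.nspname AS schema, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.oid = 'bookmarks'::regclass;
```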
Tom Lane wrote:
> Bruce Momjian writes:
> > Documentation updated to mention "dimmensions" are not enforced.
>
> This patch seems entirely pointless, since the point is already made
> (with correct spelling even) in the very next line.
Thanks, reverted.
--
Bruce Momjian http://candle.pha.p
On 5/10/06, Dhanaraj M <[EMAIL PROTECTED]> wrote:
2. walks through all known tuning parameters (in /etc/system and
postgresql.conf) on the system
3. lists out what parameters are optimal/suboptimal for that
platform, and makes recommendations.
I thought that's why people still have DBAs ;}
I see that in CVS head, if you turn on standard_conforming_strings,
it still whines about backslashes:
regression=# set standard_conforming_strings to 1;
SET
regression=# select '1234\5678';
WARNING: nonstandard use of escape in a string literal
LINE 1: select '1234\5678';
                    ^
HINT:
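For context, a sketch of the settings involved (behavior in that era's CVS head may differ — this is the interaction Tom is pointing at): the warning is governed by escape_string_warning, which fires independently of standard_conforming_strings, while explicit E'' strings never warn.

```sql
SET standard_conforming_strings TO on;
SELECT '1234\5678';          -- backslash is literal, yet the warning still fires
SET escape_string_warning TO off;
SELECT '1234\5678';          -- warning suppressed
SELECT E'1234\\5678';        -- explicit escape syntax, never warns
```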
On Tue, 2006-05-09 at 01:20 +0200, Albert Cervera Areny wrote:
> In my particular case (don't know about the SQL standard or other cases),
> it'd be enough if when an inherited table is created:
> - A primary key in the inherited table is created with the same columns as
> the su
PFC <[EMAIL PROTECTED]> writes:
> Fun thing is, the rowcount from a temp table (which is the problem here)
> should be available without ANALYZE; as the temp table is not concurrent,
> it would be simple to inc/decrement a counter on INSERT/DELETE...
No, because MVCC rules still appl
Bruce Momjian writes:
> Documentation updated to mention "dimmensions" are not enforced.
This patch seems entirely pointless, since the point is already made
(with correct spelling even) in the very next line.
regards, tom lane
> SELECT * FROM somewhere WHERE id IN (SELECT id FROM result)
Well, you can either
SELECT * FROM somewhere JOIN (SELECT id FROM result GROUP BY id) AS
a USING (id);
It's the same thing (and postgres knows it)
You might want to use PL to store values, say PLperl, or even C
On 5/9/06, PFC <[EMAIL PROTECTED]> wrote:
> You might consider just selecting your primary key or a set of
> primary keys to involved relations in your search query. If you
> currently use "select *" this can make your result set very large.
>
> Copying all the result set to the temp. costs you
Does the time for commit change much if you leave out the analyze?
Yes, when I don't ANALYZE the temp table, commit time changes from 30 ms
to about 15 ms, but the queries get horrible plans (see below):
Fun thing is, the rowcount from a temp table (which is the problem here)
should
On Mon, 08 May 2006 03:58:54 +0800
imacat <[EMAIL PROTECTED]> wrote:
> On Sun, 07 May 2006 13:09:48 -0400
> Tom Lane <[EMAIL PROTECTED]> wrote:
> > imacat <[EMAIL PROTECTED]> writes:
> > > I cannot compile PL/Perl and PL/Python under x86_64.
> > Works fine for me (without any tinkering with fla
Documentation updated to mention "dimmensions" are not enforced.
---
Stefan Kaltenbrunner wrote:
> Thomas Hallgren wrote:
> > I can create a function that takes a two dimension int array:
> >
> > CREATE FUNCTION twodims(in
Creating cursors for a simple plan like a single sequential scan is fast
because it's using the original data from the table.
I used the following query:
SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20
It's a backward index scan + limit... not a seq scan. And it's
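A sketch of how that plan shape typically appears (the index name is assumed; an index on annonce_id must exist for the planner to pick this):

```sql
-- assuming: CREATE INDEX bookmarks_annonce_id_idx ON bookmarks (annonce_id);
EXPLAIN SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;
-- Limit
--   ->  Index Scan Backward using bookmarks_annonce_id_idx on bookmarks
```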
PFC <[EMAIL PROTECTED]> writes:
>
> I really like this. It's clean, efficient, and easy to use.
>
> This would be a lot faster than using temp tables.
> Creating cursors is very fast, so we can create two and avoid doing
> the same work twice (i.e. hashing the ids from the res
PFC <[EMAIL PROTECTED]> writes:
> Feature proposal :
> A way to store query results in a named buffer and reuse them in the
> next queries.
Why not just fix the speed issues you're complaining about with temp
tables? I see no reason to invent a new concept.
(Now, "just fix" mig
I am currently looking at some performance issues. Our product uses
postgres as the backend, and it uses a huge amount of storage space.
Is there a script available which does the following:
1. When you point it to a postgres installation, the script first
identifies what platform (solaris
Hi!
On Mon, Apr 03, 2006 at 11:56:13PM +0100, Robert Watson wrote:
> >>This is why it's disabled by default, and the jail documentation
> >>specifically advises of this possibility. Excerpt below.
> >
> >Ah, I see, glad to see it's accurately documented.
>
> As it has been for the last five ye
It would be interesting to know what the bottleneck is for temp tables
for you. They do not go via the buffer-cache, they are stored in
private memory in the backend, they are not xlogged. Nor flushed to
disk on backend exit. They're about as close to in-memory tables as
you're going to get...
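A hedged illustration of the temp-table pattern under discussion (table and filter are invented); ON COMMIT DROP keeps the table's lifetime inside the transaction, so no explicit DROP is needed:

```sql
BEGIN;
CREATE TEMP TABLE result (id integer) ON COMMIT DROP;
INSERT INTO result SELECT id FROM somewhere WHERE id < 100;  -- hypothetical search
ANALYZE result;              -- the step whose cost is being debated in this thread
SELECT * FROM somewhere JOIN result USING (id);
COMMIT;                      -- temp table disappears here
```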
On Tue, May 09, 2006 at 12:10:37PM +0200, PFC wrote:
> Yes, but in this case temp tables add too much overhead. I wish
> there were RAM-based temp tables like in mysql. However, I guess the
> current temp table slowness comes from the need to mark their existence in
> the system ca
On top of running your query, you are already transferring the whole result
set multiple times. First you copy it to the result table. Then you
read it again. Your subsequent queries will also have to read over
all the unneeded tuples just to get your primary key.
Considering that the resul
Hi,
On Tue, 9 May 2006, PFC wrote:
You might consider just selecting your primary key or a set of
primary keys to involved relations in your search query. If you
currently use "select *" this can make your result set very large.
Copying all the result set to the temp. costs you additional I
Hi,
On Tue, 9 May 2006, PFC wrote:
Back to the point: I can't use the temp table method, because temp
tables are too slow.
Creating a temp table, filling it, analyzing it and then dropping it
takes about 100 ms. The search query, on average, takes 10 ms.
just some thoughts:
You might con
You might consider just selecting your primary key or a set of
primary keys to involved relations in your search query. If you
currently use "select *" this can make your result set very large.
Copying all the result set to the temp. costs you additional IO
that you probably don't need.
It i
The moral of the story is that you're probably better off running a
bunch of small selects than in trying to optimize things with one
gargantuan select.
Ever experiment with loading the parameters into a temp table and
joining to that?
Also, it might be worth re-testing that conclusion with
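A sketch of the temp-table-join experiment Tom suggests (schema and values invented): instead of one gargantuan IN list, the parameters are batched into a temp table and joined.

```sql
-- instead of: SELECT * FROM somewhere WHERE id IN (1, 2, 3);
CREATE TEMP TABLE params (id integer);
INSERT INTO params VALUES (1);
INSERT INTO params VALUES (2);
INSERT INTO params VALUES (3);
ANALYZE params;              -- give the planner a rowcount for the join
SELECT s.* FROM somewhere s JOIN params p USING (id);
```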