I am a bit late posting; I use the Zeos DBO controls, which can be found on
SourceForge.
I actually use v5 of the components with some bug fixes I made
myself, but my project is too far along to change to the newer components.
They are free, with all source provided. :)
Postgres Admin
Hi Listers,
I need to know the number of rows affected (inserted, deleted, or updated)
by a query. In MS SQL I use SELECT @@ROWCOUNT, similar to SELECT
@@IDENTITY. Is there any statement similar to @@ROWCOUNT in Postgres?
Alessandro - Liga One Sistemas
[EMAIL PROTECTED]
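One server-side way to get the affected-row count is PL/pgSQL's GET DIAGNOSTICS; a minimal sketch (the accounts table and function name below are made up for illustration):

```sql
-- Hypothetical table for illustration only.
CREATE TABLE accounts (id integer, balance numeric);

CREATE FUNCTION bump_balances() RETURNS integer AS $$
DECLARE
    n integer;
BEGIN
    UPDATE accounts SET balance = balance + 1;
    GET DIAGNOSTICS n = ROW_COUNT;  -- rows affected by the UPDATE above
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```

Client libraries also get this count for free from the command tag each statement returns (e.g. "UPDATE 3"), so a driver-level row count is often available without any SQL at all.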
- Original Message
[Please start a new thread when asking new questions.]
On Fri, Jul 08, 2005 at 08:56:56AM -0300, jimmy.olsen wrote:
I need to know the number of rows affected (inserted, deleted, or updated)
by a query. In MS SQL I use SELECT @@ROWCOUNT, similar to SELECT
@@IDENTITY. Is there any
On Fri, Jul 08, 2005 at 05:03:37 +0200,
PFC [EMAIL PROTECTED] wrote:
It's the first time I've seen a MySQLism in Postgres!
This has meaning in more ways than one.
However I like it, cos it might subvert some MySQL users, and
provide easy answers to The Weekly Question on
SELECT SUM( CASE WHEN COALESCE( DEBIT , 0 ) > 0 THEN
COALESCE( AMOUNT , 0 ) ELSE 0 END ) AS DEBIT_AMT , SUM( CASE WHEN
COALESCE( CREDIT , 0 ) > 0 THEN COALESCE( AMOUNT , 0 ) ELSE 0
END ) AS CREDIT_AMT FROM TABLE_NAME
I don't know if it will use indexes (bitmapped OR indexes in 8.1 ?)...
On Fri, Jul 08, 2005 at 15:49:20 +0200,
PFC [EMAIL PROTECTED] wrote:
SELECT SUM( CASE WHEN COALESCE( DEBIT , 0 ) > 0 THEN
COALESCE( AMOUNT , 0 ) ELSE 0 END ) AS DEBIT_AMT , SUM( CASE WHEN
COALESCE( CREDIT , 0 ) > 0 THEN COALESCE( AMOUNT , 0 ) ELSE 0
END ) AS CREDIT_AMT FROM TABLE
Chris Browne wrote:
None of those transactions have COMMITted, so there are some 78 tuples
in limbo spread across 16 transactions.
If there were some single secret place with a count, how would you
suggest it address those 78 tuples and 16 transactions that aren't yet
(and maybe never will
On Fri, Jul 08, 2005 at 07:12:26 -0700,
Steve Wampler [EMAIL PROTECTED] wrote:
Hmmm, I understand this and don't doubt it, but out of curiosity, how
does the current SELECT COUNT(*) handle this? It doesn't lock the entire
table?
It only counts tuples visible to the current transaction.
SELECT (SELECT sum(amount) FROM table WHERE debit=X) AS debit, (SELECT
sum(amount) FROM table WHERE credit=x) AS credit;
If most of the records are credits or debits you don't want to do this.
A single sequential scan through the table will be the best plan.
I thought that debit = source
Bruno Wolff III wrote:
No, it is an exact count.
Yes, for the transaction, but it's an approximation of the number of
tuples in the table - which is probably what the people who worry about
its cost are more interested in (an approximate count for the table).
I'm also claiming that a true count
So, leave COUNT(*) alone. But it would be very handy to have a
way to get an approximate table size that is more accurate than is
provided by a pg_class.reltuples that is only updated on vacuums.
Create 2 sequences, one for counting tuple additions and one for
counting tuple deletions.
When
On 7/8/05, Steve Wampler [EMAIL PROTECTED] wrote:
None of those transactions have COMMITted, so there are some 78 tuples
in limbo spread across 16 transactions.
If there were some single secret place with a count, how would you
suggest it address those 78 tuples and 16 transactions that
On Fri, Jul 08, 2005 at 08:07:27AM -0700, Steve Wampler wrote:
Bruno Wolff III wrote:
No, it is an exact count.
Yes, for the transaction, but it's an approximation of the number of
tuples in the table - which is probably what the people who worry about
its cost are more interested in (an
On 7/8/05, Rod Taylor [EMAIL PROTECTED] wrote:
Create 2 sequences, one for counting tuple additions and one for
counting tuple deletions.
When you INSERT a tuple, bump the added sequence (select nextval());
When you DELETE a tuple, bump the deleted sequence (select nextval());
To
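The sequence-based counting suggested above can be wired up with a trigger; a rough sketch, in which the table t, the sequence names, and the trigger function are all made up for illustration:

```sql
CREATE TABLE t (id integer);
CREATE SEQUENCE t_inserts;
CREATE SEQUENCE t_deletes;

CREATE FUNCTION count_t_changes() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        PERFORM nextval('t_inserts');
        RETURN NEW;
    ELSE
        PERFORM nextval('t_deletes');
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_count AFTER INSERT OR DELETE ON t
    FOR EACH ROW EXECUTE PROCEDURE count_t_changes();

-- Approximate row count: inserts minus deletes. Only approximate,
-- because sequence bumps are not rolled back with their transaction.
SELECT (SELECT last_value FROM t_inserts)
     - (SELECT last_value FROM t_deletes) AS approx_count;
```

Because nextval() never rolls back, aborted transactions still bump the counters, so this gives an estimate rather than an exact count.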
Steve Wampler [EMAIL PROTECTED] writes:
So, leave COUNT(*) alone. But it would be very handy to have a
way to get an approximate table size that is more accurate than is
provided by a pg_class.reltuples that is only updated on vacuums.
If you want something cheap, you could use the same
On Fri, 2005-07-08 at 17:34 +0200, Dawid Kuroczko wrote:
On 7/8/05, Rod Taylor [EMAIL PROTECTED] wrote:
Create 2 sequences, one for counting tuple additions and one for
counting tuple deletions.
When you INSERT a tuple, bump the added sequence (select nextval());
When you DELETE a
On Fri, Jul 08, 2005 at 16:49:44 +0200,
PFC [EMAIL PROTECTED] wrote:
SELECT (SELECT sum(amount) FROM table WHERE debit=X) AS debit, (SELECT
sum(amount) FROM table WHERE credit=x) AS credit;
If most of the records are credits or debits you don't want to do this.
A single sequential scan
Tom Lane wrote:
Steve Wampler [EMAIL PROTECTED] writes:
So, leave COUNT(*) alone. But it would be very handy to have a
way to get an approximate table size that is more accurate than is
provided by a pg_class.reltuples that is only updated on vacuums.
If you want something cheap, you could
which wouldn't have helped.) If true, I can handle that parsing myself
easily enough without exposing RelationGetNumberOfBlocks.
Is there a way to get EXPLAIN results in a non-text-formatted way for
easier use ?
I'm asking, because it seems the feature set grows by the minute in
Greetings,
A question about creating an index for the following expression.
CREATE INDEX idx_t1 ON test (col1 || '-' || col2);
Is it possible, and how may I create an index on an expression
such as col1 || '-' || col2 for a table?
Thanks a lot,
Emi
Ying Lu [EMAIL PROTECTED] writes:
A question about creating index for the following expression.
CREATE INDEX idx_t1 ON test (col1 || '-' || col2);
You need more parentheses:
CREATE INDEX idx_t1 ON test ((col1 || '-' || col2));
regards, tom lane
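For the planner to consider the expression index, a query has to use the same expression; a minimal sketch using the test table from the thread (column types are assumed to be text):

```sql
CREATE TABLE test (col1 text, col2 text);
CREATE INDEX idx_t1 ON test ((col1 || '-' || col2));

-- The WHERE clause must match the indexed expression:
SELECT * FROM test WHERE col1 || '-' || col2 = 'a-b';
```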
Steve Wampler [EMAIL PROTECTED] writes:
Tom Lane wrote:
If you want something cheap, you could use the same technique the
planner uses nowadays: take RelationGetNumberOfBlocks() (which is
guaranteed accurate) and multiply by reltuples/relpages.
Yes - this would be an excellent approximation
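The planner's blocks-times-density estimate can be approximated from plain SQL too; a sketch, assuming pg_relation_size() is available (it may not be in older releases) and a hypothetical table t that has been analyzed at least once:

```sql
-- Estimated row count: current block count * tuples-per-block density.
SELECT reltuples / relpages
       * (pg_relation_size(oid) / current_setting('block_size')::integer)
       AS estimated_rows
FROM pg_class
WHERE relname = 't' AND relpages > 0;
```

The reltuples/relpages ratio is only as fresh as the last VACUUM or ANALYZE, but the block count is live, so this tracks growth between vacuums.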
On Fri, Jul 08, 2005 at 12:08:41PM -0400, Ying Lu wrote:
CREATE INDEX idx_t1 ON test (col1 || '-' || col2);
Is it possible, and how may I create an index on an expression
such as col1 || '-' || col2 for a table?
See Indexes on Expressions in the documentation:
I'm also claiming that a true count for any active table is
meaningless and am *not* suggesting that effort be spent on trying
to produce such a true count.
That's a pretty big assumption that would in fact be WRONG.
We have managers interested in counting the number of objects we have
around
On Fri, Jul 08, 2005 at 12:08:41 -0400,
Ying Lu [EMAIL PROTECTED] wrote:
Greetings,
A question about creating index for the following expression.
CREATE INDEX idx_t1 ON test (col1 || '-' || col2);
Is it possible, and how may I create an index on an expression
such as col1 ||
On Fri, Jul 08, 2005 at 12:08:41PM -0400, Ying Lu wrote:
CREATE INDEX idx_t1 ON test (col1 || '-' || col2);
Is it possible, and how may I create an index on an expression
such as col1 || '-' || col2 for a table?
The syntax of the CREATE INDEX command normally
Christopher Browne wrote:
I'm also claiming that a true count for any active table is
meaningless and am *not* suggesting that effort be spent on trying
to produce such a true count.
That's a pretty big assumption that would in fact be WRONG.
Please reread the message from Bruno and
On Fri, Jul 08, 2005 at 06:08:03PM +0200, PFC wrote:
which wouldn't have helped.) If true, I can handle that parsing myself
easily enough without exposing RelationGetNumberOfBlocks.
Is there a way to get EXPLAIN results in a non-text-formatted way
for easier use ?
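At the time of this thread EXPLAIN output was text only; later PostgreSQL releases (8.4 and up) added machine-readable formats, for example:

```sql
EXPLAIN (FORMAT JSON) SELECT 1;
-- FORMAT XML and FORMAT YAML are also accepted.
```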
I have what I call a dictionary table which supports a master table.
This dictionary table is designed to hold generic data :
CREATE TABLE sup_data (link_id uniqueidentifier, field_name varchar(255),
field_data text) WITH OIDS; ...
It works well when you're digging into it to pull the
Is it even possible to cluster a table based on the clustering scheme
(which is not the link_id ...) from the master table?
Can you gurus think of a better strategy? :) (Please??) :)
You can create a functional index on a function which returns the desired
order by looking in the main
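That suggestion can be sketched roughly as follows; every table, column, and function name here is made up for illustration, and note the caveat in the comment: the function must be declared IMMUTABLE to be indexable, which is a lie if the master table's ordering ever changes (the index then silently goes stale until rebuilt):

```sql
-- Hypothetical lookup of the master table's ordering key for a row.
CREATE FUNCTION master_sort_key(uid uniqueidentifier) RETURNS integer AS $$
    SELECT sort_key FROM master WHERE link_id = $1;
$$ LANGUAGE sql IMMUTABLE;  -- not truly immutable; see caveat above

CREATE INDEX sup_data_master_order ON sup_data (master_sort_key(link_id));
CLUSTER sup_data_master_order ON sup_data;
```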