Tom Lane wrote:
DelGurth [EMAIL PROTECTED] writes:
As you can see we tried some indexes, to see if we could
get the queries on the views to become faster.
Indexes:
mm_insrel_table_pkey PRIMARY KEY, btree (number)
mm_insrel_dir_not_one_idx btree (dnumber, snumber) WHERE dir <> 1
Tom Lane wrote:
DelGurth [EMAIL PROTECTED] writes:
On 8/21/06, Tom Lane [EMAIL PROTECTED] wrote:
It might be interesting also to examine the output of just
explain select * from mm_insrel_table where dnumber=558332 and dir <> 1
with different subsets of these indexes in place.
Ok. Did that
Tom Lane wrote:
Looking back at Alban's original post, I finally see what the planner
is up to:
-> Merge Join (cost=0.00..165.07 rows=1 width=28) (actual
time=53.890..129.310 rows=1 loops=1)
   Merge Cond: ("outer".number = "inner".number)
   -> Nested Loop
Alban Hertroys wrote:
Tom Lane wrote:
I'm thinking that removing the indexes it's erroneously using now could
help performance, as it can no longer use that index. It may however
pick the primary key index (likely), or - if we remove even that one - a
sequential scan... Experimenting will
Dear fellow PostgreSQL users,
we are happy to announce the release of GNUmed 0.2 Librarian.
GNUmed is an electronic medical record which uses PostgreSQL
for its backend.
Some of the new features since the last release:
- server can be installed on MS/Windows, too
Courtesy of PostgreSQL now
Alban Hertroys [EMAIL PROTECTED] writes:
zorgweb_solaris=> select * from pg_stats where attname = 'number' and
tablename IN ('mm_insrel_table', 'mm_product_table', 'mm_object');
tablename | mm_product_table
histogram_bounds | {2930,3244,3558,3872,4186,4500,4814,5128,5442,5756,6070}
Tom Lane wrote:
Alban Hertroys [EMAIL PROTECTED] writes:
zorgweb_solaris=> select * from pg_stats where attname = 'number' and
tablename IN ('mm_insrel_table', 'mm_product_table', 'mm_object');
tablename | mm_product_table
histogram_bounds |
Alban Hertroys [EMAIL PROTECTED] writes:
However, mm_product.number always matches either mm_insrel.snumber or
mm_insrel.dnumber (source and destination respectively). The other way
around this isn't the case; then snumber and dnumber match number-fields
in other tables (they always do).
I had the same problem reported by VACUUM, and I traced it down to an
individual table, for which SELECT * would return the exact same
message.
As far as I know, no process or person has tried to drop the table at
any point. Luckily it's a table populated by an importer every day, and
I have a
folks,
I think my trigger needs a transaction, but the pgsql
compiler refuses to compile a 'begin .. commit' sequence.
I use PERFORM to do the work.
Am I wrong?
TIA.
Any help would be appreciated.
MDC
code below (note (*) for the PERFORM instruction)
CREATE OR REPLACE FUNCTION
On 8/22/06, marcelo Cortez [EMAIL PROTECTED] wrote:
I think my trigger needs a transaction, but the pgsql
compiler refuses to compile a 'begin .. commit' sequence.
I use PERFORM to do the work.
Stored functions already execute inside the context of some
already-running transaction. You don't
Tom Lane wrote:
Alban Hertroys [EMAIL PROTECTED] writes:
However, mm_product.number always matches either mm_insrel.snumber or
mm_insrel.dnumber (source and destination respectively). The other way
around this isn't the case; then snumber and dnumber match number-fields
in other tables (they
On Tue, Aug 22, 2006 at 10:38:31AM -0300, marcelo Cortez wrote:
I think my trigger needs a transaction, but the pgsql
compiler refuses to compile a 'begin .. commit' sequence.
I use PERFORM to do the work.
Functions can't start or end transactions because they're already
being executed in the
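To make the point concrete: a PL/pgSQL function cannot issue COMMIT, but an EXCEPTION block gives savepoint-like behavior inside the caller's transaction. This is only a sketch with invented table and function names, not the poster's actual code:

```sql
-- Assumed table for illustration:
--   CREATE TABLE mytable (id integer PRIMARY KEY);
CREATE OR REPLACE FUNCTION safe_insert(p_id integer) RETURNS boolean AS $$
BEGIN
    INSERT INTO mytable (id) VALUES (p_id);
    RETURN true;
EXCEPTION WHEN unique_violation THEN
    -- Only the work inside this BEGIN block is rolled back;
    -- the caller's surrounding transaction keeps running.
    RETURN false;
END;
$$ LANGUAGE plpgsql;
```

The EXCEPTION clause internally uses a subtransaction, which is as close to BEGIN/COMMIT as a function can get.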
Alban Hertroys [EMAIL PROTECTED] writes:
tablename | mm_product_table
attname | number
histogram_bounds | {2930,3244,3558,3872,4186,4500,4814,5128,5442,5756,6070}
tablename | mm_insrel_table
attname | snumber
histogram_bounds |
Thanks for the good suggestion. I spent some time last night verifying
that it works, and that I can get the values in and out correctly
through JDBC. (When going from Java to the database, for example, it's
easiest to compose the value directly into the SQL query rather than
using a parameter.)
Tom Lane wrote:
Alban Hertroys [EMAIL PROTECTED] writes:
tablename | mm_product_table
attname | number
histogram_bounds | {2930,3244,3558,3872,4186,4500,4814,5128,5442,5756,6070}
tablename | mm_insrel_table
attname | snumber
histogram_bounds |
Alban Hertroys [EMAIL PROTECTED] writes:
I sure hope we get this mystery unveiled...
I'm confused too. Would it be possible for you to send me a dump of
your database? I need the full schema definitions of these tables,
indexes, and views, but you could null out all but the various
number
Vance, nice that it was of help.
(When going from Java to the database, for example, it's
easiest to compose the value directly into the SQL query rather than
using a parameter.) This seems quite viable.
Please allow me to recommend that you NOT go this seemingly easy way. I went there myself (but did
Hi Michael
I set aside the procedure you sent to me as it resulted in multiple rows of
the same information. (In fact one variation produced 100 rows for each of
the 9 new fields, creating a 900-row table.)
I went back to an earlier procedure which has been performing successfully.
--
I am having problems with my libpq programs crashing. This seems to be
a version incompatibility and I want to find out how to best proceed.
My main database is running Fedora Core 5 with the supplied PostgreSQL
8.1.4.
My web server is running Fedora Core 4 with the supplied PostgreSQL
Thanks, I've gotten this working nicely now (after some offline
exchanges with Harald).
In JDBC, inet values can be read a couple of ways -- the easiest is to
call ResultSet.getString(). And to write them, the easiest is to use
prepared statements of the form
INSERT INTO xxx VALUES (?::inet,
Hi,
Can anyone give me pointers for how to pass arguments to a trigger function?
I think it is done using TG_ARGV or something, but I'm not very sure how to do it.
Regards,
Harpreet
Michael,
This isn't a trigger function. Are you sure trigger is the
word you meant?
Yes, I do.
CREATE TABLE actlocat
(
id_actlocal numeric(2) NOT NULL,
d_actlocal char(8) NOT NULL,
f_novedad float8 NOT NULL,
ordenado_por char(18) NOT NULL,
CONSTRAINT pk_actlocat PRIMARY KEY
Hi
I'm having problems with concurrent transactions that involve foreign key
constraints that use SELECT ... FOR UPDATE. These constraints result in
one transaction waiting for another (failing to get a ShareLock on a
transaction which already has an ExclusiveLock).
The solutions I could
Purely out of curiosity, what sort of execution time do you get if you
use:
SELECT *
FROM (SELECT username,acctstarttime FROM radacct WHERE username='user'
ORDER BY acctstarttime ) allstarttimes
LIMIT 50;
Graeme Hinchliffe wrote:
Now, if I do the following:
SELECT username,acctstarttime
Hi,
I cannot find it in the online docs from postgresql.org or via Google,
so...
I need to pass a variable into a PostgreSQL database from an external system.
I have no idea whether PostgreSQL has something like session variables or
anything like that.
In detail: I would like to pass a login field from an external client
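One common way to emulate session variables in 8.x is a custom GUC setting, read back with current_setting(). A sketch, assuming the class name 'myapp' and the variable name are invented, and that the server's postgresql.conf declares the class:

```sql
-- postgresql.conf must contain (an 8.x requirement):
--   custom_variable_classes = 'myapp'

-- Set a per-session value ('false' means it lasts for the session,
-- not just the current transaction):
SELECT set_config('myapp.login', 'alice', false);

-- Read it back anywhere in the same session, e.g. inside a trigger:
SELECT current_setting('myapp.login');
```

Triggers and functions called later in the same session can then pick up the external client's login without it being passed as a column.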
G'day everyone.
I have this problem which may not be a big deal; however, it'd be great
if you could help me if possible.
When I restore a database (using a backup file), it shows me some error
messages which do not seem to matter at all (there is no data lost in
the database, and maybe
Hello, I met the following problem when using PostgreSQL as a backend
database: I have a table tabrel(a int primary key, b int), and now there are
for example 100 records with column 'a' ranging from 1 to 100. When I go to
add one to each 'a', intending to get 'a' varying from 2 to 101, I
I've found a thread discussing the use of text vs. varchar; what
I'd like to know is whether there's any performance difference between using
varchar(n) vs. varchar, i.e., should I constrain a name column to
an arbitrary length to improve performance, or will it actually degrade
performance because of the
[8.1.4] Create index on timestamp fails
Hi all,
Using postgresql 8.1.4
I have a table with an column:
entry_date | timestamp with time zone| not null
And when I try to create an index on it like so:
create index entry_date_idx on
On Tue, 22.08.2006 at 08:39:00 +0800, gao iqiang wrote:
Hello,
I met the following problem when i am using PostgreSQL as a backend
database:
I have a table tabrel(a int primary key, b int), and now there are for
example 100 records with column 'a' ranging from 1 to 100.
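The failure mode and a common workaround can be shown directly against the poster's tabrel table (the intermediate negation trick is a standard idiom, not something from this thread):

```sql
-- The single-statement update can fail on 8.x, because the unique
-- constraint is checked row by row as the update proceeds:
--   UPDATE tabrel SET a = a + 1;   -- may raise a duplicate-key error

-- Workaround: move the keys out of the way first, so no intermediate
-- state ever collides:
BEGIN;
UPDATE tabrel SET a = -a;          -- negative values cannot clash with 1..100
UPDATE tabrel SET a = -a + 1;      -- 1..100 becomes 2..101
COMMIT;
```

Later releases added UPDATE-safe deferred unique checks, but on 8.0/8.1 the two-step form is the reliable route.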
It appears that 8.1 is stricter about checking the type of a function. Look
at your user_tracking function. It is probably set as VOLATILE. You need to
change it to be IMMUTABLE. This should fix the issue.
Chris
On 8/21/06, Arturo Perez [EMAIL PROTECTED] wrote:
Hi all,
Using postgresql
On Tue, Aug 22, 2006 at 01:18:44PM -0400, Harpreet Dhaliwal wrote:
Can anyone give me pointers for how to pass arguments to a trigger function.
I think it is done using tg_argv or something but not very sure how to do
it.
Here's an example:
CREATE TABLE foo (id integer PRIMARY KEY, t text, x
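Michael's example is cut off above; a minimal self-contained sketch of passing arguments via TG_ARGV (the trigger function name and the argument are illustrative, not from the original post) might look like:

```sql
CREATE TABLE foo (id integer PRIMARY KEY, t text);

CREATE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
    -- TG_ARGV[0] is the first literal given after the procedure name
    -- in CREATE TRIGGER; TG_NARGS holds how many were supplied.
    RAISE NOTICE 'trigger tag is %', TG_ARGV[0];
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER foo_log BEFORE INSERT ON foo
    FOR EACH ROW EXECUTE PROCEDURE log_change('foo-tag');
```

Note the arguments arrive as text strings; cast them inside the function if you need another type.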
On Tue, Aug 22, 2006 at 09:02:52AM -0700, Bob Pawley wrote:
I set aside the procedure you sent to me as it resulted in multiple rows of
the same information. (In fact one variation produced 100 rows for each of
the 9 new fields, creating a 900-row table.)
If it was doing that then it would be
On Tue, Aug 22, 2006 at 12:26:53PM -0400, Bryan White wrote:
I am having problems with my libpq programs crashing. This seems to be
a version incompatibility and I want to find out how to best proceed.
snip
My dev server was running the same setup as the web server. The
difference is
Bryan White [EMAIL PROTECTED] writes:
I am having problems with my libpq programs crashing. This seems to be
a version incompatibility and I want to find out how to best proceed.
My main database is running Fedora Core 5 with the supplied PostgreSQL
8.1.4.
My web server is running Fedora
Alban Hertroys [EMAIL PROTECTED] writes:
I'm confused too. Would it be possible for you to send me a dump of
your database?
Attached is a cleaned out database, the full schema is included, but
only the relevant tables contain any data.
Thanks. After digging through it a bit, I understand
I wrote:
Thanks. After digging through it a bit, I understand what's happening,
but I'm not seeing any simple fix.
I forgot to mention that although I could reproduce your bad plan in
8.1, CVS HEAD doesn't fall into the trap. I don't believe we've done
anything to fix the fundamental problem
Tom Lane wrote:
We might be able to do something about actually solving the statistical
problem in 8.3, but I fear it's too late to think about it for 8.2.
I take it you mean you already have a very concrete idea on how to solve
it. Come on, illuminate us poor dumb souls.
--
Alvaro Herrera
Alvaro Herrera [EMAIL PROTECTED] writes:
Tom Lane wrote:
We might be able to do something about actually solving the statistical
problem in 8.3, but I fear it's too late to think about it for 8.2.
I take it you mean you already have a very concrete idea on how to solve
it. Come on,
Philip de Nier [EMAIL PROTECTED] writes:
The solutions I could find in the mailing lists were either to upgrade to
version 8.1 which uses SELECT ... FOR SHARE (I'm currently using 8.0),
stop using foreign keys or add DEFERRABLE INITIALLY DEFERRED to the
constraints. For now I'd prefer to
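For reference, the deferred-constraint option mentioned above looks roughly like this; table, column, and constraint names are invented for the sketch:

```sql
-- Re-create the FK so its check (and the associated row lock on the
-- parent) happens at COMMIT instead of at each statement:
ALTER TABLE child DROP CONSTRAINT child_parent_fk;
ALTER TABLE child
    ADD CONSTRAINT child_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent (id)
    DEFERRABLE INITIALLY DEFERRED;
```

This shortens the window in which two concurrent transactions referencing the same parent row can block each other on 8.0's SELECT ... FOR UPDATE checks.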
nuno [EMAIL PROTECTED] writes:
when i restore a database (using a backup file), it shows me some error
messages which do not seem to matter at all (there is no data lost in
the database, and maybe they're just warning messages...).
pg_restore: [archiver (db)] could not execute query: ERROR:
Daniel Serodio [EMAIL PROTECTED] writes:
I've found a thread discussing the use of text vs. varchar; what
I'd like to know is whether there's any performance difference between using
varchar(n) vs. varchar, i.e., should I constrain a name column to
an arbitrary length to improve performance, or will it
Arturo Perez [EMAIL PROTECTED] writes:
I have a table with an column:
entry_date | timestamp with time zone| not null
And when I try to create an index on it like so:
create index entry_date_idx on
user_tracking(date_part('year',entry_date));
I get a
ERROR: functions
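The underlying problem is that date_part('year', timestamptz) depends on the session timezone, so it is not IMMUTABLE and 8.1 rejects it in an index expression. A common workaround, if you accept that timezone caveat, is an immutable wrapper (the wrapper name here is invented):

```sql
-- Wrap the extraction in a function we declare IMMUTABLE ourselves.
-- Caveat: results silently depend on the timezone in effect when
-- rows were indexed, so only do this if the server timezone is fixed.
CREATE FUNCTION entry_year(timestamp with time zone)
    RETURNS double precision
    AS $$ SELECT date_part('year', $1) $$
    LANGUAGE sql IMMUTABLE;

CREATE INDEX entry_date_idx ON user_tracking (entry_year(entry_date));
```

Queries must then use entry_year(entry_date) in their WHERE clause for the index to be considered.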
Hi list,
Information first:
gds2=# select version();
version
Tom Lane wrote:
Bryan White [EMAIL PROTECTED] writes:
I am having problems with my libpq programs crashing. This seems to be
a version incompatibility and I want to find out how to best proceed.
My main database is running Fedora Core 5 with the supplied PostgreSQL
8.1.4.
My web server
Martijn van Oosterhout wrote:
Can you provide the output of ldd? The libraries are supposed to be
reasonably compatable.
web server:
linux-gate.so.1 => (0x00ab)
libexpat.so.0 => /usr/lib/libexpat.so.0 (0x00411000)
libpq.so.4 => /usr/lib/libpq.so.4 (0x00324000)
Bryan White [EMAIL PROTECTED] writes:
Tom Lane wrote:
One thing you should check is whether both libs were built with the same
options (compare pg_config --configure output from the 8.0 and 8.1
installations).
I think that might be the problem. These are the differences in
pg_config
On Tue, Aug 22, 2006 at 04:24:46PM -0500, Don Isgitt wrote:
gds2=# create function sd(_float8) returns float as '' language 'plr';
CREATE FUNCTION
gds2=# select round(sd('{1.23,1.31,1.42,1.27}'::_float8)::numeric,8);
server closed the connection unexpectedly
This probably means the
On Tue, Aug 22, 2006 at 02:37:19PM -0300, marcelo Cortez wrote:
This isn't a trigger function. Are you sure trigger is the
word you meant?
Yes, I do.
I see: the function you originally posted is called by a trigger
function. In any case the answer is the same: functions can't start
or
Hi all,
I'm checking out table partitioning and wondering how to handle it
dynamically.
For example, if I want to partition by year month, do I need to create
all of the partitions ( rules) before hand or can I handle all of that
another way (a pl/pgsql function? something else?)
Any
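For the year/month case, the usual 8.1 inheritance-based approach is to generate one child table plus one routing rule per period; table and column names below are invented to sketch what a nightly function or cron job would emit:

```sql
-- One month's partition: a CHECK constraint tight enough for
-- constraint exclusion, inheriting the parent's columns.
CREATE TABLE measurements_2006_08 (
    CHECK (logdate >= DATE '2006-08-01' AND logdate < DATE '2006-09-01')
) INHERITS (measurements);

-- Route inserts on the parent into the current partition.
CREATE RULE measurements_insert_2006_08 AS
    ON INSERT TO measurements
    WHERE (logdate >= DATE '2006-08-01' AND logdate < DATE '2006-09-01')
    DO INSTEAD INSERT INTO measurements_2006_08 VALUES (NEW.*);
```

With constraint_exclusion enabled, queries that filter on logdate will skip partitions whose CHECK constraint rules them out, so you only need to create partitions as their periods arrive, not all in advance.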
Re: [JDBC] question regarding upgrade from pg74.215.jdbc3.jar to
Hi,
I am using Postgres 8.0.0.
I am getting the following exception in my code; can
you please suggest what may be going wrong?
The Postgres logs do not tell anything.
Thanks,
regards,
Surabhi
Chris,
I'm using a cron job which creates a new partition table every night at
midnight. After I create the table, I update the rule on the parent table so
that new records go into the new partition table. Then I update the
constraints on the tables so that constraint exclusion works. I'm doing it
Hi all,
I am using Postgres 8.0.0.
I am getting the following exception in my code;
please help as to what may be going wrong.
Thanks,
regards,
Surabhi
org.postgresql.util.PSQLException: An I/O error occured while sending to the
backend.
at
On Tue, Aug 22, 2006 at 08:39:00AM +0800, gao iqiang wrote:
Hello,
I met the following problem when i am using PostgreSQL as a backend
database:
I have a table tabrel(a int primary key, b int), and now there are for
example 100 records with column 'a' ranging from 1 to 100. When I'm going to