On Sat, Jul 02, 2005 at 04:49:23PM -0400, Greg Steffensen wrote:
Hey, I'm trying to write some plpython procedures that read binary data from
images on the disk and store it in bytea fields. I'm basically trying to
write a plpython procedure that accepts a varchar and returns a bytea, with
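A minimal sketch of such a function, assuming PL/Python (plpythonu) where the raw byte string returned from Python maps to bytea; the function name, table, and path below are hypothetical:

```sql
-- Hypothetical sketch: read a file from disk and return its contents as bytea.
CREATE OR REPLACE FUNCTION read_image(path varchar) RETURNS bytea AS $$
    f = open(path, 'rb')
    data = f.read()
    f.close()
    return data
$$ LANGUAGE plpythonu;

-- Usage (table name hypothetical):
-- INSERT INTO images (name, data) VALUES ('logo', read_image('/tmp/logo.png'));
```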
I am trying to write a trigger function that will do something when either a
row has been deleted or a field has been updated to null.
The problem I am having is determining in the function if this is a delete
or not. I would like to say:
if this is delete trigger or new.field1 is null then
I found the TG_OP variable which tells me which operation is currently being
done.
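Inside a PL/pgSQL trigger function, TG_OP holds 'INSERT', 'UPDATE', or 'DELETE'. One caveat: NEW is not defined in a DELETE trigger, so test TG_OP first rather than combining both conditions into one expression. A minimal sketch, with the column name taken from the question and the function name hypothetical:

```sql
CREATE OR REPLACE FUNCTION check_delete_or_null() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        -- do the real work here
        RAISE NOTICE 'row deleted';
        RETURN OLD;                 -- NEW does not exist on DELETE
    ELSIF NEW.field1 IS NULL THEN
        RAISE NOTICE 'field1 is null';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```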
Sim Zacks [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
I am trying to write a trigger function that will do something when either a
row has been deleted or a field has been updated to null.
The
On Tue, Jun 28, 2005 at 10:36:58AM -0700, Dann Corbit wrote:
Nope, truncate is undoubtedly faster. But it also means you would have
downtime as you mentioned. If it were me, I'd probably make the
trade-off of using a delete inside a transaction.
For every record in a bulk loaded table?
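The trade-off described above, sketched (table name hypothetical): the DELETE is slower than TRUNCATE, but other sessions keep seeing the old rows until the transaction commits, so there is no visible downtime:

```sql
BEGIN;
DELETE FROM big_table;      -- row-by-row, so slower than TRUNCATE
-- ... bulk-load the replacement data here ...
COMMIT;                     -- or ROLLBACK; to abandon the reload
```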
Basically I want this trigger to work after a language record in my
languages table is added.
CREATE TRIGGER language_add_trig AFTER INSERT ON languages
FOR EACH ROW EXECUTE PROCEDURE trigger_language_add();
Here is my function but it is not working. I want to loop for
each
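A hedged sketch of what such a trigger function might look like; the related tables and columns (documents, translations) are assumptions, since the original function is not fully shown:

```sql
CREATE OR REPLACE FUNCTION trigger_language_add() RETURNS trigger AS $$
DECLARE
    r RECORD;
BEGIN
    -- Loop over each existing document and register the new language.
    FOR r IN SELECT id FROM documents LOOP
        INSERT INTO translations (document_id, language_id)
        VALUES (r.id, NEW.id);
    END LOOP;
    RETURN NEW;   -- the return value of a row-level AFTER trigger is ignored
END;
$$ LANGUAGE plpgsql;
```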
On Friday 01 July 2005 19:49, you wrote:
In Postgres 8 I tried command
DELETE FROM customer WHERE id=123
(snip)
---(end of broadcast)---
TIP 8: explain analyze is your friend
Automatically answered?! :-)
explain analyze DELETE FROM
I forgot to add, this is of course a simplistic approach which:
1. may be simply wrong
2. assumes data is available to the user in information_schema (I guess the
information schema lists only data owned by the user; yet I am not sure
about that).
3. assumes foreign keys have really simple set up (no
Does the application really need superuser privileges or is that
just a convenience? It's usually a good idea to follow the Principle
of Least Privilege -- do some searches on that phrase to learn
more about it and the rationale for following it.
Whether this approach is secure and better
Greg,
using views would be nice.
I also have an "add" privilege which allows adding only new documents. I think
that this requires writing triggers in Postgres.
This seems to be a lot of work.
I don't have enough knowledge to implement this in Postgres.
So it seems more reasonable to run my
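As a possible alternative (a sketch, not necessarily covering every requirement): an insert-only "add" privilege can often be expressed with plain GRANTs rather than triggers; the role and table names here are hypothetical:

```sql
CREATE USER doc_adder;                          -- hypothetical role
GRANT SELECT, INSERT ON documents TO doc_adder;
-- With no UPDATE or DELETE granted, doc_adder can add new documents
-- but cannot change or remove existing ones.
```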
Hi,
I have an application that we have built using postgresql 8.0.3 and tomcat
5.0.28. In the Java code I have put a statement setting autocommit to false,
which should be fine, but it comes up with the error SET AUTOCOMMIT TO OFF
is no longer supported.
I have tried using both the 7.4 and 8.0 JDBC drivers
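Server-side `SET AUTOCOMMIT` was removed in PostgreSQL 7.4, so it must not be sent as a SQL statement; in JDBC, call `Connection.setAutoCommit(false)` on the connection object instead, and the driver will manage explicit transactions. At the SQL level the equivalent is an explicit transaction block (the statements below are hypothetical):

```sql
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;   -- nothing is made permanent until this point
```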
On Mon, Jul 04, 2005 at 08:59:25AM +1000, Jamie Deppeler wrote:
I have an application that we have built using postgresql 8.0.3 and tomcat
5.0.28. In the Java code I have put a statement setting autocommit to false,
which should be fine, but it comes up with the error SET AUTOCOMMIT TO OFF
is no longer
I would strongly suggest that you create a database specific user,
one that has read/write access within this database, and that your
application use that user instead of the pg super user.
In general, the super user should never be used, except for
specific administrative tasks. This
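A minimal sketch of such a database-specific account (names and password are placeholders); in these versions GRANT works per table, so repeat it for each table the application touches:

```sql
CREATE USER appuser WITH PASSWORD 'changeme';
GRANT SELECT, INSERT, UPDATE, DELETE ON customers TO appuser;
GRANT SELECT, INSERT, UPDATE, DELETE ON orders    TO appuser;
-- Not a superuser: the application then cannot drop tables,
-- touch other databases, or bypass permissions.
```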
On 7/4/05, Gregory Youngblood [EMAIL PROTECTED] wrote:
I would strongly suggest that you create a database specific user,
one that has read/write access within this database, and that your
application use that user instead of the pg super user.
In general, the super user should never be
I recently moved a database to a new machine:
PostgreSQL 7.4.7 on i386-portbld-freebsd5.3, compiled by GCC cc (GCC)
3.4.2 [FreeBSD] 20040728
any queries related to tsearch2 give me this error:
ERROR: cache lookup failed for function 17188
I am running the same versions of Postgres,
hi,
we are using postgresql to analyze our web log; we have a 6M-row table,
and while running the query:
SELECT url,sum(ct) as ctperkw from ctrraw group by url order by ctperkw
desc limit 1000;
the table structure is:
CREATE TABLE ctrRAW
(
cdate date,
ip inet,
kw varchar(128),
prd varchar(6),
pos int,
On Sun, 3 Jul 2005, Matthew Terenzio wrote:
I recently moved a database to a new machine:
PostgreSQL 7.4.7 on i386-portbld-freebsd5.3, compiled by GCC cc (GCC) 3.4.2
[FreeBSD] 20040728
any queries related to tsearch2 give me this error:
ERROR: cache lookup failed for function 17188
I am
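A first diagnostic step, assuming the usual cause: after a dump/restore, tsearch2's configuration tables can still hold function OIDs from the old database, which no longer exist in the new one. Checking whether the OID from the error message resolves confirms this:

```sql
-- Does any function still have the OID the error reports?
SELECT oid, proname FROM pg_proc WHERE oid = 17188;
-- Zero rows back suggests the tsearch2 configuration references a stale
-- OID and tsearch2 needs to be reinstalled in the new database.
```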
laser wrote:
SELECT url,sum(ct) as ctperkw from ctrraw group by url order by ctperkw
desc limit 1000;
and the query ran out of memory; the log file is attached.
Have you run ANALYZE recently? You might be running into the well-known
problem that hashed aggregation can consume an arbitrary amount
Neil Conway [EMAIL PROTECTED] writes:
Have you run ANALYZE recently? You might be running into the well-known
problem that hashed aggregation can consume an arbitrary amount of
memory -- posting the EXPLAIN for the query would confirm that.
It would be useful to confirm whether this behavior
Have you run ANALYZE recently? You might be running into the well-known
problem that hashed aggregation can consume an arbitrary amount of
memory -- posting the EXPLAIN for the query would confirm that.
-Neil
yes, I ran VACUUM ANALYZE VERBOSE and then ran the query,
and finally got the out of
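Two ways to follow up on the suggestion above (query copied from earlier in the thread): confirm from the plan whether a HashAggregate node is involved, and if so, disable hashed aggregation for the session as a workaround:

```sql
-- Show the chosen plan (EXPLAIN alone does not execute the query):
EXPLAIN SELECT url, sum(ct) AS ctperkw
FROM ctrraw GROUP BY url ORDER BY ctperkw DESC LIMIT 1000;

-- If a HashAggregate node appears, force a sort-based aggregate:
SET enable_hashagg = off;
```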