[philosophical post regarding a missing feature of Postgres]
Hi all,
This is exactly the problem I'm facing right now, and I found there's no
good solution to this in Postgres.
Basically I have a complex database operation that spans a transaction
across multiple simple DB operations, which can
On Thu, 4 Sep 2003, Tom Lane wrote:
Greg Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
Yes, that's the real crux of the matter. Should the optimizer spend
cycles on *every* query to detect cases where the user has written
useless sort keys? I've got grave doubts
On 4 Sep 2003 at 12:17, Csaba Nagy wrote:
This problem would be easily solved if the current transaction were not
automatically rolled back on the failed insert. Given this, it would
be as easy as trying the insert and, if it fails, doing the update.
That violates definition of a transaction.
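Since there was no way to catch the failed insert inside the transaction, the workaround usually suggested was: try the UPDATE first, INSERT if nothing matched, and retry on a race. A rough sketch of that pattern (Python with the bundled sqlite3 module standing in for a Postgres connection; the counters table and names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, hits INTEGER)")

def upsert(conn, name):
    # Try the UPDATE first; rowcount 0 means the row doesn't exist yet.
    cur = conn.execute(
        "UPDATE counters SET hits = hits + 1 WHERE name = ?", (name,))
    if cur.rowcount == 0:
        try:
            conn.execute(
                "INSERT INTO counters (name, hits) VALUES (?, 1)", (name,))
        except sqlite3.IntegrityError:
            # Lost a race: another session inserted the row between our
            # two statements, so fall back to updating it.
            conn.execute(
                "UPDATE counters SET hits = hits + 1 WHERE name = ?", (name,))

upsert(conn, "home")
upsert(conn, "home")
print(conn.execute(
    "SELECT hits FROM counters WHERE name = 'home'").fetchone()[0])  # 2
```

The error-catching step is exactly what has to live client-side, which is what the thread is lamenting.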
On 03/09/2003 22:56 Bjørn T Johansen wrote:
Oki, thx... Any idea what sensible values would be?
(Like the number of maxIdle compared to maxActive, etc.)
I think sensible values are going to be very dependent on the number of
requests. One way to find out might be to write a little utility
This problem is discussed on erserver-general. There are some patches
available that solve the problem, and Andrew Sullivan posted workarounds
for those who didn't get their
tables replicated. If you're not subscribed already, you can read the
archive here:
This will be a little vague, it was last night and I can't now do the test in
that db (see below) so can't give the exact wording.
I seem to remember a report a little while ago about tsearch v2 causing
unexpected backend exit messages with 7.3.4 and now I'm getting similar
messages
Nigel J. Andrews wrote:
This will be a little vague, it was last night and I can't now do the test in
that db (see below) so can't give the exact wording.
I seem to remember a report a little while ago about tsearch v2 causing
unexpected backend exit messages with 7.3.4 and now I'm getting
On Thu, 4 Sep 2003, Teodor Sigaev wrote:
Nigel J. Andrews wrote:
I don't seem to be able to find the mailing list thread that covered pretty
much this exact unexpected-exit fault. So, can anyone help with a fix,
explanation or link to the relevant thread please?
Have you a core
Jules Alberts wrote:
Op 26 Aug 2003 (12:38), schreef Robert Treat [EMAIL PROTECTED]:
On Tue, 2003-08-26 at 07:28, Jules Alberts wrote:
Hello everyone,
I'm working on a tiny trigger function that needs to ensure that all
values entered in a field are lowercased. I can't use pl/pgsql
Tom Lane [EMAIL PROTECTED] writes:
Um ... I *am* an optimizer-geek.
That explains so much :)
I stand by my comment that there's a tradeoff between the potential gain
from an optimization and the time spent to find it.
Well, there are always tradeoffs in engineering. I'm just trying to push
Andreas Muck [EMAIL PROTECTED] writes:
We have an application running on Linux (SuSE 7.2, kernel 2.4.16) that
opens lots of connections to a Postgres database and occasionally dies
with a segfault.
The program seems to crash when it returns from pqWaitTimed(). As
pqWaitTimed uses select() to
Hi!!
This is my first post to the list. I have a problem.
I have a table with a two-column int4 index, but when I do a search using
the indexed field and run EXPLAIN on the query, it says Seq Scan;
I mean, it does not use the index.
I can't use the cast field::int4 because I use
From: Tom Lane [EMAIL PROTECTED]
[EMAIL PROTECTED] writes:
How do i get the core files to examine? There never seem to be any
produced, even outside the debuggers.
Most likely you have launched the postmaster under ulimit -c 0,
which prevents core dumps. This seems to be the
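For what it's worth, the core-dump limit can also be inspected from inside a process; a small sketch using Python's standard resource module (Unix only, not something the 2003 thread used):

```python
import resource

# A backend inherits its core-file size limit from whatever launched it;
# a soft limit of 0 (the effect of `ulimit -c 0`) suppresses core dumps.
soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
print("core soft limit:",
      "unlimited" if soft == resource.RLIM_INFINITY else soft)

# Rough in-process equivalent of `ulimit -c unlimited`,
# capped at the inherited hard limit:
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))
```

Since the limit is reset per login shell, it must be raised in the same shell (or process) that actually starts the postmaster.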
Hi Tom,
Tom Lane wrote:
Josué Maldonado [EMAIL PROTECTED] writes:
Noticed that the loop does not go through all fields:
Hard to believe. Could you give us a complete example, not a partial
one?
This is the code in the trigger function:
-- Function: public.audit_log()
--
Hi [EMAIL PROTECTED]
Is there some way to know the IP of the connection to Postgres
from within PL/pgSQL,
something like CURRENT_IP?
*---*
*-Edwin Quijada
*-Developer DataBase
*-JQ Microsistemas
*-809-747-2787
* If you wish to achieve things
Josué Maldonado [EMAIL PROTECTED] writes:
Tom Lane wrote:
Hard to believe. Could you give us a complete example, not a partial
one?
This is the code in the trigger function:
I tried this with a table created like so:
regression=# create table tinv (
regression(# f1 int,
Could anyone explain the max_connections setting in Postgres?
We ran into the connection limit on one of our production servers. One reason
is that we have more apps that use postgres running on it (RT3 and Webcollab
were added not long ago). Webcollab, for
Tom,
This is my table definition:
-- Table: public.tinv
-- DROP TABLE public.tinv;
CREATE TABLE public.tinv
(
dep_codigo varchar(6) NOT NULL,
dep_nombre varchar(35),
dep_cantid int4,
dep_fecadq date,
dep_vidaut int4,
dep_pordep int4,
dep_valdep float4,
dep_acumul float4,
What query are you running?
If you run
select * from indexed_table
you'll get a seq scan because you're hitting every row.
If you're doing
select * from indexed_table where indexed_field=1
you should get an index scan.
Anything in between the query planner will decide. Unlike some older
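The seq-scan-versus-index-scan choice can be watched directly in a planner's output. A rough illustration (Python's bundled sqlite3 stands in here for Postgres EXPLAIN; the table is invented, and the exact plan wording varies by version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, label TEXT)")
conn.execute("CREATE INDEX items_id_idx ON items (id)")

# Hitting every row: the planner has no reason to touch the index.
plan_all = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM items").fetchall()

# Filtering on the indexed column: the index narrows the lookup.
plan_one = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM items WHERE id = 1").fetchall()

print(plan_all[0][3])  # a full scan, e.g. "SCAN items"
print(plan_one[0][3])  # an index search, e.g. "SEARCH items USING INDEX ..."
```

In Postgres the analogous check is EXPLAIN (or EXPLAIN ANALYZE) on the real query.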
On Thu, Sep 04, 2003 at 09:01:08AM -0600, Josué Maldonado wrote:
Tom,
This is my table definition:
[etc]
It would help if you send the complete example, including the part
where you actually drop a column from the table.
--
Alvaro Herrera ([EMAIL PROTECTED])
Victory is for the one who
On Mon, 1 Sep 2003, Tom Lane wrote:
Claudio Lapidus [EMAIL PROTECTED] writes:
So? Is there a way to add the sequence to an existing table?
Sure. You have to break the SERIAL down to its component parts though.
Something like
CREATE SEQUENCE seq;
ALTER TABLE tab ADD COLUMN
Alvaro Herrera Munoz [EMAIL PROTECTED] writes:
It would help if you send the complete example, including the part
where you actually drop a column from the table.
Oh, I forgot about that part (should read the Subject: line again ;-)).
I do see a failure after dropping a column. I've applied
[EMAIL PROTECTED] writes:
I have set ulimit -c unlimited as you suggested,
I then copied postmaster to /home/postgres
and ran it as postgres from there,
but still no core files. Where should they appear?
In $PGDATA/base/yourdbnumber/core (under some OSes the file name
might be core
Alvaro,
Actually I built a new table from the old one, so the table in question is
a new table with the same structure as before. The way I dropped the
column was to right-click on it from pgAdmin III then delete; why should
that matter?
Thanks
Alvaro Herrera Munoz wrote:
On Thu, Sep 04, 2003 at
First, apologies for the stuff about not understanding why there's
only one core file; I now have a post-it note saying ulimit gets
reset at reboot (I assume that's what happened).
So please find below a potentially more useful core file gdb output:
Core was generated by `postgres: mat
Harald Fuchs [EMAIL PROTECTED] writes:
You can't. The only thing MySQL has to offer is
Actually I found two things related to this:
http://www.mysql.com/doc/en/INSERT.html
http://www.mysql.com/doc/en/REPLACE.html
You can do
INSERT INTO tab (...) VALUES (...) ON DUPLICATE KEY UPDATE
[EMAIL PROTECTED] writes:
#0 SN_create_env (S_size=0, I_size=2, B_size=1) at api.c:6
6           z->p = create_s();
Hm. Is it possible you're running out of memory? If the crash is right
there, and not inside create_s(), it seems like a null return from calloc
is the only explanation. This
On Sun, 31 Aug 2003, Jefim M wrote:
I have the following setup for my PostgreSQL box:
4GB Ram
Dual CPU
Redhat 7.3 with 2.4.20-18.7smp kernel
2 network cards - one is dedicated to internal traffic to the NetApp filer
and other is used for other traffic
and all the Postgres database
Great! Many thanks.
Casting the type did the trick.
Even though it is somewhat clear now, I don't see why the type can't
be inferred from the field type.
--
Carlos G Mendioroz [EMAIL PROTECTED] LW7 EQI Argentina
It seems to work for me...
create table foobar (
col1 varchar,
col2 varchar)
;
create function testfunc() returns trigger language pltcl as '
foreach id [array names NEW] {
elog NOTICE $NEW($id)
}
';
create trigger testtrig after insert or update on foobar for each row
Well, it's unfortunate that you feel that way, because SQL Server handles it
correctly.
Tom Lane [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Greg Stark [EMAIL PROTECTED] writes:
Bupp Phillips [EMAIL PROTECTED] writes:
select * from customer order by customer_id, first_name;
I spent this afternoon enabling debugging on the postmaster side, and determined
that there was a critical index missing on one of the tables ... there was
no index on the url.rec_id field, so a query that looks like:
SELECT rec_id, site_id, pop_rank FROM url WHERE rec_id IN
At 02:25 PM 9/4/03 -0400, Jan Wieck wrote:
Hmm ... would need to take a look at the code ... but a wild guess would
be the communication between the backends and the collector daemon. I'm
not sure if the UDP crap^H^H^H^Hstuff I used in there is IPV6 safe. What
version of PostgreSQL was that?
On
The arrays NEW and OLD only contain entries for attributes in the
respective rows that are not NULL. The list $TG_relatts contains a list
of all column names of the triggering relation with a leading empty
element (to make the index in that list reflect pg_attribute.attnum).
Jan
[EMAIL
close the connection and reconnect to get a fresh backend that doesn't
cache anything.
Jan
Josué Maldonado wrote:
Hello list,
The TCL trigger that uses the NEW and OLD arrays failed after I
removed an unused column; now I get this error:
pltcl: Cache lookup for attribute
Josué Maldonado wrote:
I'm sorry, found my mistake, the way I'm calling the function is wrong
it should be audit_log ('xtable','internal'), it works.
Hmmm, that should not work. The function should be declared taking no
arguments, and yet you find the arguments from the CREATE TRIGGER in the
Should one use pg_dumpall to back up the database, or is it more practical to just copy the data directory?
Regards,
BTJ
--
---
Bjørn T Johansen (BSc, MNIF)
Executive Manager
[EMAIL PROTECTED]
On Thu, 4 Sep 2003, Bupp Phillips wrote:
I can't find a way to create descending Indexes in PG. Is this possible?
If so, can you point me in the right direction?
If not, can I request this to be an enhancement?
You can do it, but it's a bit difficult. You need to make an operator
class for
On Thu, 4 Sep 2003, Richard Ellis wrote:
On Thu, Sep 04, 2003 at 12:17:35PM +0200, Csaba Nagy wrote:
[philosophical post regarding a missing feature of Postgres]
I found there's no way to avoid failed inserts because of
unique constraint violations, causing automatic roll-back of the
Adam Kavan wrote:
At 10:48 AM 9/4/03 -0400, Matthew T. O'Connor wrote:
On Thu, 2003-09-04 at 09:01, Jan Wieck wrote:
Do the stats_* options appear twice in the config file? If that isn't
the case, the only possible problem left I can think of is that you're
running on a platform where no
On Wed, 3 Sep 2003, Bupp Phillips wrote:
Well, it's unfortunate that you feel that way, because SQL Server handles it
correctly.
For some definition of correctly. If you're in a system which gets
penalized .001 seconds for each query planning that uses a multi-column
order by and you do 100
Hi Jan,
You are right; that's why the loop doesn't go through all the columns.
Thanks.
Jan Wieck wrote:
The arrays NEW and OLD only contain entries for attributes in the
respective rows that are not NULL. The list $TG_relatts contains a list
of all column names of the triggering relation
However, that doesn't cover the case where you want to update the record
if it already exists.
Jon
insert into test (a, b, c, d)
(select 1, 2, 3, 4 where not exists
(select 1 from test where a=1 and b=2 and c=3 and d=4)
);
If your table contains a=1, b=2, c=3, and d=4, nothing will
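Jon's INSERT ... WHERE NOT EXISTS trick is easy to verify as a runnable sketch (Python with the bundled sqlite3 module standing in for Postgres; the test table mirrors the one in the quoted SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (a, b, c, d)")

sql = """
    INSERT INTO test (a, b, c, d)
    SELECT 1, 2, 3, 4
    WHERE NOT EXISTS
        (SELECT 1 FROM test WHERE a=1 AND b=2 AND c=3 AND d=4)
"""
conn.execute(sql)  # first run: no matching row exists, so it inserts
conn.execute(sql)  # second run: the row already exists, nothing happens

print(conn.execute("SELECT COUNT(*) FROM test").fetchone()[0])  # 1
```

Note this only avoids the duplicate insert within one session; under concurrency two sessions can still pass the NOT EXISTS check at the same time, which is why the thread keeps coming back to the failed-insert problem.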
Ian Harding wrote:
A tcl function is just like any other, you can call it from within a
plpgsql trigger function in the same way you would call a built-in, or
any other function. Having said that, I am not sure what the exact
plpgsql options are besides SELECT myfunction(''foo'', ''bar''
Is index a stop word in the archives? It doesn't seem to return any
results when typing in index with anything (ex: index correlation).
FWIW, just a data point. -sc
--
Sean Chittenden
OK!
Can you tell me about the best setup and performance, and the best
parameters to set? I am such a newbie.
I am using Postgres to migrate a system from 4D Server to PgSql with Delphi
for an insurance company.
My server is a Pentium IV 2.0 GHz, 1 GB RAM, 40 GB HD at 7200 RPM.
Now I want to know what is the best
"BTJ" == Bjørn T Johansen writes:
BTJ Should one use pg_dumpall to backup the database or is it more
yes, or pg_dump specific databases.
BTJ practical to just copy the data directory?
this would do you no good, since the files are not necessarily in a
consistent state when you copy many of
Bjørn T Johansen [EMAIL PROTECTED] writes:
Should one use pg_dumpall to backup the database or is it more practical
to just copy the data directory?
The data directory will not be consistent unless the server is
stopped. pg_dumpall works well, produces a consistent backup and is
easily
So I have a query in which some of the select values are subqueries.
The subqueries are aggregates, so I don't want to turn this into a join;
it would become too complex and postgres would have trouble optimizing things.
So my question is, is there some way to have a subselect return multiple
On Thu, 2003-09-04 at 14:08, Edwin Quijada wrote:
OK!
Can you tell me about the best setup and performance, and the best
parameters to set? I am such a newbie.
I am using Postgres to migrate a system from 4D Server to PgSql with Delphi
for an insurance company.
My server is a Pentium IV 2.0 GHz, 1 GB
On Thu, Sep 04, 2003 at 12:29:17PM -0700, Stephan Szabo wrote:
On Thu, 4 Sep 2003, Richard Ellis wrote:
On Thu, Sep 04, 2003 at 12:17:35PM +0200, Csaba Nagy wrote:
[philosophical post regarding a missing feature of Postgres]
I found there's no way to avoid failed inserts because of
On Thu, 2003-09-04 at 15:40, Doug McNaught wrote:
Bjørn T Johansen [EMAIL PROTECTED] writes:
Should one use pg_dumpall to backup the database or is it more practical
to just copy the data directory?
The data directory will not be consistent unless the server is
stopped. pg_dumpall
Richard Huxton wrote:
On Wednesday 03 September 2003 12:19, Amin Schoeib wrote:
Hi,
How can I make textoutput (for example for tracing the code) in PLPGSQL.
In Oracle you can do that using dbms.output.put_line
RAISE NOTICE ''My counter is % with total %'',i,run_tot;
Double-quoted string as
You want this document:
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
great article on performance tuning postgresql. After reading it and
digesting all you can, you should have a noticeably faster machine.
On Thu, 4 Sep 2003, Edwin Quijada wrote:
OK!
Can you tell me about the
yes it is/was ... I've just removed it from the list of stop words, but am
not going to force a re-index of 250k URLs ;( So, any new indexing will
pull in index as a valid search word, and when the current URLs
expire/re-index, they will get pulled in then also ...
On Thu, 4 Sep 2003, Sean
yes it is/was ... I've just removed it from the list of stop words,
but am not going to force a re-index of 250k URLs ;( So, any new
indexing will pull in index as a valid search word, and when the
current URLs expire/re-index, they will get pulled in then also ...
Hrm... if I search for
On Thu, 4 Sep 2003, Sean Chittenden wrote:
yes it is/was ... I've just removed it from the list of stop words,
but am not going to force a re-index of 250k URLs ;( So, any new
indexing will pull in index as a valid search word, and when the
current URLs expire/re-index, they will get
Stephan Szabo [EMAIL PROTECTED] writes:
On Thu, 4 Sep 2003, Bupp Phillips wrote:
I can't find a way to create descending Indexes in PG. Is this possible?
You can do it, but it's a bit difficult.
Do you really *need* a descending index? In most cases a backwards scan
on a regular index gets
Whatever you guys try or suggest, it's doomed to suffer.
The whole problem stems from using a non-standard feature. And in my
opinion MySQL's REPLACE INTO is less a feature or extension to the
standard than another ill-considered addition of
apparently speed-gaining
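Jan's objection to REPLACE is easy to demonstrate: it is delete-then-insert, not update, so unmentioned columns are lost. A small sketch using Python's bundled sqlite3, whose INSERT OR REPLACE has the same replace semantics (the table and values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, a TEXT, b TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'precious', 'old')")

# REPLACE deletes the conflicting row and inserts a brand-new one,
# so any column you don't mention is wiped out rather than preserved.
conn.execute("INSERT OR REPLACE INTO t (id, b) VALUES (1, 'new')")

print(conn.execute("SELECT a, b FROM t WHERE id = 1").fetchone())
# (None, 'new')  -- the old value of column a is gone
```

A true update would have kept a = 'precious', which is why REPLACE is not a substitute for the insert-or-update pattern discussed elsewhere in the thread.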
Stephan Szabo [EMAIL PROTECTED] writes:
On Wed, 3 Sep 2003, Bupp Phillips wrote:
Well, it's unfortunate that you feel that way, because SQL Server handles it
correctly.
For some definition of correctly. If you're in a system which gets
penalized .001 seconds for each query planning
Richard Ellis [EMAIL PROTECTED] writes:
True, if the row does not already exist. But in that situation,
because of the unique constraint premise in the original quote, there
is always going to be at least one failed transaction. So the battle
is already lost before it's even begun.
Well,
On Thu, Sep 04, 2003 at 11:21:05AM -0700, Jenny - wrote:
I think the locks would actually be represented by PROCLOCK structures.
The LOCK structures are for lockable objects, not for actual locks.
Well, from what I understand, PROCLOCK stores the TransactionID and the LOCK
it's holding a lock
Alvaro Herrera Munoz [EMAIL PROTECTED] writes:
Anyway, I think the LOCK structure represents something that can be locked.
Right.
The PROCLOCK struct represents that some process is holding a lock on said
object.
IIRC, a PROCLOCK is created as soon as some backend tries to lock some
lockable
Manfred Koizar wrote:
[ still brainstorming ... ]
On Thu, 21 Aug 2003 17:16:50 -0400, Tom Lane [EMAIL PROTECTED]
wrote:
Whenever a backend encounters a dead tuple it inserts a reference to
its page into the RSM.
This assumes that backends will visit dead tuples with significant
Jan Wieck wrote:
Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
Okay, my proposal would be to have a VACUUM mode where it tells the
buffer manager to only return a page if it is already in memory, and
a "not cached" answer if it would have to read it from disk, and simply skip
the
I am working on a project that involves displaying locking information about
each lock taken, whether it be a row-level or table-level lock.
When dealing with struct LOCK (src/include/storage) I have noticed that
PostgreSQL creates a single LOCK struct for each table in the db. Like if
I