> "Christopher Kings-Lynne" <[EMAIL PROTECTED]> writes:
> > Latest CVS, timetz and horology is failing...
>
> Would you poke into it and see why?
I can, but I'm not sure what you want me to do - I'm not really familiar
with it all bar th
> "Damjan Pipan" <[EMAIL PROTECTED]> writes:
> > I have dropped some columns and have quite some problems now with RECTYPE
> > variables types.
> > My question: Can I RECREATE dropped columns?
> > I checked the pg_attribute table and there I can change values of attname,
> > attstattarget and a
> > set regex_flavor = advanced
> > set regex_flavor = extended
> > set regex_flavor = basic
> [snip]
> > Any suggestions about the name of the parameter?
>
> Actually I think 'regex_flavor' sounds fine.
Not more Americanisms in our config files!! :P
Chris
> "Christopher Kings-Lynne" <[EMAIL PROTECTED]> writes:
> >> Actually I think 'regex_flavor' sounds fine.
>
> > Not more Americanisms in our config files!! :P
>
> You want regex_flavour? ;-)
Hehe - yeah I don't really care. I have
> Hmmm... does anyone remember the name of that NFS testing tool the
> FreeBSD guys were using? Think it came from Apple. They used it to
> find and isolate bugs in the FreeBSD code a while ago.
fsx
Chris
---(end of broadcast)---
TIP 1: subscr
It's a
7.3 feature only.
Chris
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of mail.luckydigital.com
Sent: Sunday, 2 February 2003 2:19 PM
To: [EMAIL PROTECTED]
Subject: [HACKERS] 7.2 result sets and plpgsql
I've had a good look and to
> > Nice ! We'll send you archive with new tsearch and short
> > info, so you could test it and write documentation.
>
> I have a live DB, is it possible to install the new alpha tsearch
> module w/o conflicting with the existing production one?
Can you install it to a different schema?
Chris
Hi guys,
Looking at the log_duration postgresql.conf option. How about adding an
option log_duration_min which is a value in milliseconds that is the minimum
time a query must run for before being logged. Basically, what I'd be
interested in is "please log the SQL query and duration of all queri
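A sketch of how the proposed setting might look in postgresql.conf (the name log_duration_min is hypothetical, taken from this suggestion; the feature PostgreSQL eventually shipped is called log_min_duration_statement):

```ini
# Hypothetical GUC from the proposal above: log the statement text and
# duration of any query that runs for at least this many milliseconds.
log_duration_min = 500    # only log queries taking 500 ms or longer
```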
Hrm. I just saw that the PHP ADODB guy just published a bunch of database
benchmarks. It's fairly evident to me that benchmarking PostgreSQL on
Win32 isn't really fair:
http://php.weblogs.com/oracle_mysql_performance
*sigh*
Chris
Why don't we include a postgresql.conf.recommended along with our
postgresql.conf.sample. That shouldn't be too hard. We can just jack up
the shared buffers and wal buffers and everything - it doesn't matter if
it's not perfect, but it will at least give people an idea of what needs to
be increas
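A rough sketch of what such a postgresql.conf.recommended might contain (values are illustrative starting points for a machine with a few hundred MB of RAM, not tuned figures):

```ini
# postgresql.conf.recommended (sketch) - more aggressive than the
# conservative defaults shipped in postgresql.conf.sample.
shared_buffers = 5000          # ~40 MB at 8 kB pages
wal_buffers = 8                # benchmarks elsewhere showed little gain beyond the default
sort_mem = 8192                # kB per sort (7.3-era name; later renamed work_mem)
effective_cache_size = 10000   # hint to the planner about the OS disk cache
```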
> >After it's all said and done, I would rather someone simply say, "it's
> >beyond my skill set", and attempt to get help or walk away. That seems
> >better than them being able to run it and say, "it's a dog", spreading
> >word-of-mouth as such after they left PostgreSQL behind. Worse yet,
> >t
Hi Everyone,
I have just completed a basic set of benchmarking on our new database
server. I wanted to figure out a good value for shared_buffers before we go
live.
We are a busy ecommerce-style website and so we probably get 10 or 20 to 1
read transactions vs. write transactions. We also don't
> > I am now going to leave it on 5000 and play with wal_buffers.
> > Is there anything else people are interested in me trying?
>
> Keenly interested. Who wouldn't want to know how to optimize it?
> That's the hardest guideline to find.
Oops - what that sentence was supposed to say is "Is there
> > Machine:
> > 256MB RAM, FreeBSD 4.7, EIDE HDD, > 1 Ghz
>
> Seems like a small amount of memory to be doing memory-based tests with.
Perhaps, but I'm benchmarking for that machine, not for any other. The
results have to include the 256MB spec.
Also, the peak was 25MB of SHM, which still leaves 231MB
> Okay, here's one: most Unix systems store all of the configuration
> files in a well known directory: /etc. These days it's a hierarchy of
> directories with /etc as the root of the hierarchy. When an
> administrator is looking for configuration files, the first place he's
> going to look is in
> Had this happen at a previous employer, and it definitely is bad. I
> believe we had to do a reboot to clear it up. And we saw the problem a
> couple of times since the sys admin wasn't able to deduce what had
> happened the first time we got it. IIRC the problem hit somewhere around
> 150 connec
> Seriously, I know Linux can change these on the fly, and I'm pretty sure
> Solaris can too. I haven't played with BSD for a while so can't speak
> about that. Anyone else know?
You cannot change SHMMAX on the fly on FreeBSD.
Chris
I'm still getting failure, but it gets weirder:
regression=# drop table timetz_tbl;
DROP TABLE
regression=# CREATE TABLE TIMETZ_TBL (f1 time(2) with time zone);
INSERT INTO TIMETZ_TBL VALUES ('00:01 PDT');
CREATE TABLE
regression=#
regression=# INSERT INTO TIMETZ_TBL VALUES ('00:01 PDT');
INSERT
Oops - just to clarify I accidentally copied too many INSERTs into the
email. The result of the first SELECT is correct compared to the INSERTs
that I did. I copied the INSERTs and CREATE TABLE as-is from timetz.sql
Chris
> -Original Message-
> From: Christopher Kings-Lynne [
Hi Everyone,
I've just spent the last day and a half trying to benchmark our new database
installation to find a good value for wal_buffers. The quick answer - there
isn't, just leave it on the default of 8.
The numbers just swing up and down so much it's impossible to say that one
setting is be
> I don't think this is based on a useful test for wal_buffers. The
> wal_buffers setting only has to be large enough for the maximum amount
> of WAL log data that your system emits between commits, because a commit
> (from anyone) is going to flush the WAL data to disk (for everyone).
> So a benc
> "Christopher Kings-Lynne" <[EMAIL PROTECTED]> writes:
> > Here's a question then - what is the _drawback_ to having 1024 wal_buffers
> > as opposed to 8?
>
> Waste of RAM? You'd be better off leaving that 8 meg available for use
> as genera
OK,
Here's a stab at some extra conf files. Feel free to shoot them down.
If we can come up with at least _some_ alternative files that we can put
somewhere for them to see when postgres is installed, then at least people
can see what variables will affect what...
I didn't see the point of a 'w
No, not really - I can do some more testing with pgbench to see what
happens though... I'll do it on Monday
Chris
On Fri, 14 Feb 2003, Tom Lane wrote:
> Manfred Koizar <[EMAIL PROTECTED]> writes:
> > In postgresql.conf.sample-writeheavy you have:
> > commit_delay = 1
> > Is this still nee
You can just turn it on in one second in your postgresql.conf file...
Chris
On Fri, 14 Feb 2003, Kevin Brown wrote:
>
> One of the functions of the DBA is to monitor what people are doing to
> the database. My experience is that "ps" is often sorely lacking in
> this regard: its output is somew
> >> We could retarget to try to stay under SHMMAX=4M, which I think is
> >> the next boundary that's significant in terms of real-world platforms
> >> (isn't that the default SHMMAX on some BSDen?). That would allow us
> >> 350 or so shared_buffers, which is better, but still not really a
> >> se
> A separate line of investigation is "what is the lowest common
> denominator nowadays?" I think we've established that SHMMAX=1M
> is obsolete, but what replaces it as the next LCD? 4M seems to be
> correct for some BSD flavors, and I can confirm that that's the
> current default for Mac OS X -
> Someone at Red Hat is working on point-in-time recovery, also known as
> incremental backups. It will be in 7.4.
Does that mean that the poor guy/gal is implementing redo for all the index
types?
Chris
> Tom is right here. log_duration _just_ prints the duration, so we would
> need to basically create a merged param that does log_duration and
> log_statement and have it activate only if the statement takes more than
> X milliseconds, something like log_long_statement, or something like
> that.
>
> Someone asked about this at FOSDEM. The only way I know to do it is look
> in the pgsql_temp directory, but they disappear pretty quickly. Folks,
> do we need something to report sort file usage?
How about a new GUC variable: log_sort_tempfiles
And in the code that creates the temp file, if th
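A sketch of how the suggested variable might look (log_sort_tempfiles is the hypothetical name from this thread; a similar setting, log_temp_files, was later added to PostgreSQL, taking a size threshold in kilobytes):

```ini
# Hypothetical setting from the suggestion above:
log_sort_tempfiles = on     # log whenever a sort spills to a temp file
# The setting that later shipped takes a threshold instead:
# log_temp_files = 0        # 0 = log every temp file; -1 = disable
```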
Australia next year?
>
> -----------
>
> Christopher Kings-Lynne wrote:
> > Linux.conf.au Report
> >
> >
> > The Linux.conf.au is an international Linux/Open Source event that a
> Averaging over three trials on an unloaded system, I got 21.0 seconds
> with stats_command_string off, 27.7 with it on, or about 32% overhead.
>
> My conclusion is that stats_command_string overhead is non-negligible
> for simple commands. So I stand by my previous opinion that it should
> not b
OK, this is the problem:
I want to write a bit of SQL so that if a row exists in a table it gets
updated, otherwise it gets inserted. Problem is, there is a very high chance
of simultaneous execution of this query on the same row (the rows have a
unique index).
So, strategy one:
begin;
update row;
> Do the update, then try to insert if the update found nothing, and put
> a retry loop around the whole transaction in case you fail because of
> concurrent inserts.
>
> Realistically you will need a retry loop in all but the most trivial
> cases anyway --- certainly so if you want to use serializ
> If you don't have a primary key already, create a unique index on the
> combination you want to be unique. Then:
>
> . Try to insert the record
> . If you get a duplicate key error
> then do update instead
>
> No possibilities of duplicate records due to race conditions. If two people
> try to
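A minimal sketch of that insert-first approach as a PL/pgSQL function (table, column, and function names are made up for illustration; this uses the exception trapping that arrived in PostgreSQL 8.0, so on 7.3 the retry loop would have to live in client code instead):

```sql
-- Assumes: CREATE TABLE counters (key text PRIMARY KEY, hits integer);
CREATE FUNCTION upsert_counter(k text) RETURNS void AS $$
BEGIN
    -- Try the insert first; the unique index arbitrates races.
    INSERT INTO counters (key, hits) VALUES (k, 1);
EXCEPTION WHEN unique_violation THEN
    -- Someone beat us to it: fall back to an update.
    UPDATE counters SET hits = hits + 1 WHERE key = k;
END;
$$ LANGUAGE plpgsql;
```

Much later releases solve this directly with INSERT ... ON CONFLICT DO UPDATE, which avoids the race without any loop.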
> I ended up with few only indexes on the operations table, because the
> processes that fill it up do minimal lookups to see if data is already in the
> table, if not do inserts. Then at regular intervals, the table is cleaned up -
> that is, a process to remove the duplicate is run. This unfortun
> This is a fairly spectacular failure :-(. As far as I can see from the
> semctl and shmctl man pages, the only plausible reason for EINVAL is
> that something had deleted the semaphores and shared memory out from
> under Postgres. I do not believe that Postgres itself could have done
> that ---
> On Tue, Feb 18, 2003 at 13:13:30 +0100,
> Christoph Haller <[EMAIL PROTECTED]> wrote:
> > > >
> > > > Moreover, are there any ANSI standards for this kind of thing? Or each one
> > > > to his own?
>
> Based on discussions in the past that I have loosely followed, I believe
> there is some k
Hi,
I've been looking at the dependency API and I notice that there is a
function to delete ALL dependencies on an object and a function to add a
dependency, but there doesn't seem to be any way of deleting a dependency
between two _particular_ objects.
Is there any other way of doing this other
> "Christopher Kings-Lynne" <[EMAIL PROTECTED]> writes:
> > I've been looking at the dependency API and I notice that there is a
> > function to delete ALL dependencies on an object and a function to add a
> > dependency, but there doesn't seem to
When I run adddepend on my FreeBSD system, I get this:
> /usr/local/bin/adddepend -d usa
install_driver(Pg) failed: Can't locate DBD/Pg.pm in @INC (@INC contains:
/usr/local/lib/perl5/site_perl/5.005/i386-freebsd
/usr/local/lib/perl5/site_perl/5.005 . /usr/libdata/perl/5.00503/mach
/usr/libdata/pe
*sigh* My system was stuffed. Fixed now. Please disregard this post -
sorry for spamming the list...
Chris
- Original Message -
From: "Christopher Kings-Lynne" <[EMAIL PROTECTED]>
To: "Hackers" <[EMAIL PROTECTED]>
Sent: Friday, February 21, 2003 10:
Hi,
I just noticed a bug in adddepend:
The below commands will upgrade the foreign key style. Shall I execute
them?
DROP TRIGGER "RI_ConstraintTrigger_1105102" ON news_authors;
DROP TRIGGER "RI_ConstraintTrigger_1105103" ON news_authors;
DROP TRIGGER "RI_ConstraintTrigge
Hi,
The new cluster is cool in that :
1. It works
2. It saves the indisclustered status
However, after a dump and restore, this indisclustered status will be lost.
Would it be an idea to issue a CLUSTER command after the CREATE TABLE
statement in SQL dumps for tables that have an indisclustered
> The new cluster is cool in that :
>
> 1. It works
> 2. It saves the indisclustered status
>
> However, after a dump and restore, this indisclustered status will be lost.
> Would it be an idea to issue a CLUSTER command after the CREATE TABLE
> statement in SQL dumps for tables that have an indisc
> While I don't see the syntax of:
>
> update table set (col...) = ( val...)
>
> as valuable compared to separate col=val assignments, I do see a value
> in allowing subqueries in such assignments:
>
> update table set (col...) = ( select val ..)
>
> Without it, you have to do separate subquery
> BTW, looking at the SQL99 standard, I see that you can do
>
> UPDATE table SET ROW = foo WHERE ...
>
> where foo is supposed to yield a row of the same rowtype as table
> --- I didn't dig through the spec in detail, but I imagine foo can
> be a sub-select. I don't care a whole lot for that, t
OK,
I have discovered a problem with my auto-naming patch. It's to do with
dumping serial columns with pg_dump, eg:
--
-- TOC entry 2 (OID 1004551)
-- Name: users_users; Type: TABLE; Schema: public; Owner: chriskl
--
CREATE TABLE users_users (
userid serial NOT NULL,
firstname character
> > REPLACE INTO anyone? ;)
>
> The upcoming SQL 200x standard includes a MERGE command that appears to
> fulfill that purpose.
Is there somewhere that I can read that spec?
Or can you just post the MERGE syntax for us?
*sigh* It's just like a standard to come up with a totally new syntax for a
> "Christopher Kings-Lynne" <[EMAIL PROTECTED]> writes:
> >> Would it be an idea to issue a CLUSTER command after the CREATE TABLE
> >> statement in SQL dumps for tables that have an indisclustered index?
>
> Yeah...
> A CLUSTER command issued just
> > See how it's HTMLised the foreign key name? I cannot find how $keyname in
> > the code is being html escaped. Perhaps it's some weird taint mode thing?
>
> Not to mention when it's , the code is setup to ignore the
> constraint name entirely (line 267).
I worked around it by checking to see
> No; directly manipulating the system catalogs in dump scripts is a
> crummy idea, because (a) it only works if you're superuser, and (b)
> it creates a nasty backwards-compatibility problem if we change the
> catalogs involved.
>
> A CLUSTER command issued just after table creation, while it's st
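Putting the two quoted points together, a dump might look something like this sketch (names are illustrative; this uses the 7.3-era `CLUSTER index_name ON table_name` syntax):

```sql
CREATE TABLE t (
    id integer PRIMARY KEY
);
-- Issue CLUSTER immediately, while the table is still empty and the
-- operation is cheap; it just records the clustering index.
CLUSTER t_pkey ON t;
-- ... COPY the data in afterwards ...
```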
Hi Guys,
Were the pg_attribute.attstorage and pg_type.typstorage fields available in
7.1?
Also, if they existed in 7.2, but the ALTER TABLE/SET STORAGE command didn't
exist, should I still dump it if they are dumping a 7.2 database?
Chris
Hi guys,
I notice that we're still dumping CHECK constraints as part of the CREATE
TABLE statement, and not as an ALTER TABLE statement after the data has been
loaded.
Should we move it to after the data for speed purposes, like we have with
all other constraints?
Chris
> "Christopher Kings-Lynne" <[EMAIL PROTECTED]> writes:
> > I notice that we're still dumping CHECK constraints as part of the CREATE
> > TABLE statement, and not as an ALTER TABLE statement after the data has been
> > loaded.
>
> > Should we mo
> Although I suggested doing a single unconditional ANALYZE at the end
> of the script, second thought leads me to think the per-table ANALYZE
> (probably issued right after the table's data-load step) might be
> better. That way you'd not have any side-effects on already-existing
> tables in the
> Two things I regret: one is being unable to see the changes as patches
> the way you applied them, to get a sense of how the code evolved.
> Unfortunately the interface to CVS via web does not allow me to see it,
> or I don't know how to use it. It's not that important, however,
> because I was
> * Make CLUSTER error out if the target index is not of an 'amindexnulls'
> index AM. This would amount to restricting CLUSTER to b-trees, which is
> annoying.
I think this solution is fine - we just need to fix GiST to index nulls
one day :)
> It occurs to me also that the same kind of pitfall
Hey, with this new ALTER SEQUENCE patch, how about this for an idea:
I submitted a patch to always generate non-colliding index and sequence
names. Seemed like an excellent idea. However, 7.3 dumps tables like this:
CREATE TABLE blah (
    a SERIAL
);
SELECT SETVAL('blah_a_seq', 10);
Sort of th
Hi Brandon,
> Are we supposed to report bugs with the CVS tip of PostgreSQL, or are
> we to assume that the developers are well aware of problems there and
> are already working on them?
You're supposed to report them! If we were aware of the problems, we'd fix
them ;)
Cheers,
Chris
Why not just run PostgreSQL like everyone else does (from a hard drive) and
simply give it heaps of buffers - let PostgreSQL manage its RAM itself?
Unless you have your xlog on physical storage, you are asking for trouble.
Actually, unless you have all your database in physical storage, you are
as
Hi,
I tried to go buy a shirt off the pgsql.com site, but when it comes to
shipping it just has:
international shipping zone 1
international shipping zone 2
international shipping zone 3
international shipping zone 4
international shipping zone 5
How am I supposed to know which zone Australia is
> Feels like we've been isolating a whole bunch of bugs in 7.3.2 recently,
> some of which are causing crashes out in the real world.
>
> Wondering when we feel it'd be good to start assembling a 7.3.3? I'm
> thinking in about two weeks or so, to give a bit more time to catch bugs
> and stuff.
I
> Was this resolved. Christopher, do you have a reproducible case?
It wasn't resolved, in fact I'd forgotten about it :)
I do have a reproducible case (our live server), however it seems like it's
basically a case of an invalid set of triggers. I really need to manually
remove some of the trigg
> Was this resolved. Christopher, do you have a reproducible case?
Oh sorry, I answered the wrong question!
Yes, I resolved it by reinstalling my DBD perl stuff.
I still have the problem of left over constraint triggers, but they do look
like they're broken, so it's not an adddepend problem...
> The optimizer does not think that "pbx_date = CURRENT_DATE" satisfies the
> partial index's WHERE condition. I don't see any really good way around
> this; to improve matters there'd need to be some concept of a plan that
> is only good for a limited time.
It's the same as the slight issue I ha
> 3. Create a runtime parameter (GUC variable) which when set causes us
> to assume SCROLL is present even if it's not stated. Setting this
> to TRUE would allow existing applications to work without modification;
> when it's FALSE, we'd enforce the spec behavior. The trouble with this
> is the T
> > I'd be in favour of creating whole sets of backwards-compatibility GUC's
> > whenever we break backwards compatibility.
> >
> > eg.
> > use_72_compat = yes
> > use_73_compat = yes
>
> That sounds like a recipe for a maintenance nightmare to me.
We only have to keep them for one major version,
> So if I understand correctly, all instances of anyarray and anyelement
> in a function definition would need to be self-consistent, but the group
> could represent essentially any datatype with its corresponding array
> type. If we need more than one of these self consistent groups, we could
> I had been leaning to May 1 beta, but am happy to switch to June 1 if
> you feel that makes an improvement in the odds of completing the Windows
> port. (I think it will also improve the odds of finishing this protocol
> stuff I've taken on...) I don't want to see it pushed further than that
>
> > One addition I would personally like to see (it comes up in my apps
> > code) is the ability to detect wheather the server is big endian or
> > little endian. When using binary cursors this is necessary in order to
> > read int data.
>
> Actually, my hope is to eliminate that business entirely
When I create a new table with a serial column, the first row defaults to
inserting '1'.
If I delete all the rows from the table and want to reset the sequence, I
can't:
ERROR: users_health_types_type_id_seq.setval: value 0 is out of bounds
(1,9223372036854775807)
How do I set the sequence to h
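setval has a three-argument form whose final boolean says whether the sequence has been "called"; passing false makes the next nextval return the value itself rather than value + 1:

```sql
-- Reset so the next nextval() on this sequence returns 1
-- (sequence name taken from the error message above):
SELECT setval('users_health_types_type_id_seq', 1, false);
```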
> implode(text[], text) returns text - join array elements into a
> string using given string delimiter
>
> I'm open to opinions on implode() -- I only picked implode() because
> that's what it is called in PHP. Any suggestions?
It's also called 'join' in PHP...
Chris
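For what it's worth, the function that ultimately landed in PostgreSQL core uses neither name: array_to_string joins, and string_to_array splits:

```sql
SELECT array_to_string(ARRAY['a','b','c'], ',');   -- gives 'a,b,c'
SELECT string_to_array('a,b,c', ',');              -- gives {a,b,c}
```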
> Comments?
All the error stuff sounds really neat. I volunteer for doing lots of elog
changes when the time comes.
Would it be possible to do a command line app?
bash$ pg_error 1200D
Severity: ERROR
Message: Division by zero
Detail:
Hint: Modify statement to prevent zeros appearing in denomina
> Great work, Tom!
>
> While we're effectively changing every elog call site in the backend,
> would it also be a good idea to adopt a standard for the format of error
> messages? (e.g. capitalization, grammar, etc.)
I 100% agree with this - a style guide!
Chris
> On Thu, 2003-03-13 at 21:16, Christopher Kings-Lynne wrote:
> > Would it be possible to do a command line app?
> >
> > bash$ pg_error 1200D
> > Severity: ERROR
> > Message: Division by zero
> > Detail:
> > Hint: Modify statement to prevent zero
> > Obviously there is cost, but doing a lookup only on demand, has got to be
> > cheaper in the long run than including the entire column definition in the
> > message whether it's wanted or not?
>
> So if there are 100 fields, should we ask the backend
> the column name 100 times ?
No, you do a
> > I suggested using names to Tom for this reason, but he preferred to use
> > attrelid/attnum.
>
> Oh, and what happened to the attlognum idea? If something that needs
> it is going to be implemented the column should probably be added now
> and used instead of attnum.
Well, it'd be nice, b
Having a little problem with 7.3's tsearch:
usa=# select brand,description, ftiidx from food_foods where description
ilike '%frapp%';
brand | description
| ftiidx
---+---
> the command:
>
> createlang --pglib=/usr/local/pgsql/lib --dbname=apache_auth
> plpgsql
Try just this:
createlang plpgsql apache_auth
Chris
> Anyway, on to MySQL. They had a booth there. I asked their technical guy
> a few questions and he seemed to have a decent understanding. When I
> asked the question, "Why would I choose MySQL over something like
> PostgreSQL?" his response was "There is one company driving MySQL. Also,
> when
I keep getting this:
2003-03-20 08:15:49 WARNING: Rel users_sessions: Uninitialized page 3195 -
fixing
2003-03-20 08:15:49 WARNING: Rel users_sessions: Uninitialized page 3196 -
fixing
2003-03-20 08:15:49 WARNING: Rel users_sessions: Uninitialized page 3197 -
fixing
2003-03-20 08:15:49 WARNING:
Does the new ALTER TABLE / CLUSTER ON syntax support the ONLY modifier? It
isn't documented if it does. I guess it's not really relevant, is it?
Chris
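For reference, the syntax under discussion looks like this (table and index names are illustrative):

```sql
ALTER TABLE bookings CLUSTER ON bookings_pkey;
-- A later release also added a way to clear the marking:
ALTER TABLE bookings SET WITHOUT CLUSTER;
```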
I just managed to break the CLUSTER ON patch:
test=# create table test (a int4 primary key, b int4 unique, c int4);
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'test_pkey'
for table 'test'
NOTICE: CREATE TABLE / UNIQUE will create implicit index 'test_b_key' for
table 'test'
NO
This behaviour I find unusual:
usa=# explain analyze select user_id, plan_next from users_profiles where
plan_next = '2003-01-01';
QUERY PLAN
-
> Surely the planner is aware that '2003-01-01'::date - interval '1 week' is a
> constant???
Actually, turns out that the planner was smarter than me I think. 2003-01-01
occurs very rarely in the system, but other dates occupy 1/7th of the table,
so it's not so easy to plan...
Chris
> "Christopher Kings-Lynne" <[EMAIL PROTECTED]> writes:
> > I keep getting this:
> > 2003-03-20 08:15:49 WARNING: Rel users_sessions: Uninitialized page 3195 - fixing
>
> Hmm. In 7.2.* I'd have said this was a known problem, but in 7.3.* it
> > select "time"(abstime(timestamp 'now')) from bookings;
> > select "time"(timestamp 'now') from bookings;
>
> First of all, thanks, it worked..
>
> And What's so holy about "" if it is a function?
It's really old 7.1 syntax, not supported from 7.2+.
Basically it's because time can now have a
How can it recurse, actually - there won't be an index with the same name
in the subtable?
On Fri, 21 Mar 2003, Alvaro Herrera wrote:
> On Fri, Mar 21, 2003 at 11:21:16AM +0800, Christopher Kings-Lynne wrote:
> > Does the new ALTER TABLE / CLUSTER ON syntax support the ONLY
> I think we should stick with the existing naming convention. The only
> actual problem that's been pointed out here is that an ALTER TABLE
> (or COLUMN) RENAME on a serial column doesn't update the sequence name
> to match. Seems to me we could fix that with less effort than any of
> these solu
I want to fix this bug, however I can't see how the example below is
failing... (Obeys dropped columns) I'm not up with my SRFs, so would
someone be able to post a concise SQL script that demonstrates the failure?
I can see in the code that it should be failing, but I need a demonstrated
example
> (the list of SET expressions in an UPDATE statement;
> section 14.12) allows a on the
> right-hand side of SET assignments. One of the possibilities for a
> is DEFAULT (section 6.5).
>
> In other words, this syntax should be legal:
>
> UPDATE sometable SET somecolumn = DEFAULT;
>
> but it does
> Uh, why exactly do you think this is race-free?
>
> It looks fancy, but AFAICS the SELECT will return info that is correct
> as of its starting timestamp; which is not enough to guarantee that the
> INSERT won't conflict with another transaction doing the same thing
> concurrently.
How about:
I seem to be getting this:
gmake[3]: Entering directory `/home/chriskl/pgsql-temp/src/backend/parser'
bison -y -d gram.y
gram.y:4260.4-4307.10: type clash (`boolean' `keyword') on default action
gram.y:4307.11: parse error, unexpected ":", expecting ";" or "|"
gmake[3]: *** [parse.h] Error 1
Chr
> AFAIK the "except" select won't see other inserts in uncommitted
> transactions. If those transactions are committed you will end up with the
> same problem. You can try it yourself, by manually doing two separate
> transactions in psql.
Yeah, I see that now.
> You either have to lock the whole
> > There are (at least) two distinct problems involved here. One is
> > getting plpgsql to deal correctly with rowtypes that include dropped
> > columns. The other is getting it to react when someone alters a table
> > whose rowtype is relied on by already-compiled functions.
I'm working on thi
1. Did that fix to not allow cluster on partial and non-null indexes get
backpatched?
2. How can I deliberately cause a deadlock in order to test some code?
Chris
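On question 2, one reliable way to manufacture a deadlock for testing is two sessions updating the same pair of rows in opposite order (table name is illustrative):

```sql
-- Setup, once:
CREATE TABLE dl (id integer PRIMARY KEY);
INSERT INTO dl VALUES (1);
INSERT INTO dl VALUES (2);

-- Session A:  BEGIN; UPDATE dl SET id = id WHERE id = 1;
-- Session B:  BEGIN; UPDATE dl SET id = id WHERE id = 2;
-- Session A:  UPDATE dl SET id = id WHERE id = 2;   -- blocks on B
-- Session B:  UPDATE dl SET id = id WHERE id = 1;   -- lock cycle: after
--             deadlock_timeout one session gets "ERROR: deadlock detected"
```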
I'm always getting deadlocks like this:
[30-Mar-2003 19:19:51] PHP Fatal error: postgres7 error: [0: ERROR:
deadlock detected
] in EXECUTE("INSERT INTO users_foods (user_id, date, meal_id, quantity,
eaten, food_id) VALUES ('55283', '2003-04-07', '1', '1.00', 'f', '779')")
in /usr/local/www/gener