Re: [SQL] Create table doesn't work in plpgsql
I believe (although I haven't tried it) that PL/Tcl will allow you to do things such as this.

On Thu, 21 Dec 2000, Volker Paul wrote:
> Hi,
>
> > I don't think you can use DDL (data definition language) in PL/SQL.
> > CREATE TABLE is not DML (data manipulation language); it's DDL.
>
> Thanks, but that leaves me with a problem.
> What I really want to do is something like
>     select str from person where id=1234;
> where str is a string that contains an expression like
>     famname || ', ' || givname
> i.e. the final select is
>     select famname || ', ' || givname from person where id=1234;
> I know it's possible by building the select e.g. in bash
> and calling psql with it as an argument, but do you see a possibility
> that is closer to Postgres, e.g. in plpgsql?
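[A minimal PL/Tcl sketch of the idea above, assuming the pltcl language has been installed (e.g. with createlang). The function name select_expr is hypothetical; spi_exec interpolates the expression string into the query at run time, which is exactly what plain plpgsql of that era could not do.]

```sql
-- Hypothetical helper: evaluate an arbitrary column expression ($1)
-- against the person row with the given id ($2).
CREATE FUNCTION select_expr(text, integer) RETURNS text AS '
    # Build the query string dynamically and run it via SPI;
    # spi_exec sets a Tcl variable named after the output column alias.
    spi_exec "SELECT $1 AS result FROM person WHERE id = $2"
    return $result
' LANGUAGE 'pltcl';

-- Usage (note the doubled quotes inside the SQL string literal):
SELECT select_expr('famname || '', '' || givname', 1234);
```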
Re: [SQL] Invoice number
> I'm wondering how people create guaranteed sequential numbers - in my case
> for invoice numbers.
>
> - Sequences are not rollback'able.
> - It seems overkill to have a table just for this.
> - What else?

You'll probably need a table (although you may be able to get away with a single table for all the counters you want). The big issue here is locking: a second transaction looking to get a number needs to wait for an earlier transaction that has already gotten a number to either commit or roll back, so the number can be reused if necessary.
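[A sketch of the counter-table approach described above. The table and column names are made up for illustration; the point is that the UPDATE takes a row-level lock, which serializes concurrent callers until COMMIT or ROLLBACK — a rollback automatically makes the number available again, so the sequence stays gap-free.]

```sql
-- One shared table can hold every gap-free counter in the database.
CREATE TABLE counters (
    name  varchar(32) NOT NULL PRIMARY KEY,
    value integer NOT NULL
);
INSERT INTO counters VALUES ('invoice', 0);

-- Inside a transaction: the UPDATE locks the row, blocking other
-- transactions that want an invoice number until we finish.
BEGIN;
UPDATE counters SET value = value + 1 WHERE name = 'invoice';
SELECT value FROM counters WHERE name = 'invoice';
-- ... insert the invoice using the value just read ...
COMMIT;  -- or ROLLBACK, which releases the number for reuse
```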
Re: [SQL] Create table doesn't work in plpgsql
I believe a couple of months back an EXECUTE command was added to plpgsql to allow users to run dynamic SQL statements. So if you get the current development version you should be able to EXECUTE a CREATE TABLE or whatever SQL you like. (I think this is still not in 7.0.3, not sure though.) I'm not sure about the exact syntax, but if you look in past threads for "execute" I'm sure you'll find it.

Have fun. Merry Xmas.

Keith.

At 02:00 PM 22/12/2000 +0100, Volker Paul wrote:
> > Can this be done using tcl or perl?
>
> I'll try them and report what I find out.
>
> V.Paul
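[The EXECUTE usage described above might look like the following sketch. It assumes the 7.1 development version mentioned in the reply; the function and table names are invented for illustration. EXECUTE takes a string, so DDL such as CREATE TABLE can be built at run time, which is exactly what a plain plpgsql statement cannot do.]

```sql
-- Sketch only: requires a plpgsql with EXECUTE (7.1 development at the time).
CREATE FUNCTION make_table(text) RETURNS text AS '
DECLARE
    tabname ALIAS FOR $1;
BEGIN
    -- Build the DDL as a string and hand it to EXECUTE.
    EXECUTE ''CREATE TABLE '' || tabname || '' (id integer)'';
    RETURN ''created '' || tabname;
END;
' LANGUAGE 'plpgsql';

SELECT make_table('mytable');
```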
[SQL] Fw: Optimization recommendations request
Well, this message never made it through, but I managed to answer my own question -- I had never run VACUUM ANALYZE, which caused a table scan instead of an index scan. After running VACUUM ANALYZE the query returns immediately. I would still be interested in any hints to optimize performance for very large tables (>10M tuples).

Thanks,

Joe

> Hello,
>
> I'm working on an application where I need to design for one table to grow
> to an extremely large size. I'm already planning to partition the data into
> multiple tables, and even possibly multiple servers, but even so each table
> may need to grow to the 10 - 15 million tuple range. This table will be used
> for a keyed lookup and it is very important that the query return in well
> under a second. I've done a small test using a dual ppro 200 server with 512
> MB RAM, running RedHat Linux 6.1, and a late November 7.1 pre-beta snapshot
> of PostgreSQL (PGOPTS="-D/opt/pgsql/data -i -B 25000"). I used a test table
> with about 5 million tuples.
>
> Details:
>
> CREATE TABLE foo(
>     guid varchar(20) not null,
>     ks varchar(20) not null
> );
>
> --> insert 5,000,000 rows; both guid and ks are 20-byte SHA-1 hashes
> -- tried this first
> -- create index foo_idx1 on foo(guid);
> -- then tried
> create index foo_idx1 on foo using HASH (guid);
>
> SELECT ks FROM foo WHERE guid = 'f9ee1930f4010ba34cc5ca6efd27d988eb4f921d';
>
> The query currently takes in excess of 40 seconds. I would appreciate any
> suggestions for optimizing to bring this down substantially.
>
> Thanks in advance,
>
> Joe Conway
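[The fix described above can be checked as follows. This is a sketch against the test table from the quoted message: VACUUM ANALYZE refreshes the planner's statistics after the bulk load, and EXPLAIN shows whether the planner now picks the index scan instead of a sequential scan over all 5 million rows.]

```sql
-- Refresh planner statistics so the index on guid becomes attractive.
VACUUM ANALYZE foo;

-- Inspect the chosen plan: an Index Scan here means the fix worked,
-- a Seq Scan means the planner still thinks a full table scan is cheaper.
EXPLAIN
SELECT ks FROM foo WHERE guid = 'f9ee1930f4010ba34cc5ca6efd27d988eb4f921d';
```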