Re: [ADMIN] relation does not exist
Try select * from "Dx"; If you want to keep case (as opposed to the normal lower-casing) you'll need to double-quote the name.

On Fri, 23 Feb 2001 [EMAIL PROTECTED] wrote:

> I am trying to convert a MS SQL database to postgres. I have done it
> this way:
>
> 1) use Access -> link to MS SQL
> 2) install postgres ODBC driver
> 3) use Access -> copy -> ODBC compliant database.
> 4) set up the database in postgres
> 5) do the copy from Access.
>
> The file Dx for the table Dx is created in the base/sw_codes database. The
> file contains the data, so it is being copied correctly by Access to
> postgres.
>
> However I cannot access the table from psql. I get the following
> message:
>
> Welcome to psql, the PostgreSQL interactive terminal.
>
> Type: \copyright for distribution terms
>       \h for help with SQL commands
>       \? for help on internal slash commands
>       \g or terminate with semicolon to execute query
>       \q to quit
>
> sw_codes=# select * from Dx;
> ERROR:  Relation 'dx' does not exist
>
> sw_codes=# \dt
>     List of relations
>  Name |  Type | Owner
> ------+-------+----------
>  Dx   | table | rshepard
> (1 row)
>
> Can someone help?
>
> postgres v 7.03
> odbc v 6.5 (can't find odbc v 7.x)
>
> thanks
>
> bob shepard
> [EMAIL PROTECTED]
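The lower-casing rule can be demonstrated without a server; a minimal sketch (the psql lines in the comments are illustrative, not from the thread):

```shell
# PostgreSQL folds unquoted identifiers to lower case before lookup, roughly:
folded=$(printf '%s' 'Dx' | tr 'A-Z' 'a-z')
echo "$folded"    # prints: dx -- so "select * from Dx" looks for relation 'dx'
# Double-quoting preserves the mixed-case name the ODBC copy created:
#   sw_codes=# select * from "Dx";
```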
Re: [ADMIN] Re: what means "INSERT xxx yyy" ?
mike wrote:
>
> > Hi!
> >
> > Does anyone know what means, after an INSERT for example, the message:
> >
> > INSERT 19331808 1
> >
> > What is the meaning of the two numbers?
> >
> > I had a problem with a DB because I reached the max transaction ID.
> >
> > So, how could I know what the maximum is? And what is the current
> > transaction ID? Is it the first number?
> >
> > Thank you
>
> INSERT 19331808 1
>
> That just tells you what you did ("INSERT") and it displays the oid for that
> insert ("19331808"); as for the 1, uncertain what it is, but it's always there.
>
> Mike

The one is the number of records inserted.
Re: [HACKERS] Re: [ADMIN] v7.1b4 bad performance
On Fri, Feb 23, 2001 at 01:09:37PM +0200, Hannu Krosing wrote:
> Dmitry Morozovsky wrote:
>
> DM> I just did the experiment with increasing HZ to 1000 on my own machine
> DM> (PII 374). Your test program reports 2 ms instead of 20. The other side
> DM> of increasing HZ is surely more overhead to the scheduler system. Anyway, it's
> DM> a bit of data to dig into, I suppose ;-)
> DM>
> DM> Results for pgbench with 7.1b4: (BTW, machine is FreeBSD 4-stable on IBM
> DM> DTLA IDE in ATA66 mode with tag queueing and soft updates turned on)
>
> Is this unmodified pgbench, or has it Hiroshi's tweaked behaviour of
> connecting each client to its own database, so that locking and such
> does not shade the possible benefits (was it about 15%?) of delay > 1?
>
> Also, IIRC Tom suggested running with at least -B 1024 if you can.

Just try this:

	explain select * from <table> where <field> = <value>

(Use an indexed field for the fieldname.) If postgres is using a sequential scan instead of an index scan, you have to vacuum your database. This will REALLY remove deleted data from your indexes.

Hope it will work,

Dave Mertens
System Administrator ISM, Netherlands
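Dave's check can be run from the command line against a live server; a sketch only (the database, table, and column names here are hypothetical):

```shell
# Show the plan for a query on an indexed column; if the output says
# "Seq Scan" rather than "Index Scan", the planner is not using the index.
psql mydb -c 'EXPLAIN SELECT * FROM accounts WHERE aid = 42;'

# Vacuum (and refresh planner statistics) as Dave suggests:
psql mydb -c 'VACUUM ANALYZE;'
```

Both commands assume a running postmaster, so they are shown as a fragment rather than a runnable script.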
[ADMIN] Re: [GENERAL] problem while compiling user c functions in 7.1beta4
On Thu, 22 Feb 2001 20:28, hubert depesz lubaczewski wrote:
> since in 7.1beta4 there is no postgres.h i changed this to:

I did a cvsup update about 12 hours ago and look:

22:05:23 chris@berty:/usr/src/cvs/pgsql $ find . -name postgres.h
./src/include/postgres.h
22:16:22 chris@berty:/usr/src/cvs/pgsql $

--
Sincerely etc.,
 Christopher Sawtell
 CELL PHONE 021 257 4451
 ICQ UIN 45863470
 EMAIL csawtell @ xtra . co . nz
 CNOTES ftp://ftp.funet.fi/pub/languages/C/tutorials/sawtell_C.tar.gz

-->> Please refrain from using HTML or WORD attachments in e-mails to me <<--
[ADMIN] HELP: m$ access -> psql howto ?
Hi,

I cannot use any kind of ODBC because my customers have their local M$ Access DBs locally, then export them to .txt with tab or '|' separators, then put them on my server through ftp. This is working OK, except that the customers have Spanish databases, so a row like:

--DATE---NAME---------LANG-------
1/6/2000|Ferran Adrià|Castellano|

when sent through ftp to my server is converted to:

--DATE---NAME------------LANG-------
1/6/2000|Ferran Adri\xe0|Castellano|

so when imported into PostgreSQL with:

COPY products FROM '/var/lib/postgres/iii2.txt' USING DELIMITERS '|' \g

--DATE---NAME---LANG--
1/6/2000|Ferran Adri\xe0|Castellano|NULL

it ends up in the same cell, ignoring the '|' completely. In 'postmaster.init' I have:

LANG=es_ES

but it doesn't work... Using tabs as separators also causes the same problem... Any pointers to solve this will be really appreciated.

The other problem is that if an M$ Access database has a carriage return in a text cell, the import also fails.

Bests from Barcelona,
teixi.
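One workaround worth trying (not from the thread; a sketch assuming the exported file is Latin-1 encoded, where 'à' is the single byte 0xe0/\340) is to convert the file to the database's encoding before the COPY:

```shell
# Simulate an exported Latin-1 line: \340 is the octal escape for 0xe0 ('à').
printf '1/6/2000|Ferran Adri\340|Castellano|\n' > iii2.txt

# Convert it to UTF-8 (or to whatever encoding the database expects)
# before running COPY, so the delimiter parsing sees clean text:
iconv -f LATIN1 -t UTF-8 iii2.txt > iii2.utf8.txt
cat iii2.utf8.txt    # prints: 1/6/2000|Ferran Adrià|Castellano|
```

The file name and target encoding are assumptions; the point is only that the byte 0xe0 survives the conversion as a proper character instead of being escaped by the transfer.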
[ADMIN] lock the database
Hello,

I am using PostgreSQL 7.0.2. Is it necessary to lock the database before backing it up (using pg_dump)? If not, will the database be corrupted when someone is accessing it during pg_dump processing?

Thanks in advance!

Maggie
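For reference, no explicit lock should be needed: pg_dump reads the database within a single transaction, so it sees a consistent snapshot even while other sessions are active. A typical invocation (the database name 'mydb' is hypothetical; the question does not name one):

```shell
# Dump the whole database to a script file; concurrent access is fine.
pg_dump mydb > mydb.backup.sql

# Restore into a fresh database later:
#   createdb mydb
#   psql mydb < mydb.backup.sql
```

Shown as a command fragment since it requires a running postmaster.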
[ADMIN] relation does not exist
I am trying to convert a MS SQL database to postgres. I have done it this way:

1) use Access -> link to MS SQL
2) install postgres ODBC driver
3) use Access -> copy -> ODBC compliant database.
4) set up the database in postgres
5) do the copy from Access.

The file Dx for the table Dx is created in the base/sw_codes database. The file contains the data, so it is being copied correctly by Access to postgres.

However I cannot access the table from psql. I get the following message:

Welcome to psql, the PostgreSQL interactive terminal.

Type: \copyright for distribution terms
      \h for help with SQL commands
      \? for help on internal slash commands
      \g or terminate with semicolon to execute query
      \q to quit

sw_codes=# select * from Dx;
ERROR:  Relation 'dx' does not exist

sw_codes=# \dt
    List of relations
 Name |  Type | Owner
------+-------+----------
 Dx   | table | rshepard
(1 row)

Can someone help?

postgres v 7.03
odbc v 6.5 (can't find odbc v 7.x)

thanks

bob shepard
[EMAIL PROTECTED]
Re: [HACKERS] Re: [ADMIN] v7.1b4 bad performance
Tom Lane wrote:
>
> > platform) i686-pc-linux-gnu, compiled by GCC egcs-2.91.60 (turbolinux 4.2)
> > min delay) 10msec according to your test program.
> > -B) 64 (all other settings are default)
>
> Thanks.  Could I trouble you to run it again with a larger -B, say
> 1024 or 2048?  What I've found is that at -B 64, the benchmark is
> so constrained by limited buffer space that it doesn't reflect
> performance at a more realistic production setting.

Hmm, the result doesn't seem that obvious. First I got the following result.

[CommitDelay=0]
1) tps = 23.024648 (including connections establishing)
   tps = 23.856420 (excluding connections establishing)
2) tps = 30.276270 (including connections establishing)
   tps = 30.996459 (excluding connections establishing)

[CommitDelay=1]
1) tps = 23.065921 (including connections establishing)
   tps = 23.866029 (excluding connections establishing)
2) tps = 34.024632 (including connections establishing)
   tps = 35.671566 (excluding connections establishing)

The results seem inconstant, and after disabling the checkpoint process I got the following.

[CommitDelay=0]
1) tps = 24.060970 (including connections establishing)
   tps = 24.416851 (excluding connections establishing)
2) tps = 21.361134 (including connections establishing)
   tps = 21.605583 (excluding connections establishing)
3) tps = 20.377635 (including connections establishing)
   tps = 20.646523 (excluding connections establishing)

[CommitDelay=1]
1) tps = 22.164379 (including connections establishing)
   tps = 22.790772 (excluding connections establishing)
2) tps = 22.719068 (including connections establishing)
   tps = 23.040485 (excluding connections establishing)
3) tps = 24.341675 (including connections establishing)
   tps = 25.869479 (excluding connections establishing)

Unfortunately I have no more time to check today. Please check the similar test case.

[My test case]
I created and initialized 10 databases as follows.

1) create databases.
   createdb inoue1
   createdb inoue2
   ...
   createdb inoue10
2) pgbench -i inoue1
   pgbench -i inoue2
   ...
   pgbench -i inoue10
3) invoke a modified pgbench
   pgbench -c 10 -t 100 inoue

I've attached a patch to change pgbench so that each connection connects to a different database whose name is the specified database name with a number appended.

Regards,
Hiroshi Inoue

Index: pgbench.c
===================================================================
RCS file: /home/cvs/pgcurrent/contrib/pgbench/pgbench.c,v
retrieving revision 1.1
diff -c -r1.1 pgbench.c
*** pgbench.c	2001/02/20 07:55:21	1.1
--- pgbench.c	2001/02/20 09:31:13
***************
*** 540,545 ****
--- 540,546 ----
  	PGconn	   *con;
  	PGresult   *res;
+ 	char		dbn[48];

  	while ((c = getopt(argc, argv, "ih:nvp:dc:t:s:S")) != EOF)
  	{
***************
*** 639,645 ****
  	}

  	/* opening connection... */
! 	con = PQsetdb(pghost, pgport, NULL, NULL, dbName);
  	if (PQstatus(con) == CONNECTION_BAD)
  	{
  		fprintf(stderr, "Connection to database '%s' failed.\n", dbName);
--- 640,648 ----
  	}

  	/* opening connection... */
! 	/*con = PQsetdb(pghost, pgport, NULL, NULL, dbName);*/
! 	sprintf(dbn, "%s1", dbName);
! 	con = PQsetdb(pghost, pgport, NULL, NULL, dbn);
  	if (PQstatus(con) == CONNECTION_BAD)
  	{
  		fprintf(stderr, "Connection to database '%s' failed.\n", dbName);
***************
*** 726,732 ****
  	/* make connections to the database */
  	for (i = 0; i < nclients; i++)
  	{
! 		state[i].con = PQsetdb(pghost, pgport, NULL, NULL, dbName);
  		if (PQstatus(state[i].con) == CONNECTION_BAD)
  		{
  			fprintf(stderr, "Connection to database '%s' failed.\n", dbName);
--- 729,737 ----
  	/* make connections to the database */
  	for (i = 0; i < nclients; i++)
  	{
! 		/*state[i].con = PQsetdb(pghost, pgport, NULL, NULL, dbName);*/
! 		sprintf(dbn, "%s%d", dbName, i + 1);
! 		state[i].con = PQsetdb(pghost, pgport, NULL, NULL, dbn);
  		if (PQstatus(state[i].con) == CONNECTION_BAD)
  		{
  			fprintf(stderr, "Connection to database '%s' failed.\n", dbName);
RE: [HACKERS] Re: [ADMIN] v7.1b4 bad performance
> -----Original Message-----
> From: Tom Lane
>
> Hannu Krosing <[EMAIL PROTECTED]> writes:
> > Is this unmodified pgbench or has it Hiroshi's tweaked behaviour of
> > connecting each client to its own database, so that locking and such
> > does not shade the possible benefits (was it about 15%?) of delay > 1
>
> I didn't much like that approach to altering the test, since it also
> means that all the clients are working with separate tables and hence
> not able to share read I/O; that doesn't seem like it's the same
> benchmark at all.

I agree with you on this point. Generally speaking, a benchmark has little meaning if it has no conflicts in the test case. I only borrowed pgbench's source code to implement my test cases. Note that there's only one database in my last test case. My modified "pgbench" isn't a pgbench any more, and I didn't intend to change pgbench's spec like that. Probably it was my mistake that I posted my test cases in the form of a patch. My intention was to clarify the difference of my test cases.

However, heavy conflicts with scaling factor 1 don't seem preferable, at least as the default for pgbench.

Regards,
Hiroshi Inoue
RE: [ADMIN] select * from pgadmin_users; causes error
> -----Original Message-----
> From: Tom Lane [mailto:[EMAIL PROTECTED]]
> Sent: 23 February 2001 21:07
> To: Dave Page
> Cc: [EMAIL PROTECTED]; 'PostgreSQL Admin News'
> Subject: Re: [ADMIN] select * from pgadmin_users; causes error
>
> Dave Page <[EMAIL PROTECTED]> writes:
> > Basically it isn't so much the user ID that created
> > the views that is an issue, it's the fact that pgAdmin
> > then didn't issue a
> > 'GRANT ALL ON pgadmin_users TO PUBLIC'.
>
> The particular case being complained of here would not be
> fixed by that.

What exactly was the problem? I must have missed an earlier part of the thread, as I don't read pgsql-admin - I barely get time to read ODBC & Interfaces :-(

Regards, Dave.
RE: [ADMIN] select * from pgadmin_users; causes error
> -----Original Message-----
> From: Tom Lane [mailto:[EMAIL PROTECTED]]
> Sent: 23 February 2001 15:23
> To: [EMAIL PROTECTED]
> Cc: 'PostgreSQL Admin News'; Dave Page
> Subject: Re: [ADMIN] select * from pgadmin_users; causes error
>
> John Hatfield <[EMAIL PROTECTED]> writes:
> > It looks as though the views are created the first time you login to
> > pgAdmin. So when I logged in first as an ordinary user, not as postgres
> > (the database superuser), the views were created with this user as the owner.
>
> > A trap for beginners!!
>
> Indeed.  I wonder whether pgadmin actually needs to see the password
> column.  If not, perhaps it could make a view of pg_user instead of
> pg_shadow to avoid this problem.  If it does, selecting directly from
> pg_shadow with no view seems like the most reliable way...
>
> 			regards, tom lane

pgAdmin uses the view for a couple of reasons:

1) It is one of a set of views that are created and upgraded as required by pgAdmin, to simplify updating pgAdmin if the PostgreSQL system tables change from version to version (there are loads of queries in pgAdmin, so upgrading for a new release is far easier if I just have to update one set of views).

2) pgadmin_users includes the OID of the user tuple in pg_user, which is not in pg_shadow.

The problem in question has been fixed in the current dev code of pgAdmin (it has been noted as an issue by a couple of other users), soon to be released as v7.1.0. Basically it isn't so much the user ID that created the views that is an issue, it's the fact that pgAdmin then didn't issue a 'GRANT ALL ON pgadmin_users TO PUBLIC'.

It may help to know that all pgAdmin-created SSOs (we dubbed them Server Side Objects 'cos it sounded good!) can be dropped and recreated using the options on the Advanced Menu. This may be useful when preparing a finished database for release to a customer, for example.

Regards, Dave.
Re: [HACKERS] Re: [ADMIN] v7.1b4 bad performance
> I didn't much like that approach to altering the test, since it also
> means that all the clients are working with separate tables and hence
> not able to share read I/O; that doesn't seem like it's the same
> benchmark at all.  What would make more sense to me is to increase the
> number of rows in the branches table.
>
> Right now, at the default "scale factor" of 1, pgbench makes tables of
> these sizes:
>
> 	accounts	10
> 	branches	1
> 	history		0	(filled during test)
> 	tellers		10
>
> It seems to me that the branches table should have at least 10 to 100
> entries, and tellers about 10 times whatever branches is.  10
> accounts rows seems enough though.

Those numbers are defined in the TPC-B spec. But pgbench is not an official test tool anyway, so you could modify it if you like. That is the benefit of open source :-)
--
Tatsuo Ishii
RE: [ADMIN] select * from pgadmin_users; causes error
> -----Original Message-----
> From: Tom Lane [mailto:[EMAIL PROTECTED]]
> Sent: 23 February 2001 22:29
> To: Dave Page
> Cc: [EMAIL PROTECTED]; 'PostgreSQL Admin News'
> Subject: Re: [ADMIN] select * from pgadmin_users; causes error
>
> Dave Page <[EMAIL PROTECTED]> writes:
> >>> Basically it isn't so much the user ID that created
> >>> the views that is an issue, it's the fact that pgAdmin
> >>> then didn't issue a
> >>> 'GRANT ALL ON pgadmin_users TO PUBLIC'.
> >>
> >> The particular case being complained of here would not be
> >> fixed by that.
>
> > What exactly was the problem?
>
> The problem was that pgadmin was first started in a particular database
> by a non-privileged user.  So it created the pgadmin_users view as owned
> by that non-privileged user.  Then the view does not work, even for the
> superuser, because its attempt to access pg_shadow is checked under the
> permissions of its owner, not of the invoker.

Hmm, yes. I've just been playing with it and that is exactly the case.

> Deleting and recreating the view (to make it owned by the superuser)
> will fix this problem, and I'm glad to hear that pgadmin provides a
> reasonably painless way to do that.  But the average Joe isn't going to
> realize what the problem is or what he has to do to fix it.  It'd be
> better if the problem couldn't occur in the first place.  If you really
> need a view on pg_shadow, can you postpone creating it until you are run
> as superuser?

That shouldn't be a major problem; I'll look into it for the next release.

Regards, Dave.
Re: [HACKERS] Re: [ADMIN] v7.1b4 bad performance
> Okay, plan B then: let's ask people to redo their benchmarks with
> -s bigger than one.  Now, how much bigger?
>
> To the extent that you think this is a model of a real bank, it should
> be obvious that the number of concurrent transactions cannot exceed the
> number of tellers; there should never be any write contention on a
> teller's table row, because only that teller (client) should be issuing
> transactions against it.  Contention on a branch's row is realistic,
> but not from more clients than there are tellers in the branch.
>
> As a rule of thumb, then, we could say that the benchmark's results are
> not to be believed for numbers of clients exceeding perhaps 5 times the
> scale factor, ie, half the number of teller rows (so that it's not too
> likely we will have contention on a teller row).

At least -s 5 seems reasonable to me too. Maybe we should make it the default setting for pgbench?
--
Tatsuo Ishii
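The rule of thumb above can be applied like this (the database name 'bench' is hypothetical; -s sets the scale factor at initialization time, giving 5 branch rows and 50 teller rows here):

```shell
createdb bench
pgbench -i -s 5 bench        # initialize with scale factor 5
pgbench -c 10 -t 100 bench   # 10 clients <= 5 * scale factor, per Tom's rule
```

Shown as a command fragment since it requires a running postmaster.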
Re: [ADMIN] select * from pgadmin_users; causes error
Dave Page <[EMAIL PROTECTED]> writes:
>>> Basically it isn't so much the user ID that created
>>> the views that is an issue, it's the fact that pgAdmin
>>> then didn't issue a
>>> 'GRANT ALL ON pgadmin_users TO PUBLIC'.
>>
>> The particular case being complained of here would not be
>> fixed by that.

> What exactly was the problem?

The problem was that pgadmin was first started in a particular database
by a non-privileged user.  So it created the pgadmin_users view as owned
by that non-privileged user.  Then the view does not work, even for the
superuser, because its attempt to access pg_shadow is checked under the
permissions of its owner, not of the invoker.

Deleting and recreating the view (to make it owned by the superuser)
will fix this problem, and I'm glad to hear that pgadmin provides a
reasonably painless way to do that.  But the average Joe isn't going to
realize what the problem is or what he has to do to fix it.  It'd be
better if the problem couldn't occur in the first place.  If you really
need a view on pg_shadow, can you postpone creating it until you are run
as superuser?

			regards, tom lane
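The manual fix Tom describes (re-own the view by dropping and recreating it as the superuser) can be sketched roughly as follows; the view body shown is a hypothetical stand-in, since pgAdmin's real pgadmin_users definition is not given in the thread:

```shell
# Run as the database superuser so the recreated view is owned by it;
# the SELECT list here is a guess at what pgAdmin's view might contain.
psql -U postgres mydb -c 'DROP VIEW pgadmin_users;'
psql -U postgres mydb -c 'CREATE VIEW pgadmin_users AS SELECT * FROM pg_shadow;'
```

Shown as a command fragment since it requires a running postmaster; in practice, using pgAdmin's own Advanced Menu option to recreate its Server Side Objects (as the superuser) is the supported route.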
Re: [ADMIN] select * from pgadmin_users; causes error
Dave Page <[EMAIL PROTECTED]> writes:
> Basically it isn't so much the user ID that created
> the views that is an issue, it's the fact that pgAdmin then didn't issue a
> 'GRANT ALL ON pgadmin_users TO PUBLIC'.

The particular case being complained of here would not be fixed by that.

			regards, tom lane
Re: [HACKERS] Re: [ADMIN] v7.1b4 bad performance
Tom Lane wrote:
>
> "Hiroshi Inoue" <[EMAIL PROTECTED]> writes:
> >> Hmm, you mean you set up a separate test database for each pgbench
> >> "client", but all under the same postmaster?
>
> > Yes. A different database is to make the conflict as small as possible.
> > The conflict among backends is the greatest enemy of CommitDelay.
>
> Okay, so this errs in the opposite direction from the original form of
> the benchmark: there will be *no* cross-backend locking delays, except
> for accesses to the common WAL log.  That's good as a comparison point,
> but we shouldn't trust it absolutely either.

Of course it's only one of the test cases. Because I've only ever seen one-sided test cases, I had to provide this test case unwillingly. There are some obvious cases where CommitDelay is harmful, and I've seen no test case other than such cases, i.e.

1) There's only one session.
2) The backends always conflict (e.g. pgbench with scaling factor 1).

> >> What platform is this on --- in particular, how long a delay
> >> is CommitDelay=1 in reality?  What -B did you use?
>
> > platform) i686-pc-linux-gnu, compiled by GCC egcs-2.91.60 (turbolinux 4.2)
> > min delay) 10msec according to your test program.
> > -B) 64 (all other settings are default)
>
> Thanks.  Could I trouble you to run it again with a larger -B, say
> 1024 or 2048?  What I've found is that at -B 64, the benchmark is
> so constrained by limited buffer space that it doesn't reflect
> performance at a more realistic production setting.

OK, I will try it later, though I'm not sure I can increase -B that much in my current environment.

Regards,
Hiroshi Inoue
RE: [HACKERS] Re: [ADMIN] v7.1b4 bad performance
> -----Original Message-----
> From: Tom Lane [mailto:[EMAIL PROTECTED]]
>
> Hiroshi Inoue <[EMAIL PROTECTED]> writes:
> > I changed pgbench so that each connection connects
> > to a different database and got the following results.
>
> Hmm, you mean you set up a separate test database for each pgbench
> "client", but all under the same postmaster?

Yes. A different database is to make the conflict as small as possible. The conflict among backends is the greatest enemy of CommitDelay.

> > The results of
> > pgbench -c 10 -t 100
>
> > [CommitDelay=0]
> > 1st) tps = 18.484611 (including connections establishing)
> >      tps = 19.827988 (excluding connections establishing)
> > 2nd) tps = 18.754826 (including connections establishing)
> >      tps = 19.352268 (excluding connections establishing)
> > 3rd) tps = 18.771225 (including connections establishing)
> >      tps = 19.261843 (excluding connections establishing)
> > [CommitDelay=1]
> > 1st) tps = 20.317649 (including connections establishing)
> >      tps = 20.975151 (excluding connections establishing)
> > 2nd) tps = 24.208025 (including connections establishing)
> >      tps = 24.663665 (excluding connections establishing)
> > 3rd) tps = 25.821156 (including connections establishing)
> >      tps = 26.842741 (excluding connections establishing)
>
> What platform is this on --- in particular, how long a delay
> is CommitDelay=1 in reality?  What -B did you use?

platform) i686-pc-linux-gnu, compiled by GCC egcs-2.91.60 (turbolinux 4.2)
min delay) 10msec according to your test program.
-B) 64 (all other settings are default)

Regards,
Hiroshi Inoue
Re: [HACKERS] Re: [ADMIN] v7.1b4 bad performance
Tom Lane wrote:
>
> Hiroshi Inoue <[EMAIL PROTECTED]> writes:
> > In your test cases I always see "where bid = 1" at "update branches",
> > i.e.
> > 	update branches set bbalance = bbalance + ... where bid = 1
>
> > ISTM there's no multiple COMMIT in your scenarios due to
> > their lock conflicts.
>
> Hmm.  It looks like using a 'scaling factor' larger than 1 is necessary
> to spread out the updates of "branches".  AFAIR, the people who reported
> runs with scaling factors > 1 got pretty much the same results though.

People seem to believe your results are decisive and would cite your results if the evidence is required. All clients of pgbench execute the same sequence of queries. There could be various conflicts, e.g. ordinary lock, buffer lock, IO spinlock... I've been suspicious whether pgbench is a (unique) appropriate test case for evaluating commit_delay.

Regards,
Hiroshi Inoue
Re: [ADMIN] v7.0.3 Regress Tests Errors
Hi Tom,

I reran the tests setting PGHOST=localhost. Same errors. I do not know about broken Unix domain sockets. How can I check/fix that?

Cheers,
Paul

Tom Lane wrote:
>
> Paul Huppe <[EMAIL PROTECTED]> writes:
> > It was actually Peter Eisentraut who asked me to post to the list. He
> > wanted to see the .diff file because he thought that I had a problem with
> > broken Unix domain sockets.
>
> Hm, I should think that broken sockets would lead to total non function,
> rather than selected (even if extensive) errors.
>
> But, if you're suspicious of that, try running the tests with
> environment variable PGHOST set to localhost, so that TCP connections
> will be used (and make sure you started the postmaster with -i of
> course).
>
> > Platform: Sun SPARCserver 10
> > OS: Solaris 8
> > Compiler: gcc 2.95.2
> > Configuration Options: none, used defaults
>
> That doesn't sound particularly risky.  Odd ...
>
> 			regards, tom lane
Re: [HACKERS] Re: [ADMIN] v7.1b4 bad performance
Tom Lane wrote:
>
> Hiroshi Inoue <[EMAIL PROTECTED]> writes:
> > I've been suspicious whether pgbench is a (unique)
> > appropriate test case for evaluating commit_delay.
>
> Of course it isn't.  Never trust only one benchmark.
>
> I've asked the Great Bridge folks to run their TPC-C benchmark with both
> zero and small nonzero commit_delay.  It will be a couple of days before
> we have the results, however.  Can anyone else offer any comparisons
> based on other multiuser benchmarks?

I changed pgbench so that each connection connects to a different database and got the following results.

The results of pgbench -c 10 -t 100:

[CommitDelay=0]
1st) tps = 18.484611 (including connections establishing)
     tps = 19.827988 (excluding connections establishing)
2nd) tps = 18.754826 (including connections establishing)
     tps = 19.352268 (excluding connections establishing)
3rd) tps = 18.771225 (including connections establishing)
     tps = 19.261843 (excluding connections establishing)

[CommitDelay=1]
1st) tps = 20.317649 (including connections establishing)
     tps = 20.975151 (excluding connections establishing)
2nd) tps = 24.208025 (including connections establishing)
     tps = 24.663665 (excluding connections establishing)
3rd) tps = 25.821156 (including connections establishing)
     tps = 26.842741 (excluding connections establishing)

Regards,
Hiroshi Inoue
Re: [HACKERS] Re: [ADMIN] v7.1b4 bad performance
Tom Lane wrote:
>
> I wrote:
> > Thus, our past arguments about whether a few microseconds of delay
> > before commit are a good idea seem moot; we do not have any portable way
> > of implementing that, and a ten millisecond delay for commit is clearly
> > Not Good.
>
> I've now finished running a spectrum of pgbench scenarios, and I find
> no case in which commit_delay = 0 is worse than commit_delay > 0.
> Now this is just one benchmark on just one platform, but it's pretty
> damning...

In your test cases I always see "where bid = 1" at "update branches", i.e.

	update branches set bbalance = bbalance + ... where bid = 1

ISTM there are no multiple COMMITs in your scenarios due to their lock conflicts.

Regards,
Hiroshi Inoue
Re: [ADMIN] v7.0.3 Regress Tests Errors
Paul Huppe <[EMAIL PROTECTED]> writes:
> It was actually Peter Eisentraut who asked me to post to the list. He
> wanted to see the .diff file because he thought that I had a problem with
> broken Unix domain sockets.

Hm, I should think that broken sockets would lead to total non function, rather than selected (even if extensive) errors.

But, if you're suspicious of that, try running the tests with environment variable PGHOST set to localhost, so that TCP connections will be used (and make sure you started the postmaster with -i of course).

> Platform: Sun SPARCserver 10
> OS: Solaris 8
> Compiler: gcc 2.95.2
> Configuration Options: none, used defaults

That doesn't sound particularly risky.  Odd ...

			regards, tom lane
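Tom's suggestion looks roughly like this from the source tree (the data directory path is an assumption; adjust to the local install):

```shell
# Start the postmaster with TCP connections enabled (-i), then point the
# regression tests at it via PGHOST so Unix-domain sockets are bypassed.
postmaster -i -D /usr/local/pgsql/data &
export PGHOST=localhost
cd src/test/regress
make all && make runtest
```

Shown as a command fragment since it requires a built source tree and a running postmaster.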
Re: [ADMIN] v7.0.3 Regress Tests Errors
> sequential test create_function_1... ./run_check.sh: sql/create_function_1.sql: cannot open
> diff: expected/create_function_1.out: No such file or directory

Now that I look ... did you run 'make all' before 'make runtest' in the regress-test directory?

			regards, tom lane
Re: [ADMIN] v7.0.3 Regress Tests Errors
Hi Tom,

It was actually Peter Eisentraut who asked me to post to the list. He wanted to see the .diff file because he thought that I had a problem with broken Unix domain sockets.

Platform: Sun SPARCserver 10
OS: Solaris 8
Compiler: gcc 2.95.2
Configuration Options: none, used defaults

Paul

Tom Lane wrote:
>
> Paul Huppe <[EMAIL PROTECTED]> writes:
> > I am installing v7.0.3 and just finished compiling the source. I ran
> > the regression tests and I am getting quite a few errors. I have
> > attached the output from the tests.
>
> You did not need to send the list 350K of not-very-interesting test
> diffs in order to report that you have a broken installation.  What
> would have been helpful is some information about your platform,
> compiler, configuration options selected, etc.
>
> 			regards, tom lane
>
> PS: Hey Marc, why didn't majordomo bounce this?  I thought we had
> a 40K posting limit...
Re: [ADMIN] v7.0.3 Regress Tests Errors
Paul Huppe <[EMAIL PROTECTED]> writes:
> I am installing v7.0.3 and just finished compiling the source. I ran
> the regression tests and I am getting quite a few errors. I have
> attached the output from the tests.

You did not need to send the list 350K of not-very-interesting test diffs in order to report that you have a broken installation. What would have been helpful is some information about your platform, compiler, configuration options selected, etc.

			regards, tom lane

PS: Hey Marc, why didn't majordomo bounce this?  I thought we had a 40K posting limit...
Re: [ADMIN] select * from pgadmin_users; causes error
John Hatfield <[EMAIL PROTECTED]> writes:
> It looks as though the views are created the first time you login to
> pgAdmin. So when I logged in first as an ordinary user, not as postgres
> (the database superuser), the views were created with this user as the owner.

> A trap for beginners!!

Indeed.  I wonder whether pgadmin actually needs to see the password column.  If not, perhaps it could make a view of pg_user instead of pg_shadow to avoid this problem.  If it does, selecting directly from pg_shadow with no view seems like the most reliable way...

			regards, tom lane
[ADMIN] v7.0.3 Regress Tests Errors
Hi,

I am installing v7.0.3 and just finished compiling the source. I ran the regression tests and I am getting quite a few errors. I have attached the output from the tests.

Thanks, Paul.

*** expected/create_type.out	Wed Jan  5 12:31:08 2000
--- results/create_type.out	Fri Feb 23 08:14:26 2001
***************
*** 7,12 ****
--- 7,13 ----
     output = widget_out,
     alignment = double
  );
+ ERROR:  TypeCreate: function 'widget_in(opaque)' does not exist
  CREATE TYPE city_budget (
     internallength = 16,
     input = int44in,
--
*** expected/triggers.out	Sat Jan 15 14:18:23 2000
--- results/triggers.out	Fri Feb 23 08:14:46 2001
***************
*** 24,33 ****
--- 24,35 ----
  	for each row execute procedure
  	check_primary_key ('fkey1', 'fkey2', 'pkeys', 'pkey1', 'pkey2');
+ ERROR:  CreateTrigger: function check_primary_key() does not exist
  create trigger check_fkeys_pkey2_exist
  	before insert or update on fkeys
  	for each row execute procedure
  	check_primary_key ('fkey3', 'fkeys2', 'pkey23');
+ ERROR:  CreateTrigger: function check_primary_key() does not exist
  --
  -- For fkeys2:
  -- 	(fkey21, fkey22) --> pkeys (pkey1, pkey2)
***************
*** 37,42 ****
--- 39,45 ----
  	for each row execute procedure
  	check_primary_key ('fkey21', 'fkey22', 'pkeys', 'pkey1', 'pkey2');
+ ERROR:  CreateTrigger: function check_primary_key() does not exist
  --
  -- For pkeys:
  -- 	ON DELETE/UPDATE (pkey1, pkey2) CASCADE:
***************
*** 48,53 ****
--- 51,57 ----
  	execute procedure
  	check_foreign_key (2, 'cascade', 'pkey1', 'pkey2',
  	'fkeys', 'fkey1', 'fkey2', 'fkeys2', 'fkey21', 'fkey22');
+ ERROR:  CreateTrigger: function check_foreign_key() does not exist
  --
  -- For fkeys2:
  -- 	ON DELETE/UPDATE (pkey23) RESTRICT:
***************
*** 57,62 ****
--- 61,67 ----
  	before delete or update on fkeys2
  	for each row
  	execute procedure check_foreign_key (1, 'restrict', 'pkey23', 'fkeys', 'fkey3');
+ ERROR:  CreateTrigger: function check_foreign_key() does not exist
  insert into fkeys2 values (10, '1', 1);
  insert into fkeys2 values (30, '3', 2);
  insert into fkeys2 values (40, '4', 5);
***************
*** 63,69 ****
  insert into fkeys2 values (50, '5', 3);
  -- no key in pkeys
  insert into fkeys2 values (70, '5', 3);
- ERROR:  check_fkeys2_pkey_exist: tuple references non-existing key in pkeys
  insert into fkeys values (10, '1', 2);
  insert into fkeys values (30, '3', 3);
  insert into fkeys values (40, '4', 2);
--- 68,73 ----
***************
*** 70,91 ****
  insert into fkeys values (50, '5', 2);
  -- no key in pkeys
  insert into fkeys values (70, '5', 1);
- ERROR:  check_fkeys_pkey_exist: tuple references non-existing key in pkeys
  -- no key in fkeys2
  insert into fkeys values (60, '6', 4);
- ERROR:  check_fkeys_pkey2_exist: tuple references non-existing key in fkeys2
  delete from pkeys where pkey1 = 30 and pkey2 = '3';
- NOTICE:  check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted
- ERROR:  check_fkeys2_fkey_restrict: tuple referenced in fkeys
  delete from pkeys where pkey1 = 40 and pkey2 = '4';
- NOTICE:  check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted
- NOTICE:  check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted
  update pkeys set pkey1 = 7, pkey2 = '70' where pkey1 = 50 and pkey2 = '5';
- NOTICE:  check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted
- ERROR:  check_fkeys2_fkey_restrict: tuple referenced in fkeys
  update pkeys set pkey1 = 7, pkey2 = '70' where pkey1 = 10 and pkey2 = '1';
! NOTICE:  check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted
! NOTICE:  check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted
  DROP TABLE pkeys;
  DROP TABLE fkeys;
  DROP TABLE fkeys2;
--- 74,86 ----
  insert into fkeys values (50, '5', 2);
  -- no key in pkeys
  insert into fkeys values (70, '5', 1);
  -- no key in fkeys2
  insert into fkeys values (60, '6', 4);
  delete from pkeys where pkey1 = 30 and pkey2 = '3';
  delete from pkeys where pkey1 = 40 and pkey2 = '4';
  update pkeys set pkey1 = 7, pkey2 = '70' where pkey1 = 50 and pkey2 = '5';
  update pkeys set pkey1 = 7, pkey2 = '70' where pkey1 = 10 and pkey2 = '1';
! ERROR:  Cannot insert a duplicate key into unique index pkeys_i
  DROP TABLE pkeys;
  DROP TABLE fkeys;
  DROP TABLE fkeys2;
***************
*** 135,145 ****
--- 130,142 ----
  	for each row
  	execute procedure
  	ttdummy (price_on, price_off);
+ ERROR:  CreateTrigger: function ttdummy() does not exist
  create trigger ttserial
  	before insert or update on tttest
  	for each row
  	execute procedure
  	autoinc (price_on, ttdummy_seq);
+ ERROR:  CreateTrigger: function autoinc() does not exist
  insert into tttest values (1, 1, null);
  insert into tttest values (2, 2, null);
  insert into tttest values (3, 3, 0);
***************
*** 146
Re: [HACKERS] Re: [ADMIN] v7.1b4 bad performance
On Fri, 23 Feb 2001, Hannu Krosing wrote:

HK> > DM> I just done the experiment with increasing HZ to 1000 on my own machine
HK> > DM> (PII 374). Your test program reports 2 ms instead of 20. The other side
HK> > DM> of increasing HZ is surely more overhead to scheduler system. Anyway, it's
HK> > DM> a bit of data to dig into, I suppose ;-)
HK> > DM>
HK> > DM> Results for pgbench with 7.1b4: (BTW, machine is FreeBSD 4-stable on IBM
HK> > DM> DTLA IDE in ATA66 mode with tag queueing and soft updates turned on)
HK>
HK> Is this unmodified pgbench or has it Hiroshi tweaked behaviour of
HK> connecting each client to its own database, so that locking and such
HK> does not shade the possible benefits (was it about 15% ?) of delay>1
HK>
HK> also, IIRC Tom suggested running with at least -B 1024 if you can.

It was the original pgbench. Maybe during this weekend I'll build a new
kernel with a big SHM table and try to test with a larger -B (for now,
-B 256 is the most I can set).

Sincerely,
D.Marck                                     [DM5020, DM268-RIPE, DM3-RIPN]
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- [EMAIL PROTECTED] ***
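The "big SHM table" kernel rebuild mentioned above comes down to raising FreeBSD's SysV shared memory limits so the postmaster can allocate more buffers than -B 256 allows. A hypothetical FreeBSD 4.x kernel config fragment (the option names and values here are assumptions; check the LINT config shipped with the exact release before using them):

```
options         SYSVSHM                 # SysV shared memory support
options         SHMMAXPGS=8192          # max segment size in 4 kB pages (~32 MB)
options         SHMALL=8192             # total shared memory pages allowed
options         SHMSEG=16               # max segments per process
```

After rebuilding and booting the new kernel, a larger -B should be accepted at postmaster start.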
Re: [HACKERS] Re: [ADMIN] v7.1b4 bad performance
Dmitry Morozovsky wrote:
> On Sun, 18 Feb 2001, Dmitry Morozovsky wrote:
>
> DM> I just done the experiment with increasing HZ to 1000 on my own machine
> DM> (PII 374). Your test program reports 2 ms instead of 20. The other side
> DM> of increasing HZ is surely more overhead to scheduler system. Anyway, it's
> DM> a bit of data to dig into, I suppose ;-)
> DM>
> DM> Results for pgbench with 7.1b4: (BTW, machine is FreeBSD 4-stable on IBM
> DM> DTLA IDE in ATA66 mode with tag queueing and soft updates turned on)

Is this the unmodified pgbench, or Hiroshi's tweaked version that connects
each client to its own database, so that locking and such does not shade
the possible benefits (was it about 15%?) of delay > 1?

Also, IIRC Tom suggested running with at least -B 1024 if you can.

- Hannu
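For scale: -B counts shared disk buffers of BLCKSZ bytes each (8 kB is the 7.x default), so the suggested -B 1024 needs on the order of 8 MB of shared memory for buffer space alone, plus some overhead for other shared structures, which is exactly where the default FreeBSD SHM limits get in the way. A quick back-of-envelope check:

```shell
# Rough shared-memory footprint of -B 1024, assuming the default 8 kB block size.
BLCKSZ=8192        # bytes per shared disk buffer (7.x default)
NBUFFERS=1024      # the -B setting Tom suggested
BYTES=$((BLCKSZ * NBUFFERS))
echo "$BYTES bytes = $((BYTES / 1024 / 1024)) MB of buffer space alone"
# prints: 8388608 bytes = 8 MB of buffer space alone
```

The actual segment the postmaster requests is somewhat larger than this, so the kernel limit needs headroom above the raw buffer total.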