Re: [GENERAL] [Off-topic, Meta-question] Problems with psql-general list
Tom Lane wrote:
> Allan Engelhardt <[EMAIL PROTECTED]> writes:
> > 1. Am I the only one who is having problems with the listserver not
> > honouring the NOMAIL option?
>
> Marc said yesterday that he'd been forced to rebuild the subscriber list
> for pgsql-general.  Sounds like your NOMAIL preference got lost in the
> shuffle :-(.  You should be able to reconfigure it the way you want by
> issuing the appropriate command to majordomo.

I've tried that three times over the last week. I tried again on Friday.
I'll try again now, but there is a problem somewhere. I'd really rather
not have to unsubscribe.

Thanks for the answers.

Allan.

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org
Re: [GENERAL] error codes when running pg_dumpall from a perl script.
You'll want to ask on the perl groups, but briefly: you get an illegal
seek when your command writes to the error file descriptor:

  % perl -e '`echo hi 1>&2` or warn "Oops: $! $?";'
  hi
  Oops: Illegal seek 0 at -e line 1.

It's a perl feature, not a bug ;-)  Try

  % man perlop

for self therapy...

Allan.

Simon Crute wrote:
> Hi,
> If this was better posed in a different postgres group, please let me
> know.
>
> I'm having problems when running pg_dumpall from a perl script.
>
> Here's the line that's executing the backup:
>
>   `pg_dumpall -o | bzip2 -z -c > $os_bkup_loc/byt_pgdump_full.bz2` or
>       warn "Error while running probably all OK. $! \n";
>
> During the run the following error is reported:
>
>   "Error while running probably all OK.. Illegal seek"
>
> i.e. the error code reported by $! is "Illegal seek".
>
> I ran a backup manually, and there are no error messages.
> When running the backup from within a perl script (but not sending the
> output through bzip2) the error still occurs, but diffing the output of
> that with the output of pg_dumpall -o when run from the command line
> shows only two minor differences (which seem to be within some object
> IDs). I can post that diff if anyone thinks it may help.
>
> The environment variables PGUSER and PGPASSWORD are set earlier in the
> script.
> I've searched the archives of the mail lists, and found no references
> to illegal seeks WRT running pg_dump.
>
> Anyone got any clues?
>
> --
> Simon Crute
Re: [GENERAL] inet types and LIKE doesn't work as expected
Phil Mayers wrote:
> Try this:
>
> hdbdev=> create table ips ( ip inet, ip_txt varchar(15) );
> hdbdev=> insert into ips (ip,ip_txt) values ('192.168.1.1','192.168.1.1');
> hdbdev=> select * from ips where ip like '192.168.1.1';
>  ip | ip_txt
> ----+--------
> (0 rows)
>
> hdbdev=> select * from ips where ip_txt like '192.168.1.1';
>      ip      |   ip_txt
> -------------+-------------
>  192.168.1.1 | 192.168.1.1
> (1 row)
>
> That seems inconsistent?

The default text conversion for the inet type includes a mask:

  test=# select text(ip) from ips;
        text
  ----------------
   192.168.1.1/32
  (1 row)

so

  select * from ips where ip like '192.168.1.1%';

works as expected. The joys of semantically rich data types :-)

Hope this helps a little.

Allan.
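A related sketch (assuming the `ips` table above, and that the `host()`
network function is available in your PostgreSQL version): rather than
pattern-matching against the masked text form, strip the mask and compare
the host part exactly:

```sql
-- host() returns the address without the /32 mask, so an exact
-- comparison works without a trailing '%' wildcard:
SELECT * FROM ips WHERE host(ip) = '192.168.1.1';
```

This also avoids surprises when the stored value carries a non-/32 mask.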
[GENERAL] Re: Perfomance decreasing
Tom Lane wrote:
> >> I'm doing vacuum periodically (once an hour), but performance
> >> still falls down.
>
> It sounds to me like you may be running into index growth problems.
> VACUUM is presently not good about shrinking indexes.

I always enjoy Tom's comments - he is the master of understatement and
always helpful. In this case, however, I think he may be understating too
much. I read the original question as "PostgreSQL is not useful for
production systems." Call me melodramatic if you like: you are probably
right.

The point, I guess, is this: it would be really useful to have a document
somewhere that honestly described the limitations of (the current version
of) PostgreSQL. Don't use inheritance, don't use on 24x7 systems,
whatever. It doesn't have to be fancy formatting; a brain-dump to a text
file would be excellent. (This is a hint, Tom et al!! :-))

> If you drop
> and recreate the indexes used by your most important queries, does
> the performance go back to where it was?

For what it's worth: I observed a similar issue and found that a dump and
restore of all the databases helped. I haven't tried just recreating the
index. I'll try it out and maybe post a test script to reproduce the
issue. (where?)

---

Allan.
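Tom's drop-and-recreate experiment can be sketched like this (a fragment,
not from the thread; the table and index names are hypothetical, and it
needs a running server):

```sql
-- Rebuild a bloated index from scratch, then refresh the
-- planner statistics for the table:
DROP INDEX clients_tel_idx;
CREATE INDEX clients_tel_idx ON clients (tel);
VACUUM ANALYZE clients;
```

If the queries speed up again afterwards, index growth was the culprit.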
[GENERAL] Re: Would a PostgreSQL database on a DVD be usable?
Lee Harr wrote:
> > (Is there any good reason for this, btw.? The major vendors [Oracle,
> > SQL Server, Sybase] support having logfiles (txn, redo, archive) on
> > separate file systems/devices for reliability and performance.)
>
> My understanding is that you _can_ do this, by shutting down the
> server, moving the files to the places where you want them, and linking
> to them with symlinks from their original locations.

Thanks to Lee and Tom: I found the pg_xlog subdirectory. Now I can go
symlink mad :-)

I still think it should be possible to set this from within the RDBMS
environment. You can use WITH LOCATION on CREATE DATABASE; why not have a
WITH LOGFILE (-DIRECTORY?) option as well, a la Oracle? Presumably there
is no real speed gain, but it would kind of make it symmetric and
obvious.

And, let's face it: for a *production* system, when would you *ever* want
the data and the logs on the same physical disk (or even on the same RAID
channel)? Maybe I'm being overly optimistic on behalf of the
optimizations that PostgreSQL performs here, but if so, think
future-proof :-)

Allan.
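The symlink trick Lee describes can be sketched generically (the paths
below are stand-ins created on the fly; with a real cluster you would
stop the postmaster first and move `$PGDATA/pg_xlog` itself):

```shell
# Relocate a directory onto another filesystem and symlink it back
# to its original location (stand-in paths, no real cluster touched).
DATA=$(mktemp -d)            # plays the role of $PGDATA
WALDISK=$(mktemp -d)         # plays the role of the second disk
mkdir "$DATA/pg_xlog"
mv "$DATA/pg_xlog" "$WALDISK/pg_xlog"
ln -s "$WALDISK/pg_xlog" "$DATA/pg_xlog"
ls -ld "$DATA/pg_xlog"       # now a symlink pointing at the other disk
```

The server follows the symlink transparently, so the WAL traffic lands on
the other spindle.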
[GENERAL] Known problem with HASH index?
Is there a known problem with the HASH index type in PostgreSQL 7.1.2
(7.1.2-4PGDG on Red Hat Linux 7.1, 2.4.2 kernel)? I can't find a lot of
documentation, but this is what I observe:

[playpen]$ dropdb test; createdb test; \
    psql -f create_table.sql test; \
    psql -c "COPY clients FROM '/tmp/input.txt';" test; \
    psql -c 'CREATE INDEX clients_idx ON clients USING HASH (tel);' test; \
    vacuumdb test; vacuumdb --analyze test
DROP DATABASE
CREATE DATABASE
CREATE
COPY
CREATE
NOTICE:  Index clients_idx: NUMBER OF INDEX' TUPLES (916864) IS NOT THE SAME AS HEAP' (100). Recreate the index.
VACUUM
NOTICE:  Index clients_idx: NUMBER OF INDEX' TUPLES (916864) IS NOT THE SAME AS HEAP' (100). Recreate the index.
VACUUM

[playpen]$ cat create_table.sql
CREATE TABLE clients (
    ClientID  integer,
    firstname varchar(5),
    surname   varchar(22),
    area      varchar(3),
    neigh     varchar(27),
    dimos     varchar(50),
    tel       varchar(7)
);

The input file is a bit big to include, but was created using this
brain-damaged perl script (somebody please teach me how to do random
letter strings :-)):

#!/usr/local/bin/perl -w
my $i;
my $j;
srand(time || $$);
for ($i = 0; $i < 1_000_000; $i++) {
    print int(rand(1<<31)), "\t";
    foreach $j (1..5)  { printf "%c", int(rand(57)+65); } print "\t";
    foreach $j (1..22) { printf "%c", int(rand(57)+65); } print "\t";
    foreach $j (1..3)  { printf "%c", int(rand(57)+65); } print "\t";
    foreach $j (1..27) { printf "%c", int(rand(57)+65); } print "\t";
    foreach $j (1..50) { printf "%c", int(rand(57)+65); } print "\t";
    foreach $j (1..7)  { printf "%c", int(rand(10)+48); } print "\n";
}
print "\\.\n";
[GENERAL] Re: postgres slower than grep?
I could only be bothered to try a million lines, but my results from
7.1.2 are below. Basically:

1. I find about 50% database storage overhead in this case. That's not
   completely silly, considering this is structured data, but seems a
   little high. I don't know the internal structures well enough to
   really see what's happening.

2. Why would it be faster than grep? This has to match structured data,
   in this case varchar, and not just bytes. It has to worry about
   transactions and logs, not just a stream of data. Besides, in my
   tests it is not *that* slow (3 sec, compared with 1/2 sec for grep).
   Dunno what's up with your system.

3. As you said: with an index it rocks, easily beating grep. Use an
   index - it's your friend :-)

Allan.

[playpen]$ wc -l /tmp/input.txt
101 /tmp/input.txt
[playpen]$ ls -lh /tmp/input.txt
-rw-rw-r--  1 allane  allane  124M Jul  9 22:12 input.txt
[playpen]$ psql -f create_table.sql test
CREATE
[playpen]$ time psql -c "COPY clients FROM '/tmp/input.txt';" test
COPY

real    1m26.543s
user    0m0.000s
sys     0m0.010s
[playpen]$ du -sh $PGDATA/base/963549/
179M    /var/lib/pgsql/data/base/963549
[playpen]$ time psql -c 'SELECT * FROM clients WHERE tel=1234567;' test
 clientid | firstname | surname | area | neigh | dimos | tel
----------+-----------+---------+------+-------+-------+-----
(0 rows)

real    0m24.281s
user    0m0.010s
sys     0m0.000s
[playpen]$ vacuumdb -a; vacuumdb -a --analyze
...
[playpen]$ time psql -c 'SELECT * FROM clients WHERE tel=1234567;' test
 clientid | firstname | surname | area | neigh | dimos | tel
----------+-----------+---------+------+-------+-------+-----
(0 rows)

real    0m3.808s
user    0m0.010s
sys     0m0.000s
[playpen]$ time grep '1234567' /tmp/input.txt
...
real    0m0.505s
user    0m0.150s
sys     0m0.360s
[playpen]$ dropdb test
DROP DATABASE
[playpen]$ createdb -E LATIN1 test
CREATE DATABASE
[playpen]$ psql -f create_table.sql test
CREATE
[playpen]$ time psql -c "COPY clients FROM '/tmp/input.txt';" test
COPY

real    1m23.927s
user    0m0.010s
sys     0m0.000s
[playpen]$ time psql -c 'SELECT * FROM clients WHERE tel=1234567;' test
 clientid | firstname | surname | area | neigh | dimos | tel
----------+-----------+---------+------+-------+-------+-----
(0 rows)

real    0m23.934s
user    0m0.010s
sys     0m0.010s
[playpen]$ vacuumdb -a; vacuumdb -a --analyze
...
[playpen]$ time psql -c 'SELECT * FROM clients WHERE tel=1234567;' test
 clientid | firstname | surname | area | neigh | dimos | tel
----------+-----------+---------+------+-------+-------+-----
(0 rows)

real    0m3.796s
user    0m0.010s
sys     0m0.000s
[playpen]$ psql -c 'CREATE INDEX clients_idx_tel ON clients (tel);' test
CREATE
[playpen]$ vacuumdb --analyze test
VACUUM
[playpen]$ time psql -c 'SELECT * FROM clients WHERE tel=1234567;' test
 clientid | firstname | surname | area | neigh | dimos | tel
----------+-----------+---------+------+-------+-------+-----
(0 rows)

real    0m0.189s
user    0m0.010s
sys     0m0.000s

Spiros Ioannou wrote:
> Hello all,
>
> Before you start reading, have in mind that this is not a post to
> advertise one db over another; I just want to know what happens. That's
> why I did the comparisons with another db.
> To the point:
>
> This is my table:
>
>          Table "table1"
>  Attribute |    Type     | Modifier
> -----------+-------------+----------
>  ClientID  | integer     |
>  firstname | varchar(5)  |
>  surname   | varchar(22) |
>  area      | varchar(3)  |
>  neigh     | varchar(27) |
>  dimos     | varchar(50) |
>  tel       | varchar(7)  |
>
> The Facts:
> ----------
> The table exported to a text file is about 330MB, and contains about 5
> million lines.
>
> The same table takes 670MB in postgres. Why?
> (I tried mysql, and it uses 340MB.)
>
> Issuing the following query:
>
>   select * from table1 where tel='7485842';
>
> takes about 1 min and 40 seconds to complete.
> (query has one result)
>
> Explain of the above returns:
>
>   Seq Scan on table1  (cost=0.00..147835.01 rows=23 width=76)
>
> Issuing a grep in the text file takes about 25 seconds! *(for the whole
> grep to finish - worst case)*
>
> Issuing the same query in mysql takes about 25 seconds.
>
> - To test the disk speed, I doubled the size of the text file by
>   copying it twice and the same grep query took 51 seconds (as
>   expected).
>
> - When creating an index the query completes of course in no time.
>
> - Yes, I tried vacuum.
>
> Postgres is version 7.0
> system is solaris 2.7
> hard disk is 1rpm, ultraSCSI
> cpu is UltraSparcIIi 333Mhz
> physical memory is 384MB
>
> And now the questions
> ---------------------
> - Shouldn't postgres be at least as fast as the grep?
> - Shouldn't the table data file be smaller to reduce disk activity? Why
>   is it double the mysql same-data file or the text file? I also
>   noticed that an index file for column "tel" is about 130MB large and
>   the mysql one was 64MB. Is this normal?
>
> Thank you for your time,
>
> Spiros Ioannou
> e-mail:[EMAIL PROTECTED]
> ---
> Ima