Re: [GENERAL] PostgreSQL and mySQL database size question

2001-04-30 Thread Tom Lane

Fred Ingham [EMAIL PROTECTED] writes:
 PostgreSQL  pinndex_seq  8
 pinndx   7,856
 pinndx_pkey  6,984
 parent_ndx   6,952
 tagpath_ndx  5,552
 tagname_ndx  5,560
 atrname_ndx  5,696
 pinnum_ndx   6,160
 nvalue_ndx   5,832
 value_ndx6,424
 57,024

Hm.  All but value_ndx are indexes on integer columns, so the keys are
only 4 bytes.  The index tuple overhead will be either 12 or 16 bytes
per entry depending on whether your hardware has any datatypes that
require 8-byte alignment (I think not on PCs --- what is MAXIMUM_ALIGNOF
in your src/include/config.h?).  16 bytes times 92000 entries is only
about a meg and a half; even allowing for the traditional 70% fill
factor of btrees, you shouldn't see more than a couple meg per index.
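Tom's back-of-the-envelope numbers check out; a quick sketch using the figures quoted above (16 bytes per entry, 92000 entries, 70% fill):

```python
entries = 92_000           # index entries mentioned above
bytes_per_entry = 16       # worst case: 4-byte key plus index tuple overhead

raw = entries * bytes_per_entry                # 1,472,000 bytes
mb = raw / (1024 * 1024)                       # "about a meg and a half"
with_fill = raw / 0.70                         # traditional 70% btree fill factor
mb_filled = with_fill / (1024 * 1024)          # "a couple meg" per index
print(round(mb, 2), round(mb_filled, 2))       # 1.4 2.01
```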

That assumes random loading of the index, however, and I think there may
be pathological cases where the indexes come out less dense after
initial load.  Was there any particular order to the data values when
you imported them?  If you drop any of these indexes and CREATE it
again, is the result noticeably smaller?
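A sketch of that rebuild-and-measure experiment (the column name here is an assumption taken from the index name; `relpages * 8` assumes the default 8K block size, and you should VACUUM first so relpages is current):

```sql
-- Hypothetical: rebuild one index and see if it packs tighter.
DROP INDEX pinnum_ndx;
CREATE INDEX pinnum_ndx ON pinndx (pinnum);

-- Size in KB, assuming 8K disk blocks:
SELECT relname, relpages * 8 AS size_kb
FROM pg_class
WHERE relname = 'pinnum_ndx';
```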

 Based on this information, I conclude that PostgreSQL is using significantly
 more space for its indexes than mySQL (the .MYI file contains all of the
 indexes on the pinndx table in mySQL).

I find it hard to believe that MySQL is storing nine indexes on a
92000-entry table in only 4Mb.  Storing the keys alone would take 3.3Mb,
never mind making the keys point to anything.  Are you sure that you've
accounted for all of their index storage?

 In any case, is there any way to reduce the size of the PostgreSQL
 index tables (are there different indexing schemes)?

Do you actually *need* an index on each column?  It seems highly
unlikely that each one of these indexes will pay its keep.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[GENERAL] do functions cache views?

2001-04-30 Thread Thomas F. O'Connell

are there circumstances under which a pl/pgsql function will cache the 
contents of a view?

i can do this sequence of actions just fine:

create table foo (
id int2 primary key
);

create view foo_view as
select *
from foo;

create function get_new_foo() returns int2 as '
declare
v_max_foo int2;
begin
select into v_max_foo max( id )
from foo;
return v_max_foo;
end;
' language 'plpgsql';

then, if i run get_new_foo() while modifying the values in foo, the 
function seems to get the correct values.

unfortunately, i have a function/view pair that looks roughly like this:

create view significant_records_view as
select *
from significant_records
where
/*
  * certain status conditions hold.
  * i don't think i need to include this logic since the joins here are
  * only to other standard tables. no other functions or views.
  */
;

create function get_next_significant_date( int4 ) returns date as '
declare
v_id alias for $1;
v_significant_date date;
begin
select into v_significant_date
max( other_date )
from more_significant_records msr, significant_records_view srv
where msr.significant_records_id = srv.id
and srv.significant_id = v_id;
if not found then
select into v_significant_date min( significant_date )
from significant_records_view srv
where srv.significant_id = v_id;
end if;
return v_significant_date;
end;
' language 'plpgsql';

now, this function works for existing records in the table 
significant_records. but as i add records, i end up having to drop and 
recreate the function in order for it to record the existence of the new 
records.

when i notice this happening, i run the two select statements from the 
function by hand. the first one, as i expect, returns null. the second 
one, as i expect, returns a date. but i suspect that the function 
may be caching the null from the first select and end up returning that.

if this is the case, why does the foo example at top work correctly? if 
this is not the case, why does my function not work?

i would be happy to provide further details if this is not sufficient 
for a reasonable response.

thanks.

-tfo


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [GENERAL] do functions cache views?

2001-04-30 Thread Tom Lane

Thomas F. O'Connell [EMAIL PROTECTED] writes:
 now, this function works for existing records in the table 
 significant_records. but as i add records, i end up having to drop and 
 recreate the function in order for it to record the existence of the new 
 records.

This is hard to believe.

I am not sure that "if not found" means anything after a "select max()"
query.  The select will always return exactly one row --- even if it's
just a NULL --- so I'd expect the "if not found" branch never to be taken.
Perhaps you want to be testing whether v_significant_date is NULL or
not, instead.
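A minimal sketch of that rewrite, using the names from the function above:

```sql
select into v_significant_date
       max( other_date )
  from more_significant_records msr, significant_records_view srv
 where msr.significant_records_id = srv.id
   and srv.significant_id = v_id;
-- max() always returns a row, so test the value rather than FOUND:
if v_significant_date is null then
    select into v_significant_date min( significant_date )
      from significant_records_view srv
     where srv.significant_id = v_id;
end if;
```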

Another issue, since you omitted the details of the view and of what
version you are running, is whether the view involves GROUP BY and/or
aggregates.  An aggregate over a grouped view won't work properly in
versions before 7.1.

If it's not those issues then we'll need more details --- preferably
a self-contained example.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[GENERAL] chr() command in PG7.03?

2001-04-30 Thread Dr. Evil


I notice that 7.1 has a chr() function, which converts an integer
into a character, but it doesn't seem to exist in 7.0.3.  Is there an
equivalent function?  I couldn't find anything in the docs.





[GENERAL] weird problems with DBI/DBD::pg?

2001-04-30 Thread Michelle Murrain

I recently upgraded from 6.5 to 7.1, and it mostly went smoothly (fixed the 
PHP problem, thanks to a list member). But now some of my perl stuff is being 
a bit strange, and I'm wondering whether other folks have noticed strangeness 
when they upgraded their Postgres as well as upgraded to the new DBD::Pg.

First off, my error strings seem to be cut off - so when an error happens, I 
only get the first few (not at all predictable, either) characters from the 
error string. 

Second, I have this strange situation where a script that does a query on one 
table of a database gives a reasonable result, but returns an internal server 
error when a different table is queried. (this worked prior to the upgrade) - 
plus, the weird thing is that this script has a graceful exit subroutine, and 
shouldn't just barf like that.

And of course, part of what is going on is that it's really hard to debug, 
since DBI is not returning the full error string, so it's hard to know what's 
going on.

Thanks in advance.

Michelle
-- 

Michelle Murrain, Ph.D.
President
Norwottuck Technology Resources
[EMAIL PROTECTED]
http://www.norwottuck.com




Re: [GENERAL] Locking a database

2001-04-30 Thread Michelle Murrain

On Monday 30 April 2001 02:37 pm, Jose Norberto wrote:
 Hello all!
 I want to make a pg_dump of a database, but I must be sure that no one is
 logged in. Which is the best way to do it?
 Thanks in advance

Change the permissions in the pg_hba.conf file 
(/usr/local/pgsql/data/pg_hba.conf is the default placement)

The bottom of the file looks like:

# Put your actual configuration here
# --
 
# This default configuration allows any local user to connect as any
# PostgreSQL username, over either UNIX domain sockets or IP:
 
local        all                                           trust
host         all         127.0.0.1     255.255.255.255     trust
 
# If you want to allow non-local connections, you will need to add more
# "host" records (and don't forget to start the postmaster with "-i"!).
 
# CAUTION: if you are on a multiple-user machine, the above default
# configuration is probably too liberal for you --- change it to use
# something other than "trust" authentication.
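If you really do want to lock everyone else out for the duration of the dump, one sketch (pg_hba.conf is read top to bottom and the first matching line wins; the addresses here are assumptions for a single-machine setup, and you must signal or restart the postmaster after editing):

```
# Allow local (UNIX-socket) connections so pg_dump itself still works:
local   all                                        trust
# Reject all TCP/IP connections while the dump runs:
host    all     0.0.0.0     0.0.0.0                reject
```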

Michelle
-- 

Michelle Murrain, Ph.D.
President
Norwottuck Technology Resources
[EMAIL PROTECTED]
http://www.norwottuck.com




Re: [GENERAL] Locking a database

2001-04-30 Thread Tom Lane

Jose Norberto [EMAIL PROTECTED] writes:
 I want to make a pg_dump of a database, but I must be sure that no one is 
 logged in.

Why?

pg_dump will return a consistent state of the database even if other
transactions are proceeding meanwhile.

regards, tom lane




[GENERAL] Re: Performance with Large Volumes of Data

2001-04-30 Thread John Coers

I recently posted this on the admin list and got no response. Could
anyone here help me?

 
 Hi,
 
 I am not a real sysadmin or DBA, but got stuck doing it and am trying to
 learn via a fairly difficult problem that my group must deal with: a LARGE
 volume of data.  I have been working from scratch on this for about 3 weeks
 and have run lots of tests.
 
 I am running postgres 7.1 on Solaris 5.7 with 1GB RAM, 2 300MHz processors
 and a 6GB partition.
 
 The application I am using it for is to COPY a LARGE amount of data (an
 average of 15k rows of 3 ints every 15 minutes or so, from 170 machines by
 day/250 at night) into a db and then do a query after the fact.  The COPYs
 are done via the libpq PQputline() subroutine.  The after-the-fact query
 will postprocess the data, reduce its amount and granularity, then load it
 into a new table.  Ultimately, I will have 1 db with a table of about 250M
 rows and several other dbs each with tens of millions...  Multiple GBs of
 data.
 
 Here are the options I run with postmaster:
 
 postmaster -D /evsx/aus16/coers -o -S 32768 -i -B 8192 -N 2
 
 Here are my IPC params:
 set shmsys:shminfo_shmmax=524288000
 set shmsys:shminfo_shmmin=16
 set shmsys:shminfo_shmmni=200
 set shmsys:shminfo_shmseg=200
 set semsys:seminfo_semmni=500
 set semsys:seminfo_semmns=500
 set semsys:seminfo_semmsl=500
 set semsys:seminfo_semume=100
 
 Here are my questions:
 
 1)  NUMBER OF CONNECTIONS: I use -N 2 because that seems to optimize
 performance.  If I allow more connections, the server bogs down, ultimately
 to a near-standstill if I allow too many.  I assume this is because the
 competing connections are all trying to COPY to the same database; they
 block all but one, and the semaphore chasing slows everything down.  The
 weird thing is that the CPU, iowait and swap waits in top do not elevate
 very much.  What is slowing things down in this case?  Currently, I have
 each client try for a connection and, if it fails, wait rand()%4+1 seconds
 and then try again.  This actually works pretty well, but it seems to me
 that the server should be handling this and doing a more efficient job.
 Also, am I correct in assuming there is no way to keep multiple COPYs to
 the same db & table from blocking?  When I tried to set nonblocking, data
 got dropped.
 
 2)  BOTTLENECK: I was running 2 queries on the 65M rows of data I had
 collected after I finished loading.  I had not indexed the tables.  Based
 on the top reading below, what is the bottleneck slowing the query down?
 The same phenomenon occurs when COPYing data into the table.
 
 last pid: 15973;  load averages:  0.98,  0.92,  0.78
 41 processes:  38 sleeping, 1 running, 2 on cpu
 CPU states: 48.5% idle, 43.8% user,  5.2% kernel,  2.6% iowait,  0.0% swap
 Memory: 1024M real, 17M free, 109M swap in use, 2781M swap free
 
   PID USERNAME THR PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
 15919 postgres   1  40    0   71M   69M cpu0    30:58 22.12% postgres
 15966 postgres   1  30    0   71M   69M run     12:02 23.63% postgres
 
 3)  CONFIGURATION/SETTINGS: Are my IPC params and postmaster options set
 right for my application?  My thinking is that I need lots of shared memory
 to reduce disk access.  Am I missing something?  Are there any other
 configurable kernel params that I need to know about?  What exactly will
 the sort mem (-o -S) buy me?
 
 Thanks for your patience!
 


-- 
John CoersIntrinsity, Inc.
[EMAIL PROTECTED]  Austin, Texas

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [GENERAL] PostgreSQL and mySQL database size question

2001-04-30 Thread Tom Lane

Fred Ingham [EMAIL PROTECTED] writes:
 After dropping and recreating the indexes I have (in KB):

Ah, that's more like it.

 So, it appears that I am running into the pathological case.

Actually, a simpler explanation would be if you'd done a bunch of
updates and/or deletes at some point.  VACUUM currently never shrinks
indexes, so the excess index space might just represent the peak size
of your tables.  (Yes, improving that is on the to-do list.)

 In short, they are all needed for acceptable performance querying and
 retrieving values from the database.

You've got common queries that do selects with WHERE clauses referencing
each one of these columns?  Possible, certainly, but seems a bit odd...

 With respect to mySQL, I did verify that mySQL did in fact have all of the
 indexes I created and that they were saved in a single file (the *.MYI).

Hm.  Anyone know how mySQL represents indexes?  That still seems awfully
small.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[GENERAL] Problem installing Postgresql

2001-04-30 Thread Al

I am running OpenLinux 2.3 (Caldera) out of the box.  Trying to install 
PostgreSQL, I seem to be unable to get past the 'make' process.  The errors I 
get at the end of the process are:

make[3]: *** [pg_backup_custom.o] Error 1
make[3]: Leaving directory `/addon/postgresql-7.1/src/bin/pg_dump'
make[2]: *** [all] Error 2
make[3]: Leaving directory `/addon/postgresql-7.1/src/bin/'
make[1]: *** [all] Error 2
make[3]: Leaving directory `/addon/postgresql-7.1/src'
make: *** [all] Error 2

Some examples of errors I see in the output are:

pg_backup_custom.c: In function `_DoDeflate':
pg_backup_custom.c:925: `z_streamp' undeclared (first use in this function)
pg_backup_custom.c:925: parse error before `zp'
pg_backup_custom.c:928: `ctx' undeclared (first use in this function)

Please note that I am fairly new to Linux so a lot of this isn't making 
sense.  Any ideas? 





Re: [GENERAL] weird problems with DBI/DBD::pg?

2001-04-30 Thread Gilles DAROLD

Hi,

I use DBI/DBD::Pg widely and haven't noticed any problem. Perhaps because
there are no errors, or they are well handled :-)). I'm interested to know more
about this problem because I have many scripts to review if it affects them.

Could you please send a fuller explanation of the problem and some of the
related source code? Do you use mod_perl? I presume your 'Internal error' is
from the Apache log... What versions of DBI and DBD are you using?

Regards,

Gilles

I'm not here tomorrow but can take a look on Wednesday...

Michelle Murrain wrote:

 I recently upgraded from 6.5 to 7.1, and it mostly went smoothly (fixed the
 PHP problem, thanks to a list member). But now some of my perl stuff is being
 a bit strange, and I'm wondering whether other folks have noticed strangeness
 when they upgraded their Postgres as well as upgraded to the new DBD::Pg.

 snip -- rest of message quoted in full above





Re: [GENERAL] Problem installing Postgresql

2001-04-30 Thread Gilles DAROLD

Hi,

Which package did you download?  You need the full install, not just
the base package.

Regards,

Gilles

Al wrote:

 I am running OpenLinux 2.3 (Caldera) out of the box.  Trying to install
 PostGreSQL I seem to be unable to get past the 'make' process.  Errors I
 get at the end of the process are:

 snip -- error output quoted in full above





Re: [GENERAL] weird problems with DBI/DBD::pg?

2001-04-30 Thread Tom Lane

Michelle Murrain [EMAIL PROTECTED] writes:
 And of course, part of what is going on is that it's really hard to
 debug, since DBI is not returning the full error string, so it's hard
 to know what's going on.

You could look in the postmaster's log (you are keeping one I trust ;-))
to find the full error string reported by the backend.

No idea what's causing DBD::Pg to misbehave like that, though.  Perhaps
its latest release is a tad broken?  Which DBI/DBD versions are you
using, exactly?

regards, tom lane




[GENERAL] More src install probs...

2001-04-30 Thread David Pieper

I'm trying to build 7.1 from source on a RedHat box. 
Here's what I keep ending up with:

[root@dp1 postgresql-7.1]# gmake
gmake -C doc all
gmake[1]: Entering directory `/opt/src/postgresql-7.1/doc'
gmake[1]: Nothing to be done for `all'.
gmake[1]: Leaving directory `/opt/src/postgresql-7.1/doc'
gmake -C src all
gmake[1]: Entering directory `/opt/src/postgresql-7.1/src'
gmake -C backend all
gmake[2]: Entering directory `/opt/src/postgresql-7.1/src/backend'
gmake -C access all
gmake[3]: Entering directory `/opt/src/postgresql-7.1/src/backend/access'
gmake -C common SUBSYS.o
gmake[4]: Entering directory `/opt/src/postgresql-7.1/src/backend/access/common'
gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations
-I../../../../src/include   -c -o heaptuple.o heaptuple.c
In file included from /usr/include/bits/posix1_lim.h:126,
 from /usr/include/limits.h:30,
 from 
/usr/lib/gcc-lib/i386-redhat-linux/egcs-2.91.66/include/limits.h:117,
 from 
/usr/lib/gcc-lib/i386-redhat-linux/egcs-2.91.66/include/syslimits.h:7,
 from 
/usr/lib/gcc-lib/i386-redhat-linux/egcs-2.91.66/include/limits.h:11,
 from ../../../../src/include/utils/nabstime.h:17,
 from ../../../../src/include/access/xact.h:19,
 from ../../../../src/include/utils/tqual.h:19,
 from ../../../../src/include/access/relscan.h:17,
 from ../../../../src/include/access/heapam.h:19,
 from heaptuple.c:23:
/usr/include/bits/local_lim.h:27: linux/limits.h: No such file or directory
gmake[4]: *** [heaptuple.o] Error 1
gmake[4]: Leaving directory `/opt/src/postgresql-7.1/src/backend/access/common'
gmake[3]: *** [common-recursive] Error 2
gmake[3]: Leaving directory `/opt/src/postgresql-7.1/src/backend/access'
gmake[2]: *** [access-recursive] Error 2
gmake[2]: Leaving directory `/opt/src/postgresql-7.1/src/backend'
gmake[1]: *** [all] Error 2
gmake[1]: Leaving directory `/opt/src/postgresql-7.1/src'
gmake: *** [all] Error 2


It seems like I'm missing something, but what?

Enjoy,
David Pieper

-- 
programmer n. /pro gram er/ A device for transmuting caffeine into code.




[GENERAL] Re: Problem installing Postgresql

2001-04-30 Thread mike

You should be using gmake instead of make.

./configure
gmake
gmake install

Mike

 I am running OpenLinux 2.3 (Caldera) out of the box.  Trying to install
 PostGreSQL I seem to be unable to get past the 'make' process.  Errors I
 get at the end of the process are:

 snip -- error output quoted in full above



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[GENERAL] Problem with restore on upgrading to 7.1

2001-04-30 Thread Joseph Shraibman

I just updated to 7.1 and tried to restore my database that was dumped
with 7.0.3

psql:dumpall-2001-4-27:8452: ERROR:  copy: line 8933, Bad timestamp
external representation '2001-01-17 19:37:60.00-05'
psql:dumpall-2001-4-27:8452: PQendcopy: resetting connection
CREATE
CREATE
snip

[root@d1 /var/lib/pgsql] tail +8933 < dumpall-2001-4-27 | head 
372 2   76124   2001-01-17 20:38:47.54-05   2001-01-17
20:38:50-05 12   13

[root@d1 /var/lib/pgsql] grep 19:37:60 dumpall-2001-4-27 
175 2   71904   2001-01-17 19:37:60.00-05   2001-01-17
19:38:01-05 12   13

It appears that all my data was restored except for the table that had
the error (which happened to be my biggest table), none of which was
restored.
-- 
Joseph Shraibman
[EMAIL PROTECTED]
Increase signal to noise ratio.  http://www.targabot.com




Re: [GENERAL] Problem with restore on upgrading to 7.1

2001-04-30 Thread Tom Lane

Joseph Shraibman [EMAIL PROTECTED] writes:
 psql:dumpall-2001-4-27:8452: ERROR:  copy: line 8933, Bad timestamp
 external representation '2001-01-17 19:37:60.00-05'

Are you on Mandrake by any chance?  Looks like your 7.0 installation
had that infamous roundoff problem.

I recommend manually changing 37:60 to 38:00 in the dump file,
then you should be able to load it.
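That hand edit can also be scripted; a sketch using sed (the filename and the bad value come from the error message above):

```shell
# Demonstrate the substitution on the offending line; for the real dump:
#   sed 's/19:37:60\.00/19:38:00.00/' dumpall-2001-4-27 > dumpall-fixed
echo '175 2 71904 2001-01-17 19:37:60.00-05 2001-01-17 19:38:01-05' |
  sed 's/19:37:60\.00/19:38:00.00/'
```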

regards, tom lane




Re: [GENERAL] More src install probs...

2001-04-30 Thread Lamar Owen

David Pieper wrote:
 
 I'm trying to build 7.1 from source on a RedHat box.
 Here's what I keep ending up with:

From the gcc version it appears you are on a RedHat 6.x box.

 It seems like I'm missing something, but what?

kernel-headers?
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11




[GENERAL] Copy

2001-04-30 Thread John Coers

Does a COPY FROM block?  In case I am using the wrong terminology,
what I need to know is: if I have multiple clients performing COPY
FROM...PQputline() using asynchronous connections, will I lose data?
Will the server simply execute them serially?




Re: [GENERAL] More src install probs...

2001-04-30 Thread David Pieper

David Pieper wrote:
 
 I'm trying to build 7.1 from source on a RedHat box.
 Here's what I keep ending up with:
 
 snip -- full build log quoted above

OK. So I had /usr/src/linux linked to a kernel I deleted the other week. I fixed
the link to point to the current one I'm using, and gmake is happy. It compiled
fine and is running now.

I still get a lot of -Wmissing-prototypes / -Wmissing-declarations
warnings; should I worry?

Enjoy,
David Pieper
 
--
programmer n. /pro gram er/ A device for transmuting caffeine into code.




Re: [GENERAL] Copy

2001-04-30 Thread Tom Lane

John Coers [EMAIL PROTECTED] writes:
 Does a COPY FROM block?  In case I am using the wrong terminology,
 what I need to know is if I have multiple clients performing COPY FROM...PQputline()
 using asynchronous connections, will I lose data?

No.

 Will the server simply execute them serially?

They'll be executed in parallel, just the same as if each client had
done a bunch of INSERTs wrapped in a BEGIN/END block.  You'd only see
problems if clients inserted conflicting data (eg, identical keys in
a column with a UNIQUE index).  You may care to read the documentation
about multi-version concurrency control and locking.
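In other words, each client's COPY behaves, concurrency-wise, like the following (a sketch; the table name is made up):

```sql
BEGIN;
INSERT INTO results VALUES (1, 10, 100);
INSERT INTO results VALUES (2, 20, 200);
-- ... one INSERT per COPY data line ...
COMMIT;
-- Two such transactions running in parallel only conflict if they
-- try to insert duplicate keys into a column with a UNIQUE index.
```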

regards, tom lane




Re: [GENERAL] Problem with restore on upgrading to 7.1

2001-04-30 Thread Joseph Shraibman

No, I have a redhat 6.x system and I built the postgres myself from the
7.0.3 source.

Tom Lane wrote:
 
 Joseph Shraibman [EMAIL PROTECTED] writes:
  psql:dumpall-2001-4-27:8452: ERROR:  copy: line 8933, Bad timestamp
  external representation '2001-01-17 19:37:60.00-05'
 
 Are you on Mandrake by any chance?  Looks like your 7.0 installation
 had that infamous roundoff problem.
 
 I recommend manually changing 37:60 to 38:00 in the dump file,
 then you should be able to load it.
 
 regards, tom lane

-- 
Joseph Shraibman
[EMAIL PROTECTED]
Increase signal to noise ratio.  http://www.targabot.com




[GENERAL] DBI/AutoCommit/Postgres

2001-04-30 Thread Fran Fabrizio


Hello all,

I'm trying to speed up some insert statements.  I have been tinkering
with the postmaster and DBI parameters, and I did some timings on my insert
and copy commands.  Here is a sample insert query:

010430.18:31:18.199  [2604] query: insert into log values
(0,0,lower('blah.blah.mydomain.com'),lower('foo'),lower('bar'),lower('blah'),upper('Me'),
upper('Myself'), upper('I'), upper('INFO'), 'String Here', '20010430
16:00:00')

Pretty straightforward.  Table log looks like:

 Table log
  Attribute   |   Type| Modifier
--+---+--
 site_id  | bigint|
 host_id  | bigint|
 fqdn | varchar() | not null
 site | varchar() | not null
 region   | varchar() | not null
 hostname | varchar() | not null
 product  | varchar() | not null
 class| varchar() | not null
 subclass | varchar() | not null
 status   | varchar() | not null
 msg  | varchar() | not null
 remote_stamp | timestamp | not null
 tstamp   | timestamp | not null

Here are my non-scientific timings:
with AutoCommit on, using DBI across TCP/IP:  1.3 INSERTS/second
with AutoCommit off, DBI, TCP/IP, committing after every 100:  1.6 INSERTS/second
using psql -h host -U user -c "copy log from stdin" dbname < datafile:  1.73 rows/second
using COPY log FROM 'filename' on the db machine itself:  1.73 rows/second
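For reference, the commit-every-100 variant looks roughly like this in DBI (a sketch; the DSN, credentials, column list, and get_next_row() are all placeholders):

```perl
use DBI;

# AutoCommit off: rows accumulate in one transaction until we commit.
my $dbh = DBI->connect('dbi:Pg:dbname=mydb', '', '',
                       { AutoCommit => 0, RaiseError => 1 });
my $sth = $dbh->prepare('INSERT INTO log VALUES (?, ?, ?)');

my $n = 0;
while (my $row = get_next_row()) {     # hypothetical data source
    $sth->execute(@$row);
    $dbh->commit if ++$n % 100 == 0;   # commit every 100 inserts
}
$dbh->commit;                          # flush the final partial batch
$dbh->disconnect;
```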

Another crucial piece of information is that each insert kicks off a
trigger.  I did not write the trigger, and do not know how to write
triggers, but I think that might be the contributing factor to the
slowness.  Here is the text file used to create the trigger:

drop function update_host_table();
drop trigger incoming_trigger on incoming ;

create function update_host_table()
returns opaque
as 'declare

myrec   record;
new_hostid  int4;
begin

new.timestamp := now() ;
/* check to see if we have seen this machine before */

select * into myrec
from knownhosts k
where k.fqdn = new.fqdn and
k.hostname = new.hostname ;

/* -- if we have not found the machine name we are going to
insert a new record into the knownhosts table and set the init_contact
to now
*/

if not found
then
insert into knownhosts
values (new.fqdn,new.hostname,new.timestamp,new.timestamp) ;
else
update knownhosts
set last_contact = new.timestamp
where knownhosts.fqdn = new.fqdn ;
end if ;
/* now we are going to update the status table with the new record */

select * into myrec
from status s where
s.fqdn = new.fqdn and s.hostname=new.hostname
and s.class=new.class and s.sub_class=new.sub_class ;

if not found
then
insert into status
values (new.fqdn,new.hostname,new.class,
new.sub_class,new.level,new.msg,new.timestamp) ;
else
update status
set level = new.level,
timestamp = new.timestamp
where fqdn=new.fqdn and hostname=new.hostname and
class = new.class and sub_class = new.sub_class ;
end if;

return new;
end ;'
language 'plpgsql';

create trigger incoming_trigger
before insert on incoming
for each row
execute procedure update_host_table();

1.73 INSERTS/second seems awfully slow, but maybe I have set my
expectations too high.  Now that you all can see the table and the kind
of data I am trying to put into it, do you have any suggestions?  The
hardware specs of the database machine are:   Pentium III 733Mhz, 512
megs memory, 7 gigs free on the partition.  Seems like I should be
getting a lot more horsepower.  I really need to speed this up somehow.
Does anyone see anything in the trigger or otherwise that would cause
this to be so slow?

Thank you very much,
Fran





[GENERAL] Re: PHPPgAdmin or MS Access

2001-04-30 Thread Joel Burton

On Mon, 30 Apr 2001, Randall Perry wrote:

 Got a simple PgSQL database running on a linux server which I need the
 client to access for inserts and updates from a Win box.
 
 What's the simplest way to do this; using PHPPgAdmin, or MS Access via ODBC?
 
 Where can I find detailed info on using ODBC with Access and PgSQL?

FAQ at www.scw.org/pgaccess

-- 
Joel Burton   [EMAIL PROTECTED]
Director of Information Systems, Support Center of Washington





Re: [GENERAL] DBI/AutoCommit/Postgres

2001-04-30 Thread Tom Lane

Fran Fabrizio [EMAIL PROTECTED] writes:
 Another crucial piece of information is that each insert kicks off a
 trigger.  I did not write the trigger, and do not know how to write
 triggers, but I think that might be the contributing factor to the
 slowness.

Bingo.  Your trigger is adding four complete queries (two selects and
two inserts or updates) for every row inserted.

Think about ways to avoid some or all of those.  (For example, does the
knownhosts table really need a last_contact column, or could you obtain
that on-the-fly from a query over the incoming or status table when
needed?)  Also look to see if these are being done with reasonable query
plans ... perhaps you need some indexes and/or vacuum operations ...
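A sketch of that housekeeping (column names are taken from the trigger body above; verify with EXPLAIN that the trigger's selects actually use the indexes):

```sql
CREATE INDEX knownhosts_fqdn_host_idx ON knownhosts (fqdn, hostname);
CREATE INDEX status_lookup_idx ON status (fqdn, hostname, class, sub_class);

-- Keep planner statistics current so the indexes get used:
VACUUM ANALYZE knownhosts;
VACUUM ANALYZE status;
```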

regards, tom lane




[GENERAL] TRUNCATE and INDEXes

2001-04-30 Thread Eugene Lee

If there are one or more indexes on a table that gets TRUNCATEd, are those
indexes automatically updated, just as they would be by INSERTs and DELETEs?


-- 
Eugene Lee
[EMAIL PROTECTED]




[GENERAL] Minor documentation bug

2001-04-30 Thread Joseph Shraibman

In /usr/local/pgsql/doc/html/runtime-config.html:

DEBUG_PRINT_PARSE (boolean), DEBUG_PRINT_PLAN (boolean),
DEBUG_PRINT_REWRITTEN (boolean), DEBUG_PRINT_QUERY (boolean),
DEBUG_PRETTY_PRINT (boolean)

  For any executed query, prints either the query, the parse tree,
the execution plan, or the query rewriter output to the server log.
DEBUG_PRETTY_PRINT selects
  are nicer but longer output format. 

The order of the two lists does not seem to match.

BTW can I send to the bugs list w/o subscribing?  I'm not sure so I'm
sending to general.

-- 
Joseph Shraibman
[EMAIL PROTECTED]
Increase signal to noise ratio.  http://www.targabot.com




[GENERAL] fsync on 7.1

2001-04-30 Thread Joseph Shraibman

If I turn off fsync on 7.1, does that mean that the WAL file is still sync'd
(according to WAL_SYNC_METHOD in the log file) while other files are not? 
Or does fsync apply to all files equally?

-- 
Joseph Shraibman
[EMAIL PROTECTED]
Increase signal to noise ratio.  http://www.targabot.com
