Re: [GENERAL] slow inserts and updates on large tables

1999-02-17 Thread Jim Mercer

> > if anyone has any pointers as to why this is so slow, lemme know.
> 
> Have you checked the usual stuff:
> 
> 1) Each insert and update creates its own transaction. That is, an
>insert is in fact:
> 
>BEGIN TRANSACTION;
>INSERT...;
>COMMIT;
> 
>So, to make things faster you should BEGIN TRANSACTION explicitly
>before all the inserts and COMMIT after them. Or separate into
>manageable bulks if you run into a memory problem.
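for what it's worth, the quoted advice can be applied to an existing file of
INSERT statements with a one-liner (file names here are only examples):

```shell
# Example input: a file of INSERT statements, one per line.
printf 'INSERT INTO testtable VALUES (1);\nINSERT INTO testtable VALUES (2);\n' > insert.sql

# Wrap it in a single transaction so all rows commit at once.
{ echo "BEGIN;"; cat insert.sql; echo "COMMIT;"; } > traninsert.sql

# traninsert.sql can now be fed to the backend with: psql -f traninsert.sql
```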

i have a script which times various methods:

--
data is 6047 records
insertama time was 56 seconds
psqlinsert time was 51 seconds
traninsert time was 51 seconds
psqlselect time was 0
--

the script drops and recreates the testtable before each test.

insertama reads the data from a text file and uses PQexec to insert each record.
it does not use begin/commit.

psqlinsert is "psql -f insert.sql" where insert.sql is a file of insert
commands (created by printf'ing the PQexec args from insertama).

traninsert is the same as psqlinsert, except it has "begin transaction" and
"commit transaction" at the start and end.

psqlselect is a "psql -c 'select count(callid) from testtable'".

so, according to my tests, the begin/commit stuff doesn't do me any good.

> 2) Indexing tables mean faster SELECTs at the expense of slower INSERTs
>and UPDATEs. There is no magic. The reasoning is that normally you
>query the data a lot more than you change it.
> 
>Thus, it is preferable, before doing bulk inserts, to drop the
>indices and recreate them afterwards. This is true when you are not
>expecting the database to be queried at the same time the inserts are
>made.

the production database currently has over 5 million records, and is expected
to run at 10-15 million in full production.

the inserts happen in batches, every 10 minutes or so.

recreating the index takes some 30 minutes as it is.

however, if i use the above bench test script, but with the index removed,
i get:

--
data is 6047 records
insertama time was 53 seconds
psqlinsert time was 47 seconds
traninsert time was 48 seconds
psqlselect time was 0 seconds
--

as you can see, it didn't make much difference.

>As for updates, it's trickier, because you actually use the index
>for the WHERE part of the update. If speed is of the essence, I would
>probably try the following:
> 
>SELECT * INTO TABLE temp_table FROM your_table WHERE update_condition;
>DELETE FROM your_table WHERE update_condition;
>DROP INDEX...;
>INSERT INTO your_table SELECT ... FROM temp_table; -- update within select
>CREATE INDEX...;

as stated above, the size of the table means i can't do this on every update.

> 3) Back to the issue of INSERTS - copies are faster. If you can transform
>the data into tab-delimited format as required by COPY, you save a lot
>of time on parsing, planning etc.

this sorta defeats the purpose of putting the data in an SQL database. 8^)

-- 
[ Jim Mercer   Reptilian Research   [EMAIL PROTECTED]   +1 416 410-5633  ]
[ The telephone, for those of you who have forgotten, was a commonly used ]
[ communications technology in the days before electronic mail.           ]
[ They're still easy to find in most large cities. -- Nathaniel Borenstein ]



Re: [GENERAL] slow inserts and updates on large tables

1999-02-17 Thread Jim Mercer

> How about -o -F ?

the startup script is:

[ -x /usr/local/pgsql/bin/postmaster ] && {
   su -l pgsql -c 'exec /usr/local/pgsql/bin/postmaster -i -S -o -F -d 3 \
   -D/usr/local/pgsql/data' >> /usr/local/pgsql/errlog 2>&1
   echo -n ' pgsql'
}





Re: [GENERAL] slow inserts and updates on large tables

1999-02-17 Thread Jim Mercer

> At 16:10 +0200 on 17/2/99, Jim Mercer wrote:
> You probably didn't understand me. If you convert it to tab delimited text
> and then use COPY table_name FROM filename/stdin instead of INSERT, it will
> be much faster, because you don't have to do the parsing and planning on
> each line, but only on the whole copy.
> 
> I didn't tell you to use the data directly from those text files...
> 
> PQexec( con, "COPY table1 FROM stdin" );
> 
> while (data_still_coming) {
> 
>sprintf( line, "%s\t%s\t%s\n" , item1, item2, item3 );
>PQputline( con, line );
> 
> }
> 
> PQputline( con, "\\.\n" );
> PQendcopy(con);

i will test this with my insertama program, but i see some problems with this.

firstly, it assumes that all of your application programs are updated each
time you modify the structure of the table.

i am using "insert into testtable (fieldname1, fieldname2) values ('1', '2');"

this allows the applications to remain unchanged if new fields are added.

also, it doesn't seem to address the issue of updates, which suffer from worse
performance than inserts.

what is a realistic number of inserts per second under postgresql, with or
without an index?




Re: [GENERAL] restricting select statements

1999-02-24 Thread Jim Mercer

> I'm working with JDBC and if I issue a broad SELECT statement, with a
> good amount of records (2000) it takes quite a while to execute the
> query and send the results back, even up to a minute, when I only want
> about the first 100 records. If I restrict the search, it is satisfied
> in a much more reasonable amount of time. The problem is that I can't
> know how big the database is. I could do a count and get an idea of how
> large the database is and make a guess that the data is uniformly
> distributed. There was a java command to restrict the number of records
> saved setMaxSize(); but this does not seem to decrease the time of
> execution of executeQuery(), and probably only uses less local memory.

have a look at using cursors.

worked for me.
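
for reference, the shape of the cursor approach (table, cursor, and column
names are made up; inside a transaction, only the rows actually fetched are
sent to the client):

```sql
BEGIN;
DECLARE first_rows CURSOR FOR
    SELECT * FROM big_table ORDER BY id;
FETCH FORWARD 100 IN first_rows;  -- only these 100 rows cross the wire
CLOSE first_rows;
COMMIT;
```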




[GENERAL] inet and cidr type problems

1999-03-03 Thread Jim Mercer

i have been playing with the inet and cidr types, and i have noticed a couple
problems, and would entertain suggestions as to how to fix them without
necessarily breaking how other people might be using them.

- a table with an element of type inet will show "0.0.0.0/0" as "00/0"

i suspect this can be fixed in src/backend/utils/adt/network.c or some such.

- when creating a table with either type inet or type cidr as a primary, unique
   key, the elements "198.68.123.0/24" and "198.68.123.0/27" are considered
   equal.

i considered editing src/backend/utils/adt/network.c and changing the various
comparison functions to take the subnet bits into account, but i wasn't sure
what the proper fix is.

my feeling is to make "/27" to be greater than "/24", such that when fetching
in order, the "/24" will come first.

if i make such changes to the source, will it break other people's code if
the changes get added to the core source (6.4.3 or 6.5)?




Re: [GENERAL] dbase converter

1999-04-08 Thread Jim Mercer

> Is there a dbase3 <=> postgresql converter available at all? I found
> dbf2sql v2.2 which converts to (but not from) Postgres95, with notes such as
> 
> -p  primary:
> Select the primary key. You have to give the exact
> field-name. (Only for dbf2msql, Postgres95 doesn't have a primary key)
> 
> which of course PostgreSQL does. Anyone know of something more recent?

i used the dbf2sql library to create a utility to dump dbf file into two
ascii text files.

one file generated is an sql create statement and the other is a series of
sql insert statements, "insert into table (attr, attr) values (val, val);"

for me, it was easier to modify the text files to my tastes with
external scripts or manual editing than to rework the code to do actual
inserts.

feel free to have a look:

ftp://ftp.reptiles.org/pub/pgsql/dbf2sql-2.0-rep.tar.gz

this is the dbf2sql-2.0 stuff, plus the source for my program (mydb).
the makefile is minimally modified to my tastes.




Re: [GENERAL] virtual fields ( proxy execution ?? )

1999-05-10 Thread Jim Mercer

> Hi
> Err maybe that was a bad example. How about this ... I have ( not really )
> a large number of programmable power supplies connected to my remote Linux box
> where the details about each supply are stored in a database.( eg 100 supplies
> = 100 records ). One of the fields in each record might be called voltage. If
> I query a particular field then the current voltage of the supply is returned.
> If I write to that same field then that particular supply would have its
> voltage adjusted to the written value. This would allow me to manipulate the
> power supply parameters using SQL and would be a powerful ( no pun  ) tool for
> doing all sorts of things in response to alarms etc ( including sending email
> ).
> 
> Thanks again.

looks like an application for triggers.




[GENERAL] detecting locks

1999-05-17 Thread Jim Mercer


i have a number of processes which deal with the same table.

i've run into locking contention.

is there a way for me to tell if there are connections waiting on a share
or exclusive lock?

i'm using libpq and C.




Re: [GENERAL] Question about indexes.

1999-06-03 Thread Jim Mercer

> This table has no indexes at all. Each query made (using EXPLAIN) reveals a
> "Seq Scan".
> So far, everything is OK.
> Then I define 2 indexes, one affects a varchar field, for example 'lastname'
> an the other a float type field, let´s call it 'id'.
> When I perform a query such as SELECT * FROM table WHERE lastname =
> 'Douglas', EXPLAIN reveals that index is being used ("Index Scan").
> But when the query is like: SELECT * FROM table WHERE id = 10, no index is
> used, a "Seq Scan" is made. So it looks like if an index defined on a
> numeric type field, doesn't work. (same thing happens with an index on
> integer field).
> Finally I made the index on 'id' field CLUSTERED. Repeating the query using
> EXPLAIN I get a lower cost and "Index Scan".

i found a similar problem with 6.5beta, only i was using index fields of
int8, float8, datestamp and abstime.

be nice to know what the issue is.

jim (wandering off to look up "CLUSTERED")




[GENERAL] why so much freaking swap?!?!?

1999-08-14 Thread Jim Mercer


i've got a database with several million records.

i've found that from time to time i need to nuke the primary index in order
to get a vacuum done on it.

i don't like this, but i can live with it.

now, i'm vacuuming it and the backend process is growing to enormous size:

(i was gonna cut and paste part of a top session here, but the vacuum just
blew out the 750M swap partition on the server, and now i need to reboot it).

why does this vacuum require so much core memory?




[GENERAL] huge backend processes

1999-08-18 Thread Jim Mercer


maybe i'm doing something wrong here:

CREATE TABLE samples
  (
  mark  abstime,
  subnetinet,
  bytes_in  float8,
  bytes_out float8
  );
CREATE INDEX samples_mark ON samples (mark);

--- fill it with lots and lots of data

BEGIN WORK;
DECLARE mycurs CURSOR FOR
   SELECT DATE_PART('epoch', mark), subnet, bytes_in, bytes_out
   FROM samples
   WHERE mark >= 'epoch or another date'::abstime;

-- LOOP
FETCH FORWARD 1000 IN mycurs

END WORK;


given the above, actually done using C/libpq, i run my program, which
does a PQclear after each FETCH.

after reading 25 records, top says:

  PID USERNAME PRI NICE SIZERES STATETIME   WCPUCPU COMMAND
13748 postgres105   0 22724K 20588K RUN  3:05 86.14% 86.14% postgres

at some point, it runs out of memory (or something):

 279001 records read (1177 rec/sec)
   testprog: query failed - FETCH FORWARD 1000 IN samples;
   testprog: (7) pqReadData() -- backend closed the channel unexpectedly.
   This probably means the backend terminated abnormally before or while
   processing the request.

i have had this problem with other processes on other tables.

the problem is usually if i am selecting a huge number of records,
and defining some conversion or something in the selected fields.

if i revert my code to (note: less the DATE_PART conversion):

DECLARE mycurs CURSOR FOR
   SELECT mark, subnet, bytes_in, bytes_out
   FROM samples
   WHERE mark >= 'epoch or another date'::abstime;

it works fine.



Check out "PostgreSQL Wearables" @ http://www.pgsql.com



Re: [GENERAL] huge backend processes

1999-08-18 Thread Jim Mercer

> if i revert my code to (note: less the DATE_PART conversion):
> 
> DECLARE mycurs CURSOR FOR
>SELECT mark, subnet, bytes_in, bytes_out
>FROM samples
>WHERE mark >= 'epoch or another date'::abstime;
> 
> it works fine.

as a followup, if i use:

DECLARE mycurs CURSOR FOR
   SELECT mark::int8, subnet, bytes_in, bytes_out
   FROM samples
   WHERE mark >= 'epoch or another date'::abstime;

it works fine as well.






Re: [GENERAL] initdb error->ld-elf.so.1: Shared object "libpq.so.2" not found^H

1999-10-26 Thread Jim Mercer

> I got the above error [see subject] when I attempted to "initdb".
> 
> What's wrong and how do I fix it ?

what operating system?

-- 
[ Jim Mercer [EMAIL PROTECTED]  +1 416 506-0654 ]
[  Reptilian Research -- Longer Life through Colder Blood  ]
[  Don't be fooled by cheap Finnish imitations; BSD is the One True Code.  ]





[GENERAL] possible problem with indexes postgresql-6.5.2

1999-11-08 Thread Jim Mercer


i have a large table that gets added to frequently (2-3 times per hour, 50K
or more records each time).

this table has a single index, unique, a char(20) field.

every now and then, the index seems to grow abnormally, although it appears
to function fine.

here is an ls -l, before:

-rw---  1 pgsql  wheel  1073741824 Nov  6 23:34 gt
-rw---  1 pgsql  wheel  1073741824 Nov  6 23:36 gt.1
-rw---  1 pgsql  wheel  1073741824 Nov  6 23:37 gt.2
-rw---  1 pgsql  wheel   796180480 Nov  7 17:11 gt.3
-rw---  1 pgsql  wheel   0 Oct 28 00:15 gt.4
-rw---  1 pgsql  wheel  1073111040 Nov  7 17:11 gt_callid_key

another round of inserts hit, and then i did:
"drop index gt_callid_key;"
"vacuum verbose;"
"create unique index gt_callid_key on gt (callid);"

-rw---  1 pgsql  wheel  1073741824 Nov  7 17:22 gt
-rw---  1 pgsql  wheel  1073741824 Nov  7 17:24 gt.1
-rw---  1 pgsql  wheel  1073741824 Nov  6 23:37 gt.2
-rw---  1 pgsql  wheel   797941760 Nov  7 18:12 gt.3
-rw---  1 pgsql  wheel   0 Oct 28 00:15 gt.4
-rw---  1 pgsql  wheel   359399424 Nov  7 18:12 gt_callid_key

from time to time, i have seen gt_callid_key grow to in excess of 50% the size
of the main table.

any ideas?

this is postgres-6.5.2 (patched from 6.5.1) on intel hardware under FreeBSD.






[GENERAL] ERROR: cannot read block 122600 of gt

1999-11-09 Thread Jim Mercer


i'm getting the following error on one of my tables:

error: SQL: FETCH FORWARD 500 IN cur_NON_110916;
error: database (7) ERROR:  cannot read block 122600 of gt

there are no messages from the OS about disk read failures.

this is postgres-6.5.2 on FreeBSD-3.3-STABLE






Re: [GENERAL] Cleaning up vacuums

1999-11-24 Thread Jim Mercer

On Wed, Nov 24, 1999 at 02:25:11PM +, Mark Jewiss wrote:
> Each time I vacuum (from either the script or the command line), I see
> things like:
> 
> DEBUG:  --Relation pg_indexes--
> DEBUG:  Pages 0: Changed 0, Reapped 0, Empty 0, New 0; Tup 0: Vac 0,
> Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail.
> Space 0/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.
> 
> Redirecting STDERR or STDOUT somewhere doesn't seem to have any effect on
> these messages.
> 
> Any ideas anyone?

must be something in your perl script.

i do similar things using /bin/sh scripts as:

  psql -h host -d database -c "vacuum verbose;" > errors 2>&1
  if [ $? -ne 0 ] ; then
  Mail -s "vacuum errors" someone < errors
  fi
  rm -f errors

works fine.

also, if you enable syslog (include/config.h) and set up a pg_options file,
i've noticed that verbose level 2 will put the output of vacuum into
the syslog file.






Re: [GENERAL] identifying performance hits: how to ???

2000-01-12 Thread Jim Mercer

On Wed, Jan 12, 2000 at 12:23:23PM -0400, The Hermit Hacker wrote:
> Have/do you perform reasonably regular vacuum's of the database?  

on my databases, i have only been successful in doing a "VACUUM VERBOSE" on
my tables.

i suspect i've got the syntax wrong or something:

nagoss=> \h vacuum
Command: vacuum
Description: vacuum the database, i.e. cleans out deleted records, updates statistics
Syntax:
VACUUM [VERBOSE] [ANALYZE] [table]
or
VACUUM [VERBOSE]  ANALYZE  [table [(attr1, ...attrN)]];

nagoss=> vacuum verbose analyse switches;
ERROR:  parser: parse error at or near "switches"
nagoss=> vacuum verbose switches;
NOTICE:  --Relation switches--
NOTICE:  Pages 1: Changed 1, Reapped 0, Empty 0, New 0; Tup 9: Vac 0, Keep/VTL 0/0, 
Crash 0, UnUsed 0, MinLen 78, MaxLen 78; Re-using: Free/Avail. Space 0/0; 
EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.
NOTICE:  Index switch_name: Pages 2; Tuples 9. Elapsed 0/0 sec.
VACUUM

what is the correct syntax for doing a vacuum analyse?






Re: [GENERAL] identifying performance hits: how to ???

2000-01-12 Thread Jim Mercer

On Wed, Jan 12, 2000 at 01:09:08PM -0500, [EMAIL PROTECTED] wrote:
> J> nagoss=> \h vacuum
> J> Command: vacuum
> J> Description: vacuum the database, i.e. cleans out deleted records, updates 
>statistics
> J> Syntax:
> J> VACUUM [VERBOSE] [ANALYZE] [table]
> J> or
> J> VACUUM [VERBOSE]  ANALYZE  [table [(attr1, ...attrN)]];
> 
> J> what is the correct syntax for doing a vacuum analyse?
> 
> The correct syntax as shown in the help message is 
> 'vacuum analyze'. Notice the alternative spelling of analyse with a
> 'z' not an 's'.
> 
> Hope you have a very nice day, :-)

[ picture of jim slapping himself in the forehead ]

geez.

i know it was right in front of me, but, g.

[ in his best homer simpson voice ] Doh!






Re: [GENERAL] Unsigned Datatypes

2000-01-20 Thread Jim Mercer

On Thu, Jan 20, 2000 at 05:18:36PM -0500, Bruce Momjian wrote:
> > The standard doesn't provide for specifically unsigned types, so I do not
> > believe anyone would want to go through that trouble. Sorry.
> 
> Actually, oid data type is an unsigned int4.  If it does not behave that
> way, it is a bug.

is this just in pgsql, or in SQL in general?






[GENERAL] finding last element of an index

2000-01-27 Thread Jim Mercer


if i have a table:

CREATE TABLE booga
(
name   varchar(20),
addr   varchar(20)
);
CREATE UNIQUE INDEX booga_idx ON booga (name);

is there a fast way to find the last entry in index booga_idx?
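
assuming "last" means the highest name value, one way that can use the index
is an ORDER BY ... DESC with a LIMIT (a sketch; whether the planner actually
walks the index backwards depends on the version):

```sql
SELECT name FROM booga ORDER BY name DESC LIMIT 1;
```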







[GENERAL] spontaneous reboots FreeBSD 3.x and 4.0-stable

2000-03-21 Thread Jim Mercer


i've got two systems, one a pentium 233 with adaptec 2940, scsi, using vinum
running a big postgresql database, and a new one, dual pentium 550, dual adaptec
29160, not using vinum, and another big postgresql database.

the first system runs 3.4-stable and gets spontaneous reboots infrequently,
and there are no messages (dmesg, console or syslog) indicating why it rebooted.

initially, i though it might be the vinum.

the new system runs 4.0-stable, doesn't have vinum, and is spontaneously
rebooting (several times in 48 hrs).  again no messages indicating the problem.

postgres runs as a non-root user.

on the second system, it has rebooted during heavy database processing, but
mysterously, has rebooted again within 30-40 minutes, even though the
database processing doesn't automatically restart (ie. the second reboot
was not caused by heavy processing, as there was no processing happening).

has anybody had similar issues?  solutions?  suggestions?




[GENERAL] Re: spontaneous reboots FreeBSD 3.x and 4.0-stable

2000-03-21 Thread Jim Mercer

On Tue, Mar 21, 2000 at 12:06:05PM +0100, Paweł Dubin wrote:
> > the first system runs 3.4-stable and gets spontaneous reboots infrequently,
> > and there are no messages (dmesg, console or syslog) indicating why it rebooted.
> > the new system runs 4.0-stable, doesn't have vinum, and is spontaneously
> > rebooting (several times in 48 hrs).  again no messages indicting the problem.
> 
> We have similar problem (the same configuration but RH 5.1) (I dont know what
> is vinum). It is probably from scsi board when we have 4 devices it reboots
> every 3 days. now with 2 devices every 2 weeks.
> 
> It is probably caused by intel -> driver -> scsi cooperation, because  on K6
> system works ok. Maybe newer kernel with recognized pentium bugs.

i wonder, it might actually be heat.

in the first system, it was originally housed in an IBM PC 325 cabinet and
rebooted quite frequently, then it was moved to a 48VDC rackmount chassis,
with somewhat better airflow, and it rebooted less frequently.

the new server is a rackmount case, but it doesn't have much in the way of
extra fans.

is it possible the reboots we are seeing are due to some component overheating?

> Sorry for my english

your english is fine.




Re: [GENERAL] followup to SELECT/INSERT problem

2000-06-23 Thread Jim Mercer

On Fri, Jun 23, 2000 at 12:00:31PM -0400, Tom Lane wrote:
> Jim Mercer <[EMAIL PROTECTED]> writes:
> > it appears that each call to UPDATE seems to be taking a long, long time to
> > complete.
> 
> Poor choice of plan, maybe?  What does EXPLAIN say about how a typical
> example of the UPDATE will be executed?

silly me, i should have know to do more investigation before going to the list.

as it turns out, the index on the key was not being used.

a "vacuum verbose analyze" caused things to run much, much faster.
(hence the UPDATE ... WHERE key = ... was taking extra long)




Re: [GENERAL] RE: [HACKERS] pg_dump & blobs - editable dump?

2000-07-12 Thread Jim Mercer

On Thu, Jul 13, 2000 at 12:17:28AM +1000, Philip Warner wrote:
> At 14:58 12/07/00 +0100, Peter Mount wrote:
> >Why not have it using something like tar, and the first file being stored in
> >ascii?
> >
> >That way, you could extract easily the human readable SQL but still pipe the
> >blobs to stdout.
> 
> If anyone can send me a nice interface for reading and writing a tar file
> from C, I'll do it. I just don't have the inclination to learn about tar
> internals at the moment. By 'nice' I mean that I would like:

i suspect you might find a library of either tar or cpio read functions as
part of the FreeBSD sysinstall utility.




Re: [GENERAL]

2000-07-27 Thread Jim Mercer

On Thu, Jul 27, 2000 at 12:18:23PM -0400, Mehta, Ashok wrote:
> I am running postgres on RedHat Linux It was running fine and our sysadmin
> added a scsi tape device to the kernel and rebooted the machine so the
> postmaster was killed with -9 signal and after that when I start postmaster

did the sysadmin kill postmaster with -9, or reboot, or both?

reboot does not normally kill things off with a -9, depending on the system
it uses a progression of other nicer signals.

killing postmaster with -9 is bad, as it doesn't give the process any chance
to do shutdown/cleanup functions, which might include clearing the socket in
/tmp.

this might also explain why you got the error after kill -9/reboot, but didn't
get it after a normal reboot.

if your sysadmin is going around using "kill -9" as a standard shutdown method,
well, you might want to speak to them about this, as it isn't really a nice
way to deal with things.

> I get
>  FATAL: StreamServerPort: bind() failed: Permission denied 
>  Is another postmaster already running on that port? 
>  If not, remove socket node (/tmp/.s.PGSQL.5432) and retry.
>   postmaster: cannot create UNIX stream port 
> 
> I have rebooted the machine since then and there is no file in /tmp
> directory to be removed. I am absolutly positive that  another postmaster is
> not running and that file does not exist. 
> 
> 
> Any suggestions would be appreciated.
> 
> Thanks
> Ashok
> 




Re: [GENERAL] 7.1 Release Date

2000-10-16 Thread Jim Mercer

On Mon, Oct 16, 2000 at 12:00:59PM -0400, Bruce Momjian wrote:
> > > Migration tools might ease the pain, sure (though I'd still recommend
> > > doing a full backup before a major version upgrade, just on safety
> > > grounds; so the savings afforded by a tool might not be all that much).
> > 
> > What is needed, IMHO, is a replacement to the pg_upgrade script that can
> > do the following:
> > 1.) Read _any_ previous version's format data files;
> > 2.) Write the current version's data files (without a running
> > postmaster).
> 
> Let me ask.  Could people who need to be up all the time dump their
> data, install PostgreSQL on another machine, load that in, then quickly
> copy the new version to the live machine and restart.  Seems like
> downtime would be minimal.

sure, i've only got 25+ gig of tables, two of which are 10+ gig each.

8^(

it certainly would be nice to have a quicker process than dump/reload.




Re: [GENERAL] 7.1 Release Date

2000-10-16 Thread Jim Mercer

On Mon, Oct 16, 2000 at 12:18:15PM -0400, Bruce Momjian wrote:
> > > Let me ask.  Could people who need to be up all the time dump their
> > > data, install PostgreSQL on another machine, load that in, then quickly
> > > copy the new version to the live machine and restart.  Seems like
> > > downtime would be minimal.
> > 
> > sure, i've only got 25+ gig of tables, two of which are 10+ gig each.
> > 
> > 8^(
> > 
> > it certainly would be nice to have a quicker process than dump/reload.
> 
> I see.  Hmmm.

oh, forgot to mention that some of my indexes take 2+ hours to rebuild
from scratch.




[GENERAL] permissions on databases vs. permissions on tables

2000-11-06 Thread Jim Mercer


ok, so i know how to use GRANT/REVOKE to assign permission levels on tables.

however, it eludes me on how to do it on a database level.

i poked about in the manuals and nothing jumped out and said "here's how
you do it".

for example, i have a database called "samples".

in "samples", i have various tables which have been configured using
"GRANT/REVOKE" to allow/disallow various activities by users.

however, any user can create new tables.

how do i stop/restrict who can add tables (and/or indexes) ?




[GENERAL] adding a C function with optional arguments

2000-12-14 Thread Jim Mercer


i've poked about and made a new function "makecrypt(plainpass, salt)", and
it works ok.

basically, it is just a hook to the unix crypt() function.

i was wondering if someone could tell me how, if possible, to implement
a c function such that i can do:

> SELECT makecrypt('secret', 'salt');
or
> SELECT makecrypt('secret');

such that if salt is not specified, the c function would make one up.




[GENERAL] experience with replication?

2001-01-25 Thread Jim Mercer


can anyone post some of their experiences with using the new replication
stuff in production?




Re: [GENERAL] SCO Extortion

2004-01-20 Thread Jim Mercer
On Tue, Jan 20, 2004 at 06:16:46PM -0800, Gavin M. Roy wrote:
> Thanks for the feedback thus far.  I should also mention I use freebsd 
> for other stuff, but I am mainly asking, in people's experience, which is 
> the best for PostgreSQL to live on specifically.  In terms of a nice smp 
> high end scsi system.  Sorry for the lack of specifics on that before.

i've built several billing systems for long distance companies using pgsql
on FreeBSD since '97.  i've found the combination quite stable and robust,
on both uniprocessor and SMP boxes, using big raw disks, hardware RAID, and
the built-in vinum software RAID.

i've found upgrading the core OS, as well as upgrading pgsql and other apps,
to be fairly clean and trouble-free.

> 
> Gavin
> 
> 
> Marc G. Fournier wrote:
> 
> >On Tue, 20 Jan 2004, Gavin M. Roy wrote:
> >
> > 
> >
> >>I'm currently one of the targets of SCO's linux licensing extortion
> >>business plan, and am contemplating switching to one of the BSD's to
> >>avoid any potential problems.   I'm curious which BSD people prefer for
> >>large scale databases and why.  Any pointers as to which I should test 
> >>out?
> >>   
> >>
> >
> >for the longest time, the BSDs have been split as:
> >
> >FreeBSD - i386 rock solid
> >NetBSD  - work on as many platforms as possible
> >OpenBSD - be as secure as possible
> >
> >There is a lot of code sharing between them all though, so, IMHO, a lot of
> >it is personal preference ... I've been using FreeBSD since '95, and
> >other than having a habit of finding (and, usually, pushing) its limits,
> >I've been most happy with it ...
> >
> >
> >
> >Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
> >Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
> > 
> >
> 
> 
> 


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] pg_hba.conf and crypt/password

2001-03-31 Thread Jim Mercer

On Sat, Mar 31, 2001 at 10:31:36AM +0200, Peter Eisentraut wrote:
> > what i want is for the pg_shadow file to contain encrypted passwords like
> > /etc/passwd, and for the server to encrypt the plain text password handed
> > to it and compare with the crypto-gunge in pg_shadow.
> 
> This is not possible.

i had a look at the code, and figured i wanted similar behaviour for:

host all 127.0.0.1 255.255.255.255 password /dir/passwd.file

but, rather than have a file, i wanted to use pg_shadow with encrypted
passwords.

so the following patch allows for:

host all 127.0.0.1 255.255.255.255 password pg_shadow

where "pg_shadow" is a special key (like "ident sameuser") to set up this
behaviour.

the patch is done in such a way that it will not impact existing installations.



*** auth.c.orig Fri Mar 30 19:37:08 2001
--- auth.c  Fri Mar 30 19:28:20 2001
***
*** 695,701 
  static int
  checkPassword(Port *port, char *user, char *password)
  {
!   if (port->auth_method == uaPassword && port->auth_arg[0] != '\0')
return verify_password(port->auth_arg, user, password);
  
return crypt_verify(port, user, password);
--- 695,702 
  static int
  checkPassword(Port *port, char *user, char *password)
  {
!   if (port->auth_method == uaPassword && port->auth_arg[0] != '\0'
!   && strcmp(port->auth_arg, "pg_shadow") != 0)
return verify_password(port->auth_arg, user, password);
  
return crypt_verify(port, user, password);
*** crypt.c.origFri Mar 30 19:38:26 2001
--- crypt.c Fri Mar 30 19:39:07 2001
***
*** 280,287 
 * authentication method being used for this connection.
 */
  
!   crypt_pwd =
!   (port->auth_method == uaCrypt ? crypt(passwd, port->salt) : passwd);
  
if (!strcmp(pgpass, crypt_pwd))
{
--- 280,294 
 * authentication method being used for this connection.
 */
  
!   if (port->auth_method == uaCrypt)
!   crypt_pwd = crypt(passwd, port->salt);
!   else
!   {
!   /* if port->auth_arg, encrypt password from client before compare */
!   if (port->auth_arg[0] != 0)
!   pgpass = crypt(pgpass, passwd);
!   crypt_pwd = passwd;
!   }
  
if (!strcmp(pgpass, crypt_pwd))
{




Re: [GENERAL] template1, can there be a template2/3/4?

2001-06-04 Thread Jim Mercer

On Mon, Jun 04, 2001 at 02:16:35PM -0400, Tom Lane wrote:
> I wrote:
> > Jim Mercer <[EMAIL PROTECTED]> writes:
> >> where are these relations?  is this a compile-time thing, or can it be done
> >> on-the-fly?
> 
> > Compile-time; see SharedSystemRelationNames in
> > backend/utils/init/globals.c.  You'd have to do an initdb after changing
> > it, anyway.
> 
> BTW, the reason that pg_shadow is installation-wide is that it's
> not real clear what the 'ownership' column in pg_database means
> if users are not installation-wide.  Before cutting and hacking,
> you'd need to think carefully about just what semantics you are
> really after.

theoretically, if i nuked everything from SharedSystemRelationNames, then
each database would be wholly stand-alone, and the remaining code should just
work?


