Re: [GENERAL] pg_dump large-file support 16GB

2005-03-21 Thread Rafael Martinez Guerrero
On Fri, 2005-03-18 at 15:58, Tom Lane wrote:
 Rafael Martinez [EMAIL PROTECTED] writes:
  On Thu, 2005-03-17 at 10:17 -0500, Tom Lane wrote:
  Is that a plain text, tar, or custom dump (-Ft or -Fc)?  Is the behavior
  different if you just write to stdout instead of using --file?
 
  - In this example, it is a plain text (--format=p).
  - If I write to stdout and redirect to a file, the dump finishes without
  problems and I get a dump text file over 16GB.
 
 In that case, you have a glibc or filesystem bug and you should be
 reporting it to Red Hat.  The *only* difference between writing to
 stdout and writing to a --file option is that in one case we use
 the preopened stdout FILE* and in the other case we do
 fopen(filename, "w").  Your report therefore is stating that there
 is something broken about fopen'd files.
 

Hello again

I have been testing a little more before I open a bug report at RH. I
have a simple test program to test 'fopen' on the same filesystem where I
am having problems. I cannot reproduce the problem, and the files I produce
with this program can get bigger than 16GB without problems.

Do you use any special option when you compile pg_dump, or in the
program, that could influence how the program behaves and help me to
reproduce the problem?

PS.- Be careful with this program: it won't stop and will consume
all the free space in your filesystem ;)


-bash-2.05b$ cat test_fopen.c

#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv){

  FILE *fp;
  char *filename = argv[1];

  char output[1024];
  int counter = 0;

  if ((fp = fopen(filename, "w")) == NULL){
    printf("fopen error\n");
    return 1;
  }

  while (1){

    sprintf(output, "*** Testing the fopen function in a RHEL server - Counter: %d ***\n", counter);

    if (fputs(output, fp) == EOF){
      printf("fputs error\n");
    }

    counter++;
  }

  fclose(fp);  /* never reached; the loop runs until interrupted */
  return 0;
}

-bash-2.05b$ gcc -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 test_fopen.c -o test_fopen
--

Thanks :)
-- 
Rafael Martinez, [EMAIL PROTECTED]
Center for Information Technology Services
University of Oslo, Norway

PGP Public Key: http://folk.uio.no/rafael/


signature.asc
Description: This is a digitally signed message part


Re: [GENERAL] no IF - am I missing something ?

2005-03-21 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Michael Fuhr [EMAIL PROTECTED] writes:

 On Mon, Mar 21, 2005 at 12:35:22AM -0600, Thomas F.O'Connell wrote:
 The number of lines depends merely on where you place your line breaks.
 
 IF(days_complete >= 120, job_price, 0) AS Days_120
 
 could be written as:
 
 CASE WHEN days_complete >= 120 THEN job_price ELSE 0 END AS Days_120
 
 There might be somewhat less syntactic sugar, but this is not a five 
 line expression and, to me, is more readable than a comma-delimited 
 list where position alone indicates function in the expression.

 CASE is also standard SQL, whereas IF isn't (unless I've overlooked
 it in the SQL:2003 draft).

CASE, despite being standard SQL, is even supported by MySQL ;-)


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Installing PostgreSQL in Debian

2005-03-21 Thread Michael Ben-Nes
Yep :)
but that's experimental.

Peter Eisentraut wrote:
 Michael Ben-Nes wrote:
  I recommend you to compile PG from source so you can use the new 8.0.1

 PostgreSQL 8.0.1 is available in the Debian experimental suite, package
 name postgresql-8.0.

--
--
Canaan Surfing Ltd.
Internet Service Providers
Ben-Nes Michael - Manager
Tel: 972-4-6991122
Cel: 972-52-8555757
Fax: 972-4-6990098
http://www.canaan.net.il
--
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] pg_dump large-file support 16GB

2005-03-21 Thread Tom Lane
Rafael Martinez Guerrero [EMAIL PROTECTED] writes:
 Do you use any special option when you compile pg_dump, or in the
 program, that could influence how the program behaves and help me to
 reproduce the problem?

In a Linux system we'll add -D_GNU_SOURCE to the compile command line.
Also, pg_config.h sets some #define's that might affect things,
particularly #define _FILE_OFFSET_BITS 64.  I see you did both of
those in your test, but you might want to review pg_config.h to see if
anything else looks promising.

Another line of thought is that there is something broken about the
particular build of Postgres that you are using (eg it was affected by a
compiler bug).  You might try building from source, or grabbing the src
RPM and rebuilding from that, and confirming the bug is still there ---
and if so, back off the CFLAGS to minimal optimization and see if it
changes.

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] pl/perl problem

2005-03-21 Thread FERREIRA William (COFRAMI)
hi

I wrote a stored procedure using the pl/perlU language, and its behavior
is strange. My procedure does a SELECT on my database, some processing,
and writes the result to a file.

When I run the procedure the first time, it works fine: the file is
created and the data are in it. But when I run my procedure a second time,
the file is created but the data aren't written to it. Where is the
problem?

I had another problem last week, but I am not able to reproduce it. It was
a very simple function that stored a string in a variable and displayed it
on the screen, something like this:

my $toto = '-';
$toto .= 'titi';
elog NOTICE, $toto;

The problem was: the first time I ran the procedure I got -titi, and the
second time -titititi, etc. The variable $toto wasn't reinitialized.
Maybe somebody has had the same problem.
(If I have enough time, I will post the code.)

thanks in advance

 Will


[GENERAL] Time Stamp

2005-03-21 Thread Hrishikesh Deshmukh
Hi All,

I want to get a timestamp of the queries that I run.
Is there a built-in command to do this, or does one need to write a
function/stored procedure?
Any pointers will help.

Thanks,
Hrishi

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] psql variables

2005-03-21 Thread George Essig
On Mon, 14 Mar 2005 11:06:32 -0500, Paul Cunningham
[EMAIL PROTECTED] wrote:
 I use a bash script (similar to following example) to update tables.
 
 psql -v passed_in_var=\'some_value\'  -f script_name 
 
 Is it possible to pass a value back from psql to the bash script?

You can use '\! command' to execute shell commands within psql.
This may not be what you want, though.
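For the original question — getting a value back into the bash script — one common approach is to capture psql's tuples-only, unaligned output in a shell variable (the table and database names below are made up for illustration):

```shell
# -t suppresses headers/footers, -A drops column alignment,
# so the single result value lands cleanly in the variable
value=$(psql -t -A -c "SELECT max(id) FROM some_table" mydb)
echo "max id is: $value"
```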

George Essig

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Time Stamp

2005-03-21 Thread Richard Huxton
Hrishikesh Deshmukh wrote:
Hi All,
I want to get a timestamp of the queries that i run!
Is there a builtin command to do this/ does one need to write a
function/stored procedure!
Any pointers will help.
You don't say where you want this timestamp.
In psql, look into \timing (see the man page).
To record it in the logs, see the configuration section of the manual,
specifically Error Reporting and Logging.
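A minimal psql session showing the \timing toggle (table name and timing value are only illustrative):

```
test=> \timing
Timing is on.
test=> SELECT count(*) FROM some_table;
 count
-------
   123
(1 row)

Time: 4.567 ms
```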

--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [GENERAL] Help with transactions

2005-03-21 Thread Stephen Howie
Thanks for the reply.
I've noticed a couple of things.  I ran a couple of tests, and at first I 
couldn't duplicate my error on some test tables.  But I then added 
inheritance to one of the tables, and that's when I got the error again.  
It looks like there is an error when obtaining the sequence id (nextval) 
from the original table and using it on a table that has a foreign key to 
the original table, by inserting it into the table that inherits from the 
original table, within a transaction.  What I think is happening is that, 
since the insert is on the inherited table, the foreign key doesn't see 
the insert into the original table until the transaction is committed.

Here is a sample of how I duplicated my error.
Running
SELECT insert_data('A123456789','A','A2345');
on the schema layout below, I get this error:
ERROR:  insert or update on table "table2" violates foreign key 
constraint "table2_fk_id_fk"

=START
CREATE TABLE table1 (
   id serial NOT NULL,
   data character(10) NOT NULL
);
CREATE TABLE table2 (
   id serial NOT NULL,
   fk_id integer NOT NULL,
   more_data character(5) NOT NULL
);
CREATE TABLE inherit_table (
   even_more_data character(1) NOT NULL
)
INHERITS (table1);
ALTER TABLE ONLY table1
   ADD CONSTRAINT table1_pkey PRIMARY KEY (id);
ALTER TABLE ONLY table2
   ADD CONSTRAINT table2_pkey PRIMARY KEY (id);
ALTER TABLE ONLY table2
   ADD CONSTRAINT table2_fk_id_fk FOREIGN KEY (fk_id) REFERENCES 
table1(id) ON UPDATE RESTRICT ON DELETE RESTRICT;

CREATE VIEW view_table1 AS
SELECT table1.id, table1.data
FROM table1;
CREATE VIEW view_table2 AS
SELECT table2.id, table2.fk_id, table2.more_data
FROM table2;
CREATE VIEW view_inherit_table AS
SELECT inherit_table.id, inherit_table.data, inherit_table.even_more_data
FROM inherit_table;
CREATE RULE view_table1_insert AS ON INSERT TO view_table1 DO INSTEAD 
INSERT INTO table1 (id, data) VALUES (new.id, new.data);
CREATE RULE view_table2_insert AS ON INSERT TO view_table2 DO INSTEAD 
INSERT INTO table2 (id, fk_id, more_data) VALUES (new.id, new.fk_id, 
new.more_data);
CREATE RULE view_inherit_table_insert AS ON INSERT TO view_inherit_table 
DO INSTEAD INSERT INTO inherit_table (id, data, even_more_data) VALUES 
(new.id, new.data, new.even_more_data);

CREATE FUNCTION insert_table2 (integer, character) RETURNS integer
   AS '
DECLARE
  table2_id INTEGER;
  table1_id ALIAS FOR $1;
  newdata ALIAS FOR $2;
BEGIN
  table2_id = nextval(''table2_id_seq'');
  INSERT INTO view_table2 (id, fk_id, more_data) VALUES (table2_id, 
table1_id, newdata);
 
  RETURN table2_id;

END;
'
   LANGUAGE plpgsql SECURITY DEFINER;
CREATE FUNCTION insert_inherit_table (character, character) RETURNS integer
   AS '
DECLARE
  table1_id INTEGER;
  newdata ALIAS FOR $1;
  new_even_more_data ALIAS FOR $2;
BEGIN
  table1_id = nextval(''public.table1_id_seq'');
  INSERT INTO view_inherit_table (id, data, even_more_data) VALUES 
(table1_id, newdata, new_even_more_data);

  RETURN table1_id;
END;
'
   LANGUAGE plpgsql SECURITY DEFINER;
CREATE FUNCTION insert_data (character, character, character) RETURNS 
boolean
   AS '
DECLARE

  newdata1 ALIAS FOR $1;
  newdata2 ALIAS FOR $2;
  newdata3 ALIAS FOR $3;
  table1_id INTEGER = 0;
  table2_id INTEGER = 0;
BEGIN
  table1_id = insert_inherit_table(newdata1, newdata2 );
  RAISE LOG ''Table1 ID: %'', table1_id;
  table2_id = insert_table2(table1_id, newdata3);
  IF table2_id > 0 THEN
  RETURN TRUE;
  ELSE
  RETURN FALSE;
  END IF;
END;
'
   LANGUAGE plpgsql SECURITY DEFINER;
END===
Also, in my original schema I'm getting an increment of 2 every time I 
run nextval.  I can't duplicate this yet, but I'm looking into it.  
Possibly it's my error somewhere in the function.

Thanks
-
Stephen Howie

Michael Fuhr wrote:
On Fri, Mar 18, 2005 at 09:22:52AM -0500, Stephen Howie wrote:
 

I have a java program that accepts print streams and inserts them into a 
spool table as a bytea.  This fires a pl/pgsql trigger that passes the 
bytea to a pl/perl function, which processes the bytea and passes the 
results back as an array.  It then proceeds to insert the data into 
multiple tables.  The problem is that two of the tables that data is 
inserted into inside this transaction are related: one has a foreign key 
to the other.  As you can guess, I get a foreign key violation because 
the transaction is not committed 
   

A transaction doesn't need to be committed for operations to see
the effects of previous operations in the same transaction, but
there could be visibility problems related to what happens when.
Could you post the simplest self-contained example that demonstrates
the problem?  It'll be easier to understand the interactions if we
can see the exact code.  In simple tests I successfully did what
you describe, so apparently my experiment didn't duplicate what
you're doing.
What version of PostgreSQL are you using?
 

and as far as I 

Re: [GENERAL] Time Stamp

2005-03-21 Thread Scott Marlowe
On Mon, 2005-03-21 at 09:29, Hrishikesh Deshmukh wrote:
 Hi All,
 
 I want to get a timestamp of the queries that i run!
 Is there a builtin command to do this/ does one need to write a
 function/stored procedure!
 Any pointers will help.

This is actually a pretty wide-open question.  Do you want to know how
long it took to run the query, or when it ran?  Do you want a list of
all the times it ran, or just the latest?  Are you basically auditing db
access, or just checking to see how well it's running?

If you're just monitoring for performance etc., then look at the
logging setting log_min_duration_statement, which tells the backend
how long a query needs to take to be logged.  You can use the other
settings in the postgresql.conf file to force it to log every statement
and its duration.
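As a rough sketch, the relevant postgresql.conf knobs look like this (the values are only examples; check the Error Reporting and Logging section of your version's manual for exact names and units):

```
# log any statement that runs for 250 ms or longer
log_min_duration_statement = 250
#log_statement = 'all'    # alternatively, log every statement...
#log_duration = true      # ...and its duration
```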

Otherwise, you can use a trigger to force it to store the timestamp of
every row inserted, if that's what you need.  I think there's a basic
example in the online docs for server programming.  There are also many
examples posted to this list, so you could search the archives.
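A minimal sketch of such a trigger (table, column, and function names are invented for illustration):

```sql
CREATE TABLE audit_demo (
    data        text,
    inserted_at timestamptz
);

CREATE FUNCTION set_inserted_at() RETURNS trigger AS '
BEGIN
    NEW.inserted_at := now();   -- stamp the row at insert time
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER audit_demo_ts BEFORE INSERT ON audit_demo
    FOR EACH ROW EXECUTE PROCEDURE set_inserted_at();
```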

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Time Stamp

2005-03-21 Thread Hrishikesh Deshmukh
Hi,

I want to know how long it took for my query to run!

Thanks,
Hrishi


On Mon, 21 Mar 2005 15:57:38 +, Richard Huxton dev@archonet.com wrote:
 Hrishikesh Deshmukh wrote:
  Hi All,
 
  I want to get a timestamp of the queries that i run!
  Is there a builtin command to do this/ does one need to write a
  function/stored procedure!
  Any pointers will help.
 
 You don't say where you want this timestamp.
 
 In psql look into \timing (see the man page)
 To record this in the logs, see the configuration section of the manual,
 specifically Error Reporting and Logging
 
 --
Richard Huxton
Archonet Ltd


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] PostgreSQL users, tell your tale on Slashdot

2005-03-21 Thread Miles Keaton
Slashdot story just posted a few minutes ago:
http://slashdot.org/article.pl?sid=05/03/21/1635210

I've been using PostgreSQL for years on small projects, and I have an
opportunity to migrate my company's websites from Oracle to an
open-source alternative. It would be good to be able to show the PHBs
that PostgreSQL is a viable candidate, but I'm unable to find a list
of high-traffic sites that use it. Does anyone know of any popular
sites that run PostgreSQL?

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Compression

2005-03-21 Thread Greg Stark
Bruce Momjian pgman@candle.pha.pa.us writes:

 Stanislaw Tristan wrote:
  Is it possible to compress traffic between the server and client while
  the server returns a query result?
  It is very relevant for dial-up users.
  What is the solution?

 No, unless SSL compresses automatically.

Without checking the source, I'll bet it does.
Any good encryption system should compress first.

-- 
greg


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Tracking row updates - race condition

2005-03-21 Thread Alex Adriaanse
I think that would greatly decrease the chances of a race condition 
occurring, but I don't think it'd solve it.  What if 150 other revisions 
occur between a row update and its corresponding commit?

Alex
Vincent Hikida wrote:
To fetch all updates since the last synchronization, the client would 
calculate a value for $lastrevision by running this query on its 
local database:
SELECT max(revision) AS lastrevision FROM codes;

It would then fetch all updated rows by running this query against 
the server:
SELECT * FROM codes WHERE revision > $lastrevision;

How about
SELECT * FROM codes WHERE revision > $lastrevision - 100
You could use another number other than 100. As you said, the client 
can handle duplicates.

Vincent
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if 
your
 joining column's datatypes do not match

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


[GENERAL] how do I clear a page, or set an item in a page to 'free'?

2005-03-21 Thread Eric Parusel
I've got a mucked up page in my db, and I can't complete a database dump 
until a particular page is fixed/removed from the table. :(

Could someone let me know the proper way to go about making a change 
such as this?:

1) Clear the page
or
2) Set the bad items in the page to 'free'
I can see the page with pg_filedump. (I know the page number)
Either method would suffice, I just need to get the job done :/
Thanks for any help you can offer,
Eric
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Tracking row updates

2005-03-21 Thread Alex Adriaanse
Qingqing Zhou wrote:
Alex Adriaanse [EMAIL PROTECTED] writes
 

This seems to work, except there exists a race condition.  Consider the
following series of events (in chronological order):
  1. Initially, in the codes table there's a row with id=1, revision=1,
 and a row with id=2, revision=2
  2. Client A begins a transaction, and updates the row with id=1,
 resulting in revision=3 for that row
  3. Client B begins a transaction, and updates the row with id=2,
 resulting in revision=4 for that row
  4. Client B commits the transaction
  5. Client C (which has $lastrevision=2 in its local database)
 synchronizes with the database by doing SELECT * FROM codes WHERE
 revision > 2; and retrieves client B's update to the row with
 id=2, revision=4 (it doesn't yet see the update from client A)
  6. Client A commits the transaction
  7. Some time later, Client C synchronizes with the database again.
 $lastrevision for its database is now 4, so doing SELECT * FROM
 codes WHERE revision > 4; does not retrieve any rows.  So client C
 never sees client A's update to the row with id=1
Essentially, the race condition occurs when the order of clients
committing transactions (i.e. the updates becoming visible to other
clients) differs from the order of clients generating sequence values.
Do you guys have any suggestions on how to avoid this race condition, or
maybe a more elegant way to synchronize the clients with the server?
   

In my understanding, you are doing something like CVS does. Say you
don't check out a file and you make a revision on the version you now
see (say version 1); when you want to commit, you will probably receive
a merge required notice, since in this interval the file may already have
been updated by another user (to version 2) - he is free to do so since
nobody knows that you might commit an update. To avoid this, you have to
check out the file, i.e., lock the file to prevent other changes; then
you are free of any merge requirement. The cost is that you locked the
file and nobody could change it.  So the only options are merge or lock.
Regards,
Qingqing
Applying this analogy to our database, wouldn't that require a 
table-level lock during a CVS-like commit (which would mean locking the 
table, getting the revision number, updating the row(s), and committing 
the transaction)?
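For what it's worth, the "lock" option from the analogy might look roughly like this (the sequence name is invented; note this serializes all writers on the table, which is exactly the cost Qingqing describes):

```sql
BEGIN;
-- EXCLUSIVE MODE blocks concurrent writers but still allows plain SELECTs,
-- so revision numbers become visible in the same order they are assigned
LOCK TABLE codes IN EXCLUSIVE MODE;
UPDATE codes SET revision = nextval('codes_revision_seq') WHERE id = 1;
COMMIT;
```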

Alex
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] how do I clear a page, or set an item in a page to 'free'?

2005-03-21 Thread Tom Lane
Eric Parusel [EMAIL PROTECTED] writes:
 I've got a mucked up page in my db, and I can't complete a database dump 
 until a particular page is fixed/removed from the table. :(

 Could someone let me know the proper way to go about making a change 
 such as this?:

 1) Clear the page
 or
 2) Set the bad items in the page to 'free'

Zeroing the page is the most painless way.  dd from /dev/zero will get
it done --- but note that you have to shut down the postmaster meanwhile
to ensure Postgres will see your change.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] how do I clear a page, or set an item in a page to 'free'?

2005-03-21 Thread Eric Parusel
Tom Lane wrote:
Eric Parusel [EMAIL PROTECTED] writes:
I've got a mucked up page in my db, and I can't complete a database dump 
until a particular page is fixed/removed from the table. :(
Zeroing the page is the most painless way.  dd from /dev/zero will get
it done --- but note that you have to shut down the postmaster meanwhile
to ensure Postgres will see your change.
ok, I can dd if=/dev/zero of=base/dbnum/tablenum, but how do I figure 
out where and how much to write over?
Is there an easy way to see or calculate the offset value for a particular 
page number? (28393)

from pg_filedump:
Block 28393
<Header> -----
 Block Offset: 0x0ddd2000   Offsets: Lower    56 (0x0038)
 Block: Size 8192  Version 2          Upper  1064 (0x0428)
 LSN:  logid 242 recoff 0x9387bd78    Special 8192 (0x2000)
 Items: 9                   Free Space: 1008
 Length (including item array): 60
So I could take the block offset, convert it from hex (to 232595456... 
oh, /8192 = 28393 :) ), and the block size (which is default: 8192) to do:

dd if=/dev/zero of=base/17760/18804 obs=8192 seek=28393
?
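The arithmetic checks out; a quick shell sanity check of the offset-to-page conversion:

```shell
# block offset from pg_filedump, divided by the 8192-byte page size
printf '%d\n' $(( 0x0ddd2000 / 8192 ))    # prints 28393
```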
Should I, or do I need to REINDEX after this?
If you could confirm that I answered my own question, that would be 
great :)

Thanks,
Eric
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Compression

2005-03-21 Thread Jay Guerette
FWIW: If you use a stunnel or ssh connection of some sort, merely for
compression and not security, the ARCFOUR encryption algorithm appears
to have the lowest overhead and fastest throughput. I benchmarked it once
for exactly this purpose.

   Is it possible to compress traffic between the server and client while 
   the server returns a query result?
   It is very relevant for dial-up users.
   What is the solution?
 
  No, unless SSL compresses automatically.
 
 Without checking the source, I'll bet it does.
 Any good encryption system should compress first.

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] how do I clear a page, or set an item in a page to 'free'?

2005-03-21 Thread Scott Marlowe
Wouldn't zero_damaged_pages help here?

http://www.postgresql.org/docs/7.4/static/runtime-config.html#RUNTIME-CONFIG-DEVELOPER

On Mon, 2005-03-21 at 13:28, Eric Parusel wrote:
 Tom Lane wrote:
  Eric Parusel [EMAIL PROTECTED] writes:
  
 I've got a mucked up page in my db, and I can't complete a database dump 
 until a particular page is fixed/removed from the table. :(
  
  Zeroing the page is the most painless way.  dd from /dev/zero will get
  it done --- but note that you have to shut down the postmaster meanwhile
  to ensure Postgres will see your change.
 
 ok, I can dd if=/dev/zero of=base/dbnum/tablenum, but how do I figure 
 out where and how much to write over?
 Is there an easy way to see or calculate the offset value for a particular 
 page number? (28393)
 
 from pg_filedump:
 Block 28393
 <Header> -----
   Block Offset: 0x0ddd2000   Offsets: Lower    56 (0x0038)
   Block: Size 8192  Version 2          Upper  1064 (0x0428)
   LSN:  logid 242 recoff 0x9387bd78    Special 8192 (0x2000)
   Items: 9                   Free Space: 1008
   Length (including item array): 60
 
 So I could take the block offset, convert it from hex (to 232595456... 
 oh, /8192 = 28393 :) ), and the block size (which is default: 8192) to do:
 
 dd if=/dev/zero of=base/17760/18804 obs=8192 seek=28393
 ?
 
 Should I, or do I need to REINDEX after this?
 
 If you could confirm that I answered my own question, that would be 
 great :)
 
 Thanks,
 Eric
 
 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?
 
http://archives.postgresql.org

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] how do I clear a page, or set an item in a page to 'free'?

2005-03-21 Thread Tom Lane
Eric Parusel [EMAIL PROTECTED] writes:
 ok, I can dd if=/dev/zero of=base/dbnum/tablenum, but how do I figure 
 out where and how much to write over?
 Is there an easy way to see or calculate the offset value for a particular 
 page number? (28393)

dd bs=8k seek=28393 count=1

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] how do I clear a page, or set an item in a page to 'free'?

2005-03-21 Thread Scott Marlowe
ahh, ok.  I just know I'm much more nervous about zeroing stuff by hand
than letting the backend do it for me.

On Mon, 2005-03-21 at 13:54, Tom Lane wrote:
 Scott Marlowe [EMAIL PROTECTED] writes:
  Wouldn't zero_damaged_pages help here?
 
 Only if there's detectable corruption in the page header, which there
 seems not to be.
 
   regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] how do I clear a page, or set an item in a page to 'free'?

2005-03-21 Thread Tom Lane
Scott Marlowe [EMAIL PROTECTED] writes:
 ahh, ok.  I just know I'm much more nervous about zeroing stuff by hand
 than letting the backend do it for me.

Well, I certainly hope Eric is gonna save aside a copy of the file
(if not the whole database) before he hacks it ;-)

BTW, I missed the point about REINDEX.  Yeah, that's probably a good
idea to get rid of any index entries pointing at the removed rows.

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] how do I clear a page, or set an item in a page to 'free'?

2005-03-21 Thread Tom Lane
Scott Marlowe [EMAIL PROTECTED] writes:
 Wouldn't zero_damaged_pages help here?

Only if there's detectable corruption in the page header, which there
seems not to be.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Compression

2005-03-21 Thread Michael Fuhr
On Mon, Mar 21, 2005 at 12:45:21PM -0500, Greg Stark wrote:
 Bruce Momjian pgman@candle.pha.pa.us writes:
  
  No, unless SSL compresses automatically.
 
 Without checking the source, I'll bet it does. 
 Any good encryption system should compress first.

I just ran some tests and it doesn't look like OpenSSL does compression
by default, at least not with my setup (PostgreSQL 8.0.1, FreeBSD
4.11-STABLE, OpenSSL 0.9.7d from the FreeBSD source tree).  Here's
what I did:

CREATE TABLE foo (t text);
INSERT INTO foo VALUES (repeat('x', 1000));
SELECT * FROM foo;

I'm assuming that the 1000 x's could be compressed to a much shorter
sequence.  Here are tcpdumps of the SELECT over various connection
types:

Non-SSL PostgreSQL connection:

127.0.0.1.2521 > 127.0.0.1.5480: P 76:100(24) ack 262 win 57344 
127.0.0.1.5480 > 127.0.0.1.2521: P 262:1318(1056) ack 100 win 57344

SSL PostgreSQL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)

127.0.0.1.1459 > 127.0.0.1.5480: P 1372:1462(90) ack 2540 win 57344 
127.0.0.1.5480 > 127.0.0.1.1459: P 2540:3670(1130) ack 1462 win 57344

Non-SSL PostgreSQL connection over uncompressed SSH tunnel:

127.0.0.1.3165 > 127.0.0.1.22: P 4480:4544(64) ack 5728 win 57344 
127.0.0.1.22 > 127.0.0.1.3165: P 5728:6824(1096) ack 4544 win 57344

Non-SSL PostgreSQL connection over compressed SSH tunnel (compression
level 6):

127.0.0.1.3767 > 127.0.0.1.22: P 3632:3688(56) ack 5104 win 57344 
127.0.0.1.22 > 127.0.0.1.3767: P 5104:5192(88) ack 3688 win 57344

Only the last case, a PostgreSQL connection over a compressed SSH
tunnel, showed any compression in the response.

It looks like OpenSSL supports compression but the application has
to enable it:

http://www.openssl.org/docs/ssl/SSL_COMP_add_compression_method.html

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] how do I clear a page, or set an item in a page to 'free'?

2005-03-21 Thread Eric Parusel
Tom Lane wrote:
Eric Parusel [EMAIL PROTECTED] writes:
ok, I can dd if=/dev/zero of=base/dbnum/tablenum, but how do I figure 
out where and how much to write over?
Is there an easy way to see or calculate the offset value for a particular 
page number? (28393)
dd bs=8k seek=28393 count=1
shutdown postgres
cp base/dbnum/tablenum* /somewhere/else
dd if=/dev/zero of=base/dbnum/tablenum bs=8k seek=28393 count=1
startup postgres
It worked...!   select count(*) from table; now works fine!
I'm currently doing a vacuum then a db dump to confirm there's no other 
page issues...

Vacuum is aptly reporting:
WARNING:  relation "table" page 28393 is uninitialized --- fixing
Thanks for your assistance, hopefully this helps someone else in the 
future a little bit.

I don't know why the problem occurred, but I don't think it's realistic 
to figure that out easily. (disk, raid, server, os, pgsql, or some 
combination!)

Eric
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Encoding-related errors when moving from 7.3 to 8.0.1

2005-03-21 Thread Alvaro Herrera
On Sun, Mar 20, 2005 at 10:02:24AM -0500, Carlos Moreno wrote:

Carlos,

 So, our system (CGI's written in C++ running on a Linux server)
 simply takes whatever the user gives (properly validated and
 escaped) and throws it in the database.  We've never encountered
 any problem  (well, or perhaps it's the opposite?  Perhaps we've
 always been living with the problem without realizing it?)

The latter, I think.  The problem is character recoding.  If your old
system has been running with encoding SQL_ASCII, then no recoding ever
takes place.  If you are now using UTF8 or latin1 (say) as the server
encoding, then as soon as the client is using a different encoding,
a conversion has to take place in order to make the new data correct
w.r.t. the server encoding.  If the wrong conversion takes place, or if
no conversion takes place, you may either end up with invalid data, or
have the server reject your input (as was the case here).

So the moral of the story seems to be that yes, you need to make each
application issue the correct client_encoding before entering any data.
You can attach it to the user or database, by issuing ALTER USER (resp.
DATABASE).  But if you are using a web interface, where the user can
enter data in either win1252 or latin1 encoding (or whatever) depending
on the environment, then I'm not sure what you should do.  One idea
would be to do nothing, but that seems very invalid-data-prone.  Another
idea would be to have the user select an encoding (and maybe display the
data to them after the recoding has taken place so they can correct it
in case they got it wrong).  This seems messy and likely to upset your
users.
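As a sketch, the per-user/per-database settings mentioned above look like this (role and database names are invented):

```sql
-- default client_encoding whenever this role or database is used
ALTER USER webapp SET client_encoding TO 'LATIN1';
ALTER DATABASE appdb SET client_encoding TO 'UTF8';

-- or per session, issued by the application right after connecting
SET client_encoding TO 'LATIN1';
```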

Someone else may have better advice for you on this.  I haven't really
worked with these things.

-- 
Alvaro Herrera ([EMAIL PROTECTED])
I can't go to a restaurant and order food because I keep looking at the
fonts on the menu.  Five minutes later I realize that it's also talking
about food (Donald Knuth)

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] bit varying(512) vs bytea(64)

2005-03-21 Thread TJ O'Donnell
I have N-bit data pairs.  I want to write a c-language function
which compares bits set in each.  N is typically 512, but could be other.
I can store this as bit varying(512) or bytea(64).  I can't decide which.
Here are the questions that concern me.
1) will each take the same storage?
2) can I pass bit varying data to a c-language function?  I can't find any
   docs or examples of that.
3) are bit strings stored as actual bits or as character strings of 0 and 1?
Thanks,
TJ
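For the bit-comparison part of the question, the core operation (counting bits set in both of two N-bit values) can be prototyped in Python before committing to a C function. This sketch treats each value as 64 raw bytes, as a bytea(64) column would store a 512-bit value; the function name is illustrative:

```python
def common_bits(a: bytes, b: bytes) -> int:
    """Count bit positions set in both a and b (bitwise AND, then popcount)."""
    assert len(a) == len(b)
    return sum(bin(x & y).count("1") for x, y in zip(a, b))

# Two 512-bit (64-byte) fingerprints with a known overlap.
a = bytes([0b10101010] * 64)
b = bytes([0b11001100] * 64)
print(common_bits(a, b))  # 2 common bits per byte * 64 bytes = 128
```

A C-language function receiving bytea gets essentially this byte array (after the varlena header), which is one practical argument for bytea over bit varying here.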
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Encoding-related errors when moving from 7.3 to 8.0.1

2005-03-21 Thread Alvaro Herrera
On Sun, Mar 20, 2005 at 10:02:24AM -0500, Carlos Moreno wrote:

Carlos,

 Carlos
 --
 PS: I have a strict white-list anti-spam filter in place, which
 is why a direct e-mail would be rejected -- let me know if
 you want to write directly through e-mail, so that I can
 add you to the white list file.

I forgot to mention that I did receive the rejected mail message,
which directed me to a URL which threw a 404 error.  You may want to
take a look ...

-- 
Alvaro Herrera ([EMAIL PROTECTED])
Amanece.   (Ignacio Reyes)
 El Cerro San Cristóbal me mira, cínicamente, con ojos de virgen

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Compression

2005-03-21 Thread Jan Wieck
On 3/20/2005 10:50 PM, Bruce Momjian wrote:
Stanislaw Tristan wrote:
Is it possible to compress traffic between server and client while the server
returns query results?
It would be very useful for dial-up users.
What is the solution?
No, unless SSL compresses automatically.
You can use ssh port forwarding with compression. Works quite well.
Jan
--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] New user: Windows, Postgresql, Python

2005-03-21 Thread Paul Moore
[EMAIL PROTECTED] (Marco Colombo) writes:

 It seems python documentation is plain wrong, or I'm not able to
 read it at all:

 http://docs.python.org/ref/physical.html

 A physical line ends in whatever the current platform's convention is for
 terminating lines. On Unix, this is the ASCII LF (linefeed) character. On
 Windows, it is the ASCII sequence CR LF (return followed by linefeed). On
 Macintosh, it is the ASCII CR (return) character.

 This is the language _reference_ manual, btw. I'm very surprised to hear
 python on windows is so broken.

I believe this is wrong in two ways - first, it hasn't been updated
to cater for the recent Universal Newline support, and second, it
applies only to Python source code files (embedded code using the C
APIs should pass code using C newline conventions, ie \n characters,
as we have confirmed). I've submitted a Python bug report (SF ref
1167922) against the documentation.
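The universal-newline behaviour mentioned above is easy to check directly. A small sketch (modern Python shown here; in 2.3-era Python the same effect needed the "U" open mode):

```python
import io
import os
import tempfile

# Write a source-like file with Windows (CRLF) line endings.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"line1\r\nline2\r\nline3\r\n")

# Universal newlines: all platform line-ending conventions are
# translated to '\n' when the file is read back in text mode.
with io.open(path, "r") as f:
    text = f.read()
assert text == "line1\nline2\nline3\n"
os.remove(path)
```

This is why the documented per-platform restriction no longer matches the implementation: any of the three conventions reads back identically.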

I've suggested updated wording for this section as follows:


A physical line is a sequence of characters terminated by an end-of-line
sequence.  In source files, any of the standard platform line
termination sequences can be used - the \UNIX form using \ASCII{} LF
(linefeed), the Windows form using the \ASCII{} sequence CR LF (return
followed by linefeed), or the Macintosh form using the \ASCII{} CR
(return) character.  All of these forms can be used equally, regardless
of platform.

When embedding Python, source code strings should be passed to Python
APIs using the standard C conventions for newline characters (the
\code{\e n} character, representing \ASCII{} LF, is the line
terminator).


Is that clearer?

Paul.
-- 
Once the game is over, the King and the pawn go back in the same box. --
Italian Proverb

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Tracking row updates - race condition

2005-03-21 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Alex Adriaanse [EMAIL PROTECTED] writes:

 I think that would greatly decrease the chances of a race condition
 occurring, but I don't think it'd solve it.  What if 150 other
 revisions occur between a row update and its corresponding commit?

How about the following:

* Use a TIMESTAMP rather than a SERIAL
* Set this timestamp to NULL in your INSERT/UPDATE trigger
* Use a cron job to set the timestamp to current_timestamp when it's NULL

This way the client would lag behind somewhat, depending on the cron
job frequency, but it should not miss a change.
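A toy simulation of this scheme, with a dict standing in for the table, a function standing in for the trigger, and another for the cron job (all names are illustrative, not from the thread):

```python
from datetime import datetime, timedelta

rows = {}                    # id -> last_changed (None = dirty, not yet stamped)
clock = datetime(2005, 3, 21)

def update_row(row_id):
    # Trigger behaviour: any INSERT/UPDATE clears the timestamp to NULL.
    rows[row_id] = None

def cron_pass():
    # Cron behaviour: stamp every NULL timestamp with the current time.
    global clock
    clock += timedelta(minutes=1)
    for row_id, ts in rows.items():
        if ts is None:
            rows[row_id] = clock

def changed_since(last_sync):
    # Client query: rows stamped after the last sync, plus rows still NULL.
    return {r for r, ts in rows.items() if ts is None or ts > last_sync}

update_row(1); update_row(2)
cron_pass()
last_sync = clock            # client syncs, having seen rows 1 and 2
update_row(2)                # row 2 changes again before the next cron pass
# The un-stamped (NULL) row cannot be missed by the next sync:
assert changed_since(last_sync) == {2}
```

Because a NULL timestamp always counts as "changed", the race between an update and the client's read collapses to, at worst, the lag of one cron interval.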


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Tsearch vector not stored by update/set

2005-03-21 Thread Andrew J. Kopciuch
 It seems to be selective of only numbers, words with numbers in them,
 words with '.' or '/' characters.  It completely ignores any other words
 or text in any of the 3 fields.


This is a very big hint to your problem.

 You requested the pg_ts_* tables:
 On the Linux-redhat, pg7.3.2

 pg_ts_cfgmap(73 rows)
 ts_name  tok_alias dict_name
 default lword {en_stem}
 default nlword {simple}
 default word {simple}
 default email {simple}
 default url {simple}
 default host {simple}
 default sfloat {simple}
 default version {simple}
 default part_hword {simple}
 default nlpart_hword {simple}
 default lpart_hword {en_stem}
 default hword {simple}
 default lhword {en_stem}
 default nlhword {simple}
 default uri {simple}
 default file {simple}
 default float {simple}
 default int {simple}
 default uint {simple}
 default_russian lword  {en_stem}
 default_russian nlword {ru_stem}
 default_russian word {ru_stem}
 default_russian email {simple}
 default_russian url {simple}
 default_russian host {simple}
 default_russian sfloat {simple}
 default_russian version {simple}
 default_russian part_hword {simple}
 default_russian nlpart_hword {ru_stem}
 default_russian lpart_hword {en_stem}
 default_russian hword {ru_stem}
 default_russian lhword {en_stem}
 default_russian nlhword {ru_stem}
 default_russian uri {simple}
 default_russian file {simple}
 default_russian float {simple}
 default_russian int {simple}
 default_russian uint {simple}
 simple lword {simple}
 simple nlword {simple}
 simple word {simple}
 simple email {simple}
 simple url {simple}
 simple host {simple}
 simple sfloat {simple}
 simple version {simple}
 simple part_hword {simple}
 simple nlpart_hword {simple}
 simple lpart_hword {simple}
 simple hword {simple}
 simple lhword {simple}
 simple nlhword {simple}
 simple uri {simple}
 simple file {simple}
 simple float {simple}
 simple int {simple}
 simple uint {simple}
 default_english url {simple}
 default_english host {simple}
 default_english sfloat {simple}
 default_english uri {simple}
 default_english int {simple}
 default_english float {simple}
 default_english email {simple}
 default_english word {simple}
 default_english hword {simple}
 default_english nlword {simple}
 default_english nlpart_hword {simple}
 default_english part_hword {simple}
 default_english nlhword {simple}
 default_english file {simple}
 default_english uint {simple}
 default_english version {simple}


I am assuming that your cluster was created with en_US for the locale, 
and that you have set the matching tsearch2 configuration to be your default 
(or curcfg for each running process).

If you look at your config mappings for the default_english you will notice 
that you have 16 records, as opposed to 19 records like every other 
configuration mapping.  From some more in-depth observations, I noticed you 
are missing entries for 'lword', 'lhword' and 'lpart_hword'.  That means 
that tokens found to be of types 'Latin Words', 'Latin Hyphenated Words' and 
'Latin Part Hyphenated Words' are just dropped because you do not have a 
configuration mapping set up for them.

This is why only numbers (or other lexeme types) would show (they are returned 
as lexeme types: int, uint, float, url, etc., for which you have mappings).  
Most regular words are simply discarded due to missing entries.  If you fix 
your configurations the triggers should work properly.

Your examples worked before simply because you specified the 'default' 
configuration on the insert statement, which is not the same as the 
'default_english' configuration that is used by the trigger based on your 
server locale (en_US).

 I have made a single change to it from its default installation.  When I
 was working with the rank_cd() function on the 8.0.0 machine, it had
 errors due to a non-existant english stop file, so I changed
 pg_ts_dict.dict_initoption = '' where dict_name = 'en_stem'.  The indexing
 system was working fine both before and after the change to the pg_ts_dict
 table.  I also propagated the change to the 7.3.2 machine even though it
 didn't have the error message (the stop file didn't exist on that computer
 either, but it never gave an error message about it).

I would not recommend this.  The stop file is most likely on the system 
somewhere.  Its location will change depending on your installation.  Look for 
english.stop on the computer(s).  If it is not there, you can grab the one 
out of the source distribution and put it wherever you want.  Then just 
update the settings to the location you used.


good luck,


Andy

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Time Stamp

2005-03-21 Thread Sean Davis
Hrishi,
Depending on what you are using the information for, you might want to think 
about using EXPLAIN ANALYZE like:

EXPLAIN ANALYZE SELECT * FROM TEST_TABLE;
This will give you the output of the explain command (describing what the 
query planner decided was the best plan) and various timing 
measurements.  It doesn't include the total time, but it is pretty close.  If I 
recall, you were interested in doing this from within R (from a post in 
another group).  In that specific situation, you can use R's timing 
commands.  From within R, type:

help.search('timing')
or
help.search('profile')
In general, though, it is probably most useful to get the timings for 
queries from explain analyze, as it gives you a wealth of information that 
you can then use to optimize the results.
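If you would rather time from the client side than rely on EXPLAIN ANALYZE, a generic wall-clock wrapper is enough. A sketch in Python rather than R; `run_query` is a placeholder standing in for whatever actually executes the statement:

```python
import time

def timed(fn, *args):
    """Return (result, elapsed_seconds) for a single call."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# Placeholder standing in for a real database call.
def run_query(sql):
    time.sleep(0.01)
    return []

rows, elapsed = timed(run_query, "SELECT * FROM test_table")
assert elapsed >= 0.01
```

Note that client-side timing includes network and driver overhead, whereas EXPLAIN ANALYZE measures only server-side execution; the difference between the two is itself often informative.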

Sean
- Original Message - 
From: Hrishikesh Deshmukh [EMAIL PROTECTED]
To: Richard Huxton dev@archonet.com
Cc: Postgresql-General pgsql-general@postgresql.org
Sent: Monday, March 21, 2005 11:38 AM
Subject: Re: [GENERAL] Time Stamp


Hi,
I want to know how long it took for my query to run!
Thanks,
Hrishi
On Mon, 21 Mar 2005 15:57:38 +, Richard Huxton dev@archonet.com 
wrote:
Hrishikesh Deshmukh wrote:
 Hi All,

 I want to get a timestamp of the queries that i run!
 Is there a builtin command to do this/ does one need to write a
 function/stored procedure!
 Any pointers will help.
You don't say where you want this timestamp.
In psql look into \timing (see the man page)
To record this in the logs, see the configuration section of the manual,
specifically Error Reporting and Logging
--
   Richard Huxton
   Archonet Ltd
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


[GENERAL] Problem connecting to postmaster

2005-03-21 Thread Glenn Sullivan
Hi,
I have installed native postgresql8.0.1 on a PC running Windows XP 
professional.
I am having trouble connecting to it from my application.  The application
is a combination of java and C++ which was developed on Sun Solaris (Unix).
Thus on the PC, I have installed Microsoft Interix (Service For Unix) on
the PC to run the application.  I don't know if this is the problem or not,
but I am thus trying to connect to the DB running native from an application
running from Interix.  Does anyone know if this is inherently a problem?

My java code connects to the DB using
DriverManager.getConnection("jdbc:postgresql://gandalf:5432/mydb",
"user", "passwd");

I get a PSQLException with a null message when this is executed.
I start up the postmaster with -i to allow TCP/IP connections.
I set Windows Firewall to off
Does anyone have any experience using Interix (SFU) connecting to the DB?
Any suggestions?
Thanks,
Glenn
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Problem connecting to postmaster

2005-03-21 Thread Dann Corbit
What happens when you connect using the same user/password using PSQL?

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Glenn Sullivan
Sent: Monday, March 21, 2005 4:41 PM
To: pgsql-general
Subject: [GENERAL] Problem connecting to postmaster

Hi,

I have installed native postgresql8.0.1 on a PC running Windows XP 
professional.
I am having trouble connecting to it from my application.  The
application
is a combination of java and C++ which was developed on Sun Solaris
(Unix).
Thus on the PC, I have installed Microsoft Interix (Service For Unix) on
the PC to run the application.  I don't know if this is the problem or
not,
but I am thus trying to connect to the DB running native from an
application
running from Interix.  Does anyone know if this is inherently a problem?

My java code connects to the DB using
 DriverManager.getConnection("jdbc:postgresql://gandalf:5432/mydb", 
"user", "passwd");

I get a PSQLException with a null message when this is executed.

I start up the postmaster with -i to allow TCP/IP connections.
I set Windows Firewall to off

Does anyone have any experience using Interix (SFU) connecting to the
DB?

Any suggestions?

Thanks,
Glenn


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Tsearch vector not stored by update/set

2005-03-21 Thread Justin L. Kennedy
On Sun, 20 Mar 2005, Andrew J. Kopciuch wrote:

 On Thursday 17 March 2005 17:55, you wrote:
  The short question is why does this:
 
  select to_tsvector('default', coalesce(name, '') ||' '||
  coalesce(description, '') ||' '|| coalesce(keywords,'')) from link_items;
 
  give different results than this:
 
  update link_items set linksfti=to_tsvector('default', coalesce(name, '')
 
  ||' '|| coalesce(description, '') ||' '|| coalesce(keywords,''));
 
  select linksfti from link_items;
 


 It shouldn't.  I just tested with some of your data on my machine, and the
 results were fine.  (PG 7.4.6).  I can not see why they would be any
 different for 7.3.2.

 Your data looks rather strange. (re '60':1 '000':2).  Is that really all
 that was inserted? Or have you just left some out for your email?

About 95% of the 1900 insertions ended up with empty strings (not NULLs),
the other 5% looked like that above.  Either just numbers, or occasionally
words (defined by consecutive non-whitespace characters separated by
whitespace) that had numbers or symbols in them.  Like: "U.S. Senate" was
transformed in such a way that "Senate" was dropped completely and "U.S."
became the lowercased "u.s.".  Another example was a URL that happened to be
in the description column of one row; the URL was captured, but the rest of
the text was not.  Another had a name of "World T.E.A.M. Sports" and all
that was stored in the vector was "t.e.a.m".

It seems to be selective of only numbers, words with numbers in them,
words with '.' or '/' characters.  It completely ignores any other words
or text in any of the 3 fields.

 I could see this being a configuration issue possibly.  What do your pg_ts
 tables look like?  Have you made modifications there?

This morning, I decided to remove the following trigger from the
link_items table:
CREATE TRIGGER updateprodtbl
  BEFORE INSERT OR UPDATE
  ON link_items
  FOR EACH ROW
  EXECUTE PROCEDURE tsearch2('linksfti', 'name', 'description',
'keywords');

Now the UPDATE command I listed above works, so apparently there is
something about this trigger that is blocking the search vector from being
stored.  This trigger was copied and pasted (with only changes to the
column names) from
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/tsearch-V2-intro.html
in the INDEXING FIELDS IN A TABLE section.

This does fix the immediate problem of getting the search results for the
live website we are running, but now we have the problem of keeping it up
to date.  I could run a nightly cronjob to update all the rows, but it
seems inefficient, and I would really like to know why the trigger would
keep it from working on 7.3, but not 8.0.

You requested the pg_ts_* tables:
On the Linux-redhat, pg7.3.2

pg_ts_cfg: (4 rows)
oid     ts_name          prs_name  locale
106407  default          default   C
988004  default_english  default   en_US
106408  default_russian  default   ru_RU.KOI8-R
106409  simple           default   NULL

pg_ts_dict: (5 rows)
oid     dict_name        dict_initoption                              dict_init    dict_lexize
106356  en_stem                                                       snb_en_init  snb_lexize
106361  ispell_template  NULL                                         spell_init   spell_lexize
106358  ru_stem          /usr/local/pgsql/share/contrib/russion.stop  snb_ru_init  snb_lexize
106353  simple           NULL                                         dex_init     dex_lexize
106364  synonym          NULL                                         syn_init     syn_lexize

pg_ts_parser: (1 row)
oid     prs_name  prs_start   prs_nexttoken   prs_end   prs_headline   prs_lextype
106389  default   prsd_start  prsd_getlexeme  prsd_end  prsd_headline  prsd_lextype

pg_ts_cfgmap(73 rows)
ts_name tok_alias   dict_name
default   lword {en_stem}
default   nlword{simple}
default   word  {simple}
default   email {simple}
default   url   {simple}
default   host  {simple}
default   sfloat{simple}
default   version   {simple}
default   part_hword{simple}
default   nlpart_hword  {simple}
default   lpart_hword   {en_stem}
default   hword {simple}
default   lhword{en_stem}
default   nlhword   {simple}
default   uri   {simple}
default   file  {simple}
default   float {simple}
default   int   {simple}
default   uint  {simple}
default_russian   lword {en_stem}
default_russian   nlword{ru_stem}
default_russian   word  {ru_stem}
default_russian   email {simple}
default_russian   url   {simple}
default_russian   host  {simple}
default_russian   sfloat{simple}
default_russian   version   {simple}
default_russian   part_hword{simple}
default_russian   nlpart_hword  {ru_stem}
default_russian   lpart_hword   {en_stem}
default_russian   hword {ru_stem}
default_russian   lhword{en_stem}
default_russian   nlhword   {ru_stem}
default_russian   uri   {simple}
default_russian  

Re: [GENERAL] Problem connecting to postmaster

2005-03-21 Thread Doug McNaught
Glenn Sullivan [EMAIL PROTECTED] writes:

 Hi,

 I have installed native postgresql8.0.1 on a PC running Windows XP
 professional.
 I am having trouble connecting to it from my application.  The application
 is a combination of java and C++ which was developed on Sun Solaris (Unix).
 Thus on the PC, I have installed Microsoft Interix (Service For Unix) on
 the PC to run the application.  I don't know if this is the problem or not,
 but I am thus trying to connect to the DB running native from an application
 running from Interix.  Does anyone know if this is inherently a problem?

Have you tried writing a small standalone Java application to test
connecting to the database?  That would take Interix out of the
equation.

 My java code connects to the DB using
  DriverManager.getConnection("jdbc:postgresql://gandalf:5432/mydb",
  "user", "passwd");

 I get a PSQLException with a null message when this is executed.

 I start up the postmaster with -i to allow TCP/IP connections.
 I set Windows Firewall to off

Does netstat show the Postgres server listening on 5432?  Have you
tried using localhost instead of gandalf in the JDBC URL?

-Doug

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Tracking row updates

2005-03-21 Thread Qingqing Zhou

Alex Adriaanse [EMAIL PROTECTED] writes
 Applying this analogy to our database, wouldn't that require a
 table-level lock during a CVS-like commit (which would mean locking the
 table, getting the revision number, updating the row(s), and committing
 the transaction)?


You may have a look at how CVS works and decide for yourself. Note the
difference is that CVS manages both the files and the versions, while it
seems you just manage the versions.

Another way is to change the logic - try to submit the updates together.
Consider the bank-transfer example always used to describe a transaction's
atomicity, which has something in common with your case. We fold the
logic of "reduce some money" and "add some money" together, so no matter how
many concurrent transfers target the account, no race condition is possible,
since the serializable property assures that.
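The bank-transfer pattern mentioned above, folding both sides of the change into one transaction, can be sketched with Python's built-in sqlite3 module; the same shape applies to PostgreSQL (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])

def transfer(conn, src, dst, amount):
    # Both updates commit or roll back together: there is no window in
    # which money has left one account but not arrived in the other.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, dst))

transfer(conn, 1, 2, 30)
balances = dict(conn.execute("SELECT id, balance FROM accounts"))
assert balances == {1: 70, 2: 30}
```

Applied to row-version tracking, the analogous move is to fold "change the row" and "record the change" into one transaction so no reader can observe one without the other.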

Regards,
Qingqing

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Measuring Query Performance

2005-03-21 Thread Ed L.

I'm attempting to measure database query performance across a 
large number of high-volume clusters.  I'd like to share a 
couple of ideas and hear what other ideas folks know of for this 
problem.  I suppose there are existing solutions, I just haven't 
found them.

The idea here is to systematically capture execution times of 
predictable SQL queries taking longer than certain thresholds.  
For example, suppose my application routinely launches queries 
of the form "SELECT ... FROM table1, ... WHERE id = NNN and 
...", and from experience we know this query takes 5ms when 
fully cached, and maybe 50ms when not cached.  So we'd like to 
capture when this query exceeds, say, 100ms.

My latest thought is to store regexes of interesting queries 
along with their thresholds in a central database:

create table interesting_query (
    regex varchar,
    min   float
);

Then, with the cluster logging queries and durations, I'd tail 
the log into a perl script that 1) connects to the central DB 
and downloads the interesting queries, and then 2) parses the 
log output, 3) keeps track of max/min/avg/stddev, and then 4) 
periodically inserts the results into the central monitoring 
database.  

So, supposing there were 10 queries/second for a given query, 
then we might report the slowness every minute, and each report 
would include the aggregate max/min/stddev/count/avg stats for 
600 instances of the queries in the preceding minute.  Once 
those numbers are in a central database, I could easily identify 
performance troublespots.
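The tail-and-aggregate step sketched above can be prototyped with the standard library alone. The log format and regex below are illustrative stand-ins for real postgres duration lines, not the actual log format:

```python
import re
from statistics import mean, stdev

# Hypothetical "interesting query" pattern and threshold (ms), as they
# might be pulled from the central interesting_query table.
pattern = re.compile(r"duration: ([\d.]+) ms\s+statement: SELECT .* FROM table1")
threshold_ms = 100.0

log_lines = [
    "duration: 5.2 ms  statement: SELECT * FROM table1 WHERE id = 1",
    "duration: 140.0 ms  statement: SELECT * FROM table1 WHERE id = 2",
    "duration: 260.0 ms  statement: SELECT * FROM table1 WHERE id = 3",
    "duration: 30.0 ms  statement: SELECT * FROM other_table",
]

# Keep only matching queries that exceeded the threshold.
slow = [float(m.group(1)) for line in log_lines
        if (m := pattern.search(line)) and float(m.group(1)) > threshold_ms]

# Aggregate stats to report to the central monitoring database.
report = {"count": len(slow), "min": min(slow), "max": max(slow),
          "avg": mean(slow), "stddev": stdev(slow)}
```

In the real script the aggregation would reset every reporting interval (e.g. one minute), and the report row would be inserted into the central database rather than kept in memory.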

How are others handling this problem?  Other ideas?

Thanks.

Ed


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Encoding-related errors when moving from 7.3 to 8.0.1

2005-03-21 Thread Carlos Moreno
Thanks again, Alvaro!
Alvaro Herrera wrote:
So, our system (CGI's written in C++ running on a Linux server)
simply takes whatever the user gives (properly validated and
escaped) and throws it in the database.  We've never encountered
any problem  (well, or perhaps it's the opposite?  Perhaps we've
always been living with the problem without realizing it?)
The latter, I think.  The problem is character recoding.  If your old
system has been running with encoding SQL_ASCII, then no recoding ever
takes place.  If you are now using UTF8 or latin1 (say) as server
encoding, then as soon as the client is using a different encoding,
there should be conversion in order to make the new data correct w.r.t.
the server encoding.  If the wrong conversion takes place, or if no
conversion takes place, you may either end up with invalid data, or
have the server reject your input (as was the case here.)
This makes sense to me, yes.  The reason why I'm a bit lost is that
we never did anything whatsoever with respect to encoding.  Oddly
enough, I couldn't find much about this in the docs.
I see references to it in the runtime configuration docs (the part
where they describe the postgresql.conf file).  There's one line,
commented out, where they set (as an example) the client_encoding
to sql_ascii, and a comment at the end of that line says it actually
defaults to the server encoding.
I just found out that in the create database statement, one of the
options specifies the encoding used for that database.
I guess what changed from version 7.4.x to 8.0 is that the default
server_encoding changed?
This means that a temporary solution (or rather, a temporary patch)
would be to create the database specifying the right server_encoding
to match what I currently have on my system?  (I wouldn't want to do
that if it is nothing more than a patch to keep hiding the problem)
So the moral of the story seems to be that yes, you need to make each
application issue the correct client_encoding before entering any data.
You can attach it to the user or database, by issuing ALTER USER (resp.
DATABASE).  But if you are using a web interface, where the user can
enter data in either win1252 or latin1 encoding (or whatever) depending
on the environment, then I'm not sure what you should do. 
This is indeed the case;  and "do nothing" is what we have always
done with respect to this issue...  Why it has been so long without
us realizing that there was a hidden problem, I really don't know.
(and we do have users with plenty of weird characters -- accent
aigue, grave, circumflex, tilde, dieresis, etc. -- and they have
always worked).
I'm so lost!  :-(
BTW, the correct e-mail to pass through the anti-spam filter is
my first name, followed by a dot, followed by my last name (the
rest after the @ is the same)
Thanks again for your message!
Carlos
--
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


[GENERAL] Clearing locks

2005-03-21 Thread Edwin New

I am using PostgreSQL version 8 on Solaris 9.

I have encountered a situation where a Java process is dying but leaving
locks active. When I restart the process, it gets a new connection, but
can't proceed as the previous lock is still active.

How can I, as DBA, clear a lock / roll back an incomplete transaction
without access to the connection that created the lock? All I've been
able to do is a complete stop/start of the database, but that is
obviously sub-optimal.



Thanks,

Edwin New
Software Developer
Toll - Integrated Business Systems
43 - 63 Princes Highway, Doveton, VIC. 3175
Ph: 03 8710 0858   Fax: 03 9793 3970   Mobile: 0417 341 074
Email: [EMAIL PROTECTED]

This message is written in FORTRAN until you look at it.


Re: [GENERAL] Clearing locks

2005-03-21 Thread Neil Conway
Edwin New wrote:
I have encountered a situation where a java process is dying but leaving 
locks active.
If the connection to PostgreSQL is severed (e.g. the client actually 
disconnects), the current transaction will be rolled back and any held 
locks will be released.

So it seems that the problem is that when the client dies, it is not 
actually disconnecting from PostgreSQL, and is in the midst of a 
transaction that has acquired some locks. Perhaps this is due to buggy 
connection pooling software that does not rollback a connection's 
transaction before putting it back into the connection pool? Without 
more information it's tough to be sure.

FYI, you can examine the status of the lock manager via the pg_locks 
system view:

http://www.postgresql.org/docs/8.0/static/monitoring-locks.html
How can I, as DBA, clear a lock / roll back an incomplete transaction 
without access to the connection that created the lock?
Well, you can always kill the backend process -- that will abort its 
transaction and release any locks it holds.

-Neil
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] multi line text data/query ?bug?

2005-03-21 Thread Sim Zacks
While I was testing an issue in python I discovered a general
*nix/windows/mac issue with newlines.
The same query will give different results depending on what client executes
it.

create table test1(f1 text);

insert into test1 values('this is a long string.
it will have new lines in it.
I want to see if those new lines go away.
so ignore. the ugliness')

If the insert was done on a Windows machine there will be a CRLF as the
EOLN, if done on Unix it will have LF and if done on Mac it will have CR.

So if the insert was done on windows, the following query will only work
from a windows client :
select * from test1 where f1='this is a long string.
it will have new lines in it.
I want to see if those new lines go away.
so ignore. the ugliness'

If the insert was done on *nix, the query will likewise only work from a
client using the same line-ending convention.
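The mismatch is easy to demonstrate, and normalising line endings before inserting or comparing avoids it; a Python sketch (the normalize helper is illustrative):

```python
windows = "this is a long string.\r\nit will have new lines in it."
unix    = "this is a long string.\nit will have new lines in it."

# Byte-for-byte, the two inserts are different values, so a literal
# equality comparison in the WHERE clause fails across platforms:
assert windows != unix

# Normalising every line ending to '\n' before storing (or comparing)
# makes the values platform-independent:
def normalize(s: str) -> str:
    return "\n".join(s.splitlines())

assert normalize(windows) == normalize(unix)
```

Doing this normalisation consistently on the client side (or in a trigger) sidesteps the per-platform behaviour entirely.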



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] pl/perl problem

2005-03-21 Thread Sean Davis
  - Original Message - 
  From: FERREIRA William (COFRAMI)
  To: 'pgsql-general@postgresql.org'
  Sent: Monday, March 21, 2005 9:22 AM
  Subject: [GENERAL] pl/perl problem

  hi

  i wrote a stored procedure using the pl/perlU language, and its
  behaviour is strange.
  my procedure does a select on my database, does some processing,
  and writes the result to a file.

  when i run the procedure a first time, it works fine: the file is
  created and the data are in it. but when i run my procedure a second
  time, the file is created but the data aren't written to it. where
  is the problem?

Can you show the function?

  i had another problem the past week, but i am not able to reproduce
  it. it was a very simple function that stored a string into a
  variable and displayed it on the screen, something like this:

  my $toto = '-';
  $toto .= 'titi';
  elog NOTICE, $toto;

Again, can you show the whole function?
And do you 'use strict' when coding?