Re: [GENERAL] pg_repack and Postgres versions > 9.4

2016-12-20 Thread Gaetano Mendola
I wonder why this is not a VACUUM option.

On Fri, 16 Dec 2016 at 15:30 Kenneth Marshall  wrote:

> On Fri, Dec 16, 2016 at 09:15:51AM -0500, Bill Moran wrote:
> >
> > Does anyone have experience using pg_repack on Postgres versions > 9.4?
> > Specifically 9.5, but probably 9.6 at some point.
> >
> > The documentation claims it supports up to 9.4. I haven't looked at it
> > closely enough to guess whether there might be changes in 9.5/9.6 to
> > cause it not to work any more.
> >
> > Anyone know? Or, alternatively, anyone have another option to get the
> > same job done?
> >
> > --
> > Bill Moran 
>
> Hi Bill,
>
> We are currently using it with 9.5.5 and it works well. I would expect
> that it would work with 9.6 as well, but we have not tested it.
>
> Regards,
> Ken
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] [HACKERS] RH9 postgresql 8.0.7 rpm

2006-04-13 Thread Gaetano Mendola

Devrim GUNDUZ wrote:
 Hi,
 
 On Wed, 2006-04-12 at 13:07 +0200, Gaetano Mendola wrote:
 I was able to create it with:

 --nodeps --define 'buildrhel3 1' --define 'build9 1' 
 
 I'll be happy if you send the RPMs directly to me so that I can upload
 them. A tarball would be fine.
 

Devrim, I'll send you a private link to my own web server so you can
download the files from there.

Regards
Gaetano Mendola



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] RH9 postgresql 8.0.7 rpm

2006-04-12 Thread Gaetano Mendola
Joshua D. Drake wrote:
 Gaetano Mendola wrote:

 Hi all,
 I'm trying to build the rpms for RH9,
 I downloaded the srpm for RH9 but I'm stuck on these errors:
 
 RH9 is not a supported platform by RedHat or PGDG.
 

Given that this link

http://www.postgresql.org/ftp/binary/v8.0.7/linux/srpms/redhat/redhat-9/

is not empty, I thought RH9 was still supported.

Regards
Gaetano Mendola






Re: [GENERAL] [HACKERS] RH9 postgresql 8.0.7 rpm

2006-04-12 Thread Gaetano Mendola

Devrim GUNDUZ wrote:
 Hi Gaetano,
 
 On Tue, 2006-04-11 at 18:31 +0200, Gaetano Mendola wrote:
 I'm trying to build the rpms for RH9,
 I downloaded the srpm for RH9 but I'm stuck on these errors:

 Attempt a:

 # rpmbuild --rebuild postgresql-8.0.7-1PGDG.src.rpm
 Installing postgresql-8.0.7-1PGDG.src.rpm
 error: Failed build dependencies:
 tcl-devel is needed by postgresql-8.0.7-1PGDG

 why tcl-devel for the RH9 version? tcl-devel doesn't exist on RH9
 
 We use only one spec file for all platforms. If you are using Red Hat 9,
 you should consider using the macros that are enabled for RH9 (build89
 and/or build9 in the spec file).
 
 Attempt b:
 # rpmbuild --nodeps --rebuild postgresql-8.0.7-1PGDG.src.rpm
 
 checking krb5.h presence... no
 checking for krb5.h... no
 configure: error: header file krb5.h is required for Kerberos 5
 error: Bad exit status from /var/tmp/rpm-tmp.73067 (%build)
 
 I think rebuilding with --define 'buildrhel3 1' will work here.
 ok no kerberos now:

 Attempt c:
 # rpmbuild --nodeps  --rebuild --define 'kerberos 0' 
 postgresql-8.0.7-1PGDG.src.rpm
 .
 checking for zlib.h... yes
 checking openssl/ssl.h usability... no
 checking openssl/ssl.h presence... no
 checking for openssl/ssl.h... no
 configure: error: header file openssl/ssl.h is required for OpenSSL
 error: Bad exit status from /var/tmp/rpm-tmp.3109 (%build)

 actually I have that file:

 # locate openssl/ssl.h
 /usr/include/openssl/ssl.h
 
 I have no idea about this and I can't remember right now how I was
 building RH9 RPMs ...
 
 If you provide me a RH9 box, I can help you.

I was able to create it with:

--nodeps --define 'buildrhel3 1' --define 'build9 1'

Thank you for the support, and I'm really sorry not to be able to read and
write on the Postgres lists as much as I used to in the past.

Regards
Gaetano Mendola








Re: [GENERAL] Indexen on 8.0.3

2005-10-10 Thread Gaetano Mendola
Tom Lane wrote:
 [EMAIL PROTECTED] writes:
 After upgrading to 8.0.3 I see very poor performance on several indexes.
 ...
 Database was recently analyzed. Clearly, something has to be tuned that 
 didn't 
 need tuning on 7.4.3 ? (Main table has about 1.7 million records).
 
 No, there's no reason for 8.0 to be slower at this than 7.4, if all else
 is equal.  I'm betting that all else is not equal.  Maybe you are using
 a different encoding or locale in the new installation than the old?

Hmm, are you sure 8.0 is not slower than 7.4 in certain scenarios?

Consider:

select f1(id), f2(id), ..., fn(id) from my_view;

where fi is eligible to be marked STABLE but is not.

In 8.0 that select involves n+1 snapshots instead of the one used in 7.4.
Could this be a performance issue?

Regards
Gaetano Mendola








[GENERAL] RSS date still broken

2004-11-25 Thread Gaetano Mendola
Hi all,
As I already wrote, the RSS feed reports 1 Jan 1970 as the date
for all entries.

Regards
Gaetano Mendola


Re: [GENERAL] Postresql RFD version 2.0 Help Wanted.

2004-11-07 Thread Gaetano Mendola
Marc G. Fournier wrote:
On Sat, 6 Nov 2004, Mike Cox wrote:
1.  I tried subscribing to comp.databases.postgresql.general through my
usenet provider thinking it was a regular big 8 group.  When it wasn't
found, I sent a request to my news provider to include it.

Most modern news readers allow for multiple news server ... just point 
yours at news.postgresql.org, and you can read from there, which has 
always been the case ...
As I already wrote, the current Postgres newsgroup hierarchy is missing some
lists, such as www; to complete the picture, news.us.postgresql.org has a
slony list that does not exist in the archives. I think the newsgroup is the
best way to follow the discussions, so it should at least be a complete
container for all the lists and a complete mirror of the archives too.

Regards
Gaetano Mendola




Re: [GENERAL] Gborg down?

2004-11-06 Thread Gaetano Mendola
Marc G. Fournier wrote:
 On Fri, 5 Nov 2004, Joshua D. Drake wrote:



 I've posted details to the FreeBSD -stable and -current mailing list,
 but the simple summary is that 'ifconfig dev -alias' with no ip
 specific'd literally erases all IPs on that device, leaving the
 server running 'un-networked' ... :(

 Good lord.


 Ya well, its not something I'm particularly proud about ...

 Stupid question for someone running Linux ... is this standard behaviour
 that I've been lucky never to hit before, or is this something that
 Linux deals with slightly more intelligently?
I used to have a script on my remote server that I would run in the
background before touching the network; it brought the network back up if it
had been down for more than 2 minutes. My server is now hosted in my house
and I don't need it anymore; I replaced it with the no-ip script :-)

Regards
Gaetano Mendola









Re: [GENERAL] '1 year' = '360 days' ????

2004-10-28 Thread Gaetano Mendola
Guy Fraser wrote:
Trivia: In approximately 620 million years a day will be twice as long 
as it is today.
Do you think, then, that Postgres 628M.0 will fix it?  :-)

Regards
Gaetano Mendola




Re: [GENERAL] [SECURITY] New set of PostgreSQL RPMS are available for download

2004-10-26 Thread Gaetano Mendola
Devrim GUNDUZ wrote:
* Updated PyGreSQL from 3.4 to 3.5 (only for 7.4.6-2PGDG)
Given that PyGreSQL is still affected by the infamous idle-in-transaction
behaviour:

    def __init__(self, cnx):
        self.__cnx = cnx
        self.__cache = pgdbTypeCache(cnx)
        try:
            src = self.__cnx.source()
            src.execute("BEGIN")
        except:
            raise OperationalError, "invalid connection."

    def close(self):
        self.__cnx.close()

    def commit(self):
        try:
            src = self.__cnx.source()
            src.execute("COMMIT")
            src.execute("BEGIN")
        except:
            raise OperationalError, "can't commit."

    def rollback(self):
        try:
            src = self.__cnx.source()
            src.execute("ROLLBACK")
            src.execute("BEGIN")
        except:
            raise OperationalError, "can't rollback."

why not distribute psycopg with the next RPM instead?

Regards
Gaetano Mendola



Re: [GENERAL] (S)RPMs for PostgreSQL 7.2.6, 7.3.8 and 7.4.6 are ready

2004-10-25 Thread Gaetano Mendola
Devrim GUNDUZ wrote:
Hi,
(S)RPMs for new point releases (per 
http://archives.postgresql.org/pgsql-announce/2004-10/msg00010.php)
have been built for Fedora Core 1 and 2, Red Hat Linux 9 and Red Hat 
Enterprise Linux 3.
If you want to add the RPMs for RH 2.1AS to the mirrors, you can find
them here:
http://mendola.no-ip.com/rpm746.htm
Regards
Gaetano Mendola




Re: [GENERAL] PgSQL MVCC vs MySQL InnoDB

2004-10-25 Thread Gaetano Mendola
Andrew Sullivan wrote:
On Mon, Oct 25, 2004 at 01:15:33PM -0400, Jan Wieck wrote:
On 10/25/2004 11:53 AM, [EMAIL PROTECTED] wrote:

Is this true?
From a functional point of view, the two appear to do the same thing. 

Well, except for one difference.  InnoDB will allow you refer to
tables not controlled by the InnoDB table handler, whereas we don't
have that problem with MVCC.  
From the MySQL gotchas:
1) The same feature also allows you to start a transaction, mix the two
table types, and get a warning only after the rollback about the inability
to undo the updates done on non-InnoDB tables.
2) Creating or deleting an index, or altering a table, will recreate the
   entire table.
3) Our rollback is an O(1) operation, not O(N) where N is the number of
   operations performed during the transaction.

Regards
Gaetano Mendola




Re: [GENERAL] Problem with query plan

2004-10-23 Thread Gaetano Mendola
Tom Lane wrote:
Cott Lang [EMAIL PROTECTED] writes:
Fiddling with the above values, only setting sort_mem absurdly large
easily causes NAN.  

Ah.  I see an overflow case for sort_mem exceeding 1Gb; that's probably
what you tickled.
I've fixed this in HEAD, but it doesn't seem worth back-patching.
If you care, the change in HEAD is
*** src/backend/optimizer/path/costsize.c.orig	Sun Aug 29 01:06:43 2004
--- src/backend/optimizer/path/costsize.c	Fri Oct 22 20:02:39 2004
***
*** 566,572 
  	if (nbytes > work_mem_bytes)
  	{
  		double		npages = ceil(nbytes / BLCKSZ);
! 		double		nruns = nbytes / (work_mem_bytes * 2);
  		double		log_runs = ceil(LOG6(nruns));
  		double		npageaccesses;
  
--- 566,572 
  	if (nbytes > work_mem_bytes)
  	{
  		double		npages = ceil(nbytes / BLCKSZ);
! 		double		nruns = (nbytes / work_mem_bytes) * 0.5;
  		double		log_runs = ceil(LOG6(nruns));
  		double		npageaccesses;
  

but the variable names have changed since 7.4 so this won't apply
cleanly.
If somebody cares about applying this to 7.4, here is the equivalent change:
--- costsize.c.orig 2004-10-23 11:17:38.0 +0200
+++ costsize.c  2004-10-23 11:19:04.0 +0200
@@ -548,7 +548,7 @@
if (nbytes > sortmembytes)
{
double  npages = ceil(nbytes / BLCKSZ);
-   double  nruns = nbytes / (sortmembytes * 2);
+   double  nruns = ( nbytes / sortmembytes ) * 0.5 ;
double  log_runs = ceil(LOG6(nruns));
double  npageaccesses;




Regards
Gaetano Mendola







Re: [GENERAL] table size/record limit

2004-10-22 Thread Gaetano Mendola
Dennis Gearon wrote:
| Gaetano Mendola wrote:
|
| Dennis Gearon wrote:
|
| I am designing something that may be the size of yahoo, google, ebay,
| etc.
|
| Just ONE many to many table could possibly have the following
| characteristics:
|
|3,600,000,000 records
|
| This is a really huge monster one, and if you don't partition that
| table in some way I think you'll have nightmares with it...
|
| Regards
| Gaetano Mendola
|
| thanks for the input, Gaetano.
By "partition it in some way" I don't mean only splitting it into more
tables. You can use some tools available in Postgres and continue to see
this table as one, implemented behind the scenes with more tables.
One useful and impressive way is to use inheritance in order to obtain
such a partitioning:
0) Decide a partition policy (based on a timestamp, for example)
1) Create an empty base table with the name that you want to be public
2) Create the partitions using the empty table as the base table
3) Create a rule on the base table so that an insert or update on it is
   performed as an insert or update on the right partition (following the
   partition policy from step 0)
In this way you are able to vacuum each partition, reindex each partition,
and so on in a much more feasible way. I can't imagine a VACUUM FULL or a
REINDEX on a 3,600,000,000-record table...
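The four steps above can be sketched as a small DDL generator. The table name `logs`, the column `stamp`, and the month list are made-up illustrations; the rule per partition follows step 3 (on a modern PostgreSQL you would use declarative partitioning instead of inheritance plus rules):

```python
def partition_ddl(base="logs", months=("2004_09", "2004_10"), ts_col="stamp"):
    """Generate inheritance-plus-rule DDL following the steps above (sketch)."""
    # Step 1: the empty base table with the public name.
    ddl = [f"CREATE TABLE {base} (id bigint, {ts_col} timestamp, payload text);"]
    # Step 2: one child table per partition, inheriting from the base.
    for m in months:
        ddl.append(f"CREATE TABLE {base}_{m} () INHERITS ({base});")
    # Step 3: one rule per partition redirecting inserts on the base table.
    for m in months:
        y, mo = m.split("_")
        ddl.append(
            f"CREATE RULE {base}_{m}_ins AS ON INSERT TO {base} "
            f"WHERE date_trunc('month', NEW.{ts_col}) = '{y}-{mo}-01' "
            f"DO INSTEAD INSERT INTO {base}_{m} VALUES (NEW.*);")
    return ddl
```

Each partition can then be vacuumed and reindexed on its own.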

Regards
Gaetano Mendola












Re: [GENERAL] Slony-I 1.0.4 Released

2004-10-22 Thread Gaetano Mendola
Christopher Browne wrote:
 Oops! Gaetano Mendola [EMAIL PROTECTED] was seen spray-painting on a wall:

Chris Browne wrote:
  The Slony-I team is proud to present the 1.0.4 release of the most
  advanced replication solution for the most advanced Open Source
  Database in the world.
 
  The release tarball is available for download
 http://developer.postgresql.org/~wieck/slony1/download/slony1-1.0.4.tar.gz

May I use it on a machine with a postgres 7.4.5 installed with RPM ?


 Probably only with some degree of trickery.
I suspected as much; I currently can't use it because of this. Any chance of
a Slony RPM compatible with the 7.4.5 RPM?
Regards
Gaetano Mendola



Re: [GENERAL] correct example of a functional index usage?

2004-10-22 Thread Gaetano Mendola
Tom Lane wrote:
Since we don't yet have bitmap combining of indexes...
 ^^^
Are you trying to tell us something ? :-)
Regards
Gaetano Mendola


Re: [GENERAL] index not used?

2004-10-22 Thread Gaetano Mendola
Scott Marlowe wrote:
On Wed, 2004-10-20 at 08:06, Dan Pelleg wrote:
I'm trying to access a table with about 120M rows. It's a vertical version
of a table with 360 or so columns. The new columns are: original item col,
original item row, and the value.
I created an index:
CREATE INDEX idx on table (col, row)
however, selects are still very slow. It seems it still needs a sequential
scan:
EXPLAIN SELECT * FROM table WHERE col=1 AND row=10;
 QUERY PLAN  
--
Seq Scan on table  (cost=1.00..102612533.00 rows=1 width=14)
  Filter: ((col = 1) AND (row = 10))

What am I doing wrong?

What type are row and col?  If they're bigint (i.e. not int / int4) then
you might need to quote the value to get the query to use an index:
SELECT * FROM table WHERE col='1' AND row='10';
also, have you vacuumed / analyzed the table?  I'm assuming yes.
I assume not, given that cost...
Regards
Gaetano Mendola


Re: [GENERAL] table size/record limit

2004-10-21 Thread Gaetano Mendola
Dennis Gearon wrote:
I am designing something that may be the size of yahoo, google, ebay, etc.
Just ONE many to many table could possibly have the following 
characteristics:

   3,600,000,000 records
This is a really huge monster of a table, and if you don't partition it
in some way I think you'll have nightmares with it...

Regards
Gaetano Mendola


Re: [GENERAL] Slony-I 1.0.4 Released

2004-10-21 Thread Gaetano Mendola
Chris Browne wrote:
 The Slony-I team is proud to present the 1.0.4 release of the most
 advanced replication solution for the most advanced Open Source
 Database in the world.

 The release tarball is available for download
http://developer.postgresql.org/~wieck/slony1/download/slony1-1.0.4.tar.gz
May I use it on a machine with Postgres 7.4.5 installed from RPM?
Regards
Gaetano Mendola


Re: [GENERAL] delayed input

2004-10-19 Thread Gaetano Mendola
Hicham G. Elmongui wrote:
I need this for a side project. Is there a way to do something like this:
SELECT *
FROM DelayedTable('tablename', 5);
No; to my knowledge you'll obtain the first tuple only when the function
exits.
Regards
Gaetano Mendola



Re: [GENERAL] removing idle connections

2004-10-19 Thread Gaetano Mendola
Josh Close wrote:
Is there a way to remove idle connections? My postgres server is
getting serveral hundred idle connections. It's due to a postgres .NET
provider not closing the connections properly. I don't want to kill
them all, or restart postgres everytime the connections go crazy.
I don't think the problem is that the connections are not being closed.
I bet the driver is acting like this:
On connection:
. Connect
. start transaction
On Commit:
. commit transaction
. start transaction
On Abort:
. abort transaction
. start transaction
On statemet:
. execute statement
As you can see, you are always inside a transaction; that is the "idle in
transaction" state I mean. BTW, this is the behaviour of the Python driver
PgDB (I suggest using psycopg instead), and before the 8.0 series the JDBC
driver did the same. The way to solve it is to delay the BEGIN until the
first statement:
On connection:
. Connect
On Commit:
. commit transaction
On Abort:
. abort transaction
On statemet:
. If is the first statement after a connection or a commit or
  an abort execute the: start transaction
. execute statement
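The delayed-BEGIN scheme above can be sketched as a thin wrapper around a driver connection; the `raw` object and its `execute(sql)` method are assumptions standing in for the real driver API:

```python
class LazyTxConnection:
    """Open the transaction at the first statement, not at connect/commit.

    `raw` is any object with an execute(sql) method (a stand-in for the
    real driver's connection, not an actual PgDB/psycopg API).
    """
    def __init__(self, raw):
        self.raw = raw
        self.in_tx = False

    def execute(self, sql):
        if not self.in_tx:
            # First statement since connect, commit, or abort: open the
            # transaction only now, so idle connections stay out of one.
            self.raw.execute("BEGIN")
            self.in_tx = True
        return self.raw.execute(sql)

    def commit(self):
        if self.in_tx:
            self.raw.execute("COMMIT")   # note: no eager BEGIN afterwards
            self.in_tx = False

    def rollback(self):
        if self.in_tx:
            self.raw.execute("ROLLBACK")
            self.in_tx = False
```

A connection wrapped this way never sits "idle in transaction" between statements.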

For the RPM maintainer: why not include psycopg instead of the current
Python driver?
Regards
Gaetano Mendola





Re: [GENERAL] Change query priority

2004-10-16 Thread Gaetano Mendola
Barry S wrote:
That's fine, but you do understand that nice (Linux) will have *no*
effect on I/O?
I do.
For any non-trivial table (that can't be held entirely in memory),
re-nice will almost certainly have no effect.
That's my feeling too, but at least it's worth a try.

Regards
Gaetano Mendola


Re: [GENERAL] Recovering data from corrupted table. Urgent Help!!

2004-10-14 Thread Gaetano Mendola
ruben wrote:
 Hi Gaetano:

 This procedure to recover data from a corrupted table should be
 documented somewhere... If it is, I could not find it!
First of all, the table was not corrupted; the glitch was in another
subsystem.
The procedure is documented in the archives :-(
I agree with you, but any cure seems worse than the disease. You have to
understand why the file 0004 was no longer there; did you have a power
failure, for example?
 Now I wonder if I have lost any data, because after creating the
 pg_clog/0004 and running VACCUM everything seems ok.
Normally you didn't lose any data.
Regards
Gaetano Mendola
PS: I had the same error, for the first time in my Postgres usage life,
only after (some weeks after) having upgraded from 7.4.2 to 7.4.5


Re: [GENERAL] memory leak of PQmakeEmptyPGresult??

2004-10-13 Thread Gaetano Mendola
Ann wrote:
 I found the reason of this question and fixed the bug :))
 

Then why don't you share it?



Regards
Gaetano Mendola




Re: [GENERAL] could not access status of transaction 4244329

2004-10-13 Thread Gaetano Mendola
[EMAIL PROTECTED] wrote:
Hi:
I've migrated a couple of weeks ago from 7.4.2 to 7.4.5 and I am getting 
this error after executing a query:

  Warning: pg_exec() query failed: ERROR: could not access status of 
transaction 4244329 in /home/wisconsin/www/_proc/bbdd/_c_bbdd.php on 
line 160
  ERROR ACCESO BASE DE DATOSERROR: could not access status of 
transaction 4244329

[SNIP]
I tried reindexing:
  DROP INDEX movimientos_c_c_i01;
  CREATE INDEX movimientos_c_c_i01 ON movimientos_c_c (cod_empresa, 
cod_per_emp, cod_movimiento, fecha_movimiento);

  ERROR:  could not access status of transaction 4244329
  DETAIL:  could not open file /usr/local/pgsql/data/pg_clog/0004: No 
existe el fichero o el directorio

Create an empty file:
# touch /usr/local/pgsql/data/pg_clog/0004
At this point Postgres will complain that an offset is missing; fill your
file with zeros (in blocks of 8K) until you reach the offset it complains about.

Regards
Gaetano Mendola



Re: [GENERAL] psql : how to make it more silent....

2004-10-13 Thread Gaetano Mendola
Patrick Fiche wrote:
Hi,
 
When I execute a function, I would like psql to show me only RAISE 
NOTICE messages but not all function calls
Indeed, I currently get some messages that I don't care about :
 

* PL/pgSQL function adm_user line 321..
* CONTEXT: SQL query SELECT.
Is there a way to get rid of these messages?
 
Modify your log_error_verbosity setting to "terse".

Regards
Gaetano Mendola


Re: [GENERAL] Recovering data from corrupted table. Urgent Help!!

2004-10-13 Thread Gaetano Mendola
[EMAIL PROTECTED] wrote:
Hi:
Is there any way to recover data from a corrupted table? I can only run 
SELECTs on certain WHERE conditions.

I cannot vacuum, pg_dump, I've deleted the indexes and try to reindex, 
always get error:

ERROR:  could not access status of transaction 4244329
DETAIL:  could not open file /usr/local/pgsql/data/pg_clog/0004: No 
existe el fichero o el directorio

Thanks a lot.
Again:
Create an empty file:
# touch /usr/local/pgsql/data/pg_clog/0004
At this point Postgres will complain that an offset is missing; fill your
file with zeros (in blocks of 8K) until you reach the offset it complains about.

Regards
Gaetano Mendola



Re: [GENERAL] Recovering data from corrupted table. Urgent Help!!

2004-10-13 Thread Gaetano Mendola
Gaetano Mendola wrote:
[EMAIL PROTECTED] wrote:
Hi:
Is there any way to recover data from a corrupted table? I can only 
run SELECTs on certain WHERE conditions.

I cannot vacuum, pg_dump, I've deleted the indexes and try to reindex, 
always get error:

ERROR:  could not access status of transaction 4244329
DETAIL:  could not open file /usr/local/pgsql/data/pg_clog/0004: No 
existe el fichero o el directorio

Thanks a lot.

Again:
create an empty file:
# touch /usr/local/pgsql/data/pg_clog/0004
at this point postgres will complain about the fact that an offset is 
missing,

at this point fill with 0 your file ( blocks of 8K ) till reach that 
offset reclaimed.
I forgot to suggest that you do:
dd bs=8k count=1 < /dev/zero >> /usr/local/pgsql/data/pg_clog/0004
and repeat this command until the offset is covered.
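The same zero-padding can be done in one pass. A sketch, where the clog path and the target offset are placeholders for whatever file and offset Postgres complains about:

```python
import os

BLOCK = 8192  # pg_clog is written in 8K pages

def pad_clog(path, target_offset):
    """Append zero-filled 8K blocks to `path` until target_offset is covered.

    Equivalent to repeating `dd bs=8k count=1 < /dev/zero >> path` until
    the offset from the error message fits inside the file.
    """
    size = os.path.getsize(path) if os.path.exists(path) else 0
    with open(path, "ab") as f:          # "ab" also creates a missing file
        while size <= target_offset:
            f.write(b"\0" * BLOCK)
            size += BLOCK
    return size
```

As with the dd loop, this only papers over the missing segment; the transactions in it will read as aborted, so it is a last-resort recovery step.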

Regards
Gaetano Mendola





Re: [GENERAL] OS not good for database

2004-10-12 Thread Gaetano Mendola
Simon Windsor wrote:
 Hi

 Can you provide a link to the interview?
Oops! I forgot it :-(
Here it is:  http://www.alwayson-network.com/comments.php?id=6186_0_4_0_C
Regards
Gaetano Mendola


Re: [GENERAL] Change query priority

2004-10-12 Thread Gaetano Mendola
Tom Lane wrote:
Michael Fuhr [EMAIL PROTECTED] writes:
I don't know how effective this would be, but you could wrap the
system call setpriority() in a user-defined function if your
platform supports it.  This would set the nice value of the
backend process, which might serve as a crude prioritization
mechanism.

Every couple of months someone comes along and says why don't you
provide a way to renice a backend ... but in point of fact it's
somewhere between useless and counterproductive for most query loads.
The useless part comes in because nice only affects CPU priority not
I/O priority, but I/O load is the thing that counts for most database
applications.  The counterproductive part comes in because of an
effect called priority inversion.  The niced-down process may be holding
a lock that is wanted by some higher-priority process --- but the kernel
scheduler knows nothing of that, and will leave the niced-down process
at the bottom of the queue, and thus the high-priority process is
effectively stuck at the bottom too.
Even without changing priorities we are not immune to priority inversion;
for example, the way semaphores are implemented in Linux doesn't prevent you
from being bitten, since, at least IIRC, the Linux kernel doesn't trace lock
chains... However, I'd be worried about priority inversion only if I had
hard deadlines, and having "hard deadline" and "database" in the same
sentence is like putting "Windows" and "security" in the same sentence...
I feel that renicing a backend will not kill your system.
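For what it's worth, Michael's setpriority() idea can be sketched from outside a UDF as well; the backend PID is assumed to come from pg_stat_activity, and, per Tom's caveat, this touches only CPU scheduling, not I/O:

```python
import os

def renice_backend(pid, increment=10):
    """Raise the nice value of a backend process by `increment` (Unix only).

    The pid is assumed to come from pg_stat_activity. This is the crude
    CPU-only prioritization discussed above: it does nothing for I/O, and
    it can trigger the priority-inversion effect Tom describes.
    """
    current = os.getpriority(os.PRIO_PROCESS, pid)
    # nice values are clamped to [-20, 19]; only root can lower them again
    new = min(19, current + increment)
    os.setpriority(os.PRIO_PROCESS, pid, new)
    return new
```

Raising a process's own nice value needs no privileges, which is why this works from an unprivileged session at all.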
Regards
Gaetano Mendola



Re: [GENERAL] Change query priority

2004-10-12 Thread Gaetano Mendola
Tom Lane wrote:
 Gaetano Mendola [EMAIL PROTECTED] writes:

I feel that renice a backend will not kill your system.


 It won't kill the system, but it probably won't accomplish what you
 hoped for, either.

That's true, but right now renicing a backend is the only way to proceed
in order to *try* to slow down some queries.

Regards
Gaetano Mendola



Re: [GENERAL] deadlock with vacuum full on 7.4.5

2004-10-12 Thread Gaetano Mendola
Joseph Shraibman wrote:
That is what I wanted to know, how to get the evidence for next time.
select * from pg_locks

Regards
Gaetano Mendola


Re: [GENERAL] 8.0 questions

2004-10-08 Thread Gaetano Mendola
David Garamond wrote:
Thomas Madsen wrote:
A quite impressive list of changes in version 8.0.0. ...
But the question from me is: When is it done? 

Two words: Nobody knows. Beta cycle is usually at least 2-3 months. 
First beta is in Aug. So a release is probably Nov at the earliest.

We have a lot of 7.2.5 versions running which badly needs an update, but
if version 8.0.0 comes in the near future, we could surely hold out a
while longer and go for the top of the line.

Unless you need a specific feature of 8.0 (savepoint, PITR, migration to 
Windows :)), it's generally recommended to use 7.4.5. 8.0's previous 
name was 7.5 and it was renamed to X.0 to reflect the fact that it is 
not believed to be more robust than 7.4.
Wait, wait. No X.0 version is believed to be more robust than a Y.Z release
with Y < X and Z > 0.
Anyway, the general consensus is that the win32 8.0 version is not as robust
as its *nix 8.0 counterpart, because a lot of code that is already mature in
the *nix environment is new in the win32 port.
Regards
Gaetano Mendola



Re: [GENERAL] interfaces for python

2004-10-07 Thread Gaetano Mendola
Steven Klassen wrote:
* Pierre-Frédéric Caillaud [EMAIL PROTECTED] [2004-10-07 10:45:57 +0200]:

I'd advise psycopg as the fastest one (by a factor of 10x on large
selects).

I second this recommendation.
Also because this interface is not affected by the idle-in-transaction
issue; like the latest JDBC driver version, it delays opening the
transaction until the first statement. I think the new Postgres RPM should
ship this interface instead of the PyGreSQL one.

Regards
Gaetano Mendola





Re: [GENERAL] VACUUM FULL on 24/7 server

2004-10-04 Thread Gaetano Mendola
Matthew T. O'Connor wrote:
| Gaetano Mendola wrote:
|
| Matthew T. O'Connor wrote:
|
| Since you are running autovacuum I doubt the doing vacuumdb -a -z is 3
| times a day buying you much.  It's not a bad idea to do once in a while.
|
|
|
| The reason is that I have a few tables of about 5 million rows with
| ~10,000 inserts per day. Even with -v 300 -V 0.1 this means these
| tables will be analyzed only every 50 days, so I have to force it.
|
|
| I understand, but 10,000 new rows increase your table size by only 0.2%, so
| it won't significantly affect anything.  Also, if they really are just
| inserts then vacuum is totally unnecessary.  I agree that for these
| situations pg_autovacuum should be supplemented by vacuumdb -a -z every
| once in a while, all I was pointing out was that 3 times a day is
| probably excessive.
Right, but the table collects logs, so it's mandatory to keep the
statistics up to date in order to get index scans for queries involving the
last 24 hours.
As for vacuum vs. analyze: I do vacuum because other tables are not in this
insert-only category, so instead of writing tons of lines in my crontab I
prefer only one.

Regards
Gaetano Mendola




Re: [GENERAL] VACUUM FULL on 24/7 server

2004-10-03 Thread Gaetano Mendola
Christopher Browne wrote:
 [EMAIL PROTECTED] (Aleksey Serba) wrote:

   Hello!

   I have 24/7 production server under high load.
   I need to perform vacuum full on several tables to recover disk
   space / memory  usage frequently ( the server must be online during
   vacuum time )


 The main thought is: Don't do that.

 It is almost certainly the wrong idea to do a VACUUM FULL.

 Assuming that the tables in question aren't so large that they cause
 mass eviction of buffers, it should suffice to do a plain VACUUM (and
 NOT a VACUUM FULL) on the tables in question quite frequently.
This is easy to say and almost impracticable. I run 7.4.5 with autovacuum:
pg_autovacuum -d 3 -v 300 -V 0.5 -S 0.8 -a 200 -A 0.8
I also have a vacuumdb -z -v -a running every six hours, and if I don't run
a VACUUM FULL for a week I collect almost 400 MB of dead rows :-(
For this reason, even on 7.4.5, I'm obliged to run a VACUUM FULL at least
once a week and a REINDEX once a month.
And my FSM parameters are large enough:
INFO:  free space map: 141 relations, 26787 pages stored; 26032 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 200 pages = 11780 kB shared memory.
Regards
Gaetano Mendola
PS: I do not have any idle in transaction connections around.
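The pg_autovacuum debug output quoted in another message in this archive
(reltuples 5579780.00, vacuum_threshold 2790190, analyze_threshold 4464024)
is consistent with the trigger formula described in the pg_autovacuum README,
base + scale * reltuples. A small sketch of that arithmetic with the
-v/-V/-a/-A values above (the formula's semantics are an assumption taken
from that README, not from this thread):

```python
# pg_autovacuum acts on a table when the number of changed/dead tuples
# exceeds base + scale * reltuples (assumed semantics of -v/-V and -a/-A,
# per the pg_autovacuum README).
def threshold(base, scale, reltuples):
    return base + scale * reltuples

# Settings from this thread: -v 300 -V 0.5 (vacuum), -a 200 -A 0.8 (analyze),
# applied to the user_logs reltuples from the quoted debug output.
reltuples = 5579780
print(threshold(300, 0.5, reltuples))  # vacuum threshold: 2790190.0
print(threshold(200, 0.8, reltuples))  # analyze threshold: 4464024.0
```

The computed values match the logged vacuum_threshold and analyze_threshold
exactly, which is some evidence the assumed formula is the right one.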


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] VACUUM FULL on 24/7 server

2004-10-03 Thread Gaetano Mendola
Tom Lane wrote:
Gaetano Mendola [EMAIL PROTECTED] writes:
Christopher Browne wrote:
Assuming that the tables in question aren't so large that they cause
mass eviction of buffers, it should suffice to do a plain VACUUM (and
NOT a VACUUM FULL) on the tables in question quite frequently.

This is easy to say and almost impraticable. I run a 7.4.5 with the autovacuum:

pg_autovacuum -d 3 -v 300 -V 0.5 -S 0.8 -a 200 -A 0.8

I'm not very familiar at all with appropriate settings for autovacuum,
but doesn't the above say to vacuum a table only when the dead space
reaches 50%?  That seems awfully lax to me.  I've always thought one
should vacuum often enough to keep dead space to maybe 10 to 25%.
The problem is that I cannot set these values per table or per database,
so I had to find some compromise; however, in the next days I will test
what happens with -V 0.2.
Also, every six hours I perform a vacuum on all databases, and the disk
usage continues to grow even with FSM parameters large enough.
I'll post about the new settings in a couple of days.
Regards
Gaetano Mendola


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] VACUUM FULL on 24/7 server

2004-10-03 Thread Gaetano Mendola
Matthew T. O'Connor wrote:
On Sun, 2004-10-03 at 21:01, Gaetano Mendola wrote:
Tom Lane wrote:
Gaetano Mendola [EMAIL PROTECTED] writes:

Christopher Browne wrote:
pg_autovacuum -d 3 -v 300 -V 0.5 -S 0.8 -a 200 -A 0.8
I'm not very familiar at all with appropriate settings for
autovacuum,
but doesn't the above say to vacuum a table only when the dead space
reaches 50%?  That seems awfully lax to me.  I've always thought one
should vacuum often enough to keep dead space to maybe 10 to 25%.

Yes that is what those options say.  The default values are even more
lax.  I wasn't sure how best to set them, I erred on the conservative
side.

The problem is that I can not set these value per table and per
database
so, I had to find some compromise, however I will test in the next
days
what happen with -V 0.2
However each six hour I perform a vacuum on all database and the HD
space
continue to grow even with FSM parameters large enough.

Since you are running autovacuum I doubt the doing vacuumdb -a -z is 3
times a day buying you much.  It's not a bad idea to do once in a while.
The reason is that I have a few tables of about 5 million rows with ~10000
inserts per day. Even with -v 300 -V 0.1 this means these tables will be
analyzed every 50 days. So I have to force it.
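The 50-day figure follows from the threshold arithmetic; a sketch, assuming
the base + scale * reltuples semantics for -v/-V from the pg_autovacuum
README and the ~10,000 inserts/day that Matthew quotes back in this thread:

```python
# An insert-only 5M-row table with -v 300 -V 0.1 needs
# base + scale * reltuples changed tuples before pg_autovacuum acts;
# at ~10000 inserts/day that takes about 50 days.
base, scale = 300, 0.1
reltuples = 5_000_000
inserts_per_day = 10_000

threshold = base + scale * reltuples
print(threshold)                    # 500300.0 tuples to accumulate
print(threshold / inserts_per_day)  # ~50 days between automatic analyzes
```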
Regards
Gaetano Mendola









Given the way Postgres works, it is normal to have slack space in your
tables.  The real question is do your table stop growing?  At some point
you should reach a stead state where you have some percentage of slack
space that stops growing.
You said that after running for a week you have 400M of reclaimable
space.  Is that a problem?  If you don't do a vacuum full for two weeks
is it still 400M?  My guess is most of the 400M is created in the first
few hours (perhaps days) after running your vacuum full.
Matthew
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] ODBC for PostgreSQL 7.4

2004-10-02 Thread Gaetano Mendola
Astha Raj wrote:
Hi All,
I want to connect to PostgreSQL 7.4 from my Windows machine. What ODBC
version is needed? Is there any other important settings required? I am very
new to this database.
Search on google:   odbc postgresql  and I'm feeling lucky.


Regards
Gaetano Mendola
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Foreign key order evaluation

2004-09-28 Thread Gaetano Mendola
Michael Fuhr wrote:
On Tue, Sep 28, 2004 at 01:30:08PM +, Randy Yates wrote:
Randy Yates [EMAIL PROTECTED] writes:
I'm confused. Where is the lock? Is it on the 1 record in the model table?

Yes.

If so, why is that record locked? Is it possible in Postgresql to update
the primary key of a record? 

When you insert a row that has a foreign key reference, PostgreSQL
does a SELECT FOR UPDATE on the referenced row in the foreign table;
the lock prevents other transactions from changing the referenced
row before this transaction completes.  Unfortunately it also
prevents other transactions from acquiring a lock on the same row,
so those transactions will block until the transaction holding the
lock completes.
There are proposals to add another kind of lock in order to avoid this.
I hope it arrives soon.
Regards
Gaetano Mendola



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Getting an out of memory failure.... (long email)

2004-09-28 Thread Gaetano Mendola
Sean Shanny wrote:
Tom,
The Analyze did in fact fix the issue.  Thanks.
--sean
Given that you are using pg_autovacuum, you have to consider a few points:
1) There is a buggy version out there that will not analyze big tables.
2) Autovacuum fails in scenarios with big tables that are not heavily
   updated or inserted into.
For 1) I suggest checking your logs to see how the total row count of
your table is displayed; the correct version shows the row count
as a float:
[2004-09-28 17:10:47 CEST]   table name: empdb.public.user_logs
[2004-09-28 17:10:47 CEST]  relid: 17220;   relisshared: 0
[2004-09-28 17:10:47 CEST]  reltuples: 5579780.00;  relpages: 69465
[2004-09-28 17:10:47 CEST]  curr_analyze_count: 171003; curr_vacuum_count: 0
[2004-09-28 17:10:47 CEST]  last_analyze_count: 165949; last_vacuum_count: 0
[2004-09-28 17:10:47 CEST]  analyze_threshold: 4464024; vacuum_threshold: 
2790190
For 2) I suggest running ANALYZE from cron during the day.

Regards
Gaetano Mendola
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Réf. : Re: R?f. : Re:

2004-09-28 Thread Gaetano Mendola
[EMAIL PROTECTED] wrote:
Thanks for your help,
I haven't tested with cc_r but with VisualAge C from IBM , and configure
CC=/usr/vac/bin/cc --enable-thread-safety works fine.
But when compiling, link edition fails, ld doesn't find threads entries
(like pthreads_mutex_lock...).
Each compilation that have failed was launched without any option for
multithreading.
I have tried to launch it with -lpthreads option, and that works fine.
When investigating, I have found that AIX5.2 (and 5.3) has a new library
named libthread.a (not present on AIX5.1 where compilation OK) . Maybe this
new threads library introduced for sunOs compatibility disturbs
configure?
With configure CC=/usr/vac/bin/cc CFLAGS=-lpthreads
--enable-thread-safety, compilation is OK.
I have launched gmake check and 1 of the 96 tests fails:
$ more
/postgres_util/postgresql-8.0.0beta2/src/test/regress/regression.diffs
*** ./expected/geometry.out Fri Oct 31 21:07:07 2003
--- ./results/geometry.out  Mon Sep 27 11:45:58 2004
***
*** 117,123 
  | (5.1,34.5) | [(1,2),(3,4)] | (3,4)
  | (-5,-12)   | [(1,2),(3,4)] | (1,2)
  | (10,10)| [(1,2),(3,4)] | (3,4)
! | (0,0)  | [(0,0),(6,6)] | (-0,0)
  | (-10,0)| [(0,0),(6,6)] | (0,0)
  | (-3,4) | [(0,0),(6,6)] | (0.5,0.5)
  | (5.1,34.5) | [(0,0),(6,6)] | (6,6)
--- 117,123 
  | (5.1,34.5) | [(1,2),(3,4)] | (3,4)
  | (-5,-12)   | [(1,2),(3,4)] | (1,2)
  | (10,10)| [(1,2),(3,4)] | (3,4)
! | (0,0)  | [(0,0),(6,6)] | (0,0)
  | (-10,0)| [(0,0),(6,6)] | (0,0)
  | (-3,4) | [(0,0),(6,6)] | (0.5,0.5)
  | (5.1,34.5) | [(0,0),(6,6)] | (6,6)
==
But I  think that's not bad.
Ouch. This is the same error already seen on OS X!! Tom Lane modified the
test to get rid of it on that OS. It's not bad; however, there are now
two platforms with that glitch.
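For anyone puzzled by the diff: the only change is (-0,0) vs (0,0), i.e.
IEEE 754 negative zero, which compares equal to +0.0 but may or may not be
printed with a sign depending on the platform's C library. A quick
illustration of the floating-point behaviour (in Python, purely to
demonstrate the concept, not part of the regression test):

```python
# IEEE 754 signed zero: -0.0 and 0.0 compare equal, but the sign bit is
# real, and whether printf-style formatting shows "-0" is platform- and
# library-dependent -- which is what the geometry regression diff tripped on.
import math

neg = -0.0
assert neg == 0.0                       # the two zeros compare equal
assert math.copysign(1.0, neg) == -1.0  # yet -0.0 carries a sign bit
print(repr(neg), repr(0.0))             # -0.0 0.0
```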
Regards
Gaetano Mendola





---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] Comparing a varchar of length 32

2004-09-23 Thread Gaetano Mendola
Christian Enklaar wrote:
Hello,
we are using a table with a primary key of type varchar[50].
If we try to find entries with select * from table where table.key =
'text';
entries with a key length of more than 32 characters are not found.
Entries with a shorter key are found. Using Like instead of = works for
varchar keys with length  32 as well.
Does anybody know about this Problem ?
(We use PostgresQL 7.4.1)
I think you are omitting some information.
It works here on 7.4.5 and I'm not aware of any bug like this in previous
versions.
test=# \d test
Table public.test
 Column | Type  | Modifiers
+---+---
 a  | character varying(50) | not null
Indexes:
test_pkey primary key, btree (a)
test=# select a, length(a) from test;
 a  | length
+
 01234567890123456789012345678901234567890123   | 44
 0123456789012345678901234567890123456789   | 40
 01234567890123456789012345678901234567890123456789 | 50
(3 rows)
test=# select length(a) from test where a = 
'01234567890123456789012345678901234567890123';
 length

 44
(1 row)
Just an idea, could you reindex your table ?
Regards
Gaetano Mendola
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Number of Active Connections

2004-09-13 Thread Gaetano Mendola
MaRCeLO PeReiRA wrote:
Hi guys,
How can I know about the number of active
connections?? (not the maximum allowed, but the number
of open connections).
If you have enough permission:
select count(*) from pg_stat_activity;

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] What is the postgres version of mysql's ON DUPLICATE KEY

2004-09-10 Thread Gaetano Mendola
Nick wrote:
I have a table with columns
(product_id,related_product_id,related_counter)
If product A is related to product B then a record should be created,
if the record already exists then the related_counter should be
incremented.
This is very easy to do with MySQL using INSERT... ON DUPLICATE KEY.
Standard or not, it is very usefull.
Is there a way to catch the insert error. For example...
INSERT INTO related_products (product_id,related_product_id) VALUES
(?,?);
IF (???error: duplicate key???) THEN
UPDATE related_products SET related_counter = related_counter + 1;
END IF;
-Nick
With a rule you could do it easily (though I have never tried).
Regards
Gaetano Mendola


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Salt in encrypted password in pg_shadow

2004-09-10 Thread Gaetano Mendola
Greg Stark wrote:
Gaetano Mendola [EMAIL PROTECTED] writes:

Well, when SHA-0 was ready the NSA suggested applying some changes to
correct a flaw that had been discovered, and SHA-1 came out; interestingly,
the NSA never wrote which flaw was corrected!
Maybe SHA-1 is transparent water to NSA eyes :-)

This is awfully similar to the story that's told about DES:
When DES was under development the NSA told people to try a few specific
constants for the sboxes stage of the cipher. As far as anyone at the time
could tell they were completely random values and nearly any value would have
been just as good.
Then 30 years later when differential cryptanalysis was invented people found
the values the NSA told them to use are particularly resistant to differential
cryptanalysis attacks. Almost any other values and DES would have fallen right
then.
This means it's quite possible the NSA had differential cryptanalysis 30 years
before anyone else. Quite a remarkable achievement. However it's unlikely that
the same situation holds today. 30 years ago nobody outside the government was
doing serious cryptanalysis. If you were a mathematician interested in the
field you worked for the NSA or you changed fields. These days there's tons of
research in universities and in the private sector in serious cryptanalysis.
The NSA still employs plenty of good cryptanalysts but they no longer have the
monopoly they did back then.
I invite you to repeat that same sentence in 2034... :-)

Regards
Gaetano Mendola
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] postgresql hanging (blocking) with smp kernel

2004-09-09 Thread Gaetano Mendola
Marcel Groner wrote:
I have a problem with postgresql runnung on smp kernel.
setup:
master:
---
- Pentium 4 (hyperthreading)
- 2 GB Memory
- os: fedora core 1
- kernel: 2.4.22-1.2188.nptlsmp
- postgresql: 7.4.3-1PGDG
slave 1:

- Pentium 4 (hyperthreading)
- 2 GB Memory
- os: fedora core 1
- kernel: 2.4.22-1.2115.nptlsmp
- postgresql: 7.4.3-1PGDG
slave 2:

- Double Xeon (with hyperthreading)
- 2 GB Memory
- os: fedora core 1
- kernel: 2.4.22-1.2199.nptlsmp
- postgresql: 7.4.3-1PGDG
the replication is made by hand. reading from slave 1 or slave 2
(balanced) and wrtiting to master, slave 1 and slave 2. Our site is a
high-traffic site (the biggest dating-site in switzerland:
www.swissflirt.ch) with  1400 concurrent users and  40'000 visits
per day.
master and slave 1 (with pentium 4) are working perfectly with
smp-kernel. slave 2 (with double xeon) has big problems. running p.e.
with kernel 2.4.22 (non smp) works also but of course only one CPU is
used. when I use the smp kernel, the connections (and queries) are
hanging (blocking) after some short time. Also when I shutdown the
application (using java and jdbc to connect to the databases) the
postgres-processes (on slave 2) keep existing while the processes on
master and slave 1 shutdown propertly.
Hanging on what? I'm running postgres on a similar server (with HT enabled
too, so it looks like 4 CPUs) without any problem at all.
Show us the output of:
select * from pg_stat_activity;
select * from pg_locks;
Regards
Gaetano Mendola



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] Gentoo for production DB server?

2004-09-07 Thread Gaetano Mendola
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Jeremiah Elliott wrote:
| Gaetano Mendola wrote:
|
| Barry S wrote:
|
| In article [EMAIL PROTECTED], Christine Desmuke wrote:
|
| Hello:
|
| At the risk of starting a flame-war, I'd like some more details on the
| use of Gentoo Linux for a production PostgreSQL server. There have been
| a couple of comments lately that it is not such a great idea; does
| anyone have specific experience they'd be willing to share?
|
|
| snip
|
| I'm an ex-Gentoo admin, not because gentoo isn't fun, just that you need
| to really really like to constantly fiddle with it to keep it happy.
|
| The worst thing is to have not done an 'emerge world' in 2 months, only
| to discover that there are now 99 pending updates.
|
|
|
| Were you obliged to install them all?
|
| Gaetano
| I use gentoo and RHEL.  My biggest beef with redhat is that their rpm of
| postgres is of a rather old version, so I end up downloading the source
| tar and compiling it my self. Also I would really like to be running XFS
| on all my databases servers, but the only fs I can run on the redhat
| servers is ext3.
| -jeremiah
And how could RH deliver a Postgres upgrade if it requires an initdb?
The reason, as already discussed, is the lack of a pg_upgrade that would
permit upgrading without performing a cycle of:
dump-install-initdb-crossedfingers-reload
and even if pg_upgrade existed, I would surely not trust my data to
an automatic upgrade.
Regards
Gaetano Mendola

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.4 (MingW32)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org
iD8DBQFBPidp7UpzwH2SGd4RAtaEAKDSVWqGJyu0QW2XIjPyaLZaQCSpcQCdHOOO
ZMkeVbecaZEslFsnNslMAfE=
=B1ns
-END PGP SIGNATURE-
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Gentoo for production DB server?

2004-09-03 Thread Gaetano Mendola
Barry S wrote:
In article [EMAIL PROTECTED], Christine Desmuke wrote:
Hello:
At the risk of starting a flame-war, I'd like some more details on the
use of Gentoo Linux for a production PostgreSQL server. There have been
a couple of comments lately that it is not such a great idea; does
anyone have specific experience they'd be willing to share?
snip
I'm an ex-Gentoo admin, not because gentoo isn't fun, just that you need
to really really like to constantly fiddle with it to keep it happy.
The worst thing is to have not done an 'emerge world' in 2 months, only
to discover that there are now 99 pending updates.
Were you obliged to install them all?
Gaetano



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] postgres on in the internet

2004-09-03 Thread Gaetano Mendola
Mike Mascari wrote:
Paul Tillotson wrote:
At my company we are looking at deploying clients for our
client/server app outside our firewall, which will then require
our postgres box to be internet-accessible. Does anyone out there
have experience with this or recommended best practices?  We have
been looking at either (a) tunnelling everything over ssh, or (b)
just making sure that users have strong passwords and requiring
md5 authentication in pg_hba.conf.
Our client app is in C# using the postgresql .net data provider.

Is the .net provider capable of an SSL connection? I'd be hesitant to 
throw around data over the Internet without using SSL for all the 
various reasons: DNS hijacking, TCP replay, etc.
If not you can tunnel it.

Regards
Gaetano Mendola


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] Can we return a table from a function?

2004-09-02 Thread Gaetano Mendola
Arundhati wrote:
Hi
I want the result of select query to be returned from a function. Or
is there any way to return a table from a function?
What you are looking for is a table function:
http://www.postgresql.org/docs/7.3/interactive/xfunc-tablefunctions.html
http://www.postgresql.org/docs/7.4/interactive/plpgsql-control-structures.html#PLPGSQL-STATEMENTS-RETURNING
I can't find the 7.4 equivalent of the first link.
Regards
Gaetano Mendola

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] Performance on 7.4 vs 7.2?

2004-09-01 Thread Gaetano Mendola
Pablo S wrote:
Hi there Pg admins,
I have 2 systems, one OLD,
(linux 2.4 running postgresql-7.2.1-5 with a perl web db on
Apache/1.3.23 mod_perl 1.26 and dbi 1.37 with peristent db connections
via apache:dbi.  )
The other system is NEW
(running the same web app on linux 2.6 postgresql-7.4.2-1 w/
Apache/2.0.49  mod_perl/1.99_12  apache:dbi. fedora core 2)
Both systems have almost identical hardware, and have had the same
tweaks made to pg - at least all I can rack out of my brain 
their SYSV shared mem increased to 128mb
shared_buffers = 15200 
sort_mem = 32168
effective_cache_size = 4000 
I don't know what changed from 7.2 to 7.4, but effective_cache_size is
too small IMO, or at least incongruous with sort_mem.
Try raising that parameter to something more realistic.
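In 7.x, effective_cache_size is expressed in 8 kB disk pages, so the posted
value of 4000 tells the planner only about 31 MB of cache exists on a 2 GB
machine. A sketch of the unit conversion; the one-half-to-two-thirds-of-RAM
figure is a common rule of thumb, an assumption here rather than an official
recommendation:

```python
# effective_cache_size is counted in 8 kB pages; convert both ways to see
# why 4000 pages is tiny for a 2 GB box.
PAGE_KB = 8

def pages_to_mb(pages):
    return pages * PAGE_KB / 1024

def mb_to_pages(mb):
    return mb * 1024 // PAGE_KB

print(pages_to_mb(4000))           # 31.25 MB -- the setting from the mail
print(mb_to_pages(2048 * 2 // 3))  # 174720 pages for ~2/3 of 2 GB RAM
```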

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Conditional foreign key?

2004-08-31 Thread Gaetano Mendola
Benjamin Smith wrote:
We have a list of customers, some of whom have purchased feature X and some of 
whom have not. If a customer has paid for featurex, they can use it, and a 
strict relationship between cust_items.items_id and items.id, but only if 
they are signed up to use featurex, otherwise I want cust_items.items_id to 
be NULL. 

Currently, I have tables defined similar to: 

create table Customer (
id serial unique not null, 
name varchar(30) unique not null, 
FeatureX bool not null
); 

Create table cust_items (
id serial unique not null, 
customer_id integer not null references customer(id), 
name varchar(30) not null, 
type varchar not null, 
items_id integer default null references featurex(id), 
cust_active bool not null 
); 

// type is one of book, tape, or featurex
Create table items (
id serial not null unique, 
title varchar(30)
); 

I want to say 
If the cust_items.type='featurex' then (
	(customer.featurex must be true) 
	AND 
	(cust_items.items_id must be in 
		(select id from items) 
	); 

I'm just stumped as to how to say this. 

I've tried, with the above table defs, 
CREATE RULE check_customer ON UPDATE to cust_items 
	WHERE NEW.type='featurex' AND 
	NEW.customer_id IN 
		(SELECT customer.id FROM customer 
		WHERE featurex=TRUE
		)
	DO ... ? too many tries to count 

Any pointers, hints, or info on this kind of statement? 

This is a job for a trigger, not for a rule.
Regards
Gaetano Mendola




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Forcing a stored procedure recompile

2004-08-31 Thread Gaetano Mendola
Mike McGavin wrote:
Hello everyone.
Can anyone suggest if there's a way to force a plpgsql stored procedure 
to be recompiled every time that it's called, based on the values of the 
parameters that it's given?  I assumed it would be possible, but 
unfortunately haven't been able to find any documentation on how to 
force a recompile at all, let alone automatically.

I've encountered a situation where the standard precompiled generic 
query plan isn't working at all well with the unknown variables that the 
stored procedure receives.  It seems to do nicely if I replace them with 
constants, though.  (Compilation time isn't really an issue in this 
situation, but I'd like to leave everything in the stored procedure for 
other reasons.)

I could probably re-write the procedure to concatenate a string 
containing the unknowns as constants and then EXECUTE it.  Doing it that 
way seems a bit ugly, though, and it'd make maintenance a bigger problem.

Right now, unfortunately, this is the only solution.
See the post on the performance list (8/28/2004) with the title:
ill-planned queries inside a stored procedure

Regards
Gaetano Mendola
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Connection to a PG 8.0 Beta 1 win32 server

2004-08-24 Thread Gaetano Mendola
Secrétariat wrote:

Hello !
I've installed the Beta 1 on Win XP Pro, it seem working correctly.
I load a database from Linux 7.4.3 with pgdumpall, it works too.
But I can't connect from other PC over the LAN (I modified pg_hba.conf
for the hosts).
If I write in postgresql.conf :
tcpip_socket = true
port = 5432
I can't connect from the LAN,  NEITHER from the local machine ?!
Where I've made a mistake ?
Show us your pg_hba.conf; what is the exact error?
Beta 1 Win32 server at 192.168.0.10,
W2k client at 192.168.0.11.
My pg_hba.conf :
# TYPE DATABASE USER IP-ADDRESS IP-MASK METHOD
local all all md5
# IPv4-style local connections
host all all 127.0.0.1 255.255.255.255 md5
host all all 192.168.0.0 255.255.255.255 md5
# IPv6-style local connections:
#host all all ::1/128 md5
If you want to allow the whole 192.168.0.0 network then
your netmask has to be 255.255.0.0 (255.255.255.255 matches only the single
address 192.168.0.0; use 255.255.255.0 if you only need 192.168.0.x).
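To see what each address/mask pair in a pg_hba.conf line actually admits,
Python's ipaddress module works as a convenient calculator (this is only a
checking aid on the admin's side, not part of PostgreSQL):

```python
# Which clients each pg_hba.conf IP-ADDRESS/IP-MASK pair would admit,
# tested against the W2k client at 192.168.0.11 from the thread.
import ipaddress

client = ipaddress.ip_address("192.168.0.11")

for net, mask in [
    ("192.168.0.0", "255.255.255.255"),  # the posted line: one host only
    ("192.168.0.0", "255.255.0.0"),      # all of 192.168.*.*
    ("192.168.0.0", "255.255.255.0"),    # just 192.168.0.x
]:
    network = ipaddress.ip_network("%s/%s" % (net, mask))
    print(network, client in network)
```

This prints False for the /32 line (which is why the original pg_hba.conf
rejected LAN clients) and True for the /16 and /24 variants.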

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Python and 8.0 beta

2004-08-24 Thread Gaetano Mendola
Clodoaldo Pinto Neto wrote:
Are there any python drivers that work with the version 8 beta?

The version seven ones didn't.


 This script is working with version 7.4.2, FC2, python 2.3.3

 [SNIP]
Could you test the following script and let me know which error you encounter:
#!/usr/bin/python
import pgdb

if __name__ == "__main__":
    connection = pgdb.connect(
        user = 'user',
        password = 'password',
        host = '127.0.0.1',
        database = 'database')
    if connection == None:
        raise "Could Not Connect"
    cursor = connection.cursor()
    cursor.execute('select version()')
    result = cursor.fetchall()
    cursor.close()
    while type(result) is type([]):
        result = result[0]
    print result

this is working correctly on my 8.0beta1 installation
Regards
Gaetano Mendola





---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] No connection to a PG 8.0 Beta 1 win32 server

2004-08-24 Thread Gaetano Mendola
Secrétariat wrote:
I don't have a telnet server on the Win XP Pro PC acting as PG server for
Beta1 !
So? If you do: telnet your server 5432
the command only opens a TCP connection to port 5432;
this will test whether you are able to reach your server.
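The same reachability check can be scripted where no telnet client is
installed; a minimal sketch, where check_port is a hypothetical helper and
the host/port in the demo line are placeholders for your server:

```python
# A tiny stand-in for "telnet host 5432": try to open a TCP connection
# and report reachability. Only the TCP handshake is tested, nothing
# PostgreSQL-specific.
import socket

def check_port(host, port, timeout=3.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder target: substitute your server's address here.
    print(check_port("127.0.0.1", 5432))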
Regards
Gaetano Mendola
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Postgresql 8.0 beta 1 - strange cpu usage statistics and slow

2004-08-24 Thread Gaetano Mendola
Shelby Cain wrote:
--- Tom Lane [EMAIL PROTECTED] wrote:
Ah-hah.  The win32 hackers should confirm this, but
my recollection is
that sync/fsync are no-ops under Cygwin (one of the
several reasons
we would never recommend that port for production
use).  So this would
fit the assumption that the 7.4 code was simply not
syncing.

Sounds reasonable.  However, I don't see the same
performance hit while doing bulk database operations
(ie: inserts, deletes, updates).  Is that expected
behavior?  Do vacuum operations fsync()/_commit() more
often?
I think you have other problems around; see the post where
I did some tests.
Regards
Gaetano Mendola

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Connection to a PG 8.0 Beta 1 win32 server

2004-08-23 Thread Gaetano Mendola
Secrétariat wrote:
Hello !
 
I've installed the Beta 1 on Win XP Pro, it seem working correctly.
I load a database from Linux 7.4.3 with pgdumpall, it works too.
But I can't connect from other PC over the LAN (I modified pg_hba.conf 
for the hosts).
If I write in postgresql.conf :
tcpip_socket = true
port = 5432
I can't connect from the LAN,  NEITHER from the local machine ?!
Where I've made a mistake ?
Show us your pg_hba.conf; what is the exact error?

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] pg_dump in stand alone backend

2004-08-23 Thread Gaetano Mendola
Ulrich Wisser wrote:
Hi,
I would like to stop the postmaster every night and run
vacuum
pg_dump
reindex
in the stand alone backend.
Vacuum and reindex seem to be quite easy, as I can setup a small script 
with both commands. But what about pg_dump. That seems somewhat more 
complex.
Explain exactly what you are trying to do: why do you have to stop the
postmaster? If your request is only about forbidding access, you can
replace pg_hba.conf with an empty one and restore it at the end of the
operations.
BTW, vacuum, pg_dump and reindex are operations that can be performed
with the server up and running.
Regards
Gaetano Mendola

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Unsupported 3rd-party solutions (Was: Few questions

2004-08-23 Thread Gaetano Mendola
Bruce Momjian wrote:
Csaba Nagy wrote:
Hi all,
Bruce, if postgres is not a company and so on, why don't you open up the
core development team to include some of the contributors who would like
to include their product in the main distribution, and have a bundled
product ? Cause a good data base is definitely not made up just by the
core, but all the rest too. And I'm sure that there are many people out
there who would use a PL/Java if they would find it in the main
distribution, and that's all you have to do for this to happen: include
it. Users of postgres are busy too, and some of them will never notice
what they are missing.
Now it's very clear to me that if all the extensions would be bundled,
it would be too much, and that there's no commitee to steer what
should go in or out... but then maybe a vote would help ? If the
contributor wants it in the core, a vote of the interested would be
quite relevant.

We are not adverse to someone taking the core db code, adding other
stuff, and making a new super distribution.
And? Put it on www.postgresql.org?
Are you proposing a sort of
linux kernel : Red Hat = postgresql : super distribution ?
Is this the direction the core team is following? Is the time ripe for it?

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Few questions on postgresql (dblink, 2pc, clustering)

2004-08-22 Thread Gaetano Mendola
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Jim Worke wrote:
| On Sunday 22 August 2004 11:02, Bruce Momjian wrote:
|
|2-phase isn't in 8.0 but I expect it in 8.1.
|
|
| Is it possible to know when is 8.1 going to be released for production (an
| estimate)?
Consider that 8.0 will be released *maybe* toward the end of this year.
A development cycle between two releases is usually 9 months, plus 3 months
of beta, so let me say: 8.1 will be released in about 12 months. The core
team will try to have a shorter cycle for 8.1, but I would not bet on it.
|Basically, our concern is that dblink, 2PC implementation are there, but
|not in the PostgreSQL mainstream.
|
|You need to understand the limitations of dblink and see if it will work
|for you.  I can't imagine MySQl is allowing you to do this cleanly so I
|don't see why it would hold up a MySQL - PostgreSQL migration.
| Hmm... forgive me for saying it wrongly.  We're actually thinking of
| migrating to PostgreSQL.  Here's our case:
|
| We're going to do a major upgrading on our PHP code (from PHP 3 style to PHP
| 5.0), and was thinking of changing the database to PostgreSQL too.
| Currently, the number of transaction is not high, but we'd like to have a
| more scalable solution.
|
| MySQL does not allow cross-server database connection such as dblink.  So,
| we're thinking of 3 alternatives:
|
| 1) Wait for MySQL clustering to be stable and put all our databases in the
| cluster
| 2) Migrate to PostgreSQL and use dblink to solve the referential integrity
| 3) Migrate to PostgreSQL clustering solution
May I ask why you are stuck on the idea of spreading your database among
various servers? Free yourself from the MySQL mindset. If your goal is a
horizontal-scaling solution, then open your wallet and buy Oracle.
PostgreSQL scales very well vertically.
ScienceFiction
Another solution is to hack the postmaster so that two parallel
postmasters run on the same server (first phase); once you have done
that successfully, the second phase (a hack too) is to buy hardware
that permits several servers to share a single shared memory segment.
Then, with the help of a SAN, you can have two postmasters running on
two different servers that belong to the same SAN and share a common
shared memory segment.
/ScienceFiction
Right now your only solution is to buy a multiprocessor machine.
Regards
Gaetano Mendola


Re: [GENERAL] Help with optimizing query

2004-08-20 Thread Gaetano Mendola
Marek Lewczuk wrote:
Hello,
I have a query which is quite big, and there is a huge difference 
between its execution time in MySQL and PostgreSQL. I think that I have taken 
all possible steps to increase the speed of the query, but unfortunately 
it is still about 100 times slower. I'm out of ideas about what to do next, 
so maybe you can point me to what I should do. In the attachment I send you 
the result of EXPLAIN ANALYZE.

I would appreciate any help. Thanks in advance.
Where is the query?
The table definitions could also help.
A first look suggests that the statistics are not up to date, or that you
need to increase your default_statistics_target.
337 rows  vs  3618 rows
Regards
Gaetano Mendola
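
As a concrete illustration of that last suggestion (the table and column
names here are made up, since the original query was only attached), the
per-column statistics target can be raised and the statistics refreshed
like this:

```sql
-- Hypothetical names: raise the statistics target for a skewed column,
-- then re-ANALYZE so the planner picks up the finer-grained histogram.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;
```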





Re: [GENERAL] Finally tsearch works ... somehow... remain a few

2004-08-19 Thread Gaetano Mendola
Oleg Bartunov wrote:
Marcel,
On Thu, 19 Aug 2004, Marcel Boscher wrote:

For now I am almost satisfied with my tsearch2 installation; the overnight
war seems to be over and somehow it finally works...
What does tsearch2 have that htdig doesn't (for indexing documents, I mean)?

Regards
Gaetano Mendola




Re: [GENERAL] Postgresql 8.0 beta 1 - strange cpu usage statistics and slow

2004-08-19 Thread Gaetano Mendola
Shelby Cain wrote:
I'm putting 8.0 through its paces and here are a few
things I've noticed on the native win32 port running
on my workstation (2.0g p4 w/256 megs of ram).
Here is the output of vacuum verbose item:

INFO:  vacuuming public.item
INFO:  item: removed 246381 row versions in 24044
pages
DETAIL:  CPU -1.-1612s/-1.99u sec elapsed 1434.79 sec.
INFO:  item: found 246381 removable, 492935
nonremovable row versions in 50413 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 100991 unused item pointers.
0 pages are entirely empty.
CPU 1081264882.-821s/0.02u sec elapsed 1682.87 sec.
Query returned successfully with no result in 1683460
ms.

As you can see the cpu statistics are obviously bogus
although the elasped time is correct.
My other concern is the length of time that vacuum
runs when cost based vacuuming is disabled.
Under 8.0, if I run an update statement (update item
where set cost = cost + 0 where country = 'US' [causes
an update w/o really changing data]) that updates half
the rows in the table (~250k out of 500k - average
tuple width is about 500 bytes) and then execute a
regular vacuum it takes approximately 1400 seconds to
complete.  A vacuum full performed immediately after
takes on the order of 2000 seconds to complete.  
On Windows XP with 8.0beta1 I'm seeing different values:
after updating 800K rows, the plain vacuum
takes 200 seconds and the vacuum full immediately after
takes 620 seconds.
In both cases the CPU usage was near zero.
I'm using a 2.2 GHz machine with 1 GB of RAM, and 64 MB for work_mem.

Regards
Gaetano Mendola





Re: [GENERAL] Thousands of parallel connections

2004-08-16 Thread Gaetano Mendola
Tom Lane wrote:
Michal Taborsky [EMAIL PROTECTED] writes:
Peter Eisentraut wrote:
Is there any practical limit on the number of parallel connections that a 
PostgreSQL server can service?  We're in the process of setting up a system 
that will require up to 10,000 connections open in parallel.  The query load 
is not the problem, but we're wondering about the number of connections.  
Does anyone have experience with these kinds of numbers?

No experience, but a little thinking and elementary school math tells 
me that you'd need a huge amount of RAM to support 10,000 connections, 
since Postgres is multi-process. Our typical Postgres process eats 5-40 
megs of memory, depending on activity. So even if it was just 5 megs, 
with 10k connections we are talking about 50G of RAM. If these 
connections are idle, it would be a plain waste of resources.

5-40 megs sounds high, unless you run very complex queries.  I wonder
whether you aren't counting Postgres shared memory in that per process
figure.  (Most implementations of top are not very good about
distinguishing shared and private memory, FWIW.)
But even estimating just a meg or two of private space apiece, the total
is daunting.
Last week I did an Ariadne + PostgreSQL evaluation for the company where I
work, and I learned that:
with 250 MB you can open up to  80 concurrent queries
with 500 MB you can open up to 120 concurrent queries
from then on, each additional 250 MB buys you ~40 more connections.
If you break these rules, the machine thrashes...
Peter, for 10,000 connections you would then need 61 GB, which is quite amazing :-)
Regards
Gaetano Mendola


Re: [GENERAL] Does a 'stable' deferred trigger execution order exist?

2004-08-16 Thread Gaetano Mendola
Frank van Vugt wrote:
If during a transaction a number of deferred triggers are fired, what
will be their execution order upon the commit?

Should be alphabetical within each triggering event, IIRC.

Mmm, yes, but are all the deferred triggers on the same event 'grouped'?
What I'm thinking about is something like:
BEGIN;
update foo1;= triggers deferred after insert trigger 'Z'
select bar;
update foo2;= triggers deferred after insert triggers 'B' and 'A'
COMMIT;
Now, will I see an execution order of 'Z-A-B' (alphabetical per event per 
statement) or 'A-B-Z' (alphabetical per event per transaction)?

For what I want to accomplish, I don't care about the order of A/B, but 
together they depend on the work that is done by the earlier triggered Z.
The best way to find out is to RAISE NOTICE inside each trigger function
and observe the results.
Regards
Gaetano Mendola
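
To make that concrete, here is a minimal sketch (table and trigger names are
invented) of a deferred constraint trigger whose function calls RAISE
NOTICE, so the actual firing order becomes visible at COMMIT:

```sql
CREATE OR REPLACE FUNCTION trace_trigger() RETURNS trigger AS '
BEGIN
    -- Announce which trigger fired on which table.
    RAISE NOTICE ''trigger % fired on %'', TG_NAME, TG_RELNAME;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

-- Only constraint triggers can be deferred; the notices then show up
-- at COMMIT time, revealing the execution order.
CREATE CONSTRAINT TRIGGER "Z" AFTER INSERT ON foo1
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE trace_trigger();
```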



Re: [GENERAL] Thousands of parallel connections

2004-08-16 Thread Gaetano Mendola
Tom Lane wrote:
| Gaetano Mendola [EMAIL PROTECTED] writes:
|
|I did last week an Ariadne+Postgresql valutation for the company where I work
|and I learned that
|with 250 MB you can open up to  80 concurrent query
|with 500 MB you can open up to 120 concurrent query
|from now on for each 250MB you can have ~40 connections more
|
|
| That does not add up: the graph can't have a negative y-intercept.
| There should be a substantial cost to run the postmaster at all,
| and then an essentially fixed cost per connection --- assuming
| that all the connections are running similar queries, of course.
| You're telling us the first 40 connections require zero RAM.
I was not speaking about a single process's memory consumption; I was
speaking in general, and indeed, I don't know why, but it seems the first
concurrent queries are less expensive. I was able to confirm this
rule up to 2 GB; I don't know what happens beyond that.
BTW, the machine is a single processor with HT enabled.
Regards
Gaetano Mendola




Re: [GENERAL] BIGINT indexes still with problems

2004-08-10 Thread Gaetano Mendola
Dan Ruthers wrote:
| Now, if I run this query (note the int8 cast - also tried with the '' cast to 
String, same results):
| test=# explain select * from dmaildatum where idparent=int8(783219);
| QUERY PLAN
| --
|  Seq Scan on dmaildatum  (cost=0.00..2241.71 rows=2229 width=272)
|Filter: (idparent = 783219::bigint)
| (2 rows)
|
| The index is not used. But with an identical query, only different parameter value:
| desknow=# explain select * from dmaildatum where idparent=int8(1187838);
|   QUERY PLAN
|
| 
| ---
|  Index Scan using ix_dmaildatum_idparent on dmaildatum  (cost=0.00..284.05 rows=
| 102 width=272)
|Index Cond: (idparent = 1187838::bigint)
| (2 rows)
|
| The index is used!
| I also did a vacuum analyze, and restarted Postgres and it did not make any 
difference.
| I tried many other ID values (ex 783218 and 783220), and they seem to use the index 
correctly. Only that value doesn't.
|
| Can anyone explain why Postgres behaves differently in these two cases, or at least 
point to some hints?
Because this means that a sequential scan is better for that value.
Perform these selects:
(1) select count(*) from dmaildatum;
(2) select count(*) from dmaildatum where idparent=int8(783219);
(3) select count(*) from dmaildatum where idparent=int8(1187838);
I bet that the ratio (2)/(1) is greater than (3)/(1).
Now show us the following results:
explain analyze select * from dmaildatum where idparent=int8(783219);
explain analyze select * from dmaildatum where idparent=int8(1187838);
and repeat them again, but executing this first:
set enable_seqscan = off;

Depending on the results that you get, you may need to lower the index
scan cost by tuning the CPU-related GUC variables.

Regards
Gaetano Mendola
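
For reference, the cost settings can be changed per session before
re-running the EXPLAIN; the value below is purely an illustrative starting
point, not a recommendation:

```sql
-- Illustrative only: a lower random_page_cost makes index scans look
-- cheaper to the planner (the default in this era is 4).
SET random_page_cost = 2;
EXPLAIN ANALYZE SELECT * FROM dmaildatum WHERE idparent = int8(783219);
```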








Re: [GENERAL] PostgreSQL 8.0.0 Officially Goes Beta

2004-08-10 Thread Gaetano Mendola
Marc G. Fournier wrote:
| On Tue, 10 Aug 2004, Rajesh Kumar Mallah wrote:
|
|
| The beta link under  ftp://ftp3.us.postgresql.org/pub/postgresql/
| and possible all mirrors leads to nowhere. I guess many people
| would click there.
|
|
| Already fixed ... s
|
It doesn't work; I had only checked the URL, not the link. Testing the link,
this is what Squid says:

The following URL could not be retrieved: 
ftp://ftp3.us.postgresql.org/pub/postgresql/beta
Squid sent the following FTP command:
RETR beta
and then received this reply
beta: No such file or directory.
This might be caused by an FTP URL with an absolute path (which does not comply with 
RFC 1738).
If this is the cause, then the file can be found at 
ftp://ftp3.us.postgresql.org/%2f/pub/postgresql/beta.

Regards
Gaetano Mendola



Re: [GENERAL] PostgreSQL 8.0.0 Officially Goes Beta

2004-08-10 Thread Gaetano Mendola
Marc G. Fournier wrote:
For a complete list of changes/improvements since 7.4.0 was released, 
please see:

http://developer.postgresql.org/beta-history.txt
I think it would be better to state in the Win32 Native Server section that
Postgres is only available on Win32 with the NTFS file system.

Regards
Gaetano Mendola


Re: [GENERAL] PostgreSQL 8.0.0 Officially Goes Beta

2004-08-10 Thread Gaetano Mendola
Rajesh Kumar Mallah wrote:
|
| The beta link under  ftp://ftp3.us.postgresql.org/pub/postgresql/
| and possible all mirrors leads to nowhere. I guess many people
| would click there.
It works for me.
Regards
Gaetano Mendola



Re: [GENERAL] Create Table with Foreign Key Error

2004-08-10 Thread Gaetano Mendola
John Haney wrote:
| Postgresql 7.4.3-1 under Cygwin.
|
| I created a table called ServerTypes:
|
| CREATE TABLE ServerTypes(
|   ServerTypeID SERIAL UNIQUE NOT NULL,
|   Type TEXT PRIMARY KEY);
|
| Works fine.
|
| Now, I want to create a table called servers with a Foreign Key
| referencing ServerTypes.Type:
|
| CREATE TABLE Servers(
|   ServerID SERIAL UNIQUE NOT NULL,
|   Type REFERENCES ServerTypes (Type),
|   Server TEXT PRIMARY KEY);
You forgot to specify the data type for the field Type.

Regards
Gaetano Mendola
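
For completeness, the corrected statement with the missing type added (TEXT,
to match the referenced column) looks like this:

```sql
CREATE TABLE Servers(
  ServerID SERIAL UNIQUE NOT NULL,
  Type TEXT REFERENCES ServerTypes (Type),
  Server TEXT PRIMARY KEY);
```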




Re: [GENERAL] getting dead locks with 2 functions

2004-08-06 Thread Gaetano Mendola
Amir Zicherman wrote:
| i am running multiple threads that are calling this function at the
| same time.  i want to be able to do that and have the locking in
| postgresql take care of locking the selected rows of each thread.  why
| is the function not thread safe? how do i make it so it is?
|
| thanx, amir
|
| On Fri, 06 Aug 2004 10:54:07 +0200, Gaetano Mendola [EMAIL PROTECTED] wrote:
|
| Amir Zicherman wrote:
|
| | I have the following 2 functions and I'm getting deadlocks when I call
|
|
| | them from multiple threads.  The first, I'm not sure why because I'm
| | doing a select for update.  The second I'm doing an insert on, and I
| | thought insert will automatically do a lock as it inserts:
| |
| | -FUNCTION 1: -
| |
| | CREATE OR REPLACE FUNCTION
| | public.select_pend_visitation_for_unvisited_links(int4)
| |   RETURNS SETOF record AS
| | '
| | DECLARE
| | urlrow RECORD;
| | BEGIN
| |
| | FOR urlrow in EXECUTE \'SELECT * FROM URL WHERE visited=1::int2
| | LIMIT \' || $1::int4 || \'FOR UPDATE\'
| | LOOP
| | UPDATE URL SET visited=2 WHERE URLID::int8 =
| | urlrow.URLID::int8;
| | RETURN NEXT urlrow;
| | END LOOP;
| | RETURN;
| | END;
| | '
| |   LANGUAGE 'plpgsql' VOLATILE;
This function *is* thread safe. The only fault here is that the function was
not designed to be used in a multithreaded environment, because you have no
policy for locking resources.
Always lock the rows in the same order, so you avoid cyclic lock dependencies.
Your select should look like this:
SELECT * FROM URL WHERE visited=1 ORDER BY oid LIMIT $1 FOR UPDATE;
Normally this should solve your problem.
Regards
Gaetano Mendola
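
Applied to the function above, the fix amounts to the ordered SELECT; a
sketch in the original 7.x quoting style (note also the space added before
FOR UPDATE, which the original string concatenation was missing) might look
like:

```sql
CREATE OR REPLACE FUNCTION select_pend_visitation_for_unvisited_links(int4)
  RETURNS SETOF record AS '
DECLARE
    urlrow RECORD;
BEGIN
    -- ORDER BY gives every thread the same locking order,
    -- which avoids cyclic waits between concurrent callers.
    FOR urlrow IN EXECUTE ''SELECT * FROM URL WHERE visited = 1
            ORDER BY oid LIMIT '' || $1::int4 || '' FOR UPDATE''
    LOOP
        UPDATE URL SET visited = 2 WHERE URLID = urlrow.URLID;
        RETURN NEXT urlrow;
    END LOOP;
    RETURN;
END;
' LANGUAGE 'plpgsql' VOLATILE;
```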














Re: [GENERAL] PG over NFS tips

2004-08-06 Thread Gaetano Mendola
Cott Lang wrote:
The higher-ups are attempting to force me to run Postgres over NFS at
least temporarily. 

Despite giving me a queasy feeling and reading quite a bit of messages
advising against it, running Oracle over NFS with a NAS filer doesn't
seem to be unusual. Is there a reason PG would be more sensitive than
Oracle?
Anyone ever done this before in a production environment?
thanks!
Do you trust your data to a UDP connection?
We had problems copying big files (1.9 GB) on a mounted NFS partition,
and now we prefer not to use it anymore for our data.

Regards
Gaetano Mendola




Re: [GENERAL] constraitnt on case sensetive and case insensetive columns

2004-07-25 Thread Gaetano Mendola
A Bruce wrote:
| hello,
|
| I am attempting to convert a oracle database to postgresql and I am having
| some problems creating a constraint across multiple columns which are a
| mixture of case insensitive and case sensitive.
|
| The original oracle database created an index with:
| CREATE UNIQUE INDEX hatidx ON hats (upper(name), upper(server), login);
|
| However postgresql can handle multiple columns in the index, or one function,
| but not multiple functions, thus this fails.
|
| Queries are only done using the actual values, so the presence of the
| index is not required for performance reasons, and exists only to
| enforce the constraint that (upper(name), upper(server), login) is a
| unique tuple. Is there anyway to create a constraint which will check
| this? I suspect it would be possible to create a trigger to check this,
| however this is a little ugly, and i would like something more similar to
| to the original if possible.
|
| Any suggestions as to how to approach this would be greatly appreciated,
| -bruce
|
I'm using the 7.4.x version and what you ask for is supported:
regression=# create table test ( a varchar, b varchar, c varchar );
CREATE TABLE
regression=# create unique index test_idx on test ( upper(a), upper(b), c);
CREATE INDEX

Regards
Gaetano Mendola




Re: [GENERAL] CREATE DATABASE on the heap with PostgreSQL?

2004-06-06 Thread Gaetano Mendola
Albretch wrote:
 After RTFM and googling for this piece of info, I think PostgreSQL
has no such feature.
 Why not? 

 . Isn't RAM cheap enough nowadays? RAM is indeed so cheap that you
could design diskless combinations of OS + firewall + web servers
entirely running off RAM. Anything needing persistence you will send
to the backend DB then
 . Granted, coding a small data structure with the exact functionality
you need will do exactly this, keeping the table's data on the heap.
But why do this if it is what DBMSs were designed for in the
first place? And also, each custom-coded DB functionality will have to
be maintained.
 Is there any way, or at least an elegant hack, to do this?
 I don't see a technically convincing explanation to what could be a
design decision, could you explain to me the rationale behind it, if
any?

If you access a table more frequently than the others and you have enough
RAM, your OS will keep that table in RAM, don't you think?
BTW, if you trust your UPS, I'm sure you can create a RAM
disk and place that table there.
Regards
Gaetano Mendola




Re: [GENERAL] Insert speed question

2004-05-31 Thread Gaetano Mendola
Josué Maldonado wrote:
sort_mem = 131072   # min 64, size in KB
128 MB for sort_mem is a really huge amount of
memory, considering that it is not system-wide but
per process (under certain operations a
single process can use even more than this quantity).
Hackers: am I wrong?
Regards
Gaetano Mendola



Re: [GENERAL] PostgreSQL Tablespaces

2004-05-31 Thread Gaetano Mendola
TroyGeek wrote:
I found this on the Internet.
http://candle.pha.pa.us/main/writings/pgsql/project/tablespaces.html
 

Does anyone know when tablespaces will make their way into PostgreSQL?
The next version (it has not yet been decided whether it will be 7.5 or 8.0)
will contain that feature (we hope).
Regards
Gaetano Mendola



Re: [GENERAL] Poatgresql database on more than one disk

2004-05-25 Thread Gaetano Mendola
Barry wrote:
Hi All,
I am a newcommer to Postgresql, currently I am looking at moving
a Pick based application across to PostgreSQL.
I am using RH Linux and Postgresql 7.3.6
The test system I am using has a 2 channel raid card with a disk pack
connected to each channel. The OS and Postgresql sits on its own internal 
disk.

I would like to be able to use one disk pack for development and the
other for end user testing / verification (the tape drive can only backup 
one disk pack completely, not both)

I have spent a fair amount of time researching how I can run two
databases, one on each disk pack but have not been able to find a
solution.
Is it possible to configure Postgresql to have seperate databases
on seperate disks ?
Not as easily as it will be with the tablespace feature that will most
probably be present in 7.5.
With 7.3.6, what you can do is move your db and create a link in the
original place:
-bash-2.05b$ oid2name
All databases:
-
17142  = kalman
19185  = photodb
27895  = empdb
1  = template1
17141  = template0
5776262 = logs
-bash-2.05b$ pwd
/var/lib/pgsql/data/base
-bash-2.05b$ ll
total 32
drwx------  2 postgres postgres 4096 Feb  8 15:18 1
drwx------  2 postgres postgres 4096 Feb  8 03:56 17141
drwx------  2 postgres postgres 4096 May 25 19:37 17142
drwx------  2 postgres postgres 8192 Feb  8 15:58 19185
drwx------  3 postgres postgres 8192 May 16 02:46 27895
drwx------  2 postgres postgres 4096 May 18 00:06 5776262


In my case, if I want to move the database kalman to another disk, what I
have to do is: stop the engine, move the directory
/var/lib/pgsql/data/base/17142 to the new location, and create the link.
With the same mechanism you can also place different tables on different
disks.
I hope this helps.
Regards
Gaetano Mendola











Re: [GENERAL] Slow network retrieves

2004-05-11 Thread Gaetano Mendola
[EMAIL PROTECTED] wrote:

The back end is pretty much idle.  It shows 'idle in transaction'.
Well, it is not so idle: it's holding a transaction id!

That idle in transaction is not your problem, but I suggest you
take a look at why you have idle in transaction backends; do you
have backends sitting there for days and days in that state?


Regards
Gaetano Mendola




Re: [GENERAL] Before ship 7.4.2

2004-02-03 Thread Gaetano Mendola
Lamar Owen wrote:

| On Monday 02 February 2004 10:54 pm, Tom Lane wrote:
|
|Gaetano Mendola [EMAIL PROTECTED] writes:
|
|Is someone taking care about the fact that the pgdb.py shipped with
|7.4.1 is the wrong version?
|
|
|There is no pgdb.py in the core PG 7.4.* releases.  I suppose you
|are talking about a packaging error in the RPM distribution.  Lamar Owen
|would be the man to talk to about that ... Lamar, any thoughts about
|this?
|
|
| Well, my first instinct is to throw out the python client RPM
entirely.  Then
| package the python client in a separate RPM.  My original plan was not to
| ship a python subpackage at all, but then I had a spec file change
| contributed that kept the python client in.  So I went that direction;
| principle of least surprise and all.  But I am not at all attached to
keeping
| it; likewise, the JDBC stuff could easily be moved to a completely
separate
| RPM instead of a subpackage.
Isn't it too late to drop the python client RPM for version 7.4?
Anyway, the pgdb.py in the RPM for version 7.3.2 is the good
one; I don't know where you got the wrong pre-7.3.2 pgdb.py. Isn't this
file in any CVS?




Regards
Gaetano Mendola




[GENERAL] Before ship 7.4.2

2004-02-02 Thread Gaetano Mendola
Is someone taking care of the fact that the pgdb.py shipped with
7.4.1 is the wrong version? What baffles me is the fact that the
pgdb.py shipped with 7.4.1 is a version from *before 7.3*; we
had the same bug with 7.3, and it was not solved until the 7.3.2
distribution:
http://archives.postgresql.org/pgsql-bugs/2002-12/msg00082.php

Regards
Gaetano Mendola


Re: [GENERAL] PostgreSQL 7.4.1 and pgdb.py

2004-02-01 Thread Gaetano Mendola
Manuel Tejada wrote:

import pgdb
dbConnect = pgdb.connect(dsn='localhost:oracle', user='manuel',
password='')

cursor = dbConnect.cursor()
cursor.execute("select * from address")
Traceback (most recent call last):
   File "<stdin>", line 1, in ?
   File "/usr/lib/python2.2/site-packages/pgdb.py", line 189, in execute
self.executemany(operation, (params,))
   File "/usr/lib/python2.2/site-packages/pgdb.py", line 221, in executemany
desc = typ[1:2]+self._cache.getdescr(typ[2])
   File "/usr/lib/python2.2/site-packages/pgdb.py", line 149, in getdescr
self._source.execute(
_pg.error: ERROR: column "typprtlen" does not exist
--
This is a really old problem, already solved in 7.3; see this post of mine:

http://archives.postgresql.org/pgsql-bugs/2002-12/msg00082.php

I have checked that my 7.4.1 installation is affected by the same
problem. I don't understand how it could happen that a modification
made in 7.3 was not carried over to 7.4.
For the moment, what you can do is substitute this select:

"SELECT typname, typprtlen, typlen
FROM pg_type WHERE oid = %s" % oid

inside the file pgdb.py with this one:

"SELECT typname, 4, typlen
FROM pg_type WHERE oid = %s" % oid

so as not to break the rest of the file.

I'm not able to look at CVS to see where the modification was lost.

Regards
Gaetano Mendola


Re: [GENERAL] PostgreSQL 7.4.1 and pgdb.py

2004-02-01 Thread Gaetano Mendola
Tom Lane wrote:

Manuel Tejada [EMAIL PROTECTED] writes:

But now when I input the same sintaxis with the new Installation(PostgreSQL
7.4.1), I get an error when I enter rhe four line:


_pg.error: ERROR: non exist the column typprtlen


I believe this indicates you're using an old version of the PyGreSQL
module.  typprtlen disappeared from the pg_type system catalog several
releases back.  There is updated PyGreSQL code out there, but I'm not
very sure where --- have you looked at gborg.postgresql.org?
Unfortunately, the wrong pgdb.py is shipped with

postgresql-python-7.4.1-1PGDG.i386.rpm

This problem was already solved in 7.3;

look at this:

http://archives.postgresql.org/pgsql-bugs/2002-12/msg00082.php

Something went wrong during the SRPM build for 7.4.1.

It would be a good idea to look at how this happened.



Regards
Gaetano Mendola


Re: [GENERAL] PostgreSQL 7.4.1 and pgdb.py

2004-02-01 Thread Gaetano Mendola
Manuel Tejada wrote:
 Thank you very much, Gaetano.

 I edited the pgdb.py file, setting 4 instead of typprtlen.
 Now I am able to connect to PostgreSQL using pgdb.py.

 Just out of curiosity, can I set it to -1 too, as Gerhard Haring told you?
I think yes; I really didn't dig into it to see how that
value is used.
Regards
Gaetano Mendola






Re: [GENERAL] Date column that defaults to 'now'

2004-01-06 Thread Gaetano Mendola
John Siracusa wrote:

 On 1/5/04 4:29 PM, Michael Glaesemann wrote:

(If you're not the John Siracusa who writes for Ars Technica, the
sentiment still holds. :) )


 I am everywhere!

 (worked, thanks to both of you who replied :)
Anyway, the two solutions solve different problems:

1) DEFAULT now()
you'll have the timestamp of the transaction start
2) DEFAULT timeofday()
you'll have the timestamp of the actual insertion


Regards
Gaetano Mendola
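
The difference is easy to observe inside a single transaction (note that
timeofday() returns text, so it needs a cast when used as a timestamp
default):

```sql
BEGIN;
SELECT now(), timeofday();
-- wait a moment, then repeat:
SELECT now(), timeofday();
-- now() stays frozen at the transaction start; timeofday() keeps advancing.
COMMIT;
```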


Re: [GENERAL] why the need for is null?

2004-01-03 Thread Gaetano Mendola
Martijn van Oosterhout wrote:

On Thu, Jan 01, 2004 at 11:53:29PM +0100, Baldur Norddahl wrote:

Ok, but since this can be quite annoying and unexpected, could we get an
operator that does not use tristate logic but simply compares? Maybe == which
seems to be free :-)
So X==Y is true if X and Y are equal or both are null, false othervise.


Annoying, not really. It's actually extremely useful. It's useful having a
value which is never equal to anything else, not even itself. If you use it
to represent unknown it will work for you. If you try to use it for
anything else, it will bite you.
You could create a new operator, but that means you'll have difficulty
moving it to any database that doesn't have that operator (which is most of
them).
If you want it to match perhaps you should forget NULL and use '' (zero
length string) instead.
Not to mention the fact that for Oracle a zero-length string is NULL!
Isn't that funny?
Regards
Gaetano Mendola
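
A quick demonstration of the three-valued logic under discussion:

```sql
SELECT NULL = NULL;   -- yields NULL, not true
SELECT 1 = NULL;      -- NULL again: this is why WHERE x = NULL matches nothing
SELECT NULL IS NULL;  -- true: IS [NOT] NULL is the test meant for this
```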


Re: [GENERAL] add column sillyness

2003-12-10 Thread Gaetano Mendola
[EMAIL PROTECTED] wrote:

Please, read this carefully: http://sql-info.de/mysql/


I can not believe it!

Regards
Gaetano Mendola


Re: [GENERAL] add column sillyness

2003-12-09 Thread Gaetano Mendola
Thomas Zehetbauer wrote:

Why do I have to use FOUR queries to accomplish the same result I can
get from MySQL with only ONE query:
alter table users add column $ColumnName text;
alter table users alter column $ColumnName set default '';
update users set t_shirt_size='' where $ColumnName is null;
alter table users alter column $ColumnName set not null;
Wow, that's true!

On MySQL 4.0.16-nt it is also possible to write:

alter table T1
add CONSTRAINT FK_test foreign key (id)
REFERENCES T2 (id);

which doesn't complain. Unfortunately, repeating
the command N times doesn't complain either.
And the funny thing here is that FKs are not yet supported!
No regards.
Gaetano Mendola
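
For what it's worth, later PostgreSQL releases (8.0 onward, if I recall
correctly) let ADD COLUMN carry the default and the NOT NULL constraint in
one statement, collapsing the four queries above into one (the column name
here is a placeholder):

```sql
ALTER TABLE users ADD COLUMN some_column text NOT NULL DEFAULT '';
```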








Re: [GENERAL] RPM RH9.0 conflict with unixODBC

2003-11-25 Thread Gaetano Mendola
Sander Steffann wrote:

Hi,

It turns out that preventing RH9 from building the debuginfo package also
prevented it from stripping the binaries. This was what caused the big
difference in filesize. I have rebuilt the RPMs for RH9 and put them on
http://opensource.nederland.net/.
I had to make a small modification to the specfile (again) because it seems
that macro's work differently for each RPM / RedHat version. There have been
no other changes to the sources or specfile, so the end-result is the same.
Sorry for the inconvenience I caused by disabling the debuginfo package!
Sander.
Is this also related to the fact that gdb doesn't complain about the
debugging info in the RH9.0 libraries?
Regards
Gaetano Mendola




Re: [GENERAL] [off-topic] Bugtracker using PostgreSQL

2003-11-10 Thread Gaetano Mendola
MaRcElO PeReIrA wrote:

Hi guys,

Do you know any web based bug tracker software that
use PostgreSQL???
Somebody has told me about Mantis, but it uses MySQL...
and I refuse to use that! :(
Which is the best bug tracker you
know???(PHP+PostgreSQL)


We use TUTOS ( www.tutos.org ), but its bug-tracking
tool is not very evolved; give it a try.
Regards
Gaetano Mendola




Re: [GENERAL] VACUUM degrades performance significantly. Database

2003-10-20 Thread Gaetano Mendola
Stephen wrote:
Nope, I installed the RedHat 9 myself and no one else has access to this
machine. It's either that Redhat uses a different elevator setting for SCSI
drives than IDEs or the latest Redhat updates I applied brought it to my
current numbers. Besides, I believe your values may indicate an outdated
system because IIRC the max_bomb_segments has been disabled and should
always be zero because of some inefficiencies in the elevator algorithm.
Regards, Stephen
Well, I obtain the same values on two different RH9 installations:

# uname -a
Linux  2.4.20-13.9smp #1 SMP Mon May 12 10:48:05 EDT 2003 i686 i686 
i386 GNU/Linux
# elvtune /dev/hda6

/dev/hda6 elevator ID   1
read_latency:   64
write_latency:  8192
max_bomb_segments:  6
# uname -a
Linux  2.4.20-20.9smp #1 SMP Mon Aug 18 11:32:15 EDT 2003 i686 i686 
i386 GNU/Linux
# elvtune /dev/sda7

/dev/sda7 elevator ID   5
read_latency:   64
write_latency:  8192
max_bomb_segments:  6
I'll try on my laptop.



Regards
Gaetano Mendola






Re: [GENERAL] VACUUM degrades performance significantly. Database

2003-10-19 Thread Gaetano Mendola
Stephen wrote:

Good news,

I partially fixed the problem on Linux 2.4. It appears the responsiveness
can be improved significantly by tuning the disk IO elevator in Linux using
elvtune in util-linux. The elevator in Linux is used to re-order
read/write requests to reduce disk seeks by ordering requests according to
disk sectors. Unfortunately, the elevator in kernel 2.4 is not very smart
(or flexible I should say depending on your needs) and can starve a
read/write request for a long time if not properly tuned.
elvtune -r 2048 -w 8192 /dev/hdc (default Redhat 9):

Are you sure? On my RH9.0 installation I obtain:

# elvtune /dev/sda7

/dev/sda7 elevator ID   5
read_latency:   64
write_latency:  8192
max_bomb_segments:  6
Maybe your problem is due to the fact that someone changed these values
on your machine!


Regards
Gaetano Mendola


Re: [GENERAL] Damaged table

2003-10-16 Thread Gaetano Mendola
Dagoberto wrote:

can somebody tell me how can I get old data from a recently damaged table?
What does "damaged" mean?

Regards
Gaetano Mendola


Re: [GENERAL] Question

2003-10-14 Thread Gaetano Mendola
Robert Partyka wrote:

Hi,

I have question:

why such condition:
foofield not like '%bar%'
where foofield is varchar

returns false (or rather even ignore row) on record where foofield is null
but
returns true on records where foofield is '' (empty string)
That's the SQL specification.

An empty string and NULL are two different things.
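The behaviour described above can be demonstrated with sqlite3 (used here only to illustrate standard SQL semantics; the table and sample data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (foofield TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("has bar inside",), ("",), (None,)])

# NULL NOT LIKE '%bar%' evaluates to NULL (unknown), so the NULL row is
# filtered out; the empty string genuinely does not match '%bar%', so it passes.
rows = conn.execute(
    "SELECT foofield FROM t WHERE foofield NOT LIKE '%bar%'").fetchall()
print(rows)  # [('',)]
```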

Regards
Gaetano Mendola


Re: [GENERAL] insert duplicated unique/PK with no errors

2003-09-24 Thread Gaetano Mendola
Mirek Rusin wrote:

...what is the best way to force duplicated unique
or primary key'ed row inserts not to raise errors?
That doesn't make sense; am I missing the point?

Regards
Gaetano Mendola


Re: [GENERAL] How to find LIMIT in SQL standard

2003-09-22 Thread Gaetano Mendola
Rory Campbell-Lange wrote:

Essentially the call (as defined below) asks for an update and adds a
LIMIT parameter on the end of the UPDATE. (eg update where x=1 limit 1).
Postgres doesn't like this and I assume it isn't SQL standards
compliant and need to refer to this in my bug report.
As far as I know you cannot specify a LIMIT for UPDATE in Postgres,
at least not in that way.
if you want to do

UPDATE foo SET a='bar' where b LIMIT 1;

this is possible in Postgres doing:

UPDATE foo SET a = 'bar'
WHERE foo.oid IN
( SELECT f.oid
  FROM foo f
  WHERE b
  LIMIT 1
);
This fails if the table was created WITHOUT OIDS.
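The same trick can be sketched with sqlite3, using SQLite's rowid in place of Postgres's oid (a sketch only; the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (a TEXT, b INTEGER)")
conn.executemany("INSERT INTO foo VALUES (?, ?)",
                 [("old", 1), ("old", 1), ("old", 0)])

# Update at most one matching row by selecting a single row identifier first
conn.execute("""
    UPDATE foo SET a = 'bar'
    WHERE rowid IN (SELECT rowid FROM foo WHERE b = 1 LIMIT 1)
""")
print(conn.execute(
    "SELECT count(*) FROM foo WHERE a = 'bar'").fetchone()[0])  # 1
```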

Regards
Gaetano Mendola


Re: [GENERAL] Rockets (was Re: PostgreSQL versus MySQL)

2003-09-22 Thread Gaetano Mendola
Jan Wieck wrote:


Richard Welty wrote:

On Fri, 19 Sep 2003 09:49:32 -0600 (MDT) scott.marlowe 
[EMAIL PROTECTED] wrote:

On Fri, 19 Sep 2003, Ron Johnson wrote:


 What's a Saturn IV?  Do you mean the Saturn V?

http://www.aviation-central.com/space/usm50.htm


actually, may i suggest

  http://www.astronautix.com/lvfam/saturnv.htm

there actually was a design for a Saturn IV (really called a Saturn C4,
the contemporary Saturn C5 became the Saturn V, and development of the C4
was dropped) see
  http://www.astronautix.com/lvfam/saturnc.htm)

this is awfully off topic, but here is a web page i've been working on
sporadically now for a couple of months that rocketheads may find
interesting:
  http://www.averillpark.net/space/booster.html

I would suggest that if Tom wanted to use the rocket analogy, he might 
want
to compare PostgreSQL to maybe a contemporary Atlas 5 medium 
configuration.
the Titan II is quite old now and there's only one more launch scheduled.


Whereas, despite the crashes, the Space Shuttle with its
Add-On-Collection look-alike is still the most popular ;-)
BTW this weekend we are going to launch a new satellite using
Ariane V rockets ( http://www.satexpo.it/en/news-new.php/1?c=6531 );
this means more users and more stress for our Postgres installation :-)
http://www.arianespace.com/site/news/news_sub_missionupdate_index.html

some pictures here:

http://66.33.199.225/image_library/images_sub_index.html



Regards
Gaetano Mendola








Re: [GENERAL] High-volume shop uses PostgreSQL

2003-09-18 Thread Gaetano Mendola
Ron Johnson wrote:
On Thu, 2003-09-18 at 03:23, Gaetano Mendola wrote:

Ron Johnson wrote:

PostgreSQL does not do horizontal scaling at all, since the postmaster
can only run on 1 CPU, but it's good at vertical scaling, since it
can make use of all of the CPUs in a box.  (Well, there's sure to
be a point at which there is so much activity that the postmaster
can't handle it all...)
I have seen some PCI cards that allow a region of shared memory to be
shared between multiple boxes. I'd like to know how much effort would be
required to let Postgres run on two or more machines with the shared
memory shared between the boxes.


HPaq/DEC has a hardware/software product called MemoryChannel, which
does that for you.  Of course, it only works with Tru64 and OpenVMS.


I knew about the existence of this hardware; my concern is about making the
postmaster aware that another postmaster is running on another machine
and that the underlying shared memory is shared between two or more boxes.
Regards
Gaetano Mendola

