[GENERAL] Slow update

2009-01-21 Thread Herouth Maoz
Hello.

I have a daily process that synchronizes our reports database from our
production databases. In the past few days, it happened a couple of
times that an update query took around 7-8 hours to complete, which
seems a bit excessive. This is the query:

UPDATE rb
SET service = b.service,
status = b.status,
has_notification = gateway_id NOT IN (4,101,102),
operator = COALESCE(
b.actual_target_network_id,
b.requested_target_network_id
)
FROM sms.billing b
WHERE b.time_arrived >= :date_start
AND   rb.time_stamp >= :date_start
AND   rb.delivered = 0
AND   rb.sms_user = b.user_id
AND   rb.reference = b.user_reference
AND   OVERLAY( rb.msisdn placing '972' from 1 for 1 ) = b.msisdn
AND   NOT mo_billed
AND   system_id <> 6  -- Exclude Corporate, as it already has
service/status
;

The variable :date_start is set to a date 3 days ago.
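(For readers unfamiliar with OVERLAY: the expression above replaces the first
character of rb.msisdn with '972', i.e. it rewrites the leading digit into the
international prefix. A quick illustration, with a made-up number:)

```sql
-- OVERLAY(string PLACING new FROM start FOR length) replaces "length"
-- characters beginning at position "start" with "new":
SELECT OVERLAY('0541234567' placing '972' from 1 for 1);
-- yields '972541234567'
```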

I ran explain for this query and it gave me this:


------------------------------------------------------------------------------
 Nested Loop  (cost=21567.12..854759.82 rows=1 width=210)
   Join Filter: (((rb.sms_user)::text = (b.user_id)::text) AND
(rb.reference = b.user_reference))
   ->  Bitmap Heap Scan on rb  (cost=21546.02..23946.16 rows=819 width=198)
         Recheck Cond: ((delivered = 0) AND (time_stamp >= '2009-01-18
00:00:00'::timestamp without time zone) AND (time_stamp < '2009-01-21
00:00:00'::timestamp without time zone))
         Filter: ((NOT mo_billed) AND (system_id <> 6))
         ->  BitmapAnd  (cost=21546.02..21546.02 rows=819 width=0)
               ->  Bitmap Index Scan on rb_delivered_ind
(cost=0.00..1419.99 rows=45768 width=0)
                     Index Cond: (delivered = 0)
               ->  Bitmap Index Scan on rb_timestamp_ind
(cost=0.00..20125.37 rows=188994 width=0)
                     Index Cond: ((time_stamp >= '2009-01-18
00:00:00'::timestamp without time zone) AND (time_stamp < '2009-01-21
00:00:00'::timestamp without time zone))
   ->  Bitmap Heap Scan on billing b  (cost=21.10..1004.77 rows=351
width=49)
         Recheck Cond: ((b.msisdn)::text =
((substring((rb.msisdn)::text, 1, 0) || '972'::text) ||
substring((rb.msisdn)::text, 2)))
         Filter: ((b.time_arrived >= '2009-01-18 00:00:00'::timestamp
without time zone) AND (b.time_arrived < '2009-01-21
00:00:00'::timestamp without time zone))
         ->  Bitmap Index Scan on billing_msisdn_sme_reference
(cost=0.00..21.10 rows=351 width=0)
               Index Cond: ((b.msisdn)::text =
((substring((rb.msisdn)::text, 1, 0) || '972'::text) ||
substring((rb.msisdn)::text, 2)))

I'm not an expert on reading plans, but it seems to me that it uses
indices on both tables that participate in this query, so it shouldn't
take such a long time.

The number of records in the table rb for the past three days is 386833.
On the sms.billing table it seems to select the index on the msisdn and
sme_reference fields and use it partially (only using the msisdn field).
Looking at that table, the frequency of each value in the msisdn field
is at most 17678 for the current data, where mostly it's a couple of
thousand. How can this take so long?
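(Plain EXPLAIN only shows estimates; EXPLAIN ANALYZE runs the statement and
reports actual row counts and per-node timings, which would show where the
7-8 hours go. Since it really executes the UPDATE, it can be wrapped in a
transaction and rolled back — a sketch:)

```sql
BEGIN;
EXPLAIN ANALYZE
UPDATE rb
SET service = b.service          -- ...same SET list as above
FROM sms.billing b
WHERE rb.sms_user = b.user_id;   -- ...same conditions as above
ROLLBACK;  -- discard the changes; only the timed plan was wanted
```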

Thanks,
Herouth


Re: [GENERAL] pg_restore: [archiver] entry ID -825110830 out of range -- perhaps a corrupt TOC

2009-01-21 Thread Andreas Wenk

Dennis C wrote:
 OK that was it!  Wow, thank you so very much!  Nice to know it was just
 plpython tracking such an obsolete version of postgresql much to my
 dismay now (especially even going backwards, which didn't even occur to
 me), as opposed to postgresql itself being less reliable than I've come
 to expect over the years!  Thanks for all your great work with that too
 in the first place!

cool that it's working now ;-)

 
 On Tue, Jan 20, 2009 at 10:40 AM, Scott Marlowe scott.marl...@gmail.com wrote:
 
  On Tue, Jan 20, 2009 at 11:15 AM, Dennis C dcsw...@gmail.com wrote:
  Greetings;
  And thanks for your reply!  I tried the following:
  less xaa | grep ^;
  xaa may be a binary file.  See it anyway? y
  Binary file (standard input) matches
 
  And so am not sure which version I did the following from:
  pg_dump -c -F c -Z 9 [databasename]
 
 It's kind of important, but... PostgreSQL's dump and restore commands
 are designed to work within the same version, or going to a newer version
 from an older version. Going backwards is not supported.

That was what I had in mind asking you about the version ;-) Thanks to Scott
for bringing it to the point ;-)

  But I installed it about a year ago, so whichever was the release
 then.
  Am trying to restore to the following:
 
 8.2 or 8.3.  Unless you were using a version supplied by a distro,
 which could go further back.
 
  postgresql-client-7.4.21 PostgreSQL database (client)
  postgresql-plpython-7.4.21_1 A module for using Python to write SQL
  functions
  postgresql-server-7.4.21 The most advanced open-source database
 available
  anywhere
 
 Now's the time to upgrade.  7.4 is the oldest supported version, which
 means it's next for the chopping block.  It's also A LOT slower than
 8.3.  Can you get and install a newer version of pgsql, preferably 8.3
 and try restoring there?
  cat * | pg_restore -d [databasename]
 
 The normal way to run it is to use the -f switch for the file
 
 pg_restore -d dbname -f filename
 
 Not sure there's anything wrong with your way, but I've never used
 pg_restore like that.
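(Archive note: in pg_restore, -f/--file actually names an *output* file — e.g.
for a TOC listing produced with -l — rather than the input; the archive itself
is passed as a positional argument. So the usual invocations look like:)

```shell
pg_restore -l /path/to/dumpfile         # list the archive's table of contents
pg_restore -d dbname /path/to/dumpfile  # restore the archive into dbname
```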
 
 

Cheers

Andy

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Slow update

2009-01-21 Thread Marc Mamin
Hello,
 
- did you vacuum your tables recently ?
 
- What I miss in your query is a check for the rows that do not need to
be updated:
 
AND NOT (service = b.service
   AND status = b.status
  AND has_notification = gateway_id NOT IN (4,101,102)
  AND operator = COALESCE( b.actual_target_network_id,
b.requested_target_network_id   ))
 
 
Depending on the fraction of rows that are already up to date, this might
speed up your process quite a lot...
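(One caveat with such a guard: if any of these columns is nullable, plain `=`
returns NULL and the row would be skipped. A NULL-safe variant of the same
check, sketched with IS NOT DISTINCT FROM:)

```sql
AND NOT (    service          IS NOT DISTINCT FROM b.service
         AND status           IS NOT DISTINCT FROM b.status
         AND has_notification IS NOT DISTINCT FROM (gateway_id NOT IN (4,101,102))
         AND operator         IS NOT DISTINCT FROM COALESCE(b.actual_target_network_id,
                                                            b.requested_target_network_id))
```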
 
 
Hope To Help,
 
Marc Mamin

 
 
 



From: pgsql-general-ow...@postgresql.org On Behalf Of Herouth Maoz
Sent: Wednesday, January 21, 2009 10:30 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Slow update


Hello.

I have a daily process that synchronizes our reports database from our
production databases. In the past few days, it happened a couple of
times that an update query took around 7-8 hours to complete, which
seems a bit excessive. This is the query:

UPDATE rb
SET service = b.service,
status = b.status,
has_notification = gateway_id NOT IN (4,101,102),
operator = COALESCE(
b.actual_target_network_id,
b.requested_target_network_id
)
FROM sms.billing b
WHERE b.time_arrived >= :date_start
AND   rb.time_stamp >= :date_start
AND   rb.delivered = 0
AND   rb.sms_user = b.user_id
AND   rb.reference = b.user_reference
AND   OVERLAY( rb.msisdn placing '972' from 1 for 1 ) = b.msisdn
AND   NOT mo_billed
AND   system_id <> 6  -- Exclude Corporate, as it already has
service/status
;

The variable :date_start is set to a date 3 days ago.

I ran explain for this query and it gave me this:




------------------------------------------------------------------------------
 Nested Loop  (cost=21567.12..854759.82 rows=1 width=210)
   Join Filter: (((rb.sms_user)::text = (b.user_id)::text) AND
(rb.reference = b.user_reference))
   ->  Bitmap Heap Scan on rb  (cost=21546.02..23946.16 rows=819
width=198)
         Recheck Cond: ((delivered = 0) AND (time_stamp >= '2009-01-18
00:00:00'::timestamp without time zone) AND (time_stamp < '2009-01-21
00:00:00'::timestamp without time zone))
         Filter: ((NOT mo_billed) AND (system_id <> 6))
         ->  BitmapAnd  (cost=21546.02..21546.02 rows=819 width=0)
               ->  Bitmap Index Scan on rb_delivered_ind
(cost=0.00..1419.99 rows=45768 width=0)
                     Index Cond: (delivered = 0)
               ->  Bitmap Index Scan on rb_timestamp_ind
(cost=0.00..20125.37 rows=188994 width=0)
                     Index Cond: ((time_stamp >= '2009-01-18
00:00:00'::timestamp without time zone) AND (time_stamp < '2009-01-21
00:00:00'::timestamp without time zone))
   ->  Bitmap Heap Scan on billing b  (cost=21.10..1004.77 rows=351
width=49)
         Recheck Cond: ((b.msisdn)::text =
((substring((rb.msisdn)::text, 1, 0) || '972'::text) ||
substring((rb.msisdn)::text, 2)))
         Filter: ((b.time_arrived >= '2009-01-18 00:00:00'::timestamp
without time zone) AND (b.time_arrived < '2009-01-21
00:00:00'::timestamp without time zone))
         ->  Bitmap Index Scan on billing_msisdn_sme_reference
(cost=0.00..21.10 rows=351 width=0)
               Index Cond: ((b.msisdn)::text =
((substring((rb.msisdn)::text, 1, 0) || '972'::text) ||
substring((rb.msisdn)::text, 2)))

I'm not an expert on reading plans, but it seems to me that it uses
indices on both tables that participate in this query, so it shouldn't
take such a long time.

The number of records in the table rb for the past three days is 386833.
On the sms.billing table it seems to select the index on the msisdn and
sme_reference fields and use it partially (only using the msisdn field).
Looking at that table, the frequency of each value in the msisdn field
is at most 17678 for the current data, where mostly it's a couple of
thousands. How can this take so long?

Thanks,
Herouth



Re: [GENERAL] Slow update

2009-01-21 Thread Herouth Maoz
Marc Mamin wrote:

 Hello,
  
 - did you vacuum your tables recently ?
  
 - What I miss in your query is a check for the rows that do not need
 to be updated:
  
 AND NOT (service = b.service
AND status = b.status
   AND has_notification = gateway_id NOT IN (4,101,102)
   AND operator = COALESCE( b.actual_target_network_id, 
 b.requested_target_network_id   ))
  
  
 depending on the fraction of rows that are already up to date, this
 might speed up your process quite a lot...
I don't see why it would. As far as I know, the high saving in update
time is done by using the indices. All the other conditions that are not
on indices are all checked using a sequential scan on the rows that were
brought from the index, so adding more conditions wouldn't make this a
lot faster - maybe even slower because more comparisons are made.

In any case, the logic of the database is that the records that have
delivered = 0 are always a subset of the records that are changed in
this query, so querying on delivered=0 - which is an indexed query -
actually makes the above redundant.

Thanks for your response,
Herouth


Re: [GENERAL] Custom type, operators and operator class not sorting/indexing correctly

2009-01-21 Thread Roger Leigh
On Wed, Jan 21, 2009 at 02:03:03AM -0500, Tom Lane wrote:
 Roger Leigh rle...@codelibre.net writes:
  I've created a new domain (debversion) derived from TEXT, which
  includes its own operators (<, <=, =, >=, > and <>), and also its
  own operator class for BTREE indices.
 
 You can't realistically attach such things to a domain; try making
 a separate type, perhaps with an implicit cast to text to allow
 use of text operators for other purposes.

Ah, thanks for the clarification.  So I need to use CREATE TYPE
rather than CREATE DOMAIN.  Because I'm essentially just storing
a text string with different operators, can I derive a type from
TEXT (perhaps by reusing the same input, output, receive and send
functions as TEXT?)  I saw the textsend and textreceive functions,
which I assume are the appropriate functions for send and receive?
Are there any for input and output which I may reuse?
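(For what it's worth, a sketch of such a definition reusing the built-in text
I/O functions — the catalog names are textin/textout for text I/O and
textrecv/textsend for the binary protocol; treat this as illustrative rather
than a complete recipe:)

```sql
CREATE TYPE debversion;  -- shell type; filled in below

CREATE FUNCTION debversionin(cstring) RETURNS debversion
    AS 'textin' LANGUAGE internal STRICT IMMUTABLE;
CREATE FUNCTION debversionout(debversion) RETURNS cstring
    AS 'textout' LANGUAGE internal STRICT IMMUTABLE;

CREATE TYPE debversion (
    INPUT  = debversionin,
    OUTPUT = debversionout,
    LIKE   = text            -- same storage layout as text
);

CREATE CAST (debversion AS text) WITHOUT FUNCTION AS IMPLICIT;
```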


Many thanks,
Roger

-- 
  .''`.  Roger Leigh
 : :' :  Debian GNU/Linux http://people.debian.org/~rleigh/
 `. `'   Printing on GNU/Linux?   http://gutenprint.sourceforge.net/
   `-GPG Public Key: 0x25BFB848   Please GPG sign your mail.



Re: [GENERAL] import sql dump with psql - language creation throws error

2009-01-21 Thread Jasen Betts
On 2009-01-20, Andreas Wenk a.w...@netzmeister-st-pauli.de wrote:

 Hi everybody,

 I have an automated mechanism to restore a demo database each night with an
 SQL dump. What I do within a shell script is the following:

 1. all database access is canceled
 2. dropdb
 3. createdb
 4. import SQL dump: psql -o /dev/null $DB < /var/lib/postgresql/scripts/$SQL_DUMP

 The last step is the issue. The shell script is run by a cronjob and if one
 of the steps fails, the cron daemon sends an e-mail. The cluster already
 exists (for sure) and the language plpgsql also. The last step (4.) always
 produces an error (which is expected), although the dump is imported anyway:

 ERROR:  Language »plpgsql« allready exists

 psql -o /dev/null $DB < /var/lib/postgresql/scripts/$SQL_DUMP ||
  echo "Der dump konnte nicht eingespielt werden." >&2

 And because the ERROR message is the output, the crondaemon sends an email.

 Question:

 Where can I prevent building the language again? My idea was to do that while
 creating the dump or while importing the dump. But as far as I understand,
 that's not possible.

The easiest solution is probably to drop it before restoring;
otherwise, seeing as you have cron you probably have sed also, and can use
sed to drop the appropriate lines from the dump, or to remove the error
message.

psql -o /dev/null $DB < /var/lib/postgresql/scripts/$SQL_DUMP \
 2>&1 | sed 'SED SCRIPT HERE'
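(If only the expected "language already exists" complaint should be filtered
while real errors still reach cron, a grep -v in that pipeline can do it. A
sketch — the pattern must match the server's actual, possibly localized,
message; psql's stderr is simulated here with printf so the filtering is
visible:)

```shell
# Stand-in for the real pipeline:
#   psql -o /dev/null "$DB" < "/var/lib/postgresql/scripts/$SQL_DUMP" 2>&1 \
#     | grep -v 'already exists'
# Simulated stderr: the expected notice plus a genuine error.
printf 'ERROR:  language "plpgsql" already exists\nERROR:  out of disk space\n' \
  | grep -v 'already exists'
# prints only: ERROR:  out of disk space
```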




Re: [GENERAL] Get object creation sql script in psql client

2009-01-21 Thread Jasen Betts
On 2009-01-20, Igor Katson descent...@gmail.com wrote:
 Is there a way to get i.e. table creation sql script from an existing 
 table in psql (not postgresql, but psql client), like it is in pgAdmin?

 I.e. i point it to existing table 'foo', and it writes:
 CREATE TABLE foo (
bar int
 );

pg_dump dbname --table=foo --schema-only

I realise this is not exactly what you asked for, nor does it give the
most efficient SQL for many tables.






Re: [GENERAL] Slow update

2009-01-21 Thread Herouth Maoz
Filip Rembiałkowski wrote:


 1. which postgres version?
8.3.1
 2. can you post results of EXPLAIN ANALYZE (please note it actually
 executes the query)? 

Well, if it executes the query it's a problem. I might be able to do so
during the weekend, when I can play with the scripts and get away with
failures, but of course there is less data in the tables then.

Thanks,
Herouth



[GENERAL] bytea size limit?

2009-01-21 Thread paulo matadr
Hi everyone, I need help!
My application returns this error:

2009-01-20 10:22:03,264 INFO  [STDOUT] Caused by: org.postgresql.util.
PSQLException: ERROR: invalid memory alloc request size 1705447581

 CREATE TABLE batch.relatorio_gerado (
  rege_id integer NOT NULL,
  fuin_id integer NOT NULL,
  rela_id integer NOT NULL,
  rege_tmultimaalteracao timestamp without time zone NOT NULL DEFAULT now(),
  rege_nnpaginas integer NOT NULL,
  rege_binario bytea,
  rege_pdf bytea
 );

The error above refers to the column rege_pdf (bytea), and occurs when I try
to generate a report in PDF.

thanks for help



Re: [GENERAL] [ADMIN] bytea size limit?

2009-01-21 Thread Kenneth Marshall
On Wed, Jan 21, 2009 at 05:21:03AM -0800, paulo matadr wrote:
 I everyone, need help!!!
 My aplication return erro:
 
 2009-01-20 10:22:03,264 INFO  [STDOUT] Caused by: org.postgresql.util.
 PSQLException: ERROR: invalid memory alloc request size 1705447581
 
  TABLE batch.relatorio_gerado
 
   rege_id integer NOT NULL,
   fuin_id integer NOT NULL,
   rela_id integer NOT NULL,
   rege_tmultimaalteracao timestamp without time zone NOT NULL DEFAULT now(),
   rege_nnpaginas integer NOT NULL,
   rege_binario bytea,
  rege_pdf bytea
 
 I get this erro above  refers the column rege_pdf bytea   , when try 
 generate report in pdf.
 
 thanks for help
 

I believe that the default size limit for a bytea or text field is
currently 1GB.

Ken



Re: [GENERAL] bytea size limit?

2009-01-21 Thread Grzegorz Jaśkiewicz
there's no real limit (its size is described with a 32-bit number, and
that's the only limitation here).
But you need to be aware, that content is sent over at once, so memory
is the limit in your case.

http://www.postgresql.org/docs/8.3/static/datatype-binary.html

For such large objects, it might actually be better to store them
separately as files, and just store the file name.
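(A sketch of that file-based approach, with a hypothetical table and path; the
application writes the PDF to disk and the database keeps only the reference:)

```sql
CREATE TABLE batch.relatorio_arquivo (
    rege_id  integer PRIMARY KEY,
    pdf_path text NOT NULL  -- e.g. '/var/reports/12345.pdf', written by the application
);
```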



Re: [ADMIN] [GENERAL] bytea size limit?

2009-01-21 Thread Kenneth Marshall
On Wed, Jan 21, 2009 at 01:55:28PM +0000, Grzegorz Jaśkiewicz wrote:
 there's no real limit (its size is described with 32bit number, and
 that's the only limitation here).
 But you need to be aware, that content is sent over at once, so memory
 is the limit in your case.
 
 http://www.postgresql.org/docs/8.3/static/datatype-binary.html
 
 For such large objects, it might be actually better to store them
 separate as files, and just store file name.
 

The TOAST implementation however only allows 30-bits for the
size of the TOAST entry which caps the size at 2^30 or 1GB. I
agree that he could very well be limited also by the memory on
his system.

Cheers,
Ken



Re: [ADMIN] [GENERAL] bytea size limit?

2009-01-21 Thread Grzegorz Jaśkiewicz
On Wed, Jan 21, 2009 at 2:06 PM, Kenneth Marshall k...@rice.edu wrote:

 The TOAST implementation however only allows 30-bits for the
 size of the TOAST entry which caps the size at 2^30 or 1GB. I
 agree that he could very well be limited also by the memory on
 his system.

i wasn't aware of that, and also - it doesn't say anything about it in the docs.
As for limitations, that also depends on db drivers he is using, etc,
etc. I use bytea to store 100-200MB objects in many dbs, but I
wouldn't go as far as 1.5GB ...



-- 
GJ



Re: [ADMIN] [GENERAL] bytea size limit?

2009-01-21 Thread Kenneth Marshall
On Wed, Jan 21, 2009 at 02:09:01PM +0000, Grzegorz Jaśkiewicz wrote:
 On Wed, Jan 21, 2009 at 2:06 PM, Kenneth Marshall k...@rice.edu wrote:
 
  The TOAST implementation however only allows 30-bits for the
  size of the TOAST entry which caps the size at 2^30 or 1GB. I
  agree that he could very well be limited also by the memory on
  his system.
 
 i wasn't aware of that, and also - it doesn't say anything about it in docs.
 As for limitations, that also depends on db drivers he is using, etc,
 etc. I use bytea to store 100-200MB objects in many dbs, but I
 wouldn't go as far as 1.5GB ...
 
The reference is in:
http://www.postgresql.org/docs/8.3/static/storage-toast.html

Here is the pertinent excerpt:

Only certain data types support TOAST -- there is no need to impose the
overhead on data types that cannot produce large field values. To support
TOAST, a data type must have a variable-length (varlena) representation,
in which the first 32-bit word of any stored value contains the total
length of the value in bytes (including itself). TOAST does not constrain
the rest of the representation. All the C-level functions supporting a
TOAST-able data type must be careful to handle TOASTed input values.
(This is normally done by invoking PG_DETOAST_DATUM before doing anything
with an input value, but in some cases more efficient approaches are possible.)

TOAST usurps two bits of the varlena length word (the high-order bits on
big-endian machines, the low-order bits on little-endian machines),
thereby limiting the logical size of any value of a TOAST-able data type
to 1 GB (2^30 - 1 bytes). When both bits are zero, the value is an ordinary
un-TOASTed value of the data type, and the remaining bits of the length
word give the total datum size (including length word) in bytes. When the
highest-order or lowest-order bit is set, the value has only a single-byte
header instead of the normal four-byte header, and the remaining bits give
the total datum size (including length byte) in bytes. As a special case,
if the remaining bits are all zero (which would be impossible for a
self-inclusive length), the value is a pointer to out-of-line data stored
in a separate TOAST table. (The size of a TOAST pointer is given in the
second byte of the datum.) Values with single-byte headers aren't aligned
on any particular boundary, either. Lastly, when the highest-order or
lowest-order bit is clear but the adjacent bit is set, the content of the
datum has been compressed and must be decompressed before use. In this
case the remaining bits of the length word give the total size of the
compressed datum, not the original data. Note that compression is also
possible for out-of-line data but the varlena header does not tell whether
it has occurred -- the content of the TOAST pointer tells that, instead.

Cheers,
Ken



Re: [ADMIN] [GENERAL] bytea size limit?

2009-01-21 Thread Grzegorz Jaśkiewicz
you don't have to quote everything :)
I meant, there's nothing about it on the bytea doc page, where one would
expect to read it.



[GENERAL] Odd array issue

2009-01-21 Thread Craig Ringer
Hi folks

While debugging a query that was returning far fewer records than
expected, I narrowed the issue down to what I think might be a Pg bug. I
thought I'd check here in case I'm just missing something obvious before
sending anything to the bugs list.


test=# select version();
                               version
----------------------------------------------------------------------
 PostgreSQL 8.3.5 on i486-pc-linux-gnu, compiled by GCC gcc-4.3.real
(Ubuntu 4.3.2-1ubuntu11) 4.3.2
(1 row)


I managed to simplify the original huge query down to a few simple test
statements. First:

The following query should return 2 records, but returns zero instead:

test=# select x, regexp_matches(x::text, 'm')
test-# FROM generate_series(1,2) AS x;
 x | regexp_matches
---+
(0 rows)

The match regexp may be anything, as may the input, so long as the
regexp does not match. If it does match, an array is output as one would
expect.

Surely regexp_matches should be returning a null or empty array, rather
than somehow suppressing the output of that record entirely if there's
no match?



Another case (closer to what I was originally facing in a much larger
query) looks like this:

test=# select x, ''||((regexp_matches(x::text, E'\\d'))[0])
test-# FROM generate_series(1,2) AS x;
 x | ?column?
---+--
(0 rows)

Yes, that's a zero index, and no, the issue doesn't arise when correctly
indexing from one. Surely, though, the incorrect index should result in
an error or a null value rather than silent suppression of a record?
Also, it doesn't happen when you take the zeroth index of ANY null array
(see below), only one from regexp_matches.

Note also that the pre-concatenation of '' is significant. Without that,
the output is:

test=# \pset null NULL
Null display is NULL.

test=# select x, (regexp_matches(x::text, E'\\d'))[0] FROM
generate_series(1,2) AS x;
 x | regexp_matches
---+
 1 | NULL
 2 | NULL
(2 rows)

Yet other queries that also attempt to get the zeroth index of an array,
concat a null, etc all work how you'd expect:

test=# SELECT x, ''||(null::text) from generate_series(1,2) AS x;
 x | ?column?
---+--
 1 | NULL
 2 | NULL
(2 rows)

test=# SELECT x, ''||(('{4}'::text[])[0])
test-# FROM generate_series(1,2) AS x;
 x | ?column?
---+--
 1 | NULL
 2 | NULL
(2 rows)

test=# SELECT x, ''||((null::text[])[1]) from generate_series(1,2) AS x;
 x | ?column?
---+--
 1 | NULL
 2 | NULL
(2 rows)



I'm confused. Am I missing something obvious, or are things not working
how they should be?

--
Craig Ringer



Re: [GENERAL] Odd array issue

2009-01-21 Thread Tom Lane
Craig Ringer cr...@postnewspapers.com.au writes:
 The following query should return 2 records, but returns zero instead:

 test=# select x, regexp_matches(x::text, 'm')
 test-# FROM generate_series(1,2) AS x;
  x | regexp_matches
 ---+
 (0 rows)

No, that's correct.  The SRF returns an empty set, so there are no
output records.  The behavior you seem to be imagining would make it
impossible to distinguish empty set from a single row containing NULL.
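(If the goal is one output row per input row, with NULL where nothing matched,
the set-returning call can be wrapped in a scalar subquery — a sketch:)

```sql
-- The scalar subquery turns "zero rows from the SRF" into a NULL:
SELECT x,
       (SELECT (regexp_matches(x::text, 'm'))[1]) AS m
FROM generate_series(1,2) AS x;
-- both input rows appear; m is NULL where the pattern does not match
```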

regards, tom lane



[GENERAL] autovacuum daemon

2009-01-21 Thread Abdul Rahman
Dear All,

I have set postgresql.conf for autovacuum and need to know whether the process 
is running/working or not.

Regards,
Avdul Rehman.


  

Re: [GENERAL] autovacuum daemon

2009-01-21 Thread Raymond O'Donnell
On 21/01/2009 07:47, Abdul Rahman wrote:

 I have set postgresql.conf for autovacuum and need to know weather
 the process is running/working or not.

ps ax | grep postgres


Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--



Re: [GENERAL] autovacuum daemon

2009-01-21 Thread Raymond O'Donnell
On 21/01/2009 14:57, Raymond O'Donnell wrote:
 On 21/01/2009 07:47, Abdul Rahman wrote:
 
 I have set postgresql.conf for autovacuum and need to know weather
 the process is running/working or not.
 
 ps ax | grep postgres

Also, in psql:

postgres=# show autovacuum;
 autovacuum
------------
 on
(1 row)

HTH,

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--



Res: [ADMIN] [GENERAL] bytea size limit?

2009-01-21 Thread paulo matadr
My system has a very large amount of RAM, so is it possible to review postgresql.conf?



From: Kenneth Marshall k...@rice.edu
To: Grzegorz Jaśkiewicz gryz...@gmail.com
Cc: paulo matadr saddon...@yahoo.com.br; pgsql-general@postgresql.org; admin 
pgsql-ad...@postgresql.org
Sent: Wednesday, January 21, 2009 11:06:23
Subject: Re: [ADMIN] [GENERAL] bytea size limit?

On Wed, Jan 21, 2009 at 01:55:28PM +0000, Grzegorz Jaśkiewicz wrote:
 there's no real limit (its size is described with 32bit number, and
 that's the only limitation here).
 But you need to be aware, that content is sent over at once, so memory
 is the limit in your case.
 
 http://www.postgresql.org/docs/8.3/static/datatype-binary.html
 
 For such large objects, it might be actually better to store them
 separate as files, and just store file name.
 

The TOAST implementation however only allows 30-bits for the
size of the TOAST entry which caps the size at 2^30 or 1GB. I
agree that he could very well be limited also by the memory on
his system.

Cheers,
Ken




Re: [GENERAL] autovacuum daemon

2009-01-21 Thread Andreas Wenk

Abdul Rahman wrote:
 Deal All,
 
 I have set postgresql.conf for autovacuum and need to know weather the
 process is running/working or not.

Hi Abdul,

1. you could check the log file

2. select setting from pg_settings where name = 'autovacuum';

Cheers

Andy

-- 
Andreas Wenk St.Pauli - Hamburg - Germany

 Regards,
 Avdul Rehman.
 
 



Re: [ADMIN] [GENERAL] bytea size limit?

2009-01-21 Thread Grzegorz Jaśkiewicz
On Wed, Jan 21, 2009 at 3:02 PM, paulo matadr saddon...@yahoo.com.br wrote:
 My system have very large ram size, so its possible review postgresql.conf ?

all depends on how you access the DB, what type of drivers (odbc, libpq, etc,
etc). See, every time you pass a row, they usually have to allocate that
much memory; not only is it quite inefficient, but also slow.

As for the configuration option, I am interested in knowing myself too :)

-- 
GJ



Re: [ADMIN] [GENERAL] bytea size limit?

2009-01-21 Thread Kenneth Marshall
On Wed, Jan 21, 2009 at 03:07:13PM +0000, Grzegorz Jaśkiewicz wrote:
 On Wed, Jan 21, 2009 at 3:02 PM, paulo matadr saddon...@yahoo.com.br wrote:
  My system have very large ram size, so its possible review postgresql.conf ?
 
 all depends on how you access DB, what type of drivers (odbc, libpq, etc, 
 etc).
 See, every time you pass a row , they usually have to allocate that
 much memory, not only its quite inefficient, but also slow.
 
 As for the configuration option, I am interested in knowing myself too :)
 
I do not think that the size limit is a runtime option. It is currently
compiled into the server.

Ken



Re: [GENERAL] bytea size limit?

2009-01-21 Thread Merlin Moncure
On 1/21/09, paulo matadr saddon...@yahoo.com.br wrote:

 I everyone, need help!!!
 My aplication return erro:

 2009-01-20 10:22:03,264 INFO  [STDOUT] Caused by: org.postgresql.util.
 PSQLException: ERROR: invalid memory alloc request size 1705447581

What exactly were you doing when you got the error?
How big is the item?
How are you trying to pull it?? libpq, php, etc?

merlin



Re: [GENERAL] Odd array issue

2009-01-21 Thread Craig Ringer
Tom Lane wrote:
 Craig Ringer cr...@postnewspapers.com.au writes:
 The following query should return 2 records, but returns zero instead:
 
  test=# select x, regexp_matches(x::text, 'm')
  test-# FROM generate_series(1,2) AS x;
  x | regexp_matches
 ---+
 (0 rows)
 
 No, that's correct.  The SRF returns an empty set, so there are no
 output records.  The behavior you seem to be imagining would make it
 impossible to distinguish empty set from a single row containing NULL.

Aah, that makes sense. For some reason I'd been thinking of
regexp_matches as returning a single array, rather than returning a set
of scalar values. RTFM (better) I guess. Sorry.

Thanks for taking a look, it's appreciated.

--
Craig Ringer



Re: [GENERAL] autovacuum daemon

2009-01-21 Thread Grzegorz Jaśkiewicz
select * from pg_autovacuum;




-- 
GJ



Re: [GENERAL] Slow update

2009-01-21 Thread Filip Rembiałkowski
2009/1/21 Herouth Maoz hero...@unicell.co.il

  Hello.

 I have a daily process that synchronizes our reports database from our
 production databases. In the past few days, it happened a couple of times
 that an update query took around 7-8 hours to complete, which seems a bit
 excessive. This is the query:

 UPDATE rb
 SET service = b.service,
 status = b.status,
 has_notification = gateway_id NOT IN (4,101,102),
 operator = COALESCE(
 b.actual_target_network_id,
 b.requested_target_network_id
 )
 FROM sms.billing b
 WHERE b.time_arrived >= :date_start
 AND   rb.time_stamp >= :date_start
 AND   rb.delivered = 0
 AND   rb.sms_user = b.user_id
 AND   rb.reference = b.user_reference
 AND   OVERLAY( rb.msisdn placing '972' from 1 for 1 ) = b.msisdn
 AND   NOT mo_billed
 AND   system_id <> 6  -- Exclude Corporate, as it already has
 service/status
 ;

 The variable :date_start is set to a date 3 days ago.

 I ran explain for this query and it gave me this:



 --
 Nested Loop  (cost=21567.12..854759.82 rows=1 width=210)
   Join Filter: (((rb.sms_user)::text = (b.user_id)::text) AND
 (rb.reference = b.user_reference))
   ->  Bitmap Heap Scan on rb  (cost=21546.02..23946.16 rows=819 width=198)
         Recheck Cond: ((delivered = 0) AND (time_stamp >= '2009-01-18
 00:00:00'::timestamp without time zone) AND (time_stamp < '2009-01-21
 00:00:00'::timestamp without time zone))
         Filter: ((NOT mo_billed) AND (system_id <> 6))
         ->  BitmapAnd  (cost=21546.02..21546.02 rows=819 width=0)
               ->  Bitmap Index Scan on rb_delivered_ind
 (cost=0.00..1419.99 rows=45768 width=0)
                     Index Cond: (delivered = 0)
               ->  Bitmap Index Scan on rb_timestamp_ind
 (cost=0.00..20125.37 rows=188994 width=0)
                     Index Cond: ((time_stamp >= '2009-01-18
 00:00:00'::timestamp without time zone) AND (time_stamp < '2009-01-21
 00:00:00'::timestamp without time zone))
   ->  Bitmap Heap Scan on billing b  (cost=21.10..1004.77 rows=351
 width=49)
         Recheck Cond: ((b.msisdn)::text = ((substring((rb.msisdn)::text,
 1, 0) || '972'::text) || substring((rb.msisdn)::text, 2)))
         Filter: ((b.time_arrived >= '2009-01-18 00:00:00'::timestamp
 without time zone) AND (b.time_arrived < '2009-01-21 00:00:00'::timestamp
 without time zone))
         ->  Bitmap Index Scan on billing_msisdn_sme_reference
 (cost=0.00..21.10 rows=351 width=0)
               Index Cond: ((b.msisdn)::text =
 ((substring((rb.msisdn)::text, 1, 0) || '972'::text) ||
 substring((rb.msisdn)::text, 2)))

 I'm not an expert on reading plans, but it seems to me that it uses indices
 on both tables that participate in this query, so it shouldn't take such a
 long time.

 The number of records in the table rb for the past three days is 386833. On
 the sms.billing table it seems to select the index on the msisdn and
 sme_reference fields and use it partially (only using the msisdn field).
 Looking at that table, the frequency of each value in the msisdn field is at
 most 17678 for the current data, where mostly it's a couple of thousand.
 How can this take so long?


1. Which Postgres version?
2. Can you post the results of EXPLAIN ANALYZE (please note that it actually
executes the query)?




-- 
Filip Rembiałkowski


Re: [GENERAL] Slow update

2009-01-21 Thread Grzegorz Jaśkiewicz
On Wed, Jan 21, 2009 at 12:55 PM, Herouth Maoz hero...@unicell.co.il wrote:
 Well, if it executes the query it's a problem. I might be able to do so
 during the weekend, when I can play with the scripts and get away with
 failures, but of course there is less data in the tables then.


You should seriously think about having a test machine.

-- 
GJ



Re: [GENERAL] autovacuum daemon

2009-01-21 Thread Sam Mason
  [ Grzegorz, please include some context ]

On Wed, Jan 21, 2009 at 03:01:39PM +0000, Grzegorz Jaśkiewicz wrote:
 Avdul Rehman wrote:
  I have set postgresql.conf for autovacuum and need to know whether
  the process is running/working or not.
 
 select * from pg_autovacuum;

This won't do what the OP was asking for; pg_autovacuum only records
non-default autovacuum parameters where they have been set by the
user/admin.  More details here:

  http://www.postgresql.org/docs/current/static/catalog-pg-autovacuum.html
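
To check whether the daemon is actually enabled and has been doing work, something along these lines may be more helpful (a sketch; the last_autovacuum/last_autoanalyze columns of pg_stat_user_tables are available from 8.3 on):

```sql
-- Is autovacuum turned on at all?
SHOW autovacuum;

-- When did autovacuum last process each table?
SELECT relname, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY last_autovacuum DESC;
```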

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Slow update

2009-01-21 Thread Sam Mason
On Wed, Jan 21, 2009 at 02:55:00PM +0200, Herouth Maoz wrote:
 Filip Rembiałkowski wrote:
  2. can you post results of EXPLAIN ANALYZE (please note it actually
  executes the query)? 
 
 Well, if it executes the query it's a problem.

You can wrap an EXPLAIN ANALYSE up in BEGIN;...ROLLBACK;.  That way
PG won't record any of the changes permanently.  More explanation is
available in the docs:

  http://www.postgresql.org/docs/current/static/sql-explain.html
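
For the UPDATE discussed in this thread, the transaction-wrapped timing run might look like this (a sketch; the statement is abbreviated from the original post, and nothing is committed):

```sql
BEGIN;

-- Times the real execution; the row changes are thrown away below.
EXPLAIN ANALYZE
UPDATE rb
SET service = b.service,
    status  = b.status
FROM sms.billing b
WHERE rb.sms_user  = b.user_id
  AND rb.reference = b.user_reference;

ROLLBACK;  -- discard the updates made while measuring
```

Note that the UPDATE still takes its usual row locks while it runs, so this is best done when it won't block production traffic.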

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Slow update

2009-01-21 Thread Herouth Maoz
Grzegorz Jaśkiewicz wrote:

 On Wed, Jan 21, 2009 at 12:55 PM, Herouth Maoz hero...@unicell.co.il wrote:
   
 Well, if it executes the query it's a problem. I might be able to do so
 during the weekend, when I can play with the scripts and get away with
 failures, but of course there is less data in the tables then.

 

 You should seriously think about having a test machine.

   
I have a test machine - but the data in there is test data, and it's a
slower machine. A testing environment is good for development, but can
hardly be used to really simulate the production machine for performance.

Herouth


Re: [GENERAL] bytea size limit?

2009-01-21 Thread Albe Laurenz
paulo matadr wrote:
 Hi everyone, I need help!
 My application returns an error:
 
 2009-01-20 10:22:03,264 INFO  [STDOUT] Caused by: org.postgresql.util.
 PSQLException: ERROR: invalid memory alloc request size 1705447581
 
 
  TABLE batch.relatorio_gerado 
 
   rege_id integer NOT NULL,
   fuin_id integer NOT NULL,
   rela_id integer NOT NULL,
   rege_tmultimaalteracao timestamp without time zone NOT NULL DEFAULT now(),
   rege_nnpaginas integer NOT NULL,
   rege_binario bytea,
   rege_pdf bytea
 
 The error above refers to the column rege_pdf bytea,
 when I try to generate a report in PDF.

What are you doing in terms of SQL?
INSERT, UPDATE, DELETE?

How big are the binary objects involved?

What are the values of the database parameters shared_buffers and work_mem?

Yours,
Laurenz Albe



Re: [GENERAL] How to find how much postgresql use the memory?

2009-01-21 Thread Albe Laurenz
Luki Rustianto wrote:
 Ok, I see. So what's the best way to find optimum values for the
 various memory-related settings of PostgreSQL?

How much memory is there in the machine?
Are there other programs running or is the machine dedicated to the database?
Are the queries you run complicated (order / hash large amounts of data) or 
simple?

Maybe a rough guideline would be:
Set work_mem to the amount of memory a query will regularly need for sorting 
and such,
set max_connections to the maximum number of connections you need.

Then figure out how much memory you want to dedicate to the database,
subtract work_mem * max_connections from that value and set
shared_buffers to the result value.
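
As a purely illustrative worked example of that guideline (the numbers are hypothetical, not recommendations): on a machine where 4 GB is set aside for the database,

```
max_connections = 100
work_mem        = 16MB      # per sort/hash operation, per query

# Reserved for per-query memory: 100 * 16MB = 1600MB
# Remaining for the shared cache: 4096MB - 1600MB = 2496MB
shared_buffers  = 2400MB
```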

Make sure you adjust the kernel parameters accordingly, see
http://www.postgresql.org/docs/current/static/kernel-resources.html

Tom Lane suggested in
http://archives.postgresql.org/pgsql-general/2004-02/msg00471.php
that it might be better to set shared_buffers relatively
small and let the filesystem cache do the buffering, so that's
another way you can go. His advice is usually good.

Yours,
Laurenz Albe



[GENERAL] Problem with retrieving records using double precision fields

2009-01-21 Thread Raymond C. Rodgers
In two separate databases that are configured to have latitude and 
longitude as double precision fields, I'm having trouble retrieving 
records using between on the longitude field. I know that I have data 
within range, but any query involving the longitude field fails to find 
records.


Here's a quick example table; it's not the actual table in either 
database, but it's close enough to demonstrate my point on PostgreSQL 
8.3.5 on Fedora 10 x86_64:


test=# create table coordtest (id serial, latitude float, longitude float);
NOTICE:  CREATE TABLE will create implicit sequence "coordtest_id_seq"
for serial column "coordtest.id"

CREATE TABLE
test=# insert into coordtest(latitude,longitude) values 
(42.38013,-83.05175),(42.411143,-82.943461);

INSERT 0 2
test=# select * from coordtest where latitude between 42.0 and 42.5 and 
longitude between -83.0 and -84.0;

id | latitude | longitude
+--+---
(0 rows)



test=# select * from coordtest;
id | latitude  | longitude 
+---+

 1 |  42.38013 |  -83.05175
 2 | 42.411143 | -82.943461
(2 rows)

test=# select * from coordtest where latitude between 42.0 and 42.5 ;
id | latitude  | longitude 
+---+

 1 |  42.38013 |  -83.05175
 2 | 42.411143 | -82.943461
(2 rows)

test=# select * from coordtest where longitude between -83.0 and -84.0;
id | latitude | longitude
+--+---
(0 rows)


Any idea what's going on here and why I'm not getting results?
Thanks,
Raymond



Re: [GENERAL] Problem with retrieving records using double precision fields

2009-01-21 Thread Sam Mason
On Wed, Jan 21, 2009 at 12:22:14PM -0500, Raymond C. Rodgers wrote:
 test=# select * from coordtest where latitude between 42.0 and 42.5 ;

The LHS value of a BETWEEN operator has to be smaller than the RHS
value.  You've got it correct above, but it's not correct here:

 test=# select * from coordtest where longitude between -83.0 and -84.0;

-83 is greater than -84.
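
Applied to the original query, either ordering the bounds correctly or using BETWEEN SYMMETRIC (which sorts its bounds for you, and is available in the 8.3 series the poster is running) works:

```sql
-- Bounds in ascending order:
SELECT * FROM coordtest WHERE longitude BETWEEN -84.0 AND -83.0;

-- Or let the server sort the bounds:
SELECT * FROM coordtest WHERE longitude BETWEEN SYMMETRIC -83.0 AND -84.0;
```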


-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Problem with retrieving records using double precision fields

2009-01-21 Thread Richard Huxton
Raymond C. Rodgers wrote:
 In two separate databases that are configured to have latitude and
 longitude as double precision fields, I'm having trouble retrieving
 records using between on the longitude field. I know that I have data
 within range, but any query involving the longitude field fails to find
 records.

 test=# select * from coordtest where longitude between -83.0 and -84.0;

Order of between arguments is important.

richardh=> SELECT 2 between 1 and 3;
 ?column?
--
 t
(1 row)

richardh=> SELECT 2 between 3 and 1;
 ?column?
--
 f
(1 row)

richardh=> SELECT -2 between -1 and -3;
 ?column?
--
 f
(1 row)

-- 
  Richard Huxton
  Archonet Ltd



Res: [ADMIN] [GENERAL] bytea size limit?

2009-01-21 Thread paulo matadr
The size of the object depends on the report a user requests.
shared_buffers = 2048MB
work_mem = 12MB

---
Server configuration:
16 GB RAM
Red Hat Enterprise Linux Server release 5

The application is web based, using a JBoss application server over the JDBC
driver. The scenario: the application requests a report; if the report is too
big, it hangs, and it is then not possible to select that or other reports
from the table.

From: Albe Laurenz laurenz.a...@wien.gv.at
To: paulo matadr  *EXTERN* saddon...@yahoo.com.br; 
pgsql-general@postgresql.org; admin pgsql-ad...@postgresql.org
Sent: Wednesday, 21 January 2009, 14:03:17
Subject: Re: [ADMIN] [GENERAL] bytea size limit?

paulo matadr wrote:
 Hi everyone, I need help!
 My application returns an error:
 
 2009-01-20 10:22:03,264 INFO  [STDOUT] Caused by: org.postgresql.util.
 PSQLException: ERROR: invalid memory alloc request size 1705447581
 
 
  TABLE batch.relatorio_gerado 
 
   rege_id integer NOT NULL,
   fuin_id integer NOT NULL,
   rela_id integer NOT NULL,
   rege_tmultimaalteracao timestamp without time zone NOT NULL DEFAULT now(),
   rege_nnpaginas integer NOT NULL,
   rege_binario bytea,
   rege_pdf bytea
 
 The error above refers to the column rege_pdf bytea,
 when I try to generate a report in PDF.

What are you doing in terms of SQL?
INSERT, UPDATE, DELETE?

How big are the binary objects involved?

What are the values of the database parameters shared_buffers and work_mem?

Yours,
Laurenz Albe





Re: [GENERAL] Problem with retrieving records using double precision fields

2009-01-21 Thread Raymond C. Rodgers

Richard Huxton wrote:

Raymond C. Rodgers wrote:
  

In two separate databases that are configured to have latitude and
longitude as double precision fields, I'm having trouble retrieving
records using between on the longitude field. I know that I have data
within range, but any query involving the longitude field fails to find
records.



  

test=# select * from coordtest where longitude between -83.0 and -84.0;



Order of between arguments is important.

richardh=> SELECT 2 between 1 and 3;
 ?column?
--
 t
(1 row)

richardh=> SELECT 2 between 3 and 1;
 ?column?
--
 f
(1 row)

richardh=> SELECT -2 between -1 and -3;
 ?column?
--
 f
(1 row)

  
Ok, so I made a simple math error (-83 being larger than -84) and didn't 
catch it.


Thanks,
Raymond


[GENERAL] deductive databases in postgreSQL

2009-01-21 Thread Carlos Gonzalez-Cadenas
Hi all,

We're looking for a deductive database for our application. Oracle, in the
11g version, has extended its RDBMS with deductive capabilities, supporting
a subset of OWL-DL[1]. They are able to load an ontology, perform the
inferences, dump the inferred info in the database and after that answer
queries with the asserted plus the inferred info.

We would like to have similar functionalities in PostgreSQL. Do you know if
there's someone working on that or if there are plans to support it soon?.
If not, what do you think is the best and most efficient way of implementing
it?

Thank you very much in advance,

Carlos

[1]: http://www.oracle.com/technology/tech/semantic_technologies/index.html


Re: [GENERAL] Database schema data synchronizer software for PostgreSQL?

2009-01-21 Thread David Fetter
On Wed, Jan 21, 2009 at 05:18:57AM +0100, Együd Csaba wrote:
 From: David Fetter [mailto:da...@fetter.org]
 On Tue, Jan 20, 2009 at 03:03:33PM +0100, Csaba Együd wrote:
  Hi,
  I'd like to ask your suggestions about a reliable admin software
  which is able to compare two dabases and generate a schema
  synchrinizer script.
 
 There is no such thing, and there is no prospect of there ever
 being such a thing, because the database does not contain enough
 information to create this automatically.  The problem exists at
 the organizational level, and needs to be solved there.
 
  It would be nice to be able to generate data synchronization
  script for only the selected tables, and other features.
 
 Yes, you should definitely do that and store the scripts to do it
 in your source code management system along with all the rest of
 the deploy and upgrade scripts.  They can't be generated
 automatically either.
 
 David,
 I see your points and can generally agree, but there is a level
 which can be automated - I mean a mechanical comparison.

That level isn't terribly high, and in my experience, it is not a
worthwhile endeavor because it does not solve the actual problem.

 Of course the resulting sync script must be tested before applying it in a
 production environment.  These tools could save a lot of time.

No.

What saves time is getting your development and deployment processes
to the point where you're not needing to figure out what's happened.
Instead, you'll be doing database changes *only* with scripts, which
you'll test, etc., etc., rather than trying to reverse engineer your
own stuff.

Reverse engineering is what you do, and then only in an emergency, to
*others'* software, not *yours.*

 In my opinion the result, this way or that way, would be the same: a
 version migration or sync script to attach to the upgrade package.
 I think the difference is that I do not have to maintain a db script
 during the development to keep it up to date.  I simply concentrate
 on the task not the administration.  I may be wrong...

You're right, in that you're wrong on this.  You need to take your
development process in hand and then keep it so.

 Up to now I've been doing it the manual way but it makes me really
 non-effective.

What's *really* ineffective is continuing as you've been doing.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [GENERAL] deductive databases in postgreSQL

2009-01-21 Thread Grzegorz Jaśkiewicz
On Wed, Jan 21, 2009 at 6:09 PM, Carlos Gonzalez-Cadenas
car...@gonzalez.name wrote:

 If not, what do you think is the best and most efficient way of implementing
 it?

 Thank you very much in advance,


I think you should ask that sort of question on the -hackers list, not here.


-- 
GJ



Re: [GENERAL] how to find foreign key details (column, that is)

2009-01-21 Thread Bruce Momjian
Tom Lane wrote:
 Karsten Hilbert karsten.hilb...@gmx.net writes:
  This is what my 8.3 manual says:
 
  conkey │ int2[] │ pg_attribute.attnum │ If a table constraint, list 
  of columns which the constraint constrains │
 
  From that I wouldn't have figured it'd apply to foreign keys
  as well. So I assume it is fair to say that foreign keys
  are one type of table constraint, right ?
 
 Right.  I think what the comment is actually trying to point out is that
 conkey isn't relevant to domain constraints, which also appear in
 pg_constraint.

Can someone come up with better documentation wording for conkey?  I
can't:


http://developer.postgresql.org/pgdocs/postgres/catalog-pg-constraint.html

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [GENERAL] how to find foreign key details (column, that is)

2009-01-21 Thread Karsten Hilbert
On Wed, Jan 21, 2009 at 01:49:44PM -0500, Bruce Momjian wrote:

 Tom Lane wrote:
  Karsten Hilbert karsten.hilb...@gmx.net writes:
   This is what my 8.3 manual says:
  
   conkey │ int2[] │ pg_attribute.attnum │ If a table constraint, list of 
   columns which the constraint constrains │
  
   From that I wouldn't have figured it'd apply to foreign keys
   as well. So I assume it is fair to say that foreign keys
   are one type of table constraint, right ?
  
  Right.  I think what the comment is actually trying to point out is that
  conkey isn't relevant to domain constraints, which also appear in
  pg_constraint.
 
 Can someone come up with better documentation wording for conkey?  I
 can't:
 
   
 http://developer.postgresql.org/pgdocs/postgres/catalog-pg-constraint.html

How about adding a second line:

Note: since foreign keys are table constraints, this applies to them, too.

or

If a foreign key, list of columns referencing the target table

(note, referencing as opposed to referenced by as in confkey)

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346



[GENERAL] Followup: Here's why I want to use connection pooling middleware!

2009-01-21 Thread Kirk Strauser
On Thursday 15 January 2009 09:54:50 Kirk Strauser wrote:
 I have a PostgreSQL 8.3.5 server with max_connections = 400.  At this
 moment, I have 223 open connections, including 64 from a bunch of
 webserver processes and about 100 from desktop machines running a
 particular application.  The rest are from various scheduled processes
and other assorted things.  Now, I know there are projects like pgpool-II
that can serve to pool connections to the server.  Why would I want
 to do that, though?

After installing and configuring PgBouncer and then pointing all of our clients 
at it, our average number of database connections has dropped from 250+ to 17.  
Query times are also much better, and more RAM is going to caching than to 
holding processes.  Count me as a new fan.
-- 
Kirk Strauser



Re: [GENERAL] Custom type, operators and operator class not sorting/indexing correctly

2009-01-21 Thread Martijn van Oosterhout
On Wed, Jan 21, 2009 at 10:48:09AM +, Roger Leigh wrote:
 Ah, thanks for the clarification.  So I need to use CREATE TYPE
 rather than CREATE DOMAIN.  Because I'm essentially just storing
 a text string with different operators, can I derive a type from
 TEXT (perhaps by reusing the same input, output, receive and send
 functions as TEXT?)  I saw the textsend and textreceive functions,
 which I assume are the appropriate functions for send and receive?
 Are there any for input and output which I may reuse?

Yes, you can copy all the existing attributes. Use:

select * from pg_type where typname='text';

To get the relevent info (upcoming 8.4 will have CREATE TYPE xxx
(LIKE=text) ). Lookup the citext project for an example.
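
A minimal sketch of such a text-alike type on a pre-8.4 server, modelled on the citext approach (the name mytext is hypothetical; textin, textout, textrecv and textsend are the core text functions):

```sql
CREATE TYPE mytext;  -- shell type, filled in below

CREATE FUNCTION mytext_in(cstring) RETURNS mytext
    AS 'textin' LANGUAGE internal IMMUTABLE STRICT;
CREATE FUNCTION mytext_out(mytext) RETURNS cstring
    AS 'textout' LANGUAGE internal IMMUTABLE STRICT;
CREATE FUNCTION mytext_recv(internal) RETURNS mytext
    AS 'textrecv' LANGUAGE internal IMMUTABLE STRICT;
CREATE FUNCTION mytext_send(mytext) RETURNS bytea
    AS 'textsend' LANGUAGE internal IMMUTABLE STRICT;

CREATE TYPE mytext (
    INPUT          = mytext_in,
    OUTPUT         = mytext_out,
    RECEIVE        = mytext_recv,
    SEND           = mytext_send,
    INTERNALLENGTH = VARIABLE,
    STORAGE        = extended
);
```

Custom operators and an operator class can then be attached to the new type.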

Have a nice day,
-- 
Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 Please line up in a tree and maintain the heap invariant while 
 boarding. Thank you for flying nlogn airlines.




[GENERAL] how to implement a foreign key type constraint against a not unique column

2009-01-21 Thread Brent Wood
Hi,

I have a table with a column of IDs (integer); these are unique except where 
they equal -1 (column 1).
I have a partial unique index, where the value is not -1, to enforce this.

I want to use this column as a foreign key on a column in another table (column 
2), but cannot without a full unique index. Is there any way to add an 
equivalent constraint to a foreign key which restricts entries in column 2 to 
values in column 1?

I tried a CHECK where obs_id in (select id from ..), but subqueries are not 
supported in a CHECK.


I believe it is possible by using a table with nulls for the -1 values with a 
unique index on it as the foreign key, then a view which uses case or coalesce 
to present the nulls as -1, but this seems a cumbersome workaround.
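
The workaround described above might be sketched like this (table and view names are hypothetical; NULL stands in for the -1 sentinel so a plain UNIQUE constraint and foreign key can be used):

```sql
CREATE TABLE ids (
    id integer UNIQUE   -- NULL here replaces the old -1 sentinel
);

CREATE TABLE obs (
    obs_id integer REFERENCES ids (id)
);

-- Present NULL back as -1 for code that expects the sentinel
CREATE VIEW ids_compat AS
    SELECT COALESCE(id, -1) AS id FROM ids;
```

Note that a NULL obs_id would also pass the foreign key, which may or may not match the intended semantics.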


Thanks,

  Brent Wood


Brent Wood
DBA/GIS consultant
NIWA, Wellington
New Zealand
NIWA is the trading name of the National Institute of Water & Atmospheric 
Research Ltd.



Re: [GENERAL] how to implement a foreign key type constraint against a not unique column

2009-01-21 Thread Richard Broersma
On Wed, Jan 21, 2009 at 12:53 PM, Brent Wood b.w...@niwa.co.nz wrote:

 I believe it is possible by using a table with nulls for the -1 values with a 
 unique index on it as the foreign key, then a view which uses case or 
 coalesce to present the nulls as -1, but this seems a cumbersome workaround.

This will work and yes it is a bit cumbersome but I don't think that
there is much else that can be done.

Another solution that is probably more cumbersome and ugly would be to
vertically partition your table and include all non -1 values in it.
Then use this table as the reference for your foreign key.  Then
create your own trigger to keep these two table in sync with each
other.

-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug



[GENERAL] A complex plproxy query

2009-01-21 Thread Igor Katson
This is a complex question, and I couldn't form it in a short and easy 
way, and I'm sorry for that.


First of all, let me introduce you to the DB (to frame the question), so 
you understand what I am talking about. The module looks like a 
social network: users have friends, which can be in different 
groups.


Also it is clustered with PLPROXY by user_id, so the user itself, and 
his friends list (the list of ID's) is always in the same DB, but the 
information about the friends is not (it is clustered through all the 
partitions). Here is a little sketch of the needed tables:


CREATE TABLE friend
(
  id bigint,
  user_id integer,
  friend_id integer,
  group_id bigint,
...
);
This table is a 'friend link' from one user to another, which can be 
marked as being in some 'group', and the backward link exists also (from 
the 2nd user to the 1st), which can possibly be in another 'group'.


CREATE TABLE user
(
 user_id integer,
 nickname text,
 -- lots of other info
);
This is just a user table.

Both of these are clustered by user_id. I need to form the following 
query, and it should be as fast as possible (here it is written as if the 
DB were not partitioned):

SELECT something FROM user u, friend f
WHERE u.user_id = f.friend_id
AND f.user_id = $1 (this is given as an argument)
AND f.group_id = $2

So to say: give me the list of friends (not only their IDs, but all the 
needed columns!) of a given individual, which are in a given group. That 
seems OK without plproxy, but with it, I can't imagine how I can 
form a nice query, or a function (or a set of plpgsql + plproxy 
functions), to do the job.


Thanks in advance and regards,
Igor Katson.






Re: [GENERAL] Automatic CRL reload

2009-01-21 Thread Bruce Momjian
Alvaro Herrera wrote:
 Andrej Podzimek wrote:
 
  The files server.key, server.crt, root.crt, and root.crl are only
  examined during server start; so you must restart the server for
  changes in them to take effect.
  (http://www.postgresql.org/docs/8.3/static/ssl-tcp.html)
 
  This is perfectly fine for server.key, server.crt and root.crt. These
  files change quite rarely. However, root.crl usually chages once a
  month (which is the default in OpenSSL) or even more often when
  necessary.
 
 I think the right solution here is to reload the CRL file on SIGHUP
 (reload).  Whoever changes the CRL file should send a signal.
 
 I've had that on my TODO list for a while.

Added to TODO:

Allow SSL CRL files to be re-read during configuration file reload,
rather than requiring a server restart

Unlike SSL CRT files, CRL (Certificate Revocation List) files are
updated frequently

* 
http://archives.postgresql.org/pgsql-general/2008-12/msg00832.php 

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [GENERAL] encoding of PostgreSQL messages

2009-01-21 Thread Bruce Momjian

Added to TODO:

Improve encoding of connection startup messages sent to the client

Currently some authentication error messages are sent in the server
encoding

* 
http://archives.postgresql.org/pgsql-general/2008-12/msg00801.php
* 
http://archives.postgresql.org/pgsql-general/2009-01/msg5.php 

---

Karsten Hilbert wrote:
 Bruce, et al,
 
 given the thread partially quoted below would this warrant a
 TODO item improve communication of encoding between client
 and server regarding early startup messages ?
 
 A very usable band-aid for 8.4 - short of a proper fix -
  would be the minimally invasive sending of messages in 7-bit
 English until server_encoding can be retrieved by the client
 by current means.
 
 Thanks,
 Karsten
 
 On Thu, Jan 01, 2009 at 08:33:56PM +0200, Peter Eisentraut wrote:
  Subject: Re: [GENERAL] encoding of PostgreSQL messages
  User-Agent: KMail/1.9.9
  
  On Wednesday 31 December 2008 20:23:47 Tom Lane wrote:
The proper fix is probably to include the client encoding in the
connection startup message.
  
   What of errors occurring before such an option could be applied?
  
  Connection errors are handled by the client, which knows the client 
  encoding.  
  If the setting of the client encoding would be one of the first things to 
  be 
  done on the server side, you would only have a handful of possible error 
  conditions left (e.g., setlocale failed, out of memory).  You could choose 
  to 
  report those in plain ASCII or send a special error code that the client 
  can 
  resolve.  Although I guess no one could fault us if "could not set 
  language" is reported untranslated. ;-)
  
   I think that ultimately it's necessary to accept that there will be some
   window during connection startup where sending plain ASCII (English)
   messages is the best recourse.
  
  Ultimately yes.  But we currently handle the client encoding quite late in 
  the 
  startup sequence so that many connection startup failure messages that are 
  of 
  interest to normal users would likely be affected.  So moving the client 
  encoding handling to the earliest possible phase would still be desirable.
  
 
 -- 
 GPG key ID E4071346 @ wwwkeys.pgp.net
 E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
 

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [GENERAL] [Plproxy-users] A complex plproxy query

2009-01-21 Thread Hannu Krosing
On Thu, 2009-01-22 at 02:33 +0300, Igor Katson wrote:

 So to say: give me the list of friends (not only their IDs, but all the 
 needed columns!) of a given individual which are in a given group. That 
 seems OK without plproxy, but when using it, I can't imagine how I can 
 form a nice query, or a function (or a set of plpgsql + plproxy 
 functions), to do the job.

You need to do it in two steps: first run a query on the partition the
user is in to get the list of friend ids, then run a second RUN ON ALL
query with

WHERE f.friend_id IN (list of ids from the first step) AND f.group_id = $2

to gather all the friend info in parallel.
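
A sketch of what those two steps could look like as PL/Proxy functions.
All names here (get_friend_ids, get_friends_in_group, friendships, users,
friend_id, group_id, and the cluster name 'usercluster') are hypothetical
placeholders for the poster's actual schema, and the partitioning key is
assumed to be a hash of the user id:

```sql
-- Step 1: runs only on the partition that holds the given user's rows.
CREATE FUNCTION get_friend_ids(i_user_id integer)
RETURNS SETOF integer AS $$
    CLUSTER 'usercluster';
    RUN ON hashtext(i_user_id::text);
    SELECT friend_id FROM friendships WHERE user_id = i_user_id;
$$ LANGUAGE plproxy;

-- Step 2: fans out to every partition in parallel and fetches the full
-- rows for those friends, filtered by group membership.
CREATE FUNCTION get_friends_in_group(i_friend_ids integer[], i_group_id integer)
RETURNS SETOF users AS $$
    CLUSTER 'usercluster';
    RUN ON ALL;
    SELECT u.* FROM users u
    WHERE u.id = ANY (i_friend_ids) AND u.group_id = i_group_id;
$$ LANGUAGE plproxy;
```

The caller (or a small plpgsql wrapper) collects the ids from the first
function into an array and passes it to the second.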

-- 
Hannu Krosing   http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability 
   Services, Consulting and Training




[GENERAL] Check if column is substring of another column

2009-01-21 Thread Keaton Adams
PostgreSQL 8.1 question:

I have two columns.
policyNumber contains a 12-13 character varchar string.
AllPolicyNumbersIncluded contains one or more 12-13 character varchar strings 
(policy numbers) separated by commas.

I want to check if policyNumber is contained in AllPolicyNumbersIncluded.

In SQL Server, the PATINDEX function returns the starting position of the first 
occurrence of a pattern in a specified expression, or zero if the pattern is 
not found, for all valid text and character data types, so something like this 
works:

SELECT me.policyNumber, me.CompanyName, me.Address, 
PolicyPrint.AllPolicyNumbersIncluded
FROM PolicyPrint INNER JOIN PolicyDetails me
ON (PolicyPrint.cicPolicyNumber = PolicyDetails.policyNumber
OR PATINDEX('%' + me.policyNumber + '%', cicPrint.AllPolicyNumbersIncluded) > 0 )

Is there a way to do this in a single SQL statement in PostgreSQL 8.1?

Thanks,

Keaton



Re: [GENERAL] Check if column is substring of another column

2009-01-21 Thread Rodrigo E. De León Plicet
On Wed, Jan 21, 2009 at 5:44 PM, Keaton Adams kad...@mxlogic.com wrote:
 Is there a way to do this in a single SQL statement in PostgreSQL 8.1?

SELECT
d.policyNumber
  , d.CompanyName
  , d.Address
  , p.AllPolicyNumbersIncluded
FROM PolicyPrint p
  INNER JOIN PolicyDetails d
ON (
  p.cicPolicyNumber = d.policyNumber
OR
  p.AllPolicyNumbersIncluded LIKE '%' || d.policyNumber || '%'
)

-- OR --

SELECT
d.policyNumber
  , d.CompanyName
  , d.Address
  , p.AllPolicyNumbersIncluded
FROM PolicyPrint p
  INNER JOIN PolicyDetails d
ON (
  p.cicPolicyNumber = d.policyNumber
OR
  d.policyNumber = ANY(string_to_array(p.AllPolicyNumbersIncluded, ','))
)
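
One caveat worth noting: the LIKE version is a pure substring test, so it
can false-positive when a 12-character policy number happens to be a prefix
of a 13-character one in the list, whereas the string_to_array/= ANY form
compares whole elements. A quick illustration of the difference, written in
Python purely to show the matching logic (the policy numbers are made up):

```python
# Comma-separated list containing only 13-character policy numbers.
included = "POLICY1234567,POLICY7654321"
# A 12-character policy number that is NOT in the list, but IS a
# prefix of the first element.
lookup = "POLICY123456"

# Equivalent of: AllPolicyNumbersIncluded LIKE '%' || policyNumber || '%'
like_style = lookup in included

# Equivalent of: policyNumber = ANY(string_to_array(..., ','))
any_style = lookup in included.split(",")

print(like_style)  # True  -- false positive: matches only as a prefix
print(any_style)   # False -- correct: not a whole element of the list
```

So if policy numbers of both lengths can occur, the string_to_array form
(or anchoring the commas into the pattern) is the safer choice.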



Re: [GENERAL] autovacuum daemon

2009-01-21 Thread Abdul Rahman
Dear All,

Thanks for the fruitful replies. I checked it by running ANALYZE in psql: 
first I updated 1 row in a table and got a certain number of dead rows as a 
result of ANALYZE. After a few minutes the number of dead rows became zero, 
which assured me that autovacuum is running in the background.

Now, kindly let me know the details of the solution sent by Ray, i.e.:

ps ax | grep postgres

Regards,
Abdul Rehman.


  

Re: [GENERAL] autovacuum daemon

2009-01-21 Thread Abdul Rahman
Thanks Ray,

Your solution of using the ps command is for Linux, but I am using WinXP. 
That is why it confused me.

Regards,
Abdul Rehman.
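
For what it's worth, the Windows counterpart of that ps pipeline is
tasklist (e.g. `tasklist | findstr postgres`). Alternatively, you can ask
the server itself through the statistics views; note this is a sketch that
assumes PostgreSQL 8.2 or later, since the last_autovacuum column does not
exist in 8.1:

```sql
-- When autovacuum last touched each table in the current database
-- (columns available from PostgreSQL 8.2 onward).
SELECT relname, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY last_autovacuum DESC;
```

A non-null last_autovacuum timestamp confirms the daemon is doing its job
without having to inspect the process list at all.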



  

Re: [GENERAL] Check if column is substring of another column

2009-01-21 Thread Richard Huxton
Keaton Adams wrote:
 PostgreSQL 8.1 question:
 
 I have two columns.
 policyNumber contains a 12-13 varchar string
 AllPolicyNumbersIncluded contains one or more 12-13 varchar strings (policy 
 nums) separated by commas

Rodrigo has given a direct answer, but you might want to consider either
an array of text for AllPolicyNumbersIncluded or, better still, a join to
another table. That will make your queries more natural.
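
A minimal sketch of that normalized layout (table and column names here are
made up for illustration, not taken from the poster's schema):

```sql
-- One row per included policy number, instead of a comma-separated string.
CREATE TABLE PolicyPrintNumbers (
    printId      integer     NOT NULL,  -- references PolicyPrint
    policyNumber varchar(13) NOT NULL,
    PRIMARY KEY (printId, policyNumber)
);

-- The membership test then becomes a plain, indexable join:
SELECT d.policyNumber, d.CompanyName, d.Address
FROM PolicyDetails d
JOIN PolicyPrintNumbers n ON n.policyNumber = d.policyNumber;
```

Besides reading more naturally, this lets the planner use an index on
policyNumber, which no '%...%' pattern match can do.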

-- 
  Richard Huxton
  Archonet Ltd
