Re: [GENERAL] Pains in upgrading to 8.3

2008-02-19 Thread Magnus Hagander
On Mon, Feb 18, 2008 at 06:35:11PM -0300, Alvaro Herrera wrote:
 Bruce Momjian wrote:
  Magnus Hagander wrote:
 
   For the case of upgrading, it wouldn't work. But there are certainly 
   other cases where it would help. Say from your central pgadmin console 
   administering 10 servers from 3 different major release trees :-(
 
 What's wrong with providing statically-linked pg_dump-8.2, pg_dump-8.3
 and so on, and asking the user which one to use (depending on the target
 server version)?

Other than the much-increased work in building things, probably nothing.
(The package would be noticeably larger as well, of course, but that
shouldn't be a big problem today).

//Magnus

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Auto incrementing primary keys

2008-02-19 Thread Gordon
On Feb 18, 1:14 pm, pgsql_user [EMAIL PROTECTED] wrote:
 On Feb 18, 6:08 pm, Paul Boddie [EMAIL PROTECTED] wrote:



  On 18 Feb, 13:36, django_user [EMAIL PROTECTED] wrote:

   How can I stop postgresql from incrementing the primary key value, so
   that even after many failed insert statements it gets the next id val?

  Auto-incrementing columns, typically implemented using the serial
  data type [1], employ sequences.

  From the manual:

  To avoid blocking of concurrent transactions that obtain numbers from
  the same sequence, a nextval operation is never rolled back; that is,
  once a value has been fetched it is considered used, even if the
  transaction that did the nextval later aborts. This means that aborted
  transactions may leave unused holes in the sequence of assigned
  values. setval operations are never rolled back, either.

 http://www.postgresql.org/docs/8.1/interactive/functions-sequence.html

  In other words, to permit a decent level of concurrency, PostgreSQL
  doesn't wait to see if a transaction succeeds with a value from a
  sequence before updating the sequence. If you want to reset a sequence
  so that it always uses the next unused value as determined by looking
  at the table, I suppose you could do something like this:

  select setval('mytable_id_seq', x) from (select max(id) as x from
  mytable) as y;

  But I doubt that you would want to do this too often in any system
  with any reasonable level of concurrent access to the table or the
  sequence concerned.

  Paul

  [1]http://www.postgresql.org/docs/8.1/interactive/datatype.html#DATATYPE...

 so wouldn't I run out of ids one day, if there are a lot of failed insert
 statements? Let's say for every successful insert there are 50
 unsuccessful inserts, so ids would be 1, 50, 100, and once I have
 thousands of rows, I will run out of IDs? Should I use bigserial
 instead?

 Thanks

In theory, yes, but the standard 4-byte integer can represent about 2
billion positive numbers, so even with a lot of failed inserts you're
probably not going to run out for years.
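
To make the gap behaviour concrete, here is a minimal sketch (the table name is made up) showing that a failed insert still consumes a sequence value:

```sql
-- Illustrative sketch: a serial column draws from a sequence, and the
-- sequence is NOT rolled back when the insert fails.
CREATE TABLE gap_demo (id serial PRIMARY KEY, v text NOT NULL);

INSERT INTO gap_demo (v) VALUES ('a');   -- gets id = 1
INSERT INTO gap_demo (v) VALUES (NULL);  -- fails (NOT NULL), but nextval() already ran
INSERT INTO gap_demo (v) VALUES ('b');   -- gets id = 3; the gap at 2 is permanent
```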



Re: [GENERAL] Pains in upgrading to 8.3

2008-02-19 Thread Dave Page
On Feb 19, 2008 8:48 AM, Magnus Hagander [EMAIL PROTECTED] wrote:
 On Mon, Feb 18, 2008 at 06:35:11PM -0300, Alvaro Herrera wrote:
  Bruce Momjian wrote:
   Magnus Hagander wrote:
 
For the case of upgrading, it wouldn't work. But there are certainly
other cases where it would help. Say from your central pgadmin console
administering 10 servers from 3 different major release trees :-(
 
  What's wrong with providing statically-linked pg_dump-8.2, pg_dump-8.3
  and so on, and asking the user which one to use (depending on the target
  server version)?

 Other than the much-increased work in building things, probably nothing.
 (The package would be noticeably larger as well, of course, but that
 shouldn't be a big problem today).

I suspect that building static versions of the utilities and retaining
the OpenSSL and Kerberos support would be nigh-on impossible (I've never
even managed to build my own dynamic version of Kerberos, which seems
to rely heavily on the build environment used within MIT).

In pgAdmin, bundling such utilities would be a big no-no. Imagine the
docs: pgAdmin supports SSL encryption and Kerberos authentication,
but if you wish to back up or restore your databases you'll need to turn
off those requirements in the server.

-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com
The Oracle-compatible database company



Re: [GENERAL] out-of-line (TOAST) storage ineffective when loading from dump?

2008-02-19 Thread Martijn van Oosterhout
On Tue, Feb 19, 2008 at 03:55:27PM +0600, Markus Bertheau wrote:
  The toast pages are stored in a separate table - see manual for
  details. There's a whole chapter (53.2) on this.
 
 Yes, but I assume that on disk the pages will be laid out sequentially
 - not intentionally so, of course. See below.

I should hope not. I don't think any modern filesystem works that way.
There's a reason why there's no defragmentation tool for many
filesystems: it's not an issue these days.

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




Re: [GENERAL] out-of-line (TOAST) storage ineffective when loading from dump?

2008-02-19 Thread Markus Bertheau
2008/2/19, Richard Huxton [EMAIL PROTECTED]:
 Markus Bertheau wrote:
  Afaics, TOAST was invented so that big attributes wouldn't be in the
  way when working with the other attributes.

 Actually, I think it was designed as a way of working around PG's 8KB
 block-size. That imposed a maximum row size of the same and, well, meant
 you couldn't have two 5KB text fields, for example.

Ah, ok. So what I'm talking about is a possible positive side effect of
out-of-line storage.

 The toast pages are stored in a separate table - see manual for
 details. There's a whole chapter (53.2) on this.

Yes, but I assume that on disk the pages will be laid out sequentially
- not intentionally so, of course. See below.

 I'm not sure what you mean by this. The page-cache will cache individual
 pages regardless of type afaik. A large data-load will probably mess up
 your cache for other processes. On the other hand, assuming you'll be
 building indexes etc. too then it's going to be in cache one way or another.

I'm loading a table with some short attributes and a large toastable attribute.
That means that for every main table heap page several toast table heap pages
are written. This happens through the buffer cache and the background writer,
so maybe the pages aren't written in the order in which they were created in
the buffer cache, but if they are, they end up on disk (assuming that the file
system is not fragmented) roughly like that:

main table heap page 1
toast table heap page 1
toast table heap page .
toast table heap page n
main table heap page 2
toast table heap page n+1
toast table heap page .
toast table heap page 2n

Significantly later, a sequential scan of the table has to be made; the
toastable attribute is not needed for the operation. The caches are cold or
otherwise occupied. If the granularity of caches that are nearer to the disk in
the cache hierarchy than the PG buffer cache is higher than the PG page size
(i.e. a cache unit is bigger than the PG page size), then every read of a main
table heap page will inescapably read some toast table heap pages into the
cache (whichever cache that may be).  If all the main table heap pages were
laid out adjacently on disk, they could be read faster and caches would be
polluted less.

Markus
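
If you want to check where the out-of-line data actually lives, the TOAST relation backing a table can be looked up in the catalogs. A sketch, assuming the table ('mytable' here is a placeholder) has toastable columns:

```sql
-- Show the TOAST table and the on-disk sizes of the main and TOAST heaps.
-- Assumes the table actually has a TOAST table (reltoastrelid <> 0).
SELECT reltoastrelid::regclass AS toast_table,
       pg_relation_size(oid)            AS main_heap_bytes,
       pg_relation_size(reltoastrelid)  AS toast_heap_bytes
FROM pg_class
WHERE relname = 'mytable';
```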



Re: [GENERAL] out-of-line (TOAST) storage ineffective when loading from dump?

2008-02-19 Thread Richard Huxton

Markus Bertheau wrote:

Afaics, TOAST was invented so that big attributes wouldn't be in the
way (of readahead, the buffer cache and so on) when working with the
other attributes. This is based on the assumption that the other
attributes are accessed more often than the whole contents of the big
attributes.


Actually, I think it was designed as a way of working around PG's 8KB
block-size. That imposed a maximum row size of the same and, well, meant
you couldn't have two 5KB text fields, for example.



Now I wonder how disk blocks are used when loading a dump with big
text data so that out-of-line storage is used. If disk block usage was
following this pattern:

heap page
toast heap page 1
toast heap page ..
toast heap page n


The toast pages are stored in a separate table - see manual for 
details. There's a whole chapter (53.2) on this.



If further the assumption is correct, that the granularity of the
lower level chaches is bigger than the PostgreSQL page size, then that
would mean that loading from a dump destroys the advantage of
out-of-line storage.

I haven't got any numbers to back this theory up.


I'm not sure what you mean by this. The page-cache will cache individual 
pages regardless of type afaik. A large data-load will probably mess up 
your cache for other processes. On the other hand, assuming you'll be 
building indexes etc. too then it's going to be in cache one way or another.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Analogue to SQL Server UniqueIdentifier?

2008-02-19 Thread Magnus Hagander
On Mon, Feb 18, 2008 at 05:01:22PM +, Dave Page wrote:
 On Feb 18, 2008 4:52 PM, Bill Moran [EMAIL PROTECTED] wrote:
  In response to [EMAIL PROTECTED] [EMAIL PROTECTED]:
 
 
   Hi
  
   My porting experiment has encountered the SQL Server UniqueIdentifier 
   problem. I can see one or two suggestions about this have been made over 
   the years but I'd like to try and stay close to the original. So:
  
   I'm wondering if I can use a combination of a domain 'hack' for syntactic
   compatibility and an externally implemented function to handle
   generation.
  
   More specifically, given a table defined thus:
  
   CREATE TABLE jazz(
   UUID UniqueIdentifier DEFAULT newIdentifier(),
   rootname VARCHAR(255),
   data_source VARCHAR(1024),
   date_created DATETIME DEFAULT GETDATE())
  
   1. Can I handle the UniqueIdentifier datatype via a domain that aliases 
   UniqueIdentifier to char(X) (for example) ? This seems to work fine for 
   the DATETIME datatype.
   2. Implement newIdentifier() in some extension DLL that simply calls 
   CoCreateGUID() ?
  
   or does uuid-ossp do this for me?
 
  I'm no expert on this topic, but since nobody else has responded ...
 
  I'm unsure why you would do anything other than install uuid-ossp.
  Anything else is going to be a hack, and uuid-ossp was created specifically
  to address this requirement.
 
 Lack of support for Windows, which it sounds like the OP might be running?

That's something that's been on my agenda for a while. There are certainly
UUID generation functions available on Windows - at least for some of the
cases supported by uuid-ossp. If I were to write the same functions for
that one, where would people prefer that to go - in the uuid-ossp module
even though that's actually not correct (since it wouldn't be using ossp)
or a separate module uuid-win32?

//Magnus
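
For reference, once contrib/uuid-ossp is available the generation functions are ordinary SQL calls against 8.3's native uuid type; a sketch (table and column names are made up):

```sql
-- Requires the contrib/uuid-ossp module to be installed in the database.
CREATE TABLE jazz (
    uuid uuid DEFAULT uuid_generate_v4(),   -- random (version 4) UUID
    rootname varchar(255),
    date_created timestamp DEFAULT now()
);

SELECT uuid_generate_v1();  -- timestamp/MAC-based (version 1), also available
```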



Re: [GENERAL] out-of-line (TOAST) storage ineffective when loading from dump?

2008-02-19 Thread Richard Huxton

Markus Bertheau wrote:

2008/2/19, Richard Huxton [EMAIL PROTECTED]:
I'm loading a table with some short attributes and a large toastable attribute.
That means that for every main table heap page several toast table heap pages
are written. This happens through the buffer cache and the background writer,
so maybe the pages aren't written in the order in which they were created in
the buffer cache, but if they are, they end up on disk (assuming that the file
system is not fragmented) roughly like that:

main table heap page 1
toast table heap page 1
toast table heap page .
toast table heap page n
main table heap page 2
toast table heap page n+1
toast table heap page .
toast table heap page 2n


Well, that's assuming:
1. You're not re-using space from previously deleted/updated rows.
2. You've not got a RAID array striping writes over multiple disks
3. The underlying filesystem + buffering isn't doing anything too clever.


Significantly later a sequential scan of the table has to be made, the
toastable attribute is not needed for the operation. The caches are cold or
otherwise occupied. 


OK

 If the granularity of caches that are nearer to the disk in

the cache hierarchy than the PG buffer cache is higher than the PG page size
(i.e. a cache unit is bigger than the PG page size), then every read of a main
table heap page will inescapably read some toast table heap pages into the
cache (whichever cache that may be).  


I think disk blocks on your more common file-systems are 4KB by default 
(ext2/3 and ntfs for example). I'm not aware of any default disk-block 
sizes more than the 8KB page-size of PG. Of course, the OS may read 
ahead if it sees you scanning, but it will do that on a file basis.


 If all the main table heap pages were

laid out adjacently on disk, they could be read faster and caches would be
polluted less.


True enough. The key word there, though, is "if" - it means that PG is
trying to out-think the filesystem, OS and hardware.


It should be easy enough to test on a particular system though.
1. pg_restore a sample table with TOASTed data.
2. pg_restore the same data but no TOASTed data.
3. cluster the table with TOASTed data (which should force a rewrite of 
the whole table but not its TOASTed data)


If the timing of various selects differ hugely then there's something 
worth investigating.


--
  Richard Huxton
  Archonet Ltd



[GENERAL] shared buffer hash table corrupted

2008-02-19 Thread Ashish Karalkar
Hello List members,

I have just upgraded the PostgreSQL server from 8.2.4 to 8.3 on SUSE 10.3 64 bit.
While inspecting the log I see this error:

ERROR: shared buffer hash table corrupted

Can anybody please help me in getting into details of it?

With regards,
Ashish...







[GENERAL] MS library files

2008-02-19 Thread Josue Gomes
Hi,

I'm just wondering why lib\ms\*.lib files are not part of
binaries-no-installer distribution.
Is there any specific reason they're not included there?

regards,
josue gomes



[GENERAL] greylisting no longer working?

2008-02-19 Thread Alvaro Herrera
Hi,

It seems the greylisting setup stopped quarantining emails?  I'm getting
a lot more spam in pgsql-hackers and the other lists I moderate, and
nothing in the headers suggests that they were greylisted at all.

Did something happen?

-- 
Alvaro Herrera http://www.flickr.com/photos/alvherre/
The problem with the future is that it keeps turning into the present
(Hobbes)



Re: [GENERAL] questions about very large table and partitioning

2008-02-19 Thread [EMAIL PROTECTED]

Hi,

Thank you guys.


Enrico Sirola wrote:


Il giorno 18/feb/08, alle ore 17:37, [EMAIL PROTECTED] ha scritto:

1) PostgreSQL only support partition by inheritance, and rules have to
be created for each child table, this will result *a lot of* rules if
the number of child tables is large.

Are there some smart ways to avoid this kind of mess?


you can obtain the same result using a trigger, but you must replace the
trigger function every time you add/remove a partition. The trigger also
has an additional feature: you can use COPY IN on the parent table,
since COPY bypasses the rules subsystem but does fire triggers.



2) I have added check constraints for child tables. According to the
documents, query performance can be improved dramatically for certain
kinds of queries. Does this mean that the query can be improved only if
the query contains the constrained column? What will happen if the
constrained column doesn't appear in the WHERE clause?


if the constraint doesn't appear in the where clause, then the query is
executed against all partitions
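
Concretely, constraint exclusion can only prune children whose CHECK constraint contradicts the WHERE clause; a sketch with hypothetical table and column names:

```sql
-- Child carries a CHECK constraint the planner can compare against.
CREATE TABLE master_y2008m02 (
    CHECK (logdate >= DATE '2008-02-01' AND logdate < DATE '2008-03-01')
) INHERITS (master_table);

SET constraint_exclusion = on;

-- Prunes to the February child: the WHERE clause mentions logdate.
EXPLAIN SELECT * FROM master_table WHERE logdate = DATE '2008-02-19';

-- No constrained column in the WHERE clause: every child is scanned.
EXPLAIN SELECT * FROM master_table WHERE data_value = 'x';
```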



3) Is partition by inheritance the only appropriate way to organize very
large table in PostgreSQL ?


don't know. I think partitioning is useful when you perform partition
rotation, e.g. when you periodically delete old rows and insert new
ones (think about log files). In this case you should periodically
perform vacuums to ensure that the dead rows get recycled, otherwise the
DB will continue to grow. Partitions help a lot in this case (so does
autovacuum).
I'd try to tune autovacuum for your workload first, and only then
try to partition the tables.
There has been some discussion on partitioning on this list in the past.
Try also to take a look at the archives for last June or July.

Bye,
e.




I have tried to do partitioning with inheritance and rules. First, I created
the master table and many child tables, and also the rules for insert, delete and
update. Then I did some select, insert, delete and update operations on the
master to test if it works. The insert and delete work very well, but
the update operation seems never to return. I tried several times, and couldn't wait
for it to return and killed the process.


I tried the commands manually, and it seemed very weird.
The delete command:
DELETE FROM master_table WHERE id='' AND data_type='aaa'
and select command with the same condition expression:
SELECT * FROM master_table WHERE id='' AND data_type='aaa'
both return without delay.
But the update command with the same condition expression:
UPDATE master_table set data_value='somevalue' WHERE id='' AND 
data_type='aaa'
didn't return in 5 minutes.

Every table has an index and a constraint on column id. I have already set
constraint_exclusion=true.


Why the update command runs so slow ?

Thanks a lot.



Re: [GENERAL] [pgsql-www] greylisting no longer working?

2008-02-19 Thread User Scrappy


I'm going to have to look into it ... Dave just reported that also ... as
far as I know, everything should still be working, unless I somehow
disabled it the other day when I was in on the bayes stuff (not sure how,
it's a screen I never visit) ...


Will investigate ...


On Tue, 19 Feb 2008, Alvaro Herrera wrote:


Hi,

It seems the greylisting setup stopped quarantining emails?  I'm getting
a lot more spam in pgsql-hackers and the other lists I moderate, and
nothing in the headers suggests that they were greylisted at all.

Did something happen?

--
Alvaro Herrera http://www.flickr.com/photos/alvherre/
The problem with the future is that it keeps turning into the present
(Hobbes)






Re: [GENERAL] MS library files

2008-02-19 Thread Dave Page
On Feb 19, 2008 1:16 PM, Josue Gomes [EMAIL PROTECTED] wrote:
 Hi,

 I'm just wondering why lib\ms\*.lib files are not part of
 binaries-no-installer distribution.
 Is there any specific reason they're not included there?

Everything is Microsoft-compiled now, so no need for special versions
of the import libraries any more.


-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com
The Oracle-compatible database company



Re: [GENERAL] dynamic crosstab

2008-02-19 Thread Alvaro Herrera
Joe Conway wrote:
 Erik Jones wrote:
 See how postgres handles filling in the NULLs for you?  What you'd really
 want to do with this would be to define some functions for setting and
 getting a person's answers to a given question or set of questions, so
 that you could implement some kind of data integrity with regard to
 question ids and indices into the answers arrays; e.g., in the example
 above you'd want to prevent an entry at index 7 when there is no entry
 in the questions table for question_id=7.

 It occurs to me that it shouldn't be terribly difficult to make an  
 alternate version of crosstab() that returns an array rather than tuples  
 (back when crosstab() was first written, Postgres didn't support NULL  
 array elements). Is this worth considering for 8.4?

How about returning generic rows?  Is that possible?  It would be really
neat if you didn't have to specify the return type in the query that
invoked the crosstab.

I keep wondering if there's a way, defined by the standard, to pivot
(transpose) a result set.
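
For context, this is what the existing contrib/tablefunc interface requires of the caller; a sketch with made-up table and column names:

```sql
-- The output row type must be spelled out in the query itself, which is
-- the limitation being discussed.
SELECT *
FROM crosstab(
    'SELECT person_id, question_id, answer
     FROM answers
     ORDER BY 1, 2'
) AS ct(person_id int, q1 text, q2 text, q3 text);
```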

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] dynamic crosstab

2008-02-19 Thread Joe Conway

Alvaro Herrera wrote:

Joe Conway wrote:
It occurs to me that it shouldn't be terribly difficult to make an  
alternate version of crosstab() that returns an array rather than tuples  
(back when crosstab() was first written, Postgres didn't support NULL  
array elements). Is this worth considering for 8.4?


How about returning generic rows?  Is that possible?  It would be really
neat if you didn't have to specify the return type in the query that
invoked the crosstab.


Yeah, I was thinking about that as well. I'm not sure how difficult it 
would be. Hopefully I'll be able to find some time to play with it in 
the next month or so.



I keep wondering if there's a way to pivot (transpose) a result set
defined by the standard.


I've looked at SQL2003 and couldn't find anything, but then again I 
could have easily missed it.


Joe



Re: [GENERAL] Auto incrementing primary keys

2008-02-19 Thread Lew

django_user wrote:

How can I stop postgresql from incrementing the primary key value, so
that even after many failed insert statements it gets the next id val?

,,,

so wouldn't I run out of ids one day, if there are a lot of failed insert
statements? Let's say for every successful insert there are 50
unsuccessful inserts, so ids would be 1, 50, 100, and once I have
thousands of rows, I will run out of IDs? Should I use bigserial
instead?


Gordon wrote:

In theory, yes, but the standard 4-byte integer can represent about 2
billion positive numbers, so even with a lot of failed inserts you're
probably not going to run out for years.


First of all, sequence-generated keys are artificial (surrogate) keys.  They should
carry no meaning whatsoever in the data model or business logic.  If they do
carry meaning, then your code should manage the values rather than using an
automatic sequence.


Also, if you (after careful review of your analysis) still have restrictions 
that are not part of the auto-generated sequence mechanism, such as leaving no 
gaps in the sequence, you should implement your own sequence instead of 
wishing the automatic mechanism were different from what it is.


Chances are good that the auto-genned sequence will work for you, if you think 
about it.  If not, would you share why you anticipate that gaps will cause 
trouble for you?


--
Lew



Re: [GENERAL] Using sequences in SQL text files

2008-02-19 Thread Lone Wolf
Can I have something like this in my SQL text file:
(items_seq.nextval(), '', '...')
?

Raymond O'Donnell [EMAIL PROTECTED] wrote:

On 19/02/2008 15:43, HHB wrote:

 How to use a sequence in such text files?

You're looking for the nextval() function - look it up in the docs.

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



Deep into that darkness peering, long I stood there, wondering, fearing, 
Doubting, dreaming dreams no mortal ever dreamed before.
  E.A Poe
  
   



   

Re: [GENERAL] Using sequences in SQL text files

2008-02-19 Thread Richard Huxton

HHB wrote:

Hi.
I have sequence for each table in my database.
In order to populate same data in the database, I created some SQL text
files.
---
insert into categories values (id value from sequence, '..', '...');
insert into books values (id value from sequence, '..', '...', '..', fk to
category id);


If they are of SERIAL type then they'll use their sequence by default:

INSERT INTO my_table (id, a, b) VALUES (DEFAULT, 'abc', 123);

Otherwise you can use the currval/nextval() functions:

INSERT INTO my_table (id, a, b) VALUES (nextval('sequence_name'), 'abc',
123);


--
  Richard Huxton
  Archonet Ltd



[GENERAL] Alter Domain Type

2008-02-19 Thread George Weaver

Hi Everyone,

Many years ago I created a domain with a char(4) datatype.

Now in my wisdom I would like to change this to a text datatype, but I can't 
see any way of altering the type of a domain.


I have experimented with backing up the database and manually editing the 
dump file to change the type, and then recreating the database.  This seems 
to work without problem.


Are there any hidden dangers of doing this?

Is there an alternate way of altering the type without backing up and 
recreating the database?


I'm working with version 8.2.3.

Thanks for your help.

George 
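
As an alternative to editing the dump file: 8.2 has no ALTER DOMAIN ... TYPE, but each column using the domain can be moved to a replacement domain with ALTER TABLE. A sketch (all names here are placeholders):

```sql
-- Create the replacement domain, repoint every dependent column, then
-- drop the old char(4) domain once nothing uses it.
CREATE DOMAIN my_code_text AS text;

ALTER TABLE items ALTER COLUMN code TYPE my_code_text;

DROP DOMAIN my_code_char4;
```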






Re: [GENERAL] questions about very large table and partitioning

2008-02-19 Thread Erik Jones


On Feb 19, 2008, at 7:46 AM, [EMAIL PROTECTED] wrote:

I have tried to do partitioning with inheritance and rules. First, I
created the master table and many child tables, and also the rules for
insert, delete and update. Then I did some select, insert, delete
and update operations on the master to test if it works. The
insert and delete work very well, but the update operation seems
never to return. I tried several times, and couldn't wait for it to
return and killed the process.


I tried the commands manually, and it seemed very weird.
The delete command:
DELETE FROM master_table WHERE id='' AND data_type='aaa'
and select command with the same condition expression:
SELECT * FROM master_table WHERE id='' AND data_type='aaa'
both return without delay.
But the update command with the same condition expression:
UPDATE master_table set data_value='somevalue' WHERE id='' AND  
data_type='aaa'

didn't return in 5 minutes.

Every table has index and constraint on column id. I have already  
set constraint_exclusion=true.


Why the update command runs so slow ?


You'll need to post some of your tables' schemas along with the  
results of running your queries through EXPLAIN or EXPLAIN ANALYZE  
(better) in order for anyone to answer that.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate and market in style.
Visit us online at http://www.myemma.com






Re: [GENERAL] out-of-line (TOAST) storage ineffective when loading from dump?

2008-02-19 Thread Markus Bertheau
2008/2/19, Richard Huxton [EMAIL PROTECTED]:
 Markus Bertheau wrote:
  2008/2/19, Richard Huxton [EMAIL PROTECTED]:
  I'm loading a table with some short attributes and a large toastable 
  attribute.
  That means that for every main table heap page several toast table heap 
  pages
  are written. This happens through the buffer cache and the background 
  writer,
  so maybe the pages aren't written in the order in which they were created in
  the buffer cache, but if they are, they end up on disk (assuming that the 
  file
  system is not fragmented) roughly like that:
 
  main table heap page 1
  toast table heap page 1
  toast table heap page .
  toast table heap page n
  main table heap page 2
  toast table heap page n+1
  toast table heap page .
  toast table heap page 2n

 Well, that's assuming:
 1. You're not re-using space from previously deleted/updated rows.
 2. You've not got a RAID array striping writes over multiple disks
 3. The underlying filesystem + buffering isn't doing anything too clever.

 I think disk blocks on your more common file-systems are 4KB by default
 (ext2/3 and ntfs for example). I'm not aware of any default disk-block
 sizes more than the 8KB page-size of PG. Of course, the OS may read
 ahead if it sees you scanning, but it will do that on a file basis.

Ok, turns out that I was largely unaware of how smart file systems are
nowadays. The whole story looks like a good example of how PostgreSQL relies on
the file system and its caches for performant operation.

Thanks

Markus Bertheau



[GENERAL] Using sequences in SQL text files

2008-02-19 Thread HHB

Hi.
I have sequence for each table in my database.
In order to populate same data in the database, I created some SQL text
files.
---
insert into categories values (id value from sequence, '..', '...');
insert into books values (id value from sequence, '..', '...', '..', fk to
category id);
---
Is it possible to do so?
How to use a sequence in such text files?
Thanks.
-- 
View this message in context: 
http://www.nabble.com/Using-sequences-in-SQL-text-files-tp15561422p15561422.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Using sequences in SQL text files

2008-02-19 Thread Raymond O'Donnell

On 19/02/2008 15:43, HHB wrote:


How to use a sequence in such text files?


You're looking for the nextval() function - look it up in the docs.

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



Re: [GENERAL] Analogue to SQL Server UniqueIdentifier?

2008-02-19 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 Lack of support for Windows, which it sounds like the OP might be running?

 That's something that's been on my agenda for a while. There are certainly
 UUID generation functions available on Windows - at least for some of the
 cases supported by uuid-ossp. If I were to write the same functions for
 that one, where would people prefer that to go - in the uuid-ossp module
 even though that's actually not correct (since it wouldn't be using ossp)
 or a separate module uuid-win32?

The latter is *completely* unacceptable.  The entire point here is to
not expose any differences at the SQL level.

Why can't ossp be used --- is it impossible to port to Windows?

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Using sequences in SQL text files

2008-02-19 Thread brian

HHB wrote:

Hi.
I have sequence for each table in my database.
In order to populate some sample data in the database, I created some SQL text
files.
---
insert into categories values (id value from sequence, '..', '...');
insert into books values (id value from sequence, '..', '...', '..', fk to
category id);
---
Is it possible to do so?
How to use a sequence in such text files?
Thanks.


I think it depends. If this is a new database then you can leave off the 
SERIAL id values and let the sequence do its thing. To insert the 
foreign key into books you can use currval() like so:


-- don't insert an id here
INSERT INTO categories (...) VALUES ('..', '..', ...);

--
INSERT INTO books (category_id, ...) VALUES 
(CAST(currval('categories_id_seq') AS INT), '..', ...);
INSERT INTO books (category_id, ...) VALUES 
(CAST(currval('categories_id_seq') AS INT), '..', ...);



INSERT INTO categories (...) VALUES ('..', '..', ...);
INSERT INTO books (category_id, ...) VALUES 
(CAST(currval('categories_id_seq') AS INT), '..', ...);

...

If the data is from a dump (and so the sequence IDs--and foreign key 
relations--already exist) you'll need to use setval() afterwards to 
reset where the sequences should begin.


After all of your inserts (this time with the existing IDs):

SELECT setval('books_id_seq', max(id)) FROM books;
SELECT setval('categories_id_seq', max(id)) FROM categories;

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] dynamic crosstab

2008-02-19 Thread David Fetter
On Tue, Feb 19, 2008 at 11:56:08AM -0300, Alvaro Herrera wrote:
 Joe Conway wrote:
  Erik Jones wrote:
  See how postgres handles filling the NULLs for you?  What you'd
  really  want to do with this would be to define some functions
  for setting and  getting a person's answers to a given question
  or set of questions so  that you could implement some kind of
  data integrity with regards to  question ids and indices into the
  answers arrays such as in the example above you'd want to prevent
  an entry at index 7 when there is no entry  in the questions
  table for question_id=7.
 
  It occurs to me that it shouldn't be terribly difficult to make an
  alternate version of crosstab() that returns an array rather than
  tuples  (back when crosstab() was first written, Postgres didn't
  support NULL  array elements). Is this worth considering for 8.4?
 
 How about returning generic rows?  Is that possible?

One hack I've used in the past to get those is serializing the rows:
XML, YAML and most recently JSON.

 It would be really neat if you didn't have to specify the return
 type in the query that invoked the crosstab.

It would be handy :)

Cheers,
David.
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: [EMAIL PROTECTED]

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] dynamic crosstab

2008-02-19 Thread Webb Sprague
   It occurs to me that it shouldn't be terribly difficult to make an
   alternate version of crosstab() that returns an array rather than
   tuples  (back when crosstab() was first written, Postgres didn't
   support NULL  array elements). Is this worth considering for 8.4?
 
  How about returning generic rows?  Is that possible?

 One hack I've used in the past to get those is serializing the rows:
 XML, YAML and most recently JSON.

  It would be really neat if you didn't have to specify the return
  type in the query that invoked the crosstab.

 It would be handy :)

+1

What about (for a 2-dim crosstab anyway) taking a table and two column
names to group by, and returning the following: a 1-d array with the
column names, a 1-d array with the row names, and a 2-d array with the
cell values? A function to take these three arrays and make CSV-readable
text would be great; also a function to explode the arrays into a table
(like an array_accum inverse), but this would take a type or something.

Is this what everyone means anyway?
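For context, contrib's existing crosstab() forces the caller to spell
out the output row type, which is the part these proposals would remove.
A sketch (table and column names are hypothetical):

```sql
SELECT * FROM crosstab(
  'SELECT person_id, question_id, answer
     FROM answers ORDER BY 1, 2'
) AS ct(person_id int, q1 text, q2 text, q3 text);
```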

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] MS library files

2008-02-19 Thread Dave Page
On Feb 19, 2008 6:33 PM, Josue Gomes [EMAIL PROTECTED] wrote:
 On 2/19/08, Dave Page [EMAIL PROTECTED] wrote:
  On Feb 19, 2008 1:16 PM, Josue Gomes [EMAIL PROTECTED] wrote:
   Hi,
  
   I'm just wondering why lib\ms\*.lib files are not part of
   binaries-no-installer distribution.
   Is there any specific reason they're not included there?
 
  Everything is Microsoft-compiled now, so no need for special versions
  of the import libraries any more.
 

 Are you meaning that we can use .a files instead?


Sorry, I just realised you said you were using the binary-only distro.
In 8.0 through 8.2, we built with Mingw and bundled mingw libraries
with the installer and binary-only distros. In addition, the installer
had the Microsoft compatible libraries, but they weren't included in
the binary-only package.

From 8.3, the entire server is built with VC++. We ship the Microsoft
compatible libraries in both the binary-only and installer distros. We
do not ship any mingw libraries any more.

-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com
The Oracle-compatible database company

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] initdb problem with Windows Installer for PostgreSQL 8.2.4

2008-02-19 Thread brrCv
It turned out that NT Authority\Authenticated Users and NT Authority
\Interactive had been removed from the Users group on the machine that
was getting the initdb permission error.  Added these back to the
Users group and the install was successful.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Analogue to SQL Server UniqueIdentifier?

2008-02-19 Thread Magnus Hagander

On Tue, 2008-02-19 at 11:07 -0500, Tom Lane wrote:
 Magnus Hagander [EMAIL PROTECTED] writes:
  Lack of support for Windows, which it sounds like the OP might be running?
 
  That's something that's been on my agenda for a while. There are certainly
  UUID generation functions available on Windows - at least for some of the
  cases supported by uuid-ossp. If I were to write the same functions for
  that one, where would people prefer that to go - in the uuid-ossp module
  even though that's actually not correct (since it wouldn't be using ossp)
  or a separate module uuid-win32?
 
 The latter is *completely* unacceptable.  The entire point here is to
 not expose any differences at the SQL level.
 
 Why can't ossp be used --- is it impossible to port to Windows?

I haven't looked into the details - it's possible that it could be
ported to Windows. But that would be Yet Another Dependency needed to
build and run pg... So I'd like to avoid it if possible.

//Magnus

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Analogue to SQL Server UniqueIdentifier?

2008-02-19 Thread Alvaro Herrera
Magnus Hagander wrote:
 
 On Tue, 2008-02-19 at 11:07 -0500, Tom Lane wrote:

  Why can't ossp be used --- is it impossible to port to Windows?
 
 I haven't looked into the details - it's possible that it could be
 ported to Windows. But that would be Yet Another Dependency needed to
 build and run pg... So I'd like to avoid it if possible.

I think it's messy enough to port that it would make sense to create a
separate Windows library with the same interface.

The problem I see with porting it is that the author crammed too many
things in the same package, and we don't have any interest in porting
most of the stuff only to get something that we can get more easily by
hooking into Windows native calls.

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] uninstalling tsearch2 error: gin_tsvector_ops does not exist for access method gin

2008-02-19 Thread [EMAIL PROTECTED]
hi
when i try to uninstall tsearch2 i get this error,
my postgres version is 8.2.5
how to fix this?
thanks a lot!



/usr/local/pgsql/bin/psql -U postgres -h localhost -f
/usr/local/pgsql/share/contrib/uninstall_tsearch2.sql
BEGIN
psql:/usr/local/pgsql/share/contrib/uninstall_tsearch2.sql:8: ERROR:
operator class gin_tsvector_ops does not exist for access method
gin
psql:/usr/local/pgsql/share/contrib/uninstall_tsearch2.sql:10: ERROR:
current transaction is aborted, commands ignored until end of
transaction block
[... the same "current transaction is aborted" error repeats for every
remaining statement in the script, through line 48 ...]

Re: [GENERAL] uninstalling tsearch2 error: gin_tsvector_ops does not exist for access method gin

2008-02-19 Thread Tom Lane
[EMAIL PROTECTED] [EMAIL PROTECTED] writes:
 when i try to uninstall tsearch2 i get this error,

Hmm, maybe you originally put tsearch2 into some other schema than
public?  If so, try setting search_path to point to that schema
before you run the uninstall script.

For that matter, are you sure tsearch2 actually is installed in
this database?  The behavior you show is indistinguishable from
what would happen if it isn't.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] ERROR: relation with OID 1322527 does not exist

2008-02-19 Thread Geoffrey
We've just started seeing these errors.  Research I've done seems to 
indicate that it's related to temp tables.  Question is, we didn't start 
seeing these errors until we started using slony to replicate our data.


The errors only showed up shortly after the initial replication of the 
data was complete.  That is, once the replication server 'caught up' 
with the production server.


I posted to the slony list about this issue, but didn't get any bites.

--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] How to make update rapidly?

2008-02-19 Thread Webb Sprague
Post the table, the query, and the explain output, and then we can help you.

On Feb 19, 2008 7:38 PM, hewei [EMAIL PROTECTED] wrote:
 Hi, everybody;
I have a table containing 100,000 rows, with an integer primary key.
   Now I need to execute SQL commands like "update .. where id=*" (id
 is the primary key).
   I expect to execute 1200-1600 such commands per second (1200-1600/s).
In testing, when the ids in the commands increase monotonically, I can
 reach that speed (1600/s);
   but when the ids arrive in random order, the speed drops to just 100/s.
   What can I do? Can you help me?


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] How to make update rapidly?

2008-02-19 Thread hewei
table:
CREATE TABLE price (
  "timestamp" timestamp NULL,
  id          numeric(5,0)  NOT NULL,
  price       numeric(10,3) NULL,
  primary key (id)
);
sql:
update price set price=* where id=*;


On Feb 20, 2008 11:56 AM, Webb Sprague [EMAIL PROTECTED] wrote:

 Post the table, the query, and the explain output, and then we can help
 you.

 On Feb 19, 2008 7:38 PM, hewei [EMAIL PROTECTED] wrote:
  Hi, everybody;
 I have a table containing 100,000 rows, with an integer primary key.
   Now I need to execute SQL commands like "update .. where id=*" (id
  is the primary key).
   I expect to execute 1200-1600 such commands per second (1200-1600/s).
 In testing, when the ids in the commands increase monotonically, I can
 reach that speed (1600/s);
   but when the ids arrive in random order, the speed drops to just 100/s.
   What can I do? Can you help me?
 



[GENERAL] Regex query not using index

2008-02-19 Thread Postgres User
I'm running a simple query on 8.2.  With this syntax, Explain indicates
that the index is scanned:
select * from eod where name = 'AA'

However, when I change the query to use simple regex:
select * from eod where name ~ 'AA'

now Explain indicates a seq scan:
Index Scan using equity_eod_symbol_idx on equity_eod  (cost=0.00..8.27
rows=1 width=149)
Index Cond: ((symbol)::text = 'AA'::text)

Is there any way to 'encourage' Postgres to hit the index when using
regex?  Do I need to create a functional index or something?
Without the index in play, I really can't use regex on any of my larger tables.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] How to make update rapidly?

2008-02-19 Thread hewei
Hi, everybody;
   I have a table containing 100,000 rows, with an integer primary key.
  Now I need to execute SQL commands like "update .. where id=*" (id
is the primary key).
  I expect to execute 1200-1600 such commands per second (1200-1600/s).
  In testing, when the ids in the commands increase monotonically, I can
reach that speed (1600/s);
  but when the ids arrive in random order, the speed drops to just 100/s.
  What can I do? Can you help me?


Re: [GENERAL] How to make update rapidly?

2008-02-19 Thread Tom Lane
hewei [EMAIL PROTECTED] writes:
   id numeric(5,0)  NOT NULL,

Don't use NUMERIC where INTEGER would do ...

regards, tom lane
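A sketch of the suggested change, using the table from this thread
(whether this alone reaches 1600 updates/s depends on hardware and
configuration):

```sql
-- An integer key avoids numeric comparison overhead on every index probe:
ALTER TABLE price ALTER COLUMN id TYPE integer;

-- Preparing the statement once also avoids re-parsing and re-planning
-- each of the 1200+ updates per second:
PREPARE upd(numeric, integer) AS
  UPDATE price SET price = $1 WHERE id = $2;
EXECUTE upd(42.5, 123);
```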

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Regex query not using index

2008-02-19 Thread Erik Jones

On Feb 19, 2008, at 9:32 PM, Postgres User wrote:


I'm running a simple query on 8.2.  With this syntax, Explain indicates
that the index is scanned:
select * from eod where name = 'AA'

However, when I change the query to use simple regex:
select * from eod where name ~ 'AA'

now Explain indicates a seq scan:
Index Scan using equity_eod_symbol_idx on equity_eod  (cost=0.00..8.27
rows=1 width=149)
Index Cond: ((symbol)::text = 'AA'::text)

Is there any way to 'encourage' Postgres to hit the index when using
regex?  Do I need to create a functional index or something?
Without the index in play, I really can't use regex on any of my  
larger tables.


You need it to be anchored:

select * from eod where name ~ '^AA';

If you're looking to be able to use indexes for searches within a  
string then, for 8.2, you'll need to check out tsearch2.
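One caveat worth adding (not from the original mail): if the database
was initialized with a locale other than C, a plain btree index can't
serve left-anchored pattern matches at all; you'd need an index built
with the text_pattern_ops operator class (index name hypothetical):

```sql
-- Allows btree index use for left-anchored regex/LIKE matches
-- in a non-C locale:
CREATE INDEX eod_name_pattern_idx ON eod (name text_pattern_ops);
```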


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL]

2008-02-19 Thread serafin segador
 
hello list,
 
i am a newbie in postgresql.
 
i ported a small application to postgresql 8.2 originally written to store data 
in maxdb.  since stored procedures are not supported in postgresql and since 
postgresql functions cannot return cursor to the calling applications, i 
rewrote the maxdb stored procedures into functions that store rows in temp 
tables and return the temp table name to the application ( saves on coding the 
select string in the application) , and retrieve the rows from the temp table 
within the same session.  this i can no longer do after upgrading to 8.3.  is 
there any thing i can do to restore this functionality i can get in 8.2, like 
configuration tweaking, apart from recoding the select string in the 
application.
 
regards and any help is highly appreciated,
 
raffy segador
   

Re: [GENERAL] postgresql book - practical or something newer?

2008-02-19 Thread Robert Treat
On Monday 04 February 2008 10:48, vincent wrote:
  Christopher Browne wrote:
 
  Personally I'm surprised that the last couple responses seem to center
  around not being able to make much money off of it. I agree that it
  would require some time investment, but so did building PG in the first
  place. Countless people have already sacrificed hours upon hours of
  their time with no return on their investment except pride in their work
  and a better overall product for everybody to use. I'm not a talented
  enough programmer to contribute to the code, but in this way I can do
  something to give back to the pg community.
 
  --
  Tom Hart

 +1

 It seems there's a stalemate, apparently PgSQL needs to be more popular
 before authors want to write for it, and the public doesn't want to commit
 to a database that has only a handful of books available.


Just to clarify, the market needs to expand to get publishers on board, not 
authors. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Regex query not using index

2008-02-19 Thread Postgres User
Thanks, my dumb mistake.
I need to perform the equivalent of a WHERE clause OR expression using
regex to match exact strings.

_

this example hits the index:
select * from eod where name ~ '^BA$'

but when I try to add another possible value to the regex, it does a row scan:
select * from eod where name ~ '^BA$|^AA$'

both of these statements return the right results, but the 2nd ignores
the index even though both values are left-anchored.

any workaround? this behavior doesn't seem to make sense

On Feb 19, 2008 8:45 PM, Erik Jones [EMAIL PROTECTED] wrote:

 On Feb 19, 2008, at 9:32 PM, Postgres User wrote:

  I'm running a simple query on 8.2.  With this syntax, Explain indicates
  that the index is scanned:
  select * from eod where name = 'AA'
 
  However, when I change the query to use simple regex:
  select * from eod where name ~ 'AA'
 
  now Explain indicates a seq scan:
  Index Scan using equity_eod_symbol_idx on equity_eod  (cost=0.00..8.27
  rows=1 width=149)
  Index Cond: ((symbol)::text = 'AA'::text)
 
  Is there any way to 'encourage' Postgres to hit the index when using
  regex?  Do I need to create a functional index or something?
  Without the index in play, I really can't use regex on any of my
  larger tables.

 You need it to be anchored:

 select * from eod where name ~ '^AA';

 If you're looking to be able to use indexes for searches within a
 string then, for 8.2, you'll need to check out tsearch2.

 Erik Jones

 DBA | Emma(R)
 [EMAIL PROTECTED]
 800.595.4401 or 615.292.5888
 615.292.0777 (fax)

 Emma helps organizations everywhere communicate & market in style.
 Visit us online at http://www.myemma.com





---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Regex query not using index

2008-02-19 Thread Chris

Postgres User wrote:

Yes that works, but the whole point of the exercise is to replace many OR
statements with 1 regex expression.  So it's not what I'm looking for.


Why do you want it done this way?

You can build an array of strings to check and use an in clause.

Using php:

$checks = array('AA', 'BA');

$query = "select * from table where name in ('" . implode("','", 
$checks) . "')";


and it should use an index (up to a point anyway).

--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL]

2008-02-19 Thread Tom Lane
serafin segador [EMAIL PROTECTED] writes:
 i ported a small application to postgresql 8.2 originally written to store 
 data in maxdb.  since stored procedures are not supported in postgresql and 
 since postgresql functions cannot return cursor to the calling applications, 
 i rewrote the maxdb stored procedures into functions that store rows in temp 
 tables and return the temp table name to the application ( saves on coding 
 the select string in the application) , and retrieve the rows from the temp 
 table within the same session.  this i can no longer do after upgrading to 
 8.3.  is there any thing i can do to restore this functionality i can get in 
 8.2, like configuration tweaking, apart from recoding the select string in 
 the application.

Er ... say what?

 stored procedures are not supported in postgresql

This is a false statement, unless perhaps for a very narrow definition
of stored procedure, which you didn't specify.

 and since postgresql functions cannot return cursor to the
 calling applications,

Likewise a false statement.

 i rewrote the maxdb stored procedures into
 functions that store rows in temp tables and return the temp table
 name to the application ( saves on coding the select string in the
 application) , and retrieve the rows from the temp table within the
 same session.  this i can no longer do after upgrading to 8.3.

I do not know any reason why a feature in this vicinity would have
disappeared in 8.3.

If you want help, you need to be a lot more specific about exactly what
you did and exactly what failure you saw.  And a bit less FUD-spouting
would probably make people more inclined to assist you.

regards, tom lane
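For the record, a function that hands a cursor back to the calling
application looks like this (a minimal sketch; the function and cursor
names are hypothetical, and the books table is borrowed from earlier in
this digest):

```sql
CREATE FUNCTION get_books() RETURNS refcursor AS $$
DECLARE
  c refcursor := 'books_cur';  -- named so the client can FETCH from it
BEGIN
  OPEN c FOR SELECT * FROM books;
  RETURN c;
END;
$$ LANGUAGE plpgsql;

-- Usage, inside one transaction:
-- BEGIN; SELECT get_books(); FETCH ALL FROM books_cur; COMMIT;
```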

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Regex query not using index

2008-02-19 Thread Postgres User
I'm trying to allow the client to pass a varchar param into my
function, and want to avoid any parsing of the parameter inside the
function, or code to build a sql string.

if the function can use this code, it will be compiled and optimized
(unlike a dynamic sql string)

select * from mytable where fielda ~ p_param

(where p_param is the input parameter)

On Feb 19, 2008 9:34 PM, Chris [EMAIL PROTECTED] wrote:
 Postgres User wrote:
  Yes that works, but the whole point of the exercise is to replace many OR
  statements with 1 regex expression.  So it's not what I'm looking for.

 Why do you want it done this way?

 You can build an array of strings to check and use an in clause.

 Using php :

 $checks = array('AA', 'BA');

 $query = "select * from table where name in ('" . implode("','",
 $checks) . "')";

 and it should use an index (up to a point anyway).

 --

 Postgresql & php tutorials
 http://www.designmagick.com/


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Regex query not using index

2008-02-19 Thread Tom Lane
Postgres User [EMAIL PROTECTED] writes:
 Yes that works, but the whole point of the exercise is to replace many OR
 statements with 1 regex expression.  So it's not what I'm looking for.

Unfortunately, Postgres is not as intelligent as you are.  There is
no mechanism to rewrite a multi-branch regex condition into multiple
indexscans.  I recommend going back to the OR's.

regards, tom lane
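Concretely, the OR form keeps each left-anchored branch indexable, and
for exact strings a plain IN list is simpler still (column and values
taken from this thread):

```sql
-- Each left-anchored branch can use the index separately:
SELECT * FROM eod WHERE name ~ '^BA$' OR name ~ '^AA$';

-- For exact matches, equality is the cheapest and clearest form:
SELECT * FROM eod WHERE name IN ('BA', 'AA');
```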

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Regex query not using index

2008-02-19 Thread Postgres User
Yes that works, but the whole point of the exercise is to replace many OR
statements with 1 regex expression.  So it's not what I'm looking for.

On Feb 19, 2008 9:16 PM, Chris [EMAIL PROTECTED] wrote:
 Postgres User wrote:
  Thanks, my dumb mistake.
  I need to perform the equivalent of a WHERE clause OR expression using
  regex to match exact strings.
 
  _
 
  this example hits the index:
  select * from eod where name ~ '^BA$'
 
  but when I try to add another possible value to the regex, it does a row 
  scan:
  select * from eod where name ~ '^BA$|^AA$'
 
  both of these statements return the right results, but the 2nd ignores
  the index even though both values are left-anchored.
 
  any workaround- this behavior doesn't seem to make sense

 try changing it to

 select * from eod where (name ~ '^BA$' or name ~ '^AA$')

 though in this example they should both be name = 'XX' rather than regexes.

 --
  Postgresql & php tutorials
 http://www.designmagick.com/


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Regex query not using index

2008-02-19 Thread Chris

Postgres User wrote:

Thanks, my dumb mistake.
I need to perform the equivalent of a WHERE clause OR expression using
regex to match exact strings.

_

this example hits the index:
select * from eod where name ~ '^BA$'

but when I try to add another possible value to the regex, it does a row scan:
select * from eod where name ~ '^BA$|^AA$'

both of these statements return the right results, but the 2nd ignores
the index even though both values are left-anchored.

any workaround- this behavior doesn't seem to make sense


try changing it to

select * from eod where (name ~ '^BA$' or name ~ '^AA$')

though in this example they should both be name = 'XX' rather than regexes.

--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Regex query not using index

2008-02-19 Thread Postgres User
by the way, your example works fine unless it's a null value or empty string
unfortunately, postgres isn't smart enough to know that when
p_param below is null, the WHERE condition can be ignored

select * from table where name in (Coalesce(p_param, name))

which is the same as:   select * from table where name in (name)

postgres does a row scan on the above sql.  too slow.


On Feb 19, 2008 9:34 PM, Chris [EMAIL PROTECTED] wrote:
 Postgres User wrote:
  Yes that works, but the whole point of the exercise is replace many OR
  statements with 1 regex expression.  So it's not what I'm looking for.

 Why do you want it done this way?

 You can build an array of strings to check and use an in clause.

 Using php :

 $checks = array('AA', 'BA');

 $query = "select * from table where name in ('" . implode("','",
 $checks) . "')";

 and it should use an index (up to a point anyway).

 --

 Postgresql & php tutorials
 http://www.designmagick.com/


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Regex query not using index

2008-02-19 Thread Chris

Postgres User wrote:

by the way, your example works fine unless it's a null value or empty string
unfortunately, postgres isn't smart enough to know that when
p_param below is null, the WHERE condition can be ignored

select * from table where name in (Coalesce(p_param, name))

which is the same as:   select * from table where name in (name)

postgres does a row scan on the above sql.  too slow.


If there's no where condition, postgres has to do a seq-scan anyway so 
your argument is void :)


--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] ERROR: relation with OID 1322527 does not exist

2008-02-19 Thread H . Harada
2008/2/20, Geoffrey [EMAIL PROTECTED]:
 We've just started seeing these errors.  Research I've done seems to
 indicate that it's related to temp tables.  Question is, we didn't start
 seeing these errors until we started using slony to replicate our data.

 The errors only showed up shortly after the initial replication of the
 data was complete.  That is, once the replication server 'caught up'
 with the production server.

I've seen the same error before. At that time I didn't use Slony, but
replicated some PostgreSQL servers at the application layer. It seemed
that not only temp tables but also regular relations were being dropped
and recreated rapidly when the error occurred.
There may be a window in which a relation (or its catalog information)
is dropped between the SELECT looking up the relation's OID and the
actual beginning of the scan.

I have no solution for this so far, but just for your information.


Hitoshi Harada

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Regex query not using index

2008-02-19 Thread Postgres User
doh!  tom, let me know if you decide to hack out a fix for this one of
these nights ;)
thanks for your help.

On Feb 19, 2008 9:45 PM, Tom Lane [EMAIL PROTECTED] wrote:
 Postgres User [EMAIL PROTECTED] writes:
  Yes that works, but the whole point of the exercise is to replace many OR
  statements with 1 regex expression.  So it's not what I'm looking for.

 Unfortunately, Postgres is not as intelligent as you are.  There is
 no mechanism to rewrite a multi-branch regex condition into multiple
 indexscans.  I recommend going back to the OR's.

 regards, tom lane


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] temp table in 8.3

2008-02-19 Thread serafin segador


hello list,

i am a newbie in postgresql.

i ported a small application to postgresql 8.2, originally written to store  
data in maxdb.  since stored procedures are not supported in postgresql  
and since postgresql functions cannot return a cursor to the calling  
application, i rewrote the maxdb stored procedures into functions that  
store rows in temp tables and return the temp table name to the  
application (this saves on coding the select string in the application), and  
retrieve the rows from the temp table within the same session.  this i can  
no longer do after upgrading to 8.3.  is there anything i can do to  
restore the functionality i had in 8.2, like configuration tweaking,  
apart from recoding the select string in the application?


regards and any help is highly appreciated,

raffy segador



--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL]

2008-02-19 Thread Scott Marlowe
On Feb 19, 2008 11:39 PM, Tom Lane [EMAIL PROTECTED] wrote:

  and since postgresql functions cannot return cursor to the
  calling applications,

 Likewise a false statement.

Yeah, I remembered there being a section on returning cursors.  I went
to the docs page and searched, and found it here:

http://www.postgresql.org/docs/8.3/static/plpgsql-cursors.html
section 38.7.3.5.  But then I go to the index page for plpgsql at
http://www.postgresql.org/docs/8.3/static/plpgsql.html and there are
only entries for 38.7.5.1 through 3...

Is there some problem with the doc rendering going on here?  Pic of
what I'm seeing attached.
attachment: Screenshot.png
---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] How to make update rapidly?

2008-02-19 Thread Scott Marlowe
On Feb 19, 2008 9:38 PM, hewei [EMAIL PROTECTED] wrote:
 Hi, everybody;
I have a table that contains 100,000 rows and has a primary key (int).
   Now, I need to execute SQL commands like update .. where id=* (id
 is the primary key).
   I expect to execute 1200-1600 SQL commands per second (1200-1600/s).
In testing, when the id increases sequentially across the commands, I can reach
 that speed (1600/s);
   but in fact, the ids in the commands are in random order, and then the speed is
 very slow, just 100/s.

Assuming that you're updating a non-indexed field, you should really
look at migrating to 8.3 if you haven't already.  Its performance on
such issues is reportedly much faster than 8.2's.

As for processing them in order versus randomly, that's a common
problem.  Right-sizing shared_buffers so that all of the table can fit
in RAM might help too.  As would a caching RAID controller.
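
If the updates arrive as individual statements, preparing the statement once
and executing it with different parameters avoids re-planning every time;
batching many updates into a single transaction also helps. A sketch (table
and column names are illustrative):

```sql
-- Sketch: "mytable" and "val" are illustrative names.
BEGIN;
PREPARE upd (int, text) AS
    UPDATE mytable SET val = $2 WHERE id = $1;
EXECUTE upd(42, 'first');
EXECUTE upd(17, 'second');
-- ... many more EXECUTEs ...
COMMIT;
DEALLOCATE upd;
```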

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] temp table in 8.3

2008-02-19 Thread Scott Marlowe
On Feb 19, 2008 10:37 PM, serafin segador [EMAIL PROTECTED] wrote:

 hello list,

 i am a newbie in postgresql.

Why are you posting the same request twice?

 i ported a small application to postgresql 8.2 originally written to store
 data in maxdb.  since stored procedures are not supported in postgresql
 and since postgresql functions cannot return cursor to the calling
 applications,

Maybe the docs would come in handy here.

1: Postgresql supports returning cursors from functions, and has for a
very long time.
2: It also supports returning sets of  records.  Both of these are
covered in the docs on plpgsql.

http://www.postgresql.org/docs/8.3/static/plpgsql-cursors.html
http://www.postgresql.org/docs/8.3/static/plpgsql-control-structures.html#PLPGSQL-STATEMENTS-RETURNING
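
Following the cursor docs above, a function that returns a cursor looks
roughly like this (a sketch; the table name and cursor name are
illustrative), with the caller fetching from it within the same transaction:

```sql
-- Sketch: "mytable" and the cursor name 'mycur' are illustrative.
CREATE OR REPLACE FUNCTION get_rows(p_cur refcursor)
RETURNS refcursor AS $$
BEGIN
    OPEN p_cur FOR SELECT * FROM mytable;
    RETURN p_cur;
END;
$$ LANGUAGE plpgsql;

-- Caller, inside a transaction:
BEGIN;
SELECT get_rows('mycur');
FETCH ALL FROM mycur;
COMMIT;
```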


 i rewrote the maxdb stored procedures into functions that
 store rows in temp tables and return the temp table name to the
 application ( saves on coding the select string in the application) , and
 retrieve the rows from the temp table within the same session.  this i can
 no longer do after upgrading to 8.3.  is there any thing i can do to
 restore this functionality i can get in 8.2, like configuration tweaking,
 apart from recoding the select string in the application.

There's no reason it shouldn't work, but this method is suboptimal.
Do you have something a little more useful, like an error / syntax
message?

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] ERROR: relation with OID 1322527 does not exist

2008-02-19 Thread Dragan Zubac

Hello

I have the same problem, but in ordinary usage, with no replication 
whatsoever. The problem is that when a client connects to the DB, a stored 
procedure in its initialization checks the mapping between some_table_name 
and the corresponding OID. Once something is changed in the DB, for example 
objects dropped and recreated (giving new OIDs) or something similar, you'll 
have errors in your log file. After the changes are done and you restart your 
application, which reconnects to the DB, things are back to normal again.


Sincerely

H.Harada wrote:

2008/2/20, Geoffrey [EMAIL PROTECTED]:
  

We've just started seeing these errors.  Research I've done seems to
indicate that it's related to temp tables.  Question is, we didn't start
seeing these errors until we started using slony to replicate our data.

The errors only showed up shortly after the initial replication of the
data was complete.  That is, once the replication server 'caught up'
with the production server.



I've seen the same error before. At that time I didn't use Slony, but
replicated some PostgreSQL servers at the application layer. It seemed
that not only temp tables but also regular relations were being dropped
and recreated rapidly when the error occurred.
There may be a window in which a relation (or its catalog information)
is dropped between the SELECT looking up the relation's OID and the
actual beginning of the scan.

I have no solution for this so far, but just for your information.


Hitoshi Harada

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/