Re: [GENERAL] thoughts about constraint trigger

2010-06-15 Thread Craig Ringer
On 15/06/10 02:33, Adrian von Bidder wrote:
 Heyho!
 
 I was trying to implement a deferred NOT NULL constraint using a deferred 
 constraint trigger (on update and insert of this row) because some values 
 would be filled in later during the transaction, after the initial part of 
 the record has been filled.

AFAIK, at this point only FOREIGN KEY constraints may be deferred.

http://www.postgresql.org/docs/current/static/sql-set-constraints.html
http://www.postgresql.org/docs/current/static/sql-createtable.html

DEFERRABLE
NOT DEFERRABLE

This controls whether the constraint can be deferred. A constraint
that is not deferrable will be checked immediately after every command.
Checking of constraints that are deferrable can be postponed until the
end of the transaction (using the SET CONSTRAINTS command). NOT
DEFERRABLE is the default. Only foreign key constraints currently accept
this clause. All other constraint types are not deferrable.
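For example, a deferrable foreign key can be declared and then deferred within a transaction (a minimal sketch; the table and column names are illustrative):

```sql
CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child (
    id        integer PRIMARY KEY,
    parent_id integer REFERENCES parent (id) DEFERRABLE INITIALLY IMMEDIATE
);

BEGIN;
SET CONSTRAINTS ALL DEFERRED;
-- Would fail immediately if the FK were NOT DEFERRABLE:
INSERT INTO child VALUES (1, 42);
INSERT INTO parent VALUES (42);  -- satisfy the FK before commit
COMMIT;  -- the deferred check runs here
```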

-- 
Craig Ringer

Tech-related writing: http://soapyfrogs.blogspot.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is enterprisedb.com down?

2010-06-15 Thread M. Bashir Al-Noimi
All the mirrors work except enterprisedb.com itself.

On Tue, Jun 15, 2010 at 7:50 AM, Jayadevan M
jayadevan.maym...@ibsplc.com wrote:

 Maybe you will be able to get one that is not blocked from the FTP sites
 list? I don't know if the rules applicable to the main server are
 automatically applied to the mirror sites too.
 http://wwwmaster.postgresql.org/download/mirrors-ftp
 Regards,
 Jayadevan





-- 
Best Regards
Muhammad Bashir Al-Noimi
My Blog: http://mbnoimi.net


Re: [GENERAL] Is enterprisedb.com down?

2010-06-15 Thread Magnus Hagander
On Tue, Jun 15, 2010 at 07:21, M. Bashir Al-Noimi ad...@mbnoimi.net wrote:
 On 15/06/2010 06:00 ص, John Gage wrote:

 I ran the IP on http://whatismyipaddress.com/blacklist-check and it is not
 blacklisted.

 Actually I suspect that pg has adopted the same policy as sf.net, which blocks
 open source projects from specific countries, as mentioned in the following
 links (for that reason I migrated my projects from sf.net to Launchpad,
 because sf.net works against FOSS):
 http://arabcrunch.com/2010/01/following-clintons-internet-freedom-speech-us-based-sourceforge-blocked-syria-sudan-iran-korea-cuba-is-open-source-still-really-open.html
 http://sourceforge.net/blog/clarifying-sourceforgenets-denial-of-site-access-for-certain-persons-in-accordance-with-us-law/
 http://sourceforge.net/blog/some-good-news-sourceforge-removes-blanket-blocking/

 So I would like to get clarification about this issue: does pg block my
 country? Is it still open source?

PostgreSQL does *not* forbid your country. As you showed further down
this thread, you can reach the main website and mirror systems, so
that's not where the problem is. But you wanted a statement on it.

This is, however, the second report in just a couple of days of people
not being able to reach the windows downloads that are hosted by
EnterpriseDB (I assume you are looking for the Windows downloads - if
you're actually looking for EnterpriseDB's commercial product, you
need to talk to them and not us). This indicates a real problem.

Can someone from EnterpriseDB (hi, Dave!) verify with their provider
that they do *not* intentionally or unintentionally prevent people
from downloading the software based on their location?

Unfortunately, there are no backup download locations available for
the windows installers, but you can always download the source off the
postgresql.org mirrors and build your own binaries.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [GENERAL] Is enterprisedb.com down?

2010-06-15 Thread M. Bashir Al-Noimi
Thanks, all. This thread has now moved to another mailing list, to discuss
this problem with the EnterpriseDB folks (Bruce Momjian and others).


2010/6/15 Magnus Hagander mag...@hagander.net

 On Tue, Jun 15, 2010 at 07:21, M. Bashir Al-Noimi ad...@mbnoimi.net
 wrote:
  On 15/06/2010 06:00 ص, John Gage wrote:
 
  I ran the IP on http://whatismyipaddress.com/blacklist-check and it is
 not
  blacklisted.
 
  Actually I suspect that pg takes same policy of sf.net where sf.net forbids
  open source projects to specific countries as mentioned in the following
  links (for that I migrated  my projects from sf.net to Launchpad because
  sf.net works against FOSS):
 
 http://arabcrunch.com/2010/01/following-clintons-internet-freedom-speech-us-based-sourceforge-blocked-syria-sudan-iran-korea-cuba-is-open-source-still-really-open.html
 
 http://sourceforge.net/blog/clarifying-sourceforgenets-denial-of-site-access-for-certain-persons-in-accordance-with-us-law/
 
 http://sourceforge.net/blog/some-good-news-sourceforge-removes-blanket-blocking/
 
  So I wish to get a clarification about this issue, does pg forbids my
  country? is it still open source?

 PostgreSQL does *not* forbid your country. As you showed further down
 this thread, you can reach the main website and mirror systems, so
 that's not where the problem is. But you wanted a statement on it.

 This is, however, the second report in just a couple of days of people
 not being able to reach the windows downloads that are hosted by
 EnterpriseDB (I assume you are looking for the Windows downloads - if
 you're actually looking for EnterpriseDB's commercial product, you
 need to talk to them and not us). This indicates a real problem.

 Can someone from EnterpriseDB (hi, Dave!) verify with their provider
 that they do *not* intentionally or unintentionally prevent people
 from downloading the software based on their location?

 Unfortunately, there are no backup download locations available for
 the windows installers, but you can always download the source off the
 postgresql.org mirrors and build your own binaries.


 --
  Magnus Hagander
  Me: http://www.hagander.net/
  Work: http://www.redpill-linpro.com/




-- 
Best Regards
Muhammad Bashir Al-Noimi
My Blog: http://mbnoimi.net


Re: [GENERAL] Re: Moving a live production database to different server and postgres release

2010-06-15 Thread Ulas Albayrak
Unfortunately, the switch to Windows is out of my hands. If it were up
to me I'd stick with BSD. When you say postgres on Windows is known
for its mediocre performance, do you mean it's slower or buggy? Or
both?

/Ulas

On Mon, Jun 14, 2010 at 10:22 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Mon, Jun 14, 2010 at 7:09 AM, Ulas Albayrak ulas.albay...@gmail.com 
 wrote:
 The database is  10GB and currently on a postgres version 8.2.15 on a
 BSD system and moving to postgres version 8.4.4 on a windows 2008
 server. The adding of data is continuous but in small quantities,
 totaling at about 20MB a day.

 Is there a good reason to go to Windows instead of a new BSD system?
  Windows is a known mediocre performer for postgres.

 BTW the slony versions need to match down to the minor rev number.




-- 
Ulas Albayrak
ulas.albay...@gmail.com



Re: [GENERAL] Re: Moving a live production database to different server and postgres release

2010-06-15 Thread Guillaume Lelarge
On 15/06/2010 09:18, Ulas Albayrak wrote:
 Unfortunately, the switch to Windows is out of my hands. If it were up
 to me I'd stick with BSD. When you say postgres on Windows is known
 for its mediocre performance, do you mean it's slower or buggy? Or
 both?
 

Slower. If it were buggy, it would be fixed.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [GENERAL] Silent installer in Windows

2010-06-15 Thread M. Bashir Al-Noimi
Sorry for the disturbance.
I want to run the One-Click installer completely silently; how can I do it?

PS
I ran the following, but it always shows the installation progress dialog:
postgresql-8.4.4-1-windows --install_runtimes 0 --servicepassword root
--unattendedmodeui minimal --mode unattended --prefix c:\pg-8.4 --datadir
c:\pgData --superpassword admin

On Mon, Jun 14, 2010 at 9:55 PM, Sachin Srivastava 
sachin.srivast...@enterprisedb.com wrote:

  In the One-Click Installer for PostgreSQL, use the CLI option --mode
 unattended; for more options see --help.


 On 6/15/10 12:45 AM, M. Bashir Al-Noimi wrote:

 Hi folks,


 I want to install PostgreSQL silently on Windows; how can I do it? What
 arguments are needed?


  Sorry I'm still a newbie


  Ps

 I read the pginstaller doc about silent installing
 (http://pginstaller.projects.postgresql.org/silent.html) but I noticed
 that pginstaller is no longer available, as mentioned on its home
 page.

 --
 Best Regards
 Muhammad Bashir Al-Noimi
 My Blog: http://mbnoimi.net





 --
 Regards,
 Sachin Srivastava
 EnterpriseDB (http://www.enterprisedb.com), the Enterprise Postgres
 company.




-- 
Best Regards
Muhammad Bashir Al-Noimi
My Blog: http://mbnoimi.net


Re: [GENERAL] Silent installer in Windows

2010-06-15 Thread Sachin Srivastava

Use the CLI option --unattendedmodeui none

On 6/15/10 1:17 PM, M. Bashir Al-Noimi wrote:

Sorry for disturbing.
I want to run One-Click completely silent, how I can do it?

PS
I run the following but it always shows installing progress dialog:
postgresql-8.4.4-1-windows --install_runtimes 0 --servicepassword root 
--unattendedmodeui minimal --mode unattended --prefix c:\pg-8.4 
--datadir c:\pgData --superpassword admin


On Mon, Jun 14, 2010 at 9:55 PM, Sachin Srivastava 
sachin.srivast...@enterprisedb.com 
mailto:sachin.srivast...@enterprisedb.com wrote:


In the One-Click Installer for PostgreSQL use the CLI option,
--mode unattended for more options see --help.


On 6/15/10 12:45 AM, M. Bashir Al-Noimi wrote:


Hi folks,


I want to install ps silentely in Windows how I can do it? what's
needed arguments?


Sorry I'm still a newbie


Ps

I read pginstaller doc about silent installing
http://pginstaller.projects.postgresql.org/silent.html but I
noticed that pginstaller no longer available as mentioned in its
home page.

-- 
Best Regards

Muhammad Bashir Al-Noimi
My Blog:http://mbnoimi.net
   




   



-- 
Regards,

Sachin Srivastava
EnterpriseDB (http://www.enterprisedb.com), the Enterprise
Postgres company.




--
Best Regards
Muhammad Bashir Al-Noimi
My Blog: http://mbnoimi.net




--
Regards,
Sachin Srivastava
EnterpriseDB (http://www.enterprisedb.com), the Enterprise Postgres
company.


Re: [GENERAL] DDL partitioning with insert trigger issue

2010-06-15 Thread Alban Hertroys
On 15 Jun 2010, at 24:46, mark wrote:

 Hello,
  
 I am running PG 8.3 and following the guide found at
 http://www.postgresql.org/docs/current/static/ddl-partitioning.html
  
  
 I have followed the steps outlined here nearly exactly with regards to using 
 an insert trigger to call a function to insert data into the child partition. 
 I am wondering why I am getting the record inserted in both the child and the 
 parent partition when executing an insert into the parent.
  
  
 Is there a step missing from the DOC? Something else I need to do?

Are you sure you're not mistaking table inheritance for duplicates? If you're 
querying the master table without the ONLY keyword then you'll see the data 
from the child tables as well.

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.




Re: [GENERAL] Partial indexes instead of partitions

2010-06-15 Thread Leonardo F
 AFAIU the OP is trying to give the cache a chance of doing some useful
 work by partitioning by time so it's going to be forced to go to disk
 less.

Exactly

 have you considered a couple of levels to your hierarchy.  Maybe
 bi-hourly (~15 million records?) within the current day and move them
 over into a day table at night

I was going for the partitioned-index approach because
it would avoid re-copying the data into another table.
My idea was:

1) create partial indexes on today's table
2) at night, create a whole index (not partial) on yesterday's
table
3) drop the partial indexes on yesterday's table
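In SQL, those three steps might look roughly like this (a sketch only; the table name, column, and cutoff timestamps are made up for illustration):

```sql
-- 1) During the day, partial indexes covering slices of today's data:
CREATE INDEX log_am_idx ON log (ts) WHERE ts <  '2010-06-15 12:00';
CREATE INDEX log_pm_idx ON log (ts) WHERE ts >= '2010-06-15 12:00';

-- 2) At night, build one whole (non-partial) index over the day's data:
CREATE INDEX log_full_idx ON log (ts);

-- 3) Then drop the partial indexes:
DROP INDEX log_am_idx;
DROP INDEX log_pm_idx;
```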

But this doesn't work, because partial indexes aren't
appended the way partitioned tables are... that is, if I have
one index covering half the table and another covering the other
half, and I query data spanning the intersection, I'll always get
a plain table scan, where I would expect the planner to do
an append of the results of two index scans...

Could this be added to the TODO list? It doesn't look that
different from what table partitioning/pruning does.

Thank you everybody for your replies anyway!






Re: [GENERAL] Re: Moving a live production database to different server and postgres release

2010-06-15 Thread Alban Hertroys
On 15 Jun 2010, at 9:21, Guillaume Lelarge wrote:

 On 15/06/2010 09:18, Ulas Albayrak wrote:
 Unfortunately, the switch to Windows is out of my hands. If it were up
 to me I'd stick with BSD. When you say postgres on Windows is known
 for its mediocre performance, do you mean it's slower or buggy? Or
 both?
 
 
 Slower. If it were buggy, it would be fixed.


Yes, bugs do get fixed, but bugs that haven't been fixed yet are still bugs.

The reason I'm saying this is that Postgres on Windows is relatively new
compared to Postgres on other operating systems. In general, the longer
software exists and the more users it has on a certain operating system, the
more bugs get fixed.

I don't know the usage numbers on the different operating systems, and the
number of installations on Windows has definitely grown a lot in recent
years, but historically the Windows version used to have a relatively small
user base and was therefore more likely to contain bugs.

That aside, I think Windows is the only supported OS that has APIs that differ
a lot from the usual APIs that Postgres was developed for. Therefore the
Windows version has its own set of potential problems.

I'm not saying that Postgres on Windows is buggy, I'm just saying that the 
chances you run into one are relatively speaking higher on Windows than on 
other operating systems. If you check the archives you won't see a lot of bug 
reports though, no matter what OS people are using.

And yes, it's slower on Windows. IIRC that's because Windows isn't very good at 
multi-processing and Postgres runs as multiple processes.

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.




Re: [GENERAL] Re: Moving a live production database to different server and postgres release

2010-06-15 Thread Greg Smith

Ulas Albayrak wrote:

Unfortunately, the switch to Windows is out of my hands. If it were up
to me I'd stick with BSD. When you say postgres on Windows is known
for its mediocre performance, do you mean it's slower or buggy? Or
both?
  


Three examples that have varying proportions of slow and buggy in them:

-Without risky registry hacking, Windows systems won't allow more than 
about 125 connections to the server at a time if you're using the 
standard service infrastructure to manage the server.  See the last 
entry at 
http://wiki.postgresql.org/wiki/Running_%26_Installing_PostgreSQL_On_Native_Windows 
for details.  If you need more clients than that connecting to the 
database, you'll need to either tweak the registry, run it outside of 
the services model, or put a connection pooler between the clients and 
the database.


-UNIX systems normally allow giving the database up to several gigabytes 
of RAM for its direct utilization.  Windows installs have to be kept in 
the 128MB - 512MB range because they get unexpectedly slower when the 
database has more memory than that.
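In postgresql.conf terms, that means keeping shared_buffers modest on a Windows install (the value below is illustrative, not a tuned recommendation):

```
# postgresql.conf on Windows: large shared_buffers values can make
# the server slower rather than faster
shared_buffers = 256MB
```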


-Anti-virus software installed on Windows servers has to be very 
carefully screened for compatibility with the database, with really 
random sorts of problems popping up when you have a bad combination.  
Any time you let your AV software get updated, you're potentially 
exposed to the database becoming unreliable afterwards.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [GENERAL] thoughts about constraint trigger

2010-06-15 Thread Adrian von Bidder
On Tuesday 15 June 2010 08.03:49 Craig Ringer wrote:
 AFAIK, at this point only FOREIGN KEY constraints may be deferred.

I think you didn't understand what I wrote.  9.0 allows deferring UNIQUE as 
well, but not NOT NULL, which is why I wrote a deferred constraint trigger to 
implement it; it behaved slightly differently from what I expected, which 
led me to wonder if my expectation was so far off ...

cheers
-- vbi

-- 
Valentine's Day is the one holiday when everyone is expected to do
something romantic for their spouse or lover -- and if someone has both,
it's a serious problem. ...  planning a 'business trip' that falls over
Valentine's Day is a typical mistake cheaters make. ... So now I'm
wondering why the RSA Conference is being held over Valentine's Day.
-- Bruce Schneier quoting the Wall Street Journal




[GENERAL] Disk performance

2010-06-15 Thread Janning
Hi all,

as we encountered some limitations of our cheap disk setup, I would really 
like to see how cheap it is compared to expensive disk setups.

We have a 12 GB RAM machine with an Intel i7-975, using
three Seagate Barracuda 7200.11 disks, ST31500341AS (1.5 TB):
one disk for the system, WAL, etc., and one SW RAID-0 with two disks for
the postgresql data.

Now I ran a few tests as described in
http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm

# time sh -c "dd if=/dev/zero of=bigfile bs=8k count=3000000 && sync"
3000000+0 records in
3000000+0 records out
24576000000 bytes (25 GB) copied, 276.03 s, 89.0 MB/s

real    4m48.658s
user    0m0.580s
sys     0m51.579s

# time dd if=bigfile of=/dev/null bs=8k
3000000+0 records in
3000000+0 records out
24576000000 bytes (25 GB) copied, 222.841 s, 110 MB/s

real    3m42.879s
user    0m0.468s
sys     0m18.721s


IMHO it looks quite fast compared to the values mentioned in the article.
What values would you expect with a very expensive setup: many spindles,
SCSI, RAID controller, battery-backed cache, etc.? How much faster would it be?

Of course, you can't give me exact results, but I would just like to get an
idea of how much faster an expensive disk setup could be.
Would it be like 10% faster, 100%, or 1000% faster? If you can give me any
hints, I would greatly appreciate it.

kind regards
Janning







[GENERAL] pg_restore performance / practicality question

2010-06-15 Thread Mike C
Hi,

I want to know if pg_restore -a (data only, no schema) is
a good method to restore data from a backup into an existing
database which already has data in its tables (while ensuring that
existing data is preserved).

I've done a simple test by backing up the database using pg_dump and
the custom format, then changing the primary keys of the tables to be
different (update TABLE set ID = nextval('SEQ')), then 'pg_restore -a'
and it looks like the data restores fine. During the restoration
period I was able to perform normal CRUD operations on existing rows.

Has anyone had experience doing such an activity on a production
database? i.e. Restoring tables of 50GB and up? Should I expect
problems with this method - are there any tricks to be aware of?
Potential data inconsistencies? The only other option I can see is
'dump as inserts' but I'd prefer to avoid the disk overhead of such a
verbose backup file, plus COPY is faster than INSERT.

I'm going to continue my testing on some larger data sets, but would
appreciate if anyone already has some insights about it.

Regards,

Mike



Re: [GENERAL] Disk performance

2010-06-15 Thread Ivan Voras
On 06/15/10 14:59, Janning wrote:
 Hi all,
 
 as we encountered some limitations of our cheap disk setup, I really would 
 like to see how cheap they are compared to expensive disk setups.
 
 We have a 12 GB RAM machine with intel i7-975 and using
  3 disks Seagate Barracuda 7200.11, ST31500341AS (1.5 TB) 
 One disk for the system and WAL etc. and one SW RAID-0 with two disks for  
 postgresql data.
 
 Now I ran a few test as described in
 http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm
 
 # time sh -c "dd if=/dev/zero of=bigfile bs=8k count=3000000 && sync"
 3000000+0 records in
 3000000+0 records out
 24576000000 bytes (25 GB) copied, 276.03 s, 89.0 MB/s
 
 real  4m48.658s
 user  0m0.580s
 sys   0m51.579s
 
 # time dd if=bigfile of=/dev/null bs=8k
 3000000+0 records in
 3000000+0 records out
 24576000000 bytes (25 GB) copied, 222.841 s, 110 MB/s
 
 real  3m42.879s
 user  0m0.468s
 sys   0m18.721s

The figures are ok if the tests were done on a single drive (i.e. not
your RAID-0 array).

 IMHO it is looking quite fast compared to the values mentioned in the 
 article. 
 What values do you expect with a very expensive setup like many spindles, 
 scsi, raid controller, battery cache etc. How much faster will it be? 

For start, you are attempting to use RAID-0 with two disks here. This
means you have twice as much risk that a drive failure will cause total
data loss. In any kind of serious setup this would be the first thing to
replace.

 Of course, you can't give me exact results, but I would just like to get an 
 idea about how much faster an expensive disk setup could be. 
 Would it be like 10% faster, 100% or 1000% faster? If you can give me any 
 hints, I would greatly appreciate it.

There is no magic here - scalability of drives can be approximated linearly:

a) faster drives: 15,000 RPM drives will be almost exactly 15000/7200
times faster at random access

b) more drives: depending on your RAID schema, each parallel drive or
drive combination will grow your speed linearly. For example, a 3-drive
RAID-0 will be 3/2 times faster than a 2-drive RAID-0. Of course, you
would not use RAID-0 anywhere serious. But an 8-drive RAID-10 array will
be 8/4=2 times faster than a 4-drive RAID-10 array.

Finally, it all depends on your expected load vs budget. If you are
unsure of what you want and what you need, but don't expect serious
write loads, make a 4-drive RAID-10 array of your cheap 7200 RPM drives,
invest in more RAM and don't worry about it.

Drive controllers are another issue, and there is somewhat more magic
here. If the above paragraph describes you well, you probably don't need
a RAID controller. There are many different kinds of these, with
extremely different prices and many different configuration options, so
nowadays it isn't practical to think about them until you really need to.





Re: [GENERAL] Is there a way to backup Postgres via SQL commands?

2010-06-15 Thread Dimitri Fontaine
Frank Church voi...@googlemail.com writes:
 Are there SQL commands that can do a backup over a client connection,
 rather than from the command line like pgsql etc?

That's pg_dump ?

 By that I mean some kind of SELECT commands that can retrieve the
 database's content as SQL commands that can be replayed to a server to
 restore it, rather than something that saves directly to file, or
 passes it through a pipe?

If you want to retrieve the SQL commands that allows you to recreate a
live database, use pg_dump.

If you want to build a file-by-file replica of the live system (base
backup) through a usual PostgreSQL connection, you can use pg_basebackup
which is available on github:
  http://github.com/dimitri/pg_basebackup

If you want to run pg_dump via an SQL query, I say I don't see any
interest in doing so. Plain client-side pg_dump will get the data it
needs (including necessary DDLs) through a normal PostgreSQL connection
already. Arrange yourself so that you can run pg_dump!

As others said, though, it can certainly be done, but not without some
caveats. Do you want only the schema, or the schema and the data? The
first limitation I can think of is the 1GB - 4 bytes bytea datatype
capacity in memory.

Regards,
-- 
dim



[GENERAL] Monitoring activities of PostgreSQL

2010-06-15 Thread Allan Kamau
I have a PL/SQL function that gets called many times, with
different parameter values each time. Most invocations of this
function run in a couple of seconds; however, some invocations of the
same function (on the same dataset) run for hours, with very little
disk activity but high CPU.

How can I monitor the actual DB activity during such times, so I may
better understand what the situation truly is? I have seen some users
on this list posting some complex logs/outputs; these are the kind of
outputs I would like to capture and view. Where are they?
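For reference, that kind of output usually starts from the pg_stat_activity system view, which shows what every backend is running right now. A sketch for the 8.x series (where the columns are named procpid and current_query; later releases renamed them):

```sql
-- Show non-idle backends, longest-running statements first:
SELECT procpid, usename, now() - query_start AS runtime, current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
ORDER BY query_start;
```

Setting log_min_duration_statement in postgresql.conf will additionally log every statement that runs longer than the given number of milliseconds.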

Allan.



Re: [GENERAL] Is enterprisedb.com down?

2010-06-15 Thread Bruce Momjian
Jayadevan M wrote:
 May be you will be able to get one that is not blocked from the ftp sites 
 list? I don't know if the rules applicable to main server are 
 automatically applied to the mirror sites too.
 http://wwwmaster.postgresql.org/download/mirrors-ftp

The EnterpriseDB binaries are not on the community FTP servers, and
can't be, because some of the community servers are in the USA, and those
USA servers might not block embargoed countries.  The one-click
installers are unique because they embed OpenSSL, while things like RPMs
use the OpenSSL library in the operating system.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + None of us is going to be here forever. +



[GENERAL] Inconsistency with LIKE and SIMILAR TO

2010-06-15 Thread Rosiński Krzysztof 2 - Detal TP
Hello.

Operators LIKE and SIMILAR TO work differently.
This query works OK:

SELECT *
  FROM www.test
  WHERE expr like any (ARRAY['a','b']);

But this does not work:

SELECT *
  FROM www.test
  WHERE expr similar to any (ARRAY['a','b']);

ERROR:  syntax error at or near "any"
LINE 3:   WHERE expr similar to any (ARRAY['a','b']);

Shouldn't this work?


pasman



Re: [GENERAL] Inconsistency with LIKE and SIMILAR TO

2010-06-15 Thread Thom Brown
2010/6/15 Rosiński Krzysztof 2 - Detal TP 
krzysztof.rosins...@telekomunikacja.pl

 Hello.

 Operators LIKE and SIMILAR TO work differently
 This query works ok.:

 SELECT *
  FROM www.test
  WHERE expr like any (ARRAY['a','b']);

 But this not work:

 SELECT *
  FROM www.test
  WHERE expr similar to any (ARRAY['a','b']);

 ERROR:  syntax error at or near any
 LINE 3:   WHERE expr similar to any (ARRAY['a','b']);

 Is this shouldn't works ?


SIMILAR TO and LIKE aren't synonyms for each other.  SIMILAR TO uses a cross
between LIKE syntax and regular expressions for the evaluated expression.
I'm not entirely sure what your query is supposed to be doing.  I thought
ANY was a keyword denoting a row-wise comparison?
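For what it's worth, LIKE combines with ANY because it is backed by an ordinary operator (~~), while SIMILAR TO is special grammar, so the parser rejects it before ANY is ever considered. One workaround is the POSIX regex operator ~, which is a real operator and therefore does combine with ANY (the patterns below are illustrative and not exactly equivalent to the SIMILAR TO forms):

```sql
SELECT *
  FROM www.test
  WHERE expr ~ ANY (ARRAY['a', 'b']);  -- regex match against any pattern
```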

Thom


[GENERAL] integer to little endian conversion

2010-06-15 Thread maarten
Hi all,

I've been looking for a while now to solve my problem.
I'd like to store an integer (and other things) in a bytea field of a
table from a trigger function.
The integer needs to be inserted in its binary representation:
1 -> \x01\x00\x00\x00
256 -> \x00\x01\x00\x00

(which would be E'\\001\\000\\000\\000' and E'\\000\\001\\000\\000')

Since I did not find any functions in the documentation I'm writing my
own function to do this in plpgsql. (attached below)

This works for many values and fails for many others.  The reason is
that chr(integer) works fine for me up to 127 (chr(127) -> '\177') but
from 128 onwards it returns useless results: chr(128) -> '\302\200'

If anyone has any idea on how to proceed that would be greatly
appreciated.

For the record, chr(integer) works just how it's supposed to; I realize
I'll need something along the lines of:
IF v_temp = 0 -> E'\\000'
ELSIF v_temp < 128 -> chr(v_temp)
ELSE ...

It's the ... I'm looking for. (Or some function that can replace that
entire IF block, or even better, the entire function.)

Thanks in advance,
Maarten


CREATE OR REPLACE FUNCTION utils.int_littleendian(v_number integer)
  RETURNS bytea AS
$BODY$
DECLARE 
v_textresult bytea;
v_temp int;
v_int int;
v_i int = 4;
BEGIN
v_int = v_number;
v_textresult = '';
WHILE (v_i > 0) LOOP
v_temp := v_int%256;
v_int := v_int - v_temp;
v_int := v_int / 256;
IF v_temp = 0 THEN
v_textresult = v_textresult || E'\\000';
ELSE
v_textresult = v_textresult || chr(v_temp);
END IF;
v_i := v_i - 1;
END LOOP;
return v_textresult;
END;

$BODY$
  LANGUAGE 'plpgsql' VOLATILE
  COST 100;
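One way around the chr() encoding issue is to avoid text entirely and build the bytea with set_byte(), which writes raw byte values 0-255 directly. A sketch (the function name is made up; the bitwise operators >> and & are standard PostgreSQL integer operators):

```sql
CREATE OR REPLACE FUNCTION utils.int_littleendian2(v_number integer)
  RETURNS bytea AS
$BODY$
-- Start from a 4-byte zero bytea and overwrite each byte,
-- least significant byte first (little endian).
SELECT set_byte(
         set_byte(
           set_byte(
             set_byte(E'\\000\\000\\000\\000'::bytea, 0,  $1        & 255),
             1, ($1 >>  8) & 255),
           2, ($1 >> 16) & 255),
         3, ($1 >> 24) & 255);
$BODY$
  LANGUAGE sql IMMUTABLE;

-- SELECT utils.int_littleendian2(256);  -- expected: \000\001\000\000
```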






Re: [GENERAL] IMMUTABLE columns in tables?

2010-06-15 Thread Chris Browne
avbid...@fortytwo.ch (Adrian von Bidder) writes:
 Heyho!

 (Ok, seems to be feature wish day ...)

 I was wondering if others would find an IMMUTABLE (or whatever) column 
 constraint useful as well.  Semantics would (obviously?) be to disallow 
 changing the value of this column after insert.

 I realize that this is possible via triggers, and with the recent 
 possibility of having triggers fire only on changes to certain columns it's 
 even (presumably) not much runtime overhead, but creating triggers is very 
 verbose and doesn't make the db schema very readable.

I'd like that feature, and I don't think it takes too much arguing to
get to the point that a declarative IMMUTABLE control is rather less
opaque than someone saying "oh, you could just create a trigger
running PL/LOLCODE to do that!"

I thought that this was on the Todo list, but I don't see it.
http://wiki.postgresql.org/wiki/Todo

If you're keen on getting it onto the ToDo list, the argumentation
process would be made easier if the material about this included
answers to a couple more questions:
  - What do other databases use as syntax for this?
  - Does SQL standard have anything to say about how this sort
of thing ought to be declared?
-- 
select 'cbbrowne' || '@' || 'cbbrowne.com';
http://cbbrowne.com/info/internet.html
MS  apparently now  has a  team dedicated  to tracking  problems with
Linux  and publicizing them.   I guess  eventually they'll  figure out
this back fires... ;) -- William Burrow aa...@delete.fan.nb.ca



Re: [GENERAL] Disk performance

2010-06-15 Thread Janning
On Tuesday 15 June 2010 15:16:19 Ivan Voras wrote:
 On 06/15/10 14:59, Janning wrote:
  Hi all,
 
  as we encountered some limitations of our cheap disk setup, I really
  would like to see how cheap they are compared to expensive disk setups.
 
  We have a 12 GB RAM machine with intel i7-975 and using
  3 disks Seagate Barracuda 7200.11, ST31500341AS (1.5 TB)
  One disk for the system and WAL etc. and one SW RAID-0 with two disks for
  postgresql data.
 
  Now I ran a few test as described in
  http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm
 
  # time sh -c "dd if=/dev/zero of=bigfile bs=8k count=3000000 && sync"
  3000000+0 records in
  3000000+0 records out
  24576000000 bytes (25 GB) copied, 276.03 s, 89.0 MB/s
 
  real    4m48.658s
  user    0m0.580s
  sys     0m51.579s
 
  # time dd if=bigfile of=/dev/null bs=8k
  3000000+0 records in
  3000000+0 records out
  24576000000 bytes (25 GB) copied, 222.841 s, 110 MB/s
 
  real    3m42.879s
  user    0m0.468s
  sys     0m18.721s

 The figures are ok if the tests were done on a single drive (i.e. not
 your RAID-0 array).

Ahh, I meant raid-1, of course.  Sorry for this.
I tested my raid 1 too and it looks quite the same. Not much difference.

  IMHO it is looking quite fast compared to the values mentioned in the
  article. What values do you expect with a very expensive setup like many
  spindles, scsi, raid controller, battery cache etc. How much faster will
  it be?

 For start, you are attempting to use RAID-0 with two disks here. This
 means you have twice as much risk that a drive failure will cause total
 data loss. In any kind of serious setup this would be the first thing to
 replace.

I did it already :-)

  Of yourse, you can't give me exact results, but I would just like to get
  a an idea about how much faster an expensive disk setup could be.
  Would it be like 10% faster, 100% or 1000% faster? If you can give me any
  hints, I would greatly appreciate it.

 There is no magic here - scalability of drives can be approximated
 linearly:

 a) faster drives: 15,000 RPM drives will be almost exactly 15000/7200
 times faster at random access

ok. 

 b) more drives: depending on your RAID schema, each parallel drive or
 drive combination will grow your speed linearly. For example, a 3-drive
 RAID-0 will be 3/2 times faster than a 2-drive RAID-0. Of course, you
 would not use RAID-0 anywhere serious. But an 8-drive RAID-10 array will
 be 8/4=2 times faster than a 4-drive RAID-10 array.

So RAID-10 with 4 disks is 2 times faster than a RAID-1, I got it. So as I 
need much more power I should look for a RAID-10 with 8 or more 15k RPM disks.

 Finally, it all depends on your expected load vs budget. If you are
 unsure of what you want and what you need, but don't expect serious
 write loads, make a 4-drive RAID-10 array of your cheap 7200 RPM drives,
 invest in more RAM and don't worry about it.

ok, I will look for a hoster who can provide this. Most hosters normally offer 
lots of ram and cpu but no advanced disk configuration.

 Drive controllers are another issue and there is somewhat more magic
 here. If the above paragraph describes you well, you probably don't need
 a RAID controller. There are many different kinds of these with
 extremely different prices, and many different configuration option so
 nowadays it isn't practical to think about those until you really need to.

thanks very much for your help.
It gave me a good idea of what to do. If you have further recommendations, I 
would be glad to hear them.

kind regards
Janning





Re: [GENERAL] Disk performance

2010-06-15 Thread Leonardo F
 thanks very much for your help.
 It gave me a good idea of what to do. If you have further
 recommendations, I would be glad to hear them.

I guess you should give more info about the expected workload of your
server(s)... otherwise you risk spending too much money, or spending it
in the wrong way...






Re: [GENERAL] Disk performance

2010-06-15 Thread Alan Hodgson
On Tuesday, June 15, 2010, Janning m...@planwerk6.de wrote:
 ok, I will look for a hoster who can provide this. Most hosters normally
 offer lots of ram and cpu but no advanced disk configuration.
 

I've noticed that too, even Rackspace doesn't offer a standard config that 
anyone would actually want to use for a database server. I know they can 
custom build something but is there really no demand for servers with real 
storage subsystems?

-- 
No animals were harmed in the recording of this episode. We tried but that 
damn monkey was just too fast.



Re: [GENERAL] IMMUTABLE columns in tables?

2010-06-15 Thread Craig Ringer

On 15/06/2010 2:41 AM, Adrian von Bidder wrote:

Heyho!

(Ok, seems to be feature wish day ...)

I was wondering if others would find an IMMUTABLE (or whatever) column
constraint useful as well.  Semantics would (obviously?) be to disallow
changing the value of this column after insert.

I realize that this is possible via triggers, and with the recent
possibility of having triggers fire only on changes to certain columns it's
even (presumably) not much runtime overhead, but creating triggers is very
verbose and doesn't make the db schema very readable.


Where possible, I do this with column privileges, ensuring that nobody 
(except the superuser) has UPDATE rights for that column. It's short and 
simple, though it's only effective if the regular user isn't the same as 
the table owner.


It'd be nice to have something stronger in the form of a column 
qualifier (akin to NOT NULL), though, and much easier to show to be 
correct. Column privs may be bypassed by a superuser (or a suitably 
privileged SECURITY DEFINER function/trigger), and triggers can have 
interesting interactions between them that make it hard to feel 
confident in the effect.


--
Craig Ringer



[GENERAL] pgcrypto in separate schema

2010-06-15 Thread Brad Nicholson
I want to put the functions from pgcrypto into a separate schema, but
pgcrypto.sql is explicitly setting the search path to public.  Is there
a reason it does this that I should be aware of? Is it fine to change
that and install the functions in a separate schema?
-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.





Re: [GENERAL] pgcrypto in separate schema

2010-06-15 Thread Magnus Hagander
On Tue, Jun 15, 2010 at 19:02, Brad Nicholson bnich...@ca.afilias.info wrote:
 I want to put the functions from pgcrypto into a separate schema, but
 pgcrypto.sql is explicitly setting the search path to public.  Is there
 a reason it does this that I should be aware of? Is it fine to change
 that and install the functions in a separate schema?

Nope, just remove that line. pgcrypto works perfectly fine when
running in its own schema - in fact, I think that's the general
recommendation. I think the lock to the public schema was just left in
there pending the extensions mechanism eventually getting finished.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [GENERAL] Disk performance

2010-06-15 Thread Ivan Voras
On 15 June 2010 18:22, Janning m...@planwerk6.de wrote:

 The figures are ok if the tests were done on a single drive (i.e. not
 your RAID-0 array).

 Ahh, I meant raid-1, of course.  Sorry for this.
 I tested my raid 1 too and it looks quite the same. Not much difference.

This is expected: a RAID-1 array (mirroring) will have the performance
of the slowest drive (or a single drive if they are equal).

 There is no magic here - scalability of drives can be approximated
 linearly:

 a) faster drives: 15,000 RPM drives will be almost exactly 15000/7200
 times faster at random access

 ok.

(or if you are looking at raw numbers: a 15,000 RPM drive will sustain
15000/60=250 random IOs per second (IOPS); but now you are entering
magic territory - depending on the exact type of your load you can get
much better results, but not significantly worse).

 b) more drives: depending on your RAID schema, each parallel drive or
 drive combination will grow your speed linearly. For example, a 3-drive
 RAID-0 will be 3/2 times faster than a 2-drive RAID-0. Of course, you
 would not use RAID-0 anywhere serious. But an 8-drive RAID-10 array will
 be 8/4=2 times faster than a 4-drive RAID-10 array.

 So RAID-10 with 4 disks is 2 times faster than a RAID-1, I got it. So as I
 need much more power I should look for a RAID-10 with 8 or more 15k RPM disks.

Yes, if you expect serious write or random IO load. To illustrate: if
you are trying to power a generic web site, for example a blog, you
can expect that most of your load will be read-only (mostly pageviews)
and except if you plan on having a really large site (many authors for
example), that your database will largely fit into RAM, so you don't
have to invest in disk drives as it will be served from cache. On the
other hand, a financial application will do a lot of transactions and
you will almost certainly need good storage infrastructure - this is
where the 250 IOPS for a 15000 RPM drive estimates come into play.

 thanks very much for your help.
 It gave me a good idea of what to do. If you have further recommendations, I
  would be glad to hear them.

I can point you to a dedicated mailing list: pgsql-performance @
postgresql.org for questions about performance such as yours.



Re: [GENERAL] Disk performance

2010-06-15 Thread Greg Smith

Janning wrote:
IMHO it is looking quite fast compared to the values mentioned in the article. 
  


The tests in the article were using the 2006 versions of the same drive 
you have, so I'd certainly hope yours are faster now.


What values do you expect with a very expensive setup like many spindles, 
scsi, raid controller, battery cache etc. How much faster will it be? 
  


If you look at my "Database Hardware Benchmarking" talk at 
http://projects.2ndquadrant.com/talks I give examples of some of this.  
Page 9 shows how much of a speedup I saw going from one cheap drive to 
three for example, and P32 shows that in the mixed I/O bonnie++ seeks 
tests tripling the number of drives increases the seeks rating it 
computes from 177 to 371.


If you add in a RAID controller, the sequential read/write numbers 
increase no differently than if you add disks with software RAID.  They 
do significantly increase what I call the "Commit Rate", which is how 
many small writes you can get per second for database commits.  The 
commit rate for regular drives is proportional to their rotation rate, 
between 100-250 commit/second without a battery-backed RAID controller.  
As you can also see on P32, it jumps to thousands of commits/second with 
one.


Presuming you have reasonable sequential performance and a 
battery-backed controller to make the commit rate reasonable, database 
applications will then normally bottleneck at how fast they can seek 
around.  It is extremely hard to estimate how fast that scales upwards 
as you add more disks to an array and insert a read/write cache into the 
system.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




[GENERAL] use window as field name in 8.4

2010-06-15 Thread Peter Lee
I am trying to upgrade our postgresql from 8.3 to 8.4.

I found the window as field name makes many errors during pg_restore.

- like item.window.

Is there any way I can restore the dump file from 8.3 without errors.

 

Peter



Re: [GENERAL] use window as field name in 8.4

2010-06-15 Thread Stephen Frost
* Peter Lee (pe...@flairpackaging.com) wrote:
 I am trying to upgrade our postgresql from 8.3 to 8.4.
 
 I found the window as field name makes many errors during pg_restore.
 
 - like item.window.
 
 Is there any way I can restore the dump file from 8.3 without errors.

The best solution would probably be to rename those fields in the 8.3
database, redo the dump, and then load it into 8.4.  You could also use
the 8.4 pg_dump to connect to and dump out the 8.3 database, but if you
don't change the identifiers, anything trying to use that database will
fall over if it doesn't quote the identifiers.

"window" in 8.4 is a reserved word, which it wasn't in 8.3.

Thanks,

Stephen




Re: [GENERAL] Disk performance

2010-06-15 Thread Greg Smith

Ivan Voras wrote:

(or if you are looking at raw numbers: a 15,000 RPM drive will sustain
15000/60=250 random IOs per second (IOPS)


That's only taking into account the rotation speed--a 15K drive can do 
250 physical commits per second if you never seek anywhere.  A true IOPS 
number also considers average seek latency.  A decent 15K drive will be 
around 4ms there, which makes for 167 IOPS total.
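That estimate can be reproduced with a little arithmetic; a hypothetical sketch (the 4 ms seek and 15,000 RPM figures are the ones assumed above):

```python
# Estimate random IOPS for a 15,000 RPM drive: each random access pays
# the average seek time plus, on average, half a rotation of latency.
rpm = 15000
seek_s = 0.004                       # average seek: 4 ms (assumption)
half_rotation_s = (60.0 / rpm) / 2   # rotational latency: 2 ms

iops = 1.0 / (seek_s + half_rotation_s)
print(round(iops))                   # -> 167
```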


Random note:  this discussion is on the wrong list.  There are more 
people interested in this topic who post regularly on pgsql-performance 
than pgsql-general.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [GENERAL] Is there a way to backup Postgres via SQL commands?

2010-06-15 Thread Greg Smith

Dimitri Fontaine wrote:

Frank Church voi...@googlemail.com writes:
  

Are there SQL commands that can do a backup over a client connection,
rather than from the command line like pgsql etc?



That's pg_dump ?
  


Dimitri has correctly pointed out the flaw behind the basic assumption 
being made by asking this question.  You do not have to use pg_dump from 
the server's command line in order to dump a database.  You can install 
the same PostgreSQL version on whatever system you want to do the dump 
from instead, have it connect to the server as a client, and then dump 
your database on that remote system.  pg_dump is already a client 
application that only requires a network connection to the server and 
sends it a series of SQL commands.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [GENERAL] use window as field name in 8.4

2010-06-15 Thread Steve Crawford

On 06/15/2010 07:58 AM, Peter Lee wrote:


I am trying to upgrade our postgresql from 8.3 to 8.4.

I found the window as field name makes many errors during pg_restore.

- like item.window.

Is there any way I can restore the dump file from 8.3 without errors.

Peter



Does this happen using the 8.3 version of pg_dump or the 8.4 version? 
8.4 added windowing functions, which is likely the cause of your 
difficulty (i.e. "window" is now a reserved word). The recommended 
procedure is to use the dump program from the newer version of PostgreSQL.


I also periodically review the reserved words at 
http://www.postgresql.org/docs/8.4/interactive/sql-keywords-appendix.html and 
(*ahem* try to) avoid using them as names for columns/tables/...


Cheers,
Steve



Re: [GENERAL] use window as field name in 8.4

2010-06-15 Thread Peter Lee
Well, I guess that's the best solution: change the field name.
I hope to find some alternative solution, but I know it won't be easy.

Thank you.

Peter


-Original Message-
* Stephen Frost (sfr...@snowman.net)
The best solution would probably be to rename those fields in the 8.3
database, redo the dump, and then load it into 8.4.  You could also use the
8.4 pg_dump to connect to and dump out the 8.3 database, but if you don't
change the identifiers, anything trying to use that database will fall over
if it doesn't quote the identifiers.

"window" in 8.4 is a reserved word, which it wasn't in 8.3.

Thanks,

Stephen

* Peter Lee (pe...@flairpackaging.com) wrote:
 I am trying to upgrade our postgresql from 8.3 to 8.4.
 
 I found the window as field name makes many errors during pg_restore.
 
 - like item.window.
 
 Is there any way I can restore the dump file from 8.3 without errors.





Re: [GENERAL] GSS Authentication

2010-06-15 Thread greigwise

As suggested below, I just tried this: 

kinit -S POSTGRES/host.domain.com user 

(where "user" is my account name in AD). That then asked for my password and when 
I entered it, it seemed to work. And now klist shows that I have a ticket. 
Doing it this way though, the keytab file doesn't seem to come into play. Does 
this point to something in my keytab file being wrong? 

I did this: 

klist -ket postgres.keytab 

and got: 

KVNO Timestamp         Principal 
---- ----------------- -------------------------------------------------- 
   3 12/31/69 19:00:00 POSTGRES/host.domain@domain.com (DES cbc mode with 
RSA-MD5) 

That timestamp seems kinda funky, doesn't it? 12/31/69? That can't be right, 
can it? 

Thanks again. 

Greig 

- Original Message - 
From: Stephen Frost sfr...@snowman.net 
To: Bryan Montgomery mo...@english.net 
Cc: greigw...@comcast.net, pgsql-general@postgresql.org 
Sent: Saturday, June 12, 2010 8:35:13 AM GMT -05:00 US/Canada Eastern 
Subject: Re: [GENERAL] GSS Authentication 

* Bryan Montgomery (mo...@english.net) wrote: 
 I've been trying this as well off and on. In my case I'm not convinced the 
 AD configuration is correct (And someone else manages that). 

Yeah, that can be a challenge.. but it's *definitely* possible to get 
it set up and working correctly. 

 Can you use kinit with the key tab options to get a good response from the 
 server? I think I should be able to do this .. 
 $ kinit -V -k -t poe3b.keytab HTTP/poe3b.lab2k.net 
 kinit(v5): Preauthentication failed while getting initial credentials 

err, I'm not sure that should be expected to work. 

What does "klist -ek <keytab file>" return? Also, you should be able to 
kinit to *your* princ in the AD, and if you can do that, you should be 
able to use your princ to request the service princ ticket from the KDC 
by doing "kinit -S HTTP/poe3b.lab2k.net your.princ" 

Also, provided your *client* is set up/configured correctly, you should 
be able to see that it acquires the ticket (by using klist) when you try 
to connect to the server, even if the server is misconfigured. 

 I'd be interested to know if you get something different - and the steps you 
 went through on the AD side. 

You have to create an account in Active Directory for the PG service and 
then use: 

ktpass /princ POSTGRES/myserver.mydomain@mydomain.com /mapuser 
postg...@mydomain.com /pass mypass /crypto AES256-SHA1 /ptype 
KRB5_NT_PRINCIPAL /out krb5.keytab 

Then copy that krb5.keytab to the server. Note that you then have to 
adjust the server config to have service name set to POSTGRES, and 
adjust clients using the environment variables to indiciate they should 
ask for POSTGRES (instead of the postgres default). 

Thanks, 

Stephen 





Re: [GENERAL] Anyone know about PgMQ?

2010-06-15 Thread Rory Campbell-Lange
OK -- I've spotted another project, AMQP for PostgreSQL.
http://lethargy.org/~jesus/writes/amqp-for-postgresql

Which looks pretty good.

Rory

On 11/06/10, Rory Campbell-Lange (r...@campbell-lange.net) wrote:
 I was intrigued to see Chris Bohn's page about PgMQ (Embedding
 messaging in PostgreSQL) on the PGCon website at 
 http://www.pgcon.org/2010/schedule/events/251.en.html
 
 I have also had a look at the pgfoundry site at 
 http://pgfoundry.org/projects/pgmq/ -- it's empty.
 
 I've tried to email Chris to find out more about his project, but
 haven't received a response. Does any one have any details of this
 project?
 
 I am very interested in the possibility of linking Postgres events such
 as triggers to RabbitMQ messaging queues.

-- 
Rory Campbell-Lange
r...@campbell-lange.net



Re: [GENERAL] IMMUTABLE columns in tables?

2010-06-15 Thread Adrian von Bidder
On Tuesday 15 June 2010 18.56:46 Craig Ringer wrote:

[IMMUTABLE columns]

 Column privs may be bypassed by a superuser

To be fair, constraints can be removed via ALTER TABLE, so this is not an 
argument.

For me, the compelling reason to propose this is that it's much more 
readable than either triggers or GRANTs.  It may be more efficient at 
runtime as well, but (i) this is just speculation and (ii) it has to be 
weighed against the small cost even in cases it is not used, and the cost of 
maintaining the additional code paths.

Chris:
   - What do other databases use as syntax for this?

Either my Google skills are lacking, or none of Oracle, MySQL, Sybase or MS 
SQL currently implements such a thing.

There is a proposal from 2008 for MS SQL:
http://connect.microsoft.com/SQLServer/feedback/details/346200
which is essentially identical to my proposal in its first part (IMMUTABLE as 
a regular constraint) and adds an extension of the form IMMUTABLE WHEN 
condition, where the condition is probably anything that would also be valid 
as a check constraint.

I think the latter part is probably too complicated and opens too many gray 
areas (it's not IMMUTABLE if I can just UPDATE to invalidate the condition, 
change my immutable column, and update again to undo the first change...)

The only extension I think might be useful is IMMUTABLE WHEN NOT NULL, which 
would allow the column to be left unset on INSERT and later be frozen.

Another question opened in that MS SQL change request: should rows with 
IMMUTABLE values be allowed to be deleted?  Judgement call, personally, I 
think that immutability and DELETE rights have nothing to do with each other 
(I imagine its use primarily to protect the id column against [accidental?] 
modification.)

   - Does SQL standard have anything to say about how this sort
 of thing ought to be declared?

It appears that at least the SQL 2008 standard is not available for free.  
If it were in SQL92 or whatever, I'd expect it to be already implemented, 
but admittedly I did not check those standards.

thanks for the feedback!
-- vbi

-- 
This is what open source software is all about: creating knockoffs and
giving them away, destroying the value of whatever the other guy is
selling.
-- Danyel Lyons, Forbes




Re: [GENERAL] Inconsistency with LIKE and SIMILAR TO

2010-06-15 Thread Tom Lane
Rosiński Krzysztof 2 - Detal TP 
krzysztof.rosins...@telekomunikacja.pl writes:
 Operators LIKE and SIMILAR TO work differently

Yup.  It's an implementation restriction (see the comment for
subquery_Op in gram.y if you want to know).

regards, tom lane



Re: [GENERAL] Does enterprisedb.com down?

2010-06-15 Thread Vick Khera
On Tue, Jun 15, 2010 at 1:21 AM, M. Bashir Al-Noimi ad...@mbnoimi.net wrote:
 So I wish to get a clarification about this issue: does pg forbid my
 country? Is it still open source?

enterprisedb.com != postgres open source project.  Postgres, as far as
I know, has no restrictions on who may use it anywhere in the universe
as long as they abide by the license.



[GENERAL] Backups / replication

2010-06-15 Thread Oliver Kohll - Mailing Lists
Hello,

I'm interested in using WAL shipping / replication for backup purposes but have 
no interest in failover. Currently my situation is:

I have two servers, live and backup, which are in different cities. The backup 
server is also a test/development machine.

Backups of my most important database are made hourly with pg_dump, excluding 
some larger tables with non-critical logging data. Even so, as the database 
grows, backups are taking longer and it looks as though they may start to 
impact performance. A full backup is made nightly and transferred to the backup 
machine, along with all of the day's hourly backups.

I'm looking into using replication by WAL shipping - after all, there's no use 
to backing up data which hasn't changed since last time - only a small 
percentage of records are created/updated. However, I need

a) to be able to restore to a point in time easily, which I can do to within an 
hour at the moment by restoring the correct dump. Sometimes users ask for a 
restore having accidentally updated/deleted records.
b) to carry on running a test server database, that means one that's read and 
writeable.

I obviously can't use a replication slave as a read/write test server at the 
same time. At the moment I've thought of a couple of options, I don't know if 
either is possible - I have a bit of a hazy idea of WAL replication.

1) Continuously ship the WAL records to somewhere on the test server unknown to 
Postgres but run the test machine as a normal database completely separately. 
If a backup is needed, delete the test database, restore to the last full 
backup (a filesystem backup?) and copy all WAL records into Postgres' directory 
so it can see them. Start it up configured to replay them, up to a certain time.

2) Run two instances of Postgres on the test/backup server on different ports, 
one configured as a replication slave, one normal. I'm not sure if this is 
possible with the RPM builds I'm using.

Are either of those two likely? Any other suggestions? Another question is will 
the replication coming in v9.0 change things and would it be worth holding off 
until then? In particular Command Prompt's PITR tools look useful for restoring 
to a particular point in time, will these still work or will there be 
equivalents?

Regards
Oliver Kohll

oli...@agilebase.co.uk / +44(0)7814 828608 / skype:okohll






Re: [GENERAL] IMMUTABLE columns in tables?

2010-06-15 Thread Vick Khera
On Tue, Jun 15, 2010 at 11:54 AM, Chris Browne cbbro...@acm.org wrote:
 I'd like that feature, and I don't think it takes too much arguing to
 get to the point that a declarative IMMUTABLE control is rather less


Not only that, but if you were to, say, make the PK field IMMUTABLE
you could then optimize out the locks involved when updating rows with
FK relations to the PK (or any other FK field, I suppose)



Re: [GENERAL] Inconsistency with LIKE and SIMILAR TO

2010-06-15 Thread Bruce Momjian
Thom Brown wrote:
 2010/6/15 Rosiński Krzysztof 2 - Detal TP 
 krzysztof.rosins...@telekomunikacja.pl
 
  Hello.
 
  Operators LIKE and SIMILAR TO work differently
  This query works ok.:
 
  SELECT *
   FROM www.test
   WHERE expr like any (ARRAY['a','b']);
 
  But this not work:
 
  SELECT *
   FROM www.test
   WHERE expr similar to any (ARRAY['a','b']);
 
  ERROR:  syntax error at or near "any"
  LINE 3:   WHERE expr similar to any (ARRAY['a','b']);
 
  Is this shouldn't works ?
 
 
 SIMILAR TO and LIKE aren't synonyms for each other.  SIMILAR TO uses a cross
 between LIKE syntax and regular expressions for the evaluated expression.
 I'm not entirely sure what your query is supposed to be doing.  I thought
 ANY was a keyword denoting a row-wise comparison?

LIKE is only similar to SIMILAR TO.  ;-)

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + None of us is going to be here forever. +



Re: [GENERAL] Backups / replication

2010-06-15 Thread Alan Hodgson
On Tuesday, June 15, 2010, Oliver Kohll - Mailing Lists 
oliver.li...@gtwm.co.uk wrote:
 Are either of those two likely? Any other suggestions? Another question
 is will the replication coming in v9.0 change things and would it be
 worth holding off until then? In particular Command Prompt's PITR tools
 look useful for restoring to a particular point in time, will these
 still work or will there be equivalents?

PITR in recent versions allows restoration to any point in time after the 
base backup was created, assuming you have the WAL logs from that point 
forward.
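For reference, PITR in the 8.x series is driven by a recovery.conf file placed in the data directory before startup; a minimal sketch (the archive path and target timestamp below are placeholders, not values from this thread):

```
# recovery.conf - replay archived WAL up to a chosen moment
restore_command = 'cp /path/to/wal_archive/%f %p'
recovery_target_time = '2010-06-15 12:00:00'
```

On startup the server restores and replays WAL segments via restore_command and stops once it passes recovery_target_time.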

-- 
No animals were harmed in the recording of this episode. We tried but that 
damn monkey was just too fast.



Re: [GENERAL] How to emulate password generation in PHP with PlpgSQL?

2010-06-15 Thread Josh Kupershmidt
On Sun, Jun 13, 2010 at 8:45 AM, Andre Lopes lopes80an...@gmail.com wrote:

 But I'am not getting how to generate the SALT. Can someone give me a clue on
 how to do this.

The salt() function you posted returns 10 random hexadecimal digits.
You could mimic it with something like:
SELECT substr(md5(RANDOM()::text), 0, 11);
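Wrapped up as a reusable server-side function, a minimal sketch might look like this (the function name is an invention for illustration; for real password handling, pgcrypto's gen_salt()/crypt() are usually a better fit):

```sql
-- Returns 10 random hexadecimal digits, like the PHP salt() being emulated.
CREATE OR REPLACE FUNCTION make_salt() RETURNS text AS $$
    SELECT substr(md5(random()::text), 1, 10);
$$ LANGUAGE sql VOLATILE;

SELECT make_salt();
```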

Josh



Re: [GENERAL] GSS Authentication

2010-06-15 Thread Stephen Frost
* greigw...@comcast.net (greigw...@comcast.net) wrote:
 kinit -S POSTGRES/host.domain.com user 
 
 (where user is my account name in AD). That then asked for my password and 
 when I entered it, it seemed to work. And now klist shows that I have a 
 ticket. Doing it this way though, the keytab file doesn't seem to come into 
 play. Does this point to something in my keytab file being wrong? 

Good that you were able to get a ticket manually.  Next you need to try
getting a client application (eg: psql) to get that same ticket.  Before
you run psql, do:

kdestroy
kinit
export PGKRBSRVNAME=POSTGRES
psql -d postgres -h host.domain.com
klist

And see if you acquired the same ticket you got with the manual klist.

 I did this: 
 
 klist -ket postgres.keytab 
 
 and got: 
 
 KVNO Timestamp         Principal
 ---- ----------------- --------------------------------------------------
    3 12/31/69 19:00:00 POSTGRES/host.domain@domain.com (DES cbc mode with 
 RSA-MD5) 
 
 That timestamp seems kinda funky, doesn't it? 12/31/69? That can't be right, 
 can it? 

The timestamp isn't really right, but it shouldn't really hurt either-
that's just when it was created.  The encryption is crappy though and
might be disabled by default (MIT Kerberos recently started disabling
DES and lower encryption because it's horribly insecure).  Check your
/etc/krb5.conf for permitted_enctypes.  Also, after you get a
POSTGRES/host.domain.com ticket using kinit (or psql), do a klist -e and
see if the encryption type of the ticket you got matches that of the
keytab.  If it doesn't, then you might have created multiple keys for
the same princ on the server (not generally a bad thing), but not
exported and loaded all of them into the keytab on the unix system
(which would be a problem...).

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [GENERAL] Backups / replication

2010-06-15 Thread Adrian von Bidder
[continuous backup]

On Tuesday 15 June 2010 21.42:52 Oliver Kohll - Mailing Lists wrote:
 1) Continuously ship the WAL records to somewhere on the test server
 unknown to Postgres but run the test machine as a normal database
 completely separately. If a backup is needed, delete the test database,
 restore to the last full backup (a filesystem backup?) and copy all WAL
 records into Postgres' directory so it can see them. Start it up
 configured to replay them, up to a certain time.

 2) Run two instances of Postgres on the test/backup server on different
 ports, one configured as a replication slave, one normal. I'm not sure
 if this is possible with the RPM builds I'm using.

Both scenarios are possible.  I don't know the rpm builds you're using; the 
Debian packages allow configuring two instances on two different ports 
AFAIK.  Possibly the rpm installations do, too.  Even if not: hacking up a 
2nd start script which runs postgres against a different data directory / 
config file should be quite trivial.

Keeping the base backup plus all the WAL files for the case where you need to 
restore will need quite a bit of disk space if your database is reasonably 
big (on one database I administered, I scheduled weekly base backups and 
kept a week of WAL - since we sometimes had quite a lot of changes in the db, 
the WAL was quickly 10 times as big as the base backup).  So depending on your 
DB load, keeping a 2nd installation of postgres running and continuously 
reading the WAL files might be cheaper in terms of disk space.

(and with 9.0, you even get a near real-time read-only copy of the db for 
free...)

cheers
-- vbi

-- 
90% of the people do not understand copyright,
the other 10% simply ignore it.
-- Aigars Mahinovs


signature.asc
Description: This is a digitally signed message part.


[GENERAL] Ideal Disk recommendation for PostgreSQL

2010-06-15 Thread Bhaskar Sirohi
Hi All,

We are right now in the initial phase of setting up a production server with a
PostgreSQL database installed, and would require help with the disk
configuration. The database size would grow to approx. 500 GB. I have gone
through the following link
http://momjian.us/main/writings/pgsql/hw_performance/index.html which was
very helpful.  However what I would like to understand is, do you have any
figures around minimum / optimal disk performance for postgresql to operate?
As an example I remember MS have a document on Exchange that gives some
rough guidance around the number of IOPS it needs depending on number of
users / mailbox size.

Any suggestions are welcome !

-- 
Thanks and Regards

Bhaskar Sirohi
bhaskar.sir...@druvaa.com
www.druvaa.com
(C) +91 986 094 6619


[GENERAL] Problem with triggers

2010-06-15 Thread Sid
Hi,

I am writing a trigger function for validating values inserted into a table. The
goal is to print user-friendly messages when an inserted value is wrong.
When I check for null values everything works as expected:
Inside trigger I have lines:
if (new.tvalue is null) then
RAISE EXCEPTION 'error message';
end if;
This one works ok.

But when I try to do something like that:

if length(new.tvalue) > 20 then
  RAISE EXCEPTION 'error message - too long';
end if;

then I just get the information from the database that the given field value is too long
(of course only when I try to insert a too-long value).

In fact, even if I put RAISE EXCEPTION as the first line inside the trigger it does
not work.
All I get is ERROR: value too long...



Trigger declaration looks as follows:
CREATE TRIGGER bir_validate
  BEFORE INSERT OR UPDATE
  ON portal.documentation
  FOR EACH ROW
  EXECUTE PROCEDURE portal.dynamic_trigger_validate();

Postgres 8.4.4 on Windows 7 32bit

My question is: why do I get the information about the too-long value before the
trigger fires?
Can I change this behavior?


-- 
Best regards
Sid


Re: [GENERAL] GSS Authentication

2010-06-15 Thread greigwise
OK. I tried what you suggested. I pasted the whole sequence of commands and the 
results below. As you can see, the connection to postgres still failed, but it 
looks like it actually acquired the ticket (I think). What do you make of that? 

Thanks again for the help. 
Greig 


[u...@client ~]$ kdestroy

[u...@client ~]$ klist 
klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_503) 

Kerberos 4 ticket cache: /tmp/tkt503 
klist: You have no tickets cached 

[u...@client ~]$ kinit 
Password for u...@domain.com: 

[u...@client ~]$ klist 
Ticket cache: FILE:/tmp/krb5cc_503 
Default principal: u...@domain.com 

Valid starting       Expires              Service principal 
06/15/10 17:16:37    06/16/10 03:16:42    krbtgt/domain@domain.com 
        renew until 06/16/10 03:16:37 

Kerberos 4 ticket cache: /tmp/tkt503 
klist: You have no tickets cached 

[u...@client ~]$ psql -d postgres -h server.DOMAIN.COM 
psql: FATAL: accepting GSS security context failed 
DETAIL: Miscellaneous failure: Unknown code krb5 181 

[u...@client ~]$ klist 
Ticket cache: FILE:/tmp/krb5cc_503 
Default principal: u...@domain.com 

Valid starting       Expires              Service principal 
06/15/10 17:16:37    06/16/10 03:16:42    krbtgt/domain@domain.com 
        renew until 06/16/10 03:16:37 
06/15/10 17:17:01    06/16/10 03:16:42    POSTGRES/server.domain@domain.com 
        renew until 06/16/10 03:16:37 

Kerberos 4 ticket cache: /tmp/tkt503 
klist: You have no tickets cached 


- Original Message - 
From: Stephen Frost sfr...@snowman.net 
To: greigw...@comcast.net 
Cc: pgsql-general@postgresql.org, Bryan Montgomery mo...@english.net 
Sent: Tuesday, June 15, 2010 4:25:55 PM GMT -05:00 US/Canada Eastern 
Subject: Re: [GENERAL] GSS Authentication 

* greigw...@comcast.net (greigw...@comcast.net) wrote: 
 kinit -S POSTGRES/host.domain.com user 
 
 (where user is my account name in AD). That then asked for my password and 
 when I entered it, it seemed to work. And now klist shows that I have a 
 ticket. Doing it this way though, the keytab file doesn't seem to come into 
 play. Does this point to something in my keytab file being wrong? 

Good that you were able to get a ticket manually. Next you need to try 
getting a client application (eg: psql) to get that same ticket. Before 
you run psql, do: 

kdestroy 
kinit 
export PGKRBSRVNAME=POSTGRES 
psql -d postgres -h host.domain.com 
klist 

And see if you acquired the same ticket you got with the manual klist. 

 I did this: 
 
 klist -ket postgres.keytab 
 
 and got: 
 
 KVNO Timestamp         Principal
 ---- ----------------- --------------------------------------------------
    3 12/31/69 19:00:00 POSTGRES/host.domain@domain.com (DES cbc mode with 
 RSA-MD5) 
 
 That timestamp seems kinda funky, doesn't it? 12/31/69? That can't be right, 
 can it? 

The timestamp isn't really right, but it shouldn't really hurt either- 
that's just when it was created. The encryption is crappy though and 
might be disabled by default (MIT Kerberos recently started disabling 
DES and lower encryption because it's horribly insecure). Check your 
/etc/krb5.conf for permitted_enctypes. Also, after you get a 
POSTGRES/host.domain.com ticket using kinit (or psql), do a klist -e and 
see if the encryption type of the ticket you got matches that of the 
keytab. If it doesn't, then you might have created multiple keys for 
the same princ on the server (not generally a bad thing), but not 
exported and loaded all of them into the keytab on the unix system 
(which would be a problem...). 

Thanks, 

Stephen 


signature.asc
Description: Digital signature



Re: [GENERAL] GSS Authentication

2010-06-15 Thread greigwise
I just notice that in your message you had more text further down (regarding 
the DES encryption). I didn't see that at first. So, I did klist -e as you 
suggested and I got this: 

Ticket cache: FILE:/tmp/krb5cc_502 
Default principal: u...@domain.com 

Valid starting Expires Service principal 
06/15/10 18:07:33 06/16/10 04:07:36 krbtgt/domain@domain.com 
renew until 06/16/10 04:07:33, Etype (skey, tkt): ArcFour with HMAC/md5, 
ArcFour with HMAC/md5 


Kerberos 4 ticket cache: /tmp/tkt502 
klist: You have no tickets cached 

Is that the problem? I don't see anything about permitted enctypes in my 
krb5.conf. Should I add something in there to allow DES, or should I recreate 
my keytab to use a different encryption type? If so, what should I use? 

Thanks again. I feel like I'm making progress. 
Greig 

- Original Message - 
From: Stephen Frost sfr...@snowman.net 
To: greigw...@comcast.net 
Cc: pgsql-general@postgresql.org, Bryan Montgomery mo...@english.net 
Sent: Tuesday, June 15, 2010 4:25:55 PM GMT -05:00 US/Canada Eastern 
Subject: Re: [GENERAL] GSS Authentication 

* greigw...@comcast.net (greigw...@comcast.net) wrote: 
 kinit -S POSTGRES/host.domain.com user 
 
 (where user is my account name in AD). That then asked for my password and 
 when I entered it, it seemed to work. And now klist shows that I have a 
 ticket. Doing it this way though, the keytab file doesn't seem to come into 
 play. Does this point to something in my keytab file being wrong? 

Good that you were able to get a ticket manually. Next you need to try 
getting a client application (eg: psql) to get that same ticket. Before 
you run psql, do: 

kdestroy 
kinit 
export PGKRBSRVNAME=POSTGRES 
psql -d postgres -h host.domain.com 
klist 

And see if you acquired the same ticket you got with the manual klist. 

 I did this: 
 
 klist -ket postgres.keytab 
 
 and got: 
 
 KVNO Timestamp         Principal
 ---- ----------------- --------------------------------------------------
    3 12/31/69 19:00:00 POSTGRES/host.domain@domain.com (DES cbc mode with 
 RSA-MD5) 
 
 That timestamp seems kinda funky, doesn't it? 12/31/69? That can't be right, 
 can it? 

The timestamp isn't really right, but it shouldn't really hurt either- 
that's just when it was created. The encryption is crappy though and 
might be disabled by default (MIT Kerberos recently started disabling 
DES and lower encryption because it's horribly insecure). Check your 
/etc/krb5.conf for permitted_enctypes. Also, after you get a 
POSTGRES/host.domain.com ticket using kinit (or psql), do a klist -e and 
see if the encryption type of the ticket you got matches that of the 
keytab. If it doesn't, then you might have created multiple keys for 
the same princ on the server (not generally a bad thing), but not 
exported and loaded all of them into the keytab on the unix system 
(which would be a problem...). 

Thanks, 

Stephen 


signature.asc
Description: Digital signature



Re: [GENERAL] Problem with triggers

2010-06-15 Thread Adrian Klaver

On 06/15/2010 02:01 PM, Sid wrote:

Hi,

I am writing a trigger function for validating values inserted into a table. The
goal is to print user-friendly messages when an inserted value is wrong.
When I check for null values everything works as expected:
Inside trigger I have lines:
if (new.tvalue is null) then
 RAISE EXCEPTION 'error message';
end if;
This one works ok.

But when I try to do something like that:

if length(new.tvalue) > 20 then
   RAISE EXCEPTION 'error message - too long';
end if;

then I just get the information from the database that the given field value is too long
(of course only when I try to insert a too-long value).

In fact, even if I put RAISE EXCEPTION as the first line inside the trigger it does
not work.
All I get is ERROR: value too long...



Trigger declaration looks as follows:
CREATE TRIGGER bir_validate
   BEFORE INSERT OR UPDATE
   ON portal.documentation
   FOR EACH ROW
   EXECUTE PROCEDURE portal.dynamic_trigger_validate();

Postgres 8.4.4 on Windows 7 32bit

My question is: why do I get information about too long value before trigger
fires?
Can I change this behavior?




The database is beating you to the validation. Basically you are trying 
to override the built-in validation. To make this work you will need to 
let the field be longer than you want, i.e. varchar with no length 
argument, and then let your trigger handle the validations.
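A minimal sketch of that approach, reusing the names from the original post (the table definition here is invented for illustration): declare the column as plain text so nothing is rejected before the trigger runs, and do every check yourself.

```sql
CREATE TABLE portal.documentation (tvalue text);  -- no built-in length limit

CREATE OR REPLACE FUNCTION portal.dynamic_trigger_validate() RETURNS trigger AS $$
BEGIN
    IF NEW.tvalue IS NULL THEN
        RAISE EXCEPTION 'Please supply a value for tvalue.';
    END IF;
    IF length(NEW.tvalue) > 20 THEN
        RAISE EXCEPTION 'tvalue may be at most 20 characters long.';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER bir_validate
    BEFORE INSERT OR UPDATE ON portal.documentation
    FOR EACH ROW
    EXECUTE PROCEDURE portal.dynamic_trigger_validate();
```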


--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Problem with triggers

2010-06-15 Thread Tom Lane
Adrian Klaver adrian.kla...@gmail.com writes:
 On 06/15/2010 02:01 PM, Sid wrote:
 I am writing trigger function for validating values inserted into table. The
 goal is to print user friendly messages when inserted value is wrong.

 My question is: why do I get information about too long value before trigger
 fires?

 The database is beating you to the validation.

People try this every few months :-(, but it's basically a dead-end idea.
A large majority of the things you might want to report an error for are
going to be rejected by the datatype input functions for the column
datatypes --- for example, you're not going to be able to print a user
friendly message on a bad timestamp, because that will be noticed long
before any trigger gets to fire.

You can either decide that the built-in error messages aren't so awful
after all, or do your data validation on the client side.

Or I guess you could lobotomize the database completely by making all
your fields be unlimited-length varchar so that there's no interesting
checking to be done.  But you really, really don't want to go there.

regards, tom lane



Re: [GENERAL] Does enterprisedb.com down?

2010-06-15 Thread Jayadevan M
Hello,

 The EnterpriseDB binaries are not on the community ftp servers, and
 can't be because some of the community servers are in the USA, and those
 USA servers might not block embargoed countries. 
Thanks for the clarification. We are in a very early stage of evaluating 
EnterpriseDB for our product and the information may have some relevance. 
So EnterpriseDB products can't be provided to clients based in embargoed 
countries?

Regards,
Jayadevan














Re: [GENERAL] Does enterprisedb.com down?

2010-06-15 Thread Bruce Momjian
Jayadevan M wrote:
 Hello,
 
  The EnterpriseDB binaries are not on the community ftp servers, and
  can't be because some of the community servers are in the USA, and those
  USA servers might not block embargoed countries. 

 Thanks for the clarification. We are in a very early stage of evaluating 
 EnterpriseDB for our product and the information may have some relevance. 
 So EnterpriseDB products can't be provided to clients based in embargoed 
 countries?

Uh, I don't know.  However, I talked to Dave Page today and he is making
progress on understanding how things can be done better, and Peter
mentioned that Debian is doing some things that look promising, so I
hope things will be improved soon.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + None of us is going to be here forever. +



[GENERAL] consequent PQsendQueryPrepared() failed: another command is already in progress

2010-06-15 Thread Anton Maksimenkov
I'm using the libpq C library. I prepared a query and am trying to call it
many times, but it succeeds only the first time and then fails with this error:

... another command is already in progress

Here is my testbed:
int
main(register int const argc, register char *const argv[])
{
    PGconn     *conn;
    PGresult   *res;

    conn = PQsetdbLogin(PGHOST, PGPORT, PGOPTIONS, PGTTY, PGDBNAME,
                        PGLOGIN, PGPWD);
    if (PQstatus(conn) == CONNECTION_BAD) {
        fprintf(stderr, "PQstatus(): %s", PQerrorMessage(conn));
        PQfinish(conn);
        exit(1);
    }
    if ((res = PQprepare(conn, "GET_USER", "SELECT uid FROM users WHERE "
                         "uid = $1::INT LIMIT 1", 1, NULL)) == NULL) {
        fprintf(stderr, "PQprepare() res == NULL");
        PQfinish(conn);
        exit(1);
    }
    if (PQresultStatus(res) != PGRES_COMMAND_OK) {
        fprintf(stderr, "PQprepare() failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        exit(1);
    }

    fprintf(stderr, "FIRST: ");
    query(conn);

    fprintf(stderr, "SECOND: ");
    query(conn);

    exit(0);
}

int
query(PGconn *conn)
{
    const char *paramValues[1];
    int         paramLengths[1];
    int         paramFormats[1];
    uint32_t    binaryIntVal;
    PGresult   *res;

    binaryIntVal = htonl((uint32_t) 15);
    paramValues[0] = (char *) &binaryIntVal;
    paramLengths[0] = sizeof(binaryIntVal);
    paramFormats[0] = 1;

    if (PQsendQueryPrepared(conn, "GET_USER", 1, paramValues,
                            paramLengths, paramFormats, 1) == 0) {
        fprintf(stderr, "PQsendQueryPrepared() failed: %s",
                PQerrorMessage(conn));
        return -1;
    }
    while (PQisBusy(conn))
        if (PQconsumeInput(conn) == 0) {
            fprintf(stderr, "PQconsumeInput() failed: %s",
                    PQerrorMessage(conn));
            return -1;
        }

    if ((res = PQgetResult(conn)) == NULL) {
        fprintf(stderr, "PQgetResult() res == NULL");
        PQfinish(conn);
        return -1;
    }
    if (PQresultStatus(res) != PGRES_TUPLES_OK) {
        fprintf(stderr, "PQgetResult() failed: %s",
                PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return -1;
    }

    int i, uidFN;
    char *uidPTR;
    int uid;

    uidFN = PQfnumber(res, "uid");
    printf("tuples %d\n", PQntuples(res));
    for (i = 0; i < PQntuples(res); i++) {
        uidPTR = PQgetvalue(res, i, uidFN);
        uid = ntohl(*((uint32_t *) uidPTR));
        printf("tuple %d: uid[%d]\n", i, uid);
    }
    PQclear(res);

    return 0;
}

$ ./test
FIRST: tuples 1
tuple 0: uid[15]
SECOND: PQsendQueryPrepared() failed: another command is already in progress

Where was I wrong?


And another question. Is it possible to simultaneously keep a number
of prepared queries and run any of them from time to time?
Something like this:
{
 PQprepare(Q1);
 PQprepare(Q2);
 PQprepare(Q3);
 ...
 query(Q1);
 ...
 query(Q2);
 ...
 query(Q1);
 ...
 query(Q3);
 ...
 query(Q2);
 ...
 query(Q3);
 ...
 query(Q1);
 ...
}
-- 
antonvm



Re: [GENERAL] Problem with triggers

2010-06-15 Thread Adrian von Bidder
Heyho!

On Wednesday 16 June 2010 00.56:14 Adrian Klaver wrote:
  My question is: why do I get information about too long value before
  trigger fires?
  Can I change this behavior?

I firmly feel that friendly error messages like this belong in the 
application and not in the DB.  Next thing you'll want translated messages 
as well, and your triggers will become so complex that you won't want to 
maintain them ...

 The database is beating you to the validation.

With triggers.  A question to the experts: Couldn't this, in theory, be 
implemented within the rules system?  From what I understand, rules are run 
right after the query is parsed; I'd expect data validation to come a bit 
later.  Not sure if this is right.

cheers
-- vbi


-- 
Or is it?


signature.asc
Description: This is a digitally signed message part.


Re: [GENERAL] GSS Authentication

2010-06-15 Thread Greig Wise
Nope.  I get this:

kinit(v5): Client not found in Kerberos database while getting initial 
credentials


On Jun 15, 2010, at 10:03 PM, Bryan Montgomery wrote:

 I'm not in front of a linux machine, but does
 kinit -kt postgres.keytab -S POSTGRES/host.domain.com grant a ticket without 
 asking for the password?
 
 On Tue, Jun 15, 2010 at 2:38 PM, greigw...@comcast.net wrote:
 
 As suggested below, I just tried this:
 
 kinit -S POSTGRES/host.domain.com user 
 
 (where user is my account name in AD).  That then asked for my password and 
 when I entered it, it seemed to work. And now klist shows that I have a 
 ticket.  Doing it this way though, the keytab file doesn't seem to come into 
 play.  Does this point to something in my keytab file being wrong?
 
 I did this:  
 
 klist -ket postgres.keytab
 
 and got: 
 
 KVNO Timestamp         Principal
 ---- ----------------- --------------------------------------------------
    3 12/31/69 19:00:00 POSTGRES/host.domain@domain.com (DES cbc mode with 
 RSA-MD5)
 
 That timestamp seems kinda funky, doesn't it?  12/31/69?  That can't be 
 right, can it?
 
 
 Thanks again.
 
 Greig
 
 - Original Message -
 From: Stephen Frost sfr...@snowman.net
 To: Bryan Montgomery mo...@english.net
 Cc: greigw...@comcast.net, pgsql-general@postgresql.org
 Sent: Saturday, June 12, 2010 8:35:13 AM GMT -05:00 US/Canada Eastern
 Subject: Re: [GENERAL] GSS Authentication
 
 * Bryan Montgomery (mo...@english.net) wrote:
  I've been trying this as well off and on. In my case I'm not convinced the
  AD configuration is correct (And someone else manages that).
 
 Yeah, that can be a challenge..  but it's *definitely* possible to get
 it set up and working correctly.
 
  Can you use kinit with the key tab options to get a good response from the
  server? I think I should be able to do this ..
  $ kinit -V -k -t poe3b.keytab HTTP/poe3b.lab2k.net
  kinit(v5): Preauthentication failed while getting initial credentials
 
 err, I'm not sure that should be expected to work.
 
What does klist -ek <keytab file> return?  Also, you should be able to
 kinit to *your* princ in the AD, and if you can do that, you should be
 able to use your princ to request the service princ ticket from the KDC
 by doing kinit -S HTTP/poe3b.lab2k.net your.princ
 
 Also, provided your *client* is set up/configured correctly, you should
 be able to see that it acquires the ticket (by using klist) when you try
 to connect to the server, even if the server is misconfigured.
 
  I'd be interested to know if you get something different - and the steps you
  went through on the AD side.
 
 You have to create an account in Active Directory for the PG service and
 then use:
 
 ktpass /princ POSTGRES/myserver.mydomain@mydomain.com /mapuser
 postg...@mydomain.com /pass mypass /crypto AES256-SHA1 /ptype
 KRB5_NT_PRINCIPAL /out krb5.keytab
 
 Then copy that krb5.keytab to the server.  Note that you then have to
 adjust the server config to have service name set to POSTGRES, and
 adjust clients using the environment variables to indiciate they should
 ask for POSTGRES (instead of the postgres default).
 
 Thanks,
 
 Stephen
 



Re: [GENERAL] Ideal Disk recommendation for PostgreSQL

2010-06-15 Thread A. Kretschmer
In response to Bhaskar Sirohi :
 Hi All,
 
 We are right now in initial phase to setup a production server having
 PostgreSQL database installed and would require help with Disk configuration.
 The database size would grow approx to 500 GB. I have gone through the
 following link http://momjian.us/main/writings/pgsql/hw_performance/index.html
 which was very helpful.  However what I would like to understand is, do you
 have any figures around minimum / optimal disk performance for postgresql to
 operate? As an example I remember MS have a document on Exchange that gives

Not really. If you have enough money, you can divide your disk-space
into 2 or 3 parts (separate disks and controllers) for data, indexes and
WAL-logging, using tablespaces. But all depends on your workload (and
the available money ...)
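The tablespace part of that might be sketched like this (the paths, table, and index names are illustrative only; note that WAL itself is moved by relocating the pg_xlog directory, not via a tablespace):

```sql
-- Separate disks/controllers mounted at illustrative paths:
CREATE TABLESPACE data_disk  LOCATION '/mnt/disk2/pgdata';
CREATE TABLESPACE index_disk LOCATION '/mnt/disk3/pgindex';

-- Place the heap and its index on different spindles.
CREATE TABLE orders (id serial PRIMARY KEY, total numeric) TABLESPACE data_disk;
CREATE INDEX orders_total_idx ON orders (total) TABLESPACE index_disk;
```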


Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



Re: [GENERAL] Problem with triggers

2010-06-15 Thread Sid
2010/6/16 Tom Lane t...@sss.pgh.pa.us

 Adrian Klaver adrian.kla...@gmail.com writes:
  On 06/15/2010 02:01 PM, Sid wrote:
  I am writing trigger function for validating values inserted into table.
 The
  goal is to print user friendly messages when inserted value is wrong.

  My question is: why do I get information about too long value before
 trigger
  fires?

  The database is beating you to the validation.

 People try this every few months :-(, but it's basically a dead-end idea.

I tried to search for this problem, but I failed :(. I spent a few hours
trying to find what was wrong with my code.

A large majority of the things you might want to report an error for are
 going to be rejected by the datatype input functions for the column
 datatypes --- for example, you're not going to be able to print a user
 friendly message on a bad timestamp, because that will be noticed long
 before any trigger gets to fire.

I didn't think about that,


 You can either decide that the built-in error messages aren't so awful
 after all, or do your data validation on the client side.

Yes, I'll probably do this that way.


 Or I guess you could lobotomize the database completely by making all
 your fields be unlimited-length varchar so that there's no interesting
 checking to be done.  But you really, really don't want to go there.

No. This solution is too ugly even for me :)



regards, tom lane


Thank you for your explanation, Tom


-- 
Best regards
Sid


Re: [GENERAL] Problem with triggers

2010-06-15 Thread Sid
2010/6/16 Adrian von Bidder avbid...@fortytwo.ch

 Heyho!

 On Wednesday 16 June 2010 00.56:14 Adrian Klaver wrote:
   My question is: why do I get information about too long value before
   trigger fires?
   Can I change this behavior?

 I firmly feel that friendly error messages like this belong in the
 application and not in the DB.  Next thing you'll want translated
 messages
 as well, and your triggers will become so complex that you won't want to
 maintain
 them ...

My plan was to keep as much of the application logic as possible on the
database side.
The triggers were not so complex, because I used one trigger function for many
tables (http://wiki.postgresql.org/wiki/PL/pgSQL_Dynamic_Triggers), so
printing user-friendly information about null values was just one line.



  The database is beating you to the validation.

 With triggers.  A question to the experts: Couldn't this, in theory, be
 implemented within the rules system?  From what I understand they are run
 right after the query is parsed; I'd expect data validation to come a bit
 later.  Not sure if this is right.

 cheers
 -- vbi


-- 
Best regards
Sid


Re: [GENERAL] Ideal Disk recommendation for PostgreSQL

2010-06-15 Thread Scott Marlowe
On Tue, Jun 15, 2010 at 1:48 PM, Bhaskar Sirohi
bhaskar.sir...@druvaa.com wrote:
 Hi All,

 We are right now in initial phase to setup a production server having
 PostgreSQL database installed and would require help with Disk
 configuration. The database size would grow approx to 500 GB. I have gone
 through the following link
 http://momjian.us/main/writings/pgsql/hw_performance/index.html which was
 very helpful.  However what I would like to understand is, do you have any
 figures around minimum / optimal disk performance for postgresql to operate?
 As an example I remember MS have a document on Exchange that gives some
 rough guidance around the number of IOPS it needs depending on number of
 users / mailbox size.

Whether you need fast sequential throughput or fast random access, the
best setup is usually to throw lots of drives at the problem.  If you
need multiple volumes to get lots of drives then use tablespaces to
move busy tables to less busy drive sets.  If you need lots of writes
to happen all the time, then using a battery backed caching RAID
controller or two or 8 is a necessity.

So, IOPS is important for transactional stuff, but for report
generation and things that tend to seq scan, throughput is more
important.  You tend to get good numbers either way with fast caching
RAID controllers and RAID-10.  Sometimes you can use software RAID and
lots of disks for report servers with good results, but if the write
load even blips up (like when loading data) they can slow to a crawl
real fast.  Note that software RAID can benefit greatly from a
hardware caching controller in JBOD (just a bunch of disks) mode, which
offloads a small amount of work from the RAID controller to the CPU.

So, what are you doing with your DB, model wise?
