[GENERAL] changing the locale of the DB cluster

2009-05-21 Thread Kent Tong

Hi,

I know that it is impossible to change the locale. But can I pg_dump all the
databases, run initdb again using the desired locale, and then pg_restore?



--
Kent Tong
Wicket tutorials freely available at http://www.agileskills2.org/EWDW
Axis2 tutorials freely available at http://www.agileskills2.org/DWSAA
-- 
View this message in context: 
http://www.nabble.com/changing-the-locale-of-the-DB-cluster-tp23648611p23648611.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Inserts hang in DB and error messages in log

2009-05-21 Thread Moshe Ben-Shoham
Hi,

 

We're working with version 8.3.5-1.

 

Lately we started seeing insert statements hang in the DB. The
statements come from two different clients.

 

When it happens, I see the following messages in the log every second or
so:

 

2009-05-21 08:56:49 IDT ERROR:  permission denied to set parameter
log_statement

2009-05-21 08:56:49 IDT STATEMENT:  SET log_statement='none';

2009-05-21 08:56:50 IDT ERROR:  permission denied to set parameter
log_statement

2009-05-21 08:56:50 IDT STATEMENT:  SET log_statement='none';

2009-05-21 08:56:51 IDT ERROR:  permission denied to set parameter
log_statement

2009-05-21 08:56:51 IDT STATEMENT:  SET log_statement='none';

2009-05-21 08:56:52 IDT ERROR:  permission denied to set parameter
log_statement

2009-05-21 08:56:52 IDT STATEMENT:  SET log_statement='none';

2009-05-21 08:56:53 IDT ERROR:  permission denied to set parameter
log_statement

2009-05-21 08:56:53 IDT STATEMENT:  SET log_statement='none';

2009-05-21 08:56:54 IDT ERROR:  permission denied to set parameter
log_statement

2009-05-21 08:56:54 IDT STATEMENT:  SET log_statement='none';

2009-05-21 08:56:55 IDT ERROR:  permission denied to set parameter
log_statement

2009-05-21 08:56:55 IDT STATEMENT:  SET log_statement='none';

 

Any help (or guidelines for additional required information) is
appreciated.

 

Thanks,

Moshe.



Re: [GENERAL] Regarding visual studio 2008 build

2009-05-21 Thread Venkat Rao Tammineni
Hi

 

  You should use npgsql.dll to work with pgsql; it is stored in the
postgresql install folder. Please check, and let me know if you are getting
any problems with it.

 

Thanks,

 

Venkat

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Vikram Patil
Sent: Wednesday, May 20, 2009 11:51 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Regarding visual studio 2008 build

 

Hello All,

 

 I am trying to compile postgresql with Visual Studio but have not been
successful yet. Actually, I just need binaries which are built with Visual
Studio 2008.

Please provide some idea or input about future plans to compile postgresql
with Visual Studio 2008.

 

Thanks & Regards,

Vikram



Re: [GENERAL] HOT question - insert/delete

2009-05-21 Thread Grzegorz Jaśkiewicz
On Wed, May 20, 2009 at 9:01 PM, Merlin Moncure mmonc...@gmail.com wrote:
 On Wed, May 20, 2009 at 3:11 PM, Gerhard Wiesinger li...@wiesinger.com 
 wrote:
 Hello!

 Is HOT in 8.3 used in the following scenario:
 INSERT ...
 DELETE ...
 at the same ratio.

 So for example current data is added every minute and all old data older
 than 2 years are deleted.

 Would the heap be bloated or is this avoided by the HOT feature of 8.3 and
 nearly kept constant?

 HOT doesn't help here...it only helps with updates and then only if
 you are updating fields that are not indexed.  if your table has a
 rolling set of data, for example a log file...you probably want to
 look at table partitioning (either manual or built in).
on that note, does HOT work in the case where I have TOASTed columns? bytea,
varchar(), etc.?


-- 
GJ

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] HOT question - insert/delete

2009-05-21 Thread Simon Riggs

On Wed, 2009-05-20 at 16:01 -0400, Merlin Moncure wrote:
 On Wed, May 20, 2009 at 3:11 PM, Gerhard Wiesinger li...@wiesinger.com 
 wrote:
  Hello!
 
  Is HOT in 8.3 used in the following scenario:
  INSERT ...
  DELETE ...
  at the same ratio.
 
  So for example current data is added every minute and all old data older
  than 2 years are deleted.
 
  Would the heap be bloated or is this avoided by the HOT feature of 8.3 and
  nearly kept constant?
 
 HOT doesn't help here...it only helps with updates and then only if
 you are updating fields that are not indexed. 

Partial vacuum, in 8.4, will deal with this situation, though
partitioning does sound best for such clearly historical data.
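
A minimal sketch of the difference (table names are invented for illustration):

-- Without partitions: DELETE leaves dead tuples behind, and the heap
-- only shrinks if VACUUM finds empty pages at the very end of the file.
DELETE FROM measurements WHERE ts < now() - interval '2 years';
VACUUM measurements;

-- With date-based partitions: expiring old data is a cheap catalog
-- operation that leaves no heap bloat at all.
DROP TABLE measurements_2007;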

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] compilation test fails

2009-05-21 Thread Sofer, Yuval
Hi,

I have compiled 8.3.7 on HP-Itanium

After the build I have continued to the tests (gmake check)
One of the tests failed (division by zero error test)
The regression.diffs file shows the diff between the expected result and
the actual result:

*** ./expected/errors.out   Sat Nov 10 16:36:44 2007
--- ./results/errors.outThu May 21 10:15:15 2009
***
*** 304,310 
  select 1/0;
  ERROR:  division by zero
  select 1::int8/0;
! ERROR:  division by zero
  select 1/0::int8;
  ERROR:  division by zero
  select 1::int2/0;
--- 304,311 
  select 1/0;
  ERROR:  division by zero
  select 1::int8/0;
! ERROR:  floating-point exception
! DETAIL:  An invalid floating-point operation was signaled. This probably
means an out-of-range result or an invalid operation, such as division by
zero.
  select 1/0::int8;
  ERROR:  division by zero
  select 1::int2/0;

==

Can you help?

Regards,

Yuval Sofer
BMC Software
CTMD Business Unit
DBA Team
972-52-4286-282
yuval_so...@bmc.com



Re: [GENERAL] HOT question - insert/delete

2009-05-21 Thread Gerhard Wiesinger


Are there any plans to support this kind of scenario?

Ciao,
Gerhard


--
http://www.wiesinger.com/


On Wed, 20 May 2009, Merlin Moncure wrote:


On Wed, May 20, 2009 at 3:11 PM, Gerhard Wiesinger li...@wiesinger.com wrote:

Hello!

Is HOT in 8.3 used in the following scenario:
INSERT ...
DELETE ...
at the same ratio.

So for example current data is added every minute and all old data older
than 2 years are deleted.

Would the heap be bloated or is this avoided by the HOT feature of 8.3 and
nearly kept constant?


HOT doesn't help here...it only helps with updates and then only if
you are updating fields that are not indexed.  if your table has a
rolling set of data, for example a log file...you probably want to
look at table partitioning (either manual or built in).

merlin

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] changing the locale of the DB cluster

2009-05-21 Thread Raymond O'Donnell
On 21/05/2009 07:39, Kent Tong wrote:

 I know that it is impossible to change the locale. But can I pg_dump all the
 databases, run initdb again using the desired locale, and then pg_restore?

Absolutely - this is no different from an upgrade scenario.
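
For example, a sketch only - adjust paths, locale name and authentication
to your setup:

pg_dumpall > /backup/cluster.sql         # dump all databases plus globals
pg_ctl -D /usr/local/pgsql/data stop     # stop the old cluster
initdb --locale=en_US.UTF-8 -D /usr/local/pgsql/newdata
pg_ctl -D /usr/local/pgsql/newdata start
psql -f /backup/cluster.sql postgres     # restore into the new cluster

One detail: a pg_dumpall script is restored with psql, not pg_restore;
pg_restore only reads the custom/tar archive formats produced by pg_dump.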

Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Problem in Postgresql DB backup setup

2009-05-21 Thread Ujjawal Misra
Hi,

We are facing a problem in setting up a database backup for our production
server using pg-standby.

We are using Postgres 8.3.5 in our production environment. We have an active
user-base of 2 million and need to support 5000 TPS. The service needs to be
up and running 24x7. The production database has a size of 200GB and growing
and is expected to reach 1TB in a couple of months.

In the past we have tried various strategies for db-backup including pg-dump
and Warm-backup using WAL shipping.
But our current needs demand a Hot-backup which can be set-up without
stopping the database server. For this we have evaluated pg-standby with WAL
shipping in a test-environment that has continuous db inserts. The set-up
works fine and the slave (pg-standby) is able to consume WAL logs generated
by the master.

We tried to replicate the scenario of the master db going down in this test
environment in the following fashions:
a. Killing the master postgres server (using kill command).
b. Properly stopping the master postgres server (using pg_ctl command).
In both the above cases we saw that after the slave has consumed all the
generated WAL logs and the recovery completes, the slave lags the master by
a set of records. Additionally, the missing records don't seem to be a
consecutive set; a random set (non-consecutive) of records is missing in the
slave as compared with the master (by bringing the master back up).

Following are the steps that we have used for creating the pg-standby slave:
1. SLAVE: Ensure that postgres is not running on the slave. Stop it if it is
running.
  Create and INIT a new data directory on the slave
(/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/test_data).
2. SLAVE: Ensure that the slave can pull wal-logs from the MASTER. Authorize
the SLAVE on the MASTER if needed.
3. MASTER: The archive command should be enabled on the master server.
[/usr/local/pgsql/data/postgresql.conf]
===
archive_mode = on   # allows archiving to be done
# (change requires restart)
archive_command = '/usr/local/pgsql/bin/verseSQLs/storeWAL.sh %p %f > /dev/null'
4. MASTER: Ensure that the storeWAL.sh has WAL storage off. This would
automatically be turned on by the backup_rsync.sh script (see step 6).
5. SLAVE: The wal-consumption should be enabled on the slave server cron.
The consume_wals.sh script pulls a set of WALs from the shared storage on the
master, so that these WALs can later be consumed on the slave.
#---
# [CONSUME WALs from MASTER : VERY CRUCIAL]
#---
* * * * * cd /usr/local/pgsql/bin/verseSQLs/consume_wal_scripts && ./consume_wals.sh
6. MASTER: Run backup_rsync.sh.
   It does the following:
   a. Enables WAL storage in the storeWAL.sh script.
   b. Issues a pg_start_backup() to postgres on master.
   c. CD to the data dir of master. rsyncs the contents of the data
directory of the master to the slave.
  rsync -avz --delete --exclude=backup_label --exclude=.*
--exclude=*.pid --exclude=*.conf --exclude=pg_xlog --exclude=pg_tblspc
--exclude=pg_log/* --copy-unsafe-links . postgres@
SLAVE_IP:/usr/local/pgsql/test_data
   d. Issues a pg_stop_backup() to postgres on master.
   e. Cleans pg_xlog on slave.
7. SLAVE: Copy recovery.conf to the data dir. Start POSTGRES server. It
starts in STANDBY mode.
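
For reference, steps 6(b) and 6(d) correspond to these calls on the master
(the backup label is arbitrary):

SELECT pg_start_backup('base_backup');
-- rsync the data directory to the slave here (step 6c)
SELECT pg_stop_backup();

and the recovery.conf of step 7 would typically contain something like this
sketch (paths invented; see the pg_standby documentation for the options):

restore_command = 'pg_standby -t /tmp/pgsql.trigger /shared/wal %f %p %r'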

The above steps set up the slave server using pg-standby. Note that during
all this process the master db is up and there are continuous batch INSERTs
happening in the db by an automated process. Also note that this automated
process is the only process interacting with the db and there are no other
CRUD operations besides these batch INSERTs.

Further, in order to test the scenario for the master going down follow
these steps:
8. MASTER: If you want to recreate the scenario of the master db going down,
either kill the postgres process or stop the postgres server using pg_ctl.
9. MASTER: Disable the automated process that generates continuous batches
of INSERTs in the db
10. SLAVE: Once you have verified that there are no unconsumed WAL logs
left, trigger the pg-standby that the recovery has completed (touch the
trigger file expected by the pg-standby command specified in recovery.conf).
11. SLAVE: Once pg-standby sees the trigger file, the recovery completes and
the postgres server comes up in the production mode.
12. MASTER: Ensure that the storeWAL.sh has WAL storage off. Though by now
the slave is in a non-recovery mode, this step prevents the unnecessary
consumption of space by WAL storage.
13. MASTER: Query the count of the records produced by the automated process
on the master.
14. SLAVE: Query the count of the records as present on the slave.
Compare the results of 

[GENERAL] fsm on 8.1

2009-05-21 Thread Grzegorz Jaśkiewicz
Question here,
on 8.1, does vacuum analyze actually update the fsm too?


-- 
GJ

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] origins/destinations

2009-05-21 Thread Carson Farmer

Thanks Andy,

That was exactly what I needed! Now I just have to deal with this huge
matrix I've generated ;-)

Cheers,

Carson


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with join syntax sought supplemental

2009-05-21 Thread Alban Hertroys

On May 20, 2009, at 7:17 PM, James B. Byrne wrote:

Looking at this I have to wonder what will be the effect of having
tens of thousands of rate-pairs on file.  Would this query be
improved by first doing a sub-query on base/quote pairs that
returned DISTINCT pairs and then doing the IN condition using that?



If it turns out to be a problem a way around is to keep a reference to  
the 'actual' conversion rates from another table. Which ones are  
'actual' would be updated by an INSERT trigger on your rates table.  
The amount of data in the new table (and subsequently the index on  
its PK) would be far smaller and therefore likely a lot faster to
query.
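
A minimal sketch of that suggestion, assuming the rates table has a serial
id primary key plus base and quote columns (all names invented):

CREATE TABLE current_rate (
    base    text NOT NULL,
    quote   text NOT NULL,
    rate_id integer NOT NULL REFERENCES rates (id),
    PRIMARY KEY (base, quote)
);

CREATE FUNCTION remember_current_rate() RETURNS trigger AS $$
BEGIN
    UPDATE current_rate SET rate_id = NEW.id
     WHERE base = NEW.base AND quote = NEW.quote;
    IF NOT FOUND THEN
        INSERT INTO current_rate (base, quote, rate_id)
        VALUES (NEW.base, NEW.quote, NEW.id);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER rates_current_trig AFTER INSERT ON rates
    FOR EACH ROW EXECUTE PROCEDURE remember_current_rate();

Looking up the current rate for a pair is then a single primary-key probe
into a small table, joined back to rates when the full row is needed.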


I've done something similar in a database where a history of states  
about records was kept around. Determining the 'actual' state was  
relatively slow because it was difficult to determine a method to  
uniquely point to it (the same issue with determining the latest  
timestamp of a group of records for the same data).
Adding an FK from the record to its 'actual' status record improved
things a lot and had the added benefit that other derived information  
(detailed user information) was still easy to obtain with a simple join.


But as people often say here, premature optimisation is a waste of  
time, so don't go that route unless you have a reason to expect  
problems in that area.


Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] HOT question - insert/delete

2009-05-21 Thread Gerhard Wiesinger
The data isn't really historical, but some of it has to be removed after a
period of time, e.g. because of regulations, while all the data that remains
should stay available, e.g. for reporting. So partitioning doesn't make any
sense in this case, right?


Ciao,
Gerhard

--
http://www.wiesinger.com/


On Thu, 21 May 2009, Simon Riggs wrote:



On Wed, 2009-05-20 at 16:01 -0400, Merlin Moncure wrote:

On Wed, May 20, 2009 at 3:11 PM, Gerhard Wiesinger li...@wiesinger.com wrote:

Hello!

Is HOT in 8.3 used in the following scenario:
INSERT ...
DELETE ...
at the same ratio.

So for example current data is added every minute and all old data older
than 2 years are deleted.

Would the heap be bloated or is this avoided by the HOT feature of 8.3 and
nearly kept constant?


HOT doesn't help here...it only helps with updates and then only if
you are updating fields that are not indexed.


Partial vacuum, in 8.4, will deal with this situation, though
partitioning does sound best for such clearly historical data.

--
Simon Riggs   www.2ndQuadrant.com
PostgreSQL Training, Services and Support


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] package for debugger/wizard was: Feedback on PG?

2009-05-21 Thread Dave Page
On Wed, May 20, 2009 at 8:50 AM, Ivan Sergio Borgonovo
m...@webthatworks.it wrote:
 On Wed, 20 May 2009 07:46:02 -0400
 Dave Page dp...@pgadmin.org wrote:

  Besides the fact that I can only give thanks for all the great work around
  postgresql, is there a reason we can't have something similar on
  eg. Debian that will let us have a tuning wizard and a debugger
  that just works with an aptitude install?

 No, other than resources. We maintain close to 100 installers now,
 just for the EnterpriseDB supplied packages. Producing
 platform-specific builds of them as well as the one-click
 installers would be a mammoth task.

 My universe is Debian bound... so I don't even know if there is a
 *nix version of the tuning wizard.
 I'd consider it a quite useful tool even for marketing purposes on
 Linux too.
 Bad performance without tuning is a common thread here.

There is a linux version.

 I really didn't have time to investigate about the debugger, I'd
 expect that on Windows it just works.
 While many things on *nix just work, debugging pg functions on Linux
 is not one of those.
 As you may have guessed, my definition of just works is not that
 different from aptitude install.

The debugger will work pretty much out of the box on Linux exactly as
it does on Windows if you use the one-click installers.

 I still have to find a howto for installing edb in Debian.
 Could it be packaged for Debian if there were resources?

We use a universal installer for edb (by which I assume you mean
Postgres Plus Advanced Server). Our customers run such a wide range of
platforms that it's simply not practical for us to build and properly
QA distro-native packages for every possibility, especially the less
commonly used platforms like Debian (I know, don't shoot me, but we're
more likely to see RHEL or Suse Enterprise in production).

-- 
Dave Page
EnterpriseDB UK:   http://www.enterprisedb.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can I pause the pg_dump or pg_restore

2009-05-21 Thread Jasen Betts
On 2009-05-20, Chen, Dongdong (GE Healthcare) dongdongc...@ge.com wrote:
 This is a multi-part message in MIME format.

 In my application, when press button Backup in UI, it invokes pg_dump
 to backup the database. It may take few minutes for the whole process.
 If I want to pause the process, what should I do. And also how to resume
 the process? Can I pause while pg_restore?

POSIX has SIGSTOP and SIGCONT. You could send them to the pg_dump task
using kill() if you have permission (the invoker does) and its PID.

As others have said, you can also force the process to pause by throttling
its I/O.

If you invoke pg_dump using popen() (or similar), then when you stop reading
the output, pg_dump will (after filling the buffer) also stop and wait for
you to resume reading.

The converse is true when restoring.

Windows may have similar features available, no doubt with different
names and APIs.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can I pause the pg_dump or pg_restore

2009-05-21 Thread Jasen Betts
On 2009-05-20, Sam Mason s...@samason.me.uk wrote:
 On Wed, May 20, 2009 at 03:39:39PM +0100, Howard Cole wrote:
 Sam Mason wrote:
 Note that when used on the pg_dump process all you're doing is stopping
 it from writing out the backup.  The server process will still be
 running and waiting for the backup to finish writing the data.  It will
 thus hold the transaction open and any other state needed to keep things
 going.  This should be fine for temporary pauses, but it wouldn't be
 recommended to pause the backup for days at a time.
 
 Just curious why would you want to pause a backup/restore?

 Yes, it seems a little perverse.  There seem to be valid use cases,
 disk/cpu time needed temporarily elsewhere being one.  As the poem goes;
 ours not to reason why... (hum, I think it's supposed to be theirs
 not to.., ah well).

In that case just invoke it with a sufficiently low priority and let the O/S 
deal
with that issue.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can not decompress a compressed string under plpy!

2009-05-21 Thread Jasen Betts
On 2009-05-20, Timmy timh...@netvigator.com wrote:
 Hi,
I have stored a compressed string in a table field.
 The compressed string is created by zlib using python.
 I want to decompress this string by zlib under plpy but plpy fails to do so. 
Does plpy not support decompression? Why?

are you storing it in a bytea column?
is the string correctly unescaped before you attempt decompressing it?
are you able to extract and decompress the string by other means (as
proof that it has been stored correctly)?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: corruption on data table disconnects other clients silently when page is read

2009-05-21 Thread Jasen Betts
On 2009-05-20, Vick Khera vi...@khera.org wrote:

 Does psql silently reconnect to the DB? 

I have noticed that behaviour recently.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Windows] Feedback on PG?

2009-05-21 Thread Jasen Betts
On 2009-05-20, Gilles codecompl...@free.fr wrote:
 At 10:58 20/05/2009, Craig Ringer wrote:
I suspect a lot of that comes down to user/admin knowledge as much 
as anything. [...] That said, there are also a few bugs lurking that 
only affect the Windows version. [...] I've used Pg on my laptop at 
various points when it's been running Windows, and found it stable 
and reliable for my purposes (app dev and testing). [...] Some 
antivirus scanners must be fully uninstalled, not just told to 
ignore Pg [...] In short: Virus scanners are *E*V*I*L*. I've seen 
relatively few issues with recent versions of a few, but most seem 
to be way more trouble than they're worth unless you do only very 
simple things on your machine.

 Thanks much for the feedback. If customers choose to install the DBMS 
 on a Windows server, I'll tell them that it's just not a good idea to 
 have an AV running on it, and find other ways to secure it (firewall, 
 permissions, etc.)

 If some users have been using the native Win32 version of PG in 
 production, I'd like to hear how it runs in terms of stability and 
 performance. This DBMS is meant to be used in SOHO settings, meaning 
 it should be very easy to install, use, and kept up-to-date. 

Our flagship product Gymmaster uses postgresql for the business logic
and integrity checks; all of the business logic, and even some of the
user interface, is implemented in the database.

We have perhaps 80 clients; 90% of them are using windows-based
postgres, and the remainder have linux servers, mainly because it's
easier to install our other server processes on them.

The only issue found to date is that postgres doesn't use the windows
timezone data (I don't think windows has historical timezone data, so
it can't). As a result, if the politicians decide to mess with the
daylight savings rules (last time they gave us 4 whole weeks' notice),
you need to upgrade the TZ data for your clients, or else suffer
time-related business logic being off.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Inserts hang in DB and error messages in log

2009-05-21 Thread Adrian Klaver
On Wednesday 20 May 2009 11:36:38 pm Moshe Ben-Shoham wrote:
 Hi,



 We're working with version 8.3.5-1.



 Lately we started seeing insert statements hang in the DB. The
 statements come from two different clients.



 When it happens, I see the following messages in the log every second or
 so:



 2009-05-21 08:56:49 IDT ERROR:  permission denied to set parameter
 log_statement

 2009-05-21 08:56:49 IDT STATEMENT:  SET log_statement='none';

 2009-05-21 08:56:50 IDT ERROR:  permission denied to set parameter
 log_statement

 2009-05-21 08:56:50 IDT STATEMENT:  SET log_statement='none';

 2009-05-21 08:56:51 IDT ERROR:  permission denied to set parameter
 log_statement

 2009-05-21 08:56:51 IDT STATEMENT:  SET log_statement='none';

 2009-05-21 08:56:52 IDT ERROR:  permission denied to set parameter
 log_statement

 2009-05-21 08:56:52 IDT STATEMENT:  SET log_statement='none';

 2009-05-21 08:56:53 IDT ERROR:  permission denied to set parameter
 log_statement

 2009-05-21 08:56:53 IDT STATEMENT:  SET log_statement='none';

 2009-05-21 08:56:54 IDT ERROR:  permission denied to set parameter
 log_statement

 2009-05-21 08:56:54 IDT STATEMENT:  SET log_statement='none';

 2009-05-21 08:56:55 IDT ERROR:  permission denied to set parameter
 log_statement

 2009-05-21 08:56:55 IDT STATEMENT:  SET log_statement='none';



 Any help (or guidelines for additional required information) is
 appreciated.



 Thanks,

 Moshe.



It would seem that the insert statements are being done in conjunction with
the SET log_statement, and that the client(s) doing that do not have the
necessary permission level to set log_statement.

-- 
Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with join syntax sought supplemental

2009-05-21 Thread James B. Byrne

On Thu, May 21, 2009 06:02, Alban Hertroys wrote:


 But as people often say here, premature optimisation is a waste of
 time, so don't go that route unless you have a reason to expect
 problems in that area.


That was my very thought when I sent that message.  On the other
hand, in case I was doing something out of ignorance that was
notoriously wrong, it seemed best to ask.

Regards,

-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrne  mailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Inserts hang in DB and error messages in log

2009-05-21 Thread Bill Moran
In response to Moshe Ben-Shoham mos...@nexperience.com:
 
 We're working with version 8.3.5-1.
 
 Lately we started seeing insert statements hang in the DB. The
 statements come from two different clients.
 
 When it happens, I see the following messages in the log every second or
 so:
 
 2009-05-21 08:56:49 IDT ERROR:  permission denied to set parameter
 log_statement
 
 2009-05-21 08:56:49 IDT STATEMENT:  SET log_statement='none';

[snip the same errors over and over ...]

 Any help (or guidelines for additional required information) is
 appreciated.

It's kind of hard to tell from what you've posted, but I'll take a guess.

First, the inability to turn log_statement off isn't going to cause the
server to hang or pause or anything.  This error is _not_ the problem,
although it's probably related, since it's happening in conjunction
with the problem.

Based on that, my guess is that you're running a commercial application
that is trying to hide its SQL from you, thus it refuses to run any
queries unless it can turn log_statement to none.  However, log_statement
can only be changed by a superuser, and I'm betting you did the _right_
thing and didn't make the application user a superuser.

If my guesses are right, the following is true:
* The application is probably garbage.  You'll probably have other problems
  with it if you continue to use it.  You should contact the vendor and
  chew them out for their crappy design.  Either that or they're so
  brilliant that they've managed to write SQL statements that are copyrighted
  or something.
* If you give the application user superuser privs, the application will
  probably start working.
* You _may_ be able to get the application to start working without
  granting superuser privs, simply by setting the value of log_statement
  to none in the postgresql.conf.  This is assuming the application is
  smart enough to check the value and only change it if it's not already
  none.
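
For concreteness, a sketch of the moving parts (the role name is invented):

SHOW log_statement;                     -- readable by any user
SET log_statement = 'none';             -- ERROR: permission denied
                                        -- unless run by a superuser
ALTER ROLE appuser SET log_statement = 'none';
                                        -- a superuser can set a per-role
                                        -- default instead, applied at login

Note the per-role default only avoids the error if the application checks
the current value before issuing SET, as in the third guess above.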

As I said, the advice is all based on guessing, so good luck with it.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] fsm on 8.1

2009-05-21 Thread Tom Lane
Grzegorz Jaśkiewicz gryz...@gmail.com writes:
 on 8.1, is vacuum analyze actually updating fsm too ?

Yup.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] fsm on 8.1

2009-05-21 Thread Grzegorz Jaśkiewicz
2009/5/21 Tom Lane t...@sss.pgh.pa.us:
 Grzegorz Jaśkiewicz gryz...@gmail.com writes:
 on 8.1, is vacuum analyze actually updating fsm too ?

 Yup.

Thanks. Also, to make sure that I've got it: whatever's in the FSM will
actually be reused by postgresql during normal operation, instead of
creating new files to store tuples, right?


-- 
GJ

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] compilation test fails

2009-05-21 Thread Tom Lane
Sofer, Yuval yuval_so...@bmc.com writes:
 I have compiled 8.3.7 on HP-Itanium
 After the build I have continued to the tests (gmake check)
 One of the tests failed (division by zero error test)

Hm, what compiler and what optimization level?  We have seen a report
or two about this before, all from people using gcc on non-mainstream
architectures.  So far as I can tell it's a compiler bug.  The code
in int84div looks like

int64   arg1 = PG_GETARG_INT64(0);
int32   arg2 = PG_GETARG_INT32(1);
int64   result;

if (arg2 == 0)
    ereport(ERROR,
            (errcode(ERRCODE_DIVISION_BY_ZERO),
             errmsg("division by zero")));

result = arg1 / arg2;

and the only way to get the behavior you're showing is if the division
is executing (and causing a trap) before control is passed to ereport().
So apparently the compiler is forgetting that division can have a side
effect (ie machine trap) and thinking it's safe to reorder the
operations.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] running postgresql on a private machine accessing it from public web pages

2009-05-21 Thread Keith D. Evans
We have a private machine that runs postgresql and have created a 
database. We want to access that database from the web for plotting 
using php. The web server and pages are on a public (different) machine 
that does not have postgresql. Nor will they install it for us. The
postgresql database is also on the public machine (which the private 
machine can access).


So, the question is, can someone go through these (public) web pages and 
access the postgresql database if the postgresql server is running on 
the private machine?


We have other data in postgresql and would like to only have to use
one database type, i.e., postgresql.


Thanx,
keith

--
Nonviolence is not a cover for cowardice, but it is the supreme virtue of the 
brave.  Mohandas Karamchand Gandhi
===
Keith D. Evans
Joint Center for Earth Systems Technology/UMBC
(301) 614-6282 (M,Tu)
(410) 455-5751 (W,Th,F)
http://www.jcet.umbc.edu/bios/evanmain.html


Any opinions expressed in this email are not those of 
NASA, or the Goddard Space Flight Center, or the Joint 
Center for Earth Systems Technology or the University 
of Maryland Baltimore County. 






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] running postgresql on a private machine accessing it from public web pages

2009-05-21 Thread Joshua D. Drake

 So, the question is, can someone go through these (public) web pages and 
 access the postgresql database if the postgresql server is running on 
 the private machine?
 
 We have other data in the postgresql and would like to only have to use 
 one database types, i.e., postgresql.

If you are using proper host control then yes you can make it so that
the PHP user is only able to access the public data in the private
database. See here:

http://www.postgresql.org/docs/8.3/static/client-authentication.html
http://www.postgresql.org/docs/8.3/static/user-manag.html
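
A sketch of the roles-and-privileges side of that, with invented names
(pair it with a pg_hba.conf rule restricting where the role may connect
from):

-- pg_hba.conf on the private server, e.g.:
--   host  mydb  php_reader  192.0.2.10/32  md5
CREATE ROLE php_reader LOGIN PASSWORD 'secret';
REVOKE CONNECT ON DATABASE mydb FROM PUBLIC;
GRANT CONNECT ON DATABASE mydb TO php_reader;
GRANT SELECT ON public_data TO php_reader;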

Sincerely,

Joshua D. Drake

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Postgres, DB design, and object IDs (of any ilk)

2009-05-21 Thread Kenneth Tilton

Just looking for postgres best practices input from the veterans:

If the primary key of the customer table is cust_short_name and my DB 
reflects also customer departments, I can link a customer to its 
departments one of three ways:


1. The department table has a cust_short_name column and makes that the 
first segment of its primary_key;


2. I use OIDs and link back to the customer's row with the row's OID;

3. I give the customer a cust_serial_id column and make it SERIAL and 
give the dept table a column called cust_serial_id.


I know serial IDs are preferred to OIDs so let's forget #2.

With #1, where we have a three or more level identifying hierarchy I end 
up repeating two or more primary keys in building up the primary key of 
the lower levels. Not the end of the world, but my last DB was built in 
an OODB so I got uses to having object identity and am tempted to use 
serial_ids for same under postgres.


Bad idea?

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres, DB design, and object IDs (of any ilk)

2009-05-21 Thread Merlin Moncure
On Thu, May 21, 2009 at 12:35 PM, Kenneth Tilton kentil...@gmail.com wrote:
 If the primary key of the customer table is cust_short_name and my DB
 reflects also customer departments, I can link a customer to its departments
 one of three ways:

 1. The department table has a cust_short_name column and makes that the
 first segment of its primary_key;

 3. I give the customer a cust_serial_id column and make it SERIAL and give
 the dept table a column called cust_serial_id.

This is the very well tread 'natural vs. surrogate key' debate.
There's tons of threads about this online...including the archives
here.  It's a very complicated issue with lots of facets (performance,
logic, elegance of design) with no clear right answer so it largely
boils down to personal choice.

I would venture to guess that a large majority of database developers
use incrementing serial keys.  That said, I personally was in that
camp until I was tasked with converting a large erp system written in
cobol/isam (where natural keys are used for technical reasons) into
sql.  Following that experience, I have decided that a hybrid approach
is best for me.

I would strongly advise learning how to map out your data either way
and choose the approach that best meets your design criteria.  I'm
especially skeptical of database development standards that _always_
use a serial primary key and _always_ use it for relating data.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres, DB design, and object IDs (of any ilk)

2009-05-21 Thread Grzegorz Jaśkiewicz
I wouldn't trust OIDs; for one thing, they are only 32 bits.
Secondly, watch for index size. That's the main reason why (big)int as
a key reference is a win over other types - at least in my general
practice.

And a third piece of advice: try different approaches and queries, to
figure out what would suit the solution. Anyone who says that this is
always a win and something else is not, is a liar.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] running postgresql on a private machine accessing it from public web pages

2009-05-21 Thread Keith D. Evans

Joshua,

Let me rephrase what I wrote. The database is on a public machine 
(Linux), but the postgresql postmaster runs on a private machine 
(Solaris 10).  Can we access the postgresql database through the public 
machine, even though the postmaster is running from a (different) 
private machine?


thanx
keith

Joshua D. Drake wrote:
So, the question is, can someone go through these (public) web pages and 
access the postgresql database if the postgresql server is running on 
the private machine?


We have other data in the postgresql and would like to only have to use 
one database types, i.e., postgresql.



If you are using proper host control then yes you can make it so that
the PHP user is only able to access the public data in the private
database. See here:

http://www.postgresql.org/docs/8.3/static/client-authentication.html
http://www.postgresql.org/docs/8.3/static/user-manag.html

Sincerely,

Joshua D. Drake

  


--
Nonviolence is not a cover for cowardice, but it is the supreme virtue of the 
brave.  Mohandas Karamchand Gandhi
===
Keith D. Evans
Joint Center for Earth Systems Technology/UMBC
(301) 614-6282 (M,Tu)
(410) 455-5751 (W,Th,F)
http://www.jcet.umbc.edu/bios/evanmain.html


Any opinions expressed in this email are not those of 
NASA, or the Goddard Space Flight Center, or the Joint 
Center for Earth Systems Technology or the University 
of Maryland Baltimore County. 






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] running postgresql on a private machine accessing it from public web pages

2009-05-21 Thread Joshua D. Drake
On Thu, 2009-05-21 at 13:49 -0400, Keith D. Evans wrote:
 Joshua,
 
 Let me rephrase what I wrote. The database is on a public machine 
 (Linux), but the postgresql postmaster runs on a private machine 
 (Solaris 10).  Can we access the postgresql database through the public 
 machine, even though the postmaster is running from a (different) 
 private machine?

A postmaster must be running on whatever database you are trying to
access. You could use replication such as Slony to mirror the linux
database to the solaris one.

Joshua D. Drake

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres, DB design, and object IDs (of any ilk)

2009-05-21 Thread Kenneth Tilton



Merlin Moncure wrote:

On Thu, May 21, 2009 at 12:35 PM, Kenneth Tilton kentil...@gmail.com wrote:

If the primary key of the customer table is cust_short_name and my DB
reflects also customer departments, I can link a customer to its departments
one of three ways:

1. The department table has a cust_short_name column and makes that the
first segment of its primary_key;

3. I give the customer a cust_serial_id column and make it SERIAL and give
the dept table a column called cust_serial_id.


This is the very well tread 'natural vs. surrogate key' debate.


Ah, thx for the label, I have been able to google up some pros and cons.


Thx again,

kt


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] running postgresql on a private machine accessing it from public web pages

2009-05-21 Thread Raymond O'Donnell
On 21/05/2009 18:49, Keith D. Evans wrote:

 The database is on a public machine (Linux), but the postgresql
 postmaster runs on a private machine (Solaris 10).

That doesn't make a lot of sense, unless you've got *two* postmasters
running, one on each machine, or maybe you've created a tablespace over
NFS or some such.

Can you explain more clearly what you mean by the above?


Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] running postgresql on a private machine accessing it from public web pages

2009-05-21 Thread Keith D. Evans
We usually run postgresql on a private machine (gravity) but due to 
space, we have moved the database to a different machine through afs (a 
local network at the university). We do not want the private machine to 
be accessible to outside users, so our web pages are on the university's 
public web pages. They have mysql, but won't install postgresql, which 
we've been using on gravity for years.


We want to add a database where users can get and plot data using php
over the internet. But since the public computers don't have postgresql,
can we have postgresql running on gravity, allowing users who come in
through the internet on the public pages to access the data?


thanx,
keith


Raymond O'Donnell wrote:

On 21/05/2009 18:49, Keith D. Evans wrote:

  

The database is on a public machine (Linux), but the postgresql
postmaster runs on a private machine (Solaris 10).



That doesn't make a lot of sense, unless you've got *two* postmasters
running, one on each machine, or maybe you've created a tablespace over
NFS or some such.

Can you explain more clearly what you mean by the above?


Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--


  


--
Nonviolence is not a cover for cowardice, but it is the supreme virtue of the 
brave.  Mohandas Karamchand Gandhi
===
Keith D. Evans
Joint Center for Earth Systems Technology/UMBC
(301) 614-6282 (M,Tu)
(410) 455-5751 (W,Th,F)
http://www.jcet.umbc.edu/bios/evanmain.html


Any opinions expressed in this email are not those of 
NASA, or the Goddard Space Flight Center, or the Joint 
Center for Earth Systems Technology or the University 
of Maryland Baltimore County. 






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] running postgresql on a private machine accessing it from public web pages

2009-05-21 Thread Scott Mead
In your application, the connection string should have the ip address of the
postgres server.  The postgres server (solaris box) should allow port 5432
through the firewall.
   If you cannot access port 5432 on your solaris box from the linux box,
then make sure that you don't have a firewall in the way.

  You'll also want to make sure that in postgresql.conf, you have set


   listen_addresses='*'

  or

   listen_addresses='the.ip.of.the.solaris.box.'

 Good luck

--Scott
--
Scott Mead
Sr. Systems Engineer
EnterpriseDB

scott.m...@enterprisedb.com
C: 607 765 1395
www.enterprisedb.com


On Thu, May 21, 2009 at 1:49 PM, Keith D. Evans ev...@umbc.edu wrote:

 Joshua,

 Let me rephrase what I wrote. The database is on a public machine (Linux),
 but the postgresql postmaster runs on a private machine (Solaris 10).  Can
 we access the postgresql database through the public machine, even though
 the postmaster is running from a (different) private machine?

 thanx
 keith

 Joshua D. Drake wrote:

 So, the question is, can someone go through these (public) web pages and
 access the postgresql database if the postgresql server is running on the
 private machine?

 We have other data in the postgresql and would like to only have to use
 one database types, i.e., postgresql.



 If you are using proper host control then yes you can make it so that
 the PHP user is only able to access the public data in the private
 database. See here:

 http://www.postgresql.org/docs/8.3/static/client-authentication.html
 http://www.postgresql.org/docs/8.3/static/user-manag.html

 Sincerely,

 Joshua D. Drake




 --
 Nonviolence is not a cover for cowardice, but it is the supreme virtue of
 the brave.  Mohandas Karamchand Gandhi
 ===
 Keith D. Evans
 Joint Center for Earth Systems Technology/UMBC
 (301) 614-6282 (M,Tu)
 (410) 455-5751 (W,Th,F)
 http://www.jcet.umbc.edu/bios/evanmain.html
 

 Any opinions expressed in this email are not those of NASA, or the Goddard
 Space Flight Center, or the Joint Center for Earth Systems Technology or the
 University of Maryland Baltimore County.
 



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] running postgresql on a private machine accessing it from public web pages

2009-05-21 Thread Scott Marlowe
On Thu, May 21, 2009 at 12:23 PM, Keith D. Evans ev...@umbc.edu wrote:
 We usually run postgresql on a private machine (gravity) but due to space,
 we have moved the database to a different machine through afs (a local

Wait, the whole database, including a postmaster running on the other
machine?  Or just that you're storing the database files there.  Where
the files are stored is inconsequential, except as applies to
reliability.  Where the postmaster lives is all that matters.

 network at the university). We do not want the private machine to be
 accessible to outside users, so our web pages are on the university's public
 web pages.

I assume you have php on the public web servers.

 They have mysql, but won't install postgresql, which we've been
 using on gravity for years.

Fine, there's really no great reason to install PostgreSQL on the same
machines that are running web pages anyway.  As long as those machines
can reach your pgsql server you're gold.

 We want to add a database where users can get and plot data using php over
 the internet. But since the public computers don't have postgresql, can we
 have postgresql running on gravity allowing users accessing through the
 internet on the public pages to access the data?

Yeah, you want some kind of connection opened up between those
machines that lets pgsql get through.  You can use an ssh tunnel, a
vpn, or something else if you can't get a direct route opened up by
the network guys for security reasons.  If you'll have to go through a
machine to get to the pgsql machine, look up two hop ssh tunnel, it's
pretty easy.

But I wonder why they won't install pgsql for you.  Maybe they're
afraid of it raising their workload too much?  If you offer to feed it
and take care of it, maybe they'll let you.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] running postgresql on a private machine accessing it from public web pages

2009-05-21 Thread Scott Marlowe
On Thu, May 21, 2009 at 1:04 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Thu, May 21, 2009 at 12:23 PM, Keith D. Evans ev...@umbc.edu wrote:
 We usually run postgresql on a private machine (gravity) but due to space,
 we have moved the database to a different machine through afs (a local

 Wait, the whole database, including a postmaster running on the other
 machine?  Or just that you're storing the database files there.  Where
 the files are stored is inconsequential, except as applies to
 reliability.  Where the postmaster lives is all that matters.

 network at the university). We do not want the private machine to be
 accessible to outside users, so our web pages are on the university's public
 web pages.

 I assume you have php on the public web servers.

 They have mysql, but won't install postgresql, which we've been
 using on gravity for years.

 Fine, there's really no great reason to install PostgreSQL on the same
 machines that are running web pages anyway.  As long as those machines
 can reach your pgsql server you're gold.

 We want to add a database where users can get and plot data using php over
 the internet. But since the public computers don't have postgresql, can we
 have postgresql running on gravity allowing users accessing through the
 internet on the public pages to access the data?

 Yeah, you want some kind of connection opened up between those
 machines that lets pgsql get through.  You can use an ssh tunnel, a
 vpn, or something else if you can't get a direct route opened up by
 the network guys for security reasons.  If you'll have to go through a
 machine to get to the pgsql machine, look up two hop ssh tunnel, it's
 pretty easy.

 But I wonder why they won't install pgsql for you.  Maybe they're
 afraid of it raising their workload too much?  If you offer to feed it
 and take care of it, maybe they'll let you.

P.s. I'd still build a pgsql machine that did nothing but pgsql and
ran in the hosting center if I was gonna have pgsql there.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Tuning resource parameters for a logging database.

2009-05-21 Thread Alex Thurlow
I have a postgresql database that I'm using for logging of data. 
There's basically one table where each row is a line from my log files. 
 It's getting to a size where it's running very slow though.  There are 
about 10 million log lines per day and I keep 30 days of data in it. 
All the columns I filter on are indexed (mostly I just use date).  And I 
tend to pull one day of data at a time with grouped counts by 1 or 2 
other columns.  There also tends to be only 1 or 2 of these large 
queries running at any given time, so a lot of resources can be thrown 
at each one.


I'm wondering what my resource parameters should be for optimal speed of 
the selects on this database, since I haven't seen a good example where 
someone has done anything like this.


The machine is an 8 core opteron (I know I won't really use those, but 
Dell threw in the 2nd proc for free) with 8 Gb RAM.  The database is on 
a RAID 10 JFS partition.


This is what I have in postgresql.conf right now..

shared_buffers = 64MB
work_mem = 128MB
maintenance_work_mem = 256MB
max_fsm_pages = 614400
max_fsm_relations = 1

Can anyone give me some insight as to what I should set these to or if 
there are others I should be using that I'm missing?



Thanks,
Alex


--
Alex Thurlow
Blastro Networks

http://www.blastro.com
http://www.roxwel.com
http://www.yallwire.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tuning resource parameters for a logging database.

2009-05-21 Thread Vick Khera
On Thu, May 21, 2009 at 3:13 PM, Alex Thurlow a...@blastro.com wrote:
 I have a postgresql database that I'm using for logging of data. There's
 basically one table where each row is a line from my log files.  It's
 getting to a size where it's running very slow though.  There are about 10
 million log lines per day and I keep 30 days of data in it. All the columns

Are you using partitioning on this table?  Your use case is literally
the exact example everyone uses to show how to do partitioning on
tables.

Since you mostly scan on date, this will speed up your queries significantly.
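
A minimal 8.3-style sketch of what that looks like, using inheritance
partitioning (all names invented):

CREATE TABLE log (
    ts   timestamptz NOT NULL,
    line text
);

CREATE TABLE log_2009_05_21 (
    CHECK (ts >= '2009-05-21' AND ts < '2009-05-22')
) INHERITS (log);

-- With constraint exclusion enabled, a one-day report only scans the
-- matching child table:
SET constraint_exclusion = on;
SELECT count(*) FROM log
 WHERE ts >= '2009-05-21' AND ts < '2009-05-22';

-- Purging the oldest day becomes a cheap DROP instead of a bloating DELETE:
DROP TABLE log_2009_04_21;

New rows are then loaded into the child table for the current day, either
directly or via a trigger or rule on the parent.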

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Tuning resource parameters for a logging database.

2009-05-21 Thread Scott Marlowe
On Thu, May 21, 2009 at 1:13 PM, Alex Thurlow a...@blastro.com wrote:
 I have a postgresql database that I'm using for logging of data. There's
 basically one table where each row is a line from my log files.  It's
 getting to a size where it's running very slow though.  There are about 10
 million log lines per day and I keep 30 days of data in it. All the columns
 I filter on are indexed (mostly I just use date).

**DING DING DING**  you've just said the magic phrase that says that
partitioning would be a help.

  And I tend to pull one
 day of data at a time with grouped counts by 1 or 2 other columns.  There
 also tends to be only 1 or 2 of these large queries running at any given
 time, so a lot of resources can be thrown at each one.

 I'm wondering what my resource parameters should be for optimal speed of the
 selects on this database, since I haven't seen a good example where someone
 has done anything like this.

With a logging database you're optimizing two often opposing actions.
Lots of small inserts in a stream that HAVE to get processed and put
in efficiently.  This is often accomplished with minimum
shared_buffers and work_mem, because there's no need for the overhead
of large shared_buffers and insert queries for logging dbs don't need
much work_mem.

With a reporting database you run queries that chew up tons of memory
both shared_buffers and work_mem for efficient operation.

 The machine is an 8 core opteron (I know I won't really use those, but Dell
 threw in the 2nd proc for free) with 8 Gb RAM.  The database is on a RAID 10
 JFS partition.

Yeah CPUs are cheap, might as well stock up on them.  A reporting
database can quickly go cpu bound if everything the users want to see
fits in memory.

 This is what I have in postgresql.conf right now..

 shared_buffers = 64MB

Small for reporting, just right for logging.  I'd try something bigger
but not insanely huge.  Let the OS do the caching of 90% of the data,
let the db cache a good sized working set.  256M to 1G is reasonable
based on benchmarks of your own queries.

 work_mem = 128MB

Bigger than needed for logging, good for reporting.  You can probably
just leave it.

 maintenance_work_mem = 256MB
 max_fsm_pages = 614400

If you're not partitioning then this needs to be big enough to cover
a day or more of dead rows.

Look at lowering your random_page_cost, and increasing default stats
target to 100 to 1000 depending on your data and explain analyze query
testing.
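 
As a concrete starting point, something like the following in
postgresql.conf -- the numbers are illustrative guesses to benchmark
against your own queries, not recommendations from this thread:

shared_buffers = 512MB            # somewhere in the 256MB-1GB range
work_mem = 128MB                  # per sort/hash step, so watch the total
maintenance_work_mem = 256MB
random_page_cost = 2.0            # below the 4.0 default if data is mostly cached
default_statistics_target = 100   # raise further for skewed columns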

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tuning resource parameters for a logging database.

2009-05-21 Thread Alex Thurlow
I should also specify how my inserts happen, I guess.  I'm actually 
writing the logs to flat files and then inserting them into the 
database on an hourly basis using COPY, so I don't need to worry as much 
about insert speed as I do about the reporting.
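 
For reference, the hourly load is just a COPY along these lines (the
file path and column list here are made up):

-- Needs superuser for a server-side file; otherwise use psql's \copy.
COPY logs (log_date, ip, url, status)
    FROM '/var/log/app/2009-05-21-14.log'
    WITH DELIMITER E'\t';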


On 5/21/2009 2:36 PM, Scott Marlowe wrote:

On Thu, May 21, 2009 at 1:13 PM, Alex Thurlow a...@blastro.com wrote:

I have a postgresql database that I'm using for logging of data. There's
basically one table where each row is a line from my log files.  It's
getting to a size where it's running very slow though.  There are about 10
million log lines per day and I keep 30 days of data in it. All the columns
I filter on are indexed (mostly I just use date).

**DING DING DING**  you've just said the magic phrase that says that
partitioning would be a help.

 And I tend to pull one
day of data at a time with grouped counts by 1 or 2 other columns.  There
also tends to be only 1 or 2 of these large queries running at any given
time, so a lot of resources can be thrown at each one.

I'm wondering what my resource parameters should be for optimal speed of the
selects on this database, since I haven't seen a good example where someone
has done anything like this.

With a logging database you're optimizing two often opposing actions.
Lots of small inserts in a stream that HAVE to get processed and put
in efficiently.  This is often accomplished with minimum
shared_buffers and work_mem, because there's no need for the overhead
of large shared_buffers and insert queries for logging dbs don't need
much work_mem.

With a reporting database you run queries that chew up tons of memory
both shared_buffers and work_mem for efficient operation.

The machine is an 8 core opteron (I know I won't really use those, but Dell
threw in the 2nd proc for free) with 8 Gb RAM.  The database is on a RAID 10
JFS partition.

Yeah CPUs are cheap, might as well stock up on them.  A reporting
database can quickly go cpu bound if everything the users want to see
fits in memory.

This is what I have in postgresql.conf right now..

shared_buffers = 64MB

Small for reporting, just right for logging.  I'd try something bigger
but not insanely huge.  Let the OS do the caching of 90% of the data,
let the db cache a good sized working set.  256M to 1G is reasonable
based on benchmarks of your own queries.

work_mem = 128MB

Bigger than needed for logging, good for reporting.  You can probably
just leave it.

maintenance_work_mem = 256MB
max_fsm_pages = 614400

If you're not partitioning then this needs to be big enough to cover
a day or more of dead rows.

Look at lowering your random_page_cost, and increasing default stats
target to 100 to 1000 depending on your data and explain analyze query
testing.

--
Alex Thurlow
Blastro Networks

http://www.blastro.com
http://www.roxwel.com
http://www.yallwire.com



Re: [GENERAL] Tuning resource parameters for a logging database.

2009-05-21 Thread Scott Marlowe
On Thu, May 21, 2009 at 1:36 PM, Scott Marlowe scott.marl...@gmail.com wrote:


Below, I meant with a logging / reporting database...

 With a logging database you're optimizing two often opposing actions.
 Lots of small inserts in a stream that HAVE to get processed and put
 in efficiently.  This is often accomplished with minimum
 shared_buffers and work_mem, because there's no need for the overhead
 of large shared_buffers and insert queries for logging dbs don't need
 much work_mem.

 With a reporting database you run queries that chew up tons of memory
 both shared_buffers and work_mem for efficient operation.


 work_mem = 128MB

 Bigger than needed for logging, good for reporting.  You can probably
 just leave it.

Note that you can set work_mem per user, so have the reporting users
log in with a different user and you can crank this up a bit, say 512M
to 1G if you're only ever running 1 or 2 reports.  Careful about
running the machine out of memory, work_mem is a foot gun if you set
it too high and run a lot of queries at once.
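 
For example (role name invented):

-- Give just the reporting role a big per-sort memory allowance:
ALTER ROLE reporting SET work_mem = '512MB';

-- Or raise it only for the current session before a big query:
SET work_mem = '512MB';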

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tuning resource parameters for a logging database.

2009-05-21 Thread Scott Marlowe
On Thu, May 21, 2009 at 1:39 PM, Alex Thurlow a...@blastro.com wrote:
 I should also specify how my inserts are happening I guess.  I'm actually
 doing the logs to flat files and then inserting them into the database on an
 hourly basis using COPY, so I don't need to worry as much about the log
 insert speed as I do the reporting.

Cool.  Then definitely look at partitioning, and also start running
explain analyze on your longer running queries.  You'll often find
some part of the plan that makes no sense (usually a difference
between estimated and actual returned rows is a clue).  You can earn a
lot of performance by tuning your queries in this way.
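 
For example, something shaped like one of the daily report queries
(table and columns invented here):

EXPLAIN ANALYZE
SELECT url, count(*)
FROM logs
WHERE log_date = DATE '2009-05-21'
GROUP BY url;

Compare each node's estimated rows= with its actual ... rows=; a large
mismatch usually points at stale or too-coarse statistics on that column.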

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] After each row trigger NOT seeing data changes?

2009-05-21 Thread Karl Nack
I'm attempting to create an inventory of trees. Here's a simplified, 
sample table:


CREATE TABLE tree (
tree_id SERIAL PRIMARY KEY,
tree_species_id INT NOT NULL REFERENCES tree_species,
tree_location   POINT NOT NULL,
tree_install_date   DATE NOT NULL,
tree_removal_date   DATE,
CHECK (tree_removal_date > tree_install_date)
);


I need to ensure that no two trees are located in the same place at the 
same time:


CREATE OR REPLACE FUNCTION check_unique_tree()
RETURNS trigger
AS $$
DECLARE
num_trees INT;
BEGIN
-- just to see what's going on
SELECT COUNT(tree_id) INTO num_trees FROM tree;
RAISE NOTICE '% % of new tree %, there are % trees.',
TG_WHEN, TG_OP, NEW, num_trees;

PERFORM tree_id
FROM tree
WHERE
-- first condition prevents updated tree from matching with itself
NEW.tree_id <> tree_id
AND NEW.tree_location ~= tree_location
AND NEW.tree_install_date <
COALESCE(tree_removal_date, timestamp 'infinity')
AND COALESCE(NEW.tree_removal_date, timestamp 'infinity') >
tree_install_date;

IF FOUND THEN
RAISE EXCEPTION 'Conflicting trees';
END IF;

RETURN NULL;
END;
$$ LANGUAGE plpgsql STABLE;


CREATE TRIGGER check_unique_tree
AFTER INSERT OR UPDATE ON tree
FOR EACH ROW EXECUTE PROCEDURE check_unique_tree();


And yet, I'm able to do this:

= INSERT INTO tree (tree_species_id, tree_location, tree_install_date)
- VALUES
- (1, '(1,1)', 'today'),
- (1, '(1,1)', 'today');
NOTICE:  AFTER INSERT of new tree (20,1,(1,1),2009-05-21,), there are 0 
trees.
NOTICE:  AFTER INSERT of new tree (21,1,(1,1),2009-05-21,), there are 0 
trees.

INSERT 0 2


As a sanity check (on a fresh, truncated table):

= INSERT INTO tree (tree_species_id, tree_location, tree_install_date)
- VALUES (1, '(1,1)', 'today');
NOTICE:  AFTER INSERT of new tree (22,1,(1,1),2009-05-21,), there are 0 
trees.

INSERT 0 1

= INSERT INTO tree (tree_species_id, tree_location, tree_install_date)
- VALUES (1, '(1,1)', 'today');
NOTICE:  AFTER INSERT of new tree (23,1,(1,1),2009-05-21,), there are 1 
trees.

ERROR:  Conflicting trees


I notice the row count does not reflect the newly-inserted row, which 
suggests that the trigger is not seeing changes made to the table. This 
seems to be exactly opposite of what's in the manual:

http://www.postgresql.org/docs/8.3/interactive/trigger-datachanges.html
http://www.postgresql.org/docs/8.3/interactive/trigger-example.html

Am I doing something wrong here? Have I misunderstood the manual? Have I 
found a bug? Any help is greatly appreciated, as this check is pretty key 
to what I'm trying to do.


Thanks.

Karl Nack

Futurity, Inc.
773-506-2007

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] After each row trigger NOT seeing data changes?

2009-05-21 Thread Tom Lane
Karl Nack karln...@futurityinc.com writes:
 I notice the row count does not reflect the newly-inserted row, which 
 suggests that the trigger is not seeing changes made to the table. This 
 seems to be exactly opposite of what's in the manual:
 http://www.postgresql.org/docs/8.3/interactive/trigger-datachanges.html
 http://www.postgresql.org/docs/8.3/interactive/trigger-example.html

The reason is that you've declared the function STABLE, which causes it
to use the calling query's starting snapshot.  So it cannot see any
in-progress changes of the calling query.  Declare it VOLATILE (or
let it default to that) and it will act as you expect.
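 
Concretely, the fix is one word on the declaration (body elided here):

CREATE OR REPLACE FUNCTION check_unique_tree()
RETURNS trigger
AS $$
BEGIN
    -- ... same checks as before ...
    RETURN NULL;
END;
$$ LANGUAGE plpgsql VOLATILE;  -- or just omit STABLE; VOLATILE is the default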

I'm not sure if the cited portions of the manual ought to contain notes
about this or not.  It seems a bit off-topic for them, but if other
people have been bit by this, then maybe ... comments anyone?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] After each row trigger NOT seeing data changes?

2009-05-21 Thread Rodrigo Gonzalez

You have to change RETURN NULL; to RETURN NEW;



On 05/21/2009 04:57 PM, Karl Nack wrote:

I'm attempting to create an inventory of trees. Here's a simplified,
sample table:

CREATE TABLE tree (
tree_id SERIAL PRIMARY KEY,
tree_species_id INT NOT NULL REFERENCES tree_species,
tree_location POINT NOT NULL,
tree_install_date DATE NOT NULL,
tree_removal_date DATE,
CHECK (tree_removal_date > tree_install_date)
);


I need to ensure that no two trees are located in the same place at the
same time:

CREATE OR REPLACE FUNCTION check_unique_tree()
RETURNS trigger
AS $$
DECLARE
num_trees INT;
BEGIN
-- just to see what's going on
SELECT COUNT(tree_id) INTO num_trees FROM tree;
RAISE NOTICE '% % of new tree %, there are % trees.',
TG_WHEN, TG_OP, NEW, num_trees;

PERFORM tree_id
FROM tree
WHERE
-- first condition prevents updated tree from matching with itself
NEW.tree_id <> tree_id
AND NEW.tree_location ~= tree_location
AND NEW.tree_install_date <
COALESCE(tree_removal_date, timestamp 'infinity')
AND COALESCE(NEW.tree_removal_date, timestamp 'infinity') >
tree_install_date;

IF FOUND THEN
RAISE EXCEPTION 'Conflicting trees';
END IF;

RETURN NULL;
END;
$$ LANGUAGE plpgsql STABLE;


CREATE TRIGGER check_unique_tree
AFTER INSERT OR UPDATE ON tree
FOR EACH ROW EXECUTE PROCEDURE check_unique_tree();


And yet, I'm able to do this:

= INSERT INTO tree (tree_species_id, tree_location, tree_install_date)
- VALUES
- (1, '(1,1)', 'today'),
- (1, '(1,1)', 'today');
NOTICE: AFTER INSERT of new tree (20,1,(1,1),2009-05-21,), there are 0
trees.
NOTICE: AFTER INSERT of new tree (21,1,(1,1),2009-05-21,), there are 0
trees.
INSERT 0 2


As a sanity check (on a fresh, truncated table):

= INSERT INTO tree (tree_species_id, tree_location, tree_install_date)
- VALUES (1, '(1,1)', 'today');
NOTICE: AFTER INSERT of new tree (22,1,(1,1),2009-05-21,), there are 0
trees.
INSERT 0 1

= INSERT INTO tree (tree_species_id, tree_location, tree_install_date)
- VALUES (1, '(1,1)', 'today');
NOTICE: AFTER INSERT of new tree (23,1,(1,1),2009-05-21,), there are 1
trees.
ERROR: Conflicting trees


I notice the row count does not reflect the newly-inserted row, which
suggests that the trigger is not seeing changes made to the table. This
seems to be exactly opposite of what's in the manual:
http://www.postgresql.org/docs/8.3/interactive/trigger-datachanges.html
http://www.postgresql.org/docs/8.3/interactive/trigger-example.html

Am I doing something wrong here? Have I misunderstood the manual? Have I
found a bug? Any help is greatly appreciated, as this check is pretty
key to what I'm trying to do.

Thanks.

Karl Nack

Futurity, Inc.
773-506-2007




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] After each row trigger NOT seeing data changes?

2009-05-21 Thread Raymond O'Donnell
On 21/05/2009 21:36, Tom Lane wrote:
 Karl Nack karln...@futurityinc.com writes:

 http://www.postgresql.org/docs/8.3/interactive/trigger-datachanges.html
 http://www.postgresql.org/docs/8.3/interactive/trigger-example.html

 I'm not sure if the cited portions of the manual ought to contain notes
 about this or not.  It seems a bit off-topic for them, but if other
 people have been bit by this, then maybe ... comments anyone?

Maybe just insert a brief reference to the relevant section(s) in the manual
on STABLE and family? - e.g. "See the following topics on data
visibility..." or something like that.

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] After each row trigger NOT seeing data changes?

2009-05-21 Thread Karl Nack

The reason is that you've declared the function STABLE


Yes, that did it!


I'm not sure if the cited portions of the manual ought to contain notes
about this or not.  It seems a bit off-topic for them, but if other
people have been bit by this, then maybe ... comments anyone?


Perhaps this should be documented on the page describing CREATE FUNCTION 
(which is what I referenced when I wrote the function)?


In particular, the wording describing IMMUTABLE, STABLE and VOLATILE doesn't 
seem entirely accurate:


STABLE ... is the appropriate selection for functions whose results 
depend on database lookups, parameter variables (such as the current time 
zone), etc.


Apparently not the case for after-update triggers that need to reference 
the just-updated table.


Regardless, thank you very much for the help!


Karl Nack

Futurity, Inc.
773-506-2007



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres, DB design, and object IDs (of any ilk)

2009-05-21 Thread Chris Browne
kentil...@gmail.com (Kenneth Tilton) writes:
 Just looking for postgres best practices input from the veterans:

OIDs are decidedly a bad idea; the difference between natural IDs
and surrogate IDs is a general database issue that is fairly well
documented in the literature and is not notably a PostgreSQL-specific
issue.

There are competing doctrines, basically between the respective
beliefs:

 a) Some believe that there should always be a natural primary key,
and that it is wrong to attempt to use surrogates

 b) Others contend that even when users claim to provide natural
primary keys that they are actually lying when they suggest
certainty about this

Major bashing can take place back and forth.
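 
For illustration, the two camps in table form (names invented):

-- (a) natural key: a business attribute is the primary key
CREATE TABLE country (
    iso_code  char(2) PRIMARY KEY,
    name      text NOT NULL
);

-- (b) surrogate key: a generated id, with the natural candidate kept
-- UNIQUE so the data is still protected if the "natural" key turns
-- out not to be so natural after all
CREATE TABLE country2 (
    id        serial PRIMARY KEY,
    iso_code  char(2) NOT NULL UNIQUE,
    name      text NOT NULL
);
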
-- 
output = (cbbrowne @ acm.org)
http://cbbrowne.com/info/wp.html
Rules  of  the Evil  Overlord  #100.  Finally,  to keep  my  subjects
permanently locked in  a mindless trance, I will  provide each of them
with free unlimited Internet access. http://www.eviloverlord.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: Re: [GENERAL] Can not decompress a compressed string under plpy!

2009-05-21 Thread Timmy
  Hi,
 I have stored a compressed string in a table field.
  The compressed string is created by zlib using python.
  I want to decompress this string by zlib under plpy but plpy fails to do
  so. Is plpy not support decompression? Why?
 
  Thanks!
 
 A bit more information is required. What version of Postgres are you using,
 what OS?  I am guessing that the compressed string is being created by a
 Python program external to the database and then inserted into a field, is
 this correct? You are then trying to decompress the string via a plpythonu
 function within the database, correct? Did you import the zlib module into
 the plpythonu function?
 
 -- 
 Adrian Klaver
 akla...@comcast.net
 

Yes, most of your guesses are correct.
I'm using PostgreSQL 8.3.x on MS Windows 2000.
The compressed string is saved to the table in binary 
format using psycopg; I set the table field to the bytea data type.
I want to use plpythonu to decompress the stored 
string. 
Although I imported the zlib module, it still failed.
I can successfully get back the decompressed string if 
I use psycopg, either inside or outside plpythonu.
But without psycopg, plpythonu itself cannot decompress the string.
I can use psycopg inside plpythonu, but then I must write security 
information (including user 
name, password, server IP, port number, ...) into the connection string 
inside plpythonu too. 
This is a problem.

Thanks!





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] running postgresql on a private machine accessing it from public web pages

2009-05-21 Thread Reid Thompson

Keith D. Evans wrote:
We usually run postgresql on a private machine (gravity) but due to 
space, we have moved the database to a different machine through afs (a 
local network at the university). We do not want the private machine to 
be accessible to outside users, so our web pages are on the university's 
public web pages. They have mysql, but won't install postgresql, which 
we've been using on gravity for years.


We want to add a database where users can get and plot data using php 
over the internet. But since the public computers don't have postgresql, 
can we have postgresql running on gravity allowing users accessing 
through the internet on the public pages to access the data?


thanx,
keith



gravity = postgresql host = private
webpages = university web server = public

So: the user accesses a web page on the university web server, and that web 
page (web app) holds or creates a connection to the PostgreSQL server on 
gravity. Yes, the user can access a web page on the public server and pull 
data from the PostgreSQL server on the private server, as long as PostgreSQL 
is configured to accept the connection from the web server and network 
connectivity exists between the public and private servers.


Does this answer what you are asking?


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: Re: Re: [GENERAL] Can not decompress a compressed string under plpy!

2009-05-21 Thread Adrian Klaver
On Thursday 21 May 2009 3:26:22 pm Timmy wrote:


 Yes, most of your guess is correct.
 I'm using postgresql 8.3.x and ms windows 2000.
 The compressed string is saved to the table in binary
 format using the psycopg. I had set the table field to bytea data type.
 I want to use the plpythonu to decompress the stored
 string.
 Although I imported the zlib module, it still failed.
 I can successfully get back the decompressed string if
 I use the psycopg either inside or outside the plpythonu.
 But if without psycopg, plpythonu itself can not decompress the string.
 I can use the psycopg inside the plpythonu but I must need to write
 security information (including user name,password,server ip,port
 number,...) to the connection string inside plpythonu too. This is a
 problem.

 Thanks!

Now you are getting outside my experience level. As a guess though I would say 
that psycopg is taking care of escaping the data on the way out, whereas your 
plpythonu function is not. 
See www.postgresql.org/docs/8.3/interactive/datatype-binary.html for more 
information.
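 
As a sketch of that guess -- assuming (and this is only an assumption
about 8.3-era PL/Python) that the bytea argument arrives as its escaped
text form rather than raw bytes:

CREATE OR REPLACE FUNCTION unzip(bytea) RETURNS text AS $$
    import zlib
    # Undo bytea escape-format octal escapes ('\\234' etc.) to get raw
    # bytes, then decompress.  Python 2's string_escape codec does this.
    raw = args[0].decode('string_escape')
    return zlib.decompress(raw)
$$ LANGUAGE plpythonu;

-- then: SELECT unzip(compressed_field) FROM mytable;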

-- 
Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] cascade deleting

2009-05-21 Thread Duan Ligong
Hi.

I am wondering whether the following problem can 
be avoided by using cascade deletes while the DB 
is at the Read Committed isolation level:

In the following scenario, the DEL item1 from tb_pk 
command fails because it violates foreign key 
constraints.

(The tb_fk1, tb_fk2, and tb_fk3 tables reference the 
tb_pk table)
Transaction A   Transaction B
  begin()|
   |   |
DEL item1 from tb_fk1  begin()
  |   |
DEL item1 from tb_fk2   INS item1 into tb_fk1
  |  |
DEL item1 from tb_fk3   commit()
  |
DEL item1 from tb_pk
  |   
  |( failed because it violates a foreign key constraint.)
  |
commit()


My real question is:
Will executing CASCADE DEL item1 from tb_pk fail
if another transaction inserts item1 into tb_fk1
while the cascade delete is executing?

Thanks
Duan


---
Duan Ligong
E-MAIL  : dua...@nec-as.nec.com.cn
Tel: 010-82334433-354
: 8-0086-22-354


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to restore a SQL-ASCII encoded database to a new UTF-8 db?

2009-05-21 Thread Postgres User
Hi,

I have a database that was created with SQL-ASCII encoding
(unfortunately).  I ran pg_restore to load the structure and data into a
new database with UTF-8 encoding, but, no surprise, I'm seeing this
error for a number of tables:

pg_restore: [archiver (db)] COPY failed: ERROR:  invalid byte sequence for encoding UTF8

Any idea on how I can copy the data between these databases without
any data loss?  For some reason I thought that a conversion to Unicode
would be easy.

Thanks

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tuning resource parameters for a logging database.

2009-05-21 Thread Alex Thurlow
I was hoping not to have to change all my code to automate the 
partition table creation stuff, but if that's really the best way, 
I'll check it out.  Thanks for the advice.



Alex Thurlow
Blastro Networks

http://www.blastro.com
http://www.roxwel.com
http://www.yallwire.com


On 5/21/2009 2:24 PM, Vick Khera wrote:

On Thu, May 21, 2009 at 3:13 PM, Alex Thurlow a...@blastro.com wrote:
I have a postgresql database that I'm using for logging of data. There's
basically one table where each row is a line from my log files.  It's
getting to a size where it's running very slow though.  There are about 10
million log lines per day and I keep 30 days of data in it. All the columns

Are you using partitioning on this table?  Your use case is literally
the exact example everyone uses to show how to do partitioning on
tables.

Since you mostly scan on date, this will speed up your queries significantly.