[GENERAL] PostgreSQL CE?

2009-02-22 Thread steve . gnulinux
I would just like to know whether there is currently a PostgreSQL Certified
Engineer program, and if so, where it is offered.
I think that obtaining a PostgreSQL CE together with some kind of Linux
certification such as RHCE or LPI could be of great interest.
Finally, I would like to know whether there is any English-language book for
PostgreSQL certification, as there is in Japanese.

Thank you for your time,

Steve,


Re: [GENERAL] PostgreSQL CE?

2009-02-22 Thread Gerd Koenig

Hi Steve,

I know that EnterpriseDB offers 3 levels of certification.
Perhaps one of them suits your needs..?!?!

regards...:GERD:...





Re: [GENERAL] NOVALIDATE in postgresql?

2009-02-22 Thread Adrian Klaver
On Friday 20 February 2009 7:57:32 pm decibel wrote:
> On Feb 19, 2009, at 1:49 PM, Adrian Klaver wrote:
> > From the Oracle manual:
> > ENABLE NOVALIDATE means the constraint is checked for new or
> > modified rows, but existing data may violate the constraint.
> >
> > So you are looking for an incomplete constraint?
>
> More likely they want to add a constraint but can't afford the time
> it would take to scan the table while holding an exclusive lock. At
> least that's the situation we're facing at work.

I get it now: basically validate on demand, so the cost is spread out instead
of being incurred all at once at the ALTER TABLE command.
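
For reference, a sketch of the two behaviors with a hypothetical table and
constraint:

  -- Oracle: the constraint applies to new or modified rows only;
  -- existing rows are not scanned
  ALTER TABLE orders ADD CONSTRAINT orders_qty_chk
    CHECK (qty > 0) ENABLE NOVALIDATE;

  -- PostgreSQL: adding the constraint validates every existing row
  -- while holding an exclusive lock on the table
  ALTER TABLE orders ADD CONSTRAINT orders_qty_chk CHECK (qty > 0);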

>
> FWIW, I've been talking to Command Prompt about developing a fix for
> this, targeting inclusion in 8.5. I think Alvaro and I have come up
> with a reasonable plan, but there hasn't been time to present it to
> the community yet.



-- 
Adrian Klaver
akla...@comcast.net



[GENERAL] Changing Postgresql 7.4.3 to 8.1.11 !!!

2009-02-22 Thread Angelo Astorga
I had Red Hat Enterprise 3.0 with PostgreSQL 7.4.3 and PHP 4.3.2. Out of
hardware necessity I migrated everything to Red Hat Enterprise 5.3 with
PostgreSQL 8.1.11 and PHP 5.1.6 (the operating system default). While I can
create and restore the database, I cannot access it from my PHP web
application, only from the console, i.e. # psql nombre_base_datos...  Could
this be a compatibility issue with PHP, PostgreSQL, or both? Any help on the
matter would be appreciated!

IMPORTANT: I was able to restore the PostgreSQL database with: # psql
nombre_base_dato < nombre_archivo_plano_almacena_base_datos ... but
restoring with # pg_restore -d nombre_base_dato
nombre_archivo_plano_almacena_base_datos ... does not work; it reports an
error...
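
For what it's worth, pg_restore reads only archive-format dumps made with
pg_dump -Fc or -Ft; a plain SQL dump has to be replayed through psql. A
sketch, keeping the poster's placeholder names:

  # plain-text dump: replay through psql
  psql nombre_base_dato < nombre_archivo_plano_almacena_base_datos

  # pg_restore wants a custom-format archive, e.g.:
  pg_dump -Fc nombre_base_dato > nombre_base_dato.dump
  pg_restore -d nombre_base_dato nombre_base_dato.dump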

AAstorga


[GENERAL] question on viewing dependencies

2009-02-22 Thread Aaron Burnett

Hi,

PostgreSQL version 8.2.5 running on RHEL4

Hopefully a quick answer. Went to drop a table:

drop table table_foo;
ERROR:  cannot drop table table_foo because other objects depend on it
HINT:  Use DROP ... CASCADE to drop the dependent objects too.

Wanted to see what the dependencies were:

BEGIN;
drop table table_foo CASCADE;
DROP TABLE
ROLLBACK;

Am I overlooking a step to actually see the dependent objects?

Thanking you in advance,

Aaron







Re: [GENERAL] Mammoth replicator

2009-02-22 Thread Devrim GÜNDÜZ
On Wed, 2009-02-18 at 13:55 -0200, Martín Marqués wrote:
> And finally, a question related to the installation: are there Debian
> binaries to install replicator?

Not currently, but we are working on it.

Regards,
-- 
Devrim GÜNDÜZ, RHCE
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
   http://www.gunduz.org




[GENERAL] PERFORM

2009-02-22 Thread c k
Hello,
I have a small problem: the following statement executes without error but
does not give me proper results; the subquery calling another function is
not executed at all.

  perform sometable.pk, (select * from somefunction) from sometable where
  somecondition;

But select * from somefunction; and perform * from somefunction; and also
select sometable.pk, (select * from somefunction) from sometable where
somecondition; all get executed properly, with all side effects.
Is there anything wrong with my first statement? Or can PERFORM not be used
to make a join the way SELECT can?
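
For context, PERFORM is only valid inside a PL/pgSQL function body; a
minimal sketch of the failing statement in that setting, keeping the
poster's placeholder names:

  create or replace function demo() returns void as $$
  begin
      -- runs the query and discards its result; the question is why the
      -- (select * from somefunction()) subquery's side effects never fire
      perform sometable.pk, (select * from somefunction())
         from sometable
        where somecondition;
  end;
  $$ language plpgsql;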

Thanks,
CPK


Re: [GENERAL] question on viewing dependencies

2009-02-22 Thread Tom Lane
Aaron Burnett  writes:
> Hopefully a quick answer. Went to drop a table:

> drop table table_foo;
> ERROR:  cannot drop table table_foo because other objects depend on it
> HINT:  Use DROP ... CASCADE to drop the dependent objects too.

> Wanted to see what the dependencies were:

> BEGIN;
> drop table table_foo CASCADE;
> DROP TABLE
> ROLLBACK;

> Am I overlooking a step to actually see the dependent objects?

Maybe you have client_min_messages set to suppress NOTICEs?
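
A minimal way to check and retry (table name from the original post):

  SHOW client_min_messages;          -- 'notice' is the default
  SET client_min_messages = notice;
  BEGIN;
  DROP TABLE table_foo CASCADE;      -- emits one NOTICE per dependent object
  ROLLBACK;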

regards, tom lane



Re: [GENERAL] question on viewing dependencies

2009-02-22 Thread Aaron Burnett

Thanks Tom,

It was not suppressed for NOTICEs, so I changed it to 'debug1' and that gave
me the answers I was looking for.


On 2/22/09 6:07 PM, "Tom Lane"  wrote:

> Aaron Burnett  writes:
>> Hopefully a quick answer. Went to drop a table:
> 
>> drop table table_foo;
>> ERROR:  cannot drop table table_foo because other objects depend on it
>> HINT:  Use DROP ... CASCADE to drop the dependent objects too.
> 
>> Wanted to see what the dependencies were:
> 
>> BEGIN;
>> drop table table_foo CASCADE;
>> DROP TABLE
>> ROLLBACK;
> 
>> Am I overlooking a step to actually see the dependent objects?
> 
> Maybe you have client_min_messages set to suppress NOTICEs?
> 
> regards, tom lane
> 




Re: [GENERAL] Changing Postgresql 7.4.3 to 8.1.11 !!!

2009-02-22 Thread Mike Hall
Could this be a problem with SELinux?

Try:

/usr/sbin/getsebool -a | grep httpd

Look for:

httpd_can_network_connect_db

The value of this boolean should be "on".
It can be changed permanently with:

/usr/sbin/setsebool -P httpd_can_network_connect_db on

Good luck





[GENERAL] Backup Strategy Second Opinion

2009-02-22 Thread Bryan Murphy
Hey guys, we just moved our system to Amazon's EC2 service.  I'm a bit
paranoid about backups, and this environment is very different from our
previous one.  I was hoping you guys could point out any major flaws in
our backup strategy that I may have missed.

A few assumptions:

1. It's OK if we lose a few seconds (or even minutes) of transactions
should one of our primary databases crash.
2. It's unlikely we'll need to load a backup that's more than a few days old.

Here's what we're currently doing:

Primary database ships WAL files to S3.
Snapshot primary database to tar file.
Upload tar file to S3.

Create secondary database from tar file on S3.
Put secondary database into continuous recovery mode, pulling WAL files from S3.
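
Presumably that shipping is wired up through archive_command and
restore_command; a sketch, with the S3 transfer tools left as placeholders:

  # postgresql.conf on the primary ('s3-put' stands in for the upload tool)
  archive_mode = on
  archive_command = 's3-put %p s3://backup-bucket/wal/%f'

  # recovery.conf on the secondary ('s3-get' stands in for the download tool)
  restore_command = 's3-get s3://backup-bucket/wal/%f %p'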

Every night on the secondary database (a sketch of this sequence follows the list):
  * shutdown postgres
  * unmount ebs volume that contains postgres data
  * create new snapshot of ebs volume
  * remount ebs volume
  * restart postgres
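
A minimal shell sketch of that sequence, with the device, mount point, and
volume ID as assumptions:

  #!/bin/sh
  # hypothetical layout: data in /pgdata on EBS device /dev/sdf, volume vol-XXXXXXXX
  pg_ctl -D /pgdata stop -m fast        # shutdown postgres
  umount /pgdata                        # unmount the EBS volume
  ec2-create-snapshot vol-XXXXXXXX      # create new snapshot (EC2 API tools)
  mount /dev/sdf /pgdata                # remount the volume
  pg_ctl -D /pgdata start               # restart postgres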

I manually delete older log files and snapshots once I've verified
that a newer snapshot can be brought up as an active database and have
run a few tests on it.

Other than that, we have some miscellaneous monitoring to keep track of the
number of log files in the pg_xlog directory and the amount of available
disk space on all the servers.  Ideally, if the number of log files starts
to grow beyond a certain threshold, that indicates something went wrong with
the log shipping and we'll investigate to see what the problem is.
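
The sort of check that monitoring might run (threshold and path are
assumptions):

  # warn if more than 64 WAL segments have piled up in pg_xlog
  [ "$(ls /pgdata/pg_xlog | wc -l)" -gt 64 ] && echo 'WAL shipping may be stuck'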

I think this is a pretty good strategy, but I've been so caught up in this
that I may not be seeing the forest for the trees, so I thought I'd ask for
a sanity check here.

Thanks,
Bryan



Re: [GENERAL] Backup Strategy Second Opinion

2009-02-22 Thread Tim Uckun
>
> 1. It's OK if we lose a few seconds (or even minutes) of transactions
> should one of our primary databases crash.
> 2. It's unlikely we'll need to load a backup that's more than a few days
> old.
>

How do you handle failover and falling back to the primary once it's up?


Re: [GENERAL] Backup Strategy Second Opinion

2009-02-22 Thread Bryan Murphy
On Sun, Feb 22, 2009 at 7:30 PM, Tim Uckun  wrote:
>> 1. It's OK if we lose a few seconds (or even minutes) of transactions
>> should one of our primary databases crash.
>> 2. It's unlikely we'll need to load a backup that's more than a few days
>> old.
>
> How do you handle failover and falling back to the primary once it's up?

We don't plan to fail back to the primary.  Amazon is a very different
beast: once a server is dead, we just toss it away.  The secondary
permanently becomes the primary, and we create a new tertiary from
scratch, which then becomes a log-shipped copy of the secondary.

Bryan



Re: [GENERAL] PostgreSQL clustering with DRBD

2009-02-22 Thread Tim Uckun
On Wed, Feb 11, 2009 at 11:24 PM, Serge Fonville wrote:

> Hi,
> I am in the process of setting up a two node cluster.
> Can PostgreSQL use DRBD as its storage?
> Since the in-memory database would be synchronized with the on-disk
> database.
> If this would be done with every query, this would greatly impact
> performance.
> Since the cluster will be multi-master/dual-primary, do I need to have a
> separate block device for each PostgreSQL instance or can it use the DRBD
> device?
> I read mostly about MySQL clustering with DRBD and there the query cache
> should be disabled to make sure data is in-sync.
> To me it seems something similar would apply to PostgreSQL.
> I believe cybercluster is the most active and complete PostgreSQL
> clustering solution.
> My end goal is a two-node cluster with load sharing and failover where both
> nodes can perform reads and writes.
>


After reading your post I decided to check out cybercluster.  On pgFoundry
there is a cybercluster project at
http://pgfoundry.org/projects/cybercluster/ but it hasn't been updated
since 2007.

Is that the one you are talking about, or is there another cybercluster I
should be looking at?

Also, is there an article or something that compares the different HA
solutions for Postgres? What are the differences between pgpool, pgcluster,
cybercluster, etc.?


Any HOWTOs anywhere?

Thanks.


[GENERAL] High cpu usage after many inserts

2009-02-22 Thread Jordan Tomkinson
Hi list,

We are running PostgreSQL 8.3.5 and are trying to stress test our LMS.
The problem is that once our stress tester (JMeter) has inserted around
10,000 rows (in 3 hours) across 2 tables (5,000 rows per table), the CPU of
the SQL server hits 100% on all 4 cores for all future inserts.

I have tried numerous things to get the CPU back down, but so far the only
thing that works is deleting the 10,000 rows JMeter inserted.

For more information on the problem along with a time stamped list of test
results and outcomes please see
http://spreadsheets.google.com/pub?key=pu_k0R6vNvOVP26TRZdtdYw

Any help would be appreciated

Regards,

Jordan Tomkinson
System Administrator
Moodle HQ


Re: [GENERAL] High cpu usage after many inserts

2009-02-22 Thread Scott Marlowe
On Sun, Feb 22, 2009 at 11:55 PM, Jordan Tomkinson  wrote:
> Hi list,
>
> We are running postgresql 8.3.5 and are trying to stress test our LMS.
> The problem is when our stress tester (Jmeter) inserts around 10,000 rows
> (in 3 hours) over 2 tables (5000 rows each table) the CPU of the sql server
> hits 100% over all 4 cores for all future inserts.
>
> I have tried numerous things to get the cpu back down but so far the only
> thing that works is deleting the 10,000 rows Jmeter inserted.
>
> For more information on the problem along with a time stamped list of test
> results and outcomes please see
> http://spreadsheets.google.com/pub?key=pu_k0R6vNvOVP26TRZdtdYw

Can you post the JMeter files?  Or create an SQL test case?  I haven't
had this problem myself, so I'm guessing something in your method or
something in your schema is setting something strange off.  Or the
background writer is busy writing all the changes out after the fact
while the database is catching its breath from the heavy run.  10,000
rows over three hours isn't really a whole lotta work unless those are
really wide rows.

Oh, what is an LMS?



Re: [GENERAL] High cpu usage after many inserts

2009-02-22 Thread Scott Marlowe
On Sun, Feb 22, 2009 at 11:55 PM, Jordan Tomkinson  wrote:
> Hi list,
>
> We are running postgresql 8.3.5 and are trying to stress test our LMS.
> The problem is when our stress tester (Jmeter) inserts around 10,000 rows
> (in 3 hours) over 2 tables (5000 rows each table) the CPU of the sql server
> hits 100% over all 4 cores for all future inserts.

And just to clarify, this is user / system CPU usage, not IO wait, right?



Re: [GENERAL] High cpu usage after many inserts

2009-02-22 Thread Scott Marlowe
One last thing.  You were doing vacuum fulls but NOT reindexing, right?

I quote from the document at google docs:
13:50:00  vacuum full & analyze on all databases through pgadmin

1: Do you have evidence that regular autovacuum isn't keeping up?
2: If you have such evidence, and you have to vacuum full, vacuum full
doesn't really shrink indexes all that well.

For a heavily updated database, the 1-2-3 punch of autovacuum (adjusted
properly!), the background writer (adjusted properly) smoothing things out,
and HOT updates reusing all the space autovacuum is constantly reclaiming
means you should be able to avoid routine vacuum fulls.  It's made a huge
difference in db maintenance for me.
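
The knobs in question live in postgresql.conf; the values below are
illustrative assumptions, not recommendations from this thread:

  autovacuum = on
  autovacuum_naptime = 1min                # how often workers wake up
  autovacuum_vacuum_scale_factor = 0.1     # vacuum at ~10% dead tuples
  autovacuum_analyze_scale_factor = 0.05   # analyze at ~5% changed tuples
  bgwriter_lru_maxpages = 200              # background writer write-rate cap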

Still, I do find myself in vacuum full territory once or twice a year (a
rogue update or something like that on a live database).  If you do have
to vacuum full, then reindex.  Or cluster on your favorite index.
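
In SQL terms, with a hypothetical table and primary-key index:

  VACUUM FULL mytable;
  REINDEX TABLE mytable;                -- rebuild what VACUUM FULL bloated
  -- or rewrite the table in index order instead:
  CLUSTER mytable USING mytable_pkey;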



Re: [GENERAL] High cpu usage after many inserts

2009-02-22 Thread Jordan Tomkinson
On Mon, Feb 23, 2009 at 4:03 PM, Scott Marlowe wrote:

> On Sun, Feb 22, 2009 at 11:55 PM, Jordan Tomkinson 
> wrote:
> > Hi list,
> >
> > We are running postgresql 8.3.5 and are trying to stress test our LMS.
> > The problem is when our stress tester (Jmeter) inserts around 10,000 rows
> > (in 3 hours) over 2 tables (5000 rows each table) the CPU of the sql
> server
> > hits 100% over all 4 cores for all future inserts.
>
> And just to clarify, this is user / system CPU usage, not IO wait, right?
>

I am unable to post the JMeter file as it contains sensitive user/pass
details, but the tests simply log in to a forum, create a new forum post,
then log out.
SQL-wise this performs several SELECTs and 3 INSERTs over 3 different
tables.

How does one create an SQL test case?
LMS is Learning Management System, in this case Moodle (moodle.org).

Yes, this is user-space CPU usage.

Running iostat -k 2 shows:

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda              31.50         0.00       456.00          0        912

so not a lot of disk writes.


Re: [GENERAL] High cpu usage after many inserts

2009-02-22 Thread Jordan Tomkinson
On Mon, Feb 23, 2009 at 4:08 PM, Scott Marlowe wrote:

> One last thing.  You were doing vacuum fulls but NOT reindexing, right?
>
> I quote from the document at google docs:
> 13:50:00  vacuum full & analyze on all databases through pgadmin
>
> 1: Do you have evidence that regular autovacuum isn't keeping up?
> 2: If you have such evidence, and you have to vacuum full, vacuum full
> doesn't really shrink indexes all that well.
>
> For a heavily updated database, the 1, 2, 3 punch of autovacuum
> (adjusted properly!), the background writer (adjusted properly)
> smoothing things out, and the HOT updates reusing all that space
> autovacuum is constantly reclaiming, meaning you should be able to
> avoid routine vacuum fulls.  It's made a huge difference in db
> maintenance for me.
>
> Still I do find myself in vacuum full territory once or twice a year
> (rogue update or something like that on a live database).  If you do
> have to vacuum full then reindex.  OR cluster on your favorite index.
>

I have no evidence of autovacuum not working; the manual full was done for
the purpose of elimination.


Re: [GENERAL] PostgreSQL clustering with DRBD

2009-02-22 Thread Gerd König
Hello,

The pgFoundry project seems to be the initial home of cybercluster, which is
offered by CyberTec from Austria (German page:
http://www.postgresql-support.de/pr_cybercluster.html).
As far as I know it is a modified/adapted pgcluster solution.

We're very happy with pgpool-II for load balancing and multi-master usage
of PostgreSQL (keep in mind that pgpool-II itself needs HA to avoid a
SPOF, e.g. with Heartbeat).
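
A minimal pgpool.conf sketch of that kind of setup (hostnames are
placeholders):

  backend_hostname0 = 'db1'     # first PostgreSQL backend
  backend_port0 = 5432
  backend_hostname1 = 'db2'     # second backend
  backend_port1 = 5432
  replication_mode = true       # send writes to both backends
  load_balance_mode = true      # spread read queries across backends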

regards...GERD...

Tim Uckun wrote:
> 
> 
> On Wed, Feb 11, 2009 at 11:24 PM, Serge Fonville wrote:
> 
> Hi,
> 
> I am in the process of setting up a two node cluster.
> Can PostgreSQL use DRBD as its storage?
> Since the in-memory database would be synchronized with the on-disk
> database.
> If this would be done with every query, this would greatly impact
> performance.
> Since the cluster will be multi-master/dual-primary, do I need to
> have a separate block device for each PostgreSQL instance or can it
> use the DRBD device?
> I read mostly about MySQL clustering with DRBD and there the query
> cache should be disabled to make sure data is in-sync.
> To me it seems something similar would apply to PostgreSQL.
> I believe cybercluster is the most active and complete PostgreSQL
> clustering solution.
> My end goal is a two-node cluster with load sharing and failover
> where both nodes can perform reads and writes.
> 
> 
> 
> After reading your post I decided to check out cybercluster.   In
> PgFoundry there is a cybercluster project
> http://pgfoundry.org/projects/cybercluster/ but it hasn't been updated
> since 2007.
> 
> Is that the one you are talking about or is there another cybercluster I
> should be looking at.
> 
> Also
> 
> 
> Is there an article or something that compares the different HA
> solutions for postgres? What are the differences between pgpool,
> pgcluster, cybercluster etc?
> 
> 
> Any HOWTOs anywhere?
> 
> Thanks.
> 
> 

-- 
/===\
| Gerd König
| - Infrastruktur -
|
| TRANSPOREON GmbH
| Pfarrer-Weiss-Weg 12
| DE - 89077 Ulm
|
|
| Tel: +49 [0]731 16906 16
| Fax: +49 [0]731 16906 99
| Web: www.transporeon.com
|
\===/





Re: [GENERAL] High cpu usage after many inserts

2009-02-22 Thread Scott Marlowe
On Mon, Feb 23, 2009 at 12:18 AM, Jordan Tomkinson  wrote:
>
>
> On Mon, Feb 23, 2009 at 4:08 PM, Scott Marlowe 
> wrote:
>>
>> One last thing.  You were doing vacuum fulls but NOT reindexing, right?
>>
>> I quote from the document at google docs:
>> 13:50:00  vacuum full & analyze on all databases through pgadmin
>>
>> 1: Do you have evidence that regular autovacuum isn't keeping up?
>> 2: If you have such evidence, and you have to vacuum full, vacuum full
>> doesn't really shrink indexes all that well.
>>
>> For a heavily updated database, the 1, 2, 3 punch of autovacuum
>> (adjusted properly!), the background writer (adjusted properly)
>> smoothing things out, and the HOT updates reusing all that space
>> autovacuum is constantly reclaiming, meaning you should be able to
>> avoid routine vacuum fulls.  It's made a huge difference in db
>> maintenance for me.
>>
>> Still I do find myself in vacuum full territory once or twice a year
>> (rogue update or something like that on a live database).  If you do
>> have to vacuum full then reindex.  OR cluster on your favorite index.
>
> I have no evidence of autovacuum not working, the manual full was done for
> purpose of elimination.

Oh, ok.  If you're trying to make a fair benchmark, you should
probably reindex after vacuum full.



Re: [GENERAL] High cpu usage after many inserts

2009-02-22 Thread Scott Marlowe
Oh yeah, what OS is this?  Version and all that.



Re: [GENERAL] High cpu usage after many inserts

2009-02-22 Thread Jordan Tomkinson
On Mon, Feb 23, 2009 at 4:29 PM, Scott Marlowe wrote:

> Oh yeah, what OS is this?  Version and all that.
>


Red Hat Enterprise Linux 5.3 x64 kernel 2.6.18-128.el5

OS and hardware details are in the Google spreadsheet; you might have to
refresh it.

I'm working on getting the SQL log for you now.


Re: [GENERAL] High cpu usage after many inserts

2009-02-22 Thread Jordan Tomkinson
On Mon, Feb 23, 2009 at 4:29 PM, Scott Marlowe wrote:

> Oh yeah, what OS is this?  Version and all that.
>

I should probably clarify that the high CPU only exists while the JMeter
tests are running; once the tests are finished the CPU returns to 0% (this
isn't a production server yet, so there are no queries other than my tests).
I have not yet tried other SQL queries to see if they are affected. I
suspect it may only be related to the two forum tables the test focuses on,
but I may be incorrect; the database is filling up with data again now, so
I can test this tomorrow.


[GENERAL] problems with win32 enterprisedb 8.3.6 ssl=on

2009-02-22 Thread raf
hi,

i've been getting nonsensical error messages all day with
postgres 8.3 on winxpsp3. i tried upgrading to 8.3.6
(enterprisedb) and fresh installs.

i get either of the following errors:

  PANIC: could not open control file
  "global/pg_control": Permission denied

  postgres cannot access the server configuration
  file "C:/app/postgres/data/postgresql.conf":
  No such file or directory

even though both files are present and accessible to
the postgres user. at some point i tried giving the postgres
user full control over these files and the directories
they were in but it made no difference. i didn't expect it
to since the permissions looked correct to begin with.

if i have ssl=off in my postgresql.conf file i can start the
server with "net start pg-plus-8.3" but:

if i try to start it with "pg_ctl start -D C:/app/postgres/data"
i get the postgresql.conf error message.

if i have ssl=on in my postgresql.conf file (and
server.{crt,key,req} files in the data directory) then "net
start pg-plus-8.3") doesn't work but it gives no reason (and
no log messages) and "pg_ctl start -D C:/app/postgres/data"
gives the global/pg_control error message.

no. i'm flipping ssl on and off now it's staying with the
global/pg_control error. i looked this error up in the
mailing list archives and only found 1 mention of it 2
years ago with no real resolution.

does anyone have an idea about what's happening. it seems i
can only run postgres on windows with ssl=off and i want to
force the use of ssl.
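
For the forcing part (independent of the startup problem), the usual lever
is pg_hba.conf; a sketch:

  # pg_hba.conf: accept only SSL-wrapped TCP connections (network is an example)
  hostssl  all  all  0.0.0.0/0  md5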

I'd also like the pg_ctl start/stop commands to work again, because they
give actual error messages, unlike the net start/stop commands. Are they
supposed to work with EnterpriseDB? I've seen them working on another
Windows EnterpriseDB installation that I set up a while ago.

cheers,
raf

