Re: [GENERAL] Using timestamp(tz) in C functions

2016-07-29 Thread Keith Fiske
On Fri, Jul 29, 2016 at 11:49 AM, Vitaly Burovoy 
wrote:

> On 7/29/16, Keith Fiske  wrote:
> > On Fri, Jul 29, 2016 at 12:53 AM, Vitaly Burovoy <
> vitaly.buro...@gmail.com>
> > wrote:
> >
> >> On 7/28/16, Keith Fiske  wrote:
> >> > Working on trying to get a C version of the maintenance function for
> my
> >> > pg_partman extension working so I can hopefully make it more flexible
> >> > and
> >> > efficient.
> >> >
> >>
> https://gist.github.com/keithf4/81c32bf8b689c74b20c10ad8c91d45a3#file-pg_partman_bgw-c-L532
> >> >
> >> > There's what I've got working so far and links directly to the area
> >> > where
> >> > I'm having a problem. I found the DatumGetTimeTzADTP() function and
> the
> >> > TimeTzADT data type looking through the source and that seems to be
> exactly
> >> > what I'm looking for. However, when I get to the point of trying to
> simply
> >> > use the time value in that variable (line 544), Postgres segfaults.
> So far
> >> > I've just been trying to print the value out to the log to ensure I'm
> >> > pulling it out correctly. The "time" value of the struct appears to be
> >> > an
> >> > int64, so I thought %ld would be correct, but even using %d or %s
> >> > fails.
> >> >
> >> > Thanks!
> >> >
> >> > --
> >> > Keith Fiske
> >> > Database Administrator
> >> > OmniTI Computer Consulting, Inc.
> >> > http://www.keithf4.com
> >> >
> >>
> >> I think it is not about timestamp(tz), but about usage of SPI.
> >> Since DatumGetTimeTzADTP is just a macro that performs a type conversion
> >> (declared at src/include/utils/date.h:60 (or 75)), the segfault happens not
> >> in it but when the code dereferences the pointer
> >> (last_partition_timestamp->time).
> >>
> >> Please answer these questions:
> >> 1. How many rows does SPI_execute return (the value of the "ret" variable)?
> >> 2. Is last_partition_timestamp != NULL? Where does it point?
> >> 3. Check SPI_result just after SPI_getbinval. Does it contain an error code?
> >
> >
> > It returns a single row. Here's an example of the results of the two
> > queries that are run that lead to providing the timestamp value
> >
> > keith=# select partition_tablename from
> > partman.show_partitions('partman_test.time_taptest_table', 'DESC') limit 1;
> >       partition_tablename
> > --------------------------------
> >  time_taptest_table_p2016_08_02
> > (1 row)
> >
> > keith=# select child_start_time from
> > partman.show_partition_info('partman_test.time_taptest_table_p2016_08_02',
> > '1 day', 'partman_test.time_taptest_table');
> >     child_start_time
> > ------------------------
> >  2016-08-02 00:00:00-04
> > (1 row)
> >
> > So there is valid data. As you're pointing out, this may just be a
> > misunderstanding of how to actually use the Datum retrieval function and
> C
> > in general. Appreciate the assistance.
> >
> > Keith
> >
>
> Please add the next lines (or debug these values via gdb) and post the result:
>
> ret = SPI_execute(buf.data, true, 1);
>
> ereport(NOTICE, (errmsg("query=%s", buf.data)));
> ereport(NOTICE, (errmsg("ret=%i -- rows=%ju", ret, SPI_processed)));
>
> last_partition_timestamp =
> DatumGetTimeTzADTP(SPI_getbinval(SPI_tuptable->vals[0],
> SPI_tuptable->tupdesc, 1, &isnull));
>
> ereport(NOTICE, (errmsg("SPI_result=%d -- ptr=%p", SPI_result,
> (void*)last_partition_timestamp )));
>
> /* elog(LOG, "Just seeing if it's time partitioned: %ld",
> last_partition_timestamp->time); --prevent segfaulting */
>
>
> --
> Best regards,
> Vitaly Burovoy
>

From the postgresql logs:

2016-07-29 11:59:39 EDT [] [2021]: [10-1] user=,db=,e=0 LOG:  Just
checking that this thing is working. Loop: 0, parent_table:
partman_test.time_taptest_table, partition_type: time, partition_interval:
00:15:00, control: col3, premake: 4, datetime_string: %_MM_DD_HH24MI,
undo_in_progress: 0, sub_partition_set_full: 0, epoch: 0,
infinite_time_partitions: 0
2016-07-29 11:59:39 EDT [] [2021]: [11-1] user=,db=,e=0 NOTICE:
query=SELECT child_start_time FROM
partman.show_partition_info('partman_test.time_taptest_table_p2016_07_29_1245',
'00:15:00', 'partman_test.time_taptest_tab
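
The crash has a simpler explanation than SPI itself: the psql output above shows child_start_time coming back as timestamp with time zone, whose Datum on a typical 64-bit build is the integer microsecond count itself, while DatumGetTimeTzADTP() is the conversion macro for time with time zone (timetz) and casts the Datum to a TimeTzADT pointer. Dereferencing that bogus pointer (last_partition_timestamp->time) is what segfaults. Below is a minimal, hedged sketch of the retrieval using the matching macro; it reuses the variable names from the gist quoted above (buf, ret, isnull), and the error messages are illustrative rather than the extension's actual code.

/*
 * Sketch: fetch a timestamptz column via SPI.
 * Requires #include "executor/spi.h" and #include "utils/timestamp.h".
 */
Datum       start_datum;
TimestampTz last_partition_timestamp;
bool        isnull;
int         ret;

ret = SPI_execute(buf.data, true, 1);
if (ret != SPI_OK_SELECT)
    elog(ERROR, "SPI_execute failed: %s", SPI_result_code_string(ret));
if (SPI_processed == 0)
    elog(ERROR, "show_partition_info() returned no rows");

start_datum = SPI_getbinval(SPI_tuptable->vals[0],
                            SPI_tuptable->tupdesc, 1, &isnull);
if (isnull)
    elog(ERROR, "child_start_time is unexpectedly NULL");

/*
 * timestamptz Datums convert with DatumGetTimestampTz();
 * DatumGetTimeTzADTP() is only correct for timetz columns.
 */
last_partition_timestamp = DatumGetTimestampTz(start_datum);

elog(LOG, "last partition start: %s",
     timestamptz_to_str(last_partition_timestamp));

From there, the interval arithmetic in the plpgsql version can be reproduced directly on TimestampTz values, which compare with plain < and >, or through DirectFunctionCall2(timestamptz_pl_interval, ...) to add the partitioning interval.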

Re: [GENERAL] Using timestamp(tz) in C functions

2016-07-29 Thread Keith Fiske
On Fri, Jul 29, 2016 at 12:53 AM, Vitaly Burovoy 
wrote:

> On 7/28/16, Keith Fiske  wrote:
> > Working on trying to get a C version of the maintenance function for my
> > pg_partman extension working so I can hopefully make it more flexible and
> > efficient.
> >
> >
> https://github.com/keithf4/pg_partman/blob/master/sql/functions/run_maintenance.sql
> >
> > Up until this point I've just been using SPI and the Datum conversion
> > functions (DatumGetCString, DatumGetInt32) to get string and numeric data
> > out of tables and things have been working fine. I'm at the first point
> in
> > that function where I'll need to start dealing with timestamp data.
> >
> >
> https://gist.github.com/keithf4/81c32bf8b689c74b20c10ad8c91d45a3#file-pg_partman_bgw-c-L532
> >
> > There's what I've got working so far and links directly to the area where
> > I'm having a problem. I found the DatumGetTimeTzADTP() function and the
> > TimeTzADT data type looking through the source and that seems to be
> exactly
> > what I'm looking for. However, when I get to the point of trying to
> simply
> > use the time value in that variable (line 544), Postgres segfaults. So
> far
> > I've just been trying to print the value out to the log to ensure I'm
> > pulling it out correctly. The "time" value of the struct appears to be an
> > int64, so I thought %ld would be correct, but even using %d or %s
> > fails.
> >
> > You can see in my original plpgsql function how I eventually intend to
> use
> > the timestamp values. Here's a link directly to the equivalent section
> >
> >
> https://github.com/keithf4/pg_partman/blob/master/sql/functions/run_maintenance.sql#L139
> >
> > So, I'm not sure if I'm even going about this in the right manner for the
> > way I intend to use the timestamp values. Still fairly new to C and getting
> > used to postgres internals. Any help, or even an example of using timestamp
> data
> > pulled from a table in C with SPI, would be great.
> >
> > Thanks!
> >
> > --
> > Keith Fiske
> > Database Administrator
> > OmniTI Computer Consulting, Inc.
> > http://www.keithf4.com
> >
>
> I think it is not about timestamp(tz), but about usage of SPI.
> Since DatumGetTimeTzADTP is just a macro that performs a type conversion
> (declared at src/include/utils/date.h:60 (or 75)), the segfault happens not
> in it but when the code dereferences the pointer
> (last_partition_timestamp->time).
>
> Please answer these questions:
> 1. How many rows does SPI_execute return (the value of the "ret" variable)?
> 2. Is last_partition_timestamp != NULL? Where does it point?
> 3. Check SPI_result just after SPI_getbinval. Does it contain an error code?
>
> --
> Best regards,
> Vitaly Burovoy
>


It returns a single row. Here's an example of the results of the two
queries that are run that lead to providing the timestamp value

keith=# select partition_tablename from
partman.show_partitions('partman_test.time_taptest_table', 'DESC') limit 1;
      partition_tablename
--------------------------------
 time_taptest_table_p2016_08_02
(1 row)

keith=# select child_start_time from
partman.show_partition_info('partman_test.time_taptest_table_p2016_08_02',
'1 day', 'partman_test.time_taptest_table');
    child_start_time
------------------------
 2016-08-02 00:00:00-04
(1 row)

So there is valid data. As you're pointing out, this may just be a
misunderstanding of how to actually use the Datum retrieval function and C
in general. Appreciate the assistance.

Keith


Re: [GENERAL] Using timestamp(tz) in C functions

2016-07-28 Thread Keith Fiske
On Thu, Jul 28, 2016 at 5:28 PM, Adrian Klaver 
wrote:

> On 07/28/2016 02:15 PM, Keith Fiske wrote:
>
>> Working on trying to get a C version of the maintenance function for my
>> pg_partman extension working so I can hopefully make it more flexible
>> and efficient.
>>
>>
>> https://github.com/keithf4/pg_partman/blob/master/sql/functions/run_maintenance.sql
>>
>> Up until this point I've just been using SPI and the Datum conversion
>> functions (DatumGetCString, DatumGetInt32) to get string and numeric
>> data out of tables and things have been working fine. I'm at the first
>> point in that function where I'll need to start dealing with timestamp
>> data.
>>
>>
>> https://gist.github.com/keithf4/81c32bf8b689c74b20c10ad8c91d45a3#file-pg_partman_bgw-c-L532
>>
>> There's what I've got working so far and links directly to the area
>> where I'm having a problem. I found the DatumGetTimeTzADTP() function
>> and the TimeTzADT data type looking through the source and that seems to
>> be exactly what I'm looking for. However, when I get to the point of
>> trying to simply use the time value in that variable (line 544),
>> Postgres segfaults. So far I've just been trying to print the value out
>> to the log to ensure I'm pulling it out correctly. The "time" value of
>> the struct appears to be an int64, so I thought %ld would be
>> correct, but even using %d or %s fails.
>>
>> You can see in my original plpgsql function how I eventually intend to
>> use the timestamp values. Here's a link directly to the equivalent section
>>
>>
>> https://github.com/keithf4/pg_partman/blob/master/sql/functions/run_maintenance.sql#L139
>>
>> So, I'm not sure if I'm even going about this in the right manner for the
>> way I intend to use the timestamp values. Still fairly new to C and getting
>> used to postgres internals. Any help, or even an example of using
>> timestamp data pulled from a table in C with SPI, would be great.
>>
>
> https://www.postgresql.org/docs/current/static/contrib-spi.html
>
>
> F.34.5. moddatetime — Functions for Tracking Last Modification Time
>
> moddatetime() is a trigger that stores the current time into a timestamp
> field. This can be useful for tracking the last modification time of a
> particular row within a table.
>
> To use, create a BEFORE UPDATE trigger using this function. Specify a
> single trigger argument: the name of the column to be modified. The column
> must be of type timestamp or timestamp with time zone.
>
> There is an example in moddatetime.example.
>
>
> Can be seen here:
>
>
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/spi/moddatetime.c;h=c6d33b73557ff499faa0d475e032fd1f8164afbd;hb=HEAD
>
>
>> Thanks!
>>
>> --
>> Keith Fiske
>> Database Administrator
>> OmniTI Computer Consulting, Inc.
>> http://www.keithf4.com
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Yeah I did see the SPI contrib module stuff. But none of that seems to
actually be extracting timestamp values from an existing table and using
them. The one you linked just generates the current time.


[GENERAL] Using timestamp(tz) in C functions

2016-07-28 Thread Keith Fiske
Working on trying to get a C version of the maintenance function for my
pg_partman extension working so I can hopefully make it more flexible and
efficient.

https://github.com/keithf4/pg_partman/blob/master/sql/functions/run_maintenance.sql

Up until this point I've just been using SPI and the Datum conversion
functions (DatumGetCString, DatumGetInt32) to get string and numeric data
out of tables and things have been working fine. I'm at the first point in
that function where I'll need to start dealing with timestamp data.

https://gist.github.com/keithf4/81c32bf8b689c74b20c10ad8c91d45a3#file-pg_partman_bgw-c-L532

There's what I've got working so far and links directly to the area where
I'm having a problem. I found the DatumGetTimeTzADTP() function and the
TimeTzADT data type looking through the source and that seems to be exactly
what I'm looking for. However, when I get to the point of trying to simply
use the time value in that variable (line 544), Postgres segfaults. So far
I've just been trying to print the value out to the log to ensure I'm
pulling it out correctly. The "time" value of the struct appears to be an
int64, so I thought %ld would be correct, but even using %d or %s fails.

You can see in my original plpgsql function how I eventually intend to use
the timestamp values. Here's a link directly to the equivalent section

https://github.com/keithf4/pg_partman/blob/master/sql/functions/run_maintenance.sql#L139

So, I'm not sure if I'm even going about this in the right manner for the way
I intend to use the timestamp values. I'm still fairly new to C and getting
used to postgres internals. Any help, or even an example of using timestamp data
pulled from a table in C with SPI, would be great.

Thanks!

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com
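
(A hedged aside on the printf question above: %ld only matches int64 on platforms where "long" is 64 bits; PostgreSQL's portable spelling is the INT64_FORMAT macro from c.h, as in the one-liner below, where the variable name is made up for illustration.

elog(LOG, "time value: " INT64_FORMAT, some_int64_value);

The segfault itself, though, comes from the pointer conversion discussed later in the thread, not from the format string.)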


Re: [GENERAL] handling time series data

2016-02-03 Thread Keith Brown
Thanks for the responses.

So, it seems what I am trying to do isn't out of the norm. I will give the
column store driver a try. Are there any plans to have this natively
supported in PostgreSQL? That would be a great "killer" feature.



On Tue, Feb 2, 2016 at 7:53 PM, Brent Wood  wrote:

> Hi Keith,
>
> This documents a time series database we manage with Postgis, from a
> research vessel. We use partitions & clustered indexes, as well as a
> "minute identifier" (of sorts) to allow various intervals to be rapidly
> identified and extracted. It works well for us, with a Mapserver
> application and other tools to provide interactive access, as well as SQL
> queries. We are up to over 600,000,000 records now, and it is still quite
> responsive.
>
> The new block range (BRIN) indexes in PG9.5 might have some application for
> such databases as well, though I think that while they may be smaller/faster,
> they are also more limited than our approach.
>
> See: ftp://ftp.niwa.co.nz/os2020/ipy-caml/DAS_database.pdf
>
> Cheers
>
> Brent Wood
>
> Programme leader: Environmental Information Delivery
> NIWA
> DDI:  +64 (4) 3860529
>
> Brent Wood
> Principal Technician - GIS and Spatial Data Management
> Programme Leader - Environmental Information Delivery
> +64-4-386-0529 | 301 Evans Bay Parade, Greta Point, Wellington |
> www.niwa.co.nz
> 
> From: pgsql-general-ow...@postgresql.org <
> pgsql-general-ow...@postgresql.org> on behalf of Keith Brown <
> keith6...@gmail.com>
> Sent: Wednesday, February 3, 2016 12:04:14 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] handling time series data
>
>
> By reading this,
>
> http://www.postgresql.org/message-id/7e41ba8f0908191624g4501b5f7mcbe29ad2c8139...@mail.gmail.com
> ,
> I was wondering if anything has changed on the postgresql front.
>
> I have a large timeseries (2TB worth of uncompressed data). I will be
> doing some queries which change at times. Should I stick with my
> current approach which is a series of csv files or would it be better
> to load it into Postgresql and use its TOAST features (which will give
> me some sort of compression)
>
>


[GENERAL] handling time series data

2016-02-02 Thread Keith Brown
By reading this,
http://www.postgresql.org/message-id/7e41ba8f0908191624g4501b5f7mcbe29ad2c8139...@mail.gmail.com,
I was wondering if anything has changed on the postgresql front.

I have a large timeseries (2TB worth of uncompressed data). I will be
doing some queries which change at times. Should I stick with my
current approach which is a series of csv files or would it be better
to load it into Postgresql and use its TOAST features (which will give
me some sort of compression)




Re: [GENERAL] Postgresql 9.4 and ZFS?

2015-09-30 Thread Keith Fiske
On Wed, Sep 30, 2015 at 4:58 PM, Benjamin Smith  wrote:

> On Wednesday, September 30, 2015 03:49:44 PM Keith Fiske wrote:
> > We've run postgres on ZFS for years with great success (first on
> > OpenSolaris, now on OmniOS, and I personally run it on FreeBSD). The
> > snapshotting feature makes upgrades on large clusters much less scary
> > (snapshot and revert if it goes bad) and being able to bring a snapshot
> > backup up as a clone to restore an accidentally dropped table is great.
>
> Somebody mentioned some trouble running it with ZFS on Linux, which is
> exactly
> how we're planning our roll out. (We're a RHEL/CentOS shop) Have you tried
> that config, and has it worked for you?
>

We've not run it in production where I work, and I haven't met anyone who
is doing so either. Personally, I tried it at home for a while when I used
to use Linux on my home server. But whenever there was a kernel or ZFS
update, I'd occasionally have problems with it booting up or seeing the ZFS
mount. Rebooting again usually fixed it, but it made me nervous every time
there was a kernel update. I switched to FreeBSD a few years ago to get
native ZFS support and haven't looked back since. As that was a few years
ago, things may have improved, but I can't speak to those improvements.


> > Others have given a lot of great advice as far as system tuning. Only
> other
> > thing I can add is you definitely do want your data directory on its own
> > pool. But I recommend putting the actual data in a folder under that pool
> > (possibly by major version name). For example if your pool is
> >
> > /data/postgres
> >
> > Create a folder under that directory to actually put the data:
> >
> > mkdir /data/postgres/9.4
> >
> > This allows pg_upgrade's --link option to work during major upgrades
> since
> > you can't have an upgrade destination on a different filesystem. Just
> make
> > a 9.5 directory in the same spot when the time comes around. With ZFS
> > snapshots available, there's really no reason not to use the --link
> option
> > to greatly speed up upgrades.
>
> Recently, the PGDG RPMs provided by PostgreSQL have done something similar
> by moving from /var/lib/pgsql/ to (e.g.) /var/lib/pgsql/9.4 and we've
> followed suit, trying to keep things "stock" where possible.
>
> Our intent is to make /var/lib/pgsql a filesystem in a pool containing no
> other
> file systems, with SSD-based VDEVs that aren't shared for any other
> purpose.
>


Re: [GENERAL] Postgresql 9.4 and ZFS?

2015-09-30 Thread Keith Fiske
On Tue, Sep 29, 2015 at 1:01 PM, Benjamin Smith  wrote:

> Does anybody here have any recommendations for using PostgreSQL 9.4
> (latest)
> with ZFS?
>
> We've been running both on ZFS/CentOS 6 with excellent results, and are
> considering putting the two together. In particular, the CoW nature (and
> subsequent fragmentation/thrashing) of ZFS becomes largely irrelevant on
> SSDs;
> the very act of wear leveling on an SSD is itself a form of intentional
> thrashing that doesn't affect performance since SSDs have no meaningful
> seek
> time. It would seem that PGCon 2013 even had a workshop on it!
> https://www.pgcon.org/2013/schedule/events/612.en.html
>
> The exact configuration we're contemplating is either (3x 400 RAIDZ1) or
> (4x
> 400 RAIDZ2) with Intel Enterprise SATA3 SSDs, with default (lz4)
> compression
> enabled.
>
> If this is a particularly good or bad idea, I'd like to hear it, and why?
>
> Thanks,
>
> BenP
>
>


We've run postgres on ZFS for years with great success (first on
OpenSolaris, now on OmniOS, and I personally run it on FreeBSD). The
snapshotting feature makes upgrades on large clusters much less scary
(snapshot and revert if it goes bad) and being able to bring a snapshot
backup up as a clone to restore an accidentally dropped table is great.

Others have given a lot of great advice as far as system tuning. Only other
thing I can add is you definitely do want your data directory on its own
pool. But I recommend putting the actual data in a folder under that pool
(possibly by major version name). For example if your pool is

/data/postgres

Create a folder under that directory to actually put the data:

mkdir /data/postgres/9.4

This allows pg_upgrade's --link option to work during major upgrades since
you can't have an upgrade destination on a different filesystem. Just make
a 9.5 directory in the same spot when the time comes around. With ZFS
snapshots available, there's really no reason not to use the --link option
to greatly speed up upgrades.

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


Re: [GENERAL] WAL Shipping and streaming replication

2015-09-28 Thread Keith Fiske
On Mon, Sep 28, 2015 at 10:54 AM, Scott Marlowe 
wrote:

> On Mon, Sep 28, 2015 at 8:48 AM, CS DBA 
> wrote:
> > All;
> >
> > We have a 3 node replication setup:
> >
> > Master (node1) --> Cascading Replication Node (node2)  --> Downstream
> > Standby node (node3)
> >
> > We will be deploying WAL archiving from the master for PITR backups and
> > we'll use the staged WAL files in the recovery.conf files in case the
> > standbys need to revert to log shipping.
> >
> > Question:  whats the best way to ensure consistency of WAL archiving in
> the
> > case of changes  (failover, etc)? can we setup the cascade node to
> archive
> > wals only if it's the master? is this a case where we should deploy
> repmgr?
>
> Look up WAL-E. It works really well. We tried using OmniPITR and
> it's buggy and doesn't seem to get fixed very quickly (if at all).
>
>


If you've encountered bugs with OmniPITR, please feel free to open an issue
on Github. If you look at the issue and commit history you can see that we
do indeed fix reported issues or respond to help people with problems they
are having.

https://github.com/omniti-labs/omnipitr

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


Re: [GENERAL] Alter column from text[] to uuid[]

2015-06-12 Thread Keith Rarick
On Thu, Jun 11, 2015 at 12:57 PM Tom Lane  wrote:
> alter table t alter u type uuid[] using u::uuid[];
>
> The original command worked without a USING because anything-to-text is
> considered an allowable assignment coercion; but the other way around
> requires an explicit cast.

Got it. Thanks!

I feel silly not to have tried that. Somehow I interpreted
"cannot be cast automatically to type uuid[]" as meaning
that it cannot be cast at all (and would therefore need a
more involved expression to specify how to convert it),
rather than that the cast merely needed to be explicit.




[GENERAL] Alter column from text[] to uuid[]

2015-06-11 Thread Keith Rarick
I have a table:

kr=# create table t (u uuid[]);
CREATE TABLE
Time: 3.742 ms
kr=# insert into t values
('{"0289b709-3cd7-431c-bcbe-f942eb31b4c5","86cc14d6-7293-488e-a85f-384ae6773d28"}');
INSERT 0 1
Time: 1.735 ms

I recently did the following:

kr=# alter table t alter u type text[];
ALTER TABLE
Time: 5.513 ms

Now I'd like to put it back the way it was, but my attempts didn't work:

kr=# alter table t alter u type uuid[];
ERROR:  column "u" cannot be cast automatically to type uuid[]
HINT:  Specify a USING expression to perform the conversion.
Time: 0.244 ms
kr=# alter table t alter u type uuid[] using array(select unnest(u));
ERROR:  cannot use subquery in transform expression
Time: 0.299 ms

I noticed that when I inserted a record into the table, postgres was able
to convert my input string to uuid[] with no problem, so I tried simulating
that, but to no avail:

kr=# alter table t alter u type uuid[] using array_to_string(u,',');
ERROR:  column "u" cannot be cast automatically to type uuid[]
HINT:  Specify a USING expression to perform the conversion.
Time: 0.321 ms

(Interestingly, postgres seems to think I don't even have a USING clause
here. Could there be some optimization that removed it?)

I'm not sure what to do here. Can someone help me?


Re: [GENERAL] Background worker assistance & review

2015-04-21 Thread Keith Fiske

On Tue, Apr 21, 2015 at 5:47 PM, Keith Fiske  wrote:

>
>
>
>
> On Fri, Apr 10, 2015 at 1:00 PM, Keith Fiske  wrote:
>
>>
>> On Thu, Apr 9, 2015 at 11:56 PM, Craig Ringer 
>> wrote:
>>
>>>
>>>
>>> On 9 April 2015 at 05:35, Keith Fiske  wrote:
>>>
>>>> I'm working on a background worker (BGW) for my pg_partman extension.
>>>> I've gotten the basics of it working for my first round, but there's two
>>>> features I'm missing that I'd like to add before release:
>>>>
>>>> 1) Only allow one instance of this BGW to run
>>>>
>>>
>>> Load your extension in shared_preload_libraries, so that _PG_init runs
>>> in the postmaster. Register a static background worker then.
>>>
>>> If you need one worker per database (because it needs to access the DB)
>>> this won't work for you, though. What we do in BDR is have a single static
>>> background worker that's launched by the postmaster, which then launches
>>> and terminates per-database workers that do the "real work".
>>>
>>> Because of a limitation in the bgworker API in releases 9.4 and older,
>>> the static worker has to connect to a database if it wants to access shared
>>> catalogs like pg_database. This limitation has been lifted in 9.5 though,
>>> along with the need to use the database name instead of its oid to connect
>>> (which left bgworkers unable to handle RENAME DATABASE).
>>>
>>> (We still really need a hook on CREATE DATABASE too)
>>>
>>> 2) Create a bgw_terminate_partman() function to stop it more intuitively
>>>> than doing a pg_cancel_backend() on the PID
>>>>
>>>
>>> If you want it to be able to be started/stopped dynamically, you should
>>> probably use RequestAddinShmemSpace to allocate a small shared memory
>>> block. Use that to register the PGPROC for the current worker when the
>>> worker starts, and add a boolean field you can use to ask it to terminate
>>> itself. You'll also need an LWLock to protect access to the segment, so
>>> you don't have races between a worker starting and the user asking to
>>> cancel it, etc.
>>>
>>> Unfortunately the BackgroundWorkerHandle struct is opaque, so you cannot
>>> store it in shared memory when it's returned by
>>> RegisterDynamicBackgroundWorker() and use it to later check the worker's
>>> status or ask it to exit. You have to use regular backend manipulation
>>> functions and PGPROC instead.
>>>
>>> Personally, I suggest that you leave the worker as a static worker, and
>>> leave it always running when the extension is active. If it isn't doing
>>> anything, have it sleep on its latch, then set its latch from other
>>> processes when something interesting happens. (You can put the process
>>> latch from PGPROC into your shmem segment so you can set it from elsewhere,
>>> or allocate a new latch).
>>>
>>> This is my first venture into writing C code for postgres, so I'm not
>>>> familiar with a lot of the internals yet. I read
>>>> http://www.postgresql.org/docs/9.4/static/bgworker.html and I see it
>>>> mentioning how you can check the status of a BGW launched dynamically and
>>>> the function to terminate one, but I'm not clear on how you can get the
>>>> information on a currently running BGW to do these things.
>>>>
>>>
>>> You can't. It's a pretty significant limitation in the current API.
>>> There's no way to enumerate bgworkers via the bgworker API, only via PGPROC.
>>>
>>>
>>>> I used the worker_spi example for a lot of this, so if there's any
>>>> additional guidance for a better way to do what I've done, I'd appreciate
>>>> it. All I really have it doing now is calling the run_maintenance()
>>>> function at a defined interval and don't need it doing more than that yet.
>>>>
>>>
>>> The BDR project has an extension with much more in-depth use of
>>> background workers, but it's probably *too* complicated. We have a static
>>> bgworker that launches and terminates dynamic bgworkers (per-database) that
>>> in turn launch and terminate more dynamic background workers
>>> (per-connection to peer databa

Re: [GENERAL] Background worker assistance & review

2015-04-21 Thread Keith Fiske

On Fri, Apr 10, 2015 at 1:00 PM, Keith Fiske  wrote:

>
> On Thu, Apr 9, 2015 at 11:56 PM, Craig Ringer 
> wrote:
>
>>
>>
>> On 9 April 2015 at 05:35, Keith Fiske  wrote:
>>
>>> I'm working on a background worker (BGW) for my pg_partman extension.
>>> I've gotten the basics of it working for my first round, but there's two
>>> features I'm missing that I'd like to add before release:
>>>
>>> 1) Only allow one instance of this BGW to run
>>>
>>
>> Load your extension in shared_preload_libraries, so that _PG_init runs in
>> the postmaster. Register a static background worker then.
>>
>> If you need one worker per database (because it needs to access the DB)
>> this won't work for you, though. What we do in BDR is have a single static
>> background worker that's launched by the postmaster, which then launches
>> and terminates per-database workers that do the "real work".
>>
>> Because of a limitation in the bgworker API in releases 9.4 and older,
>> the static worker has to connect to a database if it wants to access shared
>> catalogs like pg_database. This limitation has been lifted in 9.5 though,
>> along with the need to use the database name instead of its oid to connect
>> (which left bgworkers unable to handle RENAME DATABASE).
>>
>> (We still really need a hook on CREATE DATABASE too)
>>
>> 2) Create a bgw_terminate_partman() function to stop it more intuitively
>>> than doing a pg_cancel_backend() on the PID
>>>
>>
>> If you want it to be able to be started/stopped dynamically, you should
>> probably use RequestAddinShmemSpace to allocate a small shared memory
>> block. Use that to register the PGPROC for the current worker when the
>> worker starts, and add a boolean field you can use to ask it to terminate
>> itself. You'll also need an LWLock to protect access to the segment, so
>> you don't have races between a worker starting and the user asking to
>> cancel it, etc.
>>
>> Unfortunately the BackgroundWorkerHandle struct is opaque, so you cannot
>> store it in shared memory when it's returned by
>> RegisterDynamicBackgroundWorker() and use it to later check the worker's
>> status or ask it to exit. You have to use regular backend manipulation
>> functions and PGPROC instead.
>>
>> Personally, I suggest that you leave the worker as a static worker, and
>> leave it always running when the extension is active. If it isn't doing
>> anything, have it sleep on its latch, then set its latch from other
>> processes when something interesting happens. (You can put the process
>> latch from PGPROC into your shmem segment so you can set it from elsewhere,
>> or allocate a new latch).
>>
>> This is my first venture into writing C code for postgres, so I'm not
>>> familiar with a lot of the internals yet. I read
>>> http://www.postgresql.org/docs/9.4/static/bgworker.html and I see it
>>> mentioning how you can check the status of a BGW launched dynamically and
>>> the function to terminate one, but I'm not clear on how you can get the
>>> information on a currently running BGW to do these things.
>>>
>>
>> You can't. It's a pretty significant limitation in the current API.
>> There's no way to enumerate bgworkers via the bgworker API, only via PGPROC.
>>
>>
>>> I used the worker_spi example for a lot of this, so if there's any
>>> additional guidance for a better way to do what I've done, I'd appreciate
>>> it. All I really have it doing now is calling the run_maintenance()
>>> function at a defined interval and don't need it doing more than that yet.
>>>
>>
>> The BDR project has an extension with much more in-depth use of
>> background workers, but it's probably *too* complicated. We have a static
>> bgworker that launches and terminates dynamic bgworkers (per-database) that
>> in turn launch and terminate more dynamic background workers
>> (per-connection to peer databases).
>>
>> If you're interested, all the code is mirrored on github:
>>
>> https://github.com/2ndquadrant/bdr/tree/bdr-plugin/next
>>
>> and the relevant parts are:
>>
>> https://github.com/2ndQuadrant/bdr/blob/bdr-plugin/next/bdr.c#L640
>> https://github.com/2ndQuadrant/bdr/blob/bdr-plugin/next/bdr_perdb.c
>> https://github.com/2ndQuadrant/bdr/blob/bdr-plugin/nex
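
To make the bgw_terminate_partman() question concrete, here is a minimal, hedged sketch of the shared-memory bookkeeping Craig describes above, written against the 9.4-era APIs (RequestAddinShmemSpace/RequestAddinLWLocks; 9.6 replaced the latter with RequestNamedLWLockTranche). Every name in it (PartmanBgwShared, partman_shmem_startup, the "pg_partman bgw state" label) is hypothetical, and locking around readers and writers of the struct is left out.

#include "postgres.h"
#include "miscadmin.h"
#include "storage/ipc.h"
#include "storage/lwlock.h"
#include "storage/shmem.h"

typedef struct PartmanBgwShared
{
    LWLock     *lock;           /* protects the fields below */
    pid_t       worker_pid;     /* 0 when no worker is running */
    bool        terminate;      /* set by bgw_terminate_partman() */
} PartmanBgwShared;

static PartmanBgwShared *partman_shared = NULL;
static shmem_startup_hook_type prev_shmem_startup_hook = NULL;

/* Call from _PG_init(): reserve space before shared memory is created. */
static void
partman_request_shmem(void)
{
    RequestAddinShmemSpace(MAXALIGN(sizeof(PartmanBgwShared)));
    RequestAddinLWLocks(1);
}

/* Installed as shmem_startup_hook: attach to (or initialize) the segment. */
static void
partman_shmem_startup(void)
{
    bool        found;

    if (prev_shmem_startup_hook)
        prev_shmem_startup_hook();

    LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
    partman_shared = ShmemInitStruct("pg_partman bgw state",
                                     sizeof(PartmanBgwShared), &found);
    if (!found)
    {
        partman_shared->lock = LWLockAssign();
        partman_shared->worker_pid = 0;
        partman_shared->terminate = false;
    }
    LWLockRelease(AddinShmemInitLock);
}

A SQL-callable bgw_terminate_partman() would then take the lock, set terminate, and set the worker's process latch (via its PGPROC) so it wakes promptly; the worker checks the flag each cycle, clears worker_pid, and exits.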

[GENERAL] Background worker assistance & review

2015-04-08 Thread Keith Fiske
I'm working on a background worker (BGW) for my pg_partman extension. I've
gotten the basics of it working for my first round, but there's two
features I'm missing that I'd like to add before release:

1) Only allow one instance of this BGW to run
2) Create a bgw_terminate_partman() function to stop it more intuitively
than doing a pg_cancel_backend() on the PID

This is my first venture into writing C code for postgres, so I'm not
familiar with a lot of the internals yet. I read
http://www.postgresql.org/docs/9.4/static/bgworker.html and I see it
mentioning how you can check the status of a BGW launched dynamically and
the function to terminate one, but I'm not clear on how you can get the
information on a currently running BGW to do these things.

I used the worker_spi example for a lot of this, so if there's any
additional guidance for a better way to do what I've done, I'd appreciate
it. All I really have it doing now is calling the run_maintenance()
function at a defined interval and don't need it doing more than that yet.

https://gist.github.com/keithf4/0047eae0b3a22829d527

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com
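
For the first item, a minimal, hedged sketch of the static-worker approach Craig suggests in the replies above: register exactly one worker from _PG_init() when loaded via shared_preload_libraries, so a second instance can never appear. This is written against the 9.4-era API (the bgw_main function pointer; later releases use bgw_library_name/bgw_function_name). The names partman_bgw_main and "pg_partman maintenance worker" are hypothetical, and SIGTERM/SIGHUP handling plus the database connection are omitted for brevity.

#include "postgres.h"
#include "fmgr.h"
#include "miscadmin.h"
#include "postmaster/bgworker.h"
#include "storage/ipc.h"
#include "storage/latch.h"
#include "storage/proc.h"

PG_MODULE_MAGIC;

void        _PG_init(void);
static void partman_bgw_main(Datum main_arg);

void
_PG_init(void)
{
    BackgroundWorker worker;

    /* Register only when loaded via shared_preload_libraries. */
    if (!process_shared_preload_libraries_in_progress)
        return;

    memset(&worker, 0, sizeof(worker));
    worker.bgw_flags = BGWORKER_SHMEM_ACCESS |
                       BGWORKER_BACKEND_DATABASE_CONNECTION;
    worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
    worker.bgw_restart_time = 600;      /* seconds before restart on crash */
    worker.bgw_main = partman_bgw_main;
    snprintf(worker.bgw_name, BGW_MAXLEN, "pg_partman maintenance worker");
    RegisterBackgroundWorker(&worker);
}

static void
partman_bgw_main(Datum main_arg)
{
    BackgroundWorkerUnblockSignals();

    for (;;)
    {
        int         rc;

        /* Sleep on the process latch; another backend can set the latch
         * to wake the worker early (e.g. for a terminate request). */
        rc = WaitLatch(&MyProc->procLatch,
                       WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                       60 * 1000L);     /* run once a minute */
        ResetLatch(&MyProc->procLatch);
        if (rc & WL_POSTMASTER_DEATH)
            proc_exit(1);

        /* ... call partman.run_maintenance() via SPI here ... */
    }
}

Since the postmaster itself owns this registration and restarts the worker if it crashes, the single-instance requirement falls out of the design rather than needing any locking.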


Re: [GENERAL] 9.3.5 failing to compile with dtrace on FreeBSD 10.1

2015-01-23 Thread Keith Fiske
Just wanted to thank Lacey for the assistance. I set up my first BSD server
recently and installing things via ports had been going great until this.
The docs don't mention how to enable the kernel modules permanently either,
so thanks also for that additional note.
https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/dtrace-enable.html

Perhaps adding a section on BSD ports and/or expanding on the Dtrace
section (15.7.6.6) in this part of the postgresql install docs may be
helpful for others?
http://www.postgresql.org/docs/current/static/installation-platform-notes.html

None of the other steps besides the kernel loading are needed for
compilation and installation, but could be handy.

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com

On Mon, Dec 22, 2014 at 2:22 PM, Lacey Powers 
wrote:

> Hello Luca,
>
> I had some success getting PostgreSQL 9.4 to build on FreeBSD 10.1-p2
> RELEASE with DTrace. The probes look fully functional, and fun to work
> with, though I haven't had time to get incredibly deep into using them. This
> is what worked for me, after much digging around.
>
> I was building and running PostgreSQL and dtrace directly on my
> workstation, not in a jail. I'm not sure what additional hoops would need
> to be hopped through off-hand to run dtrace on FreeBSD in a jail.
>
> Initially, I got compile errors, but I tracked those down and realized
> that I needed to load the dtrace kernel modules, which I had expected to be
> loaded automatically, but weren't. Otherwise, you get cryptic compile
> errors.
>
> I had to load the dtrace kernel modules with:
>
> kldload dtraceall (and enable it in /boot/loader.conf with
> dtraceall_load="YES", if you want it to hang around)
>
> Add the following lines to your /etc/make.conf (because detailed
> backtraces are helpful)
>
> STRIP=
> CFLAGS+=-fno-omit-frame-pointer
>
> Once that's loaded, change the permissions to 0666 on /dev/dtrace/helper
> (and add that to /etc/devfs.conf with "perm /dev/dtrace/helper 0666" if you
> want it to persist)
>
> Though be aware that there are security implications for that, since other
> users could make a ton of probes and exhaust kernel memory. Alternately,
> you could add pgsql to wheel, since root and wheel are the owner and group
> for that devfs node.
>
> Also be sure to not use devel/libelf from ports, because that doesn't seem
> to play nice with dtrace, and keeps probes from registering.
>
> After doing all that, I could get dtrace to build, link, and register
> userland probes with the database/postgresql94-server and
> database/postgresql94-client, and successfully count transaction times in a
> dtrace script.
>
> One final note, the freebsd-dtr...@freebsd.org list, is very helpful with
> esoteric DTrace issues.
>
> I hope this all helps. =)
>
> Regards,
>
> Lacey
>
>
>
>  On Tue, Dec 16, 2014 at 4:06 PM, Tom Lane  wrote:
>>
>>> If you want to push on this I think you'll need to find a BSD dtrace
>>> expert.  You shouldn't need to show him/her much except the above
>>> dtrace invocation and the probes.d file.
>>>
>>>  I've filled a bug report and I'll report back here if I get any update
>> on the matter.
>> In the meantime I've tried 9.4 beta and the same issue arises.
>>
>> Luca
>>
>>
>>
>
>


[GENERAL] Updating timezone setting

2014-11-10 Thread Keith Fiske
Discovered that a client of ours had assumed their database was running in
UTC, but it turned out to be running in 'US/Eastern'. They had changed all
their systems a while ago to run in UTC but didn't realize the database was
independent of that. The postgresql.conf has

timezone = 'US/Eastern'

set in it. It also looks like they have all their timestamp columns set to
"timestamp without time zone". But their application has been running in
UTC, so the times being inserted are the time in UTC. So when queries are
run locally in psql, they appear to be ahead of now(), which would be
expected.

They'd like to get this fixed, at least so that the database is running in
UTC. Is there any issue with just changing the postgresql.conf timezone to
GMT and restarting? All of their applications are inserting time in UTC
anyway, and have been for as long as they can remember.

Would there be any issues then just leaving the columns as "timestamp
without time zone"? I know that's not ideal, but that would be a big
project to try and convert every single one of those columns.

Thanks,

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


Re: [GENERAL] Bloat check query

2014-08-26 Thread Keith Fiske
On Tue, Aug 26, 2014 at 5:44 PM, Keith Fiske  wrote:

> So I recently wrote this script to help us in monitoring for table/index
> bloat
>
> https://github.com/keithf4/pg_bloat_check
>
> I based my query off of the one contained in check_postgres, since I
> thought it seemed fairly accurate with what I was expecting to get back.
>
>
> https://github.com/bucardo/check_postgres/blob/master/check_postgres.pl#L3512
>
> A client, however, showed me that running the bloat check from heroku
> brings back some additional results. The additional results are actually
> tables we were kind of expecting to see that the check_postgres query does
> not return.
>
> I've tried decoding these queries to understand what they're actually
> doing, but they're both quite a bit over my head. I did notice one thing in
> the check_postgres one:
>
> CASE WHEN v ~ 'mingw32' OR v ~ '64-bit' THEN 8 ELSE 4 END AS ma
>
> Is that a typo? It seems to treat the 32-bit version of mingw the same as
> the 64-bit builds compiled elsewhere. The heroku version just sets
> this value to 4 outright (since they know what they're running). I tried
> changing this value to see if it made a difference, but it doesn't seem to.
>
> Anyone that's better able to evaluate which one of these two bloat check
> queries is more accurate able to provide some assistance?
>
> --
> Keith Fiske
> Database Administrator
> OmniTI Computer Consulting, Inc.
> http://www.keithf4.com
>

Helps if I include the query from Heroku

https://github.com/heroku/heroku-pg-extras/blob/master/lib/heroku/command/pg.rb#L438-L504

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


[GENERAL] Bloat check query

2014-08-26 Thread Keith Fiske
So I recently wrote this script to help us in monitoring for table/index
bloat

https://github.com/keithf4/pg_bloat_check

I based my query off of the one contained in check_postgres, since I
thought it seemed fairly accurate with what I was expecting to get back.

https://github.com/bucardo/check_postgres/blob/master/check_postgres.pl#L3512

A client, however, showed me that running the bloat check from heroku brings
back some additional results. The additional results are actually tables we
were kind of expecting to see that the check_postgres query does not return.

I've tried decoding these queries to understand what they're actually
doing, but they're both quite a bit over my head. I did notice one thing in
the check_postgres one:

CASE WHEN v ~ 'mingw32' OR v ~ '64-bit' THEN 8 ELSE 4 END AS ma

Is that a typo? It seems to treat the 32-bit version of mingw the same as
the 64-bit builds compiled elsewhere. The heroku version just sets
this value to 4 outright (since they know what they're running). I tried
changing this value to see if it made a difference, but it doesn't seem to.

Anyone that's better able to evaluate which one of these two bloat check
queries is more accurate able to provide some assistance?

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


Re: [GENERAL] HOT standby with ONLY WAL shipping?

2014-06-17 Thread Keith Fiske
On Thu, Jun 12, 2014 at 1:01 PM, CS_DBA  wrote:

> Hi All;
>
> We would like to setup a hot standby server with a forced delay.
> Is it possible to set up a hot standby based ONLY on WAL shipping and NOT
> use streaming replication?
>
> Thanks in advance
>
>

The omnipitr tool can provide the delayed slave feature you're looking for.
https://github.com/omniti-labs/omnipitr/

Look into the --recovery-delay (-w) feature of the omnipitr-restore command
which is used in the recovery.conf of the slave
https://github.com/omniti-labs/omnipitr/blob/master/doc/omnipitr-restore.pod

Whether a slave is a hot standby is not determined by whether you're using
streaming replication or WAL replay. It is determined by setting the
parameter "hot_standby" on the slave and ensuring the master has a minimum
"wal_level" of "hot_standby" as well. So both streaming and wal replay
slaves can be hot standbys.

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


Re: [HACKERS] [GENERAL] Question about partial functional indexes and the query planner

2014-06-12 Thread Keith Fiske
On Wed, Jun 11, 2014 at 7:24 PM, Tom Lane  wrote:

> Robert Haas  writes:
> > On Tue, Jun 10, 2014 at 7:19 PM, Tom Lane  wrote:
> >> Given the lack of previous complaints, I'm not sure this amounts to
> >> a back-patchable bug, but it does seem like something worth fixing
> >> going forward.
>
> > Agreed, although I'd be willing to see us slip it into 9.4.  It's
> > doubtful that anyone will get upset if their query plans change
> > between beta1 and beta2, but the same cannot be said for released
> > branches.
>
> After further thought about this I realized that there's another category
> of proof possibilities that is pretty simple to add while we're at it.
> Namely, once we've found that both subexpressions of the two operator
> clauses are equal(), we can use btree opclass relationships to prove that,
> say, "x < y implies x <= y" or "x < y refutes x > y", independently of
> just what x and y are.  (Well, they have to be immutable expressions, but
> we'd not get this far if they're not.)  We already had pretty nearly all
> of the machinery for that, but again it was only used for proving cases
> involving comparisons to constants.
>
> A little bit of refactoring later, I offer the attached draft patch.
> I'm thinking this is probably more than we want to slip into 9.4
> at this point, but I'll commit it to HEAD soon if there are not
> objections.
>
> regards, tom lane
>
>
>
I applied Tom's patch to the latest HEAD
(e04a9ccd2ccd6e31cc4af6b08257a0a186d0fce8) and showed it to Brian. Looks to
solve the problem he originally reported

$ patch -p1 -i ../better-predicate-proofs-1.patch
(Stripping trailing CRs from patch.)
patching file src/backend/optimizer/util/predtest.c


$ /opt/pgsql_patch_review/bin/psql postgres
Timing is on.
Null display (null) is "«NULL»".
Expanded display (expanded) is used automatically.
psql (9.5devel)
Type "help" for help.

postgres=# CREATE OR REPLACE FUNCTION public.return_if_even(v_id integer)
returns integer
postgres-# LANGUAGE sql AS
postgres-# $$
postgres$# SELECT case when v_id % 2 = 1 then 0 else v_id end;
postgres$# $$;
CREATE FUNCTION
Time: 44.669 ms

postgres=# create table public.partial_functional_index_test as
postgres-# select id from generate_series(1,100) AS s(id);
SELECT 100
Time: 1037.993 ms

postgres=# create index partial_functional_idx ON
public.partial_functional_index_test
postgres-# USING btree ( public.return_if_even(id) )
postgres-# WHERE public.return_if_even(id) = id;
LOG:  sending cancel to blocking autovacuum PID 12521
DETAIL:  Process 12424 waits for ShareLock on relation 16385 of database
12217.
STATEMENT:  create index partial_functional_idx ON
public.partial_functional_index_test
USING btree ( public.return_if_even(id) )
WHERE public.return_if_even(id) = id;
ERROR:  canceling autovacuum task
CONTEXT:  automatic analyze of table
"postgres.public.partial_functional_index_test"
CREATE INDEX
Time: 1658.245 ms

postgres=# explain analyze select count(1) from
public.partial_functional_index_test where public.return_if_even(id) = id;
                           QUERY PLAN
--------------------------------------------------------------------
 Aggregate  (cost=4818.05..4818.06 rows=1 width=0) (actual
time=2503.851..2503.854 rows=1 loops=1)
   ->  Bitmap Heap Scan on partial_functional_index_test
(cost=82.67..4805.55 rows=5000 width=0) (actual time=43.724..1309.309
rows=50 loops=1)
 Recheck Cond: (CASE WHEN ((id % 2) = 1) THEN 0 ELSE id END = id)
 Heap Blocks: exact=4425
 ->  Bitmap Index Scan on partial_functional_idx  (cost=0.00..81.42
rows=5000 width=0) (actual time=42.961..42.961 rows=50 loops=1)
 Planning time: 4.245 ms
 Execution time: 2505.281 ms
(7 rows)

Time: 2515.344 ms
postgres=# explain analyze select count(1) from
public.partial_functional_index_test where id = public.return_if_even(id);
                           QUERY PLAN
--------------------------------------------------------------------
 Aggregate  (cost=4818.05..4818.06 rows=1 width=0) (actual
time=2483.862..2483.866 rows=1 loops=1)
   ->  Bitmap Heap Scan on partial_functional_index_test
(cost=82.67..4805.55 rows=5000 width=0) (actual time=40.704..1282.955
rows=50 loops=1)
 Recheck Cond: (CASE WHEN ((id % 2) = 1) THEN 0 ELSE id

Re: [GENERAL] How can I tell if pg_restore is running?

2014-06-11 Thread Keith
If you can't use a wrapper script and you can't use that flag, I'm not sure
how you're going to manage to keep this a process that's simple for a
typical end user to manage. From what I know, there's nothing internal to
the data loading process that pg_restore uses that would guarantee that
what is loading the data is pg_restore. You might need to rethink your
strategy here.


On Wed, Jun 11, 2014 at 2:18 PM, Moshe Jacobson  wrote:

>
> On Tue, Jun 10, 2014 at 10:57 AM, Francisco Olarte  wrote:
>
>> Is there any reason to avoid pg_restore --disable-triggers ?
>
>
> Yes, this extension needs to work with everyone's normal restore process.
> Otherwise I would have to ask everyone who used my extension to modify
> their pg_restore command.
>
> This also means that I can't rely on a wrapper to pg_restore, as Keith
> suggested earlier.
>
>
>
> Moshe Jacobson
> Manager of Systems Engineering, Nead Werx Inc. <http://www.neadwerx.com>
> 2323 Cumberland Parkway · Suite 201 · Atlanta, GA 30339
>
> "Quality is not an act, it is a habit." -- Aristotle
>


Re: [GENERAL] How can I tell if pg_restore is running?

2014-06-10 Thread Keith Fiske
On Tue, Jun 10, 2014 at 10:02 AM, Moshe Jacobson  wrote:

> My extension has a config table that is dumped by pg_dump and populated by
> pg_restore.
> However, this table has triggers on it that I would like not to do
> anything if the table is being populated by pg_restore. I want the triggers
> to operate only if the user is manipulating the table directly after the
> database has been restored and is running.
>
> Is there a way for my extension's trigger functions to return immediately
> when triggered by pg_restore?
>
> Thanks.
>
> Moshe Jacobson
> Manager of Systems Engineering, Nead Werx Inc. <http://www.neadwerx.com>
> 2323 Cumberland Parkway · Suite 201 · Atlanta, GA 30339
>
> "Quality is not an act, it is a habit." -- Aristotle
>

I'd look into advisory locks.

http://www.postgresql.org/docs/9.3/static/explicit-locking.html#ADVISORY-LOCKS
http://www.postgresql.org/docs/9.3/static/functions-admin.html#FUNCTIONS-ADVISORY-LOCKS

You'd probably have to wrap your pg_dump/restore in some sort of script
that can call the advisory locks, but I don't see why it wouldn't work.
Then you'd have your triggers check if the advisory lock is held and skip
whatever they do if so.

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


[GENERAL] Trigger function permissions

2014-06-06 Thread Keith Fiske
Just want to make sure I'm understanding the permissions needed for trigger
functions as well as making sure this is what the developers intended
before I go assuming things will always work this way. Also as a sanity
check for myself that I'm not missing something obvious.

I have an extension (https://github.com/omniti-labs/mimeo) that does
logical replication. The setup functions for the trigger-based DML
replication automatically create the queue table, trigger function &
trigger on the source database. I'm working on fixing a bug where the
correct permissions weren't being given and in my testing for a fix found
that just simply giving the trigger function SECURITY DEFINER fixed all the
problems and I don't have to even bother looking up which roles currently
have write permissions on the source table to set them on the queue table
and trigger function.

I understand how SECURITY DEFINER solves the issue of the function writing
to the queue table (function and queue table owner are the same). But I
would've thought that any roles with write privileges to the table would've
needed to be given EXECUTE permissions on the trigger function. I thought
maybe the trigger function was being called as the owner of the table, but
apparently even the owner of the table doesn't need these execute
permissions. Reading through the docs on triggers, I didn't see anything
mentioned about how this is expected to work. Examples are in the gist link
below. You can see the owner has no explicit permissions to the trigger
function and inserts still work even after revoking PUBLIC.

https://gist.github.com/keithf4/83c5c6516e2726609675
--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


Re: [GENERAL] Merge a sharded master into a single read-only slave

2014-06-05 Thread Keith Fiske
On Thu, Jun 5, 2014 at 2:09 PM, Sébastien Lorion 
wrote:

> On Thu, Jun 5, 2014 at 12:55 PM, Francisco Olarte 
> wrote:
>
>> Hi Sébastien:
>>
>> On Thu, Jun 5, 2014 at 5:41 PM, Sébastien Lorion
>>  wrote:
>>
>> >  Correct me if I am wrong, but will it not also suffer the same
> >> > limitation as any statement-based replication, namely that the "merged"
> >> > slave will have to sustain the same write load as all shards combined?
>>
> >> I cannot tell you the exact mimeo behaviour, but if you do incremental
> >> replication using an id/timestamp by pulling changes from the
> >> masters, you will normally batch them and insert all the changes into
> >> the slaves in a single transaction, which leads to less load, as many
> >> times your limit is transaction rate, not record rate. (I.e., every
> >> 5 minutes you query for all the tuples changed and insert/update them
> >> all in one go.) (Also, if tuples are updated many times between
> >> sweeps, the slave will get only one.)
>>
>> Francisco Olarte.
>>
>
> You are right, requesting changes at fixed time intervals would certainly
> help reduce the load. I will have to test and see if a good balance can be
> achieved between not having stale data for too long and keeping up with
> writes.
>
> Sébastien
>
>
If you have any questions while evaluating it, feel free to ask or post any
issues to github.

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


Re: [GENERAL] Merge a sharded master into a single read-only slave

2014-06-04 Thread Keith Fiske
On Mon, Jun 2, 2014 at 2:47 PM, Sébastien Lorion 
wrote:

> On Mon, Jun 2, 2014 at 12:52 PM, Kevin Goess  wrote:
>
>> > So my conclusion is that for now, the best way to scale read-only
>> queries for a sharded master is to
>> > implement map-reduce at the application level.
>>
>> That's the conclusion I would expect. It's the price you pay for
>> sharding, it's part of the deal.
>>
>> But it's also the benefit you get from sharding.  Once your read traffic
>> grows to the point that it's too much for a single host, you're going to
>> have to re-shard it all again *anyway*.  The whole point of sharding is
>> that it allows you to grow outside the capacities of a single host.
>>
>
> I am not sure I am following you completely. I can replicate the
> read-only slaves almost as much as I want (with chained replication), so
> why would I be limited to a single host ? You would have a point concerning
> database size, but in my case, the main reason I need to shard is because
> of the amount of writes.
>
>
Not sure if this will work for you, but I'm sharing a similar scenario in
case it's useful.

An extension I wrote provides logical replication similar to what you've
probably seen in other tools.
https://github.com/omniti-labs/mimeo
One difference is it has several methods for this replication, one being
incremental based on either time or serial ID. Since incremental
replication requires just read-only access on the source databases, it
causes no extra write overhead as most logical replication solutions do
(triggers writing to queue tables).

A client of ours had a table sharded by UUID to 512 clusters but needed
that data pulled to a single cluster for reporting purposes. The tables
also had a timestamp column that was set on each insert/update, so the
incremental replication method was able to be used here to pull data from
all clusters to a single cluster. The single reporting cluster then just
had an inheritance table set up with an empty parent table pointing to all
the child tables that pulled data into them.

Yes, it was a lot of setup since each of the 512 tables has to be set up
individually. But once it was set up it worked surprisingly well. And it's
honestly a use case I had never foreseen for the extension.

Not sure if this would work in your case, but maybe it can at least give
you an idea of what can be done.

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


Re: [GENERAL] what should be the best autovacuum configuration for daily partition table

2014-05-14 Thread Keith
On Wed, May 14, 2014 at 3:45 PM, Jeff Janes  wrote:

> On Wed, May 14, 2014 at 12:06 AM, AI Rumman  wrote:
>
>> Hi,
>>
>> I have a table with a daily partition setup where old partitions are static
>> tables; that is, after each day we don't get any new data in old partitions.
>> The database size is 2 TB and I am running with autovacuum on for
>> Postgresql 8.4.
>> Now, I am facing a problem where old tables are not being vacuumed by the
>> autovacuum daemon, and every now and then we see autovacuum running to
>> prevent wraparound in the database, and age(datfrozenxid) gets high for the
>> database.
>> Any idea what the best configuration would be for this type of database
>> environment?
>>
>
> How high is age(datfrozenxid) getting?  What is the problem you are
> experiencing?
>
> Cheers,
>
> Jeff
>

It'd be good to know what you have autovacuum_freeze_max_age set to. You
may have it set a bit too low, causing that automatic vacuuming to kick
in too soon.
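
A quick way to see where each database currently stands relative to that
setting:

select datname, age(datfrozenxid) from pg_database order by 2 desc;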

Even with autovacuum_freeze_max_age set to a reasonable value, we still see
this issue often with data warehousing systems with a lot of static data.
As you are seeing, autovacuum will never kick in for these tables until you
hit autovacuum_freeze_max_age. The best solution we've found for this is to
run a cronjob to routinely vacuum a controlled batch of the tables with the
oldest vacuum freeze age. This controls how many tables are being vacuumed
instead of running into the situation where many of them all hit
autovacuum_freeze_max_age at the same time and cause extensively long
vacuuming sessions.

Below is a script we run for one of our clients twice a day. You can adjust
the limit on the first query to set how many you want to run per batch.
This has to be high enough (or the script run often enough) to keep old
tables from hitting autovacuum_freeze_max_age and having autovacuum kick
in on them. Just pass the name of the database as a
parameter to the script.

manual_vacuum.sh:

#!/bin/sh
# Manually vacuum the 25 tables with the oldest relfrozenxid.
# Pass the database name as the only parameter.

psql -d "$1" -t -o "/tmp/manual_vacuum_$1.sql" -c "select 'vacuum analyze verbose ' || oid::regclass || ';' from pg_class where relkind in ('r', 't') and age(relfrozenxid) > 1 order by age(relfrozenxid) desc limit 25"

psql -d "$1" -t -a -f "/tmp/manual_vacuum_$1.sql" > "$HOME/manual_vacuum_$1.log" 2>&1


Keith
http://www.keithf4.com


[GENERAL] Postgres usage of session identifiers for security

2014-01-14 Thread Keith Minsel
Can anyone describe how Postgres generates session identifiers and how
Postgres uses a session identifier for community action protection?

More specifically, will Postgres allow a session identifier to be
non-system generated?  How does Postgres respond when presented with a
non-system generated session identifier?

My security team is asking and I can't find any documentation on this.

Thanks,
Keith


Re: [GENERAL] Planning error in dynamic string creation in plpgsql

2014-01-08 Thread Keith Fiske
Just wanted to say thanks again for the help to those that responded. For
anyone curious, this helped me get a more advanced constraint exclusion
feature finished for the partition manager I've been working on

http://www.keithf4.com/managing-constraint-exclusion-in-table-partitioning/

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


On Mon, Jan 6, 2014 at 11:33 AM, Keith Fiske  wrote:

> David,
>
> That seems to have fixed it! I was going down a path of grabbing the
> column's type from pg_attribute and trying to work from there, but was
> still having some of the same issues.
>
> Thanks everyone else that replied as well!
>
> --
> Keith Fiske
> Database Administrator
> OmniTI Computer Consulting, Inc.
> http://www.keithf4.com
>
>
> On Mon, Jan 6, 2014 at 10:42 AM, David Johnston  wrote:
>
>> Adrian Klaver-3 wrote
>> >>
>> >>
>> >> In the real function I'm writing, the columns to be used in the string
>> >> being created are pulled from a configuration table, so their types
>> >> could be anything. So casting the quote_literal() calls is not really
>> an
>> >> option here.
>> >>
>> >> Any help would be appreciated.
>> >
>> > Cast before the quote_literal?
>> >
>> > Example:
>> >
>> > EXECUTE 'SELECT min('||v_col||'::text) as min, max('||v_col||'::text) as
>> > max FROM test_temp' INTO v_record;
>>
>> Right idea if the loss of original type in the min/max query is acceptable
>> (which it should be).  But the cast wants to be of the min/max RESULT not
>> the min/max INPUT.
>>
>> SELECT min(' || v_col || ')::text AS min_text, max(' || v_col || ')::text
>> AS
>> max_text FROM ...
>>
>> Min/Max logic wants to be done by the rules for the original type, not
>> according to string collation rules.
>>
>> David J.
>>
>>
>>
>>
>> --
>> View this message in context:
>> http://postgresql.1045698.n5.nabble.com/Planning-error-in-dynamic-string-creation-in-plpgsql-tp5785421p5785523.html
>> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>


Re: [GENERAL] replicate per tablespace

2014-01-06 Thread Keith Fiske
On Mon, Jan 6, 2014 at 9:22 PM, Andy Colson  wrote:

> Here's an odd question.  Kind of an in the future thing.
>
> I have two db's on two smaller boxes I'd like to combine to one big box.
>  But one db is replicated and the other doesn't need it.
>
> I know currently you must replicate the entire cluster.  I was just
> wondering, would it be a possible way forward to support WAL logging per
> tablespace.  (each tablespace wal's to different directory).  Then, as a
> logical step, could we support replication per tablespace?
>
> Just kinda thinking out loud.
>
> -Andy
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

This is more popularly called logical replication. There was a talk at last
year's PGCon about one of the bigger projects trying to get this incorporated
into core. It's quite a complex feature to implement, so it's probably not
going to be around for a while yet. Links to the talk and slides are on the
conference website if you're curious to learn more:
http://www.pgcon.org/2013/schedule/events/577.en.html

In the mean time, there are several third-party solutions that can provide
logical replication, the more popular being Slony & Bucardo. I've been
working on one myself as well (Mimeo) that's not quite as advanced as those
two, but a little simpler to configure if you just need a few tables
replicated.

http://bucardo.org/wiki/Bucardo
http://slony.info/
https://github.com/omniti-labs/mimeo

Hope that helps to answer your questions


--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


Re: [GENERAL] Planning error in dynamic string creation in plpgsql

2014-01-06 Thread Keith Fiske
David,

That seems to have fixed it! I was going down a path of grabbing the
column's type from pg_attribute and trying to work from there, but was
still having some of the same issues.

Thanks everyone else that replied as well!

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


On Mon, Jan 6, 2014 at 10:42 AM, David Johnston  wrote:

> Adrian Klaver-3 wrote
> >>
> >>
> >> In the real function I'm writing, the columns to be used in the string
> >> being created are pulled from a configuration table, so their types
> >> could be anything. So casting the quote_literal() calls is not really an
> >> option here.
> >>
> >> Any help would be appreciated.
> >
> > Cast before the quote_literal?
> >
> > Example:
> >
> > EXECUTE 'SELECT min('||v_col||'::text) as min, max('||v_col||'::text) as
> > max FROM test_temp' INTO v_record;
>
> Right idea if the loss of original type in the min/max query is acceptable
> (which it should be).  But the cast wants to be of the min/max RESULT not
> the min/max INPUT.
>
> SELECT min(' || v_col || ')::text AS min_text, max(' || v_col || ')::text
> AS
> max_text FROM ...
>
> Min/Max logic wants to be done by the rules for the original type, not
> according to string collation rules.
>
> David J.
>
>
>
>
> --
> View this message in context:
> http://postgresql.1045698.n5.nabble.com/Planning-error-in-dynamic-string-creation-in-plpgsql-tp5785421p5785523.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Planning error in dynamic string creation in plpgsql

2014-01-05 Thread Keith Fiske
That is not an option either. This is for a publicly released extension and
I'm really not going to go requiring another scripting language be
installed, especially an untrusted one.


--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


On Sun, Jan 5, 2014 at 11:44 PM, Adrian Klaver wrote:

> On 01/05/2014 08:34 PM, Keith Fiske wrote:
>
>> Actually, that doesn't work right. Gives weird results when the column
>> is an integer
>>
>> Example:
>>
>> keith=# select min(col1), max(col1) from
>> partman_test.time_static_table_p2014_01_01;
>>  min | max
>> -----+-----
>>   86 | 100
>> (1 row)
>>
>> keith=# select min(col1::text), max(col1::text) from
>> partman_test.time_static_table_p2014_01_01;
>>  min | max
>> -----+-----
>>  100 | 99
>> (1 row)
>>
>
>
> Should have added to my previous post, that when I want to deal with truly
> dynamic values I use plpythonu. plpgsql is very useful but it does not like
> types changing under it. plpythonu deals with that better.
>
>
>
>>
>>
>> --
>> Keith Fiske
>> Database Administrator
>> OmniTI Computer Consulting, Inc.
>> http://www.keithf4.com
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


Re: [GENERAL] Planning error in dynamic string creation in plpgsql

2014-01-05 Thread Keith Fiske
I can't remove the quote_literal() because the value could potentially be a
string, time, or number. Without the loop, quote_literal() handles the
variable being any one of those types without any issues and quotes (or
doesn't) as needed.

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


On Sun, Jan 5, 2014 at 11:40 PM, Adrian Klaver wrote:

> On 01/05/2014 08:34 PM, Keith Fiske wrote:
>
>> Actually, that doesn't work right. Gives weird results when the column
>> is an integer
>>
>> Example:
>>
>> keith=# select min(col1), max(col1) from
>> partman_test.time_static_table_p2014_01_01;
>>  min | max
>> -----+-----
>>   86 | 100
>> (1 row)
>>
>> keith=# select min(col1::text), max(col1::text) from
>> partman_test.time_static_table_p2014_01_01;
>>  min | max
>> -----+-----
>>  100 | 99
>> (1 row)
>>
>>
> Because it is working on a string in the second case. In the function
> remove the quote_literal() and see if that works.
>
>
>>
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


Re: [GENERAL] Planning error in dynamic string creation in plpgsql

2014-01-05 Thread Keith Fiske
Actually, that doesn't work right. Gives weird results when the column is
an integer

Example:

keith=# select min(col1), max(col1) from
partman_test.time_static_table_p2014_01_01;
 min | max
-----+-----
  86 | 100
(1 row)

keith=# select min(col1::text), max(col1::text) from
partman_test.time_static_table_p2014_01_01;
 min | max
-----+-----
 100 | 99
(1 row)



--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


On Sun, Jan 5, 2014 at 11:23 PM, Keith Fiske  wrote:

> That fixed it! In the example and my original as well. Thank you very much!
>
> And wow, going by the error given and when it was thrown, it was incredibly
> misleading where the cast was supposed to go. That EXECUTE statement
> works perfectly fine, seeing as the v_record variable got its assignment
> with no issue.
>
> Any chance you can explain what's going on here? Never would've thought to
> put the cast there to fix the problem.
>
> --
> Keith Fiske
> Database Administrator
> OmniTI Computer Consulting, Inc.
> http://www.keithf4.com
>
>
> On Sun, Jan 5, 2014 at 11:06 PM, Adrian Klaver wrote:
>
>> On 01/05/2014 06:31 PM, Keith Fiske wrote:
>>
>>> Running into an issue trying to dynamically create some SQL statements
>>> in a plpgsql function. The function below is as simple an example I can
>>> make to reproduce the error. The first loop works without any issues,
>>> but the second throws an error.
>>>
>>> CREATE OR REPLACE FUNCTION testing_record() RETURNS void
>>>  LANGUAGE plpgsql
>>>  AS $$
>>> DECLARE
>>>  v_col   text;
>>>  v_col_names text[];
>>>  v_recordrecord;
>>>  v_sql   text;
>>> BEGIN
>>>
>>> CREATE TEMP TABLE test_temp (col1 int, col2 text, col3 timestamp);
>>> INSERT INTO test_temp VALUES (1, 'stuff', now());
>>> INSERT INTO test_temp VALUES (2, 'stuff', CURRENT_TIMESTAMP + '1
>>> day'::interval);
>>>
>>> v_col_names := '{"col1","col3"}';
>>>
>>> FOR i IN 1..2
>>> LOOP
>>>  IF i = 1 THEN
>>>  EXECUTE 'SELECT min(col1) as min, max(col1) as max FROM
>>> test_temp' INTO v_record;
>>>  RAISE NOTICE 'v_record: %', v_record;
>>>  v_sql := concat('col1 min: ', quote_literal(v_record.min), ',
>>> col1 max: ', quote_literal(v_record.max) );
>>>  RAISE NOTICE 'v_sql: %', v_sql;
>>>  ELSIF i = 2 THEN
>>>  EXECUTE 'SELECT min(col3) as min, max(col3) as max FROM
>>> test_temp' INTO v_record;
>>>  RAISE NOTICE 'v_record: %', v_record;
>>>  v_sql := concat('col3 min: ', quote_literal(v_record.min), ',
>>> col3 max: ', quote_literal(v_record.max) );
>>>  RAISE NOTICE 'v_sql: %', v_sql;
>>>  END IF;
>>> END LOOP;
>>>
>>> FOREACH v_col IN ARRAY v_col_names
>>> LOOP
>>> EXECUTE 'SELECT min('||v_col||') as min, max('||v_col||') as max
>>> FROM test_temp' INTO v_record;
>>> RAISE NOTICE 'v_record: %', v_record;
>>> v_sql := concat(v_col
>>>  , ' min: '
>>>  , quote_literal(v_record.min)
>>>  , ', '
>>>  , v_col
>>>  , ' max: '
>>>  , quote_literal(v_record.max)
>>>);
>>> RAISE NOTICE 'v_sql: %', v_sql;
>>> END LOOP;
>>>
>>> DROP TABLE IF EXISTS test_temp;
>>>
>>> END
>>> $$;
>>>
>>> keith=# select testing_record();
>>> NOTICE:  v_record: (1,2)
>>> NOTICE:  v_sql: col1 min: '1', col1 max: '2'
>>> NOTICE:  v_record: ("2014-01-05 21:24:21.039656","2014-01-06
>>> 21:24:21.039656")
>>> NOTICE:  v_sql: col3 min: '2014-01-05 21:24:21.039656', col3 max:
>>> '2014-01-06 21:24:21.039656'
>>> NOTICE:  v_record: (1,2)
>>> NOTICE:  v_sql: col1 min: '1', col1 max: '2'
>>> NOTICE:  v_record: ("2014-01-05 21:24:21.039656","2014-01-06
>>> 21:24:21.039656")
>>> ERROR:  type of parameter 7 (timestamp without time zone) does not match
>>> that when preparing the plan (integer)
>>> CONTEXT:  PL/pgSQL 

Re: [GENERAL] Planning error in dynamic string creation in plpgsql

2014-01-05 Thread Keith Fiske
That fixed it! In the example and my original as well. Thank you very much!

And wow, going by the error given and when it was thrown, it was incredibly
misleading where the cast was supposed to go. That EXECUTE statement
works perfectly fine, seeing as the v_record variable got its assignment
with no issue.

Any chance you can explain what's going on here? Never would've thought to
put the cast there to fix the problem.
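
The reply that resolved this didn't make it into the archive above, so the
exact placement shown here is a reconstruction: the cast goes on the record
fields themselves. plpgsql caches a prepared plan for the assignment
expression, and a record field like v_record.min is bound into that plan as
a parameter whose type is frozen the first time the statement runs (integer
in this case). When a later loop iteration puts a timestamp in the same
field, the cached plan rejects it. Casting the field before quoting gives
the parameter a stable type:

    v_sql := concat(v_col
                 , ' min: '
                 , quote_literal(v_record.min::text)  -- cast keeps the parameter type stable
                 , ', '
                 , v_col
                 , ' max: '
                 , quote_literal(v_record.max::text)
                );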

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


On Sun, Jan 5, 2014 at 11:06 PM, Adrian Klaver wrote:

> On 01/05/2014 06:31 PM, Keith Fiske wrote:
>
>> Running into an issue trying to dynamically create some SQL statements
>> in a plpgsql function. The function below is as simple an example I can
>> make to reproduce the error. The first loop works without any issues,
>> but the second throws an error.
>>
>> CREATE OR REPLACE FUNCTION testing_record() RETURNS void
>>  LANGUAGE plpgsql
>>  AS $$
>> DECLARE
>>  v_col   text;
>>  v_col_names text[];
>>  v_recordrecord;
>>  v_sql   text;
>> BEGIN
>>
>> CREATE TEMP TABLE test_temp (col1 int, col2 text, col3 timestamp);
>> INSERT INTO test_temp VALUES (1, 'stuff', now());
>> INSERT INTO test_temp VALUES (2, 'stuff', CURRENT_TIMESTAMP + '1
>> day'::interval);
>>
>> v_col_names := '{"col1","col3"}';
>>
>> FOR i IN 1..2
>> LOOP
>>  IF i = 1 THEN
>>  EXECUTE 'SELECT min(col1) as min, max(col1) as max FROM
>> test_temp' INTO v_record;
>>  RAISE NOTICE 'v_record: %', v_record;
>>  v_sql := concat('col1 min: ', quote_literal(v_record.min), ',
>> col1 max: ', quote_literal(v_record.max) );
>>  RAISE NOTICE 'v_sql: %', v_sql;
>>  ELSIF i = 2 THEN
>>  EXECUTE 'SELECT min(col3) as min, max(col3) as max FROM
>> test_temp' INTO v_record;
>>  RAISE NOTICE 'v_record: %', v_record;
>>  v_sql := concat('col3 min: ', quote_literal(v_record.min), ',
>> col3 max: ', quote_literal(v_record.max) );
>>  RAISE NOTICE 'v_sql: %', v_sql;
>>  END IF;
>> END LOOP;
>>
>> FOREACH v_col IN ARRAY v_col_names
>> LOOP
>> EXECUTE 'SELECT min('||v_col||') as min, max('||v_col||') as max
>> FROM test_temp' INTO v_record;
>> RAISE NOTICE 'v_record: %', v_record;
>> v_sql := concat(v_col
>>  , ' min: '
>>  , quote_literal(v_record.min)
>>  , ', '
>>  , v_col
>>  , ' max: '
>>  , quote_literal(v_record.max)
>>);
>> RAISE NOTICE 'v_sql: %', v_sql;
>> END LOOP;
>>
>> DROP TABLE IF EXISTS test_temp;
>>
>> END
>> $$;
>>
>> keith=# select testing_record();
>> NOTICE:  v_record: (1,2)
>> NOTICE:  v_sql: col1 min: '1', col1 max: '2'
>> NOTICE:  v_record: ("2014-01-05 21:24:21.039656","2014-01-06
>> 21:24:21.039656")
>> NOTICE:  v_sql: col3 min: '2014-01-05 21:24:21.039656', col3 max:
>> '2014-01-06 21:24:21.039656'
>> NOTICE:  v_record: (1,2)
>> NOTICE:  v_sql: col1 min: '1', col1 max: '2'
>> NOTICE:  v_record: ("2014-01-05 21:24:21.039656","2014-01-06
>> 21:24:21.039656")
>> ERROR:  type of parameter 7 (timestamp without time zone) does not match
>> that when preparing the plan (integer)
>> CONTEXT:  PL/pgSQL function testing_record() line 34 at assignment
>>
>>
>> I've narrowed down the exact point of the error being the
>> quote_literal() calls. If I comment them out like this:
>>
>> v_sql := concat(v_col
>>  , ' min: '
>> --, quote_literal(v_record.min)
>>  , ', '
>>  , v_col
>>  , ' max: '
>> --, quote_literal(v_record.max)
>>);
>>
>> Then the function runs without any issues, but obviously the values are
>> missing from the NOTICE
>>
>> keith=# select testing_record();
>> NOTICE:  v_record: (1,2)
>> NOTICE:  v_sql: col1 min: '1', col1 max: '2'
>> NOTICE:  v_record: ("2014-01-05 21:25:58.603149","2014-01-06
>> 21:25:58.60

Re: [GENERAL] Planning error in dynamic string creation in plpgsql

2014-01-05 Thread Keith Fiske
Sorry, forgot to include that I've tested this on PostgreSQL versions 9.2.6
and 9.3.2, and the same thing happens on both.

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


On Sun, Jan 5, 2014 at 9:31 PM, Keith Fiske  wrote:

> Running into an issue trying to dynamically create some SQL statements in
> a plpgsql function. The function below is as simple an example I can make
> to reproduce the error. The first loop works without any issues, but the
> second throws an error.
>
> CREATE OR REPLACE FUNCTION testing_record() RETURNS void
> LANGUAGE plpgsql
> AS $$
> DECLARE
> v_col   text;
> v_col_names text[];
> v_recordrecord;
> v_sql   text;
> BEGIN
>
> CREATE TEMP TABLE test_temp (col1 int, col2 text, col3 timestamp);
> INSERT INTO test_temp VALUES (1, 'stuff', now());
> INSERT INTO test_temp VALUES (2, 'stuff', CURRENT_TIMESTAMP + '1
> day'::interval);
>
> v_col_names := '{"col1","col3"}';
>
> FOR i IN 1..2
> LOOP
> IF i = 1 THEN
> EXECUTE 'SELECT min(col1) as min, max(col1) as max FROM test_temp'
> INTO v_record;
> RAISE NOTICE 'v_record: %', v_record;
> v_sql := concat('col1 min: ', quote_literal(v_record.min), ', col1
> max: ', quote_literal(v_record.max) );
> RAISE NOTICE 'v_sql: %', v_sql;
> ELSIF i = 2 THEN
> EXECUTE 'SELECT min(col3) as min, max(col3) as max FROM test_temp'
> INTO v_record;
> RAISE NOTICE 'v_record: %', v_record;
> v_sql := concat('col3 min: ', quote_literal(v_record.min), ', col3
> max: ', quote_literal(v_record.max) );
> RAISE NOTICE 'v_sql: %', v_sql;
> END IF;
> END LOOP;
>
> FOREACH v_col IN ARRAY v_col_names
> LOOP
>EXECUTE 'SELECT min('||v_col||') as min, max('||v_col||') as max
> FROM test_temp' INTO v_record;
>RAISE NOTICE 'v_record: %', v_record;
>v_sql := concat(v_col
> , ' min: '
> , quote_literal(v_record.min)
> , ', '
> , v_col
> , ' max: '
> , quote_literal(v_record.max)
>   );
>RAISE NOTICE 'v_sql: %', v_sql;
> END LOOP;
>
> DROP TABLE IF EXISTS test_temp;
>
> END
> $$;
>
> keith=# select testing_record();
> NOTICE:  v_record: (1,2)
> NOTICE:  v_sql: col1 min: '1', col1 max: '2'
> NOTICE:  v_record: ("2014-01-05 21:24:21.039656","2014-01-06
> 21:24:21.039656")
> NOTICE:  v_sql: col3 min: '2014-01-05 21:24:21.039656', col3 max:
> '2014-01-06 21:24:21.039656'
> NOTICE:  v_record: (1,2)
> NOTICE:  v_sql: col1 min: '1', col1 max: '2'
> NOTICE:  v_record: ("2014-01-05 21:24:21.039656","2014-01-06
> 21:24:21.039656")
> ERROR:  type of parameter 7 (timestamp without time zone) does not match
> that when preparing the plan (integer)
> CONTEXT:  PL/pgSQL function testing_record() line 34 at assignment
>
>
> I've narrowed down the exact point of the error being the quote_literal()
> calls. If I comment them out like this:
>
>v_sql := concat(v_col
> , ' min: '
> --, quote_literal(v_record.min)
> , ', '
> , v_col
> , ' max: '
> --, quote_literal(v_record.max)
>   );
>
> Then the function runs without any issues, but obviously the values are
> missing from the NOTICE
>
> keith=# select testing_record();
> NOTICE:  v_record: (1,2)
> NOTICE:  v_sql: col1 min: '1', col1 max: '2'
> NOTICE:  v_record: ("2014-01-05 21:25:58.603149","2014-01-06
> 21:25:58.603149")
> NOTICE:  v_sql: col3 min: '2014-01-05 21:25:58.603149', col3 max:
> '2014-01-06 21:25:58.603149'
> NOTICE:  v_record: (1,2)
> NOTICE:  v_sql: col1 min: , col1 max:
> NOTICE:  v_record: ("2014-01-05 21:25:58.603149","2014-01-06
> 21:25:58.603149")
> NOTICE:  v_sql: col3 min: , col3 max:
>  testing_record
> 
>
> (1 row)
>
>
> In the real function I'm writing, the columns to be used in the string
> being created are pulled from a configuration table, so their types could
> be anything. So casting the quote_literal() calls is not really an option
> here.
>
> Any help would be appreciated.
>
> --
> Keith Fiske
> Database Administrator
> OmniTI Computer Consulting, Inc.
> http://www.keithf4.com
>


[GENERAL] Planning error in dynamic string creation in plpgsql

2014-01-05 Thread Keith Fiske
Running into an issue trying to dynamically create some SQL statements in a
plpgsql function. The function below is as simple an example I can make to
reproduce the error. The first loop works without any issues, but the
second throws an error.

CREATE OR REPLACE FUNCTION testing_record() RETURNS void
LANGUAGE plpgsql
AS $$
DECLARE
v_col   text;
v_col_names text[];
v_recordrecord;
v_sql   text;
BEGIN

CREATE TEMP TABLE test_temp (col1 int, col2 text, col3 timestamp);
INSERT INTO test_temp VALUES (1, 'stuff', now());
INSERT INTO test_temp VALUES (2, 'stuff', CURRENT_TIMESTAMP + '1
day'::interval);

v_col_names := '{"col1","col3"}';

FOR i IN 1..2
LOOP
IF i = 1 THEN
EXECUTE 'SELECT min(col1) as min, max(col1) as max FROM test_temp'
INTO v_record;
RAISE NOTICE 'v_record: %', v_record;
v_sql := concat('col1 min: ', quote_literal(v_record.min), ', col1
max: ', quote_literal(v_record.max) );
RAISE NOTICE 'v_sql: %', v_sql;
ELSIF i = 2 THEN
EXECUTE 'SELECT min(col3) as min, max(col3) as max FROM test_temp'
INTO v_record;
RAISE NOTICE 'v_record: %', v_record;
v_sql := concat('col3 min: ', quote_literal(v_record.min), ', col3
max: ', quote_literal(v_record.max) );
RAISE NOTICE 'v_sql: %', v_sql;
END IF;
END LOOP;

FOREACH v_col IN ARRAY v_col_names
LOOP
   EXECUTE 'SELECT min('||v_col||') as min, max('||v_col||') as max
FROM test_temp' INTO v_record;
   RAISE NOTICE 'v_record: %', v_record;
   v_sql := concat(v_col
, ' min: '
, quote_literal(v_record.min)
, ', '
, v_col
, ' max: '
, quote_literal(v_record.max)
  );
   RAISE NOTICE 'v_sql: %', v_sql;
END LOOP;

DROP TABLE IF EXISTS test_temp;

END
$$;

keith=# select testing_record();
NOTICE:  v_record: (1,2)
NOTICE:  v_sql: col1 min: '1', col1 max: '2'
NOTICE:  v_record: ("2014-01-05 21:24:21.039656","2014-01-06
21:24:21.039656")
NOTICE:  v_sql: col3 min: '2014-01-05 21:24:21.039656', col3 max:
'2014-01-06 21:24:21.039656'
NOTICE:  v_record: (1,2)
NOTICE:  v_sql: col1 min: '1', col1 max: '2'
NOTICE:  v_record: ("2014-01-05 21:24:21.039656","2014-01-06
21:24:21.039656")
ERROR:  type of parameter 7 (timestamp without time zone) does not match
that when preparing the plan (integer)
CONTEXT:  PL/pgSQL function testing_record() line 34 at assignment


I've narrowed down the exact point of the error being the quote_literal()
calls. If I comment them out like this:

   v_sql := concat(v_col
, ' min: '
--, quote_literal(v_record.min)
, ', '
, v_col
, ' max: '
--, quote_literal(v_record.max)
  );

Then the function runs without any issues, but obviously the values are
missing from the NOTICE

keith=# select testing_record();
NOTICE:  v_record: (1,2)
NOTICE:  v_sql: col1 min: '1', col1 max: '2'
NOTICE:  v_record: ("2014-01-05 21:25:58.603149","2014-01-06
21:25:58.603149")
NOTICE:  v_sql: col3 min: '2014-01-05 21:25:58.603149', col3 max:
'2014-01-06 21:25:58.603149'
NOTICE:  v_record: (1,2)
NOTICE:  v_sql: col1 min: , col1 max:
NOTICE:  v_record: ("2014-01-05 21:25:58.603149","2014-01-06
21:25:58.603149")
NOTICE:  v_sql: col3 min: , col3 max:
 testing_record


(1 row)


In the real function I'm writing, the columns to be used in the string
being created are pulled from a configuration table, so their types could
be anything. So casting the quote_literal() calls is not really an option
here.

Any help would be appreciated.

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


Re: [GENERAL] Foreign Key violated

2013-06-04 Thread Keith Fiske
Apologies for not replying sooner. After a few days, we actually found out
the cause was a user turning off all triggers on the table, forcing some
data into it to try and solve an RMA issue manually, then turning the
triggers back on. This hadn't shown up in any logs, and after finding zero
signs of corruption or other FKs being violated, we asked them for more
information about what had been done recently, and they fessed up.

So, relief on one hand that there was no data corruption. But a bit
troubling that the user did that :p
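
For anyone who runs into a similar state: as far as I can tell, the sequence
below reproduces it exactly. ALTER TABLE ... DISABLE TRIGGER ALL requires
superuser precisely because it also disables the internal constraint
triggers that enforce foreign keys, and re-enabling them does not
re-validate rows inserted in the meantime (a sketch, with values from the
example above; the real insert had more columns):

ALTER TABLE rma_items DISABLE TRIGGER ALL;
INSERT INTO rma_items (rma_id, rma_status) VALUES (1008122437, 'r');  -- goes in unchecked
ALTER TABLE rma_items ENABLE TRIGGER ALL;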

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


On Wed, May 29, 2013 at 10:52 AM, Thom Brown  wrote:

> On 23 May 2013 15:33, Thom Brown  wrote:
> > On 23 May 2013 10:15, Keith Fiske  wrote:
> >> Client reported an issue where it appears a foreign key has been
> violated
> >>
> >> prod=# \d rma_items
> >> [snip]
> >> "rma_items_rma_id_status_fk" FOREIGN KEY (rma_id, rma_status) REFERENCES
> >> rmas(id, status) ON UPDATE CASCADE ON DELETE CASCADE
> >>
> >> prod=# select i.rma_id, i.rma_status, r.id, r.status from rmas r join
> >> rma_items i on i.rma_id = r.id and i.rma_status != r.status;
> >>    rma_id   | rma_status |     id     | status
> >> ------------+------------+------------+--------
> >>  1008122437 | r          | 1008122437 | c
> >> (1 row)
> >>
> >>
> >> Attempting to reinsert this data again causes a violation error, so it
> >> doesn't appear to be broken
> >>
> >> prod=# begin;
> >> BEGIN
> >> prod=# insert into rma_items (rma_id, order_item_id, return_reason_id,
> >> rma_status) values (1008122437, 1007674099, 9797623, 'r');
> >> ERROR:  insert or update on table "rma_items" violates foreign key
> >> constraint "rma_items_rma_id_status_fk"
> >> DETAIL:  Key (rma_id, rma_status)=(1008122437, r) is not present in
> table
> >> "rmas".
> >> prod=# rollback;
> >> ROLLBACK
> >>
> >> This is running 9.2.4 on CentOS. If anyone can suggest how I can look
> into
> >> this deeper and find what the problem may be, I'd appreciate it. I'm
> here at
> >> PGCon if anyone is available to help IRL as well
> >
> > What do you get with:
> >
> > SELECT conname
> > FROM pg_constraint
> > WHERE NOT convalidated;
>
> Did you resolve this?
>
> --
> Thom
>


[GENERAL] Foreign Key violated

2013-05-23 Thread Keith Fiske
Client reported an issue where it appears a foreign key has been violated

prod=# \d rma_items
[snip]
"rma_items_rma_id_status_fk" FOREIGN KEY (rma_id, rma_status) REFERENCES
rmas(id, status) ON UPDATE CASCADE ON DELETE CASCADE

prod=# select i.rma_id, i.rma_status, r.id, r.status from rmas r join
rma_items i on i.rma_id = r.id and i.rma_status != r.status;
   rma_id   | rma_status |     id     | status
------------+------------+------------+--------
 1008122437 | r          | 1008122437 | c
(1 row)


Attempting to reinsert this data again causes a violation error, so it
doesn't appear to be broken

prod=# begin;
BEGIN
prod=# insert into rma_items (rma_id, order_item_id, return_reason_id,
rma_status) values (1008122437, 1007674099, 9797623, 'r');
ERROR:  insert or update on table "rma_items" violates foreign key
constraint "rma_items_rma_id_status_fk"
DETAIL:  Key (rma_id, rma_status)=(1008122437, r) is not present in table
"rmas".
prod=# rollback;
ROLLBACK

This is running 9.2.4 on CentOS. If anyone can suggest how I can look into
this deeper and find what the problem may be, I'd appreciate it. I'm here
at PGCon if anyone is available to help IRL as well

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
http://www.keithf4.com


[GENERAL] 9.1.3 AIX build issue

2012-11-04 Thread Keith Handlon
 gcc -maix64 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wformat-security
-fno-strict-aliasing -fwrapv pg_ctl.o  -L../../../src/port -lpgport
-L../../../src/interfaces/libpq -lpq -L../../../src/port -Wl,-bbigtoc
-Wl,-blibpath:'/users/kehand/pgsql_9.1.3/lib:/usr/lib:/lib'  -lpgport -lld
-lm  -o pg_ctl

ld: 0711-317 ERROR: Undefined symbol: .PQping



Previously I was able to get 9.1.1 and 9.1.4 to build on AIX.



strings -a src/interfaces/libpq/fe-connect.o | grep PQping

PQpingParams
PQping
.PQpingParams
.PQping
PQpingParams
PQpingParams
PQping
PQping



dump -tv -X64 src/interfaces/libpq/libpq.so.5 | grep PQping

(no output)

The same dump command on the 9.1.4 libpq.so.5:

dump -tv -X64 /users/kehand/postgresql_9.1.4/lib/libpq.so.5 | grep PQping

[1120]  m   0x1000c2a8 .text  1  extern  .PQpingParams
[1124]  m   0x1000c3e4 .text  1  extern  .PQping
[1278]  m   0x200030b8 .data  1  unamex  PQpingParams
[1280]  m   0x200030b8 .data  1  extern  PQpingParams
[1286]  m   0x200030e8 .data  1  unamex  PQping
[1288]  m   0x200030e8 .data  1  extern  PQping



So the symbol isn’t making it into the shared object.  Anyone have any idea
as to why?


There are actually only ~478 symbols in the libpq.so.5 generated by 9.1.3,
and seemingly none of the Postgres ones.


[GENERAL] Can't reset password

2012-07-17 Thread Keith Chen
Hello,

I deleted my PostgreSQL installation and tried to reinstall it. I tried to
reset the password by typing in "net user postgres *". However, the new
password doesn't work when the installer prompts me to input the password.
Could you please help me?

Thanks,

Keith

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Issue with extension updates to pg_extension table

2012-07-05 Thread Keith Fiske
For reasons I've brought up before
(http://archives.postgresql.org/pgsql-general/2012-06/msg00174.php), I
need to stop some of my extension tables from dumping data when a
schema-only dump is done, because they have the potential to contain A
LOT of data.

For reference my extension is https://github.com/omniti-labs/pg_jobmon

The problem I'm having is changing this with an extension update. I
want to turn off the data dumping for the job_log and job_detail
tables. The only method I've found to do this is by directly updating
the extconfig column in the pg_extensions table. If I run this update
directly via psql, it works fine.

db=# select unnest(extconfig) as oid,
split_part(unnest(extconfig)::regclass::text, '.', 2) as tablename
from pg_extension where extname = 'pg_jobmon';
    oid    |    tablename
-----------+------------------
 214224990 | job_log
 214225005 | job_detail
 214225022 | job_check_log
 214225028 | job_check_config
 214225038 | job_status_text
 214972369 | dblink_mapping
(6 rows)

db=# begin;
BEGIN
db=# UPDATE pg_extension SET extconfig = (SELECT array_agg(t.oid) FROM (
db(# SELECT unnest(extconfig) AS oid,
split_part(unnest(extconfig)::regclass::text, '.', 2) AS tablename
db(# FROM pg_extension WHERE extname = 'pg_jobmon') t
db(# WHERE t.tablename NOT IN ('job_log', 'job_detail') ) WHERE
extname = 'pg_jobmon';
UPDATE 1
db=# select unnest(extconfig) as oid,
split_part(unnest(extconfig)::regclass::text, '.', 2) as tablename
from pg_extension where extname = 'pg_jobmon';
    oid    |    tablename
-----------+------------------
 214225022 | job_check_log
 214225028 | job_check_config
 214225038 | job_status_text
 214972369 | dblink_mapping
(4 rows)

db=# rollback;
ROLLBACK


However, if I run this exact same query as an extension update, it does nothing

db=# select unnest(extconfig) as oid,
split_part(unnest(extconfig)::regclass::text, '.', 2) as tablename
from pg_extension where extname = 'pg_jobmon';
    oid    |    tablename
-----------+------------------
 214224990 | job_log
 214225005 | job_detail
 214225022 | job_check_log
 214225028 | job_check_config
 214225038 | job_status_text
 214972369 | dblink_mapping
(6 rows)

db=# alter extension pg_jobmon update to '0.3.3';
ALTER EXTENSION
db=# select unnest(extconfig) as oid,
split_part(unnest(extconfig)::regclass::text, '.', 2) as tablename
from pg_extension where extname = 'pg_jobmon';
    oid    |    tablename
-----------+------------------
 214224990 | job_log
 214225005 | job_detail
 214225022 | job_check_log
 214225028 | job_check_config
 214225038 | job_status_text
 214972369 | dblink_mapping
(6 rows)

I know this isn't really something that would be done often, but it
just seemed a rather odd behavior so I thought I'd bring it up in case
it's something that can be easily fixed.
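
One more wrinkle for anyone else poking at this catalog: pg_extension also
has a parallel extcondition text[] column (the optional WHERE filter for
each config table), which presumably needs to stay the same length as
extconfig, since pg_dump reads the two side by side. A sketch of the same
update filtering both arrays in step, in the style of the query above:

UPDATE pg_extension SET extconfig = (SELECT array_agg(t.oid) FROM (
SELECT unnest(extconfig) AS oid,
split_part(unnest(extconfig)::regclass::text, '.', 2) AS tablename
FROM pg_extension WHERE extname = 'pg_jobmon') t
WHERE t.tablename NOT IN ('job_log', 'job_detail') ),
extcondition = (SELECT array_agg(t.cond) FROM (
SELECT unnest(extcondition) AS cond,
split_part(unnest(extconfig)::regclass::text, '.', 2) AS tablename
FROM pg_extension WHERE extname = 'pg_jobmon') t
WHERE t.tablename NOT IN ('job_log', 'job_detail') )
WHERE extname = 'pg_jobmon';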

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
443.325.1357 x251

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Extension table data

2012-06-11 Thread Keith Fiske
Along with Chris, the need for having tables defined in the extension
is about keeping control of the table structure so the extension
functions are guaranteed to work as intended and to make upgrading
versions easier. As an example, the fact that I can specifically name
constraints makes it easier to write an extension upgrade script if
that constraint needs to change because I know exactly what I called
it. Trying to support extensions and requiring that we write long,
explicit instructions for creating and maintaining the associated
tables is just asking for trouble. Especially when we can have the
control we need to avoid these issues.

Honestly, the big issue I have right now is that it is dumping data
with the schema-only option and only dumping that data if you do a
complete database dump. That is making it very difficult to manage
extensions even using them as they're documented now.

I think its time to recognize the extension system is more widely
usable than it was originally intended. And that's a good thing! We
just need to try to find ways to make the existing tools work in a
more predictable manner now.

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
443.325.1357 x251


On Sat, Jun 9, 2012 at 9:56 AM, Chris Travers  wrote:
> On Sat, Jun 9, 2012 at 5:33 AM, Tom Lane  wrote:
>> Keith Fiske  writes:
>>> Just found something else rather disturbing. If you try to exclude the
>>> schema that the extension tables are in, their data is still output.
>>
>> This is a common misconception: extensions do not live within schemas.
>> (An extension might own a schema, not the other way around.)  So a
>> dump with a -n switch is never going to select an extension.
>>
>> By and large, if the current behavior bothers you, ISTM it probably
>> means you are using these tables in a way other than what the concept of
>> an extension configuration table was meant for: namely, to hold
>> configuration data that would be referenced by the functions in that
>> extension, but would not normally be considered part of the user's data.
>> There has been some talk of trying to cater for a more general notion of
>> tables created by extensions, but we do not have a design or even a
>> clear idea of a set of requirements for that.  Perhaps it would be good
>> if you explained what is your use-case --- why are you concerned about
>> being able to manage these tables as if they were regular data?
>>
> Here's a use case I have been thinking a lot about lately.  I am not
> sure that extensions is the right vehicle for it, but it may be a good
> starting point.
>
> LedgerSMB is moving towards a more modular structure and LedgerSMB 1.5
> will probably require PostgreSQL 9.1 or higher (owing to the writable
> CTE's).  I have been looking at how the extension system can be used
> to simplify our maintenance and it helps quite a bit.  However, one
> key aspect that would be really nice would be managing schema changes
> along with changing versions of an extension.  For example, one might
> have a fixed asset module, and that module might have a series of
> stored procedures.  We would have some tables that store configuration
> data regarding the module.  For example we might have a table that
> stores info on stored procedures that track depreciation methods.  The
> use of tables for configuration data in the current approach fits this
> nicely.
>
> However we might have other tables which store data, things like which
> fixed assets get depreciated in which ways, and the like.  These may
> need to have columns added from time to time, or have other alter
> table operations performed.  It would be nice to be able to manage
> these schema changes and upgrading the extension in the same
> framework.
>
> I am wondering if some sort of "database modules" framework might be
> helpful with modules possibly having extensions, but also having data
> tables.
>
> Best Wishes,
> Chris Travers

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Extension table data

2012-06-09 Thread Keith Fiske
Just found something else rather disturbing. If you try to exclude the
schema that the extension tables are in, their data is still output.
Explicitly naming other schemas doesn't seem to dump the extension
data. So the only way to avoid getting the extension data in a
schema-only dump is to explicitly name all schemas but the one your
extension is in, which I think is another bug you had actually fixed
for 9.1.3 where extension data was always being dumped.

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
443.325.1357 x251


On Sat, Jun 9, 2012 at 2:56 AM, Keith Fiske  wrote:
> With the current design, I understand what you're saying now. Just
> doing some more testing, I was able to do a pg_dump -Fc -s for the
> entire database, and looking through the resulting object list with
> pg_restore -l, I'm actually seeing the extension table data included in
> the dump file. Doing a restore on the schema I put the extension in, I
> see the COPY commands to restore the data. I think this is a serious
> shortcoming, and a confusing state of affairs. The only way to get
> this data out seems to be to do a full database dump, and the
> schema-only option to pg_dump is outputting data.
>
> Looking at the docs, I think the extension authors may have only had
> configuration data in mind for extension tables. I don't see any
> reason why we shouldn't be able to put any sort of table in our
> extensions, some having actual data, not just config. That's actually
> what I'm doing with my pg_jobmon extension, which could potentially
> have millions of rows over time. Not having the tables included in the
> extension definition would just make setting the whole thing up more
> difficult and error prone.
>
> If extensions really are going to be self contained like this as far
> as dumps, perhaps another option to pg_dump is needed, and have the
> schema-only or data-only options be honored in that case as well.
>
> --
> Keith Fiske
> Database Administrator
> OmniTI Computer Consulting, Inc.
> 443.325.1357 x251
>
>
> On Fri, Jun 8, 2012 at 11:38 PM, Tom Lane  wrote:
>> Keith Fiske  writes:
>>> I've read the documentation for extensions and how their data is not
>>> normally dumped by pg_dump and how to configure the table so it should
>>> dump its data
>>> http://www.postgresql.org/docs/9.1/static/extend-extensions.html
>>> However, after setting this option for the tables, the data is not
>>> being dumped when I do a pg_dump of either individual tables or the
>>> schema I've installed the extension to.
>>
>> IIRC, the design intention is that such a table's data would be dumped
>> when (and only when) the extension is dumped.  That is, I'd expect to
>> see a "CREATE EXTENSION foo;" and then data for the extension's tables.
>> The partial-dump scenarios you mention wouldn't dump extensions, hence
>> not extension table data either.
>>
>> Whether this design is a good one is still under debate, but I think
>> pg_dump is operating as designed here ...
>>
>>                        regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Extension table data

2012-06-08 Thread Keith Fiske
With the current design, I understand what you're saying now. Just
doing some more testing, I was able to do a pg_dump -Fc -s for the
entire database, and looking through the resulting object list with
pg_restore -l, I'm actually seeing the extension table data included in
the dump file. Doing a restore on the schema I put the extension in, I
see the COPY commands to restore the data. I think this is a serious
shortcoming, and a confusing state of affairs. The only way to get
this data out seems to be to do a full database dump, and the
schema-only option to pg_dump is outputting data.
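
For reference, the check amounts to something like this (paths are
hypothetical):

pg_dump -Fc -s mydb -f /tmp/schema_only.dump
pg_restore -l /tmp/schema_only.dump | grep 'TABLE DATA'

Any extension config table whose data would be restored shows up as a
TABLE DATA entry in that listing.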

Looking at the docs, I think the extension authors may have only had
configuration data in mind for extension tables. I don't see any
reason why we shouldn't be able to put any sort of table in our
extensions, some having actual data, not just config. That's actually
what I'm doing with my pg_jobmon extension, which could potentially
have millions of rows over time. Not having the tables included in the
extension definition would just make setting the whole thing up more
difficult and error prone.

If extensions really are going to be self contained like this as far
as dumps, perhaps another option to pg_dump is needed, and have the
schema-only or data-only options be honored in that case as well.

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
443.325.1357 x251


On Fri, Jun 8, 2012 at 11:38 PM, Tom Lane  wrote:
> Keith Fiske  writes:
>> I've read the documentation for extensions and how their data is not
>> normally dumped by pg_dump and how to configure the table so it should
>> dump its data
>> http://www.postgresql.org/docs/9.1/static/extend-extensions.html
>> However, after setting this option for the tables, the data is not
>> being dumped when I do a pg_dump of either individual tables or the
>> schema I've installed the extension to.
>
> IIRC, the design intention is that such a table's data would be dumped
> when (and only when) the extension is dumped.  That is, I'd expect to
> see a "CREATE EXTENSION foo;" and then data for the extension's tables.
> The partial-dump scenarios you mention wouldn't dump extensions, hence
> not extension table data either.
>
> Whether this design is a good one is still under debate, but I think
> pg_dump is operating as designed here ...
>
>                        regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Extension table data

2012-06-08 Thread Keith Fiske
I've read the documentation for extensions and how their data is not
normally dumped by pg_dump and how to configure the table so it should
dump its data
http://www.postgresql.org/docs/9.1/static/extend-extensions.html
However, after setting this option for the tables, the data is not
being dumped when I do a pg_dump of either individual tables or the
schema I've installed the extension to. The database I'm working with
right now is far too big to do a full pg_dump or pg_dumpall, so I
haven't been able to test that. I would assume that doing just the
table or schema should work, though?
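
For context, the way that option gets set is a call like this from within
the extension's install script (per the docs page above; sketched here with
one of my table names):

SELECT pg_catalog.pg_extension_config_dump('job_log', '');

The second argument is an optional WHERE condition filtering which rows get
dumped; an empty string means all rows.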

Extension I'm working on: https://github.com/omniti-labs/pg_jobmon

Anyone else having this issue or am I doing something wrong?

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
443.325.1357 x251

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] ALTER DEFAULT PRIVILEGES target_role doesn't work with group roles

2012-02-21 Thread Keith Fiske
Situation:
I have two roles, alice & bob. Both are members of the dev_user group role.
I have a schema called 'reports' that both of these users would like
to be able to manage.

I thought I could use the ALTER DEFAULT PRIVILEGES option
(http://www.postgresql.org/docs/9.1/static/sql-alterdefaultprivileges.html)
to set it up so that if anyone in the dev_user group role created a
table in the reports schema, then Postgres would automatically grant
all privileges to the group role. Then both Alice and Bob could access
each other's objects in a schema other than their own. This would also
make it so that any future roles added to the dev_user group role would
have this happen automatically.

ALTER DEFAULT PRIVILEGES FOR ROLE dev_user IN SCHEMA reports GRANT ALL
ON TABLES TO dev_user;

It turns out the "target_role" does not work for group roles. When
either Alice or Bob creates a table in the reports schema, the
dev_user grants are not automatically added. I had to explicitly set
the default privileges for each role:

ALTER DEFAULT PRIVILEGES FOR ROLE alice IN SCHEMA reports GRANT ALL ON
TABLES TO dev_user;
ALTER DEFAULT PRIVILEGES FOR ROLE bob IN SCHEMA reports GRANT ALL ON
TABLES TO dev_user;
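
The entries that do exist can be seen in the pg_default_acl catalog; a
query along these lines shows which per-role defaults actually took:

SELECT r.rolname, n.nspname, d.defaclobjtype, d.defaclacl
FROM pg_default_acl d
JOIN pg_roles r ON r.oid = d.defaclrole
LEFT JOIN pg_namespace n ON n.oid = d.defaclnamespace;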

This isn't ideal for long term management. I wasn't really sure if
this was a bug or a lack of clarity in the docs, so thought I'd throw
it out to General for comments first. And to make sure I'm explaining
this clearly enough for others to reproduce it and see if I'm not
asking for something unreasonable.

--
Keith Fiske
Database Administrator
OmniTI Computer Consulting, Inc.
443.325.1357 x251

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to update stavaluesN columns in pg_statistics (type anyarray)

2009-05-22 Thread Keith Hayden
2009/5/22 Keith Hayden 

> 2009/5/22 Tom Lane 
>
>   Keith Hayden  writes:
>> > I need to spoof statistics, and so need to update the stavaluesN columns
>> in
>> > pg_statistics, which are of type anyarray. Can this be done using an
>> UPDATE
>> > statement ? I have tried using array[...] and '{...}' syntax with no
>> luck.
>> > Any other ideas as to how to achieve this ?
>>
>> I've done it successfully but it's not something you should consider for
>> any sort of production purpose.  IIRC the non-obvious part is that the
>> columns are declared anyarray which has alignment 'd', but if you want
>> to stick in an array of, say, integer then that only has alignment 'i'.
>> I think what I did to make it work was to temporarily change the
>> typalign entry in _int4's pg_type row to 'd' ... this was in a test
>> database so I wasn't afraid of how badly it might break ;-)
>>
>> Something that might actually be sustainable is to take advantage
>> of the planner statistics hook function that is new in 8.4.  That is,
>> make a hook function that shoves in the values you want at the time
>> of use, rather than modifying pg_statistic directly.
>>
>>regards, tom lane
>
>
> Thanks Tom, I will give the typalign change a go and see if that works. 8.4
> is not an option at the moment. I am looking for a way to create a bunch of
> tables and indices with meaningful/realistic stats and tuple counts, without
> actually loading data into these relations, and then get meaningful query
> plans out. So any other hints or tips you have will be gratefully received,
>
> Keith.
>

I tried the following:

update  pg_type set typalign = 'd' where typname = 'int4'

then

update pg_statistic set stavalues1 = array[1,10,20,30,40,50,60,70,80,90,100]
where starelid = 24751

this failed with:

ERROR:  table row type and query-specified row type do not match
DETAIL:  Table has type anyarray at ordinal position 18, but query expects
integer[].

Any idea how you got this to work once you updated pg_type.typalign to 'd'
for int4 ?

Thanks,
Keith.


Re: [GENERAL] How to update stavaluesN columns in pg_statistics (type anyarray)

2009-05-22 Thread Keith Hayden
2009/5/22 Tom Lane 

>  Keith Hayden  writes:
> > I need to spoof statistics, and so need to update the stavaluesN columns
> in
> > pg_statistics, which are of type anyarray. Can this be done using an
> UPDATE
> > statement ? I have tried using array[...] and '{...}' syntax with no
> luck.
> > Any other ideas as to how to achieve this ?
>
> I've done it successfully but it's not something you should consider for
> any sort of production purpose.  IIRC the non-obvious part is that the
> columns are declared anyarray which has alignment 'd', but if you want
> to stick in an array of, say, integer then that only has alignment 'i'.
> I think what I did to make it work was to temporarily change the
> typalign entry in _int4's pg_type row to 'd' ... this was in a test
> database so I wasn't afraid of how badly it might break ;-)
>
> Something that might actually be sustainable is to take advantage
> of the planner statistics hook function that is new in 8.4.  That is,
> make a hook function that shoves in the values you want at the time
> of use, rather than modifying pg_statistic directly.
>
>regards, tom lane


Thanks Tom, I will give the typalign change a go and see if that works. 8.4
is not an option at the moment. I am looking for a way to create a bunch of
tables and indices with meaningful/realistic stats and tuple counts, without
actually loading data into these relations, and then get meaningful query
plans out. So any other hints or tips you have will be gratefully received,

Keith.


[GENERAL] How to update stavaluesN columns in pg_statistics (type anyarray)

2009-05-22 Thread Keith Hayden
Hi,

I need to spoof statistics, and so need to update the stavaluesN columns in
pg_statistics, which are of type anyarray. Can this be done using an UPDATE
statement ? I have tried using array[...] and '{...}' syntax with no luck.
Any other ideas as to how to achieve this ?

Thanks,
Keith.


Re: [GENERAL] running postgresql on a private machine accessing it from public web pages

2009-05-21 Thread Keith D. Evans
We usually run postgresql on a private machine (gravity) but, due to
space, we have moved the database to a different machine reached through
AFS (a local network filesystem at the university). We do not want the
private machine to be accessible to outside users, so our web pages are
hosted on the university's public web servers. They have mysql, but won't
install postgresql, which we've been using on gravity for years.


We want to add a database where users can get and plot data using php 
over the internet. But since the public computers don't have postgresql, 
can we have postgresql running on gravity and still allow users coming in
over the internet through the public pages to access the data?


thanx,
keith


Raymond O'Donnell wrote:

On 21/05/2009 18:49, Keith D. Evans wrote:

  

The database is on a public machine (Linux), but the postgresql
postmaster runs on a private machine (Solaris 10).



That doesn't make a lot of sense, unless you've got *two* postmasters
running, one on each machine, or maybe you've created a tablespace over
NFS or some such.

Can you explain more clearly what you mean by the above?


Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--


  


--
"Nonviolence is not a cover for cowardice, but it is the supreme virtue of the 
brave."  Mohandas Karamchand Gandhi
===
Keith D. Evans
Joint Center for Earth Systems Technology/UMBC
(301) 614-6282 (M,Tu)
(410) 455-5751 (W,Th,F)
http://www.jcet.umbc.edu/bios/evanmain.html


Any opinions expressed in this email are not those of 
NASA, or the Goddard Space Flight Center, or the Joint 
Center for Earth Systems Technology or the University 
of Maryland Baltimore County. 






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] running postgresql on a private machine accessing it from public web pages

2009-05-21 Thread Keith D. Evans

Joshua,

Let me rephrase what I wrote. The database is on a public machine 
(Linux), but the postgresql postmaster runs on a private machine 
(Solaris 10).  Can we access the postgresql database through the public 
machine, even though the postmaster is running from a (different) 
private machine?


thanx
keith

Joshua D. Drake wrote:
So, the question is, can someone go through these (public) web pages and 
access the postgresql database if the postgresql server is running on 
the private machine?


We have other data in postgresql and would like to only have to use
one database type, i.e., postgresql.



If you are using proper host control then yes you can make it so that
the PHP user is only able to access the public data in the private
database. See here:

http://www.postgresql.org/docs/8.3/static/client-authentication.html
http://www.postgresql.org/docs/8.3/static/user-manag.html

Sincerely,

Joshua D. Drake

  


--
"Nonviolence is not a cover for cowardice, but it is the supreme virtue of the 
brave."  Mohandas Karamchand Gandhi
===
Keith D. Evans
Joint Center for Earth Systems Technology/UMBC
(301) 614-6282 (M,Tu)
(410) 455-5751 (W,Th,F)
http://www.jcet.umbc.edu/bios/evanmain.html


Any opinions expressed in this email are not those of 
NASA, or the Goddard Space Flight Center, or the Joint 
Center for Earth Systems Technology or the University 
of Maryland Baltimore County. 






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] running postgresql on a private machine accessing it from public web pages

2009-05-21 Thread Keith D. Evans
We have a private machine that runs postgresql and have created a 
database. We want to access that database from the web for plotting 
using php. The web server and pages are on a public (different) machine 
that does not have postgresql. Nor will they install it for us. The 
postgresql database is also on the public machine (which the private 
machine can access).


So, the question is, can someone go through these (public) web pages and 
access the postgresql database if the postgresql server is running on 
the private machine?


We have other data in postgresql and would like to only have to use
one database type, i.e., postgresql.


Thanx,
keith

--
"Nonviolence is not a cover for cowardice, but it is the supreme virtue of the 
brave."  Mohandas Karamchand Gandhi
===
Keith D. Evans
Joint Center for Earth Systems Technology/UMBC
(301) 614-6282 (M,Tu)
(410) 455-5751 (W,Th,F)
http://www.jcet.umbc.edu/bios/evanmain.html


Any opinions expressed in this email are not those of 
NASA, or the Goddard Space Flight Center, or the Joint 
Center for Earth Systems Technology or the University 
of Maryland Baltimore County. 






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Suspected bug: outer WHERE reordered before inner WHERE -> input syntax ERROR

2008-09-08 Thread John Keith Hohm
On Mon, 08 Sep 2008 13:53:03 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:

> This isn't a bug: the optimizer is entitled to rearrange WHERE clauses
> any way it pleases.  If you want an optimization fence between the
> inner and outer SELECTS, add OFFSET 0 (or LIMIT ALL if you like).

Thanks for telling me about LIMIT ALL being an optimization fence.
I believe you that it is not a bug, but I admit I cannot find any
documentation of that entitlement.

The description of sub-selects in the SELECT statement documentation
suggests that a sub-select is equivalent to creating a temporary table
with the inner select for the duration of that statement, and clearly
that would not have the same effect.  And the documentation of LIMIT
ALL suggests it is optional noise.

http://www.postgresql.org/docs/8.3/interactive/sql-select.html:

A sub-SELECT can appear in the FROM clause. This acts as though its
output were created as a temporary table for the duration of this
single SELECT command. [...]

http://www.postgresql.org/docs/8.3/interactive/queries-limit.html:

[...] LIMIT ALL is the same as omitting the LIMIT clause. 

Is there a SQL standard document that gives permission for various
optimizations, and the PostgreSQL documentation contains some
technically inaccurate simplifications for exposition?

> It does raise the question of why you aren't just doing
>   where trim(n) != '-1'
> I'm also wondering whether the logic is even consistent: something
> with a minus sign in it will never get through the inner WHERE,
> so what is the point of the outer one?

Sorry, in minimizing my example to demonstrate the problem I made it
appear pointless; the actual query that motivated the discussion is:

select * from (
   select * from Inquiry where nullif(trim(Member_Nbr), '') is not
NULL and trim(trim(Member_Nbr), '0123456789') = ''
-- and I've added LIMIT ALL here to make it work
) as valid_Inquiry where Member_Nbr::integer not in (
   select Member_Nbr::integer from Member
);

...which is part of the conversion process for data dumped from an
Access database.

-- 
John Keith Hohm
<[EMAIL PROTECTED]>



[GENERAL] Suspected bug: outer WHERE reordered before inner WHERE -> input syntax ERROR

2008-09-08 Thread John Keith Hohm
This fails with ERROR:  invalid input syntax for integer: "JOHN":

select * from (
  select * from (VALUES ('100'), ('JOHN')) as A (n)
  where trim(trim(n), '0123456789') = ''
) as B where n::integer <> -1;

But without the nested trim, this works just fine (although it would not
suffice for my purpose):

select * from (
  select * from (VALUES ('100'), ('JOHN')) as A (n)
  where trim(n, '0123456789') = ''
) as B where n::integer <> -1;

I think they should both work because the documentation says "A
sub-SELECT can appear in the FROM clause. This acts as though its
output were created as a temporary table for the duration of this
single SELECT command."

The cause is clear when you look at the EXPLAIN for the erroring query:

 Values Scan on "*VALUES*"  (cost=0.00..0.06 rows=1 width=32)
   Filter: (((column1)::integer <> (-1)) AND (btrim(btrim(column1),
'0123456789'::text) = ''::text))

Versus the single-trim EXPLAIN:

 Values Scan on "*VALUES*"  (cost=0.00..0.05 rows=1 width=32)
   Filter: ((btrim(column1, '0123456789'::text) = ''::text) AND
((column1)::integer <> (-1)))

The extra trim causes the cast-and-compare to happen before the
trim-and-compare. By my understanding PostgreSQL should not be allowed
to reorder the clause of the subselect before the outer select.

I'm running the Ubuntu postgresql package version 8.3.3-0ubuntu0.8.04.

I'm also interested in responses of the form "why not just do X?".

-- 
John Keith Hohm
<[EMAIL PROTECTED]>



[GENERAL] PL/PGSql function within a view definition

2008-02-15 Thread Keith Haugh
  Due to limitations (perceived or real) within my client application, I am 
trying to return a complex dataset, which I am assembling using an expensive 
PL/PGSql function that I would like to wrap in a writeable view.
I have written the function as both a row-level function which returns a 
ROWTYPE and as a table level function which returns a SETOF ROWTYPES.  In both 
cases I have encountered issues.  Please keep in mind that my end goal is to 
create a writeable view that my client app will treat as a simple table.
OPTION ONE - ROWTYPE
--this works correctly.
Select my_func(1); 
--and this works correctly
Select my_table.a, my_func(my_table.a) 
From my_table 
Where my_table.a in (1,2,3); 
--works great.
--however when I create the following view and use the following query...
Create view my_view as select my_table.a as a, my_func(my_table.a)  from 
my_table; 
Select * from my_view where a in (1,2,3);
  --the function appears to be run on each row of my_table which is not 
tolerable due to the size of my_table and the cost of my_func.
Any suggestions on how to force the selection of my_table records prior to 
executing the function?
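One possible workaround for the ad-hoc form (a sketch, untested; it relies on
the planner treating OFFSET 0 as an optimization fence, as discussed elsewhere
in this archive, and it does not directly give a writeable view):

Select sub.a, my_func(sub.a)
From (Select my_table.a From my_table
      Where my_table.a in (1,2,3) Offset 0) as sub;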
  OPTION TWO – SETOF ROWTYPE
--this works correctly.
Select * from my_func2(1);
--however
Select * from my_table, my_func(my_table.a) where my_table.a in (1,2,3);
--appears to be an illegal construct within postgres which prevents me from 
creating the following view.
Create view my_view as select a, b.* from my_table, my_func(my_table.a) as 
b;
--to be used in the following manner
  Select * from my_view where a in (1,2,3);
Any suggestions on either of these two potential solutions or suggestions as to 
other methods are greatly appreciated.

Re: [GENERAL] Restore problem

2007-12-11 Thread Keith Turner

On 12/10/07, Keith Turner <[EMAIL PROTECTED]> wrote:

> We are running 8.1 on Windows 2003 server and have had a server crash
> over the weekend. A virus is suspected - we maintain an app server on
> someone else's network, though we do have anti-virus running, the
> symptoms were worrying - so we had to wipe and reinstall the OS and
all
> programs. We had recovered the 8.1 folder and the data off the crashed
> computer prior to the wipe, but the last good .backup file is a few
days
> older than that.
>
> Are there step by step instructions on restoring from the folder
itself
> instead of a backup file? Is it even possible?

Trevor Talbot wrote:
>I would try installing 8.1 again, stop the service, _copy_ your data
>and config over the top of it, then start it again and see what
>happens. Any problems with that should show up immediately in the
>logs.

>And if all else fails, you can just nuke the attempt and restore from
>the older backups.

I did this - it took a couple of tries, but it worked. 

For the record, here's a description of the problem and solution:

Problem (before I became involved): Someone had turned off RPC service
while trying to fix something. Someone else rebooted the server and all
the windows services failed on restart. Panicked and suspecting a virus
they reinstalled the OS, but had the wit to save the postgres folders but
couldn't make a .backup file. And the last backup file was the night
before, and they had done a lot of work that would be lost if that was
used.

Solution:
It wasn't a simple matter of just copying over the data folder. There
were reasons, having to do with not letting the client see you sweat,
to do it in the background. So on a fresh system we did as suggested above,
copied over the files. One step that seems to be required (at least in
this case) was a clean new install of postgres, stopping it immediately
after the install, not one that had been used in any way.

Required: The postgres files from the crashed server. 

Step 1.
Clean install of same version of Postgres (8.1) on second computer using
same superuser and password.

Step 2.
Stop the postgres service

Step 3.
Copy over Data folder replacing the one that was created on install

Step 4.
Start postgres

Step 5. 
Log in to pgadmin, confirm that the data is there

Step 6. 
Create a .backup that can be restored on the live site. 

Step 7.
Create the database on the live site and restore from the .backup file.
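For illustration, steps 2 through 4 might look like this from a Windows
command prompt (a sketch; the service name and paths depend on the installer):

net stop pgsql-8.1
rmdir /S /Q "C:\Program Files\PostgreSQL\8.1\data"
xcopy /E /I "D:\recovered\data" "C:\Program Files\PostgreSQL\8.1\data"
net start pgsql-8.1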

Thanks for your suggestion. We had tried it and failed, but this
reinforced that it might work, so we then tried with a fresh install as
well, and were delighted to have our data back.

Regards,

Keith



Re: [GENERAL] Hijack!

2007-12-11 Thread Keith Turner
Thank you for your response. What may be obvious to some isn't always to
others.  It's never a bad idea to remind users how you want your data
formatted if there are roadblocks that are not obvious on the surface. 

Thanks again...

K.

-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 11, 2007 6:58 AM
To: Keith Turner
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Hijack!

Keith Turner wrote:
> Someone scolding wrote:
> 
> Please don't hijack other threads, the original thread was 'TIMESTAMP
> difference'.

I think it was probably intended as a *gentle* scolding. We try to be as 
polite as possible on the PG lists. Particularly important given their 
international nature, of course.

> (don't answer to an arbitrary other mail and change the subject. Every
> mail contains references-header)

Yep - Thunderbird (for example) threads messages by this header. Your 
original question was hidden two layers down and I'd never have seen it 
if I hadn't been reading the one you replied to. That's the reason why 
people say not to do it - if you reply to an existing question many 
people will miss yours.

> I apologize; I had assumed that the threads were simply grouped by
> subject. If this is such a problem, probably it should be laid out on
> the list information page, otherwise how is anyone to know? 

It's one of those "common knowledge" things that are obvious to everyone 
who's done it once themselves. It's just part of the nature of how email 
works. Google around "mailing list etiquette" and you should see plenty 
of guidelines.

-- 
   Richard Huxton
   Archonet Ltd



[GENERAL] Hijack!

2007-12-11 Thread Keith Turner
Someone scolding wrote:

> Please don't hijack other threads, the original thread was 'TIMESTAMP
> difference'.
>
> (don't answer to an arbitrary other mail and change the subject. Every
> mail contains references-header)

I apologize; I had assumed that the threads were simply grouped by
subject. If this is such a problem, probably it should be laid out on
the list information page, otherwise how is anyone to know?

Putting this information here would be a good thing:
http://www.postgresql.org/community/lists/

Keith (not scolding, but Hijack is an accusative term)



[GENERAL] Restore problem

2007-12-10 Thread Keith Turner
Hi first post here, I hope you can help.

We are running 8.1 on Windows 2003 server and have had a server crash
over the weekend. A virus is suspected - we maintain an app server on
someone else's network, though we do have anti-virus running, the
symptoms were worrying - so we had to wipe and reinstall the OS and all
programs. We had recovered the 8.1 folder and the data off the crashed
computer prior to the wipe, but the last good .backup file is a few days
older than that.

Are there step by step instructions on restoring from the folder itself
instead of a backup file? Is it even possible?

Thanks,

Keith




[GENERAL] Bitemporal sequenced unique constraint (function/trigger)

2007-11-05 Thread Keith Carr
no
 AND ((A.vt_begin < B.vt_end AND B.vt_end < A.vt_end)
  OR (A.tt_start < B.tt_stop AND B.tt_stop < A.tt_stop))
 AND NOT EXISTS
   (SELECT customer_no
FROM Customers AS B2
WHERE B2.customer_no = B.customer_no
  AND ((B2.vt_begin <= B.vt_end AND B.vt_end < B2.vt_end)
   OR (B2.tt_start <= B.tt_stop AND B.tt_stop < B2.tt_stop)));

If run on the data below, it should pull out customer_no's '2' and '3',
but it does not seem to select any of the rows in which there are gaps in
Customers during the validity of Prop_Owner.

The data I used is as follows:
Customers:
 customer_no | customer_name |  vt_begin  |   vt_end   |  tt_start  |  tt_stop
-------------+---------------+------------+------------+------------+------------
           1 | keith         | 2006-01-01 | -12-31     | 2006-01-01 | 2006-12-31
           1 | keith         | 2006-01-01 | 2006-12-31 | 2006-12-31 | -12-31
           1 | keith         | 2006-12-31 | -12-31     | 2006-12-31 | 2007-12-31
           1 | keith         | 2006-12-31 | 2007-12-31 | 2007-12-31 | -12-31
           2 | simon         | 2004-01-01 | -12-31     | 2004-01-01 | 2004-12-01
           2 | simon         | 2004-01-01 | 2004-12-31 | 2004-12-01 | -12-31 <=
           2 | simon         | 2004-12-31 | -12-31     | 2004-12-15 | -12-31 <=
           3 | john          | 2000-01-01 | -12-31     | 2000-01-01 | 2001-01-01
           3 | john          | 2000-01-01 | 2001-01-01 | 2001-01-01 | -12-31 <=
           3 | john          | 2002-01-01 | -12-31     | 2002-01-01 | -12-31 <=

The arrows indicate where there are gaps.

Properties:
 prop_no | prop_name
---------+-----------
       1 | house
       2 | flat
       3 | penthouse

Prop_Owner:
 prop_no | customer_no |  vt_begin  |   vt_end   |  tt_start  |  tt_stop
---------+-------------+------------+------------+------------+------------
       1 |           1 | 2006-02-01 | -12-31     | 2006-02-01 | 2006-12-01
       2 |           2 | 2004-02-01 | -12-31     | 2004-01-01 | 2004-12-25
       3 |           3 | 2000-02-01 | -12-31     | 2003-01-01 | -12-31

The 2nd and 3rd lines should not have been able to be inserted with the trigger 
as they "bridge" records with gaps in the Customers table.

I hope this makes sense and that someone can explain why the last part of 
P_O_integrity is not selecting the records covered with gaps correctly.

Thanks for any help you can give.
Keith



Re: [GENERAL] Compiling libpqddll.lib win32 mingw msys

2006-09-02 Thread Keith Hutchison

Context postgres-8.1.3. win32

I found out that to get libpqddll built, it had to be compiled with
the VS compiler, using the following steps:
1. Download the MS Platform SDK
http://download.microsoft.com/download/a/5/f/a5f0d781-e201-4ab6-8c6a-9bb4efed1e1a/PSDK-x86.exe
2. Change pg_config.h line 11 to have a closing quote on a string
3. Set paths to include
C:\Program Files\Microsoft Platform SDK\Include and
C:\Program Files\Microsoft Platform SDK\Lib
4. change fe-connect.c to include a reference to ShFolder.h:
   #include <shfolder.h>
5. compile release files using nmake /f win32.mak and debug files
using nmake /f win32.mak DEBUG=1 from the src folder for
postgresql-8.1.3.

The binaries don't compile but the libraries and dlls do.

--
Keith Hutchison
http://balance-infosystems.com http://realopen.org



[GENERAL] Compiling libpqddll.lib win32 mingw msys

2006-09-01 Thread Keith Hutchison

What parameters have to be passed to configure or make to build the
import library files like libpqddll.lib on win32 using msys and mingw?

Thank in advance

--
Keith Hutchison
http://balance-infosystems.com http://realopen.org
http://www.kasamba.com/Keith-Hutchison



[GENERAL] pg_config --includedir is null

2006-08-30 Thread Keith Hutchison

G'day

After a successful make install of postgresql-8.1.3 with msys and mingw
pg_config --includedir is null ...

I want it to show the correct includedir to allow libpqxx to compile.
Any clues on how to recompile postgres or pg_config to show the
correct directories?

Thanks in advance

--
Keith Hutchison
http://balance-infosystems.com http://realopen.org
http://www.kasamba.com/Keith-Hutchison


Output of pg_config
$ /c/usr/src/postgresql-8.1.3/src/bin/pg_config/pg_config.exe
BINDIR = c:/usr/src/POSTGR~1.3/src/bin/PG_CON~1
DOCDIR =
INCLUDEDIR =
PKGINCLUDEDIR =
INCLUDEDIR-SERVER =
LIBDIR =
PKGLIBDIR =
LOCALEDIR =
MANDIR =
SHAREDIR =
SYSCONFDIR =
PGXS =
CONFIGURE =
CC = gcc
CPPFLAGS = -I./src/include/port/win32 -DEXEC_BACKEND
-I../../../src/include/port/win32
CFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing
CFLAGS_SL =
LDFLAGS = -Wl,--allow-multiple-definition
LDFLAGS_SL =
LIBS = -lpgport -lz -lwsock32 -lm  -lws2_32 -lshfolder
VERSION = PostgreSQL 8.1.3



[GENERAL] Java Eclipse

2006-05-27 Thread Keith Hutchison

Any views on the best way to access postgresql within java?

--
Keith Hutchison
http://balance-infosystems.com http://realopen.org
http://www.kasamba.com/Keith-Hutchison



Re: [GENERAL] clustering by partial indexes

2005-11-09 Thread Keith C. Perry
Quoting Tom Lane <[EMAIL PROTECTED]>:

> "Keith C. Perry" <[EMAIL PROTECTED]> writes:
> > This might have been discussed before but I wanted to know if clustering
> > tables by partial indexes will be available in a later release of pgSQL?
> 
> What in the world would it mean to do that?

I'm not sure I understand your question.  Right now you can cluster tables based
on an index, but when I tried to do that with a partial index I got an error.
That information was in my first email.  Perhaps a more basic question is: why
can't I cluster by a partial index?  Here is the information again, cut & pasted
from pgadminIII v1.4 (I omitted the table ddl previously).

CREATE TABLE report
(
  "match" int4 NOT NULL,
  "action" varchar(16),
  stamp timestamptz NOT NULL,
  account varchar(32),
  ipaddress inet,
  profile varchar(16),
  rating text,
  url text,
  CONSTRAINT report_pkey PRIMARY KEY ("match", stamp)
) 
WITHOUT OIDS;


CREATE INDEX hrs_idx
  ON report
  USING btree
  (stamp)
  WHERE thehour(stamp) >= 0::double precision AND thehour(stamp) <= 23::double
precision;


CREATE OR REPLACE FUNCTION thehour(timestamptz)
  RETURNS float8 AS
$BODY$
begin
  return extract(hour from $1);
end;
$BODY$
  LANGUAGE 'plpgsql' IMMUTABLE;

Now when I go into the database with psql...

Welcome to psql 8.1.0, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

iprism=# \h cluster
Command: CLUSTER
Description: cluster a table according to an index
Syntax:
CLUSTER indexname ON tablename
CLUSTER tablename
CLUSTER

iprism=# cluster hrs_idx on report;
ERROR:  cannot cluster on partial index "hrs_idx"
iprism=#  



-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com



[GENERAL] clustering by partial indexes

2005-11-08 Thread Keith C. Perry
This might have been discussed before but I wanted to know if clustering tables
by partial indexes will be available in a later release of pgSQL?

For the record, this is the error I get in 8.1:

iprism=# cluster hrs_idx on report;
ERROR:  cannot cluster on partial index "hrs_idx"

hrs_idx is defined as:

iprism=# \d hrs_idx
  Index "public.hrs_idx"
 Column |           Type
--------+--------------------------
 stamp  | timestamp with time zone
btree, for table "public.report", predicate (thehour(stamp) >= 0::double
precision AND thehour(stamp) <= 23::double precision)


-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being host by:
VCSN, Inc : http://vcsn.com



Re: [GENERAL] Why database is corrupted after re-booting

2005-10-27 Thread Keith C. Perry
Actually, because I lost several thousands of dollars of equipment a couple of
years ago, I recommended these "brickwall" products to a company.

http://brickwall.com/index.htm

We actually never deployed these units (grounding the communications lines ended
up being a much cheaper solution), but I did talk to an engineer at the company and
apparently they have some hospitals as clients that use the units.  I won't get
into the technology of how they work, since you can read that yourself, but I
remember having a warm and fuzzy feeling after my conversation.

I will pull one quote from their web site though...

"Unlike MOV’s, TRANS-ZORBS and similar shunt based surge protectors that use
elements weighing less than 1/4 ounce, Brick Wall surge protectors can easily
absorb any surge repeatedly with absolutely no degradation."

The important phrase here is "...absorb any surge repeatedly with absolutely no
degradation."

Quoting Tom Lane <[EMAIL PROTECTED]>:

> Alex Stapleton <[EMAIL PROTECTED]> writes:
> > suspicion is that if the power failure isn't a particularly fast one,  
> > (e.g. you overloaded a fuse somewhere, fuses are insanely slow to  
> > fail compared to alternatives like MCBs) then your RAID card's RAM  
> > will get corrupted as the voltage drops or the system memory will  
> > resulting in bad data getting copied to the RAID controller as RAM  
> > seems to be pretty sensitive to voltage variations in experiments  
> > i've done on my insanely tweak-able desktop at home. I would of  
> > though ECC probably helps, but it can only correct so much.
> 
> Any competently designed battery-backup scheme has no problem with this.
> 
> What can seriously fry your equipment is a spike (ie, too much voltage
> not too little).  Most UPS-type equipment includes surge suppression
> hardware that offers a pretty good defense against this, but if you get
> a lightning strike directly where the power comes into your building,
> you're going to be having a chat with your insurance agent.  There is
> nothing made that will withstand a point-blank strike.
> 
>   regards, tom lane
> 


-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com



Re: [GENERAL] Why database is corrupted after re-booting

2005-10-26 Thread Keith C. Perry
Just to add another story...

I've been running PostgreSQL on Linux since the 6.x days, and back then I was
almost always on IDE drives with an EXT2 filesystem.  To date, the worst class of
experiences I've had was going through the fs recovery steps for EXT2.  In
those cases I never lost data in the database even when I might have lost files.
 Once XFS became an in-kernel option for Linux, I moved almost all my servers to
that filesystem, whether they are IDE or SCSI.  In a recent experience where I
was forced to hard reset a server with XFS and IDE drives, the box came right
back up with no data loss.

There is only one case of a major "problem" I've had in the last 8 years or so,
and I posted to this list and with Tom's help I was able to get the box online.
 That wasn't a filesystem problem though.  It's off topic but (for those
interested) that thread, "Database Recovery Procedures", was from September 16,
2003.  It had to deal with padding out one of the pg_clog files in a 7.3.x
system.

Quoting "Welty, Richard" <[EMAIL PROTECTED]>:

> Wes Williams writes:
> >Even with a primary UPS on the *entire PostgreSQL server* does one still
> >need, or even still recommend, a battery-backed cache on the RAID
> controller
> >card?  [ref SCSI 320, of course]
> 
> >If so, I'd be interest in knowing briefly why.
> 
> it can be a lot faster.
> 
> if the raid controller knows it has a battery backup, then it'll be free
> to do whatever it sees fit in terms of write order.
> 
> some controllers (the ibm serveraid 4 units that i have a couple of, for
> example) won't do this unless they know the battery is there, they have no
> option for overriding that setting.
> 
> richard
> 


-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com



Re: [GENERAL] [INTERFACES] calculated identity field in views, again...

2005-05-04 Thread Keith Worthington
>> Zlatko Matic wrote:
>>
>>> I asked this question several weeks ago, but nobody proposed a
>>> solution, so I am repeating the same question again...
>>> I have an MS Access front-end for a database on PostgreSQL.
>>> I could use pass-through queries as record sources for reports and it
>>> works fine...
>>> Unfortunately, MS Access doesn't allow pass-through queries to be
>>> records sources for subforms.
>>> Therefore I tried to base subforms on regular JET queries on linked
>>> tables. It was too slow...
>>> Then I tried to base subforms on DAO recordset code generated from
>>> pass-through QueryDef objects. Although it worked, it was very
>>> unstable...
>>>
>>> Now it seems to me that PostgreSQL views are the best solution, but
>>> Access considers views as tables (!) and needs column with unique
>>> values.
>>> All those views are complicated queries on several tables, so I can't
>>> use any table's column as primary key. I need a calculated column in
>>> the view that Access will consider as primary key column.
>>> In regular tables, I use bigserial field, but how can I create
>>> calculated bigserial column in a view ?
>>>
>>> Thanks.
>>>
>>
>> Another option is to toss the MS Access altogether and program the
>> front end entirely in VB.  That is what we did.
>>
>> --
>> Kind Regards,
>> Keith
>>
> Zlatko Matic wrote:
> You mean VB.NET ?

Actually we wrote our interface using VB6.
--
Kind Regards,
Keith


Re: [GENERAL] [INTERFACES] calculated identity field in views, again...

2005-05-03 Thread Keith Worthington
Zlatko Matic wrote:
I asked this question several weeks ago, but nobody proposed a solution, 
so I am repeating the same question again...
I have an MS Access front-end for a database on PostgreSQL.
I could use pass-through queries as record sources for reports and it 
works fine...
Unfortunately, MS Access doesn't allow pass-through queries to be 
records sources for subforms.
Therefore I tried to base subforms on regular JET queries on linked 
tables. It was too slow...
Then I tried to base subforms on DAO recordset code generated from 
pass-through QueryDef objects. Although it worked, it was very unstable...

Now it seems to me that PostgreSQL views are the best solution, but 
Access considers views as tables (!) and needs column with unique values.
All those views are complicated queries on several tables, so I can't 
use any table's column as primary key. I need a calculated column in the 
view that Access will consider as primary key column.
In regular tables, I use bigserial field, but how can I create 
calculated bigserial column in a view ?

Thanks.

Another option is to toss the MS Access altogether and program the front 
end entirely in VB.  That is what we did.

--
Kind Regards,
Keith


Re: [GENERAL] PostgreSQL still for Linux only?

2005-03-09 Thread Keith C. Perry
Quoting "Joshua D. Drake" <[EMAIL PROTECTED]>:

> >
> >
> >The only additional thing I would add to this if it hasn't been mentioned
> >already is that 2000 had/has some major security issues and even though 2003
> is
> >more secure out of the box from what I've experienced so far, I would
> **never**
> >trust a windows box to anything other than my LAN using private IP blocks
> and if
> >it has inbound access via a public IP then it would more certainly be
> behind
> >another firewall that is NAT'ing/Port Forwarding its traffic.
> >  
> >
> Nobody should ever put a server regardless of OS on a public IP.
> It should always be firewalled/Nat/Port Forwarding.
> 
> Sincerely,
> 
> Joshua D. Drake
> 
> 
> 
> 
> -- 
> Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
> Postgresql support, programming shared hosting and dedicated hosting.
> +1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
> PostgreSQL Replicator -- production quality replication for PostgreSQL
> 
> 

As with all things technology, there is an art to it as well- several ways
to do things.  I don't, for instance, NAT/Port forward public interfaces for
Linux hosts, because in my experience they can be hardened without much ambiguity
to be placed there.  Similarly, I don't feel the same is true with most of the
windows variants, so for security's sake the increased network complexity is
justified.

My point is that along with the performance issues this thread has pointed out,
data security is another reason to consider a non-windows platform to run your
production database.


-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com



Re: [GENERAL] PostgreSQL still for Linux only?

2005-03-09 Thread Keith C. Perry
Quoting "Joshua D. Drake" <[EMAIL PROTECTED]>:

> Shelby Cain wrote:
> 
> >--- "Uwe C. Schroeder" <[EMAIL PROTECTED]> wrote:
> >
> >  
> >
> >>>The problem is, that it's a question of
> >>>  
> >>>
> >>perception. Most windows fans don't
> >>
> >>
> >>>see that "their" OS is pretty instable.
> >>>  
> >>>
> >
> >That may have been true in 1995.  However, in this day
> >and age most Windows fans don't see that their OS as
> >unstable because it isn't - unless of course you are
> >referring to the non-NT variations.
> >  
> >
> O.k. I don't want to start an OS war here. However
> there are a couple of things I know.
> 
> 1. As of Windows 2000, Windows is reasonably stable.
> However there is a caveat, it still can not perform
> under load (read slowness, possible crash) like Linux
> or other UNIX variants can.
> 
> 2. As of Windows 2003, Windows is very stable and
> performs fairly well under load. However it still
> can not keep up with Linux or other UNIX variants.
> 
> The majority of the problem with Windows in these
> days is people who hire other people with little
> pieces of paper that say they are knowledgeable.
> 
> A properly managed Windows server can be reliable,
> can perform reasonably well, if you have the expertise
> to do so. This is not that much unlike UNIX. The difference
> is that UNIX requires the expertise, Windows makes you
> feel like you have it when you don't.
> 
> Sincerely,
> 
> Joshua D. Drake
> 
> 
> 
> 
> 
> >Regards,
> >
> >Shelby Cain
> >
> >
> 
> 
> -- 
> Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
> Postgresql support, programming shared hosting and dedicated hosting.
> +1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
> PostgreSQL Replicator -- production quality replication for PostgreSQL
> 
> 

The only additional thing I would add to this, if it hasn't been mentioned
already, is that 2000 had/has some major security issues, and even though 2003 is
more secure out of the box from what I've experienced so far, I would **never**
trust a windows box to anything other than my LAN using private IP blocks, and if
it has inbound access via a public IP then it would most certainly be behind
another firewall that is NAT'ing/Port Forwarding its traffic.

-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com



[GENERAL] Replication from other SQL Server

2005-03-01 Thread Keith Tsao
Hi,

I am new to postgresql!
We have a M$SQL server and would like to do a replication from this
server to postgresql.
Would this be possible? 

If so, what would be the appropriate method?
Any suggestion?

Thx!



Re: [GENERAL] Lost rows/data corruption?

2005-02-25 Thread Keith C. Perry
Quoting Andrew Hall <[EMAIL PROTECTED]>:

> > Do you happen to have the same type disks in all these systems? That could
> 
> > point to a disk cache "problem" (f.e. the disks lying about having written
> 
> > data from the cache to disk).
> >
> > Or do you use the same disk parameters on all these machines? Have you 
> > tried using the disks w/o write caching and/or in synchronous mode 
> > (contrary to "async").
> 
> It's all pretty common stuff, quite a few customers use standard IDE 
> (various flavours of controller/disk), some now use SATA (again various 
> brands) and the rest use SCSI. The kernel we use is the standard Linus 
> approved kernel with the inbuilt drivers as part of the kernel. We don't 
> supply any non-default parameters to the disk controllers.
> 
> Thanks for your suggestion on write caching, I'll look into this, I'm also 
> tempted to try a different journalling FS too. 
> 
> 
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
> 

I'm a little late on this thread, but in regards to the SATA support: 2.4.29, in
my experience, is really the first kernel that has decent SATA support (i.e. much
better data throughput).  I think that would correspond to 2.6.9 or .10, but
even before you get into all that, I am curious to know what you mean by
"standard Linus kernel".  Do you not compile your own kernels for the hardware
platform being used?

-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com



Re: [GENERAL] PostgreSQL users on webhosting

2005-01-09 Thread Keith C. Perry


-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com



Re: [GENERAL] What HW / OS is recommeded

2004-12-16 Thread Keith C. Perry
Quoting Scott Marlowe <[EMAIL PROTECTED]>:

> On Thu, 2004-12-16 at 06:39, Michael Ben-Nes wrote:
> > I think and please correct me that Postgres loves RAM, the more the
> better.
> > 
> > Any way RAID5 is awful with writing, go with  RAID1 ( mirroring )
> 
> With battery backed cache and a large array, RAID 5 is quite fast, even
> with writes.  Plus with a lot of drives in a mostly read environment,
> it's quite likely that each read will hit a different drive so that many
> parallel requests can be handled quite well.  The general rule I use is
> 6 or fewer drives will do better in RAID 1+0, 7 or more will tend to do
> better with RAID 5.
> 
> > Perl is very slow, maybe you can use PHP ?
> 
> While mod_perl and its relations have never been fast running under
> apache in comparison to PHP, it's no slouch, paying mostly in startup
> time, not run time.  For complex apps, the startup time difference
> becomes noise compared to the run time, so it's no big advantage to
> PHP.  I really like PHP by the way.  But Perl is pretty nice too. 

I run apache2, ssl, mod_perl and php.  I have yet to hear complaints from my
perl or php programmers.  Without starting another PHP vs. Perl "thing", let's
all agree that they are both pretty nice  :)

> Run the Unix OS you're most comfortable with, knowing that PostgreSQL
> gets lots of testing on the free unixes more so than on the commercial
> ones.  Give it a machine with plenty of RAM and a fast I/O subsystem,
> and two CPUS and you'll get good performance.  If your needs exceed the
> performance of one of these machines, you're probably better off going
> to a pgpool / slony cluster than trying to build a bigger machine.

I'm not sure I heard any mention of filesystems, but I've been moving all my EXT3
filesystems to XFS.  Some other journaling filesystems that you might want to
look into are JFS and ReiserFS.

-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com



Re: [GENERAL] Help with query: indexes on timestamps

2004-07-07 Thread Keith C. Perry
Quoting Richard Huxton <[EMAIL PROTECTED]>:

> Keith C. Perry wrote:
> > 
> > I have a table with with 1 million records in it.  Here is the definition
> > 
> > CREATE TABLE report
> > (
> >   match int4,
> >   action varchar(16),
> >   stamp timestamptz,
> >   account varchar(32),
> >   ipaddress inet,
> >   profile varchar(16),
> >   rating text,
> >   url text
> > ) 
> > WITHOUT OIDS;
> > 
> > The is one index:
> > 
> > CREATE INDEX stamp_idx
> >   ON report
> >   USING btree
> >   (stamp);
> > 
> > That query I'm running is:
> > 
> >  SELECT date_part('hour'::text, report.stamp) AS "hour", count(*) AS count
> >FROM report
> >   GROUP BY date_part('hour'::text, report.stamp)
> >   ORDER BY date_part('hour'::text, report.stamp);
> 
> You will always get a sequential scan with this query - there is no 
> other way to count the rows.
> 
> With PostgreSQL being MVCC based, you can't know whether a row is 
> visible to you without checking it - visiting the index won't help. Even 
> if it could, you'd still have to visit every row in the index.
> 
> Assuming the table is a log, with always increasing timestamps, I'd 
> create a summary table and query that.

Yea, actually it's a proxy server log; each month of the database is 500k records.  I
have two months loaded only to put some stress on the server.  Every month
I'm loading the data just so I can do some analysis.  The optimization question
came up when one of the other database folks wanted to play with the database in
MS-SQL server.

How can I add a column that represents a function that returns just the
date_part?  I'm wondering if that will increase the speed of the query in similar
fashion as it did on MS-SQL.

I hadn't thought about the MVCC vs. file locking issue.  The MS-SQL server does
not have any load on it, and I'm sure that if other users were hitting the same
table with the same query, PG would perform better.

-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com



[GENERAL] Help with query: indexes on timestamps

2004-07-07 Thread Keith C. Perry
Ok, I've tried a number of things here and I know I'm missing something but at
this point my head is spinning (i.e. lack of sleep, too much coffee, etc...)

My environment is PG 7.4.3 on Linux with 512Mb of ram and swap.  This was just
upgraded from 7.4 (just to make sure I'm current).  Some of my settings in
postgresql are giving fatal errors but I don't think my issue is related to my
query problems.  I also have a laptop running with the same basic specs (no
raid, slower processor).

I use a recent pgadmin-III as my client.

We're also running this query in MS-SQL.

I have a table with with 1 million records in it.  Here is the definition

CREATE TABLE report
(
  match int4,
  action varchar(16),
  stamp timestamptz,
  account varchar(32),
  ipaddress inet,
  profile varchar(16),
  rating text,
  url text
) 
WITHOUT OIDS;

The is one index:

CREATE INDEX stamp_idx
  ON report
  USING btree
  (stamp);

That query I'm running is:

 SELECT date_part('hour'::text, report.stamp) AS "hour", count(*) AS count
   FROM report
  GROUP BY date_part('hour'::text, report.stamp)
  ORDER BY date_part('hour'::text, report.stamp);



Here is the plan I get:

 QUERY PLAN
------------------------------------------------------------------------
 Sort  (cost=47420.64..47421.14 rows=200 width=8)
   Sort Key: date_part('hour'::text, stamp)
   ->  HashAggregate  (cost=47412.00..47413.00 rows=200 width=8)
 ->  Seq Scan on report  (cost=0.00..42412.00 rows=100 width=8)
(4 rows)


Now, from what I understand, the index I created would not be used, since I
would be looking at every row to do the date part.  The query under 7.4 ran in
about 8 seconds.  In 7.4.3, it's taking 37 seconds for the same plan (which is
fine given the system is not tuned yet).  On my laptop it's taking 6 seconds.
MS-SQL is taking 8 seconds.  These runs are after I do vacuum full, vacuum
analyse and reindex on the database and table respectively.

My question: how can I get this query to use an index built on the date_part
function?  On the MS-SQL side, creating a computed column with the date part and
then an index on that column brings the query down to 2 seconds.

I tried creating this function:

CREATE OR REPLACE FUNCTION whathour(timestamptz)
  RETURNS int4 AS
'begin
  return date_part(\'hour\',$1);
end;'
  LANGUAGE 'plpgsql' IMMUTABLE;

and then an index:

CREATE INDEX hour_idx
  ON report
  USING btree
  (stamp)
  WHERE whathour(stamp) >= 0 AND whathour(stamp) <= 23;

but I get the same plan- which makes sense to me because I'm again inspecting
quite a few rows.  I'm sure I'm missing something...

I couldn't see from the docs how to make a column equal a function (like
MS-SQL's computed column), but to me it seems like I should not have to do
something like that, since it really is wasting space in the table.  I'm hoping a
partial index or a function index will solve this and be just as efficient.
However, that method **does** work.  Is there a better way?
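For comparison, a plain expression index on the immutable wrapper can be
declared directly (a sketch, analogous to the partial index above; whether the
planner uses it still depends on the query, and a GROUP BY that must visit
every row will normally stay a sequential scan):

CREATE INDEX hour_expr_idx
  ON report
  USING btree
  (whathour(stamp));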

Thanks to all in advance.

-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com



Re: [GENERAL] upgrade postgres 7.1 to 7.4.2

2004-04-20 Thread Keith C. Perry
Quoting csysi <[EMAIL PROTECTED]>:

> Hello, i upgrade postgres version 7.1 to version 7.4.2 and it tell me that
> the databases are incompatible.
> 
> Can i convert the databases ?
> 
> 
> Thank´s for the help
> 
> 
> Jorge
> 
> 
> 

If you could post the exact error that you are getting and how you are
proceeding, that would be helpful.

However, I've upgraded 7.1.3 to 7.4 and I had lots of problems until I used the
pg_dumpall from 7.4.  In my case, I had two boxes, so I ran the pg_dumpall from
the 7.4 box against the 7.1.3 box.

Basically, you are dumping over the network.  It's very straightforward.  All
you have to do is set up the appropriate parameters in the pg_hba.conf file of
your 7.1 box and follow the directions in the man page of pg_dumpall (see the
"-h" parameter).

If you are not on a network, you could install the 7.4 version in a different
directory and configure it to run on a different port.  You still dump via IP this
way.  You can run both systems head to head that way and see how things go.
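For illustration, the over-the-network dump and restore might look like this (a
sketch; "oldbox" stands in for the 7.1 host):

pg_dumpall -h oldbox > db.out
psql -f db.out template1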

Good luck-

-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com



Re: [GENERAL] Postgres clustering?

2004-02-27 Thread Keith Bottner
Thanks Andrew, I will do some digging on the Postgres-R front to see what
their focus is.

Keith

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Andrew Sullivan
Sent: Thursday, February 26, 2004 4:36 PM
To: [EMAIL PROTECTED]
Subject: Re: [GENERAL] Postgres clustering?


On Thu, Feb 26, 2004 at 03:22:02PM -0600, Keith Bottner wrote:
> I am very interested in a similar solution and believe that I could 
> get some resources from my company for implementing such a system. Are 
> you interested in helping to develop the functionality for Postgres?
> 
> Is anybody else in the Postgres world even interested in pursuing this 
> functionality set in more than lip service? Are there any currently 
> active Postgres projects out there trying to achieve this?

If you want this, then go and support the folks working on the Postgres-R
project.  They have a design for this sort of thing there, although it does
some remarkably tricky things that not everyone thinks will work.  (I happen
to be among the somewhat optimistic on this front, but I haven't been able
to bring any money to the
project.)

A

-- 
Andrew Sullivan  





Re: [GENERAL] How should I get started?

2004-01-17 Thread Keith C. Perry
Quoting Leif K-Brooks <[EMAIL PROTECTED]>:

> I've been programming web applications in PHP/MySQL for a few years. 
> It's worked pretty well, but I've been getting annoyed with the lack of 
> more advanced features lately. After some reading, I've decided to 
> switch to Perl/PostgreSQL.
> 
> I'll be discarding all database data for other reason anyway, so moving 
> data isn't an issue. I just want to learn how to best use some of 
> PostgreSQL's cool features. Is there any documentation about that? I've 
> looked at the official docs, but they're very dry. Any suggestions?
> 
> 

Well, from me: "glad to hear it".  My environment is Linux, Apache, mod_perl (or
perl for simple things) and PG.  I've been using this environment in one form or
another since 1997.  When you say "cool features" I'm not sure what you've
looked at, but if you could tell us what you've read that you like, I'm sure
someone will respond if you'd like to know more about specific items.

I will say this though: the "cool"-ness of PG (or "slick"-ness as I like to call
it) really comes in the integration of PG into your environment.  I'm at the point
now where I can't build an application without using PG for the data storage,
manipulation or mining.  PG coupled with perl has really become an invaluable
tool for me.  Hopefully you'll find the same.

In regards to the docs, they are really meant to be more of a reference.  Plus
you'll have to get used to how the community here presents information.  If
you haven't already, I would encourage you to check out the
http://techdocs.postgresql.org site to read some more hands-on and in-depth
information.  There is also http://gborg.postgresql.org where you can
download some of the other software tools, interfaces, etc. that you may need
to develop and deploy your applications.

-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com



Re: [GENERAL] Postgress and MYSQL

2004-01-15 Thread Keith C. Perry
Quoting "Joshua D. Drake" <[EMAIL PROTECTED]>:

> 
> >You're kidding me...
> >
> >  
> >
> No I am not... You missed the point, there is no official Linux website. 
> Linux.org is ran by one guy,
> linux.com is run by VA (I actually used to have [EMAIL PROTECTED]), I own 
> Linuxdoc.org and Linuxdoc.com...
> The closest you get to a "linux" website is kernel.org.
> Sincerely,
> 
> Joshua D. Drake

Then the word "official" is ambiguous and perhaps not even applicable.  If a
site is NOT run by Linus, is it not official?  Due to how Linux is structured, it
is hard to say what is "official" (if you believe that concept applies).  That is
one of the things MS exploits about Linux and OSS in general.

> >linux.org
> >linux.com   (e, way to cluttered these days)
> >linuxhq.org
> >linuxhq.com (looks like their undergoing a facelift)
> >
> >
> >I seem to remember someone saying that PG is supposed to be leaning towards
> >being more like a "kernel" so I would like one day the site might become
> >conceptually like linux.org in the sense that they point you toward the
> >various distributions of Linux.
> >
> >Comments?
> >
> >  
> >
> 
> 
> -- 
> Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
> Postgresql support, programming shared hosting and dedicated hosting.
> +1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
> PostgreSQL Replicator -- production quality replication for PostgreSQL
> 


-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com



Re: [GENERAL] Best practice? Web application: single PostgreSQL

2004-01-13 Thread Keith Murphy
scott.marlowe wrote:

On Tue, 13 Jan 2004, Keith G. Murphy wrote:


I'm trying to get a feel for what most people are doing or consider best 
practice.

Given a mod_perl application talking to a PostgreSQL database on the 
same host, where different users are logging onto the web server using 
LDAP for authentication, do most people

1) have the web server connecting to the database using its own user 
account (possibly through ident), and controlling access to different 
database entities strictly through the application itself

2) have the web server connecting to the database actually using the 
user's account (possibly using LDAP authentication against PostgreSQL), 
and controlling access to different database entities through GRANT, etc.

Obviously, (2) leads to more database connections, and you still have to 
have the application do some work in terms of which forms are available 
to which users, etc.  But I'm a little worried about whether it's best 
security practice.


I do 1.  different language (PHP) same basic thing though.  All security 
is handled by ACLS I build myself in Postgresql and interrogate via my own 
application.

Thanks to all for the discussion so far.  I have been doing option 1 so 
far as well, but was a bit uncomfortable allowing the web server account 
to have complete control over the database, and wondered what the 
alternatives might be.

At some point, I may try rolling my own PAM module (as Tom Lane 
suggested) that uses the user's browser-authenticated username and 
password to map to a PostgreSQL username that constitutes a "role" 
(assuming that's possible).  The benefit I can see to such a scheme is 
that it means anyone who might manage to run a process under the web 
server's account would still have to have the appropriate username and 
password to do anything to the database.  And the limited number of 
"role" PostgreSQL usernames would mean fewer connections than using the 
original usernames, as John Sidney-Woollett pointed out.





Re: [GENERAL] Best practice? Web application: single PostgreSQL

2004-01-13 Thread Keith G. Murphy
Tom Lane wrote:

"Keith G. Murphy" <[EMAIL PROTECTED]> writes:

Hmmm, mightn't it be kind of nice if there were PAM or krb5 maps in 
addition to ident maps?


ISTM the whole point of PAM is that you plug in your desired security
policy outside of the application.  You shouldn't be asking for more
security frammishes from Postgres, you should be off coding a PAM module
that does things exactly the way you want.
I believe I see what you mean.  Given the original premise, I imagine 
you could have the PAM module do something like:

(1) Authenticate via LDAP using the user's username and password

(2) Look up the "role" name (real PostgreSQL username) via LDAP, using 
the username

(3) Tell PostgreSQL that the user is authenticated under that role name.

I really hadn't thought much about how the PAM module might work.
--
Why waste time learning when ignorance is instantaneous?
-- Hobbes


Re: [GENERAL] Best practice? Web application: single PostgreSQL

2004-01-13 Thread Keith G. Murphy
John Sidney-Woollett wrote:

Keith G. Murphy said:

That sounds like an excellent compromise.  How do you typically handle
the mechanics of authentication from web server to PostgreSQL on the
connect, using this scheme?


Sorry but I can't help you out here, I'm too much of a newbie with
Postgres - I was hoping that someone else would answer your part 1! :)
John

Perhaps I can answer my own question.  I could use ident and a map that 
lists the web server username as able to map to the different "role" 
usernames.  Unfortunately, that still would allow the web server account 
to "fake" role names.

If the "real" PostgreSQL accounts do not coincide to the 
browser-authenticated usernames, I don't see a good way to use PAM/LDAP 
or another mechanism to require that PostgreSQL itself makes sure that 
the given username and password are valid.  Not saying that's a big 
problem, but...

Hmmm, mightn't it be kind of nice if there were PAM or krb5 maps in 
addition to ident maps?
--
Why waste time learning when ignorance is instantaneous?
-- Hobbes



Re: [GENERAL] Best practice? Web application: single PostgreSQL

2004-01-13 Thread Keith G. Murphy
John Sidney-Woollett wrote:

Keith G. Murphy said:

2) have the web server connecting to the database actually using the
user's account (possibly using LDAP authentication against PostgreSQL),
and controlling access to different database entities through GRANT, etc.


My experience with java web/app servers indicates that for most setups
using a pool of connections is preferable to using a single connection per
connected user - it scales much better.
What you could consider is one or more pools which map to the "roles" that
your (web) app supports. For example, if a user needs "minimal rights"
access to db resources, then your cgi (request handler) accesses the data
using a connection from the "minimal rights" connection pool. A user
needing "greater rights" would have the cgi access the database from the
"greater rights" pool.
That sounds like an excellent compromise.  How do you typically handle 
the mechanics of authentication from web server to PostgreSQL on the 
connect, using this scheme?
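
On the database side, a sketch of what those per-role pools might map to 
(role and table names are hypothetical):

create user pool_minimal with password 'secret1';
create user pool_greater with password 'secret2';
grant select on bookings to pool_minimal;
grant select, insert, update on bookings to pool_greater;

Each pool then opens all of its connections as the matching user, and GRANT 
enforces the rest.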
--
Why waste time learning when ignorance is instantaneous?
-- Hobbes

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] TPC-C and Postgres

2004-01-04 Thread Keith C. Perry
Quoting Mark Kirkwood <[EMAIL PROTECTED]>:

> 
> 
> 
> >On page 7 of this document:
> >
> >"Earlier versions of Postgres were often unstable and quite often there
> >were even data losses. The Postgres developer team has invested a great
> >deal of time in regression tests, which guarantee high stability and
> >data security. Releases now only occur after longer Beta phases. The new
> >versions of Postgres have become considerably more effective (see box
> >Benchmarks).
> >
> >There are not known to be any serious stability problems with MySQL.
> >With the new feature of replicability, the system has taken another
> >major step in the direction of increased failure safety. Especially in
> >conjunction with PHP, MySQL achieves good performance values in web
> >applications (see box Benchmarks). A paper by the founder of MySQL,
> >Michael Widenius, on performance optimisation is recommended in this
> >respect."
> >
> >
> >H...
> >
> >  
> >
> 
> Sounds like he has Mysql And Pg confused... :-)

Yeah, that's what I thought; 2001 was not that long ago...

> e.g :
> 
> Recently a colleague of mine, who is fond of Mysql, initiated a friendly 
> challenge to the effect "Mysql can do big queries just as well as Pg".
> 
> I suggested he try my Data Warehouse benchmark 
> (http://sourceforge.net/projects/benchw)
> with size 10 (about 10G or so).

Nice, real nice.  I should probably get that and run some tests against MSSQL.
I'm working on developing my own test dataset for some environment tests in the
social services world, but your software would be useful for some "raw" tests.

> I went away, produced my results (using Pg 7.4) and mailed them to 
> him... however at his end he cannot get Mysql to load the data (it 
> kills his connection... that's nice). He is not impressed at all, and is 
> considering using Pg!
> 
> (he was using Mysql 4.0.15)

Once again, the proof is in the pudding!

> regards
> 
> Mark
> 
> 
> 
> 
> 
> ---(end of broadcast)---
> TIP 7: don't forget to increase your free space map settings
> 


-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] 'like' refuses to use an index???

2003-12-30 Thread Keith C. Perry
Quoting Tom Lane <[EMAIL PROTECTED]>:

> Mike Nolan <[EMAIL PROTECTED]> writes:
> > However, I think RH always sets the LANG environmental variable, so 
> > that's going to be picked up by initdb, which means that the C locale 
> > will NOT be used unless specifically asked for.  Other OS packages may
> > also force the choice of a default LANG value. 
> 
> Yeah.  There have been some discussions on pgsql-hackers about
> defaulting to C locale instead of honoring LANG, but we haven't done
> anything.
> 
> > Dumb question of the hour:  How does one find out what locale a
> > DB is initialized in?
> 
> In 7.4, you can just "show lc_collate".  In prior versions you need
> to use pg_controldata to see what's stored in pg_control.

Does the pg_controldata program work in 7.4?  If not, shouldn't it be removed
from the docs in favor of the show method of finding out the locale?

> BTW, 7.4 also has a specialized index opclass that can be used to create
> LIKE-compatible indexes even if you are using a non-C locale.
> 
>   regards, tom lane
> 
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
> 
>http://www.postgresql.org/docs/faqs/FAQ.html
> 
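
For reference, the 7.4 opclass Tom mentions is applied at index creation
time; a minimal sketch against a hypothetical table:

create index people_lastname_like_idx
    on people (lastname varchar_pattern_ops);

-- prefix LIKE can now use the index even in a non-C locale:
explain select * from people where lastname like 'Perr%';

(text_pattern_ops and bpchar_pattern_ops are the analogues for text and
char(n) columns.)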


-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] 'like' refuses to use an index???

2003-12-30 Thread Keith C. Perry
Quoting Mike Nolan <[EMAIL PROTECTED]>:

> > You didn't initdb in C locale ...
> 
> The documentation section on localization could use some enhancements 
> and maybe some more examples.  
> 
> The 'c' locale isn't very well defined in the docs, except to say that it 
> is 'special' and is the default if no other locale is defined.  That
> section doesn't mention that you need that locale to get 'like' to 
> use an index, for example.  
> 
> However, I think RH always sets the LANG environmental variable, so 
> that's going to be picked up by initdb, which means that the C locale 
> will NOT be used unless specifically asked for.  Other OS packages may
> also force the choice of a default LANG value. 
> 
> Dumb question of the hour:  How does one find out what locale a
> DB is initialized in?
> --
> Mike Nolan
> 
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster
> 

According to the docs it's the pg_controldata utility, but when I run it I get
an error:

"could not open file "-D/global/pg_control" for reading: No such file or directory"

I wonder if that is because I didn't use a locale when I initialized the
database.  (My "locale -a" returns "C".)
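
The failing path in that error points at the invocation rather than the
locale: pg_controldata takes the data directory as a plain argument (or
falls back to $PGDATA), not a -D flag, so "-D/global" was taken as the
directory name.  A sketch, with a hypothetical data directory; the output
should end with the locale fields:

$ pg_controldata /global
...
LC_COLLATE:                           C
LC_CTYPE:                             C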

-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] 'like' refuses to use an index???

2003-12-30 Thread Keith C. Perry
Quoting Dima Tkach <[EMAIL PROTECTED]>:

> Nah...
> This is a different story - for the index to be useful, the *beginning* 
> of your search string must be known.
> So "like '00423%'" and "~ '^00423'" should both work, but "like '%423'" 
> and "~ '00423'" both won't - it's like searching a telephone book for 
> somebody whose last name ends with "erry" (as opposed to begins with 
> "Perr").
> 
> Dima
>  
> 
> Keith C. Perry wrote:
> 
> >I wanted to know this too because I notice that using like with wildcards
> >appears to be similar to a regular expression in that the index is not used.
> 
> >This is what I have...
> >
> >ethernet=# select version();
> >   version
> >--
> > PostgreSQL 7.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.2
> >(1 row)
> >
> >ethernet=# \d vendors
> >   Table "public.vendors"
> > Column  | Type  | Modifiers
> >-+---+---
> > header  | character(6)  |
> > company | character varying(80) |
> >Indexes:
> >"vender_id_idx" btree (header)
> >
> >
> >ethernet=# explain select * from vendors where header like '000423';
> >                                  QUERY PLAN
> >-----------------------------------------------------------------------------
> > Index Scan using vender_id_idx on vendors  (cost=0.00..113.26 rows=36 width=68)
> >   Index Cond: (header = '000423'::bpchar)
> >   Filter: (header ~~ '000423'::text)
> >(3 rows)
> >
> >
> >Ok, that made sense-
> >
> >ethernet=# explain select * from vendors where header like '%000423%';
> >QUERY PLAN
> >--
> > Seq Scan on vendors  (cost=0.00..151.15 rows=3 width=68)
> >   Filter: (header ~~ '%000423%'::text)
> >(2 rows)
> >
> >This didn't make sense until I did...
> >
> >ethernet=# explain select * from vendors where header like '0004%';
> >                                  QUERY PLAN
> >-----------------------------------------------------------------------------
> > Index Scan using vender_id_idx on vendors  (cost=0.00..113.26 rows=36 width=68)
> >   Index Cond: ((header >= '0004'::bpchar) AND (header < '0005'::bpchar))
> >   Filter: (header ~~ '0004%'::text)
> >(3 rows)
> >
> >which again made sense because of the header's size but both-
> >
> >ethernet=# explain select * from vendors where header ~* '0004';
> >QUERY PLAN
> >---
> > Seq Scan on vendors  (cost=0.00..151.15 rows=58 width=68)
> >   Filter: (header ~* '0004'::text)
> >(2 rows)
> >
> >ethernet=# explain select * from vendors where header ~* '000423';
> >QUERY PLAN
> >--
> > Seq Scan on vendors  (cost=0.00..151.15 rows=3 width=68)
> >   Filter: (header ~* '000423'::text)
> >(2 rows)
> >
> >are sequentially scanned, which means that regexes do not use indexes.
> >Is that right also?
> >
> >  
> >
> 
> 

Ahhh, so it is!!  So let me ask you this: in order to build an index that could
handle something like "lastname like '%erry'", would you need that full text
search module in contrib (tsearch?), or could you do it with an index on a
function?
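
An index on a function can in fact handle that, without tsearch: index the
reversed string, so the suffix match becomes a prefix match.  A minimal
sketch, assuming a hypothetical people(lastname) table; 7.4 has no built-in
reverse(), so we define an immutable one first:

create or replace function reverse_str(text) returns text as '
declare
    result text := '''';
    i integer;
begin
    for i in 1 .. length($1) loop
        -- prepend each character, building the reversed string
        result := substr($1, i, 1) || result;
    end loop;
    return result;
end;
' language plpgsql immutable;

create index people_lastname_rev_idx on people (reverse_str(lastname));

-- "lastname like '%erry'" becomes a prefix search on the reversal:
select * from people where reverse_str(lastname) like 'yrre%';

The usual locale caveat still applies to the prefix LIKE on the expression
(C locale, or one of the pattern_ops opclasses).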

-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Is my MySQL Gaining ?

2003-12-29 Thread Keith C. Perry
Quoting "Joshua D. Drake" <[EMAIL PROTECTED]>:

> 
> >This concerns me.  This is the second time recently someone has said
> >something is NOT documented and it turns out it is.
> >
> >So my question is (no offense to anyone): are the web sites not "clear"
> >enough to find information quickly, or are people just being lax/lazy when
> >they are searching?
> >
> >  
> >
> Well, at anything greater than 1024x768 the "docs" link on the main site
> is near invisible. The font size is fine, but combined with the color scheme
> and location, it can be hard to spot... Mainly, I think because the page
> is so busy.

Agreed; I was hoping someone else would say that.

> If you look at the front page, the first thing you see is News, which is
> fine, but IMHO the first thing should be the nav bar. It does come before
> News, but News is in big, bold print.
> 
> Also, searching the PostgreSQL docs is a useless venture. I just typed in
> "trigger" and hit search; 20 seconds later I am still waiting.

I mentioned that earlier in this thread.

> Why don't we just add Google search to the page?
> 
> Sincerely,
> 
> Joshua D. Drake

That, and it is possible to propose a new layout, something somewhat
consistent across the major sites (www, gborg, techdocs, advocacy).  And yes,
I'd be willing to do some work on that.

> 
> 
> 
> 
> -- 
> Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC - S/JDBC
> Postgresql support, programming, shared hosting and dedicated hosting.
> +1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com
> 
> 


-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Is my MySQL Gaining ?

2003-12-29 Thread Keith C. Perry
Quoting Shridhar Daithankar <[EMAIL PROTECTED]>:

> On Monday 29 December 2003 12:47, Tom Lane wrote:
> > Shridhar Daithankar <[EMAIL PROTECTED]> writes:
> > > That is right. But the fact remains that postgresql documentation is
> > > just sufficient. If you read the manual and follow it religiously to
> > > comma and fullstop, it tells you everything. But it certainly isn't a
> > > place where you can glance over it and get the hang of it.
> >
> > This is surely true, and I've not seen anyone denying it.  The people
> 
> Well, for newbies to postgresql, let's state this fact upfront and not make 
> them discover it..:-)
> 
> > who are doing development are, um, not strong at documentation (I
> > include myself here).  What we need are some folks to step up and
> > improve the documentation --- and then maintain it in the face of future
> > changes.  Any volunteers out there?  This is an open-source project
> > after all, and that means "scratch your own itch" among other things...
> 
> If you ask me, let's not do that. Not at least on a grand scale. Isolated 
> areas are OK on case by case basis..
> 
> I regularly use the development-build documentation from
> developers.postgresql.org, and I have seen the documentation in the
> source code. In my view, postgresql developers do document things very
> clearly whenever required.
> 
> If we dilute the documentation too much, that will make things simpler
> initially, but it will simply create a maintenance nightmare, as one has
> to maintain a much larger amount of documentation.
> 
> And once you get used to the precise style of postgresql documentation,
> going back to anything else is a pain. (MSDN... I scream at night, but I
> digress.)
> 
> IMO documentation of postgresql is fine overall. What we need to do is.
> 
> 1. State upfront that this is not handholding. 
> 
> It will make lots of things easier and offload work of expanding documents 
> given limited human resources working on the project. A disclaimer is far 
> easier to maintain than a manual..:-)
> 
> And it will prepare anybody for upcoming hardships..:-)
> 
> 2. Document and reuse it.
> 
> Personally, I would like to see responses on general and other such lists
> as URLs. If we answer something repeatedly, let's document it and point
> people to it. Let them dig around 3-4 URLs and they will have islands of
> enlightenment. Over time, these islands will merge into a great
> landscape..:-)
> 
> Just a thought..
> 
>  Shridhar
> 
> P.S. If somebody thinks I cannot imagine how a newbie feels, I will agree. 
> But looking back, dumbing down anything is not good in the long term.. an 
> experience that is.
> 
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
> 

Shridhar,

I tend to agree with you.  I personally think the docs are very good and have
the technical depth warranted for a product like PostgreSQL.  On the other
hand, for the ad & m (advocacy and marketing) side of things, I'm betting some
clearly labelled tutorials/guides, next to a disclaimer that the main docs are
more of a reference, would appease those who might be a bit green for a product
of PG's breadth and depth (heck, I still think I'm in that category sometimes).

'bout two weeks ago there was another thread where certification/training et al
were discussed, and one of the things I mentioned was that, in that regard, we
should probably have more tutorials/guides based on real-world scenarios
available on techdocs.  Although I don't think I'm qualified to write for the
main docs, I can definitely contribute to techdocs in the manner I just
mentioned.

Matter of fact, I finally finished my first one, "Using PostgreSQL for Domino 6
RDBMS Backends".  I'm doing the final read now, so hopefully I can get it over
to Robert for posting.

Perhaps the "newer" folks on the list could tell us what type of guides they
want to see.  I'm sure someone has a wish list somewhere.

-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Is my MySQL Gaining ?

2003-12-28 Thread Keith C. Perry
Quoting Gaetano Mendola <[EMAIL PROTECTED]>:

> Chris Travers wrote:
> > Regarding the importance of PostgreSQL on Windows.
> > 
> > For example, I am developing a hotel reservation management application
> > using Python and PostgreSQL (http://sourceforge.net/projects/openres).
> > This will only run on Linux and UNIX, so in order to get this to run on
> > Windows, I need to use either MySQL or Firebird.  Or I can require
> > Cygwin.  But that is a bit over the top IMO, for a small hotel or B&B to
> > consider, especially because I want to run it if possible on existing
> > equipment to keep implementation costs down.
> 
> Who cares about where the GUI must run?

Chris and his client-

> Can you please explain to me why the GUI must be on the same DB server?
> After all, it is better to have the user's hands far away from the data.

If it's a small hotel or B&B, I would think an additional workstation might be
cost-prohibitive.  Then again, that might simply be the way they want it.

> 
> Regards
> Gaetano Mendola
> 
> 
> 
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
> 
>http://archives.postgresql.org
> 


-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] Is my MySQL Gaining ?

2003-12-28 Thread Keith C. Perry
Quoting Tony <[EMAIL PROTECTED]>:

> Sadly, a company will believe anything that a consultant they trust tells 
> them.  Otherwise there'd be little point in hiring a consultant to give 
> them advice, would there?

There are different levels of trust, and in addition, consultants are sometimes
used for feasibility studies: "how would you do this?"  If you're telling me
you've never been in a situation where a client called you in because they want
to implement a project with certain products or other specifications because
they have "done the research and want to proceed this way", then I'm very glad
to hear that.  No matter how much you are trusted as a consultant or technical
advisor, you are still just a guide.  That means it is possible for your client
to "wander off the path".  I remember the not-so-long-ago days when people
wanted to run certain hardware or software because not doing so would give the
perception that you were not up to par.  Sometimes what is used has nothing to
do with using the best product for the job.  That seems to be a sub-text of
this thread.

> It seems rather illogical that you'd refuse to work with a company that 
> had been given potentially sub-standard advice, based on what appears to 
> be a theological view?

I'm sure the MySQL folks don't think they are sub-standard.  A fair amount of
my business is "clean up", so if someone said, "we have an app on MySQL that is
not working for us", I would most definitely be interested.  If someone asked
me what DB I use to build applications, I would say PG.  If someone then said
"well, we're a MySQL shop", I would have to hear more, because depending on
what they want to do, I might not take on that project.  There is nothing
illogical or theological in that.

> Either that or you have more consulting work than you know what to do 
> with, that you can afford to base business decisions on an ideological 
> basis.

This really doesn't make sense.  Are you telling me you are going to accept any
and all work regardless of competency and confidence in the product?  Would you
really build a financial application on MySQL?  We both know that we all have a
certain ideology (read: religion) when it comes to our trade.  To be clear, I'm
not saying anything against someone who would use MySQL for a financial app.
I'm just saying that I would not (or at least would try very hard not to)
involve myself in that project, or any other project where I thought there was
a bad design or implementation.

When you are a smaller operation, your reputation is going to weigh in a lot
more than at a larger company.  I do not want my name tied to something
sub-standard.  If a consultant values his or her reputation, I don't see how
you can NOT consider what products you are willing to put your name on the
line for.

> If I chose not to work with companies that used Windows as servers 
> (because IMHO, Windows is not a good server environment) my house 
> would've been repossessed, and I'd have probably starved by now.
> 
> T.

12 years ago, calling myself a consultant one day meant putting in a Netware
3.11 server for a bunch of PCs and Macs and pulling coax.  Did I want to do
that?  I can't really say, because at the time I had to eat.  That, for me, is
on the outer fringes of this thread.  Few organizations are NOT using Windows
somewhere, and an increasing number of organizations are starting to understand
OSS solutions.  The two worlds are merging, so it's not about avoiding any one
thing.  It's about picking and choosing your battles.

> 
> Keith C. Perry wrote:
> 
> > The way I look at it is that I probably don't want to deal with a
> >company that thinks that MySQL on windows is "good environment".
> >
> >  
> >
> 


-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Is my MySQL Gaining ?

2003-12-28 Thread Keith C. Perry
Quoting Shridhar Daithankar <[EMAIL PROTECTED]>:

> On Sunday 28 December 2003 11:15, D. Dante Lorenso wrote:
> > The only SQL customizations that MySQL has that I really miss in
> PostgreSQL
> > are the commands:
> >
> > SHOW DATABASES;
> 
> \l
> 
> > SHOW TABLES;
> 
> \dt
> 
> > DESC table;
> 
> \d tablename
> 
> >
> > That was uber simple to do in MySQL.  To this day, I have trouble with
> > that in PostgreSQL.  I'm constantly doing:
> >
> > psql> \?
> > psql> help;
> > ERROR:  syntax error at or near "help" at character 1
> > psql> \h
> > ...
> > * damnit, that's not it...*
> > psql> \?
> > psql> \d
> > * ok, now which flag do I use for tables vs functions..etc?*
> 
> \df for functions and \dt for tables.
> 
> Problem is, psql is unique, though very powerful. I need to use Oracle's
> sql-plus on HP-UX at times (otherwise I crawl back to TOAD), and it is
> nowhere near psql.
> 
> Or maybe I play with postgresql more than oracle..:-) anyways
> 
> > I finally figure it out, I just end up forgetting again later.  I still
> > have no clue how I'd find the same data without using psql.  In MySQL
> > I can run those queries from PHP, PERL...etc.  I know you can find that
> > data in system tables in PostgreSQL, but I don't wanna muck around with
> > all that.  I just wanna do something as simple as MySQL.
> 
> Well, actually I would say it is a great way of learning postgresql
> internals.
> 
> There is a switch, -E, to psql which shows you the queries sent to the
> server for each command you type.
> 
> The problem with mysql is that the approach is easy to start with, but
> adding those commands to your standard list of SQL commands falls down on
> standards compliance and maintainability.
> 
> Another post on this thread mentioned postgresql should run against
> oracle.  The sole reason a postgresql vs. mysql debate should exist is to
> provide a comparison in a feasibility study.  The hurdles you mentioned
> are real, but they are just part of the somewhat steeper learning curve of
> a standard way of doing things..
> 
>  Shridhar
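
For what it's worth, the -E switch makes this concrete: the backslash
commands are just catalog queries, so the same data is reachable from PHP or
Perl.  A rough sketch of the kind of queries psql itself issues (not the
exact ones):

-- roughly what \l shows:
select datname from pg_catalog.pg_database order by 1;

-- roughly what \dt shows (user tables only):
select n.nspname as schemaname, c.relname as tablename
from pg_catalog.pg_class c
join pg_catalog.pg_namespace n on n.oid = c.relnamespace
where c.relkind = 'r'
  and n.nspname not in ('pg_catalog', 'information_schema');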

This is what I don't get.  Why do people think learning PG is going to be like
learning MySQL in the first place?  Because it's OSS?  I certainly hope not.
This is apples to oranges.

I read someone say the documentation was "light" too.  I'm not sure what that
meant, but I just looked at the 3-inch, double-sided binder of my 7.3.2 docs
(admin, user & programmer); it's as big as my J2EE binder.


Not very scientific I know  :)


Seriously though, when people say PG is "hard", I hear, "if it were easy,
everyone would be doing it".

-$0.02

> 
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster
> 


-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Is my MySQL Gaining ?

2003-12-28 Thread Keith C. Perry
I've asked this before, and I'll apologize now if there was a response, but how
does http://gborg.postgresql.org NOT fill this need?

Quoting Chris Travers <[EMAIL PROTECTED]>:

> Hi all;
> 
> The problem with trying to maintain an image of unity is that PostgreSQL is
> moving in a direction of being sort of like a kernel.  In this sense, we
> already are unified.  But regarding new types, client libs, etc. then unity
> is neither necessary nor desirable IMO.
> 
> If that is something that some people see here as important, maybe they can
> start their own PostgreSQL "distributions."  Maybe we can link to them via
> the PostgreSQL advocacy site :-)
> 
> Best Wishes,
> Chris Travers
> 
> - Original Message -
> From: "Dave Cramer" <[EMAIL PROTECTED]>
> To: "Robert Treat" <[EMAIL PROTECTED]>
> Cc: "Marc G. Fournier" <[EMAIL PROTECTED]>;
> <[EMAIL PROTECTED]>
> Sent: Sunday, December 28, 2003 5:31 AM
> Subject: Re: [GENERAL] Is my MySQL Gaining ?
> 
> 
> > Well, I'm not suggesting that we force them to do anything, just that we
> > give the appearance of unity; this should be possible with the tools
> > available, no?
> >
> > Dave
> > On Sat, 2003-12-27 at 16:57, Robert Treat wrote:
> > > But your examples also list things like interface libraries. For
> > > postgresql to do that, we would have to pick specific interface
> > > applications/libraries, then have them all centralize their
> > > development/release process around the main distribution. If you can get
> > > everyone to agree to this (and I recommend starting by picking the
> > > official python interface), we can start down a unified path, but I
> > > don't see it happening.
> > >
> > > Robert Treat
> > >
> > > On Sat, 2003-12-27 at 09:41, Dave Cramer wrote:
> > > > Regardless of the reasons, perception is reality. If we appear to be
> > > > disheveled then we are.
> > > >
> > > > I would think that it should be possible to give the appearance of
> > > > unity without actually requiring a full-time web-master?
> > > >
> > > >
> > > > Dave
> > > >
> > > > On Fri, 2003-12-26 at 12:43, Marc G. Fournier wrote:
> > > > > On Fri, 26 Dec 2003, Dave Cramer wrote:
> > > > >
> > > > > > One thing that they do have over postgres is a unified
> > > > > > experience; one doesn't have to go to n different sites to find
> > > > > > things, such as interface libraries, advocacy sites, development
> > > > > > sites, etc.
> > > > >
> > > > > Course they don't ... cause they have one, full-time, paid
> > > > > webmaster that has nothing else on his plate ... one advantage to
> > > > > being able to control everything is the ability to keep everything
> > > > > centralized ...
> > > > >
> > > > > > Dave
> > > > > >
> > > > > > On Fri, 2003-12-26 at 11:53, Marc G. Fournier wrote:
> > > > > > > On Fri, 26 Dec 2003, B. van Ouwerkerk wrote:
> > > > > > >
> > > > > > > > I think I will switch to PG anywhere soon but sometimes it's
> > > > > > > > hard to find whatever information I need. Google is a great
> > > > > > > > help but I would expect it in the docs.
> > > > > > >
> > > > > > > Like ... ?
> > > > > > >
> > > > > > > 
> > > > > > > Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
> > > > > > > Email: [EMAIL PROTECTED]   Yahoo!: yscrappy   ICQ: 7615664
> > > > > > >
> >
> >
> > ---(end of broadcast)---
> > TIP 3: if posting/reading through Usenet, please send an appropriate
> >   subscribe-nomail command to [EMAIL PROTECTED] so that your
> >   message can get through to the mailing list cleanly
> >
> >
> 
> 
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster
> 


-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Duplication to External Server

2003-12-18 Thread Keith C. Perry
Quoting David Hofmann <[EMAIL PROTECTED]>:

> I'm setting up 2 database servers. The first is on our local network, which 
> is our staging server. The second is an external server set up at my hosting 
> company. On a nightly basis I want to copy all the data from the local 
> Postgres database to the production server at the hosting company, overriding 
> whatever was there previously.
> 
> Does anyone have any suggestions on an easy way to do this?
> 
> 
> 
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
> 

You should check out some of the replication software on gborg.postgresql.org,
but off the top of my head I would say that if you truly want to copy the data
up to the server, you can use the psql program over an SSL connection.  I'm not
quite sure whether psql supports SSL natively, but you can always tunnel the
connection with ssh if your ISP supports it.  If you are using 7.4 you could
even use pg_dump with ssh in a similar manner.
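
A minimal sketch of the nightly push over ssh, with hypothetical names (note
it drops and recreates the production database, so anything there is lost):

$ ssh user@production "dropdb mydb; createdb mydb"
$ pg_dump mydb | ssh user@production "psql -q mydb"

Running pg_dump on the staging side and psql on the production side keeps
the whole transfer inside one encrypted ssh stream.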

-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] [NOVICE] PostgreSQL Training

2003-12-16 Thread Keith C. Perry
Quoting Peter Eisentraut <[EMAIL PROTECTED]>:

> Amy Young wrote:
> > In the mean time, I will investigate the "21 day" book (I have used
> > the series many times!) and hope the PostgreSQL community will
> > recognize the need for some training classes
> 
> I don't see that there is a lack of availability of training 
> opportunities.  Just ask any of the PostgreSQL consultants and they 
> will do custom training for you.
> 
> 
> ---(end of broadcast)---
> TIP 7: don't forget to increase your free space map settings
> 

Great idea!!

As a part of advocacy, perhaps we in the community should add "training" as one
of the things we do.  I'm sure some of us have done training before, but if we
mobilize this effort more formally from within, we could quickly have quite a
few "trainers" once we decide how to divide up the knowledge (i.e. training
levels).  I think Bruce said his materials are on his web site, so perhaps we
should start there with the intention of repackaging that information for
community distribution.

-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] tablespaces in 7.5?

2003-12-13 Thread Keith C. Perry
Quoting Greg Stark <[EMAIL PROTECTED]>:

> "Keith C. Perry" <[EMAIL PROTECTED]> writes:
> 
> > Ok, thanks for the response.  I take it a PG namespace = Oracle
> > tablespace (or namespace is simply the generic term).
> 
> Actually if you check back you'll notice you're the first person to say
> "namespace". The original question was about "tablespaces"
> 
> -- 
> greg

Whoa, I sure did; my apologies.  That would also explain my confusion.  Ok, so
on PG:

namespaces  = yes, via schemas
tablespaces = forthcoming
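
A quick sketch of the schema (namespace) half, since that part already works
today:

create schema accounting;
create table accounting.invoices (id serial, total numeric);
set search_path to accounting, public;
select * from invoices;  -- resolved via the search path

Tablespaces, i.e. controlling which disk the files land on, are the part
still forthcoming.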

-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] tablespaces in 7.5?

2003-12-13 Thread Keith C. Perry
Quoting Mike Nolan <[EMAIL PROTECTED]>:

> > Ok, thanks for the response.  I take it a PG namespace = Oracle
> > tablespace (or namespace is simply the generic term).  I can see some
> > definite benefits, especially with disk I/O throughput, though I thought
> > database partitioning (I think that is what it's called) would provide
> > the same thing.
> 
> I could be wrong, but I think 'namespace' is an existing concept in
> PG that is a way of organizing objects into logical groups.  
> 
> As I recall, the group working on it decided to call it a 'directory'
> rather than a 'tablespace', because of concerns that the latter word might
> be proprietary to Oracle.  I've lost touch with the rest of the members in
> that group, though, since the computer I was using for PG development
> purposes got zapped by lightning in August.
> --
> Mike Nolan
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
> 

I think that is what I was getting confused with before: schemas...

http://www.postgresql.org/docs/7.4/static/catalog-pg-namespace.html


-- 
Keith C. Perry, MS E.E.
Director of Networks & Applications
VCSN, Inc.
http://vcsn.com
 

This email account is being hosted by:
VCSN, Inc : http://vcsn.com

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

