Re: [GENERAL] Mac OS Roman import?

2009-10-19 Thread Sam Mason
On Mon, Oct 19, 2009 at 02:54:38PM -0700, Christophe Pettus wrote:
> I find myself needing to regularly import data encoded as Mac OS Roman  
> (yes, it is coming from an Macintosh OS 9 system) into a UTF-8  
> PostgreSQL database.

How tightly integrated into PG do you want it?  You can define custom
character conversions if you want (although I've never tried this).
Python appears to know how to convert to/from "MacRoman"[1] by default,
so pl/python should make this reasonably easy.
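
For what it's worth, here's a minimal (untested) sketch of the
pl/python route; the function name is made up, and I'm assuming
plpythonu hands the bytea argument over as a raw byte string (worth
checking on your version):

  CREATE FUNCTION macroman_to_utf8(bytea) RETURNS text AS $$
    # decode the Mac OS Roman bytes; PG re-encodes the returned
    # unicode object into the server encoding (UTF-8 here)
    return args[0].decode('mac_roman')
  $$ LANGUAGE plpythonu;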

Otherwise, iconv[2] is a useful tool.  The project's homepage suggests
that it knows how to do the conversions, but I can't figure out the
command line needed.

-- 
  Sam  http://samason.me.uk/

 [1] http://docs.python.org/library/codecs.html
 [2] http://www.gnu.org/software/libiconv/



Re: [GENERAL] 3d Vector Types and operators

2009-10-14 Thread Sam Mason
On Wed, Oct 14, 2009 at 12:04:26PM -0500, Andrew Bailey wrote:
> I can't find in the documentation support for a 3 dimensional vector,
> I have only seen the array type, I am interested in doing vector dot
> products and vector cross products, also summing vectors and
> multiplying by a scalar quantity

If you did do this, I'd be tempted to use something like:

  create type point3d AS (
x float8, y float8, z float8
  );

and then write your functions using this.  The length of an array isn't
part of its type and so PG wouldn't be able to stop you from writing:

  select array[1,2,3] + array[2,3,4,5,6];

if you provided the appropriate operators.  If you use a fixed sized
tuple, as above, you'd get errors if you tried to use points of the
wrong dimensionality.
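
For example, an (untested) sketch of what the dot product and vector
addition might look like; the cross product and scaling follow the
same pattern:

  CREATE FUNCTION dot(point3d, point3d) RETURNS float8
  IMMUTABLE LANGUAGE sql AS $$
    SELECT ($1).x*($2).x + ($1).y*($2).y + ($1).z*($2).z;
  $$;

  CREATE FUNCTION add3d(point3d, point3d) RETURNS point3d
  IMMUTABLE LANGUAGE sql AS $$
    SELECT ROW(($1).x+($2).x, ($1).y+($2).y, ($1).z+($2).z)::point3d;
  $$;

  CREATE OPERATOR + (leftarg = point3d, rightarg = point3d, procedure = add3d);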

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] different sort order for primary key index

2009-10-14 Thread Sam Mason
On Wed, Oct 14, 2009 at 10:29:56AM -0400, Tom Lane wrote:
> Paul Hartley  writes:
> > ... I'm unclear
> > if PostgreSQL treats primary keys differently from unique, non-null
> > constraints.
> 
> The *only* thing that the system does specially with a primary key
> constraint is that a PK creates a default column target for foreign key
> references.

It also (silently) overrides an explicit NULL specification, doesn't it?  For
example:

  CREATE TABLE x ( id INT NULL PRIMARY KEY );

ends up with "id" being NOT NULL, even though I asked for it to be
nullable.  I'm not sure whether it would be useful for this case to be
an error, though that would be more in line with PG throwing errors
when you ask for something contradictory instead of making a best
guess.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Procedure for feature requests?

2009-10-14 Thread Sam Mason
On Tue, Oct 13, 2009 at 10:22:04PM +, Tim Landscheidt wrote:
> Sam Mason  wrote:
> > Calculating "(C - B) / C" isn't easy for timestamps, whereas it's easy
> > for dates.  I believe this is why there's a specific version for the
> > former but not the latter.
> 
> (I obviously meant "(B - A) / C" :-).)

Huh, I hadn't even noticed that!

> I would assume
> that you just have to convert A, B and C to seconds (since
> epoch) and then use a normal integer division.

The problem is that the Gregorian calendar is far too complicated.  For
example, think about what would happen with an interval of "months".
Converting to seconds doesn't help because the length of a month in
seconds changes depending on which year the month is in and which month
you're actually dealing with.  This makes every definition of
"division" I've ever been able to think of ill-defined, and hence the
above calculation won't work.
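
To make that concrete, "one month" is a different number of seconds
depending on where you start, so there's no constant to divide by:

  SELECT extract(epoch FROM timestamp '2009-02-01' - timestamp '2009-01-01') AS jan,
         extract(epoch FROM timestamp '2009-03-01' - timestamp '2009-02-01') AS feb;

gives 2678400 (31 days) for January but 2419200 (28 days) for February.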

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] `must be superuser to COPY to or from a file' - using perl DBI - approaches to work around this

2009-10-12 Thread Sam Mason
On Tue, Oct 13, 2009 at 11:10:12AM +1030, Dan Kortschak wrote:
> On Mon, 2009-10-12 at 20:21 -0400, Stephen Frost wrote:
> > Read the 'COPY support' section.
> 
> Seems like the way to go, though it will be significantly slower than
> psql or superuser reads (a couple of tables have ~10s-100sM rows).

Unless perl is doing some very funky stuff I'd expect you'll be waiting
for the disks most of the time; Perl will just be shoving blocks of data
around, and that's fast.  If performance is really your thing then C may
help.

> I was just wondering about the reasons for
> making that decision - the relative danger of creation and read from
> stdin vs read from a file.

"stdin" effectively just means data from the client, the filesystem
would be from "inside" the server and hence in the presence of a
malicious client letting it do stuff with its own query seems OK whereas
the server's filesystem is an authority you probably don't want to go
spreading too widely and hence is limited to userusers.
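
That's also why psql's \copy works for ordinary users; it reads the
file on the client and feeds it to COPY ... FROM STDIN over the
connection, along the lines of (made-up table name):

  \copy mytable from '/path/to/data.txt'

DBD::Pg's COPY support mentioned above does the same thing from Perl.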

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] How to send multiple SQL commands from Python?

2009-10-11 Thread Sam Mason
On Sat, Oct 10, 2009 at 01:14:56PM -0700, Adrian Klaver wrote:
> sql_str = "ALTER TABLE " + $xn + " OWNER TO xdev;"
> sql_str += "GRANT ALL ON TABLE " + $xn + " TO xdev;"
> sql_str += "REVOKE ALL ON TABLE " + $xn + " FROM PUBLIC;"
> sql_str += "GRANT SELECT ON TABLE " + $xn + " TO PUBLIC;"

One minor stylistic point.  Python appears to follow the same string
literal rules as C in that multiple adjacent string literals are
concatenated at compile time[1].  Thus you could write the above as:

  sql_str = (
      "ALTER TABLE " + xn + " OWNER TO xdev;"
      "GRANT ALL ON TABLE " + xn + " TO xdev;"
      "REVOKE ALL ON TABLE " + xn + " FROM PUBLIC;"
      "GRANT SELECT ON TABLE " + xn + " TO PUBLIC;"
  )

This wouldn't help much here (I've written xn for the $xn placeholder
above, to keep it valid Python), but it may in more complicated bits of
code.

-- 
  Sam  http://samason.me.uk/

 [1] 
http://docs.python.org/reference/lexical_analysis.html#string-literal-concatenation



Re: [GENERAL] Posgresql to MSsql encoding problem

2009-10-09 Thread Sam Mason
On Fri, Oct 09, 2009 at 11:39:47AM +0100, Pedro Doria Meunier wrote:
> I cannot simply use "Set Client Encoding to LATIN1" as the "client"
> itself is the script ... :)

What does this mean?  Surely if it's just a "script" you can put
another line at the beginning of the script that says this?

If you really can't do that, how about using ALTER USER so that the
user gets LATIN1 by default?
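
Something like this (with a made-up user name) should do it:

  ALTER USER pedro SET client_encoding TO 'LATIN1';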

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] How to reduce WAL file Size

2009-10-09 Thread Sam Mason
On Thu, Oct 08, 2009 at 11:48:29PM -0700, Mitesh51 wrote:
> I am dealing with backup of databases. Now eventhough there are not many
> changes in the DB, it creates WAL file of 16 mb. Now because I am working on
> a application which takes full & inc backup of db after specific
> intervals..if for small changes it creates 16 mb files then storing them
> will eat up lot many space.

I think the following utility would help you here:

  http://pgfoundry.org/projects/clearxlogtail/
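
If I remember right it's a simple stdin-to-stdout filter that zeroes
the unused tail of each WAL segment so it compresses well; it's meant
to be slotted into archive_command something like this (check the
project's docs for the exact invocation):

  archive_command = 'pg_clearxlogtail < %p | gzip > /wal_archive/%f.gz'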

Also I'd recommend reading the articles Alban pointed to; it can be
somewhat tricky to back up PG correctly.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Best data type to use for sales tax percent

2009-10-09 Thread Sam Mason
On Fri, Oct 09, 2009 at 12:10:41AM -0700, Mike Christensen wrote:
> Wouldn't (4,3) let me store 0.000 through 9.999?  Maybe I'm still not
> following what both numbers mean.

I think Rich was getting confused about how you wanted to represent your
percentages.

> I understand the point about states/counties with 3 decimal digits of
> sales tax, so I'd probably want to do (5,5) which should give me
> 0.00000 - 0.99999, and store 9.825% sales tax as .09825.  I'm
> suggesting storing sales tax as a number between 0 and 1 so I can
> easily multiply it against a subtotal to get the tax amount, storing
> anything over 1.0 is unnecessary.

This is how I'd normally do it.  Ratios for inside the code, just
"format" them as percentages when you want the user to see them.

> Also, if you just say "numeric" (without any numbers) then Postgres
> lets you store any number you wish and will never do any rounding of
> any sort, correct?  If there a price you pay for this in terms of
> perf, bytes on disk, etc?

It's not possible to do division accurately (I'm not sure about the
caveats in other operators).  For example, numeric '1'/3 is represented
as "0.33333333333333333333" and multiplying this by three again will
give "0.99999999999999999999".  When people say that numeric types are
"exact" they're not giving you the whole truth.
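
You can see it directly in psql:

  SELECT (numeric '1' / 3) * 3;   -- gives 0.99999999999999999999, not 1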

> Another idea is if I'm tying myself down to a certain level of decimal
> accuracy in the first place, why not just store everything as an Int2?
>  9.825% would be stored as 9825 and I'll divide everything by 100000
> when I calc sales tax.  If I'm not mistaken, integral data types are
> faster for Postgres and less bytes on disk, right?  BTW, I will never
> be doing any math using Postgres, it's just for pure storage..

Not sure what range of values you have to cover; you wouldn't be able to
do this with fixed width integer types:

  select numeric '100' ^ 300;

Numeric types allow you to do the above; it's the flexibility of
allowing the representation of a number to get this wide that causes
things to be slower.  It's not much slower though; I'd benchmark a test
case that's meaningful to you, and then you can make a sensible
decision.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Query inside RTF

2009-10-08 Thread Sam Mason
On Thu, Oct 08, 2009 at 10:58:05AM -0300, Leonardo M. Rammm wrote:
>   cast(rtf_field as varchar) like '%condition%'

I don't think that cast is doing what you expect at all.  As an
example, try doing something like:

  select cast(bytea e'hi\nbye' as varchar);

encode() may be what you want instead.  How do RTF files handle
encoding?  You may have better luck using a specialized program to pull
the text out of the RTF file and then using the text search stuff in
PG.  LIKE conditions can't be indexed well, and hence the above is
going to be slow for any reasonable number of documents.
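
A sketch of the encode() route, with made-up names (note it still
won't be able to use an index):

  SELECT * FROM docs WHERE encode(rtf_field, 'escape') LIKE '%condition%';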

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Using complex PRIMARY KEY

2009-10-07 Thread Sam Mason
On Wed, Oct 07, 2009 at 09:19:58PM +0200, Zsolt wrote:
> For a given house I would like to start the numbering of tenants from
> 1. Each house could have tenant_ID=1, obviously in this case the
> house_ID will differ. The combination of tenant_ID and house_ID will
> be the unique identifier of each tenant.

The term that tends to float around for this concept (in PG anyway) is
"gap-less sequences"; this looks like a reasonable link:

  http://www.varlena.com/GeneralBits/130.php
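
The usual shape of the trick, with made-up tables: lock the parent row
so concurrent inserts for the same house serialize, then hand out
max+1:

  BEGIN;
  SELECT 1 FROM house WHERE house_id = 42 FOR UPDATE;
  INSERT INTO tenant (house_id, tenant_id, name)
    SELECT 42, coalesce(max(tenant_id), 0) + 1, 'Bob'
    FROM tenant WHERE house_id = 42;
  COMMIT;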

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] How useful is the money datatype?

2009-10-05 Thread Sam Mason
On Mon, Oct 05, 2009 at 01:07:16PM -0700, Christophe Pettus wrote:
> A quick check of the source code (src/backend/utils/adt/numeric.c)  
> shows it's base 10000, each "digit" represented as an int16.

I was going to note that in my post but thought it was needless detail;
ah well, maybe next time I will! :)

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] connection failure

2009-10-05 Thread Sam Mason
On Mon, Oct 05, 2009 at 03:06:41PM +0200, Miklosi Attila wrote:
> What does the message below mean?
> 
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.

It means the server disconnected from the client; as it says this can
happen because the server has crashed.  To figure out what really
happened, you want to look at postgres' logs on the server.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Procedure for feature requests?

2009-10-05 Thread Sam Mason
On Mon, Oct 05, 2009 at 12:03:38AM +, Tim Landscheidt wrote:
> Sam Mason  wrote:
> >Tim Landscheidt wrote:
> >> "generate_series(A, B, C)" can also
> >> be written as "A + generate_series(0, (C - B) / C) * C"
> >
> > If you can figure out the limit then it seems easy,
> > though I'm not sure how you'd do that.
> 
> What limit?

Sorry, I was calling the second parameter to generate_series the "limit".

Calculating "(C - B) / C" isn't easy for timestamps, whereas it's easy
for dates.  I believe this is why there's a specific version for the
former but not the latter.

> > Hum, now I'll have to see which is "better".
>
> Which of my mails made you think that I was not satisfied
> with PostgreSQL's current performance?

Nothing; it was my personal interest to see which was faster.

> "generate_series(DATE, DATE)" would just be syntactic sugar,
> and I like sweets.

We all do, but in software it's got to be balanced against the overhead
of maintaining support for these functions.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] How useful is the money datatype?

2009-10-04 Thread Sam Mason
On Sun, Oct 04, 2009 at 09:31:02AM -0700, Rich Shepard wrote:
> On Sun, 4 Oct 2009, Sam Mason wrote:
> >The point is that on *any* computer it's impossible to perform arbitrary
> >calculations to infinite precision (i.e. "without a loss of precision" as
> >you put it).
> 
>   I've not followed this tread, but read this one message, so perhaps my
> comments are not appropriate. In that case, I apologize for jumping in.

More comments are normally good!

>   Monetary values have always been an issue with computers. For a while, at
> least in the mainframe world of decades ago, binary-coded decimals (BCD)
> were a working approach.

I've never had to use BCDs for anything real, but I believe the reason
they're nice is that when you get a result you can't represent
accurately it gets rounded to something that looks nicer in decimal
than the rounding you'd get in base 2.  PG's numeric type effectively
uses base 10 internally, so it would be a good fit for cases where you
used to use BCD encodings.

>   In the early and mid-1980s we used a procedure for business applications
> involving money that worked regardless of programming language or platform.
> To each (float, real) monetary amount we added 0.005 and truncated the 
> result
> to two digits on the right of the decimal point. In almost all cases, this
> allowed financial calculations to be correct to the nearest penny.

I was under the impression that floats have about 6 useful decimal
digits of precision, thus any calculations involving amounts of a
hundred thousand or more would start to give arbitrary values for the
cents.
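
A quick way to see the effect (float4 arithmetic, default float
output; the sum should come back as just 100000):

  SELECT real '100000' + real '0.01';   -- the cent is gone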

>   Financial calculations are still imperfect. Now and then I see this in
> both my business and personal bank statements when reconciliation is off by
> a penny or two. The transaction amounts (debits and credits) match, but the
> bank comes out with a different total than do I. This is usually only for a
> month or two before we are once again in agreement.

That seems to be the bug that Greg Stark noted in this thread; the bank
is probably storing values with more precision than it's choosing to
report to you.  Thus the totals will drift into and out of being correct
over time.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] How useful is the money datatype?

2009-10-04 Thread Sam Mason
On Sat, Oct 03, 2009 at 10:14:53PM -0400, V S P wrote:
> Withing PG procedures at least in pgsql it is impossible to do 'money'
> calculations without a loss of precision.

The point is that on *any* computer it's impossible to perform arbitrary
calculations to infinite precision (i.e. "without a loss of precision"
as you put it).

You can do things losslessly in certain common situations and the
numeric type in PG helps with a lot of these.

> There is an open source library by IBM that I use in my C++ code to do
> this, and may be it can be incorporated into PG
>
> it is called decNumber
> http://speleotrove.com/decimal/decnumber.html

How would this help over PG's existing numeric type?

Support for decimal floating point numbers would be nice, but I'm pretty
sure you're not asking for this.

> Micropayment systems (that for example, I am implementing) require to
> have a reasonably good precision. Support for currencies such as yen
> also dictates that reasonably large numbers are supported

Which limits do you find to be problematic in PG?

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] query is taking longer time after a while

2009-10-04 Thread Sam Mason
On Sun, Oct 04, 2009 at 11:08:01AM -0400, Tom Lane wrote:
> Sam Mason  writes:
> > On Sun, Oct 04, 2009 at 01:44:30AM -0700, tomrevam wrote:
> >> ->  Bitmap Index Scan on session_allocation_info_status_idx 
> >> (cost=0.00..5.28 rows=1 width=0) (actual time=1619.652..1619.652 
> >> rows=51025 loops=1)
> >> Index Cond: ((status)::text = 'active'::text)
> >> ->  Bitmap Index Scan on session_allocation_info_status_idx 
> >> (cost=0.00..5.28 rows=1 width=0) (actual time=806.770..806.770 rows=46601 
> >> loops=1)
> >> Index Cond: ((status)::text = 'setup'::text)
> >> Total runtime: 4819.990 ms
> 
> > Wow, that's quite a change in run time!  Are you sure planner stats are
> > being kept up to date?
> 
> It's not the planner's fault.  Note that the parent BitmapHeapScan is
> still returning the same number of rows.

Sorry, I chopped out too much context.  Here's the "early" run; the
estimated and real row counts look good to me:

On Sun, Oct 04, 2009 at 01:44:30AM -0700, tomrevam wrote:
>  ->  Bitmap Index Scan on session_allocation_info_status_idx 
> (cost=0.00..48.93 rows=1555 width=0) (actual time=0.244..0.244 rows=1557 
> loops=1)
>Index Cond: ((status)::text = 'active'::text)
>  ->  Bitmap Index Scan on session_allocation_info_status_idx 
> (cost=0.00..48.93 rows=1555 width=0) (actual time=0.181..0.181 rows=1609 
> loops=1)
>Index Cond: ((status)::text = 'setup'::text)
>  Total runtime: 2.193 ms

Or did I miss something else?

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] query is taking longer time after a while

2009-10-04 Thread Sam Mason
On Sun, Oct 04, 2009 at 01:44:30AM -0700, tomrevam wrote:
>  ->  Bitmap Index Scan on session_allocation_info_status_idx 
> (cost=0.00..5.28 rows=1 width=0) (actual time=1619.652..1619.652 rows=51025 
> loops=1)
>Index Cond: ((status)::text = 'active'::text)
>  ->  Bitmap Index Scan on session_allocation_info_status_idx 
> (cost=0.00..5.28 rows=1 width=0) (actual time=806.770..806.770 rows=46601 
> loops=1)
>Index Cond: ((status)::text = 'setup'::text)
>  Total runtime: 4819.990 ms

Wow, that's quite a change in run time!  Are you sure planner stats are
being kept up to date?  It's expecting a *single* row back from an index
scan of "session_allocation_info_status_idx" when looking for "active"
and a single row when looking for "setup" but gets 51025 and 46601 rows
back respectively.  These are a *long* way out and would explain why it's
taking an inordinate amount of time.

It would be interesting to run "ANALYZE session_allocation_info" and
then see what changes.  I'd try to get the "rows=n" numbers in the
estimates into the same order of magnitude as those from the actual
run.  Increasing statistics targets helps in some situations, but may
not here:

  http://www.postgresql.org/docs/current/static/sql-altertable.html

Something like:

  alter table session_allocation_info alter status set statistics 200;

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Embarassing GROUP question

2009-10-03 Thread Sam Mason
On Sat, Oct 03, 2009 at 06:12:20PM -0500, Corey Tisdale wrote:
> We also weren't adding image data to blobs, we were bit mapping  
> faceted data to blob and shifting to allow people to shop by artist or  
> color or subject matter across millions of posters. Normalized tables  
> just weren't cutting it, and bit shifting up to 32 bit was crazy fast.  

Just out of interest, have you tried PG's support for fancier index
types?  hstore or intarray would appear to help with what you're doing.
I'm not quite sure what you're actually doing, so my guess could be a
long way off!
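
For example, an intarray-based sketch with made-up names (in 8.3/8.4
intarray is installed by running its contrib SQL script):

  CREATE INDEX posters_facets_idx ON posters USING gin (facets gin__int_ops);

  -- all posters tagged with both facets 3 and 7
  SELECT * FROM posters WHERE facets @> ARRAY[3,7];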

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Procedure for feature requests?

2009-10-03 Thread Sam Mason
On Sat, Oct 03, 2009 at 04:14:21PM -0400, Tom Lane wrote:
> Sam Mason  writes:
> > the decision to officially bless some code as being a cast
> > rather than "just" a function seems very arbitrary
> 
> It's useful when the conversion semantics are sufficiently natural that
> you want the conversion to be applied implicitly.

Thanks!  After a big think I've come round to the view that the
implicit casts between the various numeric and date types are a good
thing.  They can cause some confusion and semantic strangeness, but the
increase in code verbosity that would result without them normally
outweighs those costs.

In higher assurance code this balance may tip back the other way, but
databases have more focus on having a sane set of defaults rather than
forcing you to make all the decisions up front.

> I agree that the
> explicit CAST syntax hasn't got very much to recommend it over a
> function call.  That part you can blame on the SQL committee ;-) ...

What more would you want them to do?  Casts, that is; the SQL
committee does enough, I think!

> the historical PostQUEL syntax for this was exactly a function call,
> and you can still write it that way if you choose.

I have a feeling I'll probably carry on doing that then.  I'm not sure
if I'm ever going to write enough almost overlapping bits of code that
casts would become useful.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Embarassing GROUP question

2009-10-03 Thread Sam Mason
On Sat, Oct 03, 2009 at 01:05:49PM -0400, Tom Lane wrote:
> What you might be after is something like Postgres' DISTINCT ON
> feature

Yup, looks that way to me as well.
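
For anyone following along, DISTINCT ON keeps the first row of each
group in ORDER BY order; with made-up names, "latest order per
customer" looks like:

  SELECT DISTINCT ON (customer_id) customer_id, ordered_at, total
  FROM orders
  ORDER BY customer_id, ordered_at DESC;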

> I have never really played around with this aspect of MySQL ...

Me neither.

> but looking at this example, and presuming that you find that
> it actually does something useful, I wonder whether they interpret
> the combination of GROUP BY and ambiguous-per-spec ORDER BY
> in some fashion similar to DISTINCT ON.

Yup, it does look that way, doesn't it.  It's still a weird pair of
semantics to conflate.

Hum, if they were assuming that you'd always have to implement GROUP BY
by doing a sort step first then I can see why they'd end up with this.
But if you want to implement it *any* other way (e.g. a hash aggregate
in PG) then you want to keep the semantics of GROUP BY and ORDER BY
separate, as the spec (and indeed PG) does.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Procedure for feature requests?

2009-10-03 Thread Sam Mason
On Sat, Oct 03, 2009 at 12:48:57PM -0400, Tom Lane wrote:
> I think the reason CREATE CAST exists is exactly that the cast mechanism
> *isn't* intended to provide conversions between any arbitrary pair of
> datatypes.  It's only intended to provide conversions in those cases
> where the conversion semantics are obvious to some degree or other.

Yup, but the decision to officially bless some code as being a cast
rather than "just" a function seems very arbitrary; I think this is why
I don't understand its purpose.

> Since that's somewhat in the eye of the beholder, we allow the user
> to adjust edge cases by creating/removing casts --- but there's no
> expectation that when you define a new datatype, you'll provide casts
> to or from unrelated types.

I know there's no expectation to create any casts.  I think what I'm
confused about is why anyone would ever bother creating any in the first
place.  I have a feeling I may have used the functionality once, but
I can't think why or for what now.  Having a function seems just as
expressive to me, which is why I think I'm missing the point.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Procedure for feature requests?

2009-10-03 Thread Sam Mason
 [ please don't top-post it's difficult to follow for those not directly
   involved in the discussion ]

On Sat, Oct 03, 2009 at 12:14:19PM -0400, Martin Gainty wrote:
> is there a way to create a cast with assignment e.g.?

Hum, I'm unsure how this would help.  Maybe more explanation would help?

> CREATE CAST ((date,date) AS int4) WITH FUNCTION generate_series(date,date) AS 
> ASSIGNMENT;

generate_series returns a SETOF values.  It also has *much* more
complicated semantics than I'd expect most people would attribute to a
datatype conversion function.  For example, why would casting from a
pair of dates end up as a set of rows each containing a single date
value?


I have a large problem understanding the real purpose of casts, so
maybe I'm missing something.  My problem is that I don't understand the
purpose of trying to provide a "standard" way of converting between
arbitrary datatypes; it seems much easier to just provide a set of
domain-specific functions that are explicitly used by the user.  The
SQL standard specifies that casts need to exist so PG has to support
them, but their purpose still confuses me!

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] How useful is the money datatype?

2009-10-03 Thread Sam Mason
On Sat, Oct 03, 2009 at 11:49:50AM -0400, Merlin Moncure wrote:
> On Sat, Oct 3, 2009 at 11:40 AM, Sam Mason  wrote:
> > it's still a computer and thus can't represent anything
> > with infinite precision (just numeric fractions in PG's case, let alone
> > irrational numbers).
> 
> I don't quite agree with your statement (I agree with your point, just
> not the way you worded it).

Maybe I didn't emphasize "numeric" enough; the current implementation
of numeric datatypes in PG does not allow arbitrary fractions (1/3,
say) to be represented accurately.  Is that any better?

> I could make a type, 'rational', define
> the numerator, denominator, and do calculations like the above with
> zero loss.

Yes, if you defined a datatype like this then it would be able to
represent a strictly larger set of numbers exactly.
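
A minimal sketch of the idea (no gcd normalization or overflow
handling):

  CREATE TYPE rational AS (num bigint, den bigint);

  CREATE FUNCTION rat_mul(rational, rational) RETURNS rational
  IMMUTABLE LANGUAGE sql AS $$
    SELECT ROW(($1).num * ($2).num, ($1).den * ($2).den)::rational;
  $$;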

> So it depends how you define 'represent'.
> Computers can do pretty much any type of bounded calculation given
> enough time and memory.

Which is why I said "with infinite precision".  Assuming infinite time
or space doesn't seem to help with any real world problem, it's the
details of the assumptions made and the use case(s) optimized for that
tend to be interesting.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] How useful is the money datatype?

2009-10-03 Thread Sam Mason
On Sat, Oct 03, 2009 at 12:20:57PM +0100, Raymond O'Donnell wrote:
> I thought the idea of NUMERIC was that the value was exact, avoiding
> rounding problems that you might get with other floating-point types?

Nope, sorry, it's still a computer and thus can't represent everything
with infinite precision (even simple fractions in PG's case, let alone
irrational numbers).  For example:

  select (numeric '1'/3) * 3;

gives me back 0.99999999999999999999.

What NUMERIC datatypes do allow, however, is for you to specify the
precision used in calculations and storage (e.g. 10 digits, four of
those being fractional digits).  Thus you've got a chance of putting a
bound on the total error that can accumulate during a calculation.

For example, you can choose between storing a few more digits in your
accounting tables so that when doing aggregations it comes out with the
"right" number at the end---i.e. 10 orders of something cost the same as
one order of 10 items.  Or you set the precision to be coarser and then
the values that have been rounded off will match everything else.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Procedure for feature requests?

2009-10-03 Thread Sam Mason
On Fri, Oct 02, 2009 at 09:48:14PM +, Tim Landscheidt wrote:
> Sam Mason  wrote:
> > 8.4 has a generate_series(timestamp,timestamp,interval) which would seem
> > to be a bit more flexible than you want.
> 
> Yes, I know :-). But as "generate_series(A, B, C)" can also
> be written as "A + generate_series(0, (C - B) / C) * C" (or
> something "flexible" like that :-)), a

For things as complicated as timestamps I'm not sure if this is such a
trivial transform.  If you can figure out the limit then it seems easy,
though I'm not sure how you'd do that.

> "generate_series(DATE, DATE)" would inter alia get rid off
> the need to cast the result from TIMESTAMP to DATE and to
> explicitly specify "'1 day'". Just a small, trivial enhance-
> ment for a popular use case :-).

Interesting, I tend to aim for maximum expressiveness not ease of
expressiveness.  It would be somewhat easy to add the above if you want
though:

  CREATE FUNCTION generate_series(date,date)
  RETURNS SETOF date
  IMMUTABLE LANGUAGE sql AS $$
    SELECT generate_series($1::timestamp,$2::timestamp,interval '1 day')::date;
  $$;

or I suppose you could use the integer series generation:

  SELECT $1 + generate_series(0,$2 - $1);

Hum, now I'll have to see which is "better".

That second version seems to be slightly quicker (20 to 30%, for ranges
from a year up to a century respectively) so you may prefer it, but the
difference is going to be in the noise for any query I've ever used
generate_series for.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Procedure for feature requests?

2009-10-02 Thread Sam Mason
On Thu, Oct 01, 2009 at 11:35:25PM +, Tim Landscheidt wrote:
> suppose I thought that PostgreSQL would benefit greatly from
> a "generate_series(DATE, DATE[, INT]) RETURNS DATE" function

8.4 has a generate_series(timestamp,timestamp,interval) which would seem
to be a bit more flexible than you want.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Boolean storage takes up 1 byte?

2009-10-01 Thread Sam Mason
On Thu, Oct 01, 2009 at 11:37:40AM +0100, Thom Brown wrote:
> I've read the PostgreSQL documentation page on the boolean datatype (
> http://www.postgresql.org/docs/8.4/static/datatype-boolean.html) to find out
> what PostgreSQL's definition of a boolean is, as I believe it is distinctive
> from a bit(1) datatype

Yup, they're really for different things.  AND, OR and NOT are defined
for the BOOLEAN datatype and not for bit strings.

> (as you can't max() a boolean.. not sure what an
> efficient alternative to that is).

bool_or and bool_and are aggregates that work over boolean data types.
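
For example (made-up names):

  SELECT bool_or(paid) AS any_paid, bool_and(paid) AS all_paid
  FROM invoices;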

> However, I see that a boolean takes up 1
> byte of storage, which is 8 bits.  Is this due to the fact that the value
> can be null?

I believe it's more to do with the fact that if you add a boolean column
and then subsequently an int column then you're going to struggle to
"pack" them efficiently.  PG always puts columns on the "end" so that you
can add a column in constant time (i.e. no need to rewrite the table
in some common situations).  Once you start doing this then packing is
awkward and a single byte becomes much easier.  Whether the value is
NULL is stored elsewhere in the row.

Yes, this could be made more efficient; whether it's worth it is a
difficult question!

> And does its storage as a byte affect indexing or query planning?

Not sure which aspects you're referring to here, sorry.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] PostgreSQL Macro Query

2009-09-30 Thread Sam Mason
On Wed, Sep 30, 2009 at 11:13:06AM +0100, Bronagh Grimes wrote:
> I have written some code to calculate basic summary stats and wish to
> then incorporate this code within a macro... I want to run the code
> many multiple times and define the variables (on which the summary
> statistics are calculated) outside of the basic code.

I'd normally do this "outside" Postgres, maybe from something like
Python or Perl.

> For example, see some very basic code below... I would use a macro so
> that I don't have to change 'STUDY_GROUP' in the code each time.

If you want to stay inside PG and use psql, you could rewrite the query
to make this less of a problem.

> select "STUDY_GROUP" , count("STUDY_GROUP")
> from "TABLE1" group by "STUDY_GROUP";

  SELECT "STUDY_GROUP", COUNT(*)
  FROM "TABLE1"
  GROUP BY 1;

Will do the same thing; except where the column is NULL.  You will have
been getting zero before, but now you'll be told how many null entries
you have--this may or may not be what you want.  If you don't want it,
you could do:

  SELECT v, COUNT(v)
  FROM (SELECT "STUDY_GROUP" AS v FROM "TABLE1") x
  GROUP BY v;

There would still only be one name to change then and PG would optimize
the query to do the same thing either way so performance shouldn't be
affected.

Hope that helps!

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] bulk inserts

2009-09-29 Thread Sam Mason
On Tue, Sep 29, 2009 at 12:17:51PM -0400, Tom Lane wrote:
> Sam Mason  writes:
> > On Tue, Sep 29, 2009 at 08:45:55AM -0700, Alan Hodgson wrote:
> >> I think a big reason is also that the client can stream the data without 
> >> waiting for a network round trip ack on every statement.
> 
> > I don't think so.  I'm pretty sure you can send multiple statements in a
> > single round trip.
> 
> You can, but that doesn't scale to megabytes of data (at least not
> well).

No, but I didn't think that was being talked about.  I was thinking
that network round trip time becomes a thousand times less important
when you're batching a thousand statements together.  This would seem
to imply that the effect of network latency can be reduced almost
arbitrarily.

> I think the big points are elimination of per-row network and
> transaction commit overhead ...

Well, if you start including transaction commit then you've just changed
semantics away from COPY.  I was implicitly thinking of what changes
when you keep the same semantics as COPY.

> but there are some other optimizations
> in the COPY path too.

Cool, I'll continue to prefer COPY then!

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] bulk inserts

2009-09-29 Thread Sam Mason
On Tue, Sep 29, 2009 at 09:11:19AM -0700, Alan Hodgson wrote:
> On Tuesday 29 September 2009, Sam Mason  wrote:
> > I'm pretty sure you can send multiple statements in a
> > single round trip.  libpq is defined to work in such cases anyway:
> >
> >   http://www.postgresql.org/docs/current/static/libpq-exec.html
> 
> I'm sure you probably _can_, but how many programming loops do so?

It's not a very big sample, but I'm pretty sure I do! :)

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] computed values in plpgsql

2009-09-29 Thread Sam Mason
On Tue, Sep 29, 2009 at 06:30:42PM +0200, Pavel Stehule wrote:
> 2009/9/29 Sam Mason :
> > I may have got that wrong somewhere else.
> 
> I afraid so this technique is very buggy. You need unpacked serialised
> record.

Hum, I'm not sure what an "unpacked serialised record" is or why I'd
need one.

> And the result have to be valid sql literal.

I'm asking PG to generate one for me, and if it doesn't know what a
valid literal is I don't know who does.  Here's a more complete example:

  CREATE TABLE t (name varchar, addr varchar);
  CREATE TABLE s (name varchar, addr varchar);

  CREATE OR REPLACE FUNCTION trig () RETURNS trigger AS $$
    BEGIN
      EXECUTE 'INSERT INTO s (SELECT (t '||quote_literal(new)||').*);';
      RETURN NULL;
    END $$ LANGUAGE plpgsql;

  CREATE TRIGGER trig BEFORE INSERT ON t
    FOR EACH ROW EXECUTE PROCEDURE trig();

  INSERT INTO t VALUES ('Pavel Stehule','Benesov');

  SELECT * FROM s;

This does the right thing for me in both 8.3 and 8.4, it would also seem
as though it's easy to apply this to the problem the OP was having.

> you cannot apply quote literal on two or more columns. I thing, so
> this isn't possible now.

Maybe I mis-interpret the problem?

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] bulk inserts

2009-09-29 Thread Sam Mason
On Tue, Sep 29, 2009 at 08:45:55AM -0700, Alan Hodgson wrote:
> On Tuesday 29 September 2009, Sam Mason  wrote:
> > it's faster is because
> > parsing CSV data is easier than parsing SQL.
> >
> > At least I think that's the only difference; anybody know better? 
> 
> I think a big reason is also that the client can stream the data without 
> waiting for a network round trip ack on every statement.

I don't think so.  I'm pretty sure you can send multiple statements in a
single round trip.  libpq is defined to work in such cases anyway:

  http://www.postgresql.org/docs/current/static/libpq-exec.html

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] computed values in plpgsql

2009-09-29 Thread Sam Mason
On Tue, Sep 29, 2009 at 05:42:37PM +0200, Pavel Stehule wrote:
> you cannot use double quotes. It's not php.

Normally yes, but *inside* literals you do indeed want double quotes.


I think the OP wants to be using quote_literal here.  I.e. instead of:

  execute 'insert into foo_something select (''' || new::text || '''::foo).*';

it wants to be closer to:

  execute 'insert into foo_something select (foo ' || quote_literal(new) || ').*;';

but it's a bit fiddly and I may have got that wrong somewhere else.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] query is taking longer time after a while

2009-09-29 Thread Sam Mason
On Tue, Sep 29, 2009 at 03:13:49PM +0200, Brian Modra wrote:
> 2009/9/29 Sam Mason :
> > Plain vacuum should allow things to reach a steady state after
> > a while, 
>
> If there are a lot of deletes, then likely the index parameters are
> not the best.

My interpretation of the OPs problem was that the inserts and deletes
were happening at similar rates.  Thus this won't be a problem.

> ANALYSE yourtable;
> 
> Then, reindex (or create new index followed by drop index and rename -
> if you want to leave the index online).

Analyse is just about collecting statistics for the planner; I'm not
sure why you'd want to run it before a reindex.  Autovacuum was being
run, so it's not going to make much difference, is it?

> > Why not just do:
> >
> >   REINDEX TABLE yourbigtable;
> 
> Thats OK if the table can be taken offline. REINDEX locks the index
> while in progress.

Good point, forgot about that.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] query is taking longer time after a while

2009-09-29 Thread Sam Mason
On Tue, Sep 29, 2009 at 02:25:52PM +0200, Brian Modra wrote:
> 2009/9/29 tomrevam :
> > My DB is auto-vacuuming all the time. The specific table I'm talking about
> > gets vacuumed at least every 2 hours (usually a little more frequently than
> > that).
> > Deletes are happening on the table at about the same rate as inserts (there
> > are also some updates).
> 
> The index quite likely is in a poor state.

Really?  Plain vacuum should allow things to reach a steady state
after a while; doing a large delete will put things out of kilter, but
that doesn't sound like the case here.  Vacuum full can also cause
things to go amiss, but if it's just regular vacuums then things should
be OK.

What do you get out of "vacuum analyse verbose" for this table?

> You could try this:
> 
> analyse 
> create index ... (same parameters as existing index)
> delete the old index.
> rename the new index to the same name as the old one
> repeat this for all indexes.

Why not just do:

  REINDEX TABLE yourbigtable;

No need to worry about rebuilding foreign key constraints or anything
like that then.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Using Insert - Default in a condition expression ??

2009-09-29 Thread Sam Mason
On Tue, Sep 29, 2009 at 04:04:46AM -0700, Postgres User wrote:
> >> > I'm trying to write an INSERT INTO statement that will use a DEFAULT
> >> > value when an input parameter is null.

 [ workaround given ]

> Again, this approach works for a simple example.  But for a larger
> function with any number of input params and multiple columns with
> default values, it's not practical for an ORM code generator.

Just to reiterate what Richard said; this is really the duty of the ORM.

If you're telling the database to insert a NULL then that's what the
database should do.  If you tell it to insert a DEFAULT then that's
what it should do.  The fact that your ORM doesn't distinguish between
the two is a bug in your ORM and should really be fixed there.

One of PG's design mantras is to do (exactly) what the user says or
tell them why it's not possible.  Silently changing NULLs into DEFAULT
values doesn't seem like correct behavior and will generally come back
and bite you later on---for example, what if the user really does want
to insert a NULL value into a column that has a non-NULL DEFAULT value?
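
The two spellings really are different; with a hypothetical table:

  CREATE TABLE foo (a int, b int DEFAULT 7);
  INSERT INTO foo (a, b) VALUES (1, DEFAULT);  -- b becomes 7
  INSERT INTO foo (a, b) VALUES (2, NULL);     -- b really is NULL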

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] bulk inserts

2009-09-29 Thread Sam Mason
On Mon, Sep 28, 2009 at 08:33:45PM -0400, Martin Gainty wrote:
> INSERTS/UPDATES are historically slow especially with autocommit is
> on (implied autocommit on) the Database writer actually stops any
> processing and applies that one record to the database

That seems to be overstating the issue somewhat.  Each connection is
still independent and (assuming no locks are taken) will carry on as
such.

> Most bulk operations such as import/export and copy are well worth
> their weight as they apply en-masse before any commit ..

?? I'm not sure what you're implying about the semantics here, but it
doesn't seem right.  COPY doesn't somehow break out of ACID semantics;
it's only an *optimization* that allows you to get large quantities of
data into the database faster.  The main reason it's faster is that
parsing CSV data is easier than parsing SQL.

At least I think that's the only difference; anybody know better?

> remember the DB actually stops flushes its buffers to Disk and then
> resumes..

The DB as a whole does not stop if you issue a commit; just your
session/connection.

> the only solution here is to disable autocommit but be wary
> you may have 100's of statements waiting to be commited and then
> someone does a quit on your session..all your work is lost

I'm not sure what you're saying here.  These are normal transactional
semantics and are what all ACID databases are specified to do.  You need
to issue a "COMMIT" for data to be committed.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] bulk inserts

2009-09-28 Thread Sam Mason
On Mon, Sep 28, 2009 at 04:35:53PM -0500, Dave Huber wrote:
> One assumption I am operating under right now is
> that the format of the binary file is the same as the buffer in
> PQputCopyData, including the header.  If I am wrong, could someone
> please let me know? Thanks,

I've always used ASCII representations of the data; no need to worry
about formats then.  Not sure what sort of performance considerations
you have, but it's certainly less of a worry for me.  I'm reasonably
sure the binary format changes more often than you'd like, maybe
something like:

  http://libpqtypes.esilo.com/

would help.  I can't see anything about COPY support, but it should help
with other things.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] bulk inserts

2009-09-28 Thread Sam Mason
On Mon, Sep 28, 2009 at 10:38:05AM -0500, Dave Huber wrote:
> Using COPY is out of the question as the file is not formatted for
> that and since other operations need to occur, the file needs to be
> read sequentially anyway.

Just to expand on what Martin said; if you can generate a set of EXECUTE
commands, you can certainly generate a COPY command to insert the same
data.  The advantage is a large drop in parse time for inserting larger
numbers of rows.  As you're saying you want to insert 500 rows, I'd
suggest at least trying to get COPY working.
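
For example, rather than 500 INSERTs the client can send a single COPY
in the default tab-separated text format (made-up table; the columns
are separated by tab characters):

  COPY readings (ts, channel, value) FROM STDIN;
  2009-09-28 10:00:00	1	0.25
  2009-09-28 10:00:01	2	0.5
  \.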

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Newbie's question: How can I connect to my postgresql-server?

2009-09-28 Thread Sam Mason
On Mon, Sep 28, 2009 at 11:55:51AM +0700, Ricky Tompu Breaky wrote:
> After I found the solution of my problem and again read the postgres
> manual, I've understood you're correct that I tried "too much" as an
> initial step for a newbie like me.

Sorry it wasn't as easy as it could be and I hope your experiences are
exceptional and most people have an easier time initially!


To the rest of pg-general (and should I be ccing pgsql-www ?); would it
be worth putting a little survey on the front of the PG web-site to ask
how people get on initially or would the signal to noise ratio be too
low to be useful?

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] PostgreSQL reads each 8k block - no larger blocks are used - even on sequential scans

2009-09-28 Thread Sam Mason
On Sun, Sep 27, 2009 at 07:22:47PM -0600, Scott Marlowe wrote:
> >> dd if=/dev/zero of=test.txt bs=8192 count=1310720 conv=fdatasync
> >> 10737418240 bytes (11 GB) copied, 169.482 s, 63.4 MB/s
> >>
> >> dd if=test.txt of=/dev/null bs=8192
> >> 10737418240 bytes (11 GB) copied, 86.4457 s, 124 MB/s
> >
> > These look slow.

> They are slow, they are not atypical for RAID5; especially the slow
> writes with SW RAID-5 are typical.

Wow, no wonder it's shunned so much here!  I'd not realized before that
it incurred such a hit.

> I'd try a simple test on a 2 or 3 disk RAID-0 for testing purposes
> only to see how much faster a RAID-10 array of n*2 disks could be.
> The increase in random write performance for RAID-10 will be even more
> noticeable.

I was thinking that the higher the bandwidth the IO subsystem could
push the data through, the more important a larger block size would
be--less to and fro between the kernel and userspace.  If the OP reported
considerably higher CPU usage than expected then he could try rebuilding
with larger block sizes to see if it helps.

I'm assuming that PG only issues block sized reads?  How does changing
block size affect index access performance; does it slow it down because
it has to pull the whole block in?

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] PostgreSQL reads each 8k block - no larger blocks are used - even on sequential scans

2009-09-27 Thread Sam Mason
On Sun, Sep 27, 2009 at 10:01:27PM +0100, Sam Mason wrote:
> Tested by turning on "\timing" mode in psql, dropping
> caches and running:
> 
>   SELECT 715833344 / 7597.216 / 1024;

Help, I can't do maths!  This is overestimating the performance and
should be:

  SELECT 715833344 / 7597.216 / 1024 / 1024 * 1000;

After a few more runs to increase confidence, the read performance is
87.17MB/s with a stddev of 2.8.  That seems more reasonable; it should
*not* have been going above 90MB/s as often as it was.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] PostgreSQL reads each 8k block - no larger blocks are used - even on sequential scans

2009-09-27 Thread Sam Mason
On Sun, Sep 27, 2009 at 09:04:31PM +0200, Gerhard Wiesinger wrote:
> I'm talking about 2 cases
> 1.) Sequential scans
> 2.) Bitmap index scans
> which both hopefully end physically in blocks which are after each other 
> and were larger block sizes can benefit.

Unfortunately it's all a bit more complicated than you hope :(
Sequential scans *may* benefit from larger block sizes, but not much.
Your testing below doesn't seem to test this at all though.

Bitmap index scan will still be accessing blocks in a somewhat random
order (depending on how much correlation there is between the index and
physical rows, and what the selectivity is like).  The result of any
index scan (bitmap or otherwise) must come back in the correct order
(PG is designed around this) and the best idea to speed this up has
been Greg's read-ahead patch.  This pushes more information down into
the kernel so it can start reading the blocks back before PG actually
gets to them.  They are still going to be somewhat out of order so, in
the general case, you're going to be limited by the seek speed of your
disks.


> Detailed benchmarks are below, the original one from PostgreSQL have 
> already been posted.

Which was saying what?  You were getting 32MB/s and 53MB/s from what?

As a quick test, maybe:

  create table benchmark ( i integer, j text, k text );
  begin;
  truncate benchmark;
  insert into benchmark
    select generate_series(1,1024*1024*10),
           '0123456789abcdef', '0123456789abcdef';
  commit;

The first run of:

  select count(*) from benchmark;

Will cause the "hint" bits to get set and will cause a lot of writing to
happen.  Subsequent runs will be testing read performance.  My simple
SATA disk at home gets ~90MB/s when tested hdparm, which I'm taking as
the upper performance limit.  When I perform the above query, I see the
disk pulling data back at 89.60MB/s (stddev of 2.27) which is actually
above what I was expecting (there's a filesystem in the way). CPU usage
is around 10%.  Tested by turning on "\timing" mode in psql, dropping
caches and running:

  SELECT 715833344 / 7597.216 / 1024;

Where 715833344 is the size of the file backing the benchmark table
above and 7597.216 is the time taken in ms.

> http://pgfoundry.org/projects/pgiosim/

This seems to just be testing seek performance, not sequential
performance.

> dd if=/dev/zero of=test.txt bs=8192 count=1310720 conv=fdatasync
> 10737418240 bytes (11 GB) copied, 169.482 s, 63.4 MB/s
> 
> dd if=test.txt of=/dev/null bs=8192
> 10737418240 bytes (11 GB) copied, 86.4457 s, 124 MB/s

These look slow.  RAID5 isn't going to be amazing, but it should be
better than this.  I'd spend some more time optimizing your system
config before worrying about PG.  If I can read at 90MB/s from a single
stock SATA drive you should be almost hitting 200MB/s with this, or
300MB/s in a RAID1 across three drives.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] PostgreSQL reads each 8k block - no larger blocks are used - even on sequential scans

2009-09-27 Thread Sam Mason
On Sun, Sep 27, 2009 at 06:05:51PM +0200, Gerhard Wiesinger wrote:
> A google research has shown that Gregory Stark already worked on that issue 
> (see references below) but as far as I saw only on bitmap heap scans.

Greg Stark's patches are about giving the IO subsystem enough
information about where the random accesses will be ending up next.
This is important, but almost completely independent from the case
where you know you're doing sequential IO, which is what you seem to be
talking about.

> I think this is one of the most critical performance showstopper of 
> PostgreSQL on the I/O side.

PG's been able to handle data as fast as it can come back from the disk
in my tests.  When you start doing calculations then it will obviously
slow down, but what you were talking about wouldn't help here either.

Then again, I don't have a particularly amazing IO subsystem.  What
sort of performance do your disks give you and at what rate is PG doing
sequential scans for you?

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Newbie's question: How can I connect to my postgresql-server?

2009-09-27 Thread Sam Mason
On Sat, Sep 26, 2009 at 03:00:41PM +0700, Ricky Tompu Breaky wrote:
> You've solved several of my problems. But now I'm stuck on another
> problem, which I know is easy for anybody else but difficult for a
> postgres starter like me.

A general observation: you seem to be making things awfully complicated.
It may be worth going with the defaults of assuming that PG
usernames and Unix usernames are the same initially.  Once you've got
a bit more experience then it may not seem so complicated.  The main
reason I'm saying this is that I *very* rarely have to fiddle with these
sorts of things and yet they always do the "right" thing when I try.

Maybe there are other things going on that are getting in the way of
solving the real issue that will go away with more experience.

> I did these steps:
> "
> 1. I created a new opensuse11.1-linux login account + its password
> (username: ivia) with YaST2;

PG users and Unix users/accounts are unrelated so this step is redundant
(think of large sites with thousands of database users).  If you've
always got Unix user accounts why not just use the ident auth as
default?

> 2. i...@sussy:~> su postgres -c psql postgres
> 3. postgres'# ALTER USER postgres WITH PASSWORD '<>';
> ALTER ROLE
> postgres=# ALTER USER ivia WITH PASSWORD '<>';
> postgres'# 
> 4. sussy:/etc # cat /var/lib/pgsql/data/pg_hba.conf
> host all all 0.0.0.0/0 md5

Were these settings in place when you ran the initial connection with the
postgres account to change people's passwords?  If so, I can't see how
it would work.  Maybe you are changing the "wrong" config file.

Try putting something invalid into the config file and check to see if
it starts up or gives an error.

> 7. sussy:/etc # psql -h 127.0.0.1 -U ivia -W
> Password for user ivia: 
> psql: FATAL:  Passwort-Authentifizierung für Benutzer »ivia«
> fehlgeschlagen (my translation: Password-Authentication for user »ivia«
> failed)

It's obviously using password auth; are you sure you got the passwords
right?  You haven't got a backslash in the password, have you?  It'll need
to be escaped in the initial creation bit if you do.  A simple ASCII
password such as 'test' may be a good place to start.

> Why can I not login with 'iVia' to my postgresql? Is it because I use
> 'md5()' but not blowfish as I remember OpenSuSE11.1 use 'blowfish()'
> as its default password encryption. But AFAIK, there's nothing to do
> with the RDBMS Encryption (PostgreSQL in my case) and the Host OS
> password encryption method.

Yup, as far as I can tell this shouldn't have any effect.  Then again,
I don't use SuSE and it's possible (though very unlikely) that they
changed this.


  Sam

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] generic modelling of data models; enforcing constraints dynamically...

2009-09-25 Thread Sam Mason
On Fri, Sep 25, 2009 at 11:01:02AM -0700, Ron Mayer wrote:
> Sam Mason wrote:
> > It all depends on the problem domain of course, but this seems to work
> > OK for us!  I really want to hack Samba around so that the users can
> > view the files directly from inside the database, but I'm not sure how
> > good an idea this really is.
> 
> "hack Samba"?   Wouldn't it be easier to use one of the database-as-a
> filesystem FUSE bindings and run stock samba over that?

Huh, that would indeed be much easier.  I hadn't thought about this for
a while and Rob's post reminded me.  I don't think FUSE existed when I
started thinking about it and as all our clients are Windows boxes it
didn't matter at the time.

> The perl Fuse::DBI module's example  sounds pretty similar to the
> system you described where the "file" seems to be a column in a table.
> http://www.rot13.org/~dpavlin/fuse_dbi.html

FUSE looks pretty easy to get going and I think I'd want more control
over how files were presented than this gives so I'd probably end up
rolling my own code.  Thanks for pointing out that FUSE though, not sure
why I'd not thought of it before.  I'll probably still never get around
to it, but maybe I will!

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [HACKERS] libpq port number handling

2009-09-25 Thread Sam Mason
On Fri, Sep 25, 2009 at 09:29:24AM +0300, Peter Eisentraut wrote:
> On Thu, 2009-09-24 at 20:36 -0400, Tom Lane wrote:
> > BTW, are port numbers still limited to 16 bits in IPv6?
> 
> Port numbers are in TCP, not in IP.

I'd checked that it should work with IPv6, but I hadn't realized that
it was because ports were at a different level of abstraction.  This
mailing list is good for otherwise obscure details like that!

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] generic modelling of data models; enforcing constraints dynamically...

2009-09-25 Thread Sam Mason
On Thu, Sep 24, 2009 at 11:07:31PM +0200, InterRob wrote:
> What I'm trying to do is to facilitate different fieldwork methodologies for
> archaeological research (on project basis); there is no final agreement on
> data structure and semantics; however, on a meta-level all choices are
> rational and can be modelled... In fact, all models can be related to each
> other: that's where the "hybrid" part comes in: I wish to implement the
> common denominator (90%) and then further extend this, enabling specific data
> model implementations -- including checks for data integrity.

In my experience it depends on how technically competent your users
are.  Most of mine are fine working with "their own" data files/formats
and only want the data in the database to keep track of the larger
structural stuff.

I therefore tend to just leave their data as opaque blobs (stored in
large objects, as they're reasonably small) of data and only pull out
the parts of it that are needed to keep the other parts of the projects
happy.  That way I can make sure the bits the database takes care of can
be appropriately designed and the users get to keep their data exactly as
they want.

To support this I've written various bits of code that get automatically
run when users insert their data files to pull them out into the
appropriate tables.  The detailed bits of the structure are of course
missed, but most of the time this data isn't needed and when it is they
want the rest of the original (normally proprietary binary file formats
that I've had to reverse engineer) file so that their program can figure
out what's going on.

It all depends on the problem domain of course, but this seems to work
OK for us!  I really want to hack Samba around so that the users can
view the files directly from inside the database, but I'm not sure how
good an idea this really is.

The bigger datasets (1GB+) tend to be nicely structured, so they get
handled specially.

> As soon as that works, it becomes possible to record changes at row-level --
> providing access to data-snapshots in time.

I think these are what my blobs are...

> Furthermore, it becomes possible
> to build upon this central database automated tools for management and
> filing of information and different modes of data entry (including
> webbased)...

...and this is what I'd call my structural bits.

> The thing is: altering table structures (changes should be limited to adding
> columns) is required on an ad hoc basis and the End User should be able to do
> so...

I generally find it's easier if I'm involved in that.  Maybe it's just
my users!

> I guess that requires some over engineering... ?

By "over engineering" I was meaning that you seem to be trying to solve
a more complicated problem than is necessary.  There will be some
essential complexity inherent in any problem, but it's the job of every
engineer (software or hardware) to ensure that only minimal amounts of
incidental complexity are introduced.


In my case the "important" thing is to make sure that we know the
state of what's going on in the projects.  I can do this by getting a
combination of data from the user (through traditional means) and by
pulling apart their data files.  The "incidental complexity" I've added,
that of writing fiddly little programs to interpret their files, seems
to be better than getting the users to input the data twice; once in
their programs and once into the database.

In your case you've introduced this strange new EAV style design and the
constraint system on top of it.  The benefits of this design may well
outweigh the costs of developing it, but I have a feeling it may be
easier to "side-step" the problem somehow.

That all got a bit longer than I was expecting, but I hope it's useful!

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] generic modelling of data models; enforcing constraints dynamically...

2009-09-24 Thread Sam Mason
On Thu, Sep 24, 2009 at 10:33:37PM +0200, InterRob wrote:
> I came to think of another option: putting additional columns (that is:
> addittional to the default set of fields) in xml, in a column that is part
> of row (=object) it belongs to.
> Any body has done so before? Any body has experience with XML schema
> validation within PostgreSQL?

Sorry, but this was sounding a little over-engineered before, and it seems to be
blowing out of proportion now.  By whom and how are these (immensely
complicated) rule sets going to be maintained? how is using XML going to
make this any easier than using the tools native to the database?

If they're validated inside the database then it's going to be done by a
DB admin anyway, or am I missing something?  If they're done by the DB
admin, isn't it easy to just use the tools they're used to?

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] generic modelling of data models; enforcing constraints dynamically...

2009-09-24 Thread Sam Mason
On Thu, Sep 24, 2009 at 09:23:35PM +0200, Rob Marjot wrote:
> SELECT doesComply('relationname', keyValues.*) FROM (VALUES('col1',
> CAST(col1 AS TEXT)), VALUES('col2', CAST(col2 AS TEXT))) AS
> keyValues(the_key, the_value);
> 
> The function "doesComply()" will then process the CONSTRAINTS table and
> raise an Error if the new / updated row does not fit...

I'd have a set of doesComply functions, with the first two parameters
as you have them, but overloaded to support different datatypes
specifically.  Something like:

  CREATE FUNCTION doesComply(_rel TEXT, _key TEXT, _val INT) ...
  CREATE FUNCTION doesComply(_rel TEXT, _key TEXT, _val DATE) ...
  CREATE FUNCTION doesComply(_rel TEXT, _key TEXT, _val TEXT) ...
  CREATE FUNCTION doesComply(_rel TEXT, _key TEXT, _val NUMERIC) ...

And then have a set of "attribute" tables (one for each datatype) to
store the actual values in.  At least PG can do some type checking for
you that way.  Either that, or just leave them all as text to text
mappings in the database and only attempt to type things out in the
client code.

Not sure why you're doing the VALUES contortions as well, why not just:

  SELECT doesComply('relationname', 'col1', col2);

?

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Newbie's question: How can I connect to my postgresql-server?

2009-09-24 Thread Sam Mason
On Fri, Sep 25, 2009 at 02:47:23AM +0700, Ricky Tompu Breaky wrote:
> I'm sure that I used the correct password as "createuser" the
> account of 'ricky'.

By default; PG is set up not to use passwords in the "local" case.  This
is what the "ident" in the error is about.  Your PG username needs to be
the same as your Unix username or ident authentication will fail.

Have a look at:

  http://www.postgresql.org/docs/current/interactive/auth-pg-hba-conf.html

if you want to change it to use passwords as the authentication method
in the local case.
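
For example (my illustration; double-check against the docs above for
the exact syntax your version expects), the "local" line in pg_hba.conf
would change from something like:

  # TYPE  DATABASE  USER  METHOD
  local   all       all   ident

to:

  local   all       all   md5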


Note that you don't need to restart PG when adding/removing users,
but you do when changing the config files like pg_hba.conf and
postgresql.conf.  It's a "inside"/"outside" PG thing, users are stored
in the database and hence PG knows when they're added/removed, but the
config files are outside and PG doesn't know when to go looking for
changes.  There are plans to change this for pg_hba.conf, but you'll
have to wait a bit before that happens.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] libpq port number handling

2009-09-24 Thread Sam Mason
On Thu, Sep 24, 2009 at 07:57:55PM +0100, Sam Mason wrote:
> > postg...@sussy:/root> createuser -D -p ricky
> 
> I don't think you want to be passing "-p" here; it's saying to use
> "ricky" as the port number, which fails (sounds like a bug if it doesn't
> complain about this) giving a port number of zero

Hum, why is PG doing an (unchecked) atoi on the user specified port
rather than leaving it up to getaddrinfo to resolve the port?  It would
seem to require changing UNIXSOCK_PATH to accept a string as the "port
number", which is probably a bit much of a change.

The included patch doesn't feel very nice, but is probably more acceptable.

-- 
  Sam  http://samason.me.uk/
--- src/interfaces/libpq/fe-connect.c~	2009-06-11 15:49:13.0 +0100
+++ src/interfaces/libpq/fe-connect.c	2009-09-24 20:48:53.0 +0100
@@ -817,7 +817,16 @@
 
 	/* Set up port number as a string */
 	if (conn->pgport != NULL && conn->pgport[0] != '\0')
+	{
 		portnum = atoi(conn->pgport);
+		if (portnum < 1 || portnum > 65535)
+		{
+			appendPQExpBuffer(&conn->errorMessage,
+			  libpq_gettext("invalid port number \"%s\" specified\n"),
+			  conn->pgport);
+			goto connect_errReturn;
+		}
+	}
 	else
 		portnum = DEF_PGPORT;
 	snprintf(portstr, sizeof(portstr), "%d", portnum);

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Newbie's question: How can I connect to my postgresql-server?

2009-09-24 Thread Sam Mason
On Fri, Sep 25, 2009 at 01:47:03AM +0700, Ricky Tompu Breaky wrote:
> I am still new in PostgreSQL. Usually I use MySQL on OpenSuSE.

Welcome over, PG is *normally* much better behaved than this and
generally gives very good error messages.

> postg...@sussy:/root> createuser -D -p ricky

I don't think you want to be passing "-p" here; it's saying to use
"ricky" as the port number, which fails (sounds like a bug if it doesn't
complain about this) giving a port number of zero, which is why you see
this:

>   connections on Unix domain socket "/tmp/.s.PGSQL.0"?

the ".0" is normally something like ".5432".

Maybe you want "-P" to be asking for the password?

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] generic modelling of data models; enforcing constraints dynamically...

2009-09-24 Thread Sam Mason
On Thu, Sep 24, 2009 at 06:28:28PM +0200, InterRob wrote:
> I am trying to implement the following:
> 
> In a database I wish to implement a GENERIC datamodel, thus on a meta-level.

Sounds like you're describing an EAV design:

  http://en.wikipedia.org/wiki/Entity-attribute-value_model

Designs like this tend to result in you getting very little support
from the database and get awkward as they grow.  If your problem really
is suited to this then go for it, but surprisingly few actually are.
I'd highly recommend using a more traditional design until you've been
through at least one big revision and then you'll know whether EAV
really fits.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Looking for way to replicate master db to multiple mobile databases

2009-09-24 Thread Sam Mason
On Wed, Sep 23, 2009 at 06:00:03PM -0400, Bryan Montgomery wrote:
> Thanks for the reply. This is a one way push to the slaves. In theory, there
> shouldn't be any conflicts  although I wouldn't swear to that. If
> there's a conflict, the master db should win. At the moment we just drop the
> tables, recreate the schema and reload the tables. However, some of the
> large tables literally take hours across the network, for maybe a few dozen
> changes.

Could you just replay WAL updates?  I.e. have two copies of the database
on each device, one as the mirror of the "master" and one as the "live"
version.  When you need to push the changes out, just push out the WAL
updates, dump the "live" version, copy the "master" into a new "live"
version and then replay the new WAL records.

Depends on how much disk space you have I guess.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to get variable out to shell script

2009-09-20 Thread Sam Mason
On Sun, Sep 20, 2009 at 04:49:03PM -0500, Alex Gadea wrote:
> ie: select into ct count(*) from table; 
> 
> I can't figure out how to make the ct variable available to the shell
> script once the external sql file completes execution.

Just tell psql not to output any surrounding stuff and then
redirect as normal:

  ct="`psql -tc 'select count(*) from table;'`"
  echo "$ct"

I expect it'll probably be easier to use a "real" scripting language
though; Python and Perl both have reasonable libraries for talking to
Postgres with.  Python would be something like:

  import psycopg2
  conn = psycopg2.connect("dbname='db1'")
  cur = conn.cursor()
  cur.execute("select count(*) from table;")
  [[n]] = cur.fetchall()  # unpack the single row, single column result

It's a bit of a fiddle to change over, but having something more
expressive than a Bourne shell can help.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] limiting query time and/or RAM

2009-09-17 Thread Sam Mason
On Thu, Sep 17, 2009 at 03:56:09PM -0400, Alan McKay wrote:
> Our databases are pretty big, and our queries pretty complex.

How big is "big" and how complex is "complex"?

An EXPLAIN (EXPLAIN ANALYSE if it's not going to hurt things) of some of
your common queries would help a lot here.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Substitutes for some Oracle packages

2009-09-17 Thread Sam Mason
On Thu, Sep 17, 2009 at 03:53:36PM -0400, Arnold, Sandra wrote:
> We are in the process of migrating from Oracle to PostgreSQL.  One of
> the things that we are needing to find out is what to use in place of
> Oracle supplied functionality such as "DBMS_OUTPUT" and "UTL_FILE".

For those of us who use PG and not Oracle a description of the
functionality you need would help; the artifacts of your current
implementation are less helpful.  That said:

  plpgsql can RAISE NOTICE, which looks similar to DBMS_OUTPUT

  most "untrusted" scripting languages (i.e. plperl or plpython) can
  touch the filesystem, which is what UTL_FILE seems to be about (see
  the sketches below)
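
For illustration, a couple of hedged sketches of my own (not drop-in
replacements for the Oracle packages):

  -- DBMS_OUTPUT-style messages from plpgsql
  CREATE OR REPLACE FUNCTION notice_demo() RETURNS void AS $$
  BEGIN
    RAISE NOTICE 'rows processed: %', 42;
  END;
  $$ LANGUAGE plpgsql;

  -- UTL_FILE-style file append via an untrusted language (plperlu)
  CREATE OR REPLACE FUNCTION append_line(path TEXT, line TEXT)
    RETURNS void AS $$
    open(my $fh, '>>', $_[0]) or elog(ERROR, "cannot open $_[0]: $!");
    print $fh "$_[1]\n";
    close($fh);
  $$ LANGUAGE plperlu;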

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] creation of foreign key without checking prior data?

2009-09-17 Thread Sam Mason
On Thu, Sep 17, 2009 at 08:34:20PM +0200, hubert depesz lubaczewski wrote:
> On Thu, Sep 17, 2009 at 01:22:52PM -0500, Peter Hunsberger wrote:
> > You can't have a foreign key that doesn't have relational integrity,
> > it is no longer a foreign key.
> 
> you do realize that having foreign key defined doesn't guarantee
> integrity?

The obvious cases would be software bugs and bad hardware.  What else?
Huh, how about users scribbling over PG's files!  Not sure where to
classify that, but it could happen either maliciously or accidentally as the
result of trying to clean up.

By having an override here you seem to be saying that you ultimately
trust yourself more than PG and/or the hardware its running on.  I
suppose the trade off is time you *may* spend cleaning up later on if
this isn't true vs. the time PG *will* spend verifying the constraint
now.  Interesting trade off, never really considered it before.

Sounds valid, though the general mantra here is that PG knows best.  Is
that always true?

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What kind of JOIN, if any?

2009-09-17 Thread Sam Mason
On Thu, Sep 17, 2009 at 10:29:11AM -0400, Paul M Foster wrote:
> I want all the records of the
> url table, one row for each record, plus the userid field that goes with
> it, for a specified user (paulf), with NULLs as needed

Maybe something like this?

  SELECT a.userid, u.url
  FROM urls u
    LEFT JOIN access a ON u.id = a.url_id AND a.userid = 'paulf';

> I can do *part* of this with various JOINs, but the moment I specify
> userid = 'paulf', I don't get the rows with NULLs.

I guess you were putting "userid = 'paulf'" into the WHERE clause;
that's the wrong place.  It needs to be up in the ON clause.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Unicode normalization

2009-09-17 Thread Sam Mason
On Thu, Sep 17, 2009 at 12:01:57AM -0400, Alvaro Herrera wrote:
> http://wiki.postgresql.org/wiki/Strip_accents_from_strings

I'm still confused as to why plpython doesn't know the server's encoding
already; seems as though all text operations are predicated on knowing
this and hence all but the most trivial code has to go out of its way to
be correct with respect to this.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Unicode normalization

2009-09-16 Thread Sam Mason
On Wed, Sep 16, 2009 at 09:35:02PM +0200, Andreas Kalsch wrote:
> CREATE OR REPLACE FUNCTION test (str text)
>  RETURNS text
> AS $$
>import unicodedata
>return unicodedata.normalize('NFKD', str.decode('UTF-8'))
> $$ LANGUAGE plpythonu;

I'd guess you want that to be:

  return unicodedata.normalize('NFKD', str.decode('UTF-8')).encode('UTF-8')

If you're converting from a utf8 encoding, you probably need to go
back again!  This could certainly be made easier though: PG knows what
encoding its strings are stored in, so why doesn't it work with unicode
strings by default?
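
I.e. the whole function becomes (same as your version, just with the
encode added; untested):

  CREATE OR REPLACE FUNCTION test (str text)
   RETURNS text
  AS $$
   import unicodedata
   return unicodedata.normalize('NFKD', str.decode('UTF-8')).encode('UTF-8')
  $$ LANGUAGE plpythonu;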

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] I need a Postgres Admin $130K + 20K in NYC Any Ideas?

2009-09-16 Thread Sam Mason
 [ crap, that wasn't supposed to go to the list ]

On Wed, Sep 16, 2009 at 06:30:46PM +0100, Sam Mason wrote:
> Martin, *please* stop responding publicly to this thread

I'm feeling very embarrassed now!

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] I need a Postgres Admin $130K + 20K in NYC Any Ideas?

2009-09-16 Thread Sam Mason
On Wed, Sep 16, 2009 at 12:54:07PM -0400, Martin Gainty wrote:
> Amen!

Martin, *please* stop responding publicly to this thread, you idiotic child.

You're digging yourself further and further into a mess, and making
yourself look horribly unprofessional at the same time.  That and
ignoring several well known guidelines on this list; no top posting, and
no html email.  Yes Ed's been doing the same, but at least he's being
professional now and has elevated himself above your childish taunts.

Your last, gratuitous, message resulted in about 90MB of email being
sent out (25KB message * 3653 subscribers) and wasting all their time
wondering whether they should or shouldn't send a complaining email.
All because your silly little ego dictated the need to say "amen".

Wait a day or two to calm down, read back through what was said and then
come to a sensible decision.  A public apology would probably save a bit
of face, but I'd wait a bit for that as well.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Non-check constraint def for a static list ?

2009-09-15 Thread Sam Mason
On Tue, Sep 15, 2009 at 08:01:19AM -0700, Gauthier, Dave wrote:
> How could I define a constraint on a column whose value I want to
> restrict to a static list of strings?  For example, a column called
> "gender" which cannot be NULL and must be either 'M' or 'F'.  I can do
> it with a "check" constraint, but I can't defer those constraints (the
> reason I'm asking for a non-check soln)

If this is to do with your similar questions from earlier: at what point
does the data become valid, and why don't you just delay putting the data
into the "real" table until then?
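
For what it's worth, the usual non-check workaround (my sketch, not
something from your earlier threads) is a small lookup table plus a
deferrable foreign key, since foreign keys can be deferred where check
constraints can't:

  CREATE TABLE genders ( gender CHAR(1) PRIMARY KEY );
  INSERT INTO genders VALUES ('M'), ('F');

  CREATE TABLE people (
    name   TEXT NOT NULL,
    gender CHAR(1) NOT NULL
      REFERENCES genders DEFERRABLE INITIALLY DEFERRED
  );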

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgresql.key secure storage

2009-09-14 Thread Sam Mason
On Mon, Sep 14, 2009 at 05:45:14PM +0200, Saleem EDAH-TALLY wrote:
> On Monday 14 September 2009 16:13:45, you wrote:
> > "Secure
> > wallet" is an exercise in self-delusion.
> 
> Not really. How can a user extract data from a container, by whatever
> name we call it, if he does not have the key to open it ?
> 
> Could you please instruct how to achieve this ?

Exactly the same way that libpq does--debuggers are powerful tools!

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgresql.key secure storage

2009-09-14 Thread Sam Mason
On Mon, Sep 14, 2009 at 09:40:47AM +0200, Saleem Edah-Tally wrote:
> >a separate application server
> 
> Well this can be a solution in a trustworthy and friendly environment, on 
> which I can't count.

There must be some mis-communication going on; the above is how things
tend to be done on the Internet (e.g. you have code in the web server
that hands off known "good" queries to the database) and the Internet
certainly isn't a "trustworthy and friendly environment".

> I would have been more at ease if libpq could manage a PKCS12 cert. or some 
> secure wallet/keystore that contains both the public and private keys for SSL 
> traffic. Neither the end user nor any admin would have to provide the 
> password 
> to access the keys inside the secured storage as I would have prefered to 
> hard-code the password. Hard coding is not an elegant solution I agree, but 
> leaving on the table an unencrypted private key is not something to do IMO.

As Tom said; if this is implemented in software on a conventional OS
then this can *never* work.  Even if you start using a smart card to do
the crypto, things don't get any better: how can you ensure that only the
"right" libpq process is talking to the card?

You need something you trust in the middle, and the conventional answer
to this is another server running code that only you control.  The PKI
stuff in libpq is about preventing man-in-the-middle attacks *between*
libpq and the PG server.  Your problem is that you don't trust the code
*calling* libpq and I can't think how PKI would help here.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgresql.key secure storage

2009-09-14 Thread Sam Mason
On Mon, Sep 14, 2009 at 12:17:55PM -0400, Tom Lane wrote:
> Sam Mason  writes:
> > On Mon, Sep 14, 2009 at 05:45:14PM +0200, Saleem EDAH-TALLY wrote:
> >> How can a user extract data from a container, by whatever
> >> name we call it, if he does not have the key to open it ?
> 
> > Exactly the same way that libpq does--debuggers are powerful tools!
> 
> Or even easier, modify the source code of libpq to print out the data
> after it's extracted it.

Yup, I suppose you could even modify libpq to rewrite the "good" SQL
into whatever the attacker wants--bypassing any secret-based scheme
completely.

> Security in an open-source world requires
> a different set of tools than security in a closed-source world.

Strictly speaking, a debugger is the universal mallet :)

Also, it shouldn't change much.  Security through obscurity is never
good; it is employed far too often, though thankfully (a bit) less in
open-source programs.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding integers ( > 8 bytes) to an inet

2009-09-10 Thread Sam Mason
On Thu, Sep 10, 2009 at 10:30:49AM -0400, Tom Lane wrote:
> Kristian Larsson  writes:
> > Do we
> > c) add a conversation between NUMERIC and INET so one can add a
> > NUMERIC to an INET just as is possible today with INTEGERs?
> 
> Proposal (c) is disingenuous because it ignores the fact that NUMERIC
> does not have (and cannot easily implement) most of the bitwise
> operations that people might think they want here.

Huh, good point.  What you want is a finite field; which looks exactly
like what the "bit" type is for.  Why not use that?

You can't cast them to or from numeric, which is a bit annoying, but
doesn't seem too hard in principle.
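
E.g. the bit type already comes with the bitwise operators (a couple of
illustrative one-liners of mine):

  SELECT B'11000000' & B'10100000';   -- => 10000000
  SELECT B'00000001' << 4;            -- => 00010000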

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding integers ( > 8 bytes) to an inet

2009-09-08 Thread Sam Mason
On Tue, Sep 08, 2009 at 05:58:01PM +0200, Kristian Larsson wrote:
> On Tue, Sep 08, 2009 at 11:37:02AM -0400, Tom Lane wrote:
> > I think the whole thing is a bit of a crock; adding integers to inet
> > addresses doesn't make a lot of sense logically.  Perhaps what is
> > really wanted is functions on CIDR net identifiers, for instance
[...]
> For me, as a network engineer, adding an integer to a inet feels
> quite natural. Inet is just another representation of a integer
> anyway... so I'd really not have a problem with having either a
> int16 or being able to add numerics to inets :) 

Indeed, it seems similar to the (somewhat arbitrary) decision that
adding an int to a date results in that many days being added to it.
Timestamp INTERVALs may be more flexible, but it's a useful shortcut
that I use quite often.

Something to convert to/from a NUMERIC value and INET would seem useful
as well.
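
E.g. the existing shortcuts look like this (the inet form takes a
bigint, if I remember right):

  SELECT DATE '2009-09-08' + 7;        -- => 2009-09-15
  SELECT '192.168.0.0'::inet + 256;    -- => 192.168.1.0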

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to store data on an external drive

2009-09-08 Thread Sam Mason
 [ please CC the mailing list and not the list owner, they answer
   mailing list questions not PG questions ]

On Tue, Sep 08, 2009 at 10:31:50AM -0400, Jia Chen wrote:
> Sam Mason wrote:
> >I don't think you need to go that far.  I'd just do an "initdb"
> >somewhere on the removable disk and then start PG pointing at where the
> >cluster was (i.e. postgres -D /media/disk/psqldata) and all should be
> >good.  I'd stay away from the official system startup scripts for PG.
> 
> If I am not mistaken, the paragraph above means that I don't need to 
> reinstall postgresql from source.

Yes; these are all standard programs included with Debian/Ubuntu
packages as normal.  Have a look through the man pages for:

  update-rc.d
  initdb
  postgres

> >Yup, the table data is very tied to the state of transactions and
> >other "system level" information, you need to keep everything together
> >unfortunately.  This is the price of having transactions with ACID
> >semantics.
> 
> However, this paragraph implies that I do need to put other "system 
> level" information together on the external drive. Do you mean that I 
> can put it together without re-installation?  If so, could you offer 
> some hints on how to do that? Thanks.

initdb creates a new PG cluster (i.e. the set of files that PG considers
to be a database).  You should direct this to be run on your external
disk and then get PG running using this cluster.  This is what "postgres
-D /media/disk/psqldata" does, i.e. start the postgres server.  Once
it's started you can connect to it from the "normal" clients, psql, odbc
whatever you want.

I expect all you need to run is:

  sudo /etc/init.d/postgresql-8.3 stop
  sudo update-rc.d -f postgresql-8.3 remove
  initdb /media/disk/psqldata
  postgres -D /media/disk/psqldata

from there on, all you need to do is to run the last line when you plug
the drive in.  Before you take the drive out, just hit the normal Ctrl+C
and PG will shut down cleanly.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Order By Date Question

2009-09-08 Thread Sam Mason
On Tue, Sep 08, 2009 at 06:29:28AM -0700, BlackMage wrote:
> I want to order by date and then by name, so I want the result A,B,C,D. The
> problem is when I do a 'SELECT * FROM table_name ORDER BY Event_Date, DESC',
> it includes the actual time (HH:MM:SS) so the order comes out B,A,D,C.
> 
> So what I am asking is how do I order only by the date? YYYY-MM-DD?

Casting to a date first is probably easiest, date_trunc would also work;
so one of:

  ORDER BY Event_Date::DATE, Name;
or
  ORDER BY date_trunc('day',Event_Date), Name;

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to store data on an external drive

2009-09-08 Thread Sam Mason
On Tue, Sep 08, 2009 at 08:52:23AM +, Jasen Betts wrote:
> On 2009-09-08, Jia Chen  wrote:
> > Now, I want to store PostgreSQL data on an external drive because I work 
> > both on my office machine and on my home machine a lot.  This way, I can 
> > always bring my external drive to either location and work on data 
> > stored on the drive.
> 
> start again and install from the source.
> the ubuntu package spreads the components of the postgres database about 
> putting some in /usr/bin, /usr/lib, /etc/postgres, /var/log, /var/lib and /etc

I don't think you need to go that far.  I'd just do an "initdb"
somewhere on the removable disk and then start PG pointing at where the
cluster was (i.e. postgres -D /media/disk/psqldata) and all should be
good.  I'd stay away from the official system startup scripts for PG.

> you should install them all onto the removable drive instead.
> it's not just the table data that is needed for the database to work.

Yup, the table data is very tied to the state of transactions and
other "system level" information, you need to keep everything together
unfortunately.  This is the price of having transactions with ACID
semantics.

One other thing to note is that PG tends to change the on-disk format
between "major" versions, where a major version is defined as the second
digit changing.  8.3 and 8.2 are different major versions and have
different on-disk formats, while 8.3.6 and 8.3.7 are not.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] how to use multiple schema's

2009-09-02 Thread Sam Mason
On Wed, Sep 02, 2009 at 10:52:07AM -0400, Himanshu Gupta wrote:
> Thanks for the response, but if I point it to all the schemas, I am just
> wondering how data insertion will work; I have multiple tables with the
> same name in these applications.

You do know that you can schema-qualify table names, don't you?  For
example, if you have a "users" table in schemas "app1" and "app2", you
can refer to the table in the first schema as:

  SELECT uid, name
  FROM app1.users;
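
You can also set the search_path so that unqualified names resolve to
whichever application's schema you want (my example):

  SET search_path TO app2, public;
  SELECT uid, name FROM users;   -- now refers to app2.users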

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Join efficiency

2009-09-02 Thread Sam Mason
On Wed, Sep 02, 2009 at 10:11:24PM +0900, tanjunhua wrote:
> thanks for your response.
> 
> >Maybe if you could describe what you want to do in English then the
> >query would make a bit more sense.
> I just want those records as the below rule:
> 1. the record of which uid is 2, status is more than 20, bpassword is 0 
> and realdelflag is 0 in tab_main;
> 1.1 the record of which kind is 1 in those that filtered through step1;
> 1.2 the record of which kind is 0 in those that filtered through step1;
> 1.2.1 the record of which delflag doesn't equal 0 in those filtered 
> through step1.2;
> 1.2.2 the record of which uid equal 2, printauth equal 2 or 3 and 
> bprtpermit equal 0 in tab_user left join those filtered through step1.2;
> 1.2.2.1 the record of which mode equal to 0 or 1 in tab_property and left 
> join  those filtered through step1.2.2 using id;

That's not a very "english" explanation.  That's just a translation of
what the code does, and I can do that easily enough myself.  What you're
missing is what the query "means" and the intuition as to how to go
about understanding what all that really means.

I'm guessing there's a clever combination of outer joins that would make
this go fast; I've tried to do the translation but it's all a bit
complicated to do in my head.  I think it's something like:

  SELECT COUNT(DISTINCT t1.id)
  FROM tab_main t1
    LEFT JOIN (SELECT TRUE AS userok
               FROM tab_user
               WHERE uid = 2 AND printauth IN (2,3) AND bprtpermit = 0
               GROUP BY 1) t2 ON TRUE
    LEFT JOIN tab_property t3 ON t1.id = t3.id AND t3.mode IN (0,1)
  WHERE t1.uid = 2
    AND t1.status >= 21
    AND t1.bpassword = 0
    AND t1.realdelflag = 0
    AND (t1.kind = 1 OR
         (t1.kind = 0 AND (t1.delflag <> 0 OR (t2.userok AND t3.id IS NOT NULL))));

but I'm not sure how much I'd trust that without some testing.

> It is my first time using a database in practice; could you give me more
> detail? Such as how to deal with the WHERE clause complication, and
> how to make the best choice from the analyze result? Would you supply some
> documents about postgresql performance?

There are lots of guides around on the internet; google is your friend!
Other than trying to rewrite your queries in different ways I'm not sure
what to suggest; it'll give you experience, which is the important thing.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] creating array of integer[] out of query - how?

2009-09-02 Thread Sam Mason
On Wed, Sep 02, 2009 at 03:47:53PM +0200, Massa, Harald Armin wrote:
> ibox=# CREATE TYPE intarr AS (arr int[]);
> CREATE TYPE
> ibox=#  SELECT array(
>   SELECT x::intarr FROM (
>   SELECT array[2,3]
>   UNION ALL
>   SELECT array[3,4]) x(a));
>?column?
> ---
>  {"(\"{2,3}\")","(\"{3,4}\")"}
> (1 row)
> 
>  the result seems to be an array with two strings containing escaped
> string-representations of arrays :)

I think that's what you want though--PG just formats the literal the
only way it knows how.  You can use the normal array indexing operators
to get the elements out that you want.  For example:

  SELECT (x.arr[1]).arr[1]
  FROM (SELECT e'{"(\\"{2,3}\\")","(\\"{3,4}\\")"}'::intarr[]) x(arr);

Or from your original query:

  SELECT (x.arr[1]).arr[1]
  FROM (
    SELECT array(
      SELECT x::intarr FROM (
        SELECT array[2,3]
        UNION ALL
        SELECT array[3,4]) x(a))) x(arr);

If you really do care how the literals are formatted, then you're going
to have to come up with your own data type and associated input and
output functions.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Join efficiency

2009-09-02 Thread Sam Mason
On Wed, Sep 02, 2009 at 02:31:46PM +0900, tanjunhua wrote:
> I 
> have the trouble that it costs me a lot of time to execute the select 
> syntax. The following is the select syntax and analyze result.

> EXPLAIN ANALYZE  SELECT count(Id) FROM (SELECT DISTINCT t1.Id AS Id FROM 
> tab_main t1, tab_user t2, tab_property t3 WHERE (t1.uid = 2 AND t1.status 
> >= 21 AND t1.bpassword = 0 AND t1.realdelflag = 0 AND (t1.kind= 1  OR 
> (t1.kind = 0 AND (t1.delflag <> 0 OR (t2.uid = 2 AND (t2.printauth = 2 OR 
> t2.printauth = 3) AND t2.bprtpermit = 0 AND t3.id = t1.id AND (t3.mode = 0 
> OR t3.mode = 1))) subt0;

That WHERE clause is far too complicated to allow PG's optimizer to have
a chance.  The "Nested Loop" running over sequential scans is a sign
that things aren't going to work out well.

OR clauses are the awkward ones; as you've got one at the top of your
WHERE clause it's going to force PG to do slow things.  It looks
somewhat strange as well, do you really want to join *every* row in
"tab_main" to *every* row in "tab_user" when "tab_main.kind" doesn't
equal zero?

Maybe if you could describe what you want to do in English then the
query would make a bit more sense.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] creating array of integer[] out of query - how?

2009-09-02 Thread Sam Mason
On Wed, Sep 02, 2009 at 11:50:38AM +0200, Massa, Harald Armin wrote:
>   select array(
> >  select x from (
> >  select array[2,3] as a
> >  union
> >  select array[3,4] as a ) x);
> >
> > ERROR:  could not find array type for datatype record
> 
> ... I remember being there before :( arrays of rows are also not available.

Doh, sorry, I forgot that that's 8.4 only.  Before that you must
create your own composite type.

> To all: is there a deeper reason why there is no array type for datatype
> record available?

Not enough demand :)

>   [1] http://www.postgresql.org/docs/current/static/sql-createtype.html
> >
> > Thanks for the hint with CREATE TYPE, especially the lines
> 
> """
> Whenever a user-defined type is created, PostgreSQL automatically creates an
> associated array type,
> """
> fills me with joy. ;)

Try:

  CREATE TYPE intarr AS (arr int[]);
  SELECT array(
    SELECT x::intarr FROM (
      SELECT array[2,3]
      UNION ALL
      SELECT array[3,4]) x(a));

and it should do the right thing in 8.3.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] creating array of integer[] out of query - how?

2009-09-02 Thread Sam Mason
On Wed, Sep 02, 2009 at 10:34:31AM +0200, Massa, Harald Armin wrote:
> postgres=# select array[[2,3],[3,4]];
>  array
> ---
>  {{2,3},{3,4}}
> 
> -> the result looks for me as an array of integer-arrays

No, as depesz says it's not doing that.  Depending on what you want out
you can get most of the way by having an array of ROWs that contain an
array of integers.  You just need to change:

> select array(
> select a from (
> select array[2,3] as a
> union
> select array[3,4] as a ) x);

to return "x" instead of "a" in the inner select.  Something like:

  select array(
    select x from (
      select array[2,3] as a
      union
      select array[3,4] as a ) x);

Getting the resulting tuples out again is a bit of a struggle and you
may be better off using a custom type.  Have a look at CREATE
TYPE[1] for this.

-- 
  Sam  http://samason.me.uk/
 
 [1] http://www.postgresql.org/docs/current/static/sql-createtype.html

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Aggregate function with subquery in 8.3 and 8.4.

2009-09-01 Thread Sam Mason
On Tue, Sep 01, 2009 at 10:05:44AM +0100, Sam Mason wrote:
> On Mon, Aug 31, 2009 at 04:02:43PM -0700, Sheng Cheng wrote:
> > I thought the following query would give me the same results in 8.4.0 and
> > 8.3.1. 
> 
> It should give the same results! This looks like a bug in 8.4 to me

I've just noticed this was (invisibly to me) cross-posted to -bugs as
well.  Probably best to reply there.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Aggregate function with subquery in 8.3 and 8.4.

2009-09-01 Thread Sam Mason
On Mon, Aug 31, 2009 at 04:02:43PM -0700, Sheng Cheng wrote:
> I thought the following query would give me the same results in 8.4.0 and
> 8.3.1. 

It should give the same results!  This looks like a bug in 8.4 to me: in
an attempt to optimize things it's pulling the CASE out from inside the
inner select and this is changing the semantics of the query.  Doing an
EXPLAIN in 8.4 gives the following:

 GroupAggregate  (cost=181.86..387.73 rows=200 width=64)
   ->  Merge Left Join  (cost=181.86..341.83 rows=8580 width=64)
         Merge Cond: (t1.f1 = (CASE WHEN (t2.f1 = '111'::text) THEN '111'::text ELSE t2.f1 END))
         ->  Sort  (cost=90.93..94.20 rows=1310 width=32)
               Sort Key: t1.f1
               ->  Seq Scan on t1  (cost=0.00..23.10 rows=1310 width=32)
         ->  Sort  (cost=90.93..94.20 rows=1310 width=64)
               Sort Key: (CASE WHEN (t2.f1 = '111'::text) THEN '111'::text ELSE t2.f1 END)
               ->  Seq Scan on t2  (cost=0.00..23.10 rows=1310 width=64)

While in 8.3 I get:

 GroupAggregate  (cost=198.23..378.88 rows=200 width=64)
   ->  Merge Left Join  (cost=198.23..333.48 rows=8580 width=64)
         Merge Cond: (t1.f1 = ts.f1)
         ->  Sort  (cost=90.93..94.20 rows=1310 width=32)
               Sort Key: t1.f1
               ->  Seq Scan on t1  (cost=0.00..23.10 rows=1310 width=32)
         ->  Sort  (cost=107.30..110.58 rows=1310 width=64)
               Sort Key: ts.f1
               ->  Subquery Scan ts  (cost=0.00..39.48 rows=1310 width=64)
                     ->  Seq Scan on t2  (cost=0.00..26.38 rows=1310 width=32)

Notice that the "Merge Cond" is working on the CASE expression in 8.4.
This is too late and is breaking things.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query and the number of row result

2009-08-31 Thread Sam Mason
On Mon, Aug 31, 2009 at 04:10:48AM -0700, bilal ghayyad wrote:
> Just writing a function in PostgreSQL itself (so it is SQL
> scripting). It is not from any development language.

You mean you're using plpgsql?  If so then the plpgsql docs are
available here and should help:

  http://www.postgresql.org/docs/current/static/plpgsql.html

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] A safe way to upgrade table definitions by using ALTER's

2009-08-28 Thread Sam Mason
On Fri, Aug 28, 2009 at 12:42:59AM +0400, Sergey Samokhin wrote:
> But how do programmers guarantee that ALTER's they have written will
> always be applied by administrators to the corresponding version of
> the database?

How about using the normal integrity constraints that databases provide?
Have some table like:

  CREATE TABLE version (
    feature  TEXT PRIMARY KEY,
    depends  TEXT REFERENCES version,
    inserted TIMESTAMP DEFAULT now()
  );

and at the start of every modification script put a row (or several)
into the table:

  BEGIN;
  INSERT INTO version (feature,depends) VALUES
    ('table foo',NULL);
  CREATE TABLE foo ( id TEXT PRIMARY KEY, value TEXT );
  COMMIT;

and then you can check to see if the constraints are met by doing:

  BEGIN;
  INSERT INTO version (feature,depends) VALUES
    ('table foo add startend dates','table foo');
  ALTER TABLE foo
    ADD COLUMN startdate TIMESTAMP DEFAULT now(),
    ADD COLUMN enddate   TIMESTAMP DEFAULT 'infinity';
  COMMIT;

Not sure if that's the sort of thing that you want/need but I don't
think there's a general solution to the problem.  Determining the
relevant context for this sort of thing is hard.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Getting the column to a which a sequence belongs.

2009-08-27 Thread Sam Mason
On Thu, Aug 27, 2009 at 09:18:57PM +0200, Thomas Kellerer wrote:
> I'm trying to extend the Postgres support in my SQL tool. I'm trying to 
> recreate the SQL for a sequence, and I wonder if there is a way to find 
> out the column to which a sequence "belongs". 

The information is all in the system catalogs; I've not had much
opportunity to fiddle with them so far but the following may be a start
to help get things out for you.

  SELECT c.relname, a.attname, t.relname
  FROM pg_class c, pg_depend d, pg_class t, pg_attribute a
  WHERE c.relkind = 'S'
    AND d.objid    = c.oid
    AND d.refobjid = t.oid
    AND (d.refobjid, d.refobjsubid) = (a.attrelid, a.attnum);

The first reference to "pg_class" can probably be dropped as you can
convert the names of tables/sequences into their oid by using literals
of type "regclass".  For example, to pull out all the column names from
table "foo", you can do:

  SELECT attname
  FROM pg_attribute
  WHERE attrelid = 'foo'::regclass;

Have a look here for docs:

  http://www.postgresql.org/docs/current/static/catalogs.html

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select data for current week only

2009-08-27 Thread Sam Mason
On Thu, Aug 27, 2009 at 11:09:36AM -0700, BlackMage wrote:
> The field I am using for sorting is a Date type with the format yyyy-mm-dd
> hh:mm:ss .

Values that look like that are normally stored in timestamp columns, not
date columns.  You wouldn't get the time part if it was just a date.
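
Easy to check from psql, e.g.:

  SELECT '2009-08-27 11:09:36'::date;        -- => 2009-08-27 (time gone)
  SELECT '2009-08-27 11:09:36'::timestamp;   -- => 2009-08-27 11:09:36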

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select data for current week only

2009-08-27 Thread Sam Mason
On Thu, Aug 27, 2009 at 08:36:45PM +0200, Thomas Kellerer wrote:
> BlackMage wrote on 27.08.2009 20:09:
> >I only want to select events happening for the current week(Mon-Sun).
> 
> SELECT *
> FROM the_table
> WHERE extract(week from the_date_column) = extract(week from current_date);

The OP leaves it somewhat open, but wouldn't date_trunc be better here?
Something like:

  SELECT * FROM the_table
  WHERE date_trunc('week',the_date_column) = date_trunc('week',CURRENT_TIMESTAMP);

Otherwise you'll end up getting values for other years as well as the
current one.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Creating index for convert text to integer

2009-08-26 Thread Sam Mason
On Wed, Aug 26, 2009 at 07:13:41AM -0700, xaviergxf wrote:
>   How can I create an index to index all the fields that have the type
> INTEGER, in the following table:
> 
> create type properties_types as enum('INTEGER', 'STRING', 'FLOAT');
> 
> create table properties_types(
>value text NOT NULL,
>value_type properties_types NOT NULL
> );

You can't create an enum that has the same name as a table, can you?

> how do I create an index for the integer types?
> 
> create index properties_types_index on properties_types ((value::integer))
> where value_type='INTEGER'

Yup, that should work.

> Can i use this select with the index?
> select valor from properties_types where value::integer<3

You need the where clause in there:

  SELECT value
  FROM properties_types
  WHERE value_type = 'INTEGER'
AND value::integer < 3;

This is generally considered pretty bad form though; there are lots
of discussions about "EAV" style designs that this seems similar to.
Slightly better would be creating your original table as:

  CREATE TABLE properties_types (
    value_type properties_type,
    value_int  INTEGER
      CHECK ((value_type = 'INTEGER') = (value_int IS NOT NULL)),
    value_text TEXT
      CHECK ((value_type = 'STRING') = (value_text IS NOT NULL)),
    value_float FLOAT8
      CHECK ((value_type = 'FLOAT') = (value_float IS NOT NULL))
  );

You can then just build a normal index on the appropriate columns and
run your queries the naive way.  Something like:

  SELECT *
  FROM properties_types
  WHERE value_int < 3;

Arranging things this way shouldn't take much (if any) more space and it
should run faster as it doesn't need to go converting between datatypes
the whole time.

This is still pretty bad form though and you'll get much more
leverage/help from PG if you arrange the tables so they reflect the
structure of the data you're really putting in.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No download of Windows binaries without registering?

2009-08-26 Thread Sam Mason
On Wed, Aug 26, 2009 at 08:58:17PM +0200, Magnus Hagander wrote:
> On Wed, Aug 26, 2009 at 20:45, Sam Mason wrote:
> > On Wed, Aug 26, 2009 at 08:11:58PM +0200, Thomas Kellerer wrote:
> >> If one goes directly http://www.enterprisedb.com/products/pgbindownload.do
> >> this is not necessary (which is what I expect), but as far as I can tell,
> >> there is no direct link to that page.
> >
> > hum,
> >
> >   http://www.postgresql.org/download/windows
> >
> > seems to point there.
> 
> Really? When i click that link, I get to the page that doesn't require
> registration...

um, that's what I said isn't it?  What I meant to say anyway was that
the link I gave contained a pretty prominent link to the page that
Thomas gave that doesn't require registration.

> Exactly which link do you click to get to the page that requires
> registration?

I didn't get any, that was the question I was trying to ask.  Maybe I
should have been clearer somehow? :)

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No download of Windows binaries without registering?

2009-08-26 Thread Sam Mason
On Wed, Aug 26, 2009 at 08:11:58PM +0200, Thomas Kellerer wrote:
> If one goes directly http://www.enterprisedb.com/products/pgbindownload.do 
> this is not necessary (which is what I expect), but as far as I can tell, 
> there is no direct link to that page. 

hum,

  http://www.postgresql.org/download/windows

seems to point there.  Maybe the link there from:

  http://www.postgresql.org/download/

could be clearer somehow?

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] It looks like transaction, but it isn't transaction

2009-08-26 Thread Sam Mason
On Wed, Aug 26, 2009 at 05:06:27AM +0400, Sergey Samokhin wrote:
> There is one thing I find especially interesting: queries I pass to
> the pgsql:squery() are executed with some properties specific to
> transactions!

This behavior seems to be what libpq exposes by default:

  http://www.postgresql.org/docs/current/static/libpq-exec.html

Not sure if it's actually PG imposing these semantics or the libpq driver
itself.  I'd guess it's PG and that's why you're seeing the strange lack
of support for savepoints.

> I'm going to rely on this behaviour as "transactions without
> SAVEPOINT/ROLLBACK TO" in some cases but I'm a bit afraid of troubles
> that may appear in the future.

I always tend to bracket things in an explicit BEGIN+COMMIT; why
wouldn't you do this?
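
Something like this (a sketch, with a made-up table foo) gets you
savepoints back as well:

  BEGIN;
  INSERT INTO foo VALUES (1);
  SAVEPOINT sp;
  INSERT INTO foo VALUES (2);
  ROLLBACK TO SAVEPOINT sp;  -- undoes just the second insert
  COMMIT;                    -- the first insert is kept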

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Aggregate function with Join stop working under certain condition

2009-08-26 Thread Sam Mason
On Wed, Aug 26, 2009 at 11:17:10AM -0400, Naoko Reeves wrote:
> I am joining 4 tables (doc, bpt, con, emp) - the emp and doc relationship is
> one to many. I want only one doc record per emp, as the condition shown
> below:

[...]

> However, I want to add one more doc column but as soon as I add one, it
> tries to return all unique doc records. Could you tell me what I am doing
> wrong here please?

Descriptions of the problem are normally easier to understand than code;
but I *guess* what you want to do is to get the subject of the last
document created by each person and when it was created.  If that's the
case then DISTINCT ON is normally the easiest way.  Maybe something
like:

  SELECT b.bpt_key, e.emp_full_name, c.con_full_name,
d.doc_date_created, d.doc_subject
  FROM bpt b, emp e
LEFT JOIN con c ON e.emp_con_key = c.con_key
LEFT JOIN (
SELECT DISTINCT ON (doc_emp_key) doc_emp_key,
  doc_date_created, doc_subject
FROM doc
ORDER BY doc_emp_key, doc_date_created DESC) d
  ON e.emp_key = d.doc_emp_key
  WHERE b.bpt_emp_key = e.emp_key
AND b.bpt_com_key = 22
AND b.bpt_status  <> -1;
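
The DISTINCT ON (doc_emp_key) keeps just the first row for each
doc_emp_key, and the ORDER BY ... doc_date_created DESC makes that
first row the most recently created document for each person.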

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Import data from XML file

2009-08-26 Thread Sam Mason
On Wed, Aug 26, 2009 at 09:10:25AM -0700, Erwin Brandstetter wrote:
> How do you import data from an xml-file?

If they're all that small, put the files into the database as they are
and then use xpath[1] to pull them apart and turn them into something a
database understands.
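
For example (just a sketch; the table and element names are made up):

  CREATE TABLE raw_xml ( doc XML );

  -- pull the text out of each <name> element
  SELECT unnest(xpath('//item/name/text()', doc))
  FROM raw_xml;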

-- 
  Sam  http://samason.me.uk/

 [1] 
http://www.postgresql.org/docs/current/static/functions-xml.html#FUNCTIONS-XML-PROCESSING

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] cluster replication over only http protocol

2009-08-25 Thread Sam Mason
On Wed, Aug 26, 2009 at 01:13:48AM +0200, Szabolcs Márton wrote:
> my situation is:
> - I have a running/working Postgres database
> - I have to make another database exactly the same as the first (for
> development purposes)
> - the two databases have only(!) an http or https connection, nothing else!
> 
> Is there a method to set up a cluster (replication) using only the http
> protocol?

Are these two databases on different boxes?  Not sure how much it
matters either way; just write a simple CGI script that does a pg_dump
of the master database, then have the other box download that and feed
it into its own database every few hours.  Is that enough for
development purposes?  You could even pipe it through some crypto code
if you're worried about the data leaking.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to create a multi-column index with 2 dates using 'gist'?

2009-08-25 Thread Sam Mason
On Tue, Aug 25, 2009 at 07:39:26PM +0800, Fred Janon wrote:
> Basically I have an events table representing events with a duration
> (startdate, enddate). I was wondering if it would improve performance if
> I created a separate table (indexed as you suggested) with the date
> ranges (startdate, enddate) and pointed to that from my events table. That
> would eliminate the duplicate ranges, costing a join to find the events
> within a date range, but maybe improving the search performance for events
> that overlap a certain date range. Any feedback on that?

It depends on the sorts of queries you're going to be doing most often.

Not sure how best to explain when GiST is going to win, but think of a
rectangle with the start dates going along the top edge and the end
dates going down the side.  If you sort the values by start date, will
you end up with most of them on a diagonal, or will they be scattered
randomly around?  I.e. the less correlation between the start and end
dates, the better GiST will do relative to a btree index.  I think
that's right anyway!
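
One way to get a feel for that (just a sketch, assuming an events
table with those two columns) is to ask PG directly:

  SELECT corr(extract(epoch FROM startdate),
              extract(epoch FROM enddate))
  FROM events;

The closer that is to 1, the better a plain btree index should do.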

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to create a multi-column index with 2 dates using 'gist'?

2009-08-25 Thread Sam Mason
On Mon, Aug 24, 2009 at 05:24:59PM +0800, Fred Janon wrote:
> I am using 8.3 and pgAdmin III. I have a couple of tables using 2 DATE
> columns like 'startdate' and 'enddate' (just date, not interested in time in
> these columns). I have some queries (some using OVERLAPS) involving both
> 'startdate' and 'enddate' columns. I tried to create a multi column index
> using pgAdmin and it comes back with this error:
> 
> ERROR: data type date has no default operator class for access method "gist"
> HINT: You must specify an operator class for the index or define a default
> operator class for the data type.

I've not had the opportunity to try doing this, but it would seem to
require hacking some C code to get this working.  Have a look here:

  http://www.postgresql.org/docs/current/static/gist.html

> I searched the pdf docs and online without finding what an "operator class"
> for DATE would be. Would a multi-column index help in that case (OVERLAPS
> and dates comparison) anyway? Or should I just define an index for each of
> the dates?

An operator class bundles together various bits of code so that the
index knows which functions to call when it needs to compare things.

If you were creating a GiST index over a pair of dates to support
an "overlaps" operator, you'd have to define a set of functions that
implement the various checks needed.


Depending on your data you may be better off with just a multi-column
index and normal comparisons; I can't see how OVERLAPS could use
indexes as it does some strange things with NULL values.  The case
where a B-Tree index would win over GiST (this is an educated guess)
is when few of the ranges within the table overlap.  If that's the
case then I'd do:

  CREATE INDEX tbl_start_end_idx ON tbl (startdate,enddate);

to create the btree index (they're the default, so nothing else is
needed) and then write queries as:

  SELECT r.range, t.*
  FROM tbl t, ranges r
  WHERE t.startdate <= r.rangeend
AND t.enddate   >= r.rangestart;

if there are lots of overlapping ranges in the table then this is going
to do badly and you may need to start thinking about writing some C code
to get a GiST index going.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] how to return field based on field= NULL or not

2009-08-24 Thread Sam Mason
On Mon, Aug 24, 2009 at 12:54:31PM +0200, Alban Hertroys wrote:
> CASE
>   WHEN fieldA IS NOT NULL THEN fieldA
>   WHEN fieldB IS NOT NULL THEN fieldB
>   WHEN fieldC IS NOT NULL THEN fieldC
>   ELSE fieldD
>   END

BTW, the above expression is identical to:

  COALESCE(fieldA,fieldB,fieldC,fieldD)
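
For example:

  SELECT COALESCE(NULL, 'b', 'c');  -- returns 'b', the first non-NULL value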

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] join from array or cursor

2009-08-21 Thread Sam Mason
On Fri, Aug 21, 2009 at 01:58:34PM -0400, Merlin Moncure wrote:
> On Fri, Aug 21, 2009 at 1:13 PM, Sam Mason wrote:
> > On Fri, Aug 21, 2009 at 12:05:51PM -0400, Tom Lane wrote:
> >> We might be able to do that based on the row-returning-subselect
> >> infrastructure being discussed over here:
> >> http://archives.postgresql.org/message-id/4087.1250867...@sss.pgh.pa.us
> >
> > Not sure if I'm interpreting this correctly, but what would
> > the difference in semantics be between:
> >
> >  SELECT (SELECT 1,2);
> >
> > and
> >
> >  SELECT (SELECT (1,2));
> 
> The first form is not allowed because subqueries used that way must
> return only one column.  The second form works because the extra
> parens construct a row type, which gets around that restriction.

I was under the impression that the message Tom was pointing to was
about lifting this restriction.  I believe the use case was of being
able to do:

  UPDATE foo SET (a,b,c) = (SELECT 1,2,3);

if I'm reading things correctly.  I was expecting this to be generally
available where any sub-select was supported and hence I was wondering
what its semantics would be.  It seemed like an interesting corner case
and I'd not seen it discussed in the linked threads.

If I'm not reading things correctly then the rest is a moot point.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] join from array or cursor

2009-08-21 Thread Sam Mason
On Fri, Aug 21, 2009 at 12:05:51PM -0400, Tom Lane wrote:
> Sam Mason  writes:
> > ... PG should instead arrange that the expression
> > "t" is run exactly once and reuse the single result for all columns.
> 
> We might be able to do that based on the row-returning-subselect
> infrastructure being discussed over here:
> http://archives.postgresql.org/message-id/4087.1250867...@sss.pgh.pa.us

Huh, fun.  Not sure if I'm interpreting this correctly, but what would
the difference in semantics be between:

  SELECT (SELECT 1,2);

and

  SELECT (SELECT (1,2));

PG seems to make the distinction somewhat blurry at the moment.  For
example, upthread I did:

  SELECT (id((SELECT (1,2)::foo))).*;

and got back two columns, and yet when I do what I think is equivalent:

  SELECT x.*
  FROM (SELECT (1,2)::foo) x;

I get back my tuple, and not the values inside my tuple.


Should I be posting this to -hackers?

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Error inserting data to bytea column in 8.4

2009-08-21 Thread Sam Mason
On Fri, Aug 21, 2009 at 06:54:51PM +0300, Andrus Moor wrote:
> create temp table test ( test bytea );
> insert into test values(E'\274')
> 
> Causes error

Yup, you want another backslash in there, something like:

  insert into test values(E'\\274');

The first backslash is consumed when the SQL is parsed into a literal,
and the second when the literal is parsed into a bytea value.  You've
got the following transformation going on:

  SQL -> Literal -> Bytea value

The character values at each stage go like this:

  SQL : 5c 5c 32 37 34
  Literal : 5c 32 37 34   (because the '\\' has been unescaped to a '\')
  Bytea   : bc
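
A quick way to check you're getting what you expect:

  SELECT octet_length(E'\\274'::bytea);  -- 1, i.e. the single octet 0xbc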

> In 8.2 this script runs OK.

Maybe it's in SQL_ASCII encoding?  NUL characters wouldn't work in your
8.2 database if that's the case.
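
You can check what encoding each database is using with:

  SHOW server_encoding;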

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

