Re: [HACKERS] Postgresql Caching

2006-10-16 Thread Shane Ambler

Harvell F wrote:

  Getting back to the original posting, as I remember it, the question 
was about seldom changed information.  In that case, and assuming a 
repetitive query as above, a simple query results cache that is keyed on 
the passed SQL statement string and that simply returns the previously 
cooked result set would be a really big performance win.


I believe the main point that Mark made was that the extra overhead is in 
the SQL parsing and query planning - this is the part that postgres won't 
get around. Even if you set up simple tables for caching, it still goes 
through the parser and planner and loses the benefits that memcached 
has. Or you fork those requests before the planner and lose the 
benefits of postgres.
The main benefit of using memcached is to bypass the parsing and query 
planning.


You will find there is more to SQL parsing than you first think. It 
needs to find the components that make up the SQL statement (tables, 
column names, functions) and check that they exist and can be used in the 
context of the given SQL, and that the given data matches the context it 
is to be used in. It needs to check that the current user has enough 
privileges to perform the requested task. Then it locates the data, 
whether that be in the memory cache, on disk or in an integrated version of 
memcached - this would also include checks to make sure another user 
hasn't locked the data to change it, and whether there exists more than 
one version of the data (committed and uncommitted) - and then sends the 
results back to the client requesting it.


  Registering each cache entry by the tables included in the query and 
invalidating the cache on a committed update or insert transaction to 
any of the tables would, transparently, solve the consistency problem.


That was part of my thinking when I made the suggestion of adding 
something like memcached into postgres.
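A minimal sketch of the result cache being discussed - keyed on the raw SQL string, with each entry registered under the tables it reads so a committed write invalidates it (illustrative Python, not PostgreSQL internals; all names here are made up):

```python
# Illustrative result cache keyed on the passed SQL statement string.
# Entries are registered per table; a committed write to any of those
# tables throws the dependent entries away.
class ResultCache:
    def __init__(self):
        self.results = {}         # sql string -> cached result set
        self.by_table = {}        # table name -> set of sql strings

    def put(self, sql, tables, result):
        self.results[sql] = result
        for t in tables:
            self.by_table.setdefault(t, set()).add(sql)

    def get(self, sql):
        return self.results.get(sql)      # None on a cache miss

    def invalidate_table(self, table):
        # Hook called when a transaction touching `table` commits.
        for sql in self.by_table.pop(table, set()):
            self.results.pop(sql, None)

cache = ResultCache()
cache.put("SELECT * FROM news WHERE day = 'today'", ["news"],
          [("today", "headline")])
assert cache.get("SELECT * FROM news WHERE day = 'today'") is not None
cache.invalidate_table("news")            # a committed INSERT into news
assert cache.get("SELECT * FROM news WHERE day = 'today'") is None
```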



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Postgresql Caching

2006-10-16 Thread mark
On Sun, Oct 15, 2006 at 06:33:36PM -0700, Jeremy Drake wrote:
  2) When updating a PostgreSQL record, I updated the memcache record
 to the new value. If another process comes along in parallel before
 I commit, that is still looking at an older view, cross-referencing
 may not work as expected.
 Shouldn't you be able to use 2-stage commit for this?  Prepare to commit,
 update the memcache record, then commit?  Or am I thinking of something
 else?

Two-stage commit makes the window of error smaller; it can't eliminate it.

I believe the window would be reduced to:

1) The transactions that are currently looking at older data, and:

2) Any memcache query that happens between the time of the PostgreSQL
   commit and the memcache commit.

It's a pretty small window. The question for my use case is whether, with
hundreds of people clicking on web links per second, somebody might happen
to hit the window. By setting the memcache expiry to 5 seconds instead
of the regular 60+, I seem to have eliminated all reported cases of the
problem. If the stored data is invalid, it only stays invalid for a
short time. I'm compromising accuracy for efficiency.
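The short-expiry compromise can be sketched in a few lines (illustrative Python; time is an explicit parameter to keep it deterministic, and `TTLCache` is a made-up stand-in, not memcached's API):

```python
# Sketch of the "short expiry" compromise: a racing writer can leave a
# stale value in the cache, but a small TTL bounds how long it survives.
class TTLCache:
    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}                        # key -> (value, expiry time)

    def set(self, key, value, t):
        self.store[key] = (value, t + self.ttl)

    def get(self, key, t):
        item = self.store.get(key)
        if item and t < item[1]:
            return item[0]
        self.store.pop(key, None)              # expired or missing
        return None

cache = TTLCache(ttl=5)
cache.set("user:42", "stale value", t=0)       # the losing side of the race
assert cache.get("user:42", t=3) == "stale value"   # visible, but briefly
assert cache.get("user:42", t=6) is None            # gone after 5 seconds
```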

The thing about all of this is, if what memcached is doing could be
done with consistency, PostgreSQL would probably already be doing it
right now...

Cheers,
mark

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED] 
__
.  .  _  ._  . .   .__.  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/|_ |\/|  |  |_  |   |/  |_   | 
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
   and in the darkness bind them...

   http://mark.mielke.cc/


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Postgresql Caching

2006-10-16 Thread mark
On Mon, Oct 16, 2006 at 05:59:05PM +0930, Shane Ambler wrote:
  Registering each cache entry by the tables included in the query and
 invalidating the cache on a committed update or insert
 transaction to any of the tables would, transparently, solve the
 consistency problem.
 That was part of my thinking when I made the suggestion of adding 
 something like memcached into postgres.

There is a valid suggestion in here, but I think the PostgreSQL gain
would come from caching query plans, and caching query plan results.

The query to query plan cache could map SQL statements (with parameters
specified) to a query plan, and be invalidated upon changes to the
statistical composition of any of the involved tables. The query plan to
query results cache would keep the results and first and last transaction
ids that the results are valid for.
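A toy model of the two caches just described (illustrative Python; every structure, key, and validity bound here is hypothetical):

```python
# Toy model: (sql, params) -> plan, invalidated when table statistics
# change, and plan -> (results, first_xid, last_xid), the transaction-id
# range for which the cached results are known to be valid.
plan_cache = {}      # (sql, params) -> (plan, stats_version_seen)
result_cache = {}    # plan -> (results, first_xid, last_xid)

def cached_plan(sql, params, stats_version, plan_fn):
    key = (sql, params)
    entry = plan_cache.get(key)
    if entry and entry[1] == stats_version:   # stats unchanged: reuse plan
        return entry[0]
    plan = plan_fn(sql, params)               # replan after ANALYZE etc.
    plan_cache[key] = (plan, stats_version)
    return plan

def cached_results(plan, current_xid, run_fn):
    entry = result_cache.get(plan)
    if entry and entry[1] <= current_xid <= entry[2]:
        return entry[0]                       # still inside the valid range
    results = run_fn(plan)
    # Valid from this xid onward; a real implementation would extend
    # last_xid until a write to an involved table closes the range.
    result_cache[plan] = (results, current_xid, current_xid)
    return results

calls = []
p1 = cached_plan("SELECT 1", (), 1, lambda s, a: calls.append(1) or "plan-v1")
p2 = cached_plan("SELECT 1", (), 1, lambda s, a: calls.append(1) or "plan-v2")
assert p1 == p2 == "plan-v1" and len(calls) == 1   # no replanning
assert cached_plan("SELECT 1", (), 2, lambda s, a: "plan-v2") == "plan-v2"
```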

Although it sounds simple, I believe the above would be very complicated
to pursue. The real PostgreSQL hackers (not me) have talked about it at
length over the time that I've been reading their mailing lists. They've
come up with good ideas, not all of which have been shot down. Nobody is
willing to tackle it, because it seems like a lot of effort for a problem
that can be mostly solved by application-side caching.

It's a subject that interests me - but it would take a lot of time, and
that's the thing that few of us have. Time sucks. :-)

Cheers,
mark

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED] 
__
.  .  _  ._  . .   .__.  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/|_ |\/|  |  |_  |   |/  |_   | 
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
   and in the darkness bind them...

   http://mark.mielke.cc/


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Problems building 8.2beta1 on macos G5 xserve

2006-10-16 Thread Sean Davis
On Saturday 14 October 2006 19:48, Tom Lane wrote:
 Sean Davis [EMAIL PROTECTED] writes:
  Trying to build 8.2beta1 on MacOS G5 Xserver, OS version 10.4.7.  I got
  this:
  /usr/bin/libtool: for architecture: cputype (16777234) cpusubtype (0)
  file: -lSystem is not an object file (not allowed in a library)

 What Xcode version have you got?  My recollection is that this is the
 symptom of trying to build with pre-10.4 Xcode tools on 10.4.

 (Yeah, time to download that 800MB file again :-( ... but at least it's
 free...)

For the archive, upgrading to Xcode 2.4 did the trick.

Sean

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Postgresql Caching

2006-10-16 Thread Markus Schaber
Hi, Shane,

Shane Ambler wrote:

 CREATE TABLESPACE myramcache LOCATION MEMORY(2GB);

It's already possible to do this, just create the TABLESPACE in a
ramdisk / tmpfs or whatever is available for your OS.

HTH,
Markus

-- 
Markus Schaber | Logical Tracking & Tracing International AG
Dipl. Inf. | Software Development GIS

Fight against software patents in Europe! www.ffii.org
www.nosoftwarepatents.org





Re: [HACKERS] postgres database crashed

2006-10-16 Thread Markus Schaber
Hi, Ashish,

Ashish Goel wrote:
 But the same code worked when I inserted around 2500 images in the
 database. After that it started crashing.

Testing can never prove that there are no bugs.

It's like the proof that all odd numbers above 1 are prime:

3 is prime, 5 is prime, 7 is prime, so I conclude that all odd numbers
above 1 are prime.


 So, I don't think it's
 because of an error in the code. Can you suggest some other possible
 reasons, and also why it is crashing at the call to memcpy().

- broken hardware
- compiler bugs
- bugs in PostgreSQL

But without having seen your code, I tend to assume that it's something
like a wrong length flag in some corner case in your code.
...

Markus
-- 
Markus Schaber | Logical Tracking & Tracing International AG
Dipl. Inf. | Software Development GIS

Fight against software patents in Europe! www.ffii.org
www.nosoftwarepatents.org





Re: [HACKERS] Threaded python on FreeBSD

2006-10-16 Thread Marko Kreen

On 10/16/06, Bruce Momjian [EMAIL PROTECTED] wrote:

Jim C. Nasby wrote:
 On Sun, Oct 15, 2006 at 06:19:12PM -0400, Tom Lane wrote:
  I suspect the problem here is that the backend isn't linked with
  -lpthread.  We aren't going to let libpython dictate whether we do so,
  either...


Fix config test to report this earlier.

--
marko


pybsd.diff
Description: Binary data

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Getting the type Oid in a CREATE TYPE output function

2006-10-16 Thread Marko Kreen

On 10/12/06, Tom Lane [EMAIL PROTECTED] wrote:

Weslee Bilodeau [EMAIL PROTECTED] writes:
 It works perfectly so long as I used the same key for all my custom
 types. When I want a different key for each type though (so for example,
 encrypt credit cards with one key, addresses with another, etc) I need a
 way to tell them apart.

[ shrug... ]  Seems like you should be putting the key ID into the
stored encrypted datums, then.


The PGP functions happen to do it already - pgp_key_id().

--
marko

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] Postgresql Caching

2006-10-16 Thread Merlin Moncure

On 10/15/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Using memcache, I've had problems with consistency brought right to
the front. Both of these have failed me:

1) When updating a PostgreSQL record, I invalidate the memcache record.
   If another process comes along in parallel before I commit and notices
   that the memcache record is invalidated, it queries the data from
   SQL and updates the memcache record back to the old value. :-(

2) When updating a PostgreSQL record, I updated the memcache record
   to the new value. If another process comes along in parallel before
   I commit, that is still looking at an older view, cross-referencing
   may not work as expected.

I'm currently settled on 2), but setting a short timeout (5 seconds) on
the data. Still an imperfect compromise between speed and accuracy, but
it isn't causing me problems... yet.


use advisory locks for 'race sensitive' data (or user locks in < 8.2).
or, just use tables, because you need mvcc, not performance :)

merlin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] [BUGS] BUG #2683: spi_exec_query in plperl returns

2006-10-16 Thread Martijn van Oosterhout
On Sun, Oct 15, 2006 at 06:15:27PM -0400, Tom Lane wrote:
 Andrew Dunstan [EMAIL PROTECTED] writes:
  I am also wondering, now that it's been raised, if we need to issue a use
  utf8; in the startup code, so that literals in the code get the right
  encoding.
 
 Good question.  I took care to ensure that the code strings passed to
 Perl are marked as UTF8; perhaps that makes it happen implicitly?
 If not, are there any downsides to issuing use utf8?

What use utf8 does is allow the *source* to be in utf8, thus affecting
what's a valid identifier and such. It doesn't affect the data; for
that you need use encoding 'utf8'.

It's not clear whether you actually want to allow people to put utf8
characters directly into their source (especially if the database is
not in utf8 encoding anyway). There is always the \u{} escape.

The perlunicode man page describes it better, though I only have
perl5.8. I know the perl5.6 model was different and somewhat more
awkward to use.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [HACKERS] [BUGS] BUG #2683: spi_exec_query in plperl returns

2006-10-16 Thread Tom Lane
Martijn van Oosterhout kleptog@svana.org writes:
 It's not clear whether you actually want to allow people to put utf8
 characters directly into their source (especially if the database is
 not in utf8 encoding anyway). There is always the \u{} escape.

Well, if the database encoding isn't utf8 then we'd not issue any such
command anyway.  But if it is, then AFAICS the text of pg_proc entries
could be expected to be utf8 too.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Postgresql Caching

2006-10-16 Thread Harvell F


On 16 Oct 2006, at 4:29, Shane Ambler wrote:


Harvell F wrote:

  Getting back to the original posting, as I remember it, the question 
was about seldom changed information.  In that case, and assuming a 
repetitive query as above, a simple query results cache that is keyed on 
the passed SQL statement string and that simply returns the previously 
cooked result set would be a really big performance win.


I believe the main point that Mark made was that the extra overhead is 
in the SQL parsing and query planning - this is the part that postgres 
won't get around. Even if you set up simple tables for caching, it still 
goes through the parser and planner and loses the benefits that memcached 
has. Or you fork those requests before the planner and lose the benefits 
of postgres.
The main benefit of using memcached is to bypass the parsing and query 
planning.


  That was the basis of my suggestion to just use the passed query  
string as the key.  No parsing or processing of the query, just a  
simple string match.


You will find there is more to SQL parsing than you first think. It 
needs to find the components that make up the SQL statement (tables, 
column names, functions) and check that they exist and can be used in 
the context of the given SQL, and that the given data matches the 
context it is to be used in. It needs to check that the current user 
has enough privileges to perform the requested task. Then it locates 
the data, whether that be in the memory cache, on disk or in an 
integrated version of memcached - this would also include checks to 
make sure another user hasn't locked the data to change it, and whether 
there exists more than one version of the data (committed and 
uncommitted) - and then sends the results back to the client 
requesting it.


  The user permissions checking is a potential issue but again, for 
the special case of repeated queries by the same user (the webserver 
process) for the same data, a simple match of the original query 
string _and_ the original query user would still be very simple. 
The big saving from the simple results cache would be the 
elimination of the parsing, planning, locating, combining, and 
sorting of the results set.


  I don't believe normal locking plays a part in the cache (there 
are basic cache integrity locking issues though), nor does the 
versioning or commit states, beyond the invalidation of the cache 
upon a commit to a referenced table.  It may be that the invalidation 
needs to happen whenever a table is locked as well.  (The hooks for 
the invalidation would be done during the original caching of the 
results set.)


  I know that the suggestion is a very simple-minded suggestion and 
is limited to a very small subset of the potential query types and 
interactions; however, at least for web applications, it would be a 
very big win.  Many websites want to display today's data on their 
webpage and have it change as dates change (or as users change).  The 
data in the source table doesn't change very often (especially 
compared to a popular website) and the number of times that the exact 
same query could be issued between changes can measure into the 
hundreds of thousands or more.  Putting even this simple results 
cache into the database would really simplify the programmer's life 
and improve reliability (and the use of PostgreSQL).




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] Postgresql Caching

2006-10-16 Thread Neil Conway
On Mon, 2006-10-16 at 13:59 +0200, Markus Schaber wrote:
 It's already possible to do this, just create the TABLESPACE in a
 ramdisk / tmpfs or whatever is available for your OS.

This is not an ideal solution: if the machine reboots, the content of
the tablespace will disappear, requiring manual administrator
intervention to get Postgres running again.

-Neil



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Threaded python on FreeBSD

2006-10-16 Thread Peter Eisentraut
Marko Kreen wrote:
 On 10/16/06, Bruce Momjian [EMAIL PROTECTED] wrote:
  Jim C. Nasby wrote:
   On Sun, Oct 15, 2006 at 06:19:12PM -0400, Tom Lane wrote:
I suspect the problem here is that the backend isn't linked
with -lpthread.  We aren't going to let libpython dictate
whether we do so, either...

 Fix config test to report this earlier.

Fixed.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[HACKERS] Is python 2.5 supported?

2006-10-16 Thread Jim C. Nasby
Since installing python 2.5, tapir has been failing:

http://pgbuildfarm.org/cgi-bin/show_log.pl?nm=tapir&dt=2006-10-15%2020:20:16

Several of the failures appear to be a simple change in error reporting;
I haven't investigated why import_succeed() failed.

Should python 2.5 work with plpython?
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Getting the type Oid in a CREATE TYPE output function

2006-10-16 Thread Weslee Bilodeau
Marko Kreen wrote:
 On 10/12/06, Tom Lane [EMAIL PROTECTED] wrote:
 Weslee Bilodeau [EMAIL PROTECTED] writes:
  It works perfectly so long as I used the same key for all my custom
  types. When I want a different key for each type though (so for
 example,
  encrypt credit cards with one key, addresses with another, etc) I
 need a
  way to tell them apart.

 [ shrug... ]  Seems like you should be putting the key ID into the
 stored encrypted datums, then.
 
 The PGP functions happen to do it already - pgp_key_id().
 

Actually, Tom helped me realize I made a mistake, which I'm following
his suggestion. Not tying keys to OIDs which change when backup/restored.


But actually for me, the key ID is not a PGP key.

When you create a new type, you create a key ID and map that key ID to
the OID attached to that type. It stores a hashed password value in a
little far-off place that it can use to ensure all inserts into that
same type are using the exact same encryption key. (The key is actually
only half; the database has its own key, and it combines the two keys
to encrypt/decrypt data.)


Having the same column encrypted with 20 different keys is a bit of a
mess. So I just needed a way to ensure it was the same key with each
INSERT/UPDATE.


At login, you call -

SELECT enc_key( 'type', 'password' );

Returns OK if it's the real key for that type; otherwise it returns an
Invalid Key error and refuses all reads/writes (SELECT, INSERT,
UPDATE, etc.) to those types, as it would if you had never called
enc_key() in the first place.


If anyone else is curious I'll release the code once I have it actually
working. A few more days basically.

Allows things like -

-- Create the new type, just hides all the CREATE TYPE
-- and assigns the key to the type
select enc_type_new( 'enc_cardnumber', 'new_password' );

create table credit_card (
card_number enc_cardnumber not null,
card_name varchar(20) not null
);


insert into credit_card values ( '1234', 'test' ) ;

Login again -

select * from credit_card ;

ERROR: Please provide key

select enc_key( 'enc_cardnumber', 'new_password' );

SELECT * from credit_card ;

1234 | test

And yes, you can back it up. Map a user to be able to read/write raw
encrypted values and it allows backup/restores using pg_(dump|restore).


I'm not sure if anyone else needs something like it, but it allows us to
transparently encrypt data directly in the tables with minimal application
changes ('select enc_key' at connection) - the main requirement when
working on legacy code that needs to match today's security policies quickly.


Weslee

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Postgresql Caching

2006-10-16 Thread mark
On Mon, Oct 16, 2006 at 12:40:44PM -0400, Neil Conway wrote:
 On Mon, 2006-10-16 at 13:59 +0200, Markus Schaber wrote:
  It's already possible to do this, just create the TABLESPACE in a
  ramdisk / tmpfs or whatever is available for your OS.
 This is not an ideal solution: if the machine reboots, the content of
 the tablespace will disappear, requiring manual administrator
 intervention to get Postgres running again.

It's enough to show whether disk read/write is the crux of this issue
or not. I suspect not.

Anybody have numbers?

Cheers,
mark

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED] 
__
.  .  _  ._  . .   .__.  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/|_ |\/|  |  |_  |   |/  |_   | 
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
   and in the darkness bind them...

   http://mark.mielke.cc/


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Is python 2.5 supported?

2006-10-16 Thread Peter Eisentraut
Jim C. Nasby wrote:
 Since installing python 2.5, tapir has been failing:

I have removed the use of the deprecated whrandom module, which should 
take care of one regression test failure, but after that I get

*** glibc detected *** free(): invalid pointer: 0xa5df6e78 ***
LOG:  server process (PID 1720) was terminated by signal 6

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 6: explain analyze is your friend


[HACKERS] Anyone using POSIX time zone offset capability?

2006-10-16 Thread Tom Lane
While trying to clean up ParseDateTime so it works reliably with full
timezone names, I found out about a feature that so far as I can tell
has never been documented except in comments in datetime.c.  The
datetime input code tries to recognize what it calls POSIX time zones,
which are timezone abbreviations followed by an additional hour/minute
offset:

/* DecodePosixTimezone()
 * Interpret string as a POSIX-compatible timezone:
 *  PST-hh:mm
 *  PST+h
 *  PST
 * - thomas 2000-03-15

However this doesn't actually work in all cases:

regression=# select '12:34:00 PDT+00:30'::timetz;
     timetz
----------------
 12:34:00-07:30
(1 row)

regression=# select '12:34:00 PDT-00:30'::timetz;
ERROR:  invalid input syntax for type time with time zone: "12:34:00 PDT-00:30"

(The behavior varies depending on which PG release you try it with, but
I can't find any that produce the expected result for a negative
fractional-hour offset.)

This syntax is ambiguous against some full timezone names present in the
zic database, such as GMT+0, and it's also responsible for a number of
really ugly special cases in the datetime parser.  In view of the fact
that it's never entirely worked and never been documented, I'm inclined
to take it out.  Comments?  Is anyone actually using this?

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] [GENERAL] Anyone using POSIX time zone offset capability?

2006-10-16 Thread Brandon Aiken
What about time zones like Tehran (GMT+3:30), Kabul (GMT+4:30), Katmandu
(GMT+5:45) and other non-cardinal-hour GMT offsets?  Is this handled in
some *documented* way already?

--
Brandon Aiken
CS/IT Systems Engineer

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Monday, October 16, 2006 6:06 PM
To: pgsql-hackers@postgreSQL.org; pgsql-general@postgreSQL.org
Subject: [GENERAL] Anyone using POSIX time zone offset capability?

While trying to clean up ParseDateTime so it works reliably with full
timezone names, I found out about a feature that so far as I can tell
has never been documented except in comments in datetime.c.  The
datetime input code tries to recognize what it calls POSIX time zones,
which are timezone abbreviations followed by an additional hour/minute
offset:

/* DecodePosixTimezone()
 * Interpret string as a POSIX-compatible timezone:
 *  PST-hh:mm
 *  PST+h
 *  PST
 * - thomas 2000-03-15

However this doesn't actually work in all cases:

regression=# select '12:34:00 PDT+00:30'::timetz;
     timetz
----------------
 12:34:00-07:30
(1 row)

regression=# select '12:34:00 PDT-00:30'::timetz;
ERROR:  invalid input syntax for type time with time zone: "12:34:00 PDT-00:30"

(The behavior varies depending on which PG release you try it with, but
I can't find any that produce the expected result for a negative
fractional-hour offset.)

This syntax is ambiguous against some full timezone names present in the
zic database, such as GMT+0, and it's also responsible for a number of
really ugly special cases in the datetime parser.  In view of the fact
that it's never entirely worked and never been documented, I'm inclined
to take it out.  Comments?  Is anyone actually using this?

regards, tom lane


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] [GENERAL] Anyone using POSIX time zone offset capability?

2006-10-16 Thread Tom Lane
Brandon Aiken [EMAIL PROTECTED] writes:
 What about time zones like Tehran (GMT+3:30), Kabul (GMT+4:30), Katmandu
 (GMT+5:45) and other non-cardinal-hour GMT offsets?  Is this handled in
 some *documented* way already?

Sure.  This has worked since PG 7.2 or so:

regression=# select '12:34:00 IRT'::timetz;
     timetz
----------------
 12:34:00+03:30
(1 row)

Also you can just do

regression=# select '12:34:00 +03:30'::timetz;
     timetz
----------------
 12:34:00+03:30
(1 row)

regression=#

The weird thing about this allegedly-POSIX notation is the combination
of a symbolic name and a further offset from it.  Back when we didn't
have customizable timezone abbreviations, maybe there would be some
point in making that work, but I don't see the point now.  I'm not
entirely convinced that it really is a POSIX-sanctioned notation,
either --- the POSIX syntax the zic code knows about is different.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Postgresql Caching

2006-10-16 Thread Jeremy Drake
On Mon, 16 Oct 2006, [EMAIL PROTECTED] wrote:

 On Sun, Oct 15, 2006 at 06:33:36PM -0700, Jeremy Drake wrote:
   2) When updating a PostgreSQL record, I updated the memcache record
  to the new value. If another process comes along in parallel before
  I commit, that is still looking at an older view, cross-referencing
  may not work as expected.
  Shouldn't you be able to use 2-stage commit for this?  Prepare to commit,
  update the memcache record, then commit?  Or am I thinking of something
  else?

 Two-stage commit makes the window of error smaller; it can't eliminate it.

Right, I was thinking there was still some raciness there.  I think what I
remembered is that if you updated the cache and then the transaction
failed (or rolled back for whatever reason) later on, the cache would have
data that was never committed.  The two-phase commit thing is intended to
deal with that eventuality, which is also a possible consistency issue.
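The hazard recalled above - cache refreshed before COMMIT, then the transaction rolls back - fits in a few lines (illustrative Python; all names are made up):

```python
# Tiny simulation of the hazard: updating the cache before COMMIT means
# a rollback leaves never-committed data visible to cache readers.
def update_cache_then_commit(db, cache, key, value, commit_succeeds):
    cache[key] = value            # cache refreshed eagerly
    if commit_succeeds:
        db[key] = value           # COMMIT makes it durable
    # on rollback the database keeps the old value; the cache does not

db, cache = {"k": "old"}, {"k": "old"}
update_cache_then_commit(db, cache, "k", "new", commit_succeeds=False)
assert db["k"] == "old"           # database rolled back
assert cache["k"] == "new"        # cache holds data that was never committed
```

Two-phase commit (PREPARE, update the cache, then COMMIT PREPARED) narrows the window in which this ordering can go wrong, but as the thread notes, it cannot close it entirely.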


-- 
Oh, I have slipped the surly bonds of earth,
And danced the skies on laughter silvered wings;
Sunward I've climbed and joined the tumbling mirth
Of sun-split clouds and done a hundred things
You have not dreamed of --
Wheeled and soared and swung
High in the sunlit silence.
Hovering there
I've chased the shouting wind along and flung
My eager craft through footless halls of air.
Up, up along delirious, burning blue
I've topped the wind-swept heights with easy grace,
Where never lark, or even eagle flew;
And, while with silent, lifting mind I've trod
The high untrespassed sanctity of space,
Put out my hand, and touched the face of God.
-- John Gillespie Magee Jr., High Flight

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] [GENERAL] Anyone using POSIX time zone offset capability?

2006-10-16 Thread Tom Lane
I wrote:
 ... I'm not entirely convinced that it really is a POSIX-sanctioned
 notation, either --- the POSIX syntax the zic code knows about is
 different.

Actually, I take that back: it is a subset of the same notation, but
the datetime.c code is misinterpreting the spec!

The POSIX timezone notation as understood by the zic code includes
the possibility of

zoneabbrev[+-]hh[:mm[:ss]]

but the meaning is that hh:mm:ss *is* the offset from GMT, and
zoneabbrev is being defined as the abbreviation for that offset.
What the datetime.c code is doing is trying to find the zoneabbrev
in a built-in timezone table, and then adding the two together.
This is simply wacko.

Given where the code stands now, I think the best solution is to
rip out DecodePosixTimezone and instead pass the syntax off to the
zic code (which can handle it via tzparse()).  Since the datetime
input parser is ultimately only interested in the GMT offset value,
this would mean that the zoneabbrev part would become a noise word.
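The POSIX rule as described here can be sketched as follows (illustrative Python, not the zic or datetime.c code; per POSIX, a positive offset means west of Greenwich, so EST5 denotes GMT-5):

```python
# Sketch of the POSIX interpretation: in zoneabbrev[+-]hh[:mm[:ss]] the
# numeric part *is* the offset from GMT (positive = west of Greenwich),
# and the abbreviation is merely the name being defined for that offset.
# Nothing is looked up in a built-in abbreviation table and added on top.
import re

def parse_posix_tz(spec):
    m = re.fullmatch(r"([A-Za-z]+)([+-]?\d{1,2})(?::(\d{2}))?(?::(\d{2}))?",
                     spec)
    if not m:
        raise ValueError(spec)
    abbrev, hh = m.group(1), int(m.group(2))
    sign = -1 if hh < 0 else 1
    seconds = abs(hh) * 3600 + int(m.group(3) or 0) * 60 + int(m.group(4) or 0)
    # Flip the sign: POSIX's "west positive" becomes the usual GMT offset.
    return abbrev, -sign * seconds

assert parse_posix_tz("EST5") == ("EST", -5 * 3600)         # GMT-5
assert parse_posix_tz("PST8") == ("PST", -8 * 3600)         # GMT-8
assert parse_posix_tz("IST-5:30") == ("IST", 5 * 3600 + 30 * 60)  # GMT+5:30
```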

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Upgrading a database dump/restore

2006-10-16 Thread Chuck McDevitt
-Original Message-

I think we had that problem solved too in principle: build the new
catalogs in a new $PGDATA directory alongside the old one, and hard-link
the old user table files into that directory as you go.  Then pg_upgrade
never needs to change the old directory tree at all.  This gets a bit
more complicated in the face of tablespaces but still seems doable.
(I suppose it wouldn't work in Windows for lack of hard links, but
anyone trying to run a terabyte database on Windows deserves to lose.)

regards, tom lane

---(end of broadcast)---

FYI:

Windows NTFS has always supported hard links.  It was symlinks it didn't
support until recently (now it has both).
And there isn't any reason Terabyte databases shouldn't work as well on
Windows as on Linux, other than limitations in PostgreSQL itself.
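The hard-link scheme quoted above can be sketched like this (illustrative Python; the paths and file names are made up):

```python
# Sketch of the hard-link upgrade scheme: build new catalogs in a fresh
# data directory and hard-link the unchanged user table files into it,
# so the old directory tree is never modified and no data is copied.
import os
import tempfile

old = tempfile.mkdtemp(prefix="pgdata_old_")   # stand-in for old $PGDATA
new = tempfile.mkdtemp(prefix="pgdata_new_")   # stand-in for new $PGDATA

# pretend this is a large user table file in the old cluster
table_file = os.path.join(old, "16384")
with open(table_file, "wb") as f:
    f.write(b"heap pages...")

# "upgrade": link the same on-disk file into the new tree
os.link(table_file, os.path.join(new, "16384"))

# both names point at the identical file - one inode, zero bytes copied
same = os.stat(table_file).st_ino == os.stat(os.path.join(new, "16384")).st_ino
assert same
```

(On NTFS the equivalent call would go through CreateHardLink, which, as noted above, has long been supported.)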



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match