Re: [GENERAL] to_timestamp() and timestamp without time zone

2011-06-24 Thread David Johnston
First: I would suggest your use of "Local Time" is incorrect and that you
would be better off thinking of it as "Abstract Time".  My responses below
go into more detail but, in short, you obtain a "Local" time by "Localizing"
an "Abstract" time.  The process of "Localization" requires a relevant
"Locale" input which, for date/time values, is a "TimeZone".  Since you
define your "Local Time" as being "Without Timezone" this is an
inconsistency, and so, because we want to define something without a TimeZone,
we need to rename "Local Time" to "Abstract Time".  And, no, "Wall Time"
will not work either since a "Wall" exists "Somewhere" and thus is
"Localized".

 

You say: (I am applying the above directly to your two definitions)

1)  "TimestampTZ" is an "INSTANT" - but what exactly is an Instant?  In
this case we have a "Wall Calendar" and a "Wall Clock" as a means of
describing an instant.  However, that "Wall" has to be "Somewhere" and, in
combination, the calendar and clock have to display real and valid values
according to that physical location.  So Instant, by definition, means
Local, which requires a TimeZone.  So "TimeStampTZ" DOES imply a "TimeZone"
via a definition of "INSTANT".

 

2)  "Timestamp" is a[n] [ABSTRACT]DATETIME (reworded to remove the
prefix LOCAL which, from above, I feel is misleading).  That this does not
use TimeZone is correct.

 

On the other hand, "TIMESTAMP WITHOUT TIMEZONE" is a wholly different concept
(neither a 'wider' nor a 'narrower' type than the other). It's just the local
calendar time, it's (conceptually) like a tuple of
numbers{year,month,day,hour,min,sec, fracsecs}, the "date and time" as it's
understood in the business-civil world, with the timezone information
missing. 

This is the type that should be used for that concept, when it's critical
for me that 'If I stored "2011,09,11 at 23:59:59", I want to get that
precise calendar date.

 

Your definition of "calendar time" is incomplete (though I do get your
point).  The date component is "local" because you have (implicitly)
specified that you are using a "Gregorian Calendar-like" rule set.  However,
by omitting the "Location Time Rules (TimeZone)" you are in fact creating an
Abstract Time and not anything that is guaranteed to be valid (meaningful)
when "Localized".  As soon as you say "local" you must tell the computer
what "local" means by specifying a TimeZone.  Otherwise you simply have an
Abstract Time based on the (I think) Babylonian system.

 

Docs should make clear this, and the fact that Postgresql currently lacks a
"FULL" datetime type. This would correspond conceptually to the tuple
{INSTANT,TIMEZONE} (which also unambiguously implies a LOCALDATETIME). Nor does
it implement the SQL standard "TIMESTAMP WITH TIMEZONE" (actually an
{INSTANT,OFFSET} tuple). I'm OK with that; I agree that the latter is
crippled, and the former is complex and not standard, and that both have
little support from upper layers. 

 

Fair enough; but in reality, other than the 25-hour-day issue, the chosen
implementation is quite useful.  Once you have created a valid instance of a
"timestamptz" you can change it to any TimeZone, using the proper rules, and
are guaranteed to still have a valid value.  So you might as well normalize
the "storage" TimeZone as PostgreSQL does.  My only qualm is the coercion of the
input so that a valid "timestamptz" is always created.  But even that isn't
a big deal if you indeed want to ensure that the value entered is a valid
"timestamptz".

 

In this case you are using a function that returns a "timestamptz" while you
are working with "timestamp".  You just said that they are completely
different, so the fact that this fails should come as no surprise.  That a
suitable function returning a "timestamp" does not exist is the only
real complaint.  It has already been shown that the described behavior of a
PostgreSQL "timestamp" is consistent with what you describe it should be.
That it can be auto-cast to a "timestamptz" is a debatable point.  But
this again comes simply back to the decision to coerce the input of
"timestamptz".  That is, in the vast majority of cases where the conversion
makes sense the ability to directly cast is great.  Casting is always
necessary IF you want to convert your Abstract Time (i.e., "timestamp") into
a Local Time (i.e., "timestamptz").
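
As a rough illustration of the workaround under the current API (the literal and
format string are made up; the session TimeZone supplies the "Locale" during both
the parse and the cast, so the round-trip is only clean because the same setting is
in effect for both):

db=# SET timezone = 'UTC';
db=# SELECT to_timestamp('2011-09-11 23:59:59', 'YYYY-MM-DD HH24:MI:SS');
 2011-09-11 23:59:59+00
db=# SELECT to_timestamp('2011-09-11 23:59:59', 'YYYY-MM-DD HH24:MI:SS')::timestamp;
 2011-09-11 23:59:59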

 

I'm ignoring the concept of "OFFSET" intentionally as that is likely to
confuse the issue and I haven't had time to fully contemplate that aspect of
things.

 

David J.

 

Note: I am writing this post and a response to Steve at the same time (no
pun intended.)

 



Re: [GENERAL] to_timestamp() and timestamp without time zone

2011-06-24 Thread David Johnston
> I meant that time-calculations themselves have lots of issues and subtleties.

Fair enough, and I agree there is no magic API to solve the difficulties of
adapting rational, logic based systems to a Calendar system last edited by
the Pope and based upon the imperfect movement of Sol relative to Earth.
But we've already detailed why this specific case could use some more
attention.

> Calculations for long prior dates/times have things like a few minute jump
> when (at least in the US) an interval crosses Sunday, November 18,
> 1883 ("the day of two noons"). And although October 1582 (Catholic
> regions) or September 1752 (Protestant regions/Unix-assumption) or later
> (Orthodox) are missing 10-days, PostgreSQL follows the SQL standard which
> does not show those dates as missing at all.
> 

This really falls into application knowledge since the vast majority of
use-cases use fairly recent dates.  Those who deal with long-ago dates
should be expected to understand the limitations of their reality and would
devise means to accommodate them.  Likewise, from the omitted next
paragraph, those who are relying on time need to take into consideration
that changes happen.  The effort to deal with that change is then traded off
against the cost of the failure occurring.

In some/most cases, over a short timeframe, the proper solution is to be
flexible and/or relative. Examples:
Scheduler:  Run the schedule the first chance you get when the "system"
hour/minute is after/greater than the "schedule" hour/minute.  You may or
may not need to ensure that "schedule" hour/minute entries are sorted so
those with a longer delay are completed first  - just as they would be in
normal circumstances.

Hospital:  Dispense the next dose 5 hours from now (DAY is about the broadest
unit you can safely use).  In this case the software should be able to
"count" forward minute-by-minute, using the TimeZone rules to skip around if
necessary, and determine whether 5 hours from 0:30 is 4:30, 5:30, or 6:30.
The API implements this "counting" via the "addition operator".
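
A rough sketch of that counting in psql, assuming the 2011 US spring-forward date
(2011-03-13), when the local hour from 02:00 to 03:00 did not exist:

db=# SET timezone = 'US/Eastern';
db=# SELECT timestamptz '2011-03-13 00:30:00' + interval '5 hours';
 2011-03-13 06:30:00-04

Five elapsed hours from 0:30 land on 6:30 local precisely because that hour was skipped.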

In theory adding "months/years" should be forbidden and a "procedure" that
applies a consistent "rule set" should be used instead.  Some standard ones
can be provided and the user can always create their own.  One possible rule
would be that adding or subtracting months to a date that is the last day of
its month always returns the last day of the resultant month.  Another
rule/function could implement the current behavior where the day number does not
change (and then you have two variations - if the new date is invalid you
either fail or coerce).
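
A minimal sketch of one such user-defined rule set (the function name is my own
invention, not an existing API):

CREATE FUNCTION add_months_eom(date, integer) RETURNS date AS $$
  -- $1 = starting date, $2 = months to add (may be negative)
  -- if $1 is the last day of its month, return the last day of the target month;
  -- otherwise fall back to the current day-preserving/coercing behavior
  SELECT CASE
    WHEN $1 = (date_trunc('month', $1) + interval '1 month' - interval '1 day')::date
      THEN (date_trunc('month', $1) + ($2 + 1) * interval '1 month' - interval '1 day')::date
    ELSE ($1 + $2 * interval '1 month')::date
  END;
$$ LANGUAGE sql IMMUTABLE;

-- e.g. add_months_eom(date '2011-01-31', 1) -> 2011-02-28
--      add_months_eom(date '2011-02-28', 1) -> 2011-03-31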

While the discussion of core vs. extension comes up, consider that many users
and evaluators are going to look at the core first and, as I've said before,
if they see something that appears like it will work they will just use it.
So you'd either want to have no (or very minimal) time-oriented API or have
it be full-featured (and also have a "save me from myself" quality to it -
given time's complexities).

All this said, I am getting worked up by this particular thread but, in
reality, the status quo does not truly harm me as far as I know - but my usage
of PostgreSQL is very light/flexible (lots of spare capacity).  I'm coming
at this both from a desire to learn more and also from "what would I do if I were
starting from scratch?".  The best approach, since we are not starting from
scratch, would be for interested and capable parties to work on a full-blown
"time" extension that, while maybe less user friendly, is safer to use and
much more explicit.  However, there are likely to be some components in such
an extension that would be forward-only and thus could be introduced to the
core API without any difficulty (a "to_timestamp_abstract(text,
text)->timestamp" function comes to mind - note the name change; see my other
recent post for the reason).  And since "interested" and "capable" are not mutually
inclusive, those who are interested but not capable would probably appreciate
more than just "here is a workaround" from the community.  At the same time,
interested parties need to put together a precise and coherent proposal that
can be discussed and acted upon - with a clear (even if possibly incorrect)
assertion about why something is either wrong or difficult to use.


> 
> There are different definitions of when a year starts so be sure not to grab
> the wrong week-number or day-number - ISO and Julian are not the same.
> 

Agreed; but people who are going to choose a calendar other than the
Gregorian Calendar should be expected to learn and abide by the rules of
that Calendar.  The responsibility of the API is to correctly apply those
rules (and help the user abide by them where possible).

> And, of course, everything starts with the ethnocentric assumption of what
> calendar system to use. From my experience, there is not a lot of good SQL
> support for data using Islamic, Chinese, Hebrew, Hindu, Iranian, Coptic or
> Ethiopian calendars.

Supply

Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-06-24 Thread Tomas Vondra
On 25.6.2011 02:15, John R Pierce wrote:
> indeed, this can really bite you on partitioned tables.My lead
> Oracle programmer would like to see support for prepared statements that
> are parsed but not preplanned...  our standard coding model has all the
> queries prepared up front as part of process initialization, and queries
> executed against these prepared statements.

AFAIK Oracle does a plan recheck, i.e. verifies that the prepared plan
is reasonable.

The most serious issue with inheritance and plans (prepared statements
did not reflect partitions created afterwards) has already been fixed.

Tomas



Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-06-24 Thread John R Pierce

On 06/24/11 4:51 PM, Tomas Vondra wrote:

And there's a downside too - with prepared statements the planner
can't use the actual parameter values to choose the plan (it does not
know them), so it may choose a plan that's good on average but sucks for
some parameter values.


indeed, this can really bite you on partitioned tables.My lead 
Oracle programmer would like to see support for prepared statements that 
are parsed but not preplanned...  our standard coding model has all the 
queries prepared up front as part of process initialization, and queries 
executed against these prepared statements.



--
john r pierceN 37, W 122
santa cruz ca mid-left coast




Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-06-24 Thread Tomas Vondra
On 20.6.2011 18:47, Alexander Farber wrote:
> isn't having prepared statements good for overall performance?

I've already mentioned that in my previous post, but let's make this
clear. Prepared statements are good for performance, but only if you're
going to execute the statement multiple times in the same session.

When running a SQL statement, the database has to parse it and plan it
first, which may be a lot of work (depending on how complex the
statement is etc.). Prepared statements allow you to do this (parsing
and planning) only once, which may significantly improve the performance.
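
At the SQL level the difference looks roughly like this (the statement name, table
and query are invented purely for illustration):

PREPARE get_item (int) AS SELECT * FROM items WHERE id = $1;  -- parsed and planned once
EXECUTE get_item(1);  -- only the execution cost from here on
EXECUTE get_item(2);
DEALLOCATE get_item;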

Let's say you have a statement that takes 10ms to parse/plan and 50ms to
actually execute, and you want to execute it 100x.

If you're going to do this without prepared statements, then you'll
spend 100x 10ms for planning and 100x 50ms for execution. That's 6
seconds in total.

With prepared statements, this takes only about 5 seconds (one 10ms
parse/plan plus 100 x 50ms of execution). Yes, it all depends
on how much time you spend in planning vs. executing the query.

And there's a downside too - with prepared statements the planner
can't use the actual parameter values to choose the plan (it does not
know them), so it may choose a plan that's good on average but sucks for
some parameter values.

If my assumption that your code executes each SQL statement exactly once per
session is right, then get rid of the prepared statements and use
plain PDO::query instead. That should fix the problems you currently
have - you can keep the transaction mode in pgpool, you won't get those
annoying prepared statement exceptions and you don't need to put the
transactions there. Plus it's very likely the optimizer will be able to
come up with a better plan.

Tomas



[GENERAL] glitch installing xml support in 9.1.beta2

2011-06-24 Thread Rob Sargent
OpenSuse 11.4 x86-64

gmake install  builds and places the requisite pieces as expected.

Running
psql --username postgres -d postgres -f xml2--1.0.sql
results in

psql:xml2--1.0.sql:8: ERROR:  function "xml_valid" already exists
with same argument types
psql:xml2--1.0.sql:12: ERROR:  could not access file
"MODULE_PATHNAME": No such file or directory
psql:xml2--1.0.sql:16: ERROR:  could not access file
"MODULE_PATHNAME": No such file or directory
psql:xml2--1.0.sql:20: ERROR:  could not access file
"MODULE_PATHNAME": No such file or directory
psql:xml2--1.0.sql:24: ERROR:  could not access file
"MODULE_PATHNAME": No such file or directory
psql:xml2--1.0.sql:28: ERROR:  could not access file
"MODULE_PATHNAME": No such file or directory
psql:xml2--1.0.sql:34: ERROR:  could not access file
"MODULE_PATHNAME": No such file or directory

(This is from the second run, so xml_valid worked the first time.)

I see that the xml2.control file defines module_pathname =
'$libdir/pgxml', but how is that passed to the psql run? Or am I to
manually update the sql scripts?

Here's my configuration of the main build
./configure --with-libraries=/usr/local/lib --with-python
--with-openssl --with-ldap --with-ossp-uuid --with-libxml --with-libxslt
--enable-debug --prefix=/opt/PostgreSQL/9.1

Haven't gone into uuid stuff yet...

Cheers,



Re: [GENERAL] pg_dumpall custom format?

2011-06-24 Thread Guillaume Lelarge
On Wed, 2011-06-22 at 16:17 +0100, Thom Brown wrote:
> On 13 May 2010 18:28, Guillaume Lelarge  wrote:
> > On 13/05/2010 19:24, Thom Brown wrote:
> >> On 13 May 2010 17:49, Guillaume Lelarge  wrote:
> >>
> >>> On 13/05/2010 14:12, Thom Brown wrote:
>  On 5 March 2009 12:08, Thom Brown  wrote:
> > [...]
>  I'm bringing this thread back to life to see if there are any further
>  thoughts on this.  It would be nice to have a complete backup of a
> >>> database
>  cluster in custom format, and use pg_restore to restore an individual
>  database in the same way that it can restore individual schemas and
> >>> tables
>  etc.
> 
> >>>
> >>> Nope. But this is something I'd like to have too. I didn't actually look
> >>> into it much more (I mean, in the code), but I intend to work on this
> >>> for 9.1 (and other pg_dump stuff).
> >>>
> >>>
> >>>
> >> You're now my favourite PostgreSQL developer!  Thanks Guillaume!
> >>
> >
> > Don't get your expectations too high. I just said I'll try :)
> 
> Have you had any opportunity to look at this yet? :)
> 

Nope, still in my TODO list.


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com




Re: [GENERAL] UUID-OSP contrib module

2011-06-24 Thread Hiroshi Saito

Yeah, although it is only a cheap contribution from me. :-)
Thanks!

(2011/06/25 2:10), Michael Gould wrote:
> Thanks for the reply.  We're using Windows 32 of Postgres in development but
> would like to deploy windows 64 when we are ready.  I agree, I think that
> Postgres ought to provide fully support for GUID's like most other
> databases.
>
> Best Regards
>
> Michael Gould
>
>
> "Hiroshi Saito"  wrote:
>> Hi Michael-san.
>>
>> Sorry very late reaction.
>>
>> Although it is several years ago, I did the work origin of OSSP-UUID on
>> windows platform with Ralf-san. He is Great developer!.
>> http://www.ossp.org/pkg/lib/uuid/
>> However, I made the mistake in patch then
>>
>> After late, correction patch was made this.
>> http://winpg.jp/~saito/pg_work/OSSP_win32/
>> At this time, Ralf-san was passing the individual very busy time.
>> He has full knowledge of these contents. then, In the new version, it
>> will to be released with build by VisualStudio. But, but...It is not
>> released yet by various reasons.:-(
>> sorry again to all.
>>
>> If this offer is helpful, I am wishing it is supplied from EnterpriseDB.
>> Thanks!
>>
>> Regards,
>> Hiroshi Saito
>>
>> (2011/06/23 23:46), Michael Gould wrote:
>>> I remember a few months ago that someone said that the UUID-OSP contrib
>>> module did not work on Windows 64. Is this still a limitation?
>>>
>>> Best Regards
>>>
>>> Michael Gould
>>> Intermodal Software Solutions
>>> 


>>> Michael Gould, Managing Partner
>>> Intermodal Software Solutions, LLC
>>> 904.226.0978
>>> 904.592.5250 fax
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
> --
> Michael Gould, Managing Partner
> Intermodal Software Solutions, LLC
> 904.226.0978
> 904.592.5250 fax
>
>
>
>



Re: [GENERAL] to_timestamp() and timestamp without time zone

2011-06-24 Thread hernan gonzalez
> As I understand it, documentation patches are welcomed:)

I'd indeed wish some radical changes to the documentation.

To start with, the fundamental data type names are rather misleading; the SQL
standard sucks here, true, but Postgresql also has its idiosyncrasies, and
the docs do not help much:

http://www.postgresql.org/docs/9.0/interactive/datatype-datetime.html

The most confusing thing is that PG's "TIMESTAMP WITH TIMEZONE" is NOT... a
timestamp with timezone! (not even in the crippled sense of the SQL
standard, which thinks of a "timestamp with offset"). It actually has no
relation with timezones. It's actually a plain "timestamp", as the word is
commonly used in computing: the "INSTANT" of time at which (typically) an
event happened. It's a physical concept, not related to civil things (not even
to calendars). Typical example: the "modification time" of a file in a
Unix filesystem: here the timestamp is stored internally as a Unix integer
time (seconds, relative to the Unix epoch), and when doing an "ls" the OS
displays it using the current timezone; but it's very clear that changing
the timezone only changes the output representation. The same happens in
Postgresql. A stored "TIMESTAMP WITH TIMEZONE" will be the same physical
instant (say, the instant of the first plane crash on 9/11) regardless of
the timezone that is used (ONLY!) for output/input representation:
'2011-09-11 08:46:40-04' and '2011-09-11 09:46:40-03' are the same INSTANT.


db=# set TIMEZONE='US/Eastern';
db=# select inst from test_dates where rid=5;
 2011-09-11 08:46:40-04
db=# set TIMEZONE='America/Argentina/Buenos_Aires';
db=# select inst from test_dates where rid=5;
 2011-09-11 09:46:40-03

This is the type that should normally be used to record the time at which an
event happened (typically a record modification - the way MySQL uses the word
"TIMESTAMP").

On the other hand, "TIMESTAMP WITHOUT TIMEZONE" is a wholly different concept
(neither a 'wider' nor a 'narrower' type than the other). It's just the local
calendar time, it's (conceptually) like a tuple of
numbers {year,month,day,hour,min,sec,fracsecs}, the "date and time" as it's
understood in the business-civil world, with the timezone information
missing.
This is the type that should be used for that concept, when it's critical
for me that 'If I stored "2011,09,11 at 23:59:59", I want to get that
precise calendar date, no matter what the server timezone is, no matter whether
it changes, or the DST rules change, or have bugs or anything': timezones
should not be involved AT ALL when manipulating them (parsing, outputting,
storing, retrieving, comparing, extracting date or time, etc). Of course,
this is internally implemented (for mere convenience of space and
arithmetic) using a UTC timestamp (but this should not leak - from the user
point of view, everything should be as if the tuple of numbers was stored).

To summarize:

"TIMESTAMP WITH TIMEZONE" is an INSTANT
"TIMESTAMP WITHOUT TIMEZONE" is a LOCALDATETIME

(BTW, this mirrors the Joda-Time JAVA API terminology
http://joda-time.sourceforge.net/ )

Neither of them has or implies a TIMEZONE. The first can accept/use one TZ
for input/output representation, that's all.

Docs should make clear this, and the fact that Postgresql currently lacks a
"FULL" datetime type. This would correspond conceptually to the tuple
{INSTANT,TIMEZONE} (which also unambiguously implies a LOCALDATETIME). Nor does
it implement the SQL standard "TIMESTAMP WITH TIMEZONE" (actually an
{INSTANT,OFFSET} tuple). I'm OK with that; I agree that the latter is
crippled, and the former is complex and not standard, and that both have
little support from upper layers.

The problem, for me, is that Postgresql does not emphasize the conceptual
difference between these types, and that it tends too easily, IMO, to "assume"
the missing information (the timezone) even when probably the user does not
want to make that assumption. For example, I hope someday PG disables the
implicit casting between these two types.
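
For instance, the result of comparing the two types silently depends on the session
TimeZone, which is what supplies the "missing" information during the implicit
conversion:

db=# SET timezone = 'US/Eastern';
db=# SELECT timestamp '2011-09-11 23:59:59' = timestamptz '2011-09-11 23:59:59-04';
 t
db=# SET timezone = 'UTC';
db=# SELECT timestamp '2011-09-11 23:59:59' = timestamptz '2011-09-11 23:59:59-04';
 f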

In general, I feel that sometimes Postgresql pretends to know too much about
the data. Say, when I store a local datetime (TIMESTAMP WITHOUT TIMEZONE), I
wish to tell it: "this is a LOCAL datetime, please don't pretend EVER to know
its timezone, you don't need it; if I asked you to select all LOCAL DATE
TIMES before "2001/2/3 0:0:0" and, in order to do that, you are invoking some
timezone conversion or intelligence, you are messing with my data - stop
doing that".

You can see in this SO question some (non PG specific) discussion about
datetimes and DBs with potentially many timezones.
http://stackoverflow.com/questions/2532729/daylight-saving-time-and-timezone-best-practices/2533378#2533378
The most upvoted recipe is: "Persist globally, display locally". I.e., don't
pretend to store the full date time info, only the instant, let the upper
layers fill the tz info (perhaps from other data, the user profile, or
whatever) and do the display. I don't endorse this fully (sometimes the DB
must kno

Re: [GENERAL] to_timestamp() and timestamp without time zone

2011-06-24 Thread Steve Crawford

On 06/23/2011 02:45 PM, David Johnston wrote:

...
As for "Time handling has lots of subtleties that take time to digest"; a
good programmer and API do their best to minimize the number of hidden
subtleties to be learned


I meant that time-calculations themselves have lots of issues and 
subtleties.


The length of a day, a month and a year are all varying periods of time,
leading to things like adding and then subtracting a month not returning
the original date:

select '2011-01-31'::date + '1 month'::interval - '1 month'::interval;
      ?column?
---------------------
 2011-01-28 00:00:00

The 30/360 accounting method takes care of this by simply assuming that
every month has 30 days and there are 360 days in a year. There are
plenty of shift-work systems and contracts that simply decree a shift to
be 8 hours regardless of whether your shift is actually 7 or 9 hours due to the
DST changeover.
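
For example, subtracting two timestamptz values across the 2011 US spring-forward
shows that particular "day" was really only 23 hours long:

db=# SET timezone = 'US/Pacific';
db=# SELECT timestamptz '2011-03-14 00:00' - timestamptz '2011-03-13 00:00';
 23:00:00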


Since DST changes are not synchronized to a common point-in-time
worldwide, one can easily attempt to schedule synchronized
early-morning jobs between the East and West coasts that will fail when the
East changes to DST several hours ahead of the West. Even tracking which
rule to apply is tricky. Parts of Arizona observe daylight saving time.
Others do not. And while we're at it, what about those pesky leap-seconds?


Calculations for long prior dates/times have things like a few minute 
jump when (at least in the US) an interval crosses Sunday, November 18, 
1883 ("the day of two noons"). And although October 1582 (Catholic 
regions) or September 1752 (Protestant regions/Unix-assumption) or later 
(Orthodox) are missing 10-days, PostgreSQL follows the SQL standard 
which does not show those dates as missing at all.


There is also an assumption that date calculations continue backward in 
history prior to the actual development of the concept of time-zones. 
And, lacking prescience, calculations for future dates assume that 
time-zone definitions won't change so the answer you get today may not 
be the answer you get if you run the same calculation tomorrow.


There are different definitions of when a year starts so be sure not to 
grab the wrong week-number or day-number - ISO and Julian are not the same.


And, of course, everything starts with the ethnocentric assumption of 
what calendar system to use. From my experience, there is not a lot of 
good SQL support for data using Islamic, Chinese, Hebrew, Hindu, 
Iranian, Coptic or Ethiopian calendars.


Until one considers which of the many issues inherent to date 
calculation may be important, one will not even know what assumptions to 
check for in the software being used.


Cheers,
Steve




Re: [GENERAL] declare variable in postgresql

2011-06-24 Thread Brar Piening
On Fri, 24 Jun 2011 10:16:52 -0500, Merlin Moncure 
 wrote:

http://www.postgresql.org/docs/9.0/static/plpgsql.html
http://www.postgresql.org/docs/9.0/static/plpgsql-declarations.html#PLPGSQL-DECLARATION-PARAMETERS


Plus http://www.postgresql.org/docs/current/static/sql-do.html

if you want it the non-procedural T-SQL way...
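
A minimal sketch of the DO equivalent (the table and column names are simply lifted
from the question, so they are assumptions about that schema):

DO $$
DECLARE
  type_transaction_id integer;
BEGIN
  SELECT id INTO type_transaction_id
    FROM typetransaction
   WHERE typetransactioncode = 'TxnBackupInc';
  RAISE NOTICE 'TypeTransactionID = %', type_transaction_id;
END
$$;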

Regards,

Brar



Re: [GENERAL] declare variable in postgresql

2011-06-24 Thread Merlin Moncure
On Fri, Jun 24, 2011 at 3:41 AM, Jignesh Ramavat
 wrote:
> Hello,
>
> need help,
>
> Declare @TypeTransactionID As int";
>
> Select @TypeTransactionID=ID from TypeTransaction Where TypeTransactionCode
> = 'TxnBackupInc' ";
>
> these is in sqlserver. If i want to do same in postgresql then ???

http://www.postgresql.org/docs/9.0/static/plpgsql.html
http://www.postgresql.org/docs/9.0/static/plpgsql-declarations.html#PLPGSQL-DECLARATION-PARAMETERS

merlin



Re: [GENERAL] declare variable in postgresql

2011-06-24 Thread Pavel Stehule
Hello

2011/6/24 Jignesh Ramavat :
> Hello,
>
> need help,
>
> Declare @TypeTransactionID As int";
>
> Select @TypeTransactionID=ID from TypeTransaction Where TypeTransactionCode
> = 'TxnBackupInc' ";
>
> these is in sqlserver. If i want to do same in postgresql then ???
>

you can't do it - PostgreSQL does not support session variables

Regards

Pavel Stehule

> --
> Thanks & Regards,
> Jignesh Ramavat
> Software Engineer
>



[GENERAL] Pgfoundry problem with Npgsql bug tracker

2011-06-24 Thread Francisco Figueiredo Jr.
Hi all!

Sorry for sending this message here, but I couldn't find a better
place after asking in the pgfoundry support forum and not receiving any
answer. As that forum is full of spam, I don't know if it is being read very
often.

Here is my problem:

when I access the Npgsql bug tracker, I get the following error:

Error

Database Error: ERROR: could not open relation "artifact_message": No
such file or directory


I thought it was a general problem with pgfoundry, but other projects
I picked up at random don't have this problem when accessing their
respective bug trackers.

Things have been like this for a week. In the past we had some
problems which didn't last more than a day, so I think this may be
something else.

Would you mind getting in touch with the person who could check that
for me, or asking him/her to send me a reply?

We are having problems submitting new bug reports and checking the
status of current bugs.

Thanks in advance and, again, sorry for bothering you on this list.

-- 
Regards,

Francisco Figueiredo Jr.
Npgsql Lead Developer
http://www.npgsql.org
http://fxjr.blogspot.com
http://twitter.com/franciscojunior



Re: [GENERAL] Remote connection issues

2011-06-24 Thread Aritz Dávila
On Thursday, June 23, 2011 02:46:52 pm you wrote:
> On 6/23/11 3:24:12 AM, Aritz Dávila wrote:
> > Hi list,
> > 
> > I have installed postgresql 8.4 on Ubuntu server 10.4. I would like to
> > have remote access to this database so after reading I found out that
> > modifying pg_hba.conf and postgresql.conf will allow me to access
> > remotely.
> > 
> > The postgresql database is on 192.168.2.122. The port 5432 is open,
> > checked it with nmap -p1-65535 localhost. The server is comunicating
> > with other pcs from the subnet, I can connect to it through ssh.
> > 
> > Here is what I have done:
> > I enabled the following on the postgresql.conf file:
> > listen_addresses = '*'
> > port = 5432
> > 
> > My subnet is under 192.168.2.xxx so I added the following to the
> > pg_hba.conf: hostall all 192.168.2.0/32  trust
> 
> Ditto Raymond that you probably mean /24 here.
> 
> > After doing all this things, if I try to connect remotely I got a
> > connection refused error.
> > psql -h 192.168.2.122 -d database
> > psql: could not connect to server: Connection refused
> > 
> >  Is the server running on host "192.168.2.122" and accepting
> >  TCP/IP connections on port 5432?
> > 
> > Another strange thing is the following one, if I do the following on the
> > database server: psql -h localhost -d database, I grant access but if I
> > do the following psql -h 192.168.2.122 -d database on the database
> > server, I got a connection refused error.
> 
> Given that this is Linux, I would guess that there's some SELinux stuff
> enabled by default that's disallowing the connection, and that it really
> doesn't have anything to do with PostgreSQL.  I've had personal
> frustrations (and watched many others as well) with SELinux default
> configs that tend to deny lots of access by default and not really
> log anything telling you that they're denying it.
> 
> Could also be a firewall rule or any other OS mechanism that limits/
> controls access through IP.  With -h localhost, you're probably
> connecting through the unix domain socket, which isn't controlled
> by any firewall I'm aware of, and seems to be ignored as always
> safe to allow by most SELinux configs.
> 
> May want to consider disabling SELinux altogether (even if only as
> a temporary debugging step) and see if things start to work.


Thx for the answers, I'll take a look at the Linux side of things because, as I
thought, the postgres configuration seems OK.

Cheers



[GENERAL] declare variable in postgresql

2011-06-24 Thread Jignesh Ramavat
Hello,

need help,

Declare @TypeTransactionID As int";

Select @TypeTransactionID=ID from TypeTransaction Where TypeTransactionCode
= 'TxnBackupInc' ";

This is in SQL Server. If I want to do the same in PostgreSQL, then ???

-- 
Thanks & Regards,
Jignesh Ramavat
Software Engineer


Re: [GENERAL] to_timestamp() and timestamp without time zone

2011-06-24 Thread Adrian Klaver
On Thursday, June 23, 2011 6:18:18 pm David Johnston wrote:

> Also, is this coercion noted in the documentation anywhere?  I looked in
> the obvious locations (Data Type, Function, Appendix B).  There should
> probably be something obvious, in the Data Type section, like:
> 
> "When a Time Stamp with time zone is created the 'effective' time zone is
> determined and the input value is evaluated according to that time zone.
> If, due to Daylight Savings Time changes, the indicated point-in-time does
> not exist the time component is interpreted as if it were Standard Time and
> then converted to DST (commonly +1 hours)  For example: '2007-12-30
> 00:30:00 ART' does not exist because '2007-12-30' is the day of the change
> to DST; the attempt to create a timestamptz with this value will result in
> '2007-12-30 01:30:00 ART' which then is stored as '2007-12-29 10:30:00
> GMT' (ART = GMT - 3).  Be aware that during DST-to-STD changeover there
> are no 'missing' times but there is no way to reliably specify whether you
> are dealing with the first or the second occurrence of the time on that
> particular day.  The TimeZone specification does not allow one to
> specifically state '1:30AM during DST (1)' or '1:30AM during STD (2)'."

As I understand it, documentation patches are welcomed:)

> 
> David J.

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] UUID-OSP contrib module

2011-06-24 Thread Hiroshi Saito

Hi Michael-san.

Sorry for the very late reaction.

Although it was several years ago, I did the original work of porting OSSP-UUID to
the Windows platform with Ralf-san. He is a great developer!

http://www.ossp.org/pkg/lib/uuid/
However, I made a mistake in the patch then.

Later, a correction patch was made here:
http://winpg.jp/~saito/pg_work/OSSP_win32/
At that time, Ralf-san was going through a very busy period.
He has full knowledge of these contents. In the new version, it
was to be released with a build by VisualStudio. But, but... it is not
released yet for various reasons. :-(

Sorry again to all.

If this offer is helpful, I hope it can be supplied by EnterpriseDB.
Thanks!

Regards,
Hiroshi Saito

(2011/06/23 23:46), Michael Gould wrote:

I remember a few months ago that someone said that the UUID-OSP contrib
module did not work on Windows 64. Is this still a limitation?

Best Regards

Michael Gould
Intermodal Software Solutions

Michael Gould, Managing Partner
Intermodal Software Solutions, LLC
904.226.0978
904.592.5250 fax





Re: [GENERAL] Oracle / PostgreSQL comparison...

2011-06-24 Thread Dave Coventry
I would just like to add my voice to those praising the community
support provided by this list.

I am not a DBA, and merely tinker with a few databases, mostly on the
web. As such, my questions have occasionally bordered on the very
silly, but I have always had them answered courteously, helpfully and
with a great deal of patience, in stark contrast to MySQL lists which
tend to veer towards the sarcastic, the minimal and the obscure.

Kind Regards,

Dave Coventry



Re: [GENERAL] Oracle / PostgreSQL comparison...

2011-06-24 Thread Leif Biberg Kristensen
On Friday 24. June 2011 06.01.31 Greg Smith wrote:

> The idea that PostgreSQL is reverse engineered from Oracle is
> ridiculous.

Maybe he believes that SQL was invented by Oracle?

regards, Leif



Re: [GENERAL] Help making tablespaces work for my application

2011-06-24 Thread Craig Ringer

On 06/23/2011 09:37 PM, Natusch, Paul wrote:

I have an application for which data is being written to many disks
simultaneously. I would like to use a postgres table space on each disk.
If one of the disks crashes it is tolerable to lose that data, however,
I must continue to write to the other disks.


About the only way you'll be able to do that with PostgreSQL is to run 
one PostgreSQL instance per disk. Give each its own port, datadir, 
shared_buffers, etc.


I wouldn't expect that setup to perform particularly well, and it costs 
you the ability to have ACID rules apply between data on different 
disks. It's also a horribly inefficient use of RAM.


For this kind of task, it is typical to use a simple, dedicated tool to 
capture the writes from the sensors or whatever you are logging. Once 
the data has hit disk, another tool can read it in small batches and add 
it to the database for analysis and reporting.


Perhaps it'd help if you explained what you want - and why - with a 
little more background and detail?


--
Craig Ringer



Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-06-24 Thread tv
> Sorry for the late reply - but I still haven't found a solution,
> for example I have a PHP script with 5 consecutive SELECT
> statements (source code + problem described again under:
>
> http://stackoverflow.com/questions/6458246/php-and-pgbouncer-in-transaction-mode-current-transaction-is-aborted
>
> ) and if I add $db->beginTransaction(); $db->commit();
> around each $db->prepare("select ..."); execute();
> then my script will fail very often with
>
> SQLSTATE[25P02]: In failed sql transaction: 7 ERROR:  current
> transaction is aborted, commands ignored until end of transaction
> block

In that case there's something wrong - probably an error or something
that aborts the transaction. You have to investigate this first: see the
postgresql log and try to call errorInfo/errorCode on the PDO.

BTW why are you using prepared statements? That's great for running a
query multiple times with various parameter values, but according to the
PHP you're executing each statement exactly once. Even if the function is
called repeatedly, the statements will be prepared over and over because
you're calling a 'prepare' right before the execute.

So what you actually get is this

PREPARE
EXECUTE
PREPARE
EXECUTE

but you'd like to get this

PREPARE
EXECUTE
EXECUTE

If you can't do this, just use a plain PDO::query and those prepared
statement errors will be gone. Plus it might actually improve the
performance (with prepared statements the optimizer does not know the
actual values when planning the query, which sometimes prevents it from
choosing a good plan).

regards
Tomas




Re: [GENERAL] Oracle / PostgreSQL comparison...

2011-06-24 Thread Vincent Veyron
On Thursday, June 23, 2011 at 18:14 -0700, Rodrigo E. De León Plicet wrote:
> Here:
> 
> http://cglendenningoracle.blogspot.com/2011/06/oracle-vs-postgres-postgresql.html
> 
> Any comments?
> 

There is a previous post by the same author :

http://craigglendenning.blogspot.com/2009/03/i-fight-to-stay-focusedand-often-lose.html

First two paragraphs.

No further comments.

-- 
Vincent Veyron
http://marica.fr/
Claims and litigation management software for legal departments




Re: [GENERAL] Oracle / PostgreSQL comparison...

2011-06-24 Thread Leif Biberg Kristensen
On Friday 24. June 2011 03.14.39 Rodrigo E. De León Plicet wrote:
> Here:
> 
> http://cglendenningoracle.blogspot.com/2011/06/oracle-vs-postgres-postgresq
> l.html
> 
> Any comments?

I think he has a point with «Oracle as the second largest software company in
the world», which is a killer argument from the PHB point of view. They're big
because they're big.

regards, Leif



Re: [GENERAL] Oracle / PostgreSQL comparison...

2011-06-24 Thread Greg Smith

On 06/23/2011 10:28 PM, Stephen Frost wrote:

I love how he finishes with the claim that Oracle "keep their finger on
the pulse of where IT is headed", right after admitting that their
client is actually a huge piece of junk.
   


Oracle is able to keep their finger on the pulse of their customers, 
because they have their hands where they can firmly squeeze their...uh, 
wallets.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books




Re: [GENERAL] Help making tablespaces work for my application

2011-06-24 Thread Greg Smith

On 06/23/2011 09:37 AM, Natusch, Paul wrote:


I have an application for which data is being written to many disks 
simultaneously.  I would like to use a postgres table space on each 
disk.   If one of the disks crashes it is tolerable to lose that data, 
however, I must continue to write to the other disks.




Tablespaces are not useful for implementing this idea yet.


1. There is a single WAL log for the entire cluster, located in the 
pg_log subdirectory. If the disk containing the pg_log file crashed, 
does that mean the system would come to a halt.  Is there anyway to 
distribute this data so that WAL is located on the same media as the 
table space? An alternative would be to use raid with the disk that 
stores the pg_log subdirectory but that adds cost to the system.




Loss of the pg_xlog subdirectory and the WAL contained in it normally 
results in catastrophic database failure.  Recommended practice is to 
use a RAID-1 volume to make odds of that failure lower.


2.  If #1 was solved by using the raid approach, what happens if one 
of the disks containing one of my table spaces crashes.  At some point 
postgres will want to write the data from the WAL file to the crashed 
(unavailable) disk.Will postgres will be blocked at this point? 
 Is there some way to notify postgres that a specific disk is no 
longer available and that the entries in the WAL for this disk should 
either be purged or ignored? ( I'm willing to "throw away" the data on 
the crashed disk).




PostgreSQL can't be expected to operate sanely when faced with the loss 
of an individual tablespace.  It may be possible to recover from it, but 
you'll be doing something it's not designed to handle, and that effort 
may not succeed.


Note that any tablespace failure is likely to require taking down the
database to repair the involved tablespaces, so you're likely to have
downtime between a component failure and when you notice it and take
action.  The database really does not like having tablespaces just go
away in the middle of operations.


PostgreSQL 9.1 (not released yet, currently in beta) includes a new 
feature called "unlogged tables" that might make this sort of deployment 
possible.  If you created a tablespace for disposable data and put an 
unlogged table onto it, loss of that tablespace would be much less
likely to cause a problem.  So long as you recreated a new space for the 
unlogged table after restarting, you could probably recover having only 
lost the data on the crashed disk in this situation.



Clearly using raid on all of the disks would be a solution, but that 
is cost prohibitive.





On a cheap server I can easily RAID-1 mirror a pair of drives on Linux
using software RAID, and individual drives are $50 to $100 each.  If
your data isn't worth that much, so be it.  And even that's not enough to
really make me feel secure about the data--you really need to keep another
copy around as a backup, too.  You can treat your data as disposable and
expect to lose it when any single component fails, or you can include
some good redundancy practices in the design to reduce the odds of a
failure.  There really isn't a good solution providing partial
protection in the middle of those two.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support   www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books



Re: [GENERAL] Oracle / PostgreSQL comparison...

2011-06-24 Thread John R Pierce

I love how he finishes with the claim that Oracle "keep their finger on
the pulse


presumably, he means, the jugular ...



--
john r pierceN 37, W 122
santa cruz ca mid-left coast




Re: [GENERAL] Oracle / PostgreSQL comparison...

2011-06-24 Thread Albe Laurenz
Stephen Frost wrote:
> I love how he finishes with the claim that Oracle "keep their finger on
> the pulse of where IT is headed", right after admitting that their
> client is actually a huge piece of junk.

I guess that was just a typo.
Shouldn't it read "[Oracle can] keep their fingers on the throat of where IT is 
headed"?

Yours,
Laurenz Albe
