Re: [GENERAL] Time Zone design issues

2007-09-16 Thread Troy Rasiah


Alvaro Herrera wrote:
> novnov wrote:
>> I think I get the picture; this post is the closest to making sense to me (my
>> lack of understanding is the issue, obviously). But:
>>
>> What's the postgresql client, in a web app?
>>
>> When you write "The web app sets timezone='EST5EDT' and inserts a time of
>> '2007-07-11 12:30:00'." that's the black box that I'm asking about. There is
>> no web app other than the one I am writing. So, I think it means I need to
>> write code in my web app to handle the conversion of the postgres stored UTC
>> into user-relevant time, with their local time zone etc.
> 
> You are misunderstanding the "sets timezone=".  What it means is that
> you call the following SQL command:
> SET timezone TO 'EST5EDT';
> early in your webapp code.  The value to use, you get from the user
> profile (stored in a table perhaps).

Sorry for the ignorance... but what defines a session in this context in,
say, Perl?

Would it be something like

$dbh = DBI->connect("dbi:Pg:dbname=mydb", $user, $pass);

$dbh->do("SET timezone TO 'EST5EDT'");  # once, near the top of the session
# ... do your inserts here, interpreted in that timezone ...

$rc = $dbh->disconnect;


> 
>> If that's so, what I'm asking about is on the webapp side, and perhaps
>> people here are not so used to dealing with that. I don't know. Just
>> like one of the other (much appreciated) responders in this thread
>> suggested, the user's browser is likely offering the user's tz
>> info... but I am interested in seeing someone's mature system for
>> returning the user's local time based on their tz as stored in their
>> profile.
> 
> I wouldn't trust the browser's TZ, and you would need a way to override
> it.  So storing it in a table seems the easiest way.
> 
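Putting that advice into concrete SQL (a minimal sketch; the events table and its timestamptz column are assumptions for illustration):

```sql
-- Once per session, using the timezone read from the user's profile:
SET timezone TO 'EST5EDT';

-- With a timestamptz column, the input is interpreted in the session's
-- timezone and stored internally as UTC:
INSERT INTO events (happened_at) VALUES ('2007-07-11 12:30:00');

-- On output the value is rendered back in the session's timezone; a
-- session with timezone 'UTC' would see 16:30:00+00 for the same row:
SELECT happened_at FROM events;
```

No conversion code is needed in the web app itself; setting the session timezone is enough.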

-- 
Troy Rasiah

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] help w/ SRF function

2007-09-16 Thread Ow Mun Heng
On Mon, 2007-09-17 at 09:21 +0800, Ow Mun Heng wrote:
> Hi,
> 
> I want to use a SRF to return multi rows.
> 
> current SRF is pretty static.
> 
> create type foo_type as (
> id smallint,
> data1 int,
> data2 int
> )
> 
> CREATE OR REPLACE FUNCTION foo_func()
>   RETURNS SETOF foo_type AS
> $BODY$
>   SELECT
>   TRH.ID,
>   TRH.data1,
>   TRH.data2
>   FROM D 
>   INNER JOIN  TS 
>ON TS.id = D.id
>   inner join TRH
>on ts.id = trh.id
>   WHERE D.start_timestamp BETWEEN '8/1/2007' AND '9/8/2007'
>   And D.code IN ('ID_123')
> $BODY$
> LANGUAGE 'sql' IMMUTABLE STRICT;
> 
> I would like for the above to be a little bit more dynamic in that the
> start_timestamp and the code can be input-fields.
> 
> eg:
> 
> CREATE OR REPLACE FUNCTION foo_func(fromdate timestamp, todate
> timestamp, code text)
>   RETURNS SETOF foo_type AS
> $BODY$
>   SELECT
>   TRH.ID,
>   TRH.data1,
>   TRH.data2
>   FROM D 
>   INNER JOIN  TS 
>ON TS.id = D.id
>   inner join TRH
>on ts.id = trh.id
>   WHERE D.start_timestamp BETWEEN fromdate AND todate
>   And D.code IN (code)
> $BODY$
> LANGUAGE 'sql' IMMUTABLE STRICT;
> 
> How can I go about this? The above will fail due to missing columns
> fromdate/todate/code.
> 
> Or should I use plpgsql as SQL cannot handle variable substitution?
> 
> What about doing  dynamic SQL eg:
> 
> Dsql = select X,Y,Z from foo, join bar on bar.a = foo.a 
>where D.start_timestamp between ' || fromdate ||' and ' ||
> todate||'
> 
> execute DSQL
> 
> Thanks for any/all help.


Seems like I found this after I posted the question. (Doh! Why does this
always happen?)

Variable substitution can happen using $1/$2/$3 notation.

CREATE OR REPLACE FUNCTION foo_func(timestamp, timestamp, text)
 RETURNS SETOF foo_type AS
$BODY$
SELECT
TRH.ID,
TRH.data1,
TRH.data2
FROM D 
INNER JOIN  TS 
 ON TS.id = D.id
inner join TRH
 on ts.id = trh.id
WHERE D.start_timestamp BETWEEN $1 AND $2
And D.code IN ($3)
$BODY$
LANGUAGE 'sql' IMMUTABLE STRICT;


But if I were to use ALIASing, I get an error

eg:
  DECLARE
fromdate ALIAS for $1;
todate ALIAS for $2;
code ALIAS for $3;


ERROR:  syntax error at or near "ALIAS"
LINE 5: fromdate ALIAS for $1;
  ^
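For the record, ALIAS is PL/pgSQL syntax, which is why the LANGUAGE 'sql' function rejects it. A rough PL/pgSQL equivalent, assuming the table and type names from the snippets above (note STABLE rather than IMMUTABLE, since the function reads tables; RETURN QUERY needs 8.3, on 8.2 use a FOR ... RETURN NEXT loop instead):

```sql
CREATE OR REPLACE FUNCTION foo_func(timestamp, timestamp, text)
  RETURNS SETOF foo_type AS
$BODY$
DECLARE
  fromdate ALIAS FOR $1;
  todate   ALIAS FOR $2;
  v_code   ALIAS FOR $3;  -- renamed to avoid clashing with the D.code column
BEGIN
  RETURN QUERY
    SELECT TRH.id, TRH.data1, TRH.data2
    FROM D
    INNER JOIN TS  ON TS.id = D.id
    INNER JOIN TRH ON TS.id = TRH.id
    WHERE D.start_timestamp BETWEEN fromdate AND todate
      AND D.code = v_code;
END;
$BODY$
LANGUAGE plpgsql STABLE;
```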



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] New/Custom DataType - Altering definition / seeing definition in pgAdmin3

2007-09-16 Thread Ow Mun Heng
I've created a new custom datatype

eg: 

create type foo as (
id smallint,
data1 int,
data2 int
)

to view the datatype in psql I do
=> \d foo

what is the equivalent in pgAdmin3 or any other SQL query tool?


As I understand from the docs, I can't alter/change the datatype
definition. If I need to change it, I need to drop it and re-create it
again. Correct me if I'm wrong.
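One option from plain SQL (usable from pgAdmin3's query window or any client) is to read the attributes out of the catalogs; a composite type has a pg_class entry of relkind 'c':

```sql
SELECT a.attname, format_type(a.atttypid, a.atttypmod) AS data_type
FROM pg_attribute a
JOIN pg_class c ON c.oid = a.attrelid
WHERE c.relname = 'foo'   -- the composite type's name
  AND a.attnum > 0        -- skip system columns
ORDER BY a.attnum;
```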







---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Documentation fix regarding atan2

2007-09-16 Thread Andrew Maclean
Thanks.

On 9/15/07, Bruce Momjian <[EMAIL PROTECTED]> wrote:
>
>
> Change made.  Thanks.  Your documentation changes can be viewed in five
> minutes using links on the developer's page,
> http://www.postgresql.org/developer/testing.
>
>
> ---
>
>
> Andrew Maclean wrote:
> > In Table 9.4 of the documentation atan2 is described as follows:
> >   atan2(*x*, *y*) inverse tangent of *x*/*y*
> >
> > I am sure it should read as:
> >   atan2(*y*, *x*) inverse tangent of *y*/*x*
> >
> > This looks to be the standard C/C++ atan2(y, x) function.
> >
> > You can easily test this:
> > If y = 2, x = 1, then degrees(atan(y/x)) = 63.4, but if we proceed
> > according to the documentation, degrees(atan2(x,y)) =
> > degrees(atan2(1,2)) = 26.6, which is not the same as degrees(atan(y/x)).
> > So it must be degrees(atan2(y,x)) = degrees(atan2(2,1)) = 63.4.
> >
> >
> > Thanks
> >Andrew
> >
> > --
> > ___
> > Andrew J. P. Maclean
> > Centre for Autonomous Systems
> > The Rose Street Building J04
> > The University of Sydney  2006  NSW
> > AUSTRALIA
> > Ph: +61 2 9351 3283
> > Fax: +61 2 9351 7474
> > URL: http://www.acfr.usyd.edu.au/
> > ___
>
> --
> Bruce Momjian  <[EMAIL PROTECTED]>  http://momjian.us
> EnterpriseDB   http://www.enterprisedb.com
>
> + If your life is a hard drive, Christ can be your backup. +
>
> ---(end of broadcast)---
> TIP 4: Have you searched our list archives?
>
>   http://archives.postgresql.org/
>



-- 
___
Andrew J. P. Maclean
Centre for Autonomous Systems
The Rose Street Building J04
The University of Sydney  2006  NSW
AUSTRALIA
Ph: +61 2 9351 3283
Fax: +61 2 9351 7474
URL: http://www.acfr.usyd.edu.au/
___


Re: [GENERAL] pg_standby observation

2007-09-16 Thread Jeff Davis
On Sun, 2007-09-16 at 09:25 +0100, Simon Riggs wrote:
> Well, the definition of it working correctly is that a "restored log
> file..." message occurs. Even with archive_timeout set there could be
> various delays before that happens. We have two servers and a network
> involved, so the time might spike occasionally.
> 

The problem is, a "restored log file" message might appear in a
different language or with a different prefix, depending on the
settings. That makes it hard to come up with a general solution, so
everyone has to use their own scripts that work with their logging
configuration.

In my particular case, I want to know if those logs aren't being
replayed, regardless of whether it's a network problem or a postgres
problem.

It would be nice if there were a more standardized way to see when
postgres replays a log successfully.

> Touching a file doesn't really prove it's working either.
> 

Right. It's the best I have now, however, and should detect "most" error
conditions.

Regards,
Jeff Davis


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Statistics collection question

2007-09-16 Thread Phoenix Kiula
Well, first question: how can I check if autovacuum is working?


On 04/09/2007, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Phoenix Kiula" <[EMAIL PROTECTED]> writes:
> > Basically, what I am missing is some info on actually tweaking the
> > postgresql.conf to suit my system.
>
> No, that's *not* what you're missing.  I'm not sure what the problem
> is in your system, but I'm pretty sure that everything you have
> frantically been tweaking is unrelated if not outright
> counterproductive.  You need to stop tweaking and start some methodical
> evidence-gathering to figure out what the problem actually is.
>
> Here are some things I would suggest trying:
>
> 1. Do a VACUUM VERBOSE when the system is fast, and save the output.
> When the system is slow, do another VACUUM VERBOSE, and compare file
> sizes to see if anything seems markedly bloated.  (It might be less
> labor-intensive to copy pg_class.relname, reltuples, relpages columns
> into another table for safekeeping after the first VACUUM, and use SQL
> queries to look for markedly different sizes after the second VACUUM.)


Did this. Saved the files as text files. Did not find much difference
for the tables and indexes stuff. Number of pages required overall
remains the same, by and large. Do I also need to compare the
"pg_toast" type stuff?
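For reference, the bookkeeping Tom suggested can be sketched like this, using the scoresN names from below:

```sql
-- Snapshot after the first VACUUM VERBOSE (fast period):
CREATE TABLE scores1 AS
SELECT relname, reltuples, relpages FROM pg_class;

-- ... later, after another VACUUM VERBOSE (slow period):
CREATE TABLE scores4 AS
SELECT relname, reltuples, relpages FROM pg_class;

-- Compare the two snapshots for bloat:
SELECT s4.relname, s1.relpages AS pages_before, s4.relpages AS pages_after
FROM scores4 s4
JOIN scores1 s1 USING (relname)
WHERE s4.relpages <> s1.relpages;
```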



> 2. Set up a task to dump the results of
> select * from pg_locks, pg_stat_activity where pid = procpid
> into a log file every few seconds.  Compare what you see when things
> are fast with when they are slow.  In particular you should fairly
> easily be able to tell if the slow queries are waiting long for locks.



Yes, did. Saved them into four different tables (scores1 through scores4,
where scores1 represents a time when queries were superfast and
scores4 when it was pathetically slow). Then joined them all, two at a
time, to track differences. The only four rows that are different
across these four tables are related to my two major tables:

#  select scores4.relname, scores4.reltuples, scores4.relpages,
scores1.relpages from scores4
left join scores1 on scores4.relname = scores1.relname where
scores4.relpages <> scores1.relpages
;


        relname        |  reltuples  | relpages (scores4) | relpages (scores1)
 ----------------------+-------------+--------------------+--------------------
  idx_trads_userid     | 2.82735e+06 |              11652 |              11644
  idx_trads_modifydate | 2.82735e+06 |               7760 |               7744
  tradcount            |      201349 |               1391 |               1388
  trads_alias_key      | 2.82735e+06 |              16172 |              16135
 (4 rows)

Time: 2.073 ms


What do I make of this? From what I observe, some of the indexes
have a few more values and a few more pages thereof. This is exactly
how it should be, right? This is from a small database.




> 3. Log the output of "vmstat 1" over time, compare fast and slow
> periods.
>


Following is the vmstat from slow time:


 ~ > vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0 12  14136  15608   5208 355451600   200   1408 7  2  1 86 12
 0 14  14136  17208   5200 355296400 052 1137   372  0  0 23 77
 0 15  14136  17336   5204 355114000 060 1085   237  0  0 10 89
 0 16  14136  16832   5204 35511400064 0 1108   323  0  0 25 75
 0 15  14136  15872   5204 355114000 0 0 1066   242  0  0 25 75
 0 16  14136  17360   5196 354646800   492   304 1144   570  1  1 29 69
 0 17  14152  17744   5192 35428160   48 0   188 1127   169  1  0 25 74
 0 10  14172  23312   5216 354043200   528   292 1244   453  0  1 25 74
 2  3  14064  15888   5276 355014800  6644   964 1192   427  1  1 65 33
 0  2  13840  16656   5232 354859600 2470860 1413   882  1  2 75 23


Not sure how to read this. We're on 4GB RAM.

Thanks.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Locking entire database

2007-09-16 Thread Martijn van Oosterhout
On Sun, Sep 16, 2007 at 01:46:44PM +0300, Panagiwths Pediadiths wrote:
> Many transactions doing this in parallel end up inserting the value many
> times. Could I be missing something regarding how to set up the isolation
> level?
> Thanks!

No, the only easy way you can guarantee you won't insert duplicates is with
a unique index. The reason is that even under serializable mode your
algorithm can produce duplicates, because postgres doesn't do predicate
locking.

The reason unique indexes can do it is that they have special
locking requirements that, when met, guarantee the result. If for some
reason you can't use a unique index (can't think of one, but perhaps)
then you need to implement this locking yourself.

The easiest way is to have one lock and take it before running your
procedure. For more concurrency you can partition them (unique indexes
can be very efficient about this because they can see rows that you
can't; they go outside the normal visibility checks).
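A sketch of the unique-index route, borrowing the rdf/segment names used elsewhere in the thread (column names and text types are assumptions; the exception handler requires PL/pgSQL):

```sql
-- Enforce the rule: no two rows may share segment1.
CREATE UNIQUE INDEX rdf_segment1_key ON rdf (segment1);

-- Concurrent inserters then serialize on the index entry; the loser
-- gets a unique_violation, which can simply be swallowed:
CREATE OR REPLACE FUNCTION insert_if_absent(v1 text, v2 text)
RETURNS void AS
$BODY$
BEGIN
  INSERT INTO rdf (segment1, segment2) VALUES (v1, v2);
EXCEPTION WHEN unique_violation THEN
  NULL;  -- another transaction inserted it first; nothing to do
END;
$BODY$
LANGUAGE plpgsql;
```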

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] Locking entire database

2007-09-16 Thread Trevor Talbot
On 9/15/07, Panagiwths Pediadiths <[EMAIL PROTECTED]> wrote:

> Shouldn't the serializable level prevent these duplicates? As I understand
> it, serializable should give the same result as if the transactions were
> performed one after the other.

http://www.postgresql.org/docs/8.2/static/transaction-iso.html#MVCC-SERIALIZABILITY

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Locking entire database

2007-09-16 Thread Panagiwths Pediadiths


On Sat, 15 Sep 2007, Ron Johnson wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 09/15/07 19:59, Panagiwths Pediadiths wrote:
> >
> > On Sat, 15 Sep 2007, Ron Johnson wrote:
> >
> > On 09/15/07 03:28, Panagiwths Pediadiths wrote:
>  That's the fun part, I actually need to allow duplicates in specific cases
>  but not in this one :)
> > Same table?
> >> Yup
> >
>  Shouldn't the serializable level prevent these duplicates? As I understand
>  it, serializable should give the same result as if the transactions were
>  performed one after the other.
> > (Please don't top-post.)
> >
> > Seems to me that you are confused as to the "essence" of relational
> > databases.  In other words, the best (heck, even the acceptable) way
> > to design schemas, and how to control the flow of data in order to
> > achieve your ultimate "data" goal.
> >
> >
> >> I don't see why the case I suggest is so obscene
>
> Then you have not explained it to us clearly.
>
> (English language difficulties *are* an acceptable excuse...)
>
> >> More specifically, consider a table with two columns where the unique index
> >> is the two columns together.
> >
> >> However, at some stage of the application I want to insert into the
> >> database only if there is no element with a value at column 1 equal to
> >> the one I intend to insert.
> >
> >> Oddly, in serializable isolation mode, with two transactions performing
> >> such an insertion in parallel, one of the two transactions hits the
> >> phantom read case, whereas it should be protected by the isolation level.
>
> It should, *if* you do it properly.
>
> IOW, is your program structured like:
> BEGIN
> SELECT COUNT(*) INTO :cnt
> FROM rdf WHERE segment_1 = :some_val;
> IF :cnt == 1 THEN
>do one thing
> ELSE
>do another
> END IF;
> COMMIT;
>
> or is it structured:
> BEGIN
> SELECT COUNT(*) INTO :cnt
> FROM rdf WHERE segment_1 = :some_val;
> COMMIT;
> BEGIN
> IF :cnt == 1 THEN
>do one thing
> ELSE
>do another
> END IF;
> COMMIT;

Everything is done in the context of one transaction, e.g.

BEGIN
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

INSERT INTO table SELECT somewhere.value1, somewhere.value2 FROM
somewhere WHERE somewhere.value1 NOT IN ( SELECT table.segment1 FROM
table)

END

Many transactions doing this in parallel end up inserting the value many
times. Could I be missing something regarding how to set up the isolation
level?
Thanks!
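If a unique index is ruled out, the pattern above can be serialized explicitly with a table lock. A sketch, using the rdf/segment1 names from Ron's example and the somewhere table from the snippet above (SHARE ROW EXCLUSIVE conflicts with itself and with ordinary writers, while still allowing plain reads):

```sql
BEGIN;
-- Only one such transaction at a time gets past this point, so the
-- NOT IN check and the insert are atomic with respect to each other:
LOCK TABLE rdf IN SHARE ROW EXCLUSIVE MODE;

INSERT INTO rdf (segment1, segment2)
SELECT s.value1, s.value2
FROM somewhere s
WHERE s.value1 NOT IN (SELECT r.segment1 FROM rdf r);

COMMIT;
```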

>
>  On Fri, 14 Sep 2007, Scott Marlowe wrote:
> 
> > On 9/14/07, Panagiotis Pediaditis <[EMAIL PROTECTED]> wrote:
> >> A simpler example,
> >> In the context of one transaction i do many queries of the form
> >>   INSERT INTO table value WHERE value NOT IN TABLE;
> >>
> >> If i have 2 processes running the same 100s of these at the same time i
> >> end up with duplicates.
> >> Even with isolation set to serializable
> >> any ideas?
> > Unique index?
>
> - --
> Ron Johnson, Jr.
> Jefferson LA  USA
>
> Give a man a fish, and he eats for a day.
> Hit him with a fish, and he goes away for good!
>
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.6 (GNU/Linux)
>
> iD8DBQFG7IaUS9HxQb37XmcRAq/bAJwNlJG2BNqfTbXPxd2sa6GsQn3nwQCfXaDo
> BMR4Lple09XnPB5w11geonY=
> =g8lJ
> -END PGP SIGNATURE-
>
> ---(end of broadcast)---
> TIP 3: Have you checked our extensive FAQ?
>
>http://www.postgresql.org/docs/faq
>

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] pg_standby observation

2007-09-16 Thread Simon Riggs
On Thu, 2007-09-13 at 11:38 -0700, Jeff Davis wrote:
> I think it would be useful if pg_standby (in version 8.3 contrib) could
> be observed in some way.
> 
> Right now I use my own standby script, because every time it runs, it
> touches a file in a known location. That allows me to monitor that file,
> and if it is too stale, I know something must have gone wrong (I have an
> archive_timeout set), and I can send an SNMP trap.
> 
> Would it be useful to add something similar to pg_standby? Is there a
> better way to detect a problem with a standby system, or a more
> appropriate place?
> 
> The postgres logs do report this also, but it requires more care to
> properly intercept the "restored log file ... from archive" messages.

Well, the definition of it working correctly is that a "restored log
file..." message occurs. Even with archive_timeout set there could be
various delays before that happens. We have two servers and a network
involved, so the time might spike occasionally.

Touching a file doesn't really prove it's working either.

Not sure what to suggest otherwise.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] pg_standby observation

2007-09-16 Thread Simon Riggs
On Thu, 2007-09-13 at 15:13 -0500, Erik Jones wrote:
> On Sep 13, 2007, at 3:02 PM, Jeff Davis wrote:
> 
> > On Thu, 2007-09-13 at 14:05 -0500, Erik Jones wrote:
> >> If you include the -d option pg_standby will emit logging info on
> >> stderr so you can tack on something like 2>> logpath/standby.log.
> >> What it is lacking, however, is timestamps in the output when it
> >> successfully recovers a WAL file.  Was there something more you were
> >> looking for?
> >
> > I don't think the timestamps will be a problem, I can always pipe it
> > through something else.
> >
> > I think this will work, but it would be nice to have something that's a
> > little more well-defined and standardized to determine whether some kind
> > of error happens during replay.
> 
> Right.  The problem there is that there really isn't anything
> standardized about pg_standby, yet.  Or, if it is, it hasn't been
> documented, yet.  Perhaps you could ask Simon about the possible
> outputs on error conditions so that you'll have a definite list to
> work with?

There are a few different kinds of errors pg_standby can generate, though
much of its behaviour depends upon the command-line switches.

I wasn't planning on documenting all possible failure states. We don't
do that anywhere else in the docs.

Happy to consider any requests for change. 

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PostgreSQL Glossary?

2007-09-16 Thread Filip Rembiałkowski
2007/9/13, Nikolay Samokhvalov <[EMAIL PROTECTED]>:
> Hi all,
>
> does anybody know where to find a good list of PostgreSQL terms
> (including both traditional terms and Postgres-specific ones)?

Yes.
Use http://www.postgresql.org/docs/8.2/static/bookindex.html

-- 
Filip Rembiałkowski

---(end of broadcast)---
TIP 6: explain analyze is your friend