Re: [GENERAL] Fwd: Re: Dynamic update of a date field

2012-02-19 Thread Jasen Betts
On 2012-02-17, Steve Crawford scrawf...@pinpointresearch.com wrote:
 On 02/16/2012 02:45 PM, John R Pierce wrote:
 On 02/16/12 2:34 PM, David Salisbury wrote:

 Would it not be advantageous to replicate information in the above
 form if you wanted to, say, get all records in the month of May, and
 therefore create an index on the month field?  I would think that
 would be more efficient than creating a functional index on a timestamp.
 And if you're not too picky, that would give an index that couldn't
 be done on a timestamptz field, as that's mutable.

 an index on a timestamp will work just fine on date_trunc('month', 
 timestampfield)



 You can try but PostgreSQL will respond:
 ...functions in index expression must be marked IMMUTABLE...

 The current month returned by extract or date_trunc depends on the 
 current time zone. New York will see March 3-hours ahead of us 
 left-coasters.

David is claiming that a functional index isn't needed at all; perhaps
postgres knows what date_trunc does and knows how to use an ordinary
btree index.  (He could be right.)
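
One way to test that speculation (the table and index names here are made up
for illustration):

-- assuming a table like: CREATE TABLE events (id serial PRIMARY KEY, ts timestamp);
CREATE INDEX events_ts_idx ON events (ts);   -- plain btree on the raw column
EXPLAIN SELECT * FROM events
 WHERE date_trunc('month', ts) = timestamp '2012-05-01';
-- if the plan shows a sequential scan, the plain index is not being used
-- for this predicate and a functional index is still needed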

date_trunc doesn't get you just the month, it gets you the year and month;
date_trunc on timestamp or date is immutable and can be indexed on.

The original post was storing month and day (this information could be
useful for finding records having anniversaries, e.g. birthdays); it was
not storing the year.  Month and day can be got using extract, and the same
rules of mutability apply.
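
A minimal sketch of both flavours of functional index, reusing the made-up
events table from above (date_part is what extract expands to):

-- year+month: allowed because date_trunc on plain timestamp is IMMUTABLE
CREATE INDEX events_month_idx ON events (date_trunc('month', ts));

-- anniversary-style lookups: index month and day separately
CREATE INDEX events_mmdd_idx
    ON events (date_part('month', ts), date_part('day', ts));

-- a query that can use the second index:
SELECT * FROM events
 WHERE date_part('month', ts) = 5 AND date_part('day', ts) = 17;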

 Note: storing the month in a separate field does not solve this problem 
 - it just shuffles it around and requires additional mechanisms to 
 update that field when the timestamp field changes.

Yeah, I can't see any way that storing these values separately is more
efficient: the indexes are the same size, but the data records on disk
are larger.  I don't think postgres will re-compute the function result
unless one of the function's inputs has been changed.

-- 
⚂⚃ 100% natural




Re: [GENERAL] Fwd: Re: Dynamic update of a date field

2012-02-19 Thread Jasen Betts
On 2012-02-16, David Salisbury salisb...@globe.gov wrote:


 On 2/16/12 7:27 AM, Andreas Kretschmer wrote:
 Musial, Jan (GIUB) jan.mus...@giub.unibe.ch wrote:
 smallint,month smallint,day smallint,time_stamp date); I would like to

 That's silly, use one (and only one) field, timestamp (or timestamptz).
 Never, ever use multiple columns for the same information!

 Would it not be advantageous to replicate information in the above
 form if you wanted to, say, get all records in the month of May, and
 therefore create an index on the month field?  I would think that
 would be more efficient than creating a functional index on a timestamp.
 And if you're not too picky, that would give an index that couldn't
 be done on a timestamptz field, as that's mutable.

Using AT TIME ZONE you can convert timestamptz to timestamp,
which can then be fragmented immutably for indexing using extract
or to_char.
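
A minimal sketch of that approach (table, column, and zone are illustrative
assumptions):

-- assuming: CREATE TABLE events (id serial PRIMARY KEY, ts timestamptz);
-- pinning ts to a fixed zone makes the expression immutable and indexable:
CREATE INDEX events_month_utc_idx
    ON events (date_part('month', ts AT TIME ZONE 'UTC'));

-- the query has to use the same expression to hit the index:
SELECT * FROM events
 WHERE date_part('month', ts AT TIME ZONE 'UTC') = 5;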



-- 
⚂⚃ 100% natural




Re: [GENERAL] Easy form of insert if it isn't already there?

2012-02-19 Thread Jasen Betts
On 2012-02-15, Chris Angelico ros...@gmail.com wrote:
 Periodically I find myself wanting to insert into some table,
 specifying the primary key column(s), but to simply ignore the request
 if it's already there. Currently I have two options:

 1) Do the insert as normal, but suppress errors.
 SAVEPOINT foo;
 INSERT INTO table (col1,col2,col3) VALUES (val1,val2,val3);
 (if error) ROLLBACK TO SAVEPOINT foo;

 2) Use INSERT... SELECT:
 INSERT INTO table (col1,col2,col3) SELECT val1,val2,val3 WHERE NOT
 EXISTS (SELECT * FROM table WHERE col1=val1 AND col2=val2)

 The former makes unnecessary log entries, the latter feels clunky. Is
 there some better way?

Neither of those works all of the time.

It's not until the transaction is committed that you can know that it
was successful (ignoring 3-phase for the sake of clarity).

The best way is probably method 2, but remember to handle the errors
that you will still get sometimes.
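
A rough sketch of method 2 with the remaining race handled (the table and
values are made up; assumes a unique constraint on col1, col2):

DO $$
BEGIN
    INSERT INTO mytable (col1, col2, col3)
    SELECT 'val1', 'val2', 'val3'
    WHERE NOT EXISTS (
        SELECT 1 FROM mytable WHERE col1 = 'val1' AND col2 = 'val2'
    );
EXCEPTION
    WHEN unique_violation THEN
        NULL;  -- a concurrent insert beat us to it; ignore
END
$$;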

-- 
⚂⚃ 100% natural




[GENERAL] Postgres 9.1 package for i.MX51X processor from Freescale

2012-02-19 Thread Jayashankar K B
Hi,

Is there a ready-made Postgres 9.1 package for the i.MX51X processor (ARM Cortex 
architecture) available, or do I need to compile the Postgres source myself?
I need it for a board with the i.MX51 processor running Linux (a variant of 
the Debian distribution, based on Linux kernel 2.6.35).
If I need to compile the Postgres source, what CC flags should I use for this 
particular processor architecture?
If you have any ideas, please share.

Thanks and Regards
Jay





Re: [GENERAL] Question on Rules

2012-02-19 Thread David Johnston
-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of 
mgo...@isstrucksoftware.net
Sent: Saturday, February 18, 2012 5:17 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Question on Rules

I am creating a rule which will copy a record, when it is changed, to an 
audit table.  My question is that the first column is a UUID data type with a 
default defined as: auditaccessorid uuid DEFAULT
isscontrib.uuid_generate_v4() NOT NULL,

Right now I've got that set to NULL to allow the parser to compile. 
What value should I have in here since I want a newly created UUID?

CREATE RULE log_accessor AS ON UPDATE TO iss.accessor
WHERE NEW.* <> OLD.*
DO INSERT INTO iss.auditaccessor VALUES (NULL,
  'C',
  new.loaddtlid, 
  new.seqno, 
  new.billable,
  new.payind,
  new.code,
  new.description, 
  new.ref,
  new.tractororcarrierflag, 
  new.tractororcarrierno,
  new.tractorpct,
  new.charge,
  new.type,
  new.checkdate,
  new.checkno,
  new.processed,
  new.itemflag, 
  new.tractortermloc,
  new.cost,
  new.batchno,
  new.editdatetime,  
  new.edituser);

Best Regards,
 
Michael Gould
Intermodal Software Solutions, LLC
904-226-0978




INSERT INTO table (serial_col1) VALUES (DEFAULT);

Also, I presume you have a good reason for using a RULE instead of a TRIGGER?
If not, you should default to a TRIGGER for this kind of behavior.
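
For what it's worth, a minimal sketch of the trigger version (the function and
trigger names are made up, and the column list is abbreviated):

CREATE OR REPLACE FUNCTION iss.log_accessor_fn() RETURNS trigger AS $$
BEGIN
    -- DEFAULT in the first position lets auditaccessorid pick up its
    -- uuid_generate_v4() default:
    INSERT INTO iss.auditaccessor
    VALUES (DEFAULT, 'C', NEW.loaddtlid, NEW.seqno /* , ...remaining columns... */);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log_accessor
    AFTER UPDATE ON iss.accessor
    FOR EACH ROW
    WHEN (OLD.* IS DISTINCT FROM NEW.*)
    EXECUTE PROCEDURE iss.log_accessor_fn();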

David J.





Re: [GENERAL] [GENERA]: Postgresql-9.1.1 synchronous replication issue

2012-02-19 Thread Venkat Balaji
On Thu, Feb 16, 2012 at 8:14 PM, Adrian Klaver adrian.kla...@gmail.com wrote:

 On Wednesday, February 15, 2012 10:21:02 pm Venkat Balaji wrote:
  Adrian,
 
  Thanks a lot !
 
   So in this case you are not waiting for confirmation of the commit being
   flushed to disk on the standby.  In that case you are bypassing the primary
   reason for sync replication. The plus is transactions on the master will
   complete faster and do so in the absence of the standby. The minus is that
   you are in sort of an in-between state.
 
  I understand. My worry and requirement is to ensure the master is not
  disturbed for any reason.
  In sync rep, the biggest worry is that if the standby server is unavailable
  and down for a longer time, the master hangs and stays in that state until
  the standby comes back up, or replication must be broken temporarily (until
  the standby comes back up) so that the master runs without interruption.
  This is a costly exercise on production from a downtime perspective.

 So just use regular streaming replication without sync rep. You get
 record-based transaction shipping without having to wait for the standby.
 You will need to make sure that wal_keep_segments is big enough to cover
 any downtime on the standby (you would need that for sync rep also).


We already have streaming replication configured, so we have rolled back
the plan of setting up synchronous replication.
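
For the archives, a minimal sketch of the knobs and checks mentioned above
(the wal_keep_segments value is only an illustrative assumption):

-- on the primary, in postgresql.conf:
--   wal_keep_segments = 256   # roughly 4 GB of retained WAL at 16 MB/segment
-- then, from psql on the primary, confirm the standby is streaming:
SHOW wal_keep_segments;
SELECT client_addr, state, sync_state FROM pg_stat_replication;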

Thanks,
VB