Re: [GENERAL] Get Unix timestamp from SQL timestamp through libpq

2010-01-11 Thread Yan Cheng Cheok
That doesn't work: strftime is used to convert a date and time to a string, not to parse one.

Thanks and Regards
Yan Cheng CHEOK


--- On Mon, 1/11/10, Vincenzo Romano  wrote:

> From: Vincenzo Romano 
> Subject: Re: [GENERAL] Get Unix timestamp from SQL timestamp through libpq
> To: "Yan Cheng Cheok" 
> Cc: pgsql-general@postgresql.org
> Date: Monday, January 11, 2010, 3:25 PM
> man 3 strftime
> 
> 
> 2010/1/11 Yan Cheng Cheok :
> > I tried already. Neither of them accepts a string
> parameter and converts it to a unix timestamp (long).
> >
> > Thanks and Regards
> > Yan Cheng CHEOK
> >
> >
> > --- On Mon, 1/11/10, Vincenzo Romano 
> wrote:
> >
> >> From: Vincenzo Romano 
> >> Subject: Re: Get Unix timestamp from SQL timestamp
> through libpq
> >> To: "Yan Cheng Cheok" 
> >> Cc: pgsql-general@postgresql.org
> >> Date: Monday, January 11, 2010, 2:58 PM
> >> Give a try to:
> >>
> >> man 2 time
> >> man 3 ctime
> >>
> >>
> >> 2010/1/11 Yan Cheng Cheok :
> >> > I know I can convert SQL timestamp to unix
> timestamp,
> >> using the following way.
> >> >
> >> > SELECT extract(epoch FROM now());
> >> >
> >> > Now, I have a stored procedure function,
> which will
> >> directly return a table row to the caller. One of
> the row
> >> field is "timestamp" type.
> >> >
> >> > In my application, I am using libpq. I wish
> to use
> >> libpq functions (or any c/c++ function), to
> convert
> >> "2010-01-11 13:10:55.283" into unix timestamp. Off
> course, I
> >> can create another stored procedure named
> >> >
> >> > SQLTimestamp2UnixTimestamp
> >> > SELECT extract(epoch FROM $1);
> >> >
> >> > But I just wish to accomplish this task with
> a single
> >> c/c++ function call, without involving stored
> procedure.
> >> >
> >> > Any suggestion? Thanks!
> >> >
> >> > Thanks and Regards
> >> > Yan Cheng CHEOK
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Vincenzo Romano
> >> NotOrAnd Information Technologies
> >> cel. +39 339 8083886  | gtalk. vincenzo.rom...@notorand.it
> >> fix. +39 0823 454163  | skype. notorand.it
> >> fax. +39 02 700506964 | msn.   notorand.it
> >> NON QVIETIS MARIBVS NAVTA PERITVS
> >>
> >
> >
> >
> >
> >
> 
> 
> 
> -- 
> Vincenzo Romano
> NotOrAnd Information Technologies
> cel. +39 339 8083886  | gtalk. vincenzo.rom...@notorand.it
> fix. +39 0823 454163  | skype. notorand.it
> fax. +39 02 700506964 | msn.   notorand.it
> NON QVIETIS MARIBVS NAVTA PERITVS
> 
> 





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Set Returning C-Function with cache over multiple calls (with different arguments)

2010-01-11 Thread Thilo Schneider
Dear list,

Currently I am working on a user C-Function which should create a cache object 
on the first call and afterwards return a set of computed values for each 
argument combination it is called with.

My problem is how to keep the cache object alive over multiple calls. Without 
the SRF machinery I could use fcinfo->flinfo->fn_extra for my pointer to the 
data. This is now used by the FuncCallContext structure, which is destroyed 
every time SRF_RETURN_DONE is called, so user_fctx is not the way to go either.

As a minimal example look at the function provided below.

--- snip ---
PG_FUNCTION_INFO_V1(test);
Datum test(PG_FUNCTION_ARGS)
{
    MemoryContext old_context;
    FuncCallContext *funcctx;
    void *str;      /* pointer to the cached structure */

    if (SRF_IS_FIRSTCALL()) {
        funcctx = SRF_FIRSTCALL_INIT();

        // This is the structure potentially generated in previous calls
        str = funcctx->user_fctx;

        // If the structure does not exist or the geometry array has
        // changed, it has to be created.
        if (!str) {
            elog(NOTICE, "create new");

            old_context = MemoryContextSwitchTo(fcinfo->flinfo->fn_mcxt);

            // Fill str with data around here ...

            MemoryContextSwitchTo(old_context);

            funcctx->user_fctx = str;
        }
    }
    funcctx = SRF_PERCALL_SETUP();
    SRF_RETURN_DONE(funcctx);
}
--- snip ---

To make the problem perfectly clear, here is the SQL code this should work with:

--- snip ---
CREATE OR REPLACE FUNCTION test(int, int[] )
  RETURNS SETOF int AS 'myfunc', 'test'
  LANGUAGE 'c' IMMUTABLE STRICT
  COST 1;

SELECT test(number, array(SELECT integers FROM another_table)) FROM numbers;
--- snip ---

As creating the cache object is by far the most expensive part of the desired 
function, it should be possible to create the cache only once for the whole 
query - using only the arguments in the array, which do not change over 
multiple calls.

Is there a way around this problem? Is there another pointer I could use that 
I do not know of yet?

Thanks in advance,
Thilo Schneider 
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Get Unix timestamp from SQL timestamp through libpq

2010-01-11 Thread Craig Ringer

On 11/01/2010 2:16 PM, Yan Cheng Cheok wrote:

I know I can convert SQL timestamp to unix timestamp, using the following way.

SELECT extract(epoch FROM now());

Now, I have a stored procedure function, which will directly return a table row to the 
caller. One of the row field is "timestamp" type.

In my application, I am using libpq. I wish to use libpq functions (or any c/c++ 
function), to convert "2010-01-11 13:10:55.283" into unix timestamp. Of 
course, I can create another stored procedure named

SQLTimestamp2UnixTimestamp
SELECT extract(epoch FROM $1);

But I just wish to accomplish this task with a single c/c++ function call, 
without involving stored procedure.

Any suggestion? Thanks!


Wrap strptime(...) to provide a fixed format string. strptime is 
available on any POSIX.1-2001 system. It's not present on Windows, so if 
you have to target win32 as well you'll need to include your own 
implementation on that platform.


See "man 3 strptime"

--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to get DATE in server locale format

2010-01-11 Thread Andrus

Server lc_times contains non-US locale.

SELECT CURRENT_DATE::TEXT

still returns the date in format YYYY-MM-DD

How to get date in server locale format ?

Andrus.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Custom Field for a table row returned from stored procedure

2010-01-11 Thread Yan Cheng Cheok
I have a "lot" table with 2 columns; one of the columns is the current 
timestamp.

I am trying to return another custom field whose value is calculated from the 
timestamp.

(For simplicity, I include only 1 field in following example)

CREATE OR REPLACE FUNCTION create_lot(text)
  RETURNS lot AS
$BODY$DECLARE
configurationFile ALIAS FOR $1;
_lot lot;
BEGIN 
INSERT INTO lot(configuration_file)
VALUES(configurationFile) RETURNING  *, extract(epoch from timestamp) as 
timestampex INTO _lot;
return _lot;
END;$BODY$
  LANGUAGE 'plpgsql' VOLATILE
  COST 100;
ALTER FUNCTION create_lot(text) OWNER TO postgres;

I expect the table to return two fields for a single row: 1st is 
configuration_file, 2nd is timestamp, 3rd is timestampex.

However, I am only able to retrieve configuration_file and timestamp. How can I 
obtain timestampex too?

Thanks and Regards
Yan Cheng CHEOK


  


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to get DATE in server locale format

2010-01-11 Thread Leif Biberg Kristensen
On Sunday 10. January 2010 22.57.38 Andrus wrote:
> Server lc_times contains non-US locale.
> 
> SELECT CURRENT_DATE::TEXT
> 
> still returns the date in format YYYY-MM-DD
> 
> How to get date in server locale format ?

Pg doesn't care about lc_times.

http://www.postgresql.org/docs/current/static/datatype-datetime.html#datatype-datetime-output2-table

pgslekt=> SELECT CURRENT_DATE::TEXT;
    text
------------
 2010-01-11
(1 row)

pgslekt=> set datestyle to german;
SET
pgslekt=> SELECT CURRENT_DATE::TEXT;
    text
------------
 11.01.2010
(1 row)

regards,
-- 
Leif Biberg Kristensen
http://solumslekt.org/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Custom Field for a table row returned from stored procedure

2010-01-11 Thread A. Kretschmer
In response to Yan Cheng Cheok :
> I have a "lot" table with 2 columns, with one of the column is current 
> timestamp.
> 
> I try to return another custom fields, which its calculation is based on 
> timestamp.
> 
> (For simplicity, I include only 1 field in following example)
> 
> CREATE OR REPLACE FUNCTION create_lot(text)
>   RETURNS lot AS
> $BODY$DECLARE
> configurationFile ALIAS FOR $1;
> _lot lot;
> BEGIN 
> INSERT INTO lot(configuration_file)
> VALUES(configurationFile) RETURNING  *, extract(epoch from timestamp) as 
> timestampex INTO _lot;
> return _lot;
> END;$BODY$
>   LANGUAGE 'plpgsql' VOLATILE
>   COST 100;
> ALTER FUNCTION create_lot(text) OWNER TO postgres;
> 
> I expect the table returns two field for a single row. 1 is 
> configuration_file, 2nd is timestamp, 3rd is timestampex.
^^   ^

I think you expected the function to return 3 fields, right?


> 
> However, I only able to retrieve configuration_file and timestamp. How can I 
> obtain timestampex too?

You have defined your function "RETURNS lot", and the data-type "lot"
contains only 2 fields.

I think you have to rewrite your function; a simple example:

test=# create table lot (a int, b int);
CREATE TABLE
test=*# create or replace function insert_lot(in _a int, in _b int, out x 
int,out y int,out z int) as $$begin 
  insert into lot values (_a, _b); 
  x:=_a; 
  y:=_b; 
  z:=_a+_b; 
  return; 
end;$$ language plpgsql;
CREATE FUNCTION
test=*# select * from insert_lot(2,5);
 x | y | z
---+---+---
 2 | 5 | 7
(1 row)



HTH, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Composite types questions

2010-01-11 Thread Vincenzo Romano
Hi all.

It's not clear to me how composite values are used in conditions (WHERE/CHECK).
In my case I have something like this:

-- begin snippet

CREATE TABLE test_tab (
  col1 timestamp not null,
  col2 int8 not null,
  col3 text not null
);

CREATE INDEX i_test_tab_col1 ON test_tab( col1 );

SELECT *
  FROM test_tab
  WHERE (date_trunc('week',col1),col2,col3)=('val1'::timestamp,val2,'val3')
;
-- end snippet

For a number of reasons I cannot split the WHERE condition into the
"usual" AND-list.
My questions are:
Can the i_test_tab_col1 INDEX be taken into account from the query planner?
What if I define a functional index on col1?
Does the same apply to TABLE-level CHECK conditions?

Thanks in advance.

-- 
Vincenzo Romano
NotOrAnd Information Technologies
NON QVIETIS MARIBVS NAVTA PERITVS

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Huge iowait during checkpoint finish

2010-01-11 Thread Anton Belyaev
Hello Greg,

Thanks for you extensive reply.

2010/1/9 Greg Smith :
> Anton Belyaev wrote:
>>
>> I think all the IOwait comes during sync time, which is 80 s,
>> according to the log entry.
>>
>
> I believe you are correctly diagnosing the issue.  The "sync time" entry in
> the log was added there specifically to make it easier to confirm this
> problem you're having exists on a given system.
>
>> bgwriter_lru_maxpages = 0 # BG writer is off
>> checkpoint_segments = 45
>> checkpoint_timeout = 60min
>> checkpoint_completion_target = 0.9
>>
>
> These are reasonable settings.  You can look at pg_stat_bgwriter to get more
> statistics about your checkpoints; grab a snapshot of that now, another one
> later, and then compute the difference between the two.  I've got an example
> of that http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm
>
> You should be aiming to have a checkpoint no more than every 5 minutes, and
> on a write-heavy system shooting for closer to every 10 is probably more
> appropriate.  Do you know how often they're happening on yours?  Two
> pg_stat_bgwriter snapshots from a couple of hours apart, with a timestamp on
> each, can be used to figure that out.
>

Checkpoints happen about once an hour, sometimes a bit more often (every 30
minutes) during daily peaks.

>> I had mostly the same config with my 8.3 deployment.
>> But hardware is different:
>> Disk is software RAID-5 with 3 hard drives.
>> Operating system is Ubuntu 9.10 Server x64.
>>
>
> Does the new server have a lot more RAM than the 8.3 one?  Some of the
> problems in this area get worse the more RAM you've got.
>

Yes, new server has 12 GB while old one only 8 GB.

> Does the new server use ext4 while the old one used ext3?
>

Same ext3 filesystem.

> Basically, you have a couple of standard issues here:
>
> 1) You're using RAID-5, which is not known for good write performance.  Are
> you sure the disk array performs well on writes?  And if you didn't
> benchmark it, you can't be sure.
>

I did some dd benchmarks (according to
http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm):
The old server with its "hardware RAID-1" shows 60 MB/s on write.
The new server with software RAID-5 shows 85 MB/s on write.

> 2) Linux is buffering a lot of writes that are only making it to disk at
> checkpoint time.  This could be simply because of (1)--maybe the disk is
> always overloaded.  But it's possible this is just due to excessive Linux
> buffering being lazy about the writes.  I wrote something about that topic
> at http://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html you
> might find interesting.
>

Old server has
dirty_ratio = 10
dirty_background_ratio = 5

New server had
dirty_ratio = 20
dirty_background_ratio = 10

Given all the tests and measurements above:

The server has more RAM, leaving Linux more room for write cache. During
the dd test, the Dirty value in /proc/meminfo went up to 2 GB.

RAID-5 is a bit faster (at least on sequential writes). The drives aren't
overloaded, because their utilization during the lengthy checkpoint is
low. IOwait problems occur only during the final sync part of the
checkpoint, and during this short period the drives are almost 100%
utilized (according to sar -d 1).

I played a bit with setting dirty_background_ratio = 1, but somehow this
had a negative effect, which is strange. I hoped it would spread the load
from the 2-minute sync period across the 1-hour checkpoint span, but it
did not.

As a result, I still don't know where the real problem is. The drives
aren't overloaded, and the Linux cache is really mysterious: modifying its
parameters does not give the desired effect.

Thanks.
Anton.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Huge iowait during checkpoint finish

2010-01-11 Thread Anton Belyaev
2010/1/9 Scott Marlowe :
> On Fri, Jan 8, 2010 at 3:07 PM, Greg Smith  wrote:
>> Basically, you have a couple of standard issues here:
>>
>> 1) You're using RAID-5, which is not known for good write performance.  Are
>> you sure the disk array performs well on writes?  And if you didn't
>> benchmark it, you can't be sure.
>
> This can be doubly bad if he's now moved to a set of disks that are
> properly obeying fsync but was on disks that were lying about it
> before.
>

Scott, thanks for the interesting suggestion.

And how do I check this?
Old RAID-1 has "hardware" LSI controller.
I still have access to old server.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table appears on listing but can't drop it

2010-01-11 Thread Fernando Morgenstern
Em 08/01/2010, às 15:58, Adrian Klaver escreveu:

> 
> Actually what is strange is that your previous listing :
> postgres=# select '"' || datname || '"' from pg_database;
>   ?column?
> -
>  "template1"
>  "template0"
>  "t1"
>  "skynet"
> 
> is not the same as the one above:
> 
> postgres=# select '"' || datname || '"' from pg_database;
>  ?column?
> -
> "template1"
> "template0"
> "postgres"
> "t1"
> "pgpool"
> "skynet"
> 
> In particular the presence of postgres,t1 and pgpool.
> 
> Are you sure which cluster you are pointing at and whether the psql version 
> matches the server version?
> 
> -- 
> Adrian Klaver
> adrian.kla...@gmail.com

Hi,

The reason for pgpool is that we were using it but decided to stop due to some 
problems. At the moment we have pgpool with one node only. Also, I am 
connecting directly to postgres in order to verify this problem.

The difference between this and the previous listing is that I am manually 
removing database names, as they contain client names that I don't want to 
share here.

Best Regards,
---

Fernando Marcelo
www.consultorpc.com
ferna...@consultorpc.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table appears on listing but can't drop it

2010-01-11 Thread Fernando Morgenstern
Em 09/01/2010, às 19:40, hubert depesz lubaczewski escreveu:

> On Fri, Jan 08, 2010 at 02:39:03PM -0200, Fernando Morgenstern wrote:
>> postgres=# drop database skynet;
>> ERROR:  database "skynet" does not exist
> 
> do:
> 
> psql -l | hexdump -C
> and examine output.
> 
> Best regards,
> 
> depesz
> 
> -- 
> Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
> jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007


Hi,

I have done:

# psql -U postgres -p 4000 -l | hexdump -C

And got the two databases: http://pastebin.ca/1746711

I couldn't find any difference here.

Best Regards,
---

Fernando Marcelo
www.consultorpc.com
ferna...@consultorpc.com
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table appears on listing but can't drop it

2010-01-11 Thread hubert depesz lubaczewski
On Mon, Jan 11, 2010 at 08:58:57AM -0200, Fernando Morgenstern wrote:
> Em 09/01/2010, às 19:40, hubert depesz lubaczewski escreveu:
> 
> > On Fri, Jan 08, 2010 at 02:39:03PM -0200, Fernando Morgenstern wrote:
> >> postgres=# drop database skynet;
> >> ERROR:  database "skynet" does not exist
> > 
> > do:
> > 
> > psql -l | hexdump -C
> > and examine output.
> > 
> > Best regards,
> > 
> > depesz
> > 
> > -- 
> > Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
> > jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007
> 
> 
> Hi,
> 
> I have done:
> 
> # psql -U postgres -p 4000 -l | hexdump -C
> 
> And got the two databases: http://pastebin.ca/1746711
> 
> I couldn't find any difference here.

Could you add -qAt to psql options and rerun the command?

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table appears on listing but can't drop it

2010-01-11 Thread Fernando Morgenstern

Em 11/01/2010, às 09:04, hubert depesz lubaczewski escreveu:

>> Hi,
>> 
>> I have done:
>> 
>> # psql -U postgres -p 4000 -l | hexdump -C
>> 
>> And got the two databases: http://pastebin.ca/1746711
>> 
>> I couldn't find any difference here.
> 
> Could you add -qAt to psql options and rerun the command?
> 
> Best regards,
> 
> depesz
> 
> -- 
> Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
> jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007

Hello,

Same result: http://pastebin.ca/1746714

Regards,
---

Fernando Marcelo
www.consultorpc.com
ferna...@consultorpc.com
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table appears on listing but can't drop it

2010-01-11 Thread hubert depesz lubaczewski
On Mon, Jan 11, 2010 at 09:08:27AM -0200, Fernando Morgenstern wrote:
> Same result: http://pastebin.ca/1746714

It looks like there is a problem with the system catalogs. I would suggest you
pg_dump what you can, rm $PGDATA, initdb, and load from backup.

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rows missing from table despite FK constraint

2010-01-11 Thread Konrad Garus
2010/1/8 Alban Hertroys :

> Did you turn off seqscans in the postgres.conf?

Seq scan is enabled.

> Could you try a "REINDEX TABLE attachment" again in case you somehow 
> reindexed the wrong index or table?

How about this test?

On a dump from before the rows were gone:

# select count(*) from attachment where when_uploaded < '2010-01-01';
  count
--
 22523642
(1 row)

On production database:

# explain select count(*) from attachment where when_uploaded < '2010-01-01';
  QUERY PLAN
--
 Aggregate  (cost=1794931.20..1794931.21 rows=1 width=0)
   ->  Seq Scan on attachment  (cost=0.00..1738076.24 rows=22741985 width=0)
 Filter: (when_uploaded < '2010-01-01 00:00:00'::timestamp
without time zone)
(3 rows)

# select count(*) from attachment where when_uploaded < '2010-01-01';
  count
--
 22523639
(1 row)



-- 
Konrad Garus

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] R: aggregate over tables in different schema

2010-01-11 Thread Ivan Sergio Borgonovo
On Sun, 10 Jan 2010 10:49:48 +0100
Vincenzo Romano  wrote:

> Try using inheritance.

One of the things I didn't mention is: I have to join these tables
with other tables that may or may not belong to the same schema (some
are in public).

select sum(i.qty) from s1.list_items li
  join public.item i on i.itemid=li.itemid;

Now I'd like to pick up the sum over all list_items tables across
all the schemas.

If I define the summary table as the child of all the sub-tables I'm
going to write dynamic SQL anyway.
So I guess I should define a common ancestor for all the tables
(list_items) in different schema.

create table public.list_items (
  itemid int primary key, -- trouble
  name varchar(32)
);

create table s1.list_items (
) inherits (public.list_items);

create table s2.list_items (
) inherits (public.list_items);

But I can't see how am I going to write the query.

Furthermore the children should have their own pk and not share them.

thanks

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Get Unix timestamp from SQL timestamp through libpq

2010-01-11 Thread Alban Hertroys
On 11 Jan 2010, at 7:16, Yan Cheng Cheok wrote:

> I know I can convert SQL timestamp to unix timestamp, using the following way.
> 
> SELECT extract(epoch FROM now());
> 
> Now, I have a stored procedure function, which will directly return a table 
> row to the caller. One of the row field is "timestamp" type. 
> 
> In my application, I am using libpq. I wish to use libpq functions (or any 
> c/c++ function), to convert "2010-01-11 13:10:55.283" into unix timestamp.

The format of that "timestamp" you're referring to is dependent on client and 
server settings (locale among others). If you go this way, make sure you 
specify the format when making the connection.

Another option is to add an extra epoch column to your result-row and return 
that row instead of the original row. You'd have to change the return type to 
include the extra column of course (see RETURNS TABLE in the docs).

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Get Unix timestamp from SQL timestamp through libpq

2010-01-11 Thread Merlin Moncure
On Mon, Jan 11, 2010 at 1:16 AM, Yan Cheng Cheok  wrote:
> I know I can convert SQL timestamp to unix timestamp, using the following way.
>
> SELECT extract(epoch FROM now());
>
> Now, I have a stored procedure function, which will directly return a table 
> row to the caller. One of the row field is "timestamp" type.
>
> In my application, I am using libpq. I wish to use libpq functions (or any 
> c/c++ function), to convert "2010-01-11 13:10:55.283" into unix timestamp. 
> Of course, I can create another stored procedure named
>
> SQLTimestamp2UnixTimestamp
> SELECT extract(epoch FROM $1);
>
> But I just wish to accomplish this task with a single c/c++ function call, 
> without involving stored procedure.

libpqtypes can do this (and it's much faster if you use libpqtypes to
issue the query).

PGtimestamp ts;
PGresult *res = PQexecf(conn, "select now()");
PQgetf(res, 0, "%timestamptz", 0, &ts);
int epoch = ts.epoch;

http://libpqtypes.esilo.com/

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] access computed field of RECORD variable

2010-01-11 Thread Steve White
Alvaro,

I followed your advice, but using PL/Python.  I succeeded, but only with
great difficulty.

To close this off, I'll write these down, together with the work-arounds.

Some of this info would be of use if it were in the documentation.

Problems


1) (documentation)
   The docs say Python functions can't be called directly from Python!
   More accurately, PostgreSQL functions written in Python can't be
   called from Python.

2) Tragically, the plpy.execute() function flattens composite fields to
   strings.  In my case a work-around was simple.  For others, this will
   be a killer.

3) PL/Python doesn't permit functions that return RECORD or SETOF RECORD.
   But I want to return a generic table... (or do I?)

Work-arounds


1) Python is one of those languages that allow functions to be defined 
within the body of another function.

So within the body of a PostgreSQL function definition, you can build a
little Python environment containing everything you would expect:
functions, classes, etc, and proceed to do serious programming.

One can basically treat the body of a PL/Python function as an individual
Python script, with the 'plpy' module already included.

2) I wrote about this in a different posting that nobody has answered yet.

In my case, I was able to parse the string without too much trouble,
but this is really a crying shame.

3a) So PL/Python doesn't return SETOF RECORD.  The trick I implemented was
to call it from a PL/pgSQL function which created a temp table, which was
filled by the Python.  This function in turn could return the set of generic
records using RETURN NEXT.

It would be nice if PL/Python could return RECORD--I don't know why it
shouldn't.

3b) In my case, though, it didn't matter. I ended up needing functions that
returned tables of a set of known types anyway.  But the set is large, and
unfortunately, there is no way to pass the return type of a function as a
parameter.  It has to be known at create time.

The solution was, to write a function that did a CREATE FUNCTION to create
a function of the desired types.  This way for each known table, only one
SELECT needs to be done, to create the needed function for that table.

Thanks!

-- 
| -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
| Steve White +49(331)7499-202
| e-Science / AstroGrid-D   Zi. 35  Bg. 20
| -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
| Astrophysikalisches Institut Potsdam (AIP)
| An der Sternwarte 16, D-14482 Potsdam
|
| Vorstand: Prof. Dr. Matthias Steinmetz, Peter A. Stolz
|
| Stiftung privaten Rechts, Stiftungsverzeichnis Brandenburg: III/7-71-026
| -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to get DATE in server locale format

2010-01-11 Thread Tom Lane
"Andrus"  writes:
> How to get date in server locale format ?

You might be able to get what you want with the to_char() function,
if setting datestyle doesn't do the trick for you.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Huge iowait during checkpoint finish

2010-01-11 Thread Scott Marlowe
On Mon, Jan 11, 2010 at 3:53 AM, Anton Belyaev  wrote:
> 2010/1/9 Scott Marlowe :
>> On Fri, Jan 8, 2010 at 3:07 PM, Greg Smith  wrote:
>>> Basically, you have a couple of standard issues here:
>>>
>>> 1) You're using RAID-5, which is not known for good write performance.  Are
>>> you sure the disk array performs well on writes?  And if you didn't
>>> benchmark it, you can't be sure.
>>
>> This can be doubly bad if he's now moved to a set of disks that are
>> properly obeying fsync but was on disks that were lying about it
>> before.
>>
>
> Scott, thanks for the interesting suggestion.
>
> And how do I check this?
> Old RAID-1 has "hardware" LSI controller.
> I still have access to old server.

The old RAID card likely had a battery backed cache, which would make
the fsyncs much faster, as long as you hadn't run out of cache.

While RAID-5 can have reasonable throughput sequentially, it has a
much higher random write cost (two reads and two writes for each
random write).

Likely those two things, lack of cache and more expensive writes, are
why the new system seems so much slower.

If you can shoehorn one more drive, you could run RAID-10 and get much
better performance.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to get DATE in server locale format

2010-01-11 Thread Andrus

> You might be able to get what you want with the to_char() function,
> if setting datestyle doesn't do the trick for you.

Setting datestyle changes the style for the whole SQL statement.

How can I get this for a single conversion in an SQL statement, so that other
expressions in the same statement are not affected?

to_char() requires a hard-coded format. Different servers have different
locales, so the format is not known at application design time.

How can I get a localized date for a single conversion inside a SELECT
statement, so that it works under different server lc_time settings?

Andrus.





Re: [GENERAL] How to get DATE in server locale format

2010-01-11 Thread A. Kretschmer
In response to Andrus :
> >You might be able to get what you want with the to_char() function,
> >if setting datestyle doesn't do the trick for you.
> 
> Setting datestyle changes the style for the whole SQL statement.
> 
> How can I get this as a single conversion in an SQL statement so that
> other expressions in the same SQL statement are not affected?
> 
> to_char() requires a hard-coded format.
> Different servers have different locales, so the format is not known at
> application design time.
> 
> How can I get a localized date for a single conversion inside a SELECT
> statement so that it works under different server lc_time settings?

As Tom said, you can use to_char():

test=*# show lc_time;
   lc_time
-------------
 en_US.UTF-8
(1 row)

test=*# select to_char(current_date, 'TMDay - TMMonth - YYYY');
         to_char
-------------------------
 Monday - January - 2010
(1 row)

test=*# set lc_time = 'de_DE.UTF-8';
SET
test=*# select to_char(current_date, 'TMDay - TMMonth - YYYY');
        to_char
------------------------
 Montag - Januar - 2010
(1 row)


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



[GENERAL] migration: parameterized statement and cursor

2010-01-11 Thread Aleksey Onopriyenko
Hello.

We are trying to migrate from Informix 9.4 to PostgreSQL. As part of
migration we are porting our client application.

So we need to reimplement the following functionality:
1. Declare a cursor using a _parameterized_ SELECT statement. It should be
possible to specify the cursor's name (and, perhaps, the statement) dynamically.
2. Open that cursor (in another C-function) passing parameters to it.
3. Fetch records from result set.
4. Close the cursor.

Steps 2-4 may be repeated if necessary.

Is it possible with ecpg or libpq?


Best regards,
Aleksey


Re: [GENERAL] Table appears on listing but can't drop it

2010-01-11 Thread Adrian Klaver
On Monday 11 January 2010 3:08:27 am Fernando Morgenstern wrote:
> On 11/01/2010, at 09:04, hubert depesz lubaczewski wrote:
> >> Hi,
> >>
> >> I have done:
> >>
> >> # psql -U postgres -p 4000 -l | hexdump -C
> >>
> >> And got the two databases: http://pastebin.ca/1746711
> >>
> >> I couldn't find any difference here.
> >
> > Could you add -qAt to psql options and rerun the command?
> >
> > Best regards,
> >
> > depesz
> >
> > --
> > Linkedin: http://www.linkedin.com/in/depesz  /  blog:
> > http://www.depesz.com/ jid/gtalk: dep...@depesz.com / aim:depeszhdl /
> > skype:depesz_hdl / gg:6749007
>
> Hello,
>
> Same result: http://pastebin.ca/1746714
>
> Regards,
> ---
>
> Fernando Marcelo
> www.consultorpc.com
> ferna...@consultorpc.com

The only thing I can think of at this point is to look at what is in the 
pg_database flat file in $PGDATA/global. At a guess, the issue is related to 
the problems you had with pgpool. You may end up having to do as Hubert 
suggested. 

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Set Returning C-Function with cache over multiple calls (with different arguments)

2010-01-11 Thread Merlin Moncure
On Mon, Jan 11, 2010 at 2:45 AM, Thilo Schneider
 wrote:
> Dear list,
>
> Currently I am working on a user C-Function which should create a cache 
> object on the first call and afterwards return a set of computed values for 
> each argument combination it is called with.
>
> My Problem is how to get the cache object saved over multiple calls. Without 
> the SRF I could use fcinfo->flinfo->fn_extra for my pointer to the data. This 
> is now used by the FuncCallContext structure. This structure is destroyed 
> every time SRF_RETURN_DONE is called, thus user_fctx also is not the way to 
> go.
>
> As a minimal example look at the function provided below.
>
> --- snip 
> ---
> PG_FUNCTION_INFO_V1(test);
> Datum test(PG_FUNCTION_ARGS)
> {
>        MemoryContext       old_context;
>        FuncCallContext     *funcctx;
>        void                *str;
>
>        if (SRF_IS_FIRSTCALL()) {
>                funcctx = SRF_FIRSTCALL_INIT();
>
>                // This is the structure potentially generated in previous calls
>                str = funcctx->user_fctx;
>
>                // If the structure does not exist or the geometry array
>                // has changed, it has to be created.
>                if ( ! str) {
>                        elog(NOTICE, "create new");
>
>                        old_context = MemoryContextSwitchTo(fcinfo->flinfo->fn_mcxt);
>
>                        // Fill str with data around here ...
>
>                        MemoryContextSwitchTo(old_context);
>
>                        funcctx->user_fctx = str;
>                }
>        }
>        funcctx = SRF_PERCALL_SETUP();
>        SRF_RETURN_DONE(funcctx);
> }
> --- snip 
> ---
>
> To make the problem perfectly clear the SQL-Code this should work with:
>
> --- snip 
> ---
> CREATE OR REPLACE FUNCTION test(int, int[] )
>  RETURNS SETOF int AS 'myfunc', 'test'
>  LANGUAGE 'c' IMMUTABLE STRICT
>  COST 1;
>
> SELECT test(number, array(SELECT integers FROM another_table)) FROM numbers;
> --- snip 
> ---
>
> As creating the cache object is by far the most expensive part in the desired 
> function, it should be possible to create the cache only once over the whole 
> query - using only the arguments in the array, which do not change over 
> multiple calls.
>
> Is there a way around this problem? Another pointer I could use and do not 
> know of yet?

have you ruled out simply keeping a static pointer around and using malloc()?

merlin



Re: [GENERAL] Composite types questions

2010-01-11 Thread Merlin Moncure
On Mon, Jan 11, 2010 at 5:14 AM, Vincenzo Romano
 wrote:
> Hi all.
>
> It's not clear to me how composite values are used in  conditions 
> (WHERE/CHECK).
> In my case I have something like this:
>
> -- begin snippet
>
> CREATE TABLE test_tab (
>  col1 timestamp not null,
>  col2 int8 not null,
>  col3 text not null
> );
>
> CREATE INDEX i_test_tab_col1 ON test_tab( col1 );
>
> SELECT *
>  FROM test_tab
>  WHERE (date_trunc('week',col1),col2,col3)=('val1'::timestamp,val2,'val3')
> ;
> -- end snippet
>
> For a number of reasons I cannot split the the WHERE condition in the
> "usual" AND-list.
> My questions are:
> Can the i_test_tab_col1 INDEX be taken into account from the query planner?
yes

> What if I define a functional index on col1?

sure (if you pull the data the way it is done in the function)

> Does the same apply to TABLE-level CHECK conditions?

probably.

You are not really asking about composite types.  What you
demonstrated is row constructor syntax...they are similar but not the
same.  For the most part, at least from 8.2 onwards, postgres is
pretty smart about row constructor and should do what you want with
minimal fuss.

merlin



Re: [GENERAL] pg.dropped

2010-01-11 Thread Tom Lane
Filip Rembiałkowski  writes:
> Full test case, reproduced in 8.4.2 on two different hosts

> create table test (id serial primary key, t1 text, t2 text);
> create function myhash(test) returns text as 'select md5($1::text)' language
> sql immutable;
> create index myhash on test( myhash(test) );
> alter table test add t3 text;
> alter table test drop t3;
> insert into test(t1,t2) select 'foo', 'bar';

I've applied a patch for this in HEAD and 8.4.

regards, tom lane



Re: [GENERAL] Composite types questions

2010-01-11 Thread Vincenzo Romano
2010/1/11 Merlin Moncure :
> On Mon, Jan 11, 2010 at 5:14 AM, Vincenzo Romano
>  wrote:
>> Hi all.
>>
>> It's not clear to me how composite values are used in  conditions 
>> (WHERE/CHECK).
>> In my case I have something like this:
>>
>> -- begin snippet
>>
>> CREATE TABLE test_tab (
>>  col1 timestamp not null,
>>  col2 int8 not null,
>>  col3 text not null
>> );
>>
>> CREATE INDEX i_test_tab_col1 ON test_tab( col1 );
>>
>> SELECT *
>>  FROM test_tab
>>  WHERE (date_trunc('week',col1),col2,col3)=('val1'::timestamp,val2,'val3')
>> ;
>> -- end snippet
>>
>> For a number of reasons I cannot split the the WHERE condition in the
>> "usual" AND-list.
>> My questions are:
>> Can the i_test_tab_col1 INDEX be taken into account from the query planner?
> yes

I need to be more precise.
That WHERE condition should fetch all the (relevant) rows whose col1
value falls within a defined week.
I'm not sure whether the query planner understands this without a
specific index. Read below.

>
>> What if I define a functional index on col1?
>
> sure (if you pull the data the way it is done in the function)

I mean an index like this:

CREATE INDEX i_test_tab_col1_trunc ON test_tab( date_trunc( 'week',col1 ) );

or, even better, like this:

CREATE INDEX i_test_tab_misc ON test_tab(  date_trunc('week',col1),col2,col3 );

It sounds to me like that is what Merlin is saying.

>
>> Does the same apply to TABLE-level CHECK conditions?
>
> probably.

"Probably" is too fuzzy for me, sorry.

> You are not really asking about composite types.  What you
> demonstrated is row constructor syntax...they are similar but not the
> same.  For the most part, at least from 8.2 onwards, postgres is
> pretty smart about row constructor and should do what you want with
> minimal fuss.

I mean, if I define a TABLE-level CHECK condition just like this:

...
CHECK ( (date_trunc('week',col1),col2,col3)=('val1'::timestamp,val2,'val3') )


for a number of inherited tables, and then have a query like this:

SELECT * FROM master_tab
  WHERE col1>='a date'::timestamp AND col1<'another date'::timestamp
AND col2=42 AND col3='the answer';

will the constraint_exclusion filter skip all those tables in the
inheritance hierarchy whose col1 columns fall outside a needed week?

> merlin

Thanks a lot so far.

-- 
Vincenzo Romano
NotOrAnd Information Technologies
NON QVIETIS MARIBVS NAVTA PERITVS



[GENERAL] Test build of postgres v8.4.2 available

2010-01-11 Thread Michael Felt
I have compiled postgres for AIX and tested installation on a fresh
installation of AIX 6.1.3. I am interested in feedback on the package and
shall make improvements in the packaging as needed.

For enhanced portability, readline and zlib were not included in the build.
I'll be looking into that at a later date.

Michael

p.s. A post with link and instructions can be found at
http://www.rootvg.net/content/view/391/89/


[GENERAL] does autovacuum in postgres-8.1.2 use prepared transactions ??

2010-01-11 Thread tamanna madaan
Hi All

I am using postgres-8.1.2.

Can anyone please let me know if autovacuum in postgres-8.1.2 uses
prepared transactions.

Thanks a lot in advance

Regards

Tamanna



Re: [GENERAL] Set Returning C-Function with cache over multiple calls (with different arguments)

2010-01-11 Thread Thilo Schneider
Dear list,

I solved my own problem - as so often, once you write it down and press the 
send button you get the idea.

The problem was:

> Currently I am working on a user C-Function which should create a cache 
> object on the first call and afterwards return a set of computed values for 
> each argument combination it is called with.
> 
> My Problem is how to get the cache object saved over multiple calls. Without 
> the SRF I could use fcinfo->flinfo->fn_extra for my pointer to the data. This 
> is now used by the FuncCallContext structure. This structure is destroyed 
> every time SRF_RETURN_DONE is called, thus user_fctx also is not the way to 
> go.

My solution:

--- snip 
---
typedef struct myData {
    FuncCallContext *funcctx;
    // own data
    int cachedObject;
} myData;

PG_FUNCTION_INFO_V1(test);
Datum test(PG_FUNCTION_ARGS)
{
    MemoryContext old_context;
    FuncCallContext *funcctx;
    myData *str;

    // Get fn_extra
    str = fcinfo->flinfo->fn_extra;
    if ( ! str) {
        elog(NOTICE, "create new");

        old_context = MemoryContextSwitchTo(fcinfo->flinfo->fn_mcxt);
        str = palloc(sizeof(myData));
        // Fill str with data around here ...
        MemoryContextSwitchTo(old_context);
        str->funcctx = NULL;
    }

    // This is the situation the SRF-macros expect:
    fcinfo->flinfo->fn_extra = str->funcctx;

    if (SRF_IS_FIRSTCALL()) {
        funcctx = SRF_FIRSTCALL_INIT();

        // Your commands
    }
    funcctx = SRF_PERCALL_SETUP();

    // This is the macro SRF_RETURN_DONE(funcctx), expanded:
    // before we finally return, we save fn_extra in str->funcctx
    // and str itself in fn_extra.
    do {
        ReturnSetInfo *rsi;
        end_MultiFuncCall(fcinfo, funcctx);
        rsi = (ReturnSetInfo *) fcinfo->resultinfo;
        rsi->isDone = ExprEndResult;
        // -- Modify macro here --
        str->funcctx = fcinfo->flinfo->fn_extra;
        fcinfo->flinfo->fn_extra = str;
        // -- End modification --
        PG_RETURN_NULL();
    } while (0);

    // Of course, SRF_RETURN_DATUM has to be adapted the same way!
}
--- snip 
---

Regards,
Thilo Schneider


Re: [GENERAL] how much left for restore?

2010-01-11 Thread Francisco Reyes

Ivan Sergio Borgonovo writes:


Is there a way to know/estimate how much is left to complete a
restore?


Not sure about plain ASCII files, but if your pg_dump used -Fc then at restore 
you can pass the -v flag.





Re: [GENERAL] how much left for restore?

2010-01-11 Thread Ivan Sergio Borgonovo
On Mon, 11 Jan 2010 12:30:45 -0500
Francisco Reyes  wrote:

> Ivan Sergio Borgonovo writes:
> 
> > Is there a way to know/estimate how much is left to complete a
> > restore?
> 
> Not sure about plain ASCII files, but if your pg_dump used -Fc then at
> restore you can pass the -v flag.

It gets a bit better, but even knowing which are the largest tables, it
is hard to get an estimate of how much is missing before the restore is
complete.

I'm really looking for rough figures... even an "I've read 40% of the
file" would give more usable information than an "I've already loaded
table A."

thanks

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




[GENERAL] Database size

2010-01-11 Thread Leonardo M.
A customer of mine asked me to periodically delete old, unneeded records
containing ByteA fields, because he thinks it will reduce the database
size on disk. Is this true? For example, in Firebird, the database size
is not reduced until you do a Backup-Restore of the database.

Thanks in advance,
Leonardo M. Ramé
Griensu S.A. - Medical IT Córdoba
Tel.: 0351-4247979




[GENERAL] Using a lock to avoid: could not open relation with OID

2010-01-11 Thread Francisco Reyes

I need to replace a table with a new one.
Example..
I create a script that continuously does selects like
select count(*) from tmp_deleteme_francisco;
... enough selects to last the duration of the second script ...
select count(*) from tmp_deleteme_francisco;


Another script then does
begin;
select * into tmp_deleteme_francisco_2 from xxx;
alter table tmp_deleteme_francisco rename to tmp_deleteme_francisco_old;
alter table tmp_deleteme_francisco_2 rename to tmp_deleteme_francisco;
drop table tmp_deleteme_francisco_old;
commit;

That results in the script doing the selects getting:
could not open relation with OID .


I thought using an explicit access exclusive lock would do the trick and 
tried a few variations including


begin;
select * into tmp_deleteme_francisco_2 from xxx;
lock tmp_deleteme_francisco  in access exclusive mode;
alter table tmp_deleteme_francisco rename to tmp_deleteme_francisco_old;
alter table tmp_deleteme_francisco_2 rename to tmp_deleteme_francisco;
drop table tmp_deleteme_francisco_old;
commit;

I thought that once the access exclusive lock was obtained, nothing
else could be accessing the table.


Any ideas?

Postgresql 8.4.1 on CentOS 5.3




Re: [GENERAL] Composite types questions

2010-01-11 Thread Merlin Moncure
On Mon, Jan 11, 2010 at 11:08 AM, Vincenzo Romano wrote:
> SELECT * FROM master_tab
>  WHERE col1>='a date'::timestamp AND col1<'another date'::timestamp
> AND col2=42 AND col3='the answer';

queries of this sort are optimally handled via row constructor for 8.2 onwards:

create index col231_idx  on master_tab(col2, col3, col1);

select * from master_tab where (col2,col3,col1) >= (42, 'the answer',
a_date) and (col2,col3,col1) < (42, 'the answer', b_date);

one caveat: the planner is lousy in regards to estimating index
selectivity in these cases which can cause strange behaviors.

regarding constraint_exclusion, I don't think there are any
issues...you should test it to be sure.

merlin



Re: [GENERAL] Composite types questions

2010-01-11 Thread Vincenzo Romano
2010/1/11 Merlin Moncure :
> On Mon, Jan 11, 2010 at 11:08 AM, Vincenzo Romano wrote:
>> SELECT * FROM master_tab
>>  WHERE col1>='a date'::timestamp AND col1<'another date'::timestamp
>> AND col2=42 AND col3='the answer';
>
> queries of this sort are optimally handled via row constructor for 8.2 
> onwards:
>
> create index col231_idx  on master_tab(col2, col3, col1);
>
> select * from master_tab where (col2,col3,col1) >= (42, 'the answer',
> a_date) and (col2,col3,col1) < (42, 'the answer', b_date);
>
> one caveat: the planner is lousy in regards to estimating index
> selectivity in these cases which can cause strange behaviors.
>
> regarding constraint_exclusion, I don't think there are any
> issues...you should test it to be sure.
>
> merlin
>

Great news for me!
Thanks Merlin ... the Wizard!  :-)


-- 
Vincenzo Romano
NotOrAnd Information Technologies
cel. +39 339 8083886  | gtalk. vincenzo.rom...@notorand.it
fix. +39 0823 454163  | skype. notorand.it
fax. +39 02 700506964 | msn.   notorand.it
NON QVIETIS MARIBVS NAVTA PERITVS



Re: [GENERAL] Database size

2010-01-11 Thread John R Pierce

Leonardo M. Ramé wrote:

A customer of mine asked me to periodically delete old, unneeded records
containing ByteA fields, because he thinks it will reduce the database
size on disk. Is this true? For example, in Firebird, the database size
is not reduced until you do a Backup-Restore of the database.
  



the space occupied by deleted tuples will be available for reuse after they 
are vacuumed... so it might not shrink the database, but it will help 
keep the database from growing.







Re: [GENERAL] how much left for restore?

2010-01-11 Thread Francisco Reyes

Ivan Sergio Borgonovo writes:


It gets a bit better, but even knowing which are the largest tables, it
is hard to get an estimate of how much is missing before the restore is
complete.


Agree. Also building indexes can also take quite some time.


I'm really looking for rough figures... even an "I've read 40% of the
file" would give more usable information than an "I've already loaded
table A."


I don't believe there is anything like that, and I have not seen anything in 
the TODO list (that I recall) either.


It likely is not easy to implement. For starters, to get such an estimate you 
would need to know how much data a table has. That would require a pass over 
the file to get sizes, or somehow storing table sizes in the dump for the 
estimate. Then you have indexes: they are just one line in the backup, but 
can take quite a while to build depending on size.


For the foreseeable future, -v is likely all that is and will be available.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] how much left for restore?

2010-01-11 Thread Sam Mason
On Fri, Jan 08, 2010 at 11:28:15AM +0100, Ivan Sergio Borgonovo wrote:
> Is there a way to know/estimate how much is left to complete a
> restore?

maybe something like "pv" would help?

  http://www.ivarch.com/programs/pv.shtml

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Database size

2010-01-11 Thread Francisco Reyes

Leonardo M. Ramé writes:


A customer of mine asked me to periodically delete old, unneeded records
containing ByteA fields, because he thinks it will reduce the database
size on disk. Is this true?


No.


For example, in Firebird, the database size
is not reduced until you do a Backup-Restore of the database.


Not sure about ByteA, but in general... in postgres you need to do that, or a 
vacuum full, or copy all the data of the table into a new table and rename.


Having said that, if you delete old rows and do a vacuum and/or have 
autovacuum enabled, the space of those old rows can be re-used, slowing down 
the growth of the table. So yes, deleting old rows and doing a vacuum is a 
good thing, but no, I don't believe it will reduce database size.


I say "don't believe" because I don't recall whether ByteA is stored in the 
table itself or outside using TOAST, so I am not sure about 
how/when space is released for it.




Re: [GENERAL] how much left for restore?

2010-01-11 Thread Francisco Reyes

Sam Mason writes:


maybe something like "pv" would help?

  http://www.ivarch.com/programs/pv.shtml


I think it may help the OP, but indexes are still going to be a rough spot: 
if a large table has several indexes, the output from pv will be misleading.




Re: [GENERAL] Database size

2010-01-11 Thread John R Pierce

Francisco Reyes wrote:
I say "don't believe" because I don't recall if byteA was stored in 
the table itself or was stored outside using TOAST.. so I am not sure 
about how/when space is released for it.


Like all other data, that depends on the size of the data. If the 
entire row (tuple) is under 4K (I think, don't quote me), it will be 
stored in the table; otherwise it will be toasted and stored in the 
pg_toast tables, which also need vacuuming.






Re: [GENERAL] Database size

2010-01-11 Thread Raymond O'Donnell
On 11/01/2010 18:00, Leonardo M. Ramé wrote:
> A customer of mine asked me to periodically delete old, unneeded records
> containing ByteA fields, because he thinks it will reduce the database
> size on disk. Is this true? For example, in Firebird, the database size
> is not reduced until you do a Backup-Restore of the database.

When you delete a row, the space isn't reclaimed until you do a VACUUM.

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] Database size

2010-01-11 Thread Steve Crawford

Leonardo M. Ramé wrote:

A customer of mine asked me to periodically delete old, unneeded records
containing ByteA fields, because he thinks it will reduce the database
size on disk. Is this true? For example, in Firebird, the database size
is not reduced until you do a Backup-Restore of the database.
  
From this perspective, bytea should be pretty much like any other data 
in PostgreSQL.


If you delete the records, the on-disk size will not be reduced.

If you vacuum the table after deleting (or let autovacuum do it for 
you), the on-disk size will not be reduced but the space formerly 
occupied by the deleted records can be reused to store new data.


To actually shrink the on-disk size requires a vacuum full or a cluster. 
Cluster is typically preferred since it is far faster and rebuilds the 
indexes but it does require sufficient disk-space to hold the original 
copy and the new copy of the data.


Both vacuum full and cluster lock the table.

If you delete unneeded records promptly and in small batches (and 
assuming no weird distribution of bytea data-sizes), autovacuum should 
do a reasonable job of keeping bloat under control.


Cheers,
Steve



Re: [GENERAL] does autovacuum in postgres-8.1.2 use prepared transactions ??

2010-01-11 Thread Tom Lane
"tamanna madaan"  writes:
> Can anyone please let me know if autovacuum in postgres-8.1.2 uses
> prepared transactions.

Nope, it does not.  Any prepared transactions you see hanging around
were created by some external client.

regards, tom lane



Re: [GENERAL] how much left for restore?

2010-01-11 Thread Ivan Sergio Borgonovo
On Mon, 11 Jan 2010 18:36:18 +
Sam Mason  wrote:

> On Fri, Jan 08, 2010 at 11:28:15AM +0100, Ivan Sergio Borgonovo
> wrote:
> > Is there a way to know/estimate how much is left to complete a
> > restore?

> maybe something like "pv" would help?

>   http://www.ivarch.com/programs/pv.shtml

Nice. That starts to look more like what I was looking for... and yeah,
I'm aware it could be misleading.

It would be nice to have it integrated into pg_restore/pg_dump.
If the file is compressed, the pg_* tools may do a better job of giving
an estimate.

thanks

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] migration: parameterized statement and cursor

2010-01-11 Thread Andy Colson

On 1/11/2010 8:16 AM, Aleksey Onopriyenko wrote:

Hello.

We are trying to migrate from Informix 9.4 to PostgreSQL. As part of
migration we are porting our client application.

So we need to reimplement the following functionality:
1. Declare a cursor using a _parameterized_ SELECT statement. It should
be possible to specify cursor's name (and, perhaps, the statement)
dynamically.
2. Open that cursor (in another C-function) passing parameters to it.
3. Fetch records from result set.
4. Close the cursor.

Steps 2-4 may be repeated if necessary.

Is it possible with ecpg or libpq?


Best regards,
Aleksey


You don't need cursors, really.

In code (a little pseudo-code), do:

q := prepare('select f1, f2 from table_x where id = $1');

then you can pass it around as you like, until...

q.params[0] := 42;
q.open();

while not q.eof()
begin
print q.field[0].asString, "\n";
q.next();
end



This is NOT libpq pseudo-code, it's Delphi-ish, which uses libpq under the 
hood.  My point being, you don't really need cursors: just prepare a 
select statement, set its params, fire it off, iterate the result set... 
done.  And it's re-usable.
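In bare libpq, the same flow looks roughly like this (a hedged sketch: the table "table_x", its columns, and the PGURI environment variable are invented, and error handling is abbreviated):

```c
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int main(void)
{
    const char *conninfo = getenv("PGURI");   /* assumed env var */
    if (!conninfo) { puts("set PGURI to run this example"); return 0; }

    PGconn *conn = PQconnectdb(conninfo);
    if (PQstatus(conn) != CONNECTION_OK) {
        fputs(PQerrorMessage(conn), stderr);
        return 1;
    }

    /* q := prepare('select f1, f2 from table_x where id = $1'); */
    PGresult *res = PQprepare(conn, "q",
                              "SELECT f1, f2 FROM table_x WHERE id = $1",
                              1, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fputs(PQerrorMessage(conn), stderr);
    PQclear(res);

    /* q.params[0] := 42; q.open(); */
    const char *params[1] = { "42" };
    res = PQexecPrepared(conn, "q", 1, params, NULL, NULL, 0);

    /* while not q.eof() ... print q.field[0].asString ... */
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        for (int i = 0; i < PQntuples(res); i++)
            printf("%s\n", PQgetvalue(res, i, 0));
    PQclear(res);

    /* the named statement stays usable for further PQexecPrepared()
     * calls with new parameters until the connection closes */
    PQfinish(conn);
    return 0;
}
```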


Unless I missed your purpose.

-Andy



Re: [GENERAL] Huge iowait during checkpoint finish

2010-01-11 Thread Greg Smith

Scott Marlowe wrote:

On Mon, Jan 11, 2010 at 3:53 AM, Anton Belyaev  wrote:
  

Old RAID-1 has "hardware" LSI controller.
I still have access to old server.



The old RAID card likely had a battery backed cache, which would make
the fsyncs much faster, as long as you hadn't run out of cache.
  


To be super clear here:  it's possible to see a 100:1 performance drop 
going from a system with a battery-backed write cache to one that 
doesn't.  This is one of the three main weak spots of software RAID that 
still keeps hardware RAID vendors in business:  it can't do anything to 
speed up the type of writes done during transactions commit and at 
checkpoint time.  (The others are that it's hard to setup transparent 
failover after failure in software RAID so that it always works at boot 
time, and that motherboard chipsets can easily lose their minds and take 
down the whole system when one drive goes bad).



If you can shoehorn one more drive, you could run RAID-10 and get much
better performance.
  
And throwing drives at the problem may not help.  I've seen a system with 
a 48 disk software RAID-10 that only got 100 TPS when running a 
commit-heavy test, because it didn't have any way to cache writes 
usefully for that purpose.


If the old system had a write caching card, and the new one doesn't, 
that's certainly your most likely suspect for the source of the 
slowdown.  As for testing that specifically, if you have the old system 
too you can look at the slides I've got for "Database Hardware 
Benchmarking" at 
http://www.westnet.com/~gsmith/content/postgresql/index.htm and use the 
sysbench example I show on P26 to measure commit fsync rate.  There's a 
video of that presentation where I explain a lot of the background in 
this area too.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com



Re: [GENERAL] Huge iowait during checkpoint finish

2010-01-11 Thread Scott Marlowe
On Mon, Jan 11, 2010 at 2:59 PM, Greg Smith  wrote:
> Scott Marlowe wrote:

> If you can shoehorn one more drive, you could run RAID-10 and get much
> better performance.
>
>
> And throwing drives at the problem may not help.  I've seen a system with a
> 48 disk software RAID-10 that only got 100 TPS when running a commit-heavy
> test, because it didn't have any way to cache writes usefully for that
> purpose.

A 4 disk RAID-10 will be about 4 to 8 times faster than a RAID-5 of 3
disks.  It won't be as fast as a good sized RAID-10 with HW caching,
but it would be a big improvement.

> If the old system had a write caching card, and the new one doesn't, that's
> certainly your most likely suspect for the source of the slowdown.  As for

Agreed.  If the new machine is limited to 3 disks, and any one is big
enough to hold the db, I'd look at a two disk mirror with a hot spare
on a HW RAID controller with battery backed cache.  If they can't get a
HW RAID controller then switching to SW RAID-1 would be a positive
step.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to get DATE in server locale format

2010-01-11 Thread Craig Ringer

On 11/01/2010 9:44 PM, A. Kretschmer wrote:

In response to Andrus :

How to get localized date for single conversion inside SELECT statement so
that it works in different server lc_time settings ?


As Tom said, you can use to_char():


It looks like the OP wants a localized date, just one different to that 
specified by the datestyle GUC, for just one date-to-string conversion 
within a function that may have others.


Essentially, I think they're after

SET datestyle = DMY;
SELECT date_with_style(somedate, 'MDY'), somedate::text;

... which doesn't exist.

Andrus: Is the date style you need for the one "different" call fixed? 
Or does it vary?


If it's fixed, you *can* just use to_char with a constant format string.

If the format needed for the "different" call varies, you'll have to 
either have your app send a suitable to_char format string, or you'll 
need to write a function that accepts a Pg datestyle parameter and 
internally calls to_char with the appropriate format depending on the 
value of the passed datestyle.


There's no way to say:

   -- This does not work
   SELECT CAST(somedate AS text WITH DATESTYLE 'iso')

or the like.
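A minimal sketch of such a to_char wrapper, assuming a fixed mapping from a 
few datestyle names to format patterns (the function name and mappings here 
are illustrative, not anything built in):

```sql
-- Hypothetical helper: map a datestyle-like name to a to_char() pattern.
CREATE OR REPLACE FUNCTION date_with_style(d date, style text)
RETURNS text AS $$
SELECT to_char($1, CASE upper($2)
    WHEN 'ISO'      THEN 'YYYY-MM-DD'
    WHEN 'SQL, DMY' THEN 'DD/MM/YYYY'
    WHEN 'SQL, MDY' THEN 'MM/DD/YYYY'
    WHEN 'GERMAN'   THEN 'DD.MM.YYYY'
    ELSE 'YYYY-MM-DD'   -- fall back to ISO
END);
$$ LANGUAGE sql IMMUTABLE;

-- usage: SELECT date_with_style(somedate, 'SQL, MDY');
```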

--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Using a lock to avoid: could not open relation with OID

2010-01-11 Thread Craig Ringer

On 12/01/2010 2:04 AM, Francisco Reyes wrote:

I need to replace a table with a new one.
Example..
I create a script that continuously does selects like
select count(*) from tmp_deleteme_francisco;
 enough selects to last the duration of second script
select count(*) from tmp_deleteme_francisco;


Another script then does
begin;
select * into tmp_deleteme_francisco_2 from xxx;
alter table tmp_deleteme_francisco rename to tmp_deleteme_francisco_old;
alter table tmp_deleteme_francisco_2 rename to tmp_deleteme_francisco;
drop table tmp_deleteme_francisco_old;
commit;

That results in the script doing the selects getting could not open
relation with OID .


Possible workaround: Instead of your table creation, renaming and 
dropping, use TRUNCATE.
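Using the table names from the example above, the refresh could then stay in 
one transaction without the rename dance; because TRUNCATE keeps the same 
relation (and OID), concurrent SELECTs just block on the lock instead of 
failing. A sketch:

```sql
BEGIN;
TRUNCATE tmp_deleteme_francisco;   -- same OID survives; readers block, not error
INSERT INTO tmp_deleteme_francisco SELECT * FROM xxx;
COMMIT;
```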


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database size

2010-01-11 Thread Craig Ringer

On 12/01/2010 2:00 AM, Leonardo M. Ramé wrote:

A customer of mine asked me to periodically delete old, unneeded records
containing ByteA fields, because he think it will reduce the database
size on disk. Is this true? For example, in Firebird, the database size
is not reduced until you do a Backup-Restore of the database.


It won't shrink it unless you do a VACUUM FULL or CLUSTER, but it *will* 
prevent growth or (if you're adding more data than you're deleting) 
reduce the rate of growth.


Make sure autovacuum is running and (for pre-8.4 databases) your free 
space map settings are sufficient.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] replication from multiple "master" servers to a single read-only slave

2010-01-11 Thread Omar Mehmood
I'm wondering if it's possible to have a setup with multiple "master" servers 
replicating to a single slave.  I can guarantee that each server will generate 
unique PK values for all tables and all the data is partitioned (logically by 
server) across the servers.  I would simply like to have a read-only slave that 
is a picture of all the servers' data (relatively up to date).  The individual 
"master" servers never need to know about each other's data (i.e. they do not 
_need_ to "sync" with each other, nor do I want them to be sync'd).

Would it be possible to use PostgreSQL PITR feature to support this 
functionality ?  All of the data created/updated/deleted per server is unique 
to that server, so replaying the log to the slave should technically be safe 
and the replaying logs from multiple servers should be safe as well (as long as 
the relative order of replay is preserved).  I'm just wondering how to get 
around the numbering of the log (WAL) files and the slave's tracking of the log 
files that it has already processed.

I can certainly write my own application log module that runs on each server, 
ship over the log to the slave machine and replay the logs to the slave (in the 
meanwhile ensuring that the order of replay is preserved and all that good 
stuff), but I'm trying to find a quick(er) solution for the short term.

Please note that I'd like to avoid using PostgreSQL data partitioning as well 
as any DB triggers (in case anyone was going to go down that path as part of 
the solution).

Omar


  

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] replication from multiple "master" servers to a single read-only slave

2010-01-11 Thread David Fetter
On Mon, Jan 11, 2010 at 03:02:18PM -0800, Omar Mehmood wrote:
> I'm wondering if it's possible to have a setup with multiple
> "master" servers replicating to a single slave.  I can guarantee
> that each server will generate unique PK values for all tables and
> all the data is partitioned (logically by server) across the
> servers.  I would simply like to have a read-only slave that is a
> picture of all the servers' data (relatively up to date).  The
> individual "master" servers never need to know about each other's
> data (i.e. they do not _need_ to "sync" with each other, nor do I
> want them to be sync'd).
> 
> Would it be possible to use PostgreSQL PITR feature to support this
> functionality ?

No, but you could use something like Slony to do this.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] replication from multiple "master" servers to a single read-only slave

2010-01-11 Thread Omar Mehmood
Thanks for the suggestions.

I really don't want to use separate schemas for each master to logically 
partition the data.  I ensure that the data on each master will not clash with 
each other (in terms of any DB-level constraints such as PK), so I'd much prefer 
they all reside in a single schema.  Also, my understanding is that Slony uses 
DB triggers to track changes (but I want to avoid using DB triggers).

Another additional constraint-- the master servers may not always have 
connectivity to the slave machine, so the chosen mechanism needs to be robust 
and not assume 100% uptime.

For truly simple, I could do a periodic data dump of the database on the 
server, ship to slave, and restore.  However, this is way overkill since there 
won't be that many changes in the data for the period of time that it will run 
(e.g. every 15-20 minutes).  Plus, I might end up in the situation where the 
slave will start to increasingly lag behind over time (depends on the number of 
master servers and amount of data).

I will check out Bucardo.

Omar

--- On Mon, 1/11/10, Ben Chobot  wrote:

> From: Ben Chobot 
> Subject: Re: [GENERAL] replication from multiple "master" servers to a single 
> read-only slave
> To: "Omar Mehmood" 
> Date: Monday, January 11, 2010, 6:58 PM
> I'm pretty sure you can do this with
> Bucardo, and I *think* you can do it via Slony, if you're
> willing to use separate schemas for each master. I know you
> cannot do this with PITR.
> 
> On Jan 11, 2010, at 4:02 PM, Omar Mehmood wrote:
> 
> > I'm wondering if it's possible to have a setup with
> multiple "master" servers replicating to a single
> slave.  I can guarantee that each server will generate
> unique PK values for all tables and all the data is
> partitioned (logically by server) across the servers. 
> I would simply like to have a read-only slave that is a
> picture of all the servers' data (relatively up to
> date).  The individual "master" servers never need to
> know about each other's data (i.e. they do not _need_ to
> "sync" with each other, nor do I want them to be sync'd).
> > 
> > Would it be possible to use PostgreSQL PITR feature to
> support this functionality ?  All of the data
> created/updated/deleted per server is unique to that server,
> so replaying the log to the slave should technically be safe
> and the replaying logs from multiple servers should be safe
> as well (as long as the relative order of replay is
> preserved).  I'm just wondering how to get around the
> numbering of the log (WAL) files and the slave's tracking of
> the log files that it has already processed.
> > 
> > I can certainly write my own application log module
> that runs on each server, ship over the log to the slave
> machine and replay the logs to the slave (in the meanwhile
> ensuring that the order of replay is preserved and all that
> good stuff), but I'm trying to find a quick(er) solution for
> the short term.
> > 
> > Please note that I'd like to avoid using PostgreSQL
> data partitioning as well as any DB triggers (in case anyone
> was going to go down that path as part of the solution).
> > 
> > Omar
> > 
> > 
> > 
> > 
> > -- 
> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
> 
> 




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Huge iowait during checkpoint finish

2010-01-11 Thread Craig Ringer
Greg Smith wrote:

> If the old system had a write caching card, and the new one doesn't
> that's certainly your most likely suspect for the source of the
> slowdown.

Note that it's even possible that the old system had a card with write
caching enabled, but *no* battery backed cache. That's crazily
dangerous, but does get you the speed benefits of write caching without
the need for a BBU. I only mention this to help explain a possible
performance difference - you should NEVER do this if you care about your
data.

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] replication from multiple "master" servers to a single read-only slave

2010-01-11 Thread Craig Ringer
Omar Mehmood wrote:
> Would it be possible to use PostgreSQL PITR feature to support this
> functionality ?  All of the data created/updated/deleted per server
> is unique to that server, so replaying the log to the slave should
> technically be safe and the replaying logs from multiple servers
> should be safe as well (as long as the relative order of replay is
> preserved).  I'm just wondering how to get around the numbering of
> the log (WAL) files and the slave's tracking of the log files that it
> has already processed.

No, it can't be done with PITR and WAL-shipping. The write-ahead logs
are at too low a level and rely on the block layout of the Pg cluster.
WAL-shipping only works where master and slave start out with the exact
same data directory contents, with all the same block layout, same oids
for tables/types/etc. Just having the same tuples in tables of the same
names is not sufficient.

Given that you can't even WAL-ship from a master to a slave created by
pg_restore from a dump of the master, you can probably see why
WAL-shipping from multiple masters absolutely cannot work.

For this, you need something higher level that replicates at the
tuple-change level. A trigger-based system like Slony or Bucardo is most
likely to fit your needs.

> Please note that I'd like to avoid using PostgreSQL data partitioning
> as well as any DB triggers (in case anyone was going to go down that
> path as part of the solution).

At present Pg doesn't offer safe and convenient C-level hooks for
replication systems to attach to in order to record tuple changes.
Recording of tuple changes for replication must be done with triggers.
As you need a tuple-change level replication system, you're pretty much
out of luck.

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Best Practice when Encounter Invalid Stored Procedure Parameters

2010-01-11 Thread Yan Cheng Cheok
In c++, whenever we encounter an unexpected parameters, here is what we usually 
did :

bool fun(int i) {
    if (i < 0) {
        return false;
    }
    return true;
}

void fun(int i) {
    if (i < 0) {
        throw std::invalid_argument("Invalid parameter");
    }
}

void fun(int i) {
    assert(i >= 0);
}

How about stored procedure? Now, I have the following stored procedure :

CREATE OR REPLACE FUNCTION insert_unit(text[], text[])
  RETURNS unit AS
$BODY$DECLARE
_measurement_types ALIAS FOR $1;
_measurement_units ALIAS FOR $2;
_unit unit;
BEGIN
IF array_upper(_measurement_values, 1) != array_upper(_measurement_units, 
1) THEN
RAISE NOTICE 'What I should do here to return early???';
END IF;

May I know what is the good practice to handle invalid parameters? I am using 
libpq to interface with PostgreSQL.

Thanks and Regards
Yan Cheng CHEOK


  


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] transaction logging in the form of SQL statements

2010-01-11 Thread Omar Mehmood
Is there any way to enable transaction logging in the format of SQL statements 
for committed transactions only ?  In other words, a way to log all the SQL 
statements (including START TRANSACTION and COMMIT statements) for all 
committed mod type statements (INSERT UPDATE DELETE etc).

Thanks,
Omar


  

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] transaction logging in the form of SQL statements

2010-01-11 Thread AI Rumman
Use
log_min_duration_statement = 0
in the postgresql.conf file to log every statement.
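For reference, the relevant postgresql.conf settings (log_statement = 'mod' is 
an alternative that restricts logging to data-modifying statements, though 
still regardless of whether they ultimately commit):

```
log_min_duration_statement = 0   # log every completed statement with its duration
# alternatively, log only INSERT/UPDATE/DELETE/TRUNCATE and the like:
log_statement = 'mod'
```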


On Tue, Jan 12, 2010 at 7:50 AM, Omar Mehmood  wrote:

> Is there any way to enable transaction logging in the format of SQL
> statements for committed transactions only ?  In other words, a way to log
> all the SQL statements (including START TRANSACTION and COMMIT statements)
> for all committed mod type statements (INSERT UPDATE DELETE etc).
>
> Thanks,
> Omar
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Using a lock to avoid: could not open relation with OID

2010-01-11 Thread Francisco Reyes

Craig Ringer writes:

Possible workaround: Instead of your table creation, renaming and 
dropping, use TRUNCATE.


Have to support both 8.1 and 8.4.
If I recall correctly 8.1 did not support truncate inside of a transaction.

We are in the process of upgrading everything to 8.4, but until then.. have 
to deal with all the 8.1 and 8.2 machines.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] location for pg_default tablespace

2010-01-11 Thread AI Rumman
I am new at Postgresql. Previously I used to work with Oracle.
I am surprised to see that the location for pg_default tablespace in my
database  for a Postgresql cluster is null.
Could anyone please tell me what is the location of default tablespace in
postgresql and how could I find it?


Re: [GENERAL] location for pg_default tablespace

2010-01-11 Thread John R Pierce

AI Rumman wrote:

I am new at Postgresql. Previously I used to work with Oracle.
I am surprised to see that the location for pg_default tablespace in 
my database  for a Postgresql cluster is null.
Could anyone please tell me what is the location of default tablespace 
in postgresql and how could I find it?


It's the $PGDATA directory.  The location of this varies widely by 
operating system and postgres distribution.  For instance, the postgres 
supplied with Fedora and Red Hat Linuxes tends to put this in 
/var/lib/pgsql/data ...   If you build postgres from source on a 
unix-like system, using full defaults, it's probably /usr/local/pgsql/data




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] location for pg_default tablespace

2010-01-11 Thread AI Rumman
Thanks.
But actually I want to know why the value of spclocation is null in
pg_tablespace for pg_default.
Moreover, $PGDATA/pg_tblspc has no file.
Could you please tell me why?

On Tue, Jan 12, 2010 at 11:26 AM, John R Pierce  wrote:

>  AI Rumman wrote:
>
>> I am new at Postgresql. Previously I used to work with Oracle.
>> I am surprised to see that the location for pg_default tablespace in my
>> database  for a Postgresql cluster is null.
>> Could anyone please tell me what is the location of default tablespace in
>> postgresql and how could I find it?
>>
>
> Its the $PGDATA directory.  The location of this varies widely by operating
> system and postgres distribution.  For instance, the postgres supplied with
> fedora and redhat linuxes tend to put this in /var/lib/pgsql/data ...   If
> you build postgres from source on a unix-like system, using full defaults,
> its probably /usr/local/pgsql/data
>
>
>


Re: [GENERAL] location for pg_default tablespace

2010-01-11 Thread Greg Smith

AI Rumman wrote:
But actually I want to know that why the value in spclocation is null 
is pg_tablespace for pg_default.

Moreover, $PGDATA/pg_tblspc has no file.
Could you please tell me why?


PostgreSQL ships with a blank tablespace setting, which it interprets as 
meaning you want to put the database table files into the default 
tablespace which is stored in $PGDATA/base


When you create a new tablespace using CREATE TABLESPACE:  
http://www.postgresql.org/docs/current/static/sql-createtablespace.html


Then you'll find that pg_tblspc contains a symbolic link to the new 
directory you've told it to use.  At that point, you can then point all 
new creation toward that location by setting default_tablespace, or put 
individual bits of data onto there with the appropriate options to 
CREATE:  
http://www.postgresql.org/docs/current/static/manage-ag-tablespaces.html


--
Greg Smith    2ndQuadrant    Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] An issue with max() and order by ... limit 1 in postgresql8.3-beta3

2010-01-11 Thread A. Kretschmer
In response to Scott Marlowe :
> On Sat, Jan 9, 2010 at 2:46 PM, Andreas Kretschmer
>  wrote:
> > Stefan Kaltenbrunner  wrote:
> >
> >> Andreas Kretschmer wrote:
> >>> zxo102 ouyang  wrote:
> >>>
>  Hi everyone,    I am using postgresql 8.3-beta3. I have a table
>  'test' with three fields:
> >>>
> >>> I'm guessing you mean 8.4-beta3, right?
> >>
> >> either of those are unsuitable for any kind of production use...
> >
> > Hey, we needs beta-testers, right? And yes, read again, the table is
> > called 'test' ...
> 
> True, but if you're gonna test betas / alphas, I'd think 8.5 alpha
> would be the choice for testing.  8.4's beta ended quite some time
> ago.

I'm stupid, i meant 8.5 ...


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] location for pg_default tablespace

2010-01-11 Thread AI Rumman
I used the followings:

create tablespace mytabspc location '/var/lib/pgsql/data/pg_tblspc';

create database mydb with tablespace=mytabspc;

drop database mydb;

drop tablespace mytabspc;
ERROR: tablespace 'mytabspc' is not empty

Please tell me why?


On Tue, Jan 12, 2010 at 11:50 AM, Greg Smith  wrote:

> AI Rumman wrote:
>
>> But actually I want to know that why the value in spclocation is null is
>> pg_tablespace for pg_default.
>> Moreover, $PGDATA/pg_tblspc has no file.
>> Could you please tell me why?
>>
>
> PostgreSQL ships with a blank tablespace setting, which it interprets as
> meaning you want to put the database table files into the default tablespace
> which is stored in $PGDATA/base
>
> When you create a new tablespace using CREATE TABLESPACE:
> http://www.postgresql.org/docs/current/static/sql-createtablespace.html
>
> Then you'll find that pg_tblspc contains a symbolic link to the new
> directory you've told it to use.  At that point, you can then point all new
> creation toward that location by setting default_tablespace, or put
> individual bits of data onto there with the appropriate options to CREATE:
> http://www.postgresql.org/docs/current/static/manage-ag-tablespaces.html
>
> --
> Greg Smith2ndQuadrant   Baltimore, MD
> PostgreSQL Training, Services and Support
> g...@2ndquadrant.com  www.2ndQuadrant.com 
>
>


Re: [GENERAL] location for pg_default tablespace

2010-01-11 Thread Greg Smith

AI Rumman wrote:

I used the followings:
 
create tablespace mytabspc location '/var/lib/pgsql/data/pg_tblspc';
 
create database mydb with tablespace=mytabspc;
 
drop database mydb;
 
drop tablespace mytabspc;

ERROR: tablespace 'mytabspc' is not empty
 
Please tell me why?


You don't put things in pg_tblspace yourself; that directory is for the 
database to manage.  Your tablespace should be somewhere completely 
outside of /var/lib/pgsql/data altogether.
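A corrected version of the sequence above, with an illustrative path that 
lives outside the data directory:

```sql
-- The directory must exist, be owned by the postgres user, and sit
-- OUTSIDE $PGDATA; '/srv/pg_tblspc' is just an example path.
CREATE TABLESPACE mytabspc LOCATION '/srv/pg_tblspc';
CREATE DATABASE mydb WITH TABLESPACE = mytabspc;
DROP DATABASE mydb;
DROP TABLESPACE mytabspc;   -- succeeds once nothing references it
```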


--
Greg Smith    2ndQuadrant    Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] location for pg_default tablespace

2010-01-11 Thread Scott Mead
On Tue, Jan 12, 2010 at 7:17 AM, Greg Smith  wrote:

> AI Rumman wrote:
>
>> I used the followings:
>>  create tablespace mytabspc location '/var/lib/pgsql/data/pg_tblspc';
>>  create database mydb with tablespace=mytabspc;
>>  drop database mydb;
>>  drop tablespace mytabspc;
>> ERROR: tablspace 'mytabspc' is not empty
>>  Please tell me why?
>>
>
> You don't put things in pg_tblspace yourself; that directory is for the
> database to manage.  Your tablespace should be somewhere completely outside
> of /var/lib/pgsql/data altogether.


Tablespaces in postgres are quite a bit different from Oracle.  In Oracle,
you define a tablespace to be a location that uses one or more datafiles,
and everything belongs in a tablespace.

In postgres, a tablespace is not required.  It is nothing more than a
directory on some filesystem that you tell postgres about so that you can
store relations in it.  If you don't define a tablespace in postgres, all of
your relations are going to be located in $PGDATA/base (for example,
/var/lib/pgsql/data/base).


That's the difference, you don't NEED a tablespace like Oracle, you'll
only use them in postgres when you're trying to store data in postgres on a
different mountpoint.

 Hope this helps.

--Scott


Re: [GENERAL] transaction logging in the form of SQL statements

2010-01-11 Thread Pavel Stehule
2010/1/12 Omar Mehmood :
> Is there any way to enable transaction logging in the format of SQL 
> statements for committed transactions only ?  In other words, a way to log 
> all the SQL statements (including START TRANSACTION and COMMIT statements) 
> for all committed mod type statements (INSERT UPDATE DELETE etc).

No, you can only log all SQL statements; there is no built-in way to
restrict logging to committed transactions.

Regards
Pavel Stehule

>
> Thanks,
> Omar
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best Practice when Encounter Invalid Stored Procedure Parameters

2010-01-11 Thread Pavel Stehule
hello

2010/1/12 Yan Cheng Cheok :
> In c++, whenever we encounter an unexpected parameters, here is what we 
> usually did :
>
> bool fun(int i) {
>    if (i < 0) {
>        return false;
>    }
> }
>
> void fun(int i) {
>    if (i < 0) {
>        throw std::exception("Invalid parameter");
>    }
> }
>
> void fun(int i) {
>    assert (i >= 0);
> }
>
> How about stored procedure? Now, I have the following stored procedure :
>
> CREATE OR REPLACE FUNCTION insert_unit(text[], text[])
>  RETURNS unit AS
> $BODY$DECLARE
>    _measurement_types ALIAS FOR $1;
>    _measurement_units ALIAS FOR $2;
>    _unit unit;
> BEGIN
>    IF array_upper(_measurement_values, 1) != array_upper(_measurement_units, 
> 1) THEN
>        RAISE NOTICE 'What I should do here to return early???';
>    END IF;
>
> May I know what is the good practice to handle invalid parameters? I am using 
> libpq to interface with PostgreSQL.

see RAISE EXCEPTION

http://www.depesz.com/index.php/2008/05/14/waiting-for-84-plpgsql-raise/
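Applied to the function above, the early-exit check might look like this 
(note the original compared the undeclared _measurement_values; the declared 
_measurement_types is used here):

```sql
IF array_upper(_measurement_types, 1) IS DISTINCT FROM
   array_upper(_measurement_units, 1) THEN
    RAISE EXCEPTION 'array length mismatch: % types vs % units',
        array_upper(_measurement_types, 1),
        array_upper(_measurement_units, 1);
END IF;
```

The exception aborts the function and surfaces through libpq as an error 
result, which the C++ caller can check with PQresultStatus.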

Regards
Pavel Stehule


>
> Thanks and Regards
> Yan Cheng CHEOK
>
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] how much left for restore?

2010-01-11 Thread Adrian von Bidder
On Friday 08 January 2010 11.28:15 Ivan Sergio Borgonovo wrote:
> It would be enough just knowing which part of the file is being
> restored (without causing too much extra IO, that will definitively
> put my notebook on its knee).

Highly platform dependent, but has helped me a lot recently with various 
(non-pg) tasks: on recent Linux, /proc/<pid>/fdinfo/<fd> contains the current 
file position, so you can see how far into the dump file pg has read.  
(Assuming the restore reads the dump file just once from start to finish.)
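A small self-contained illustration of the trick (Linux only; it uses a 
scratch file rather than a real dump, and unbuffered reads so the kernel 
offset matches the bytes consumed):

```python
import os
import tempfile

# Write a 100-byte scratch file, read 37 bytes through an unbuffered fd,
# then check the kernel's "pos:" field in /proc/self/fdinfo for that fd.
# The same field shows how far a running pg_restore has read its dump.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 100)
    path = tmp.name

fd = os.open(path, os.O_RDONLY)
os.read(fd, 37)
with open(f"/proc/self/fdinfo/{fd}") as info:
    pos = int(next(l for l in info if l.startswith("pos:")).split()[1])
os.close(fd)
os.unlink(path)
print(pos)  # 37
```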

Same problem as the other solutions with index creation etc. of course.

cheers
-- vbi

-- 
And those meanies used what we said to you and they misconstrued it to
mean what we said, and that's so unfair.
-- SCO lawyers about IBM (paraphrased - groklaw)




[GENERAL] Backup strategies with significant bytea data

2010-01-11 Thread Leigh Dyer
Hi,

For years now I've simply backed up my databases by doing a nightly
pg_dump, but since we added the ability for users to import binary files
in to our application, which are stored in a bytea fields, the dump
sizes have gone through the roof — even with gzip compression, they're
significantly larger than the on-disk size of the database. My guess is
that this due to the way that the binary data from the bytea fields is
encoded in the dump file when it's produced.
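A rough back-of-envelope model of that overhead: in the escape-format bytea 
output used by pg_dump of this era, printable ASCII passes through, a 
backslash doubles, and every other byte becomes a four-character octal escape 
(this approximates the encoding, not the exact COPY quoting rules):

```python
# Approximate expansion of escape-format bytea for uniformly distributed
# binary data: 94 printable bytes -> 1 char, backslash -> 2, the other
# 161 byte values -> 4-char octal escapes like \347.
def escaped_len(data: bytes) -> int:
    total = 0
    for b in data:
        if b == 0x5C:               # backslash
            total += 2
        elif 0x20 <= b <= 0x7E:     # printable ASCII
            total += 1
        else:                       # control / high bytes
            total += 4
    return total

blob = bytes(range(256)) * 16       # stand-in for random binary data
ratio = escaped_len(blob) / len(blob)
print(round(ratio, 2))  # 2.89
```

So roughly a 2.9x inflation before gzip even sees the data, which matches the 
observation that dumps end up larger than the on-disk database.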

Is there any way I can reduce the size of my pg_dump files?
Alternatively, are there other backup methods worth investigating? PITR
looks promising, but I have about a dozen databases on the machine, and
I'd need to be able to restore them individually.

Thanks
Leigh




Please consider the environment before printing this message

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] migration: parameterized statement and cursor

2010-01-11 Thread Pavel Stehule
2010/1/11 Aleksey Onopriyenko :
> Hello.
>
> We are trying to migrate from Informix 9.4 to PostgreSQL. As part of
> migration we are porting our client application.
>
> So we need reimplement such functionality:
> 1. Declare a cursor using to _parameterized_ SELECT statement. It should be
> possible to specify cursor's name (and, perhaps, the statement) dynamically.

cursors are supported, but parametrised cursors are supported only
inside plpgsql. Maybe you can transform your code to plpgsql function.
http://www.postgresql.org/docs/8.4/interactive/plpgsql-cursors.html
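A rough sketch of that pattern (table and column names are placeholders for 
the Informix code being ported):

```sql
CREATE OR REPLACE FUNCTION fetch_big_orders(_min_total numeric)
RETURNS SETOF orders AS $$
DECLARE
    cur CURSOR (p_min numeric) FOR          -- step 1: declare parameterized cursor
        SELECT * FROM orders WHERE total > p_min;
    rec orders%ROWTYPE;
BEGIN
    OPEN cur(_min_total);                   -- step 2: open with parameters
    LOOP
        FETCH cur INTO rec;                 -- step 3: fetch from the result set
        EXIT WHEN NOT FOUND;
        RETURN NEXT rec;
    END LOOP;
    CLOSE cur;                              -- step 4: close
END;
$$ LANGUAGE plpgsql;
```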


> 2. Open that cursor (in another C-function) passing parameters to it.

only in plpgsql

> 3. Fetch records from result set.

yes

Regards
Pavel Stehule

> 4. Close the cursor.
>
> Steps 2-4 may be repeated if necessary.
>
> Is it possible with ecpg or libpq?
>


> 
> Best regards,
> Aleksey
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best Practice when Encounter Invalid Stored Procedure Parameters

2010-01-11 Thread Yan Cheng Cheok
Very nice. Thanks!

Thanks and Regards
Yan Cheng CHEOK


--- On Tue, 1/12/10, Pavel Stehule  wrote:

> From: Pavel Stehule 
> Subject: Re: [GENERAL] Best Practice when Encounter Invalid Stored Procedure  
> Parameters
> To: "Yan Cheng Cheok" 
> Cc: pgsql-general@postgresql.org
> Date: Tuesday, January 12, 2010, 2:33 PM
> hello
> 
> 2010/1/12 Yan Cheng Cheok :
> > In c++, whenever we encounter an unexpected
> parameters, here is what we usually did :
> >
> > bool fun(int i) {
> >    if (i < 0) {
> >        return false;
> >    }
> > }
> >
> > void fun(int i) {
> >    if (i < 0) {
> >        throw std::exception("Invalid parameter");
> >    }
> > }
> >
> > void fun(int i) {
> >    assert (i >= 0);
> > }
> >
> > How about stored procedure? Now, I have the following
> stored procedure :
> >
> > CREATE OR REPLACE FUNCTION insert_unit(text[],
> text[])
> >  RETURNS unit AS
> > $BODY$DECLARE
> >    _measurement_types ALIAS FOR $1;
> >    _measurement_units ALIAS FOR $2;
> >    _unit unit;
> > BEGIN
> >    IF array_upper(_measurement_values, 1) !=
> array_upper(_measurement_units, 1) THEN
> >        RAISE NOTICE 'What I should do here to
> return early???';
> >    END IF;
> >
> > May I know what is the good practice to handle invalid
> parameters? I am using libpq to interface with PostgreSQL.
> 
> see RAISE EXCEPTION
> 
> http://www.depesz.com/index.php/2008/05/14/waiting-for-84-plpgsql-raise/
> 
> Regards
> Pavel Stehule
> 
> 
> >
> > Thanks and Regards
> > Yan Cheng CHEOK
> >
> >
> >
> >
> >
> > --
> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
> >
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
> 





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] does autovacuum in postgres-8.1.2 use prepared transactions ??

2010-01-11 Thread tamanna madaan
Thanks Tom ...

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Tuesday, January 12, 2010 1:35 AM
To: tamanna madaan
Cc: pgsql-general@postgresql.org; Gaurav Katiyar
Subject: Re: [GENERAL] does autovacuum in postgres-8.1.2 use prepared
transactions ?? 

"tamanna madaan"  writes:
> Can anyone please let me know if autovacuum in postgres-8.1.2 uses
> prepared transactions.

Nope, it does not.  Any prepared transactions you see hanging around
were created by some external client.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general